Comments
Nyghtfall, I am interested in LuxRender because I want to do network rendering and, later, a render farm for my final renders. I have a dual E5-2683 workstation (28 cores). Will this give me performance comparable to GPU-rendered Iray? My plan is to use the CPU for rendering in Lux and another renderer called Clarisse, and keep my 1080 Ti free for viewport work and test rendering in DAZ. Currently the 1080 Ti is about 2.5x faster than the CPUs in DAZ Iray.
Like you, I keep getting drawn to go back to it. I look at a lot of stuff I did (more with R4, not so much with 2.5) and like some of it better. I just don't know if I'm willing to put up with the bugs and workarounds, but I'll probably fire it up just to see.
Oh no I don't. I am never going back to CPU rendering. LOL
I'm the opposite. I look back at my Reality work and my only thoughts are of how long they took to render. Otherwise, they're just part of my artistic history that I'm still proud to show off. I work exclusively with Iray now, and am striving to become an Iray expert.
The only reason I brought up Reality in this thread was because you mentioned having probably gone overboard with the purchase of your new CPU.
Sorry, I'm not familiar enough with how CPU performance compares with GPU performance to answer that question. Bobvan recently bought a 10-core CPU, and expressed concern about having spent so much on a CPU that he's since discovered doesn't provide much benefit for Iray. I was simply reminding him that he can use his new CPU with Reality and it will crush render times in LuxRender. I understand he also has a 1080 Ti now. Whether render times with his CPU and Lux might be comparable to his GPU and Iray, I have no idea.
LOL, I remember you saying something (jokingly) in another thread not too long ago, "Thanks for making me want to play with Reality again," or something to that effect. Sorry if I misunderstood. Yeah, Iray is a lot easier to work with, hence my hesitation.
Yeah, no worries. Back when you first mentioned having gotten the new CPU, I expressed mock hatred toward you for it. In a related thread, someone linked to Intel's announcement about their upcoming 18-core CPUs. I commented that if I had that much rendering power in a CPU, I'd probably go back to Reality just to enjoy the freedom of not having to worry about capping out my 980 Ti's VRAM, that I wished Iray could use my 32 GB of RAM for textures, and that that's the one thing I hate about GPU rendering.
Shortly after that discussion, I helped bp select a new GPU for Iray, and realized I don't need a better CPU for Lux, just more VRAM for Iray. After running several memory tests with a new environment, I ordered my 1080 Ti. Then, I found this video about Intel's announcement, and felt even better about not being able to afford a new CPU and choosing to stick with GPU rendering:
I took it with good humour; it was actually pretty funny. Goes to show my lack of knowledge of PCs. I could have stuck with the original 6-core processor, put in a second 1080 Ti, and kept $500 in my pocket. Depending on what EVGA sends back, I may have it all. We'll see.
Deleted
I've been bouncing back and forth between 3dl and Iray with occasional experiments with other engines.
And Iray GPU seems just about the fastest you're going to get for near-photorealistic rendering without spending a LOT of money (and at that point you could just buy better hardware and speed up Iray anyway).
The money you spent on 10 cores is not wasted. Unless you spent your rent money, got in trouble with the wife, or you only render in DAZ Iray (and nothing else!) with less than 9.5 GB of textures, you made a good choice. After all, there are many other things in life besides Iray. There is gaming, video editing, and other applications that benefit from a stout CPU. Your computer should be useful for the foreseeable future. Even if you only bought the computer for rendering, as your skills develop and your renders get more sophisticated, you will hit the VRAM barrier. It's good to have the 10 cores to fall back on. I also have a 1080 Ti, but it will take a few tricks to keep my scenes under 9.5 GB in Iray. Iray is not a very sophisticated renderer, and there are just some things that GPUs are unable to render at this early point in the tech. You can always count on cores.
Cool, and if I revisit Reality/Lux...
Another question: does combining cards just augment speed, or VRAM as well? So would an 11 GB card (or 9-point-something in Win10's case) plus a 4 or 6 GB card give you 13 to 16 GB?
It's purely for speed gain, got it.
It matters for gaming, but it's usable for rendering and doesn't seem to be an issue.
I just use the laptop for building and setting scenes up, same if I decide to test Reality/Lux with my 10-core CPU. I would install Reality in the laptop's DS and just run the standalone LuxRender to render on the beast.
So I have an RTX 2080 Ti arriving Monday, and I currently use a GTX 1080 (not the Ti). I had debated selling the 1080 or keeping it. I didn't know keeping it would slow the 2080 to match the 1080's clock. Will using both still be significantly faster than just using the 2080 alone at full clock speed?
Iray has no way of directly controlling the clock speeds of any GPUs in a system running it as a client application. Rendering on multiple GPUs/CPUs with VASTLY different performance specs will give you less performance than you might think. But not because of any artificial clock speed reduction on the faster component's part.
Just some more caveats that don't seem to have been discussed previously.
You need one CPU core for every GPU, plus one for the OS. So if you have a quad-core CPU, you're limited to a maximum of three GPUs for rendering. As for how to connect them, you just need risers from 1x to 16x; this way you can use the cheap 1x slots on your mobo, and you don't need multiple 16x slots. If you use multiple cards, go for blowers that exhaust directly out of the case, and be sure to have good ventilation in the case too: at the very least two front fans and one back fan. Of course you will also need a powerful PSU, about 300-400 W plus 150-200 W per card, depending on the CPU and cards. The PSU is better at the bottom, with its own intake from below the case.
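If it helps, here's the same rule of thumb as a rough Python sketch (my own numbers and ~20% headroom factor, not an official sizing formula):

def max_gpus(cpu_cores):
    # one core per GPU, plus one left for the OS, per the rule above
    return max(cpu_cores - 1, 0)

def psu_watts(num_gpus, base=400, per_card=200, headroom=1.2):
    # base covers CPU/mobo/drives; headroom adds a safety margin
    return (base + per_card * num_gpus) * headroom

print(max_gpus(4))     # quad core -> 3 GPUs
print(psu_watts(3))    # 3 cards -> 1200.0, so roughly a 1200 W unit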
If you're using riser cards there is no reason, besides aesthetics, not to remove the side panel and keep the GPUs outside the case. It will cut down on heat in the case substantially, and then there is no need to bother with blower cards, which usually have the worst performance of any card of that type.
Agreed .. with multiple GPUs an open mining case is better .. I see many rigs using blowers anyway when the cards are close together; that's to avoid intaking hot air from the next card, I guess ..
Hi Padone, I love your content; I was looking at it in the flash sale. I wish I knew how to use Maya.
Anyway, back to the thread subject: if anyone has a large animation or render project, there are render farms now that can connect to Daz Studio, Carrara, and Poser using Iray GPU, connecting you through SFTP under the Daz advanced Iray settings. Some of the farms require you to use their connection portal; with others, all you have to do is put in the SFTP IP and passcode they give you to connect. If you search for them on rentrender.com, use the icons listed to find your software; most farms can connect Daz now using the Blender or Carrara SFTP IP block, and a 200+ Mbps high-speed connection is required by most farms. I have used RenderRev once for small HQ projects. You just have to beware that render farms are not cheap, because you're actually working in a virtual desktop server with massive GPU power. So make sure you have everything ready to go and do your testing before connecting to the cloud server; they charge by the core hour/minute whether you're testing or doing the full render. RenderRev gets $1 a core minute, which cost me about $550 to complete a 3-minute film project for Gus Gregory, and I had the rendering done in 48 hours instead of 4 weeks.
Render farms are not for everyone, but check it out; it may come in handy someday: https://rentrender.com/iray-render-farms/
FYI, it's actually one CPU THREAD per GPU, so on a CPU with hyperthreading/SMT your supported GPU limit is one less than double your core count.
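For example, by that math a 4-core/8-thread CPU would support up to 7 GPUs, and a 6-core/12-thread chip up to 11, assuming I'm reading it right.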
I actually have a desktop with two GPUs sitting side-by-side in adjacent slots. One is a GTX 1080 Ti and one is a GTX 1070. No fancy cooling, just a couple of case fans and the stock GPU fans. And since they're in adjacent slots, the GPU fans are blowing on each other. The case stays closed with both side panels on. I'm not using any special software or tweaking any fan curves, etc. Just using the stock GPU fans. No water cooling. I've had this computer for years and done a lot of rendering with it. Overheating and throttling have never been an issue. And I know they have never been an issue because I'm usually measuring ACTUAL GPU temps and fan speeds and so on as I render.
So here are my actual, real world results during very long renders:
My 1080 Ti's temp flattens at 79C and stays at that temperature forever. My 1070 flattens at around 75C. Fan speed flattens at around 77% for the 1080 Ti.
Those are totally normal operating temperatures. Will the 1080 Ti ever get damaged? No, not even close. Max temp for this card is something like 92C, which is when it does thermal throttling. Damage doesn't occur until 105C+. I believe it does some performance throttling in the mid-80s, but my GPU never gets close, because the stock fan profile prevents that. Engineers DESIGN the system to "regulate" the temperature of the components so that they stay in a safe range. If the components get hotter, the fan speeds up. And at its normal max temp during rendering (79C), the fan is only at 77%, so it has a lot more unused cooling capacity.
I'm not sure if anyone else actually has two GPUs they can test, but I'd encourage you to do some testing before succumbing to the paranoia. And if you do find some overheating, first consider that maybe you bought some junk components, or did something to operate outside the components' design range (like overclocking, blocking the airflow, etc.), or didn't update your BIOS, or made a mistake when tweaking the BIOS settings, or have a case fan problem, etc.
Do you have a reference for that info? I've never heard it before. I'm not doubting you, but it doesn't seem to match my experience. I have a quad-core CPU and I've done Iray renders with 3 Nvidia cards, plus I've done LuxRender renders with 3 Nvidia + 1 ATI card. Sadly, I didn't look at the CPU usage while I was doing it, and I've disassembled the rig since then. I'm just wondering what happens if you exceed that ratio. Does the rendering slow down, or what? I was always under the impression that the CPU just sent a work unit to whichever CUDA core is available, regardless of what GPU it's on (assuming the GPU has the required VRAM). I'm considering buying another GPU, but I want to make sure I can fully utilize it.
Kitsumo, you can try it yourself. In Windows, at the Command prompt run msconfig, and under the Boot tab, Advanced options, you can set your CPU to recognize as many processors as you want ("Number of processors"). Select "1", then restart, and I think you'll find that only one GPU is working.
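(If you'd rather not use the msconfig GUI, I believe running "bcdedit /set numproc 1" from an elevated Command Prompt does the same thing, and "bcdedit /deletevalue numproc" undoes it afterwards, but double-check that before relying on it.)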
I agree, it doesn't make much sense. When I did GPU programming in C++ I don't recall anything that required multiple cores to run multiple GPUs, but maybe I never went into that depth. It's been a while.
Ok, I'll take you guys' word for it. I don't really feel like restarting my PC right now. That just seems like such a weird requirement. BTW, how did you get started in GPU programming? I'm just barely learning Java, and I know CUDA or OpenCL will be a whole world of difference, but I'm trying to get an idea of what's involved.
I remember learning on the Commodore 64, where you had to learn the memory map so you wouldn't use a memory location that the KERNAL or BASIC routines were using. Then with structured languages, I was told, "You store everything with variables. Don't worry about what the computer is doing." Now it seems that GPU programmers have to learn about GPU architecture. Aye! No es bueno!
This comes directly from the Iray Programmer's Manual:
Generally, Iray Photoreal uses all available CPU and GPU resources by default. Iray Photoreal employs a dynamic load balancing scheme that aims at making optimal use of a heterogeneous set of compute units. This includes balancing work to GPUs with different performance characteristics and control over the use of a display GPU.
Iray Photoreal uses one CPU core per GPU to manage rendering on the GPU. When the CPU has been enabled for rendering, all remaining unused CPU cores are used for rendering on the CPU. Note that Iray Photoreal can only manage as many GPUs as there are CPU cores in the system. If the number of enabled GPU resources exceeds the number of available CPU cores in a system, Iray Photoreal will ignore the excess GPUs.
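Paraphrasing that as a quick sketch (just my reading of the documented rule, not Iray's actual code):

def iray_resource_split(logical_cores, gpus, cpu_rendering_enabled=True):
    # Iray manages at most one GPU per available core; excess GPUs are ignored
    gpus_used = min(gpus, logical_cores)
    # one core is tied up managing each GPU; the rest can render on the CPU
    cpu_render_cores = (logical_cores - gpus_used) if cpu_rendering_enabled else 0
    return gpus_used, cpu_render_cores

print(iray_resource_split(12, 2))   # 12-thread CPU + 2 GPUs -> (2, 10)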
One way you can verify this behavior is by going into your log file (Daz Studio Help menu > Troubleshooting > View Log File) after doing a CPU-based render and doing a text search for "rend info : CPU: using". You should see a log line reporting how many cores are in use, with the number of cores being the same as the total number of CPU threads available in your system (my primary rendering machine has a 12-thread i7-8700K in it, hence it reports 12 for me). If you then start adding GPUs to the rendering process in addition to your CPU and re-check the log file, you will see that the core count always decreases by one each time an additional GPU is added.
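So, assuming the rule holds, on my 12-thread machine it would go: CPU only -> 12 cores, CPU + 1 GPU -> 11, CPU + 2 GPUs -> 10, and so on.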
As for the flash sale, that's not me; I'm not a PA. Also, I agree that render farms are a good option.
@RayDAnt, wow, thanks for that link. That document answers a lot of questions I've had for a long time. I wish AMD would publish something similar for ProRender. They've released it on Github with some basic info, but nothing in depth. And they wonder why no one's using it.
Yeah, the CPU-core-per-GPU thing is what I'd classify as true yet irrelevant. I can't imagine that someone with 2 GPUs could even find a single-core CPU. I can't remember the last time I saw a single-core CPU. Or even dual-core. Heck, even my little $40 Raspberry Pi "desktop computer you can hold in your hand" is quad-core.