Recommendations for a rendering PC? (Iray)


Comments

  • Gator Posts: 1,320
    edited October 2017
    drzap said:
    ebergerly said:
    As I suspected, your "facts" are just "more, better, and faster," as opposed to an analysis of where best to spend money on a new computer based on cost/benefit (using facts and numbers) and what apps you'll be running. For example, is it better to spend money on a top-line CPU and water cooling, or on more GPUs?

    I think you are a little tone deaf.  Computer advice is specific to a user's needs.  The OP has described his needs and wants in detail.   His computer choice is almost optimal for what he wants.  In this case, his budget allows for a water cooler and a top GPU.  He stated he didn't want more than one GPU.  Your analysis seems to apply only to your own situation.  The other people offering advice are giving it with the needs of the person asking in mind.

    Yeah, remember he said it was to pump out professional projects?

    "Only 15% faster" - that 15% faster could generate more income, making the additional $100 expense worth it.

    Post edited by Gator on
  • drzap Posts: 795
    drzap said:
    ebergerly said:
    Yeah, remember he said it was to pump out professional projects ?

    "Only 15% faster" - that 15% faster could generate more income, making the additional $100 expense worth it.

    Absolutely.  And that 15% is a gift that keeps on giving for the life of the computer.  15% is a significant gain for me.  If I received a 15% interest rate on my savings, I would jump for joy.  15% yearly bonus?  YES!  Complete a 3-month job 15% faster?  Worth it.  When the price is so low, a 15% gain is a no-brainer.  Unless I really don't have the budget for it.

  • ebergerly Posts: 3,255
    All good points. And using the same logic, I recommend he buy a set of 10 rack-mounted computers with custom cooling and a UPS. That way he can make a lot more money on his investment. 80 CPUs with 260 threads are much faster and better.
  • JamesJAB Posts: 1,766

    If I had to choose one item to liquid cool in my computer, it would be the GPU, hands down.  This will allow your GPU to render in Iray at 100% usage at maximum boost speed all day without coming close to its throttling temperature.

    Correct me if I'm wrong, but I could have sworn that I read somewhere a while back that computer liquid cooling uses a non-conductive liquid as coolant.  If this is true, a leak will not short out your computer.

  • drzap Posts: 795
    ebergerly said:
    All good points. And using the same logic I recommend he buys a set of 10 rack mounted computers with custom cooling and UPS. That way he can make a lot more money for his investment. 80 CPUs with 260 threads are much faster and better.

     

    Now you are trolling, because that is absolutely not the same logic.  I think you know this, but you have lost your argument, so you resort to ridiculous hyperbole.  The guy has stated his limits, and his particular job has a point of diminishing returns (which is one of the reasons for his limit).  You should give up now.

  • drzap Posts: 795
    JamesJAB said:

    If I had to choose one item to liquid cool in my computer it would be the GPU hands down.  This will allow your GPU to render in IRAY 100% usage at maximum boost speed all day without coming close to it's throttling temperature.

    Correct me if I'm wrong, but I could have sworn that I read somewhere a while back that computer liquid cooling uses a non-conductive liquid as coolant.  If this is true, a leak will not short out your computer.

    I agree with this.  Liquid cooling the GPU would probably benefit Iray more than cooling the CPU.

  • On the subject of liquid cooling, consider yourselves to have been on the receiving end of a bucket of cold water. Giving advice is not a competitive sport, nor is it a form of foreplay.

  • drzap Posts: 795

    On the subject of liquid cooling, consider yourselves to have been on the receiving end of a bucket of cold water. Giving advice is not a competitive sport, nor is it a form of foreplay.

    Cold water is not optimal for cooling.  Can you make it a bucket of non-conductive liquid coolant? cheeky

  • Takeo.Kensei Posts: 1,303

    @Hellboy : Since you don't seem to be going to use 2 GFX cards, you can spare a few bucks by buying a cheaper motherboard. The Asus Hero is for overclockers. The Asus Prime X370 is sufficient for your needs (110 instead of 175). You can even go further down if you're sure you won't ever need a second gfx card at full speed and go for the Gigabyte AB350 or MSI B350 (both 75). If you ever add a card to one of these two motherboards, the only drawback is that one of the cards will run at x4 speed.

    For the CPU, it really depends on your use. ZBrush performance will scale with the number of cores: the more you have, the better. The Ryzen 1600X (6 cores, and costs 145 instead of 300) may be sufficient for you, but if you want to multitask a lot (rendering in Iray while sculpting), more cores may be better. You can still multitask with 6 cores (2 for Iray, 4 for ZBrush), but only you know what you really need.

    On the watercooling subject, there are two benefits, whether for the CPU or the GPU: less noise, and your components will last longer because they run cooler.

    Last point: even if you don't want a second gfx card, one is recommended, as your viewport will lag while rendering with Iray if you only have one gfx card. Consequence: you can't sculpt in ZBrush while rendering.  So if you plan to do that, I'd advise buying a second gfx card to drive the monitor (a GTX 1050 Ti 4GB).

     

  • ebergerly Posts: 3,255

    That's good to know about ZBrush. I only used the trial for a month or whatever, and never checked to see how much it used the CPUs. I'm just curious how the actual response scales with CPU cores. For example, is the response dead slow with a 4-core CPU, or is it fine, and you only need an 8-core at super-high subdivision levels or something? I guess I never ventured into super-high-res areas when I used it, because I don't recall it slowing down with my Ryzen 7 1700. But then again, I'm not a serious, professional, heavy 3D user, just a hobbyist, so I wouldn't know about all the high-end stuff.

  • kyoto kid Posts: 41,931
    JamesJAB said:

    Another thing to keep in mind: he mentioned being in Costa Rica (tropical climate?).  If the house/apartment is relying on open windows or a swamp cooler (water-cooled air), liquid cooling will be a very good option for keeping the CPU cool in a less-than-optimal ambient temperature.
    If this is the case, I would highly recommend paying the extra for the hybrid-cooling version of the GTX 1080 Ti.  (These cards liquid cool the GPU core and air cool the VRAM.)

    ...I was just going to mention his location.  Almost considered retiring there, but it's too hot and humid for me.

  • Jonstark Posts: 2,738

    This is a fairly radical suggestion, but it might be a way of saving a lot of money and still exceeding the rendering needs. I'll preface this by saying I never render in Studio, in fact I barely use it at all for anything other than occasional rigging and character setup, so you may want to discount my opinion smiley

    However I do use Octane, which runs on the GPU.  I also use Carrara which renders on the CPU, and I like the freedom to switch between these two methods at will on the same PC at any time (I also have other unbiased render solutions with Luxcore, Luxus, Cycles, Thearender and I guess Iray in Studio too though I don't ever use it, but all of that is beside the point).

    I decided that the more CPU cores I could get, the better, but I didn't want to pay through the nose.  I found there are a ton of slightly older-tech dual-Xeon workstations available used on eBay for super cheap.  Workstations that were selling for $5000 - $7000 a couple of years ago are now selling on eBay for practically peanuts.  I've picked up two HP Z600s for less than $200 each, and I got an HP Z620 (newer model, more cores) for a little over $350 (I was putting together a mini render farm on the ultra cheap, which is why I bought all these different workstations).

    What I found was that these older workstations work fine with the latest Nvidia cards.  In the Z620, I installed a 1070 and now have a 32-core render machine for CPU work, plus the 1070 can drive any Octane work I want to do with no problems, and all of this was on the cheap.  Just as an aside, I can also play all the latest video games and run my Oculus through this machine, so just because the tech is a little older doesn't mean I'm sacrificing anything for my non-rendering playtime activities.

    I'm just throwing it out there for consideration: if you watch eBay and get a good deal (very easy to do), you could get a Z600 or similar dual-Xeon workstation for $150 - $200 or so (and these can handle up to 24 cores of CPU rendering), then add whatever graphics card you want, and still have plenty of room in the budget for a sizeable SSD.  After going this route, I feel chagrined at my prior computer purchases, where I was buying the latest/greatest PC brand new (and way overspending).   Just food for thought, as I think a similar approach could work very well for someone with a Studio/Iray focus.

  • Jonstark Posts: 2,738
    ebergerly said:
    All good points. And using the same logic I recommend he buys a set of 10 rack mounted computers with custom cooling and UPS. That way he can make a lot more money for his investment. 80 CPUs with 260 threads are much faster and better.

    laugh  This made me chuckle.  I wonder how many threads Studio can handle?  I can't get more than 100 render cores out of Carrara laugh

  • kyoto kid Posts: 41,931
    ...100 cores. So I wonder if a single box with dual 24-core Epyc CPUs (96 total threads) would work?
  • Jonstark Posts: 2,738
    edited October 2017
    kyoto kid said:
    ...100 cores. So I wonder if having a single box with dual 24 core Epyc CPUs (96 total threads) would work?

    Will indeed work, in theory.  There was someone who posted in the Carrara forum maybe a week ago who said he had purchased one of these monsters and was asking in advance if Carrara would be able to use all his cores.  He hasn't posted yet about whether the machine he ordered has come in or the results, but I couldn't help thinking that would be the absolute perfect machine for Carrara (and, for that matter, any other software that uses multiple cores). I tremendously envy that guy his machine, but not the amount of money he must have spent! smiley

    Post edited by Jonstark on
  • Jonstark Posts: 2,738
    edited October 2017
    Jonstark said:
    kyoto kid said:
     

    (accidentally double posted, pls ignore)

    Post edited by Jonstark on
  • Kevin Sanderson Posts: 1,643
    edited October 2017

    Jon, I've always heard and read that DAZ Studio/3Delight supports unlimited cores on one machine. Even the studios that pay for the full version of 3Delight don't get that.

    Post edited by Kevin Sanderson on
  • drzap Posts: 795
    edited October 2017
    Jonstark said:

    This is a fairly radical suggestion, but it might be a way of saving a lot of money and still exceeding the rendering needs. I'll preface this by saying I never render in Studio, in fact I barely use it at all for anything other than occasional rigging and character setup, so you may want to discount my opinion smiley

    However I do use Octane, which runs on the GPU.  I also use Carrara which renders on the CPU, and I like the freedom to switch between these two methods at will on the same PC at any time (I also have other unbiased render solutions with Luxcore, Luxus, Cycles, Thearender and I guess Iray in Studio too though I don't ever use it, but all of that is beside the point).

    I decided that the more CPU cores I could get would be a good thing, but I didn't want to pay through the nose.  I found there are a ton of slightly older-tech dual-Xeon workstations available used on Ebay for super cheap.  Workstations that a couple of years ago were selling for $5000 - $7000 are now selling on ebay for practically peanuts.  I've picked up 2 HP Z600's each for less than $200, and I got a HP Z620 (newer model, more cores) for a little over $350 (I was putting together a mini render farm on the ultra cheap, which is why I bought all these different workstations).

    What I found was that these older workstations work fine with the latest Nvidia cards.  In the Z620 I got I installed a 1070 and now have a 32-core render machine for CPU work plus the 1070 can drive any Octane work I want to do with no problems, and all of this was on the cheap.  Just as an aside, I can also play all the latest video games and run my Oculus through this machine too, so just because the tech is a little older doesn't mean I'm sacrificing anything for any non-rendering playtime activities I want.

    I'm just throwing it out there for consideration that if you watch ebay and get a good deal (very easy to do) you would get a Z600 or similar dual-xeon workstation for $150 - $200 or so (and these can handle up to 24 cores of CPU rendering), then add whatever graphics card you want, and have plenty of room in the budget to get a sizeable SSD too.  After going this route, I feel chagrined at my prior computer purchases where I was buying brand new the latest/greatest PC (and way overspending).   Just throwing it out there as food for thought, as I think a similar approach could work very well for someone with a Studio/Iray focus.

    Not so radical.  This is exactly what I did, except you went with HP and I went with Dell (actually an HP and a Dell).  When the big boys upgrade, I like to swoop in and buy some cheap cores.  I'm sure plenty of people are doing it.  But here's a fairly radical idea: try rendering in Octane and Carrara (or any other CPU renderer) at the same time.  Talk about a production boost!  As long as your cooling is tight, you won't look at production rendering the same way again.  I use Arnold and Redshift, and I set up my scenes to take advantage of each of their strengths.  It's like getting double the performance from each machine.

    Post edited by drzap on
  • kyoto kid Posts: 41,931
    Jonstark said:
    kyoto kid said:
    ...100 cores. So I wonder if having a single box with dual 24 core Epyc CPUs (96 total threads) would work?

    Will indeed work, in theory.  There was someone who posted in the Carrara forum maybe a week ago who said he had purchased one of these monsters and was asking in advance if Carrara would be able to use his cores.  He hasn't posted yet about whether the machine he ordered has come in yet and and the results, but I couldn't help thinking that would be the absolute perfect machine for Carrara (and for that matter any other software that uses multiple cores). I tremendously envy that guy his machine, but not the amount of money he must have spent! smiley

    ..same here.  This is a "lottery win" system.

  • kyoto kid Posts: 41,931

    Jon, I've always heard and read that DAZ Studio/3Delight is unlimited cores on one machine. The studios that pay for the full version don't have that.

    ...that may be true for 3DL but Iray?

  • kyoto kid said:

    Jon, I've always heard and read that DAZ Studio/3Delight is unlimited cores on one machine. The studios that pay for the full version don't have that.

    ...that may be true for 3DL but Iray?

    Here's some data on render speed scaling with CPU. Looks like you'd start to level off with dual Xeons. https://www.pugetsystems.com/labs/articles/NVIDIA-Iray-CPU-Scaling-786/

  • JamesJAB Posts: 1,766
    kyoto kid said:

    Jon, I've always heard and read that DAZ Studio/3Delight is unlimited cores on one machine. The studios that pay for the full version don't have that.

    ...that may be true for 3DL but Iray?

    Here's some data on render speed scaling with CPU. Looks like you'd start to level off with dual Xeons. https://www.pugetsystems.com/labs/articles/NVIDIA-Iray-CPU-Scaling-786/

    There is a very big flaw in those tests.  When trying to assess CPU scaling, you need to remove GPU rendering from the equation completely.
     

  • agent unawares Posts: 3,513
    edited October 2017
    JamesJAB said:
    kyoto kid said:

    Jon, I've always heard and read that DAZ Studio/3Delight is unlimited cores on one machine. The studios that pay for the full version don't have that.

    ...that may be true for 3DL but Iray?

    Here's some data on render speed scaling with CPU. Looks like you'd start to level off with dual Xeons. https://www.pugetsystems.com/labs/articles/NVIDIA-Iray-CPU-Scaling-786/

    There is a very big flaw in those tests.  When trying to decide on CPU scaling, you need to remove the GPU rendering completely from the equation.
     

    Does the GPU contribution not remain more or less a constant baseline?

    EDIT: It looks like it more or less does. However the data is not presented in a way where you can easily look at it and peg CPU contribution. It needs to be graphed with the Y-axis as 1/seconds. That tells how much of your image per second is being rendered, which is an actually useful metric, and from their graph we can see that a single CPU core is contributing abouuuuuuut (1/175-1/300)/19=0.01% of the image per second, which is absolutely negligible compared to the first GPU's 1/300=0.33% per second and gets completely wiped out when adding more GPUs.

    Hey, I can still do math at midnight!

    EDIT: I'm looking at the 970 Iray Night Caustics data specifically.
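For anyone who wants to replay that midnight math, here is a minimal Python sketch. The 300 s and 175 s figures are the approximate times read off the Puget Systems graph (not exact published data), and 19 is the number of extra CPU threads assumed in the post above:

```python
# Rough re-check of the "fraction of the render per second" math above.
# Assumed inputs, read off the Puget Systems 970 Night Caustics graph:
# ~300 s for one GPU alone, ~175 s for the GPU plus ~19 extra CPU threads.
gpu_only = 300.0       # seconds, GPU alone
gpu_plus_cpu = 175.0   # seconds, GPU plus extra CPU threads
extra_threads = 19

gpu_speed = 1 / gpu_only  # fraction of the render completed per second
per_thread_speed = (1 / gpu_plus_cpu - 1 / gpu_only) / extra_threads

print(f"GPU alone: {gpu_speed:.2%} of the render per second")
print(f"One CPU thread: {per_thread_speed:.3%} of the render per second")
```

One CPU thread comes out to roughly 0.013% of the render per second, against 0.33% for the GPU alone, which matches the conclusion above that the CPU contribution is negligible.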

    Post edited by agent unawares on
  • JamesJAB Posts: 1,766
    JamesJAB said:
    kyoto kid said:

    Jon, I've always heard and read that DAZ Studio/3Delight is unlimited cores on one machine. The studios that pay for the full version don't have that.

    ...that may be true for 3DL but Iray?

    Here's some data on render speed scaling with CPU. Looks like you'd start to level off with dual Xeons. https://www.pugetsystems.com/labs/articles/NVIDIA-Iray-CPU-Scaling-786/

    There is a very big flaw in those tests.  When trying to decide on CPU scaling, you need to remove the GPU rendering completely from the equation.
     

    Does the GPU contribution not remain more or less a constant baseline?

    EDIT: It looks like it more or less does. However the data is not presented in a way where you can easily look at it and peg CPU contribution. It needs to be graphed with the Y-axis as 1/seconds. That tells how much of your image per second is being rendered, which is an actually useful metric, and from their graph we can see that a single CPU core is contributing abouuuuuuut (1/175-1/300)/19=0.01% of the image per second, which is absolutely negligible compared to the first GPU's 1/300=0.33% per second and gets completely wiped out when adding more GPUs.

    Hey, I can still do math at midnight!

    EDIT: I'm looking at the 970 Iray Night Caustics data specifically.

    I'm currently running a set of tests using my dual 6-core Xeons, and finding a very linear speed increase as I scale by restricting Daz Studio's available CPU cores in the Windows Task Manager.

  • JamesJAB said:
    JamesJAB said:
    kyoto kid said:

    Jon, I've always heard and read that DAZ Studio/3Delight is unlimited cores on one machine. The studios that pay for the full version don't have that.

    ...that may be true for 3DL but Iray?

    Here's some data on render speed scaling with CPU. Looks like you'd start to level off with dual Xeons. https://www.pugetsystems.com/labs/articles/NVIDIA-Iray-CPU-Scaling-786/

    There is a very big flaw in those tests.  When trying to decide on CPU scaling, you need to remove the GPU rendering completely from the equation.
     

    Does the GPU contribution not remain more or less a constant baseline?

    EDIT: It looks like it more or less does. However the data is not presented in a way where you can easily look at it and peg CPU contribution. It needs to be graphed with the Y-axis as 1/seconds. That tells how much of your image per second is being rendered, which is an actually useful metric, and from their graph we can see that a single CPU core is contributing abouuuuuuut (1/175-1/300)/19=0.01% of the image per second, which is absolutely negligible compared to the first GPU's 1/300=0.33% per second and gets completely wiped out when adding more GPUs.

    Hey, I can still do math at midnight!

    EDIT: I'm looking at the 970 Iray Night Caustics data specifically.

    Currently running a set of tests using my dual 6 core Xeons and finding a very linear speed increase as I scale by restricting Daz Studio's available CPU cores in the Windows Task Manager.

    That's about what I would expect: a little sublinear to account for inefficiencies, which is what the site's data showed also. I'm interested to see what you get.

  • JamesJAB Posts: 1,766

    Iray Starter Essentials Benchmark Scene
    Dell Precision T7500
    2x Intel Xeon X5680 3.33GHz (Total 12 cores / 24 threads)
    24GB RAM (6x4GB DDR3 1333MHz, 6 Channel)
    Windows 10 Pro 64Bit

    CPU Rendering Only, OptiX Prime Disabled
    100 Iray iterations

    No Preload, 24 threads
    51.90 Sec

    The following are all preloaded (first render window still open):

    --24 Threads
    31.29 Sec (7.50 Sec per iteration / thread)
    --20 Threads
    36.80 Sec (7.36 Sec per iteration / thread)
    --16 Threads
    44.90 Sec (7.19 Sec per iteration / thread)
    --12 Threads
    59.10 Sec (7.09 Sec per iteration / thread)
    --8 Threads
    90.81 Sec (7.26 Sec per iteration / thread)
    --4 Threads
    190.45 Sec (7.62 Sec per iteration / thread)
    --2 Threads
    461.61 Sec (9.23 Sec per iteration / thread)

    The render time reduction per thread is extremely linear (not counting the 2-thread render).

    The following are render times using only primary CPU cores. 
    Secondary Hyperthreading cores are disabled. 
    Enabled cores are distributed across both CPUs.

    --12 Threads
    40.43 Sec (4.85 Sec per iteration / thread)
    --8 Threads
    61.99 Sec (4.96 Sec per iteration / thread)
    --6 Threads
    86.63 Sec (5.20 Sec per iteration / thread)
    --4 Threads
    136.83 Sec (5.47 Sec per iteration / thread)

    As you can see here, the per-thread iteration time is quite a bit lower, but the total is not as fast as running with full Hyperthreading.
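One way to double-check per-thread figures like these in Python: multiply each total time by the thread count and divide by the 100 iterations, giving thread-seconds per iteration. A minimal sketch using the hyperthreaded runs above:

```python
# Re-deriving "Sec per iteration / thread" from the hyperthreaded runs:
# (total seconds * threads) / 100 iterations = thread-seconds per iteration.
hyperthreaded_runs = {  # threads -> total seconds for 100 iterations
    24: 31.29, 20: 36.80, 16: 44.90, 12: 59.10, 8: 90.81, 4: 190.45, 2: 461.61,
}
for threads, seconds in sorted(hyperthreaded_runs.items()):
    per_iter_thread = seconds * threads / 100
    print(f"{threads:2d} threads: {per_iter_thread:.2f} s per iteration per thread")
```

The derived values stay near 7.1-7.6 s from 4 through 24 threads, which is what "extremely linear" means here: each thread contributes a roughly constant amount of work per second.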

  • Here is how I would look at your first set of data. Render time does not decrease linearly, but render speed increases almost linearly. (This is an important distinction. Increasing speed linearly gives diminishing returns. Imagine driving somewhere at 1 mph, then at 2 mph; you've halved your time. A while later you are driving there at 50 mph, and an increase to 51 mph gains you almost no time at all.)

    (These should say "threads," not "cores," but I am going to sleep.)

    [attached: data1.png and data2.png — graphs of render time and render speed vs. threads]

    Notice that the speed increase is very slightly sublinear because multiple threads do not work completely efficiently together (the end of the line starts to bend down).

    On average, each of your threads was rendering 0.13% of your test image every second. If you had a GPU, the CPU speed would have been added to whatever speed the GPU was rendering, and the Render Speed graph would be shifted up to start at the GPU speed instead of crossing the y-axis at 0%.
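The sublinearity can also be put as a single number. A small sketch, using the 12- and 24-thread times from the table quoted above: doubling the threads recovers a bit less than double the speed.

```python
# Convert two of the benchmark times into speeds (renders of 100 iterations
# per second) and compare doubling 12 -> 24 threads against ideal scaling.
times = {12: 59.10, 24: 31.29}          # threads -> seconds, from the table
speed = {t: 1 / s for t, s in times.items()}

ideal_24 = 2 * speed[12]                # perfect scaling would double the speed
efficiency = speed[24] / ideal_24
print(f"Scaling efficiency 12 -> 24 threads: {efficiency:.1%}")  # -> 94.4%
```

That ~94% is the "end of the line starts to bend down" effect in numeric form.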

  • ebergerly Posts: 3,255
    edited October 2017
    I don't understand why you guys are even considering CPU rendering. Unless you have an unusual number of CPUs, it's much slower than a GPU. And much worse, it freezes your computer so you can't do anything. Even the Puget Systems link concludes you shouldn't even consider CPU for Iray.
    Post edited by ebergerly on
  • JamesJAB Posts: 1,766
    ebergerly said:
    I don't understand why you guys are even considering CPU rendering. Unless you have an unusual number of CPUs, it's much slower than a GPU. And much worse, it freezes your computer so you can't do anything.

    I was just making a point about how extremely flawed that article linked above was.
     

    Here is how I would look at your first set of data. Render time does not decrease linearly, but render speed increases almost linearly. (This is an important distinction. Increasing speed linearly gives diminishing returns. Imagine driving somewhere at 1 mph, then at 2 mph; you've halved your time. A while later you are driving there at 50 mph, and an increase to 51 mph gains you almost no time at all.)

    (These should say "threads," not "cores," but I am going to sleep.)

    [quoted graphs of render time and render speed vs. threads]

    Notice that the speed increase is very slightly sublinear because multiple threads do not work completely efficiently together (the end of the line starts to bend down).

    On average, each of your threads was rendering 0.13% of your test image every second. If you had a GPU, the CPU speed would have been added to whatever speed the GPU was rendering, and the Render Speed graph would be shifted up to start at the GPU speed instead of crossing the y-axis at 0%.

    That may be a nice statement about the CPU adding speed on top of the GPU, but in practice, adding a CPU or a slower GPU into the render cluster can actually slow down the entire render.  (Probably because the fast renderers are waiting around for the slow ones to complete an important operation.)
    So, every time you double the number of CPU threads, your render time drops to just over half of what it was.  Theoretically, if I had four of my processors, the 48-thread render time would be around 17 seconds.  When that is scaled up to, say, 5000 iterations, you are looking at 15 min vs 28 min.  And this benchmark scene only renders a 400x520 image.  On a complicated scene that doesn't fit in your GPU RAM, 24 threads vs 48 threads could be the difference between a 4-day render and a 2-day render.
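That extrapolation can be sketched in a few lines. It assumes render time halves with each doubling of threads, which is slightly optimistic given the sublinear scaling discussed above, and the 48-thread machine is hypothetical, not a measurement:

```python
# Back-of-envelope scaling from the measured 24-thread benchmark
# (31.29 s per 100 iterations), assuming render time halves each time
# the thread count doubles (slightly optimistic; scaling is sublinear).
t100_24 = 31.29            # seconds per 100 iterations, measured
t100_48 = t100_24 / 2      # hypothetical 48-thread machine
iterations = 5000          # a long production render

for threads, t100 in [(24, t100_24), (48, t100_48)]:
    minutes = t100 * iterations / 100 / 60
    print(f"{threads} threads: ~{minutes:.0f} min for {iterations} iterations")
```

This gives roughly 26 min vs 13 min under ideal halving; the post's 28 min vs 15 min figures are in the same ballpark once sublinear overhead is allowed for.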

  • ebergerly Posts: 3,255

    By the way, speaking of CPU rendering with iray and back to the OP's question...

    I did my own tests, rendering a large iray scene solely with my Ryzen 7 1700 CPU with 8 cores/16 threads, and yes, it takes forever and locks up my machine. But also I happened to monitor my CPU temps during the 2 hour render, out of concern that maybe I too need some better cooling. 

    And with the CPU's 16 threads pegged at 100% for almost 2 hours, my CPU temps flattened at 60C in the first minute or so of the render and never exceeded that value. I recall that AMD says the 1700 can run continuously at 75C, and it shuts down on thermal protection at 95C. Or something like that. So I was very relieved that even continuous CPU load for hours doesn't come close to the continuous temperature rating, and that's with the included Wraith Spire air/fan cooler. 

    And even if my computer room were 25F warmer (like 100F or something), I'd still be at the rated continuous CPU temperature. So I doubt I'll be changing out my cooler for liquid or whatever. 

     
