Don't be Jealous...My New Supercomputer

Comments

  • GatorGator Posts: 1,320

    Won't the newer insanely high core count systems make those older workstations obsolete?  From the prices I saw, sure, you'd pay double to get an i9-7900X, but you're getting something like 4x or more the performance from a single chip, plus much higher single-thread performance.

  • JamesJABJamesJAB Posts: 1,766

    Honestly, nowadays the term "obsolete" is misleading when referring to desktop computers made in the last 10 years.
    As long as you have at least a quad-core i7 (including i7-based Xeons) or an 8-core AMD FX processor, all you need to do to modernize your old computer is bring it up to at least 16GB of RAM and upgrade the video card.
    If you are on an i3, i5, or an FX (4 or 6 cores), then you just need to add a CPU upgrade into the mix. (8-core AMD FX and older-generation Core i7 and i7-based Xeon CPUs can be bought secondhand very inexpensively and will still perform better than a modern i3 or i5.)


    scott762_948aec318a :  My recent $135 CPU upgrade on my Dell workstation brought it up to 12 cores / 24 threads @ 3.33GHz running 6-channel REG ECC DDR3, and I can still upgrade it to 192GB of RAM.
    Not bad for an eight-year-old workstation that I bought off eBay for $150 including shipping a year ago.

  • ebergerlyebergerly Posts: 3,255
    I think it depends on what you'll use it for. I have a 16 CPU Ryzen, but practically speaking not much of my software uses those cores. Iray rendering doesn't, VWD cloth sim does but it will use the GPU in the future, video editing uses the cores, and so on. Yeah, 16 or 32 cores gives you bragging rights if that's important. But in practice it depends.
  • GatorGator Posts: 1,320
    ebergerly said:
    I think it depends on what you'll use it for. I have a 16 CPU Ryzen, but practically speaking not much of my software uses those cores. Iray rendering doesn't, VWD cloth sim does but it will use the GPU in the future, video editing uses the cores, and so on. Yeah, 16 or 32 cores gives you bragging rights if that's important. But in practice it depends.

    Did you upgrade again already?  The Ryzen 1700 is an 8-core / 16-thread CPU.

    I posted a screenshot in another thread when a scene was loading with my i7 6700K.  It appeared to be using all cores, but mine is only a 4-core chip.  When you're loading a scene, what does your CPU use look like?

     

    Snip Load 05 Few Seconds After Completion.JPG
    943 x 759 - 108K
  • ebergerlyebergerly Posts: 3,255
    edited August 2017
    Just a terminology difference. My CPUID app calls the threads CPUs, and sometimes I slip and do the same. As for scene loading, I assume the hard drive or SSD is the limiting factor, not the CPU, so do multiple threads really matter there?
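    A minimal Python sketch of that terminology point (assuming the third-party psutil package, pip install psutil): it reports physical cores versus the logical "CPUs" (hardware threads) that monitoring apps show, and samples per-core load, which you could run while a scene loads to see whether the work spreads across threads or stalls on the disk.

    # Physical cores vs. the logical "CPUs" (hardware threads) that
    # monitoring tools like CPUID report. Requires: pip install psutil
    import psutil

    physical = psutil.cpu_count(logical=False)  # real cores
    logical = psutil.cpu_count(logical=True)    # hardware threads ("CPUs")
    print(f"{physical} physical cores, {logical} logical CPUs")

    # Sample per-core utilization for a few seconds -- e.g. while a
    # scene is loading -- to see whether all threads are actually busy.
    for _ in range(5):
        loads = psutil.cpu_percent(interval=1.0, percpu=True)
        print(" ".join(f"{pct:5.1f}" for pct in loads))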
    Post edited by ebergerly on
  • GatorGator Posts: 1,320
    JamesJAB said:

    Honestly, nowadays the term "obsolete" is misleading when referring to desktop computers made in the last 10 years.
    As long as you have at least a quad-core i7 (including i7-based Xeons) or an 8-core AMD FX processor, all you need to do to modernize your old computer is bring it up to at least 16GB of RAM and upgrade the video card.
    If you are on an i3, i5, or an FX (4 or 6 cores), then you just need to add a CPU upgrade into the mix. (8-core AMD FX and older-generation Core i7 and i7-based Xeon CPUs can be bought secondhand very inexpensively and will still perform better than a modern i3 or i5.)


    scott762_948aec318a :  My recent $135 CPU upgrade on my Dell workstation brought it up to 12 cores / 24 threads @ 3.33GHz running 6-channel REG ECC DDR3, and I can still upgrade it to 192GB of RAM.
    Not bad for an eight-year-old workstation that I bought off eBay for $150 including shipping a year ago.

    Not too shabby for the price.  smiley

    Of course, slow or fast can be a matter of opinion...  I'm looking at upgrading my 6700K system, and I'm giving the Ryzen Threadrippers a hard look.

    Not as fast per core as the Intel Core X CPUs, but the 64 PCIe lanes are what have me really interested.  I'm thinking all those dedicated lanes should help a lot with I/O.  The biggest thing I wait on in Daz is a scene starting to render.  Loading a scene and parenting a Genesis 3 figure to another take a long time too, but impact my workflow much less.  I'm thinking the biggest boost will come from the quad-channel DDR4 memory and M.2 NVMe storage, and to a lesser extent from dedicated x16 PCIe lanes for the two cards I currently have.
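    As a rough sanity check on that reasoning, here is a back-of-envelope Python sketch of the nominal peak bandwidths involved (the DDR4 speed is an assumed kit; real-world throughput is lower than these theoretical figures):

    # Nominal peak-bandwidth arithmetic for the platform discussed above.
    ddr4_rate_mts = 2666   # assumed DDR4-2666; 8 bytes per transfer/channel
    channels = 4           # quad-channel
    mem_gbs = ddr4_rate_mts * 8 * channels / 1000
    print(f"Quad-channel DDR4-{ddr4_rate_mts}: ~{mem_gbs:.0f} GB/s peak")

    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding ~ 0.985 GB/s/lane
    lane_gbs = 8 * 128 / 130 / 8
    for lanes in (16, 4):  # x16 GPU slot, x4 NVMe (M.2) drive
        print(f"PCIe 3.0 x{lanes}: ~{lane_gbs * lanes:.1f} GB/s each way")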

  • nicsttnicstt Posts: 11,715
    Tobor said:
    drzap said:

    You fail to make your point.

    I did make my point, but you are under numerous misunderstandings, including about how Iray balances rendering load.

    1. GPUs *are* designed for high utilization for long periods of time, even without water cooling. The consumer models most people here use are made for gaming, and gamers play for extended periods at a time.

    2. nVidia GPUs will perfcap at thermal limits, managing fan speed and overclock speed, and throttling GPU clocks as needed. If heat *is* an issue, the card takes care of itself.

    3. It's very easy to see what the temps of a GPU are. On my machine, even at 100% for *hours*, they're still not more than 60-65% of rating. Conversely, on my replacement Precision with dual Xeons, 100% utilization for one hour will cause a temperature rise to at least 80% of rating.

    4. Assuming even a modest current-model GPU, total rendering time is not significantly reduced by including the CPU. This has been demonstrated here in numerous threads. Iray is designed for VCAs and professional appliances where loads are balanced more or less evenly across similar rendering hardware. The typical Pascal GPU with 2000+ cores will outpace whatever CPU is in the machine, yet Iray will try to balance the iterations between the slow CPU and the fast GPU. Net result: a 25-minute GPU-only render might take 22 minutes with the CPU added in. Hardly worth the added power consumption, heat, and loss of use of the machine during the render.

     

    Reasonably high, yes, but not what rendering does: at or close to 100% for long periods of time. They are not designed for that. A game constantly varies in its demands on the GPU; rendering, as a rule, stays constant. The two are very different in how much they tax the graphics card.
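    To put Tobor's point 4 in rough numbers: each device contributes iterations at its own rate, so the combined render time is a harmonic-style sum of the solo times. A minimal Python sketch (the 25-minute GPU figure is from the quote above; the CPU-only time is an assumed illustration, and real Iray scheduling is not this ideal):

    # Sketch of why adding a slow CPU to a fast GPU barely shortens a
    # render: device throughputs add, so combined time is a harmonic sum.
    def combined_render_time(*solo_times_min):
        """Each device's throughput is 1/solo_time; throughputs add."""
        return 1.0 / sum(1.0 / t for t in solo_times_min)

    gpu_only = 25.0   # minutes, from the example quoted above
    cpu_only = 200.0  # assumed: this CPU alone is ~8x slower than the GPU

    both = combined_render_time(gpu_only, cpu_only)
    print(f"GPU only: {gpu_only:.0f} min; GPU+CPU: {both:.1f} min")
    # -> about 22 minutes: a ~3 minute saving for the extra heat,
    #    power draw, and a bogged-down machine.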

  • kyoto kidkyoto kid Posts: 42,117
    edited August 2017
    JamesJAB said:

    As far as I know, Dell Precision workstations will run any REG ECC DDR3 modules (provided that they are not too slow for the CPU).
    For "Legacy" workstation memory I look to eBay and get it secondhand.

     

    Oh, I just read your post about CPU upgrading again.  You have one of the first-gen i7 boards that can accept a Xeon X56xx series CPU, but you are still stuck with non-ECC memory.
    You could always do like me and keep an eye out for a Precision T7500 with a second CPU installed (so you get the CPU riser with 6 more RAM slots) on eBay for cheap.
    Doing it this way frees up a lot of your budget to spend on video cards. :P

    Here are a few bullet points about the T7500

    • 1100W power supply
    • 12 REG ECC DDR3 slots with the second CPU riser installed (up to 192GB, 12x16GB)
    • Dual CPUs up to Xeon X5690 for a total of 12 cores / 24 threads
    • Very robust cooling setup (and quiet at 100% CPU load)
    • 2 x16 PCI-E slots with enough space to install the longest video cards
    • Comes stock with Windows 7 Pro (even if you buy one with no OS, the BIOS will most likely have the flag to auto-activate it)

    ...not a big fan of Dell, or HP for that matter.  I prefer "white box" systems, which is what mine currently is.  Legacy hardware is going up in price as well.  The 24 GB kit I was considering for my current system went from $114 in December (when I didn't have the extra funds) to $194 today, a jump of $80 for "old tech". That's the cheapest I can find (Newegg) for a tri-channel DDR3 1333 kit that is compatible with my MB (Amazon and others have the same kit listed for $225 or more).

    Haswell Xeons also support more advanced features than Nehalem ones, which could come into play for more efficient GPU card compatibility.  The big point is I can find 128 GB quad-channel kits without having to go the "used" route.

    Post edited by kyoto kid on
  • kyoto kidkyoto kid Posts: 42,117
    edited August 2017

    Won't the newer insanely high core count systems make those older workstations obsolete?  From the prices I saw, sure, you'd pay double to get an i9-7900X, but you're getting something like 4x or more the performance from a single chip, plus much higher single-thread performance.

    ...but then I get stuck having to run W10, which forces updates on you (that can have bugs or other instabilities) along with ridiculous, useless features like Cortana and Xbox Live (the former of which you need to hack the registry to completely disable so it doesn't come back after a new update). It will also require me to purchase new peripherals such as a printer, trackball, and such, and it reserves a noticeable portion of GPU VRAM.  No thanks.
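    (For reference, the registry hack alluded to above is the AllowCortana policy value. A minimal Python sketch of it, assuming Windows and an elevated prompt; as noted, a feature update can reset policy keys, so it may need re-applying.)

    # Sets the documented AllowCortana group-policy value to 0.
    # Windows only; run from an elevated (admin) Python, then sign out
    # or reboot. Edit the registry at your own risk.
    import winreg

    KEY = r"SOFTWARE\Policies\Microsoft\Windows\Windows Search"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                            winreg.KEY_SET_VALUE) as k:
        winreg.SetValueEx(k, "AllowCortana", 0, winreg.REG_DWORD, 0)
    print("AllowCortana=0 set; sign out or reboot to apply.")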

    Post edited by kyoto kid on
  • kyoto kidkyoto kid Posts: 42,117
    ebergerly said:
    I think it depends on what you'll use it for. I have a 16 CPU Ryzen, but practically speaking not much of my software uses those cores. Iray rendering doesn't, VWD cloth sim does but it will use the GPU in the future, video editing uses the cores, and so on. Yeah, 16 or 32 cores gives you bragging rights if that's important. But in practice it depends.

    ..Carrara will use as many cores/threads for rendering as you can throw at it. If you can afford to build a networked render box with dual Epyc 32-core CPUs, it will use all 128 threads.
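    That scaling works because offline rendering is embarrassingly parallel: the frame splits into independent tiles and every core takes tiles as it finishes them. A toy Python sketch of the general bucket-rendering pattern (not Carrara's actual code; the tile "shading" here is a stand-in):

    # Toy bucket renderer: a process pool hands independent tiles to
    # every available core, which is why core count scales so well.
    from multiprocessing import Pool, cpu_count

    WIDTH, HEIGHT, TILE = 1920, 1080, 64

    def render_tile(origin):
        x0, y0 = origin
        # Stand-in for real shading work: one value per pixel.
        return [(x0 + dx) ^ (y0 + dy)
                for dy in range(TILE) for dx in range(TILE)]

    if __name__ == "__main__":
        tiles = [(x, y) for y in range(0, HEIGHT, TILE)
                        for x in range(0, WIDTH, TILE)]
        with Pool(cpu_count()) as pool:  # uses every logical CPU
            results = pool.map(render_tile, tiles)
        print(f"Rendered {len(results)} tiles on {cpu_count()} threads")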

  • ebergerlyebergerly Posts: 3,255

      

    kyoto kid said:
    ..Carrara will use as many cores/threads for rendering as you can throw at it. If you can afford to build a networked render box with dual Epyc 32-core CPUs, it will use all 128 threads.

    Okay, but given the choice between rendering with a many-core CPU and a GPU, isn't the GPU a clear choice on a cost-per-performance basis? I thought that CPU rendering, even with many cores, doesn't come close to the performance of a good GPU. But I may be wrong about that. I just recall testing my 8-core/16-thread Ryzen against a GTX 1070, and the Ryzen was nowhere close in overall render time.
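    One way to frame the cost-per-performance question as a sketch (prices and render times below are assumed illustrations, not benchmarks; the point is the metric, dollars per unit of rendering throughput):

    # Dollars per unit of rendering throughput; all numbers assumed.
    hardware = {
        # name: (price_usd, minutes to finish the same test scene)
        "GTX 1070 (GPU)":   (430, 10.0),  # assumed
        "Ryzen 8c/16t CPU": (300, 60.0),  # assumed: "nowhere close"
    }
    for name, (price, minutes) in hardware.items():
        throughput = 1.0 / minutes  # scenes per minute
        print(f"{name}: ${price / throughput:,.0f} per scene-per-minute")
    # The GPU wins by a wide margin unless CPU times get close.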

  • kyoto kidkyoto kid Posts: 42,117
    edited August 2017
    nicstt said:
    Tobor said:
    drzap said:

    You fail to make your point.

    I did make my point, but you are under numerous misunderstandings, including about how Iray balances rendering load.

    1. GPUs *are* designed for high utilization for long periods of time, even without water cooling. The consumer models most people here use are made for gaming, and gamers play for extended periods at a time.

    2. nVidia GPUs will perfcap at thermal limits, managing fan speed and overclock speed, and throttling GPU clocks as needed. If heat *is* an issue, the card takes care of itself.

    3. It's very easy to see what the temps of a GPU are. On my machine, even at 100% for *hours*, they're still not more than 60-65% of rating. Conversely, on my replacement Precision with dual Xeons, 100% utilization for one hour will cause a temperature rise to at least 80% of rating.

    4. Assuming even a modest current-model GPU, total rendering time is not significantly reduced by including the CPU. This has been demonstrated here in numerous threads. Iray is designed for VCAs and professional appliances where loads are balanced more or less evenly across similar rendering hardware. The typical Pascal GPU with 2000+ cores will outpace whatever CPU is in the machine, yet Iray will try to balance the iterations between the slow CPU and the fast GPU. Net result: a 25-minute GPU-only render might take 22 minutes with the CPU added in. Hardly worth the added power consumption, heat, and loss of use of the machine during the render.

     

    Reasonably high, yes, but not what rendering does: at or close to 100% for long periods of time. They are not designed for that. A game constantly varies in its demands on the GPU; rendering, as a rule, stays constant. The two are very different in how much they tax the graphics card.

    ...and that is part of why the Quadro line is more expensive: they are designed to handle peak output for an extended amount of time.  If I had the funds I would seriously consider a 16 GB P5000; however, one card alone would be almost the cost of the entire workstation I've been planning (about $2,500).

    Post edited by kyoto kid on
  • kyoto kidkyoto kid Posts: 42,117
    ebergerly said:

      

    kyoto kid said:
    ..Carrara will use as many cores/threads for rendering as you can throw at it. If you can afford to build a networked render box with dual Epyc 32-core CPUs, it will use all 128 threads.

    Okay, but given the choice between rendering with a many-core CPU and a GPU, isn't the GPU a clear choice on a cost-per-performance basis? I thought that CPU rendering, even with many cores, doesn't come close to the performance of a good GPU. But I may be wrong about that. I just recall testing my 8-core/16-thread Ryzen against a GTX 1070, and the Ryzen was nowhere close in overall render time.

    ...Carrara does not support GPU rendering. Neither does Vue Infinite natively.

  • ebergerlyebergerly Posts: 3,255

    ...Carrara does not support GPU rendering. Neither does Vue Infinite natively.

    Oh, okay, I didn't know that. So rendering animations, for example, in Carrara must be really painful, no? I mean, renders that might take a few minutes with a GPU would take many times longer with a CPU, even with 4 or 6 cores.

  • kyoto kidkyoto kid Posts: 42,117
    edited August 2017

    ...yes, but still not as bad as 3DL with UE in Daz Studio.  One scene I did that used an HDRI setting with UE and had only five frames of motion blur took sixteen and a half hours to render, and that is on my current system.

    When I downloaded the trial version of Carrara 6.1 years ago on my old 1.6 GHz dual-core 32-bit notebook, I did a 5-6 second flyaround of a Martian terrain I created, which I believe took a little over an hour to render.  Granted, there were no transmaps, reflectivity, or atmospheric effects to slow the process down.
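    (The arithmetic on that flyaround roughly checks out; the frame rate is assumed since it wasn't stated:)

    # Quick arithmetic on the flyaround above. The ~6 s length and
    # ~1 hour total are from the post; 24 fps is an assumption.
    seconds, fps = 6, 24
    frames = seconds * fps          # 144 frames
    per_frame = 60 * 60 / frames    # seconds per frame for a 1 hr job
    print(f"{frames} frames -> ~{per_frame:.0f} s/frame")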

    Post edited by kyoto kid on
  • JamesJABJamesJAB Posts: 1,766
    kyoto kid said:
    JamesJAB said:

    As far as I know, Dell Precision workstations will run any REG ECC DDR3 modules (provided that they are not too slow for the CPU).
    For "Legacy" workstation memory I look to eBay and get it secondhand.

     

    Oh, I just read your post about CPU upgrading again.  You have one of the first-gen i7 boards that can accept a Xeon X56xx series CPU, but you are still stuck with non-ECC memory.
    You could always do like me and keep an eye out for a Precision T7500 with a second CPU installed (so you get the CPU riser with 6 more RAM slots) on eBay for cheap.
    Doing it this way frees up a lot of your budget to spend on video cards. :P

    Here are a few bullet points about the T7500

    • 1100W power supply
    • 12 REG ECC DDR3 slots with the second CPU riser installed (up to 192GB, 12x16GB)
    • Dual CPUs up to Xeon X5690 for a total of 12 cores / 24 threads
    • Very robust cooling setup (and quiet at 100% CPU load)
    • 2 x16 PCI-E slots with enough space to install the longest video cards
    • Comes stock with Windows 7 Pro (even if you buy one with no OS, the BIOS will most likely have the flag to auto-activate it)

    ...not a big fan of Dell, or HP for that matter.  I prefer "white box" systems, which is what mine currently is.  Legacy hardware is going up in price as well.  The 24 GB kit I was considering for my current system went from $114 in December (when I didn't have the extra funds) to $194 today, a jump of $80 for "old tech". That's the cheapest I can find (Newegg) for a tri-channel DDR3 1333 kit that is compatible with my MB (Amazon and others have the same kit listed for $225 or more).

    Haswell Xeons also support more advanced features than Nehalem ones, which could come into play for more efficient GPU card compatibility.  The big point is I can find 128 GB quad-channel kits without having to go the "used" route.

    For the most part I prefer to do custom builds.  My issue is getting computer-building funds approved through the wife. cheeky
    I'm always on the lookout for crazy computer deals on eBay and Craigslist, because those are always easier to get funding approval on.  And older Xeons and REG ECC memory are always inexpensive on the used market (I can get 6x4GB for $45 or 3x8GB for $55 right now on eBay). But yes, even the used market is a little inflated for RAM right now (when I first got this box a year ago, I bought 6x4GB for $33).

  • GatorGator Posts: 1,320
    kyoto kid said:
    ebergerly said:

      

    kyoto kid said:
    ..Carrara will use as many cores/threads for rendering as you can throw at it. If you can afford to build a networked render box with dual Epyc 32-core CPUs, it will use all 128 threads.

    Okay, but given the choice between rendering with a many-core CPU and a GPU, isn't the GPU a clear choice on a cost-per-performance basis? I thought that CPU rendering, even with many cores, doesn't come close to the performance of a good GPU. But I may be wrong about that. I just recall testing my 8-core/16-thread Ryzen against a GTX 1070, and the Ryzen was nowhere close in overall render time.

    ...Carrara does not support GPU rendering. Neither does Vue Infinite natively.

    Can't you export scenes to an external renderer like Octane?

    I've never compared the cost, but it seems, with how powerful consumer GPUs are, that would be cheaper.

  • GatorGator Posts: 1,320
    kyoto kid said:

    Won't the newer insanely high core count systems make those older workstations obsolete?  From the prices I saw, sure, you'd pay double to get an i9-7900X, but you're getting something like 4x or more the performance from a single chip, plus much higher single-thread performance.

    ...but then I get stuck having to run W10, which forces updates on you (that can have bugs or other instabilities) along with ridiculous, useless features like Cortana and Xbox Live (the former of which you need to hack the registry to completely disable so it doesn't come back after a new update). It will also require me to purchase new peripherals such as a printer, trackball, and such, and it reserves a noticeable portion of GPU VRAM.  No thanks.

    Topic for another thread, but Win 10 is nice.  You can disable the privacy stuff; it's easy to find out how online.  I didn't like the idea of it at first, but now I'll never go back to 7 from 10.  It is better.

  • GatorGator Posts: 1,320
    nicstt said:
    Tobor said:
    drzap said:

    You fail to make your point.

    I did make my point, but you are under numerous misunderstandings, including about how Iray balances rendering load.

    1. GPUs *are* designed for high utilization for long periods of time, even without water cooling. The consumer models most people here use are made for gaming, and gamers play for extended periods at a time.

    2. nVidia GPUs will perfcap at thermal limits, managing fan speed and overclock speed, and throttling GPU clocks as needed. If heat *is* an issue, the card takes care of itself.

    3. It's very easy to see what the temps of a GPU are. On my machine, even at 100% for *hours*, they're still not more than 60-65% of rating. Conversely, on my replacement Precision with dual Xeons, 100% utilization for one hour will cause a temperature rise to at least 80% of rating.

    4. Assuming even a modest current-model GPU, total rendering time is not significantly reduced by including the CPU. This has been demonstrated here in numerous threads. Iray is designed for VCAs and professional appliances where loads are balanced more or less evenly across similar rendering hardware. The typical Pascal GPU with 2000+ cores will outpace whatever CPU is in the machine, yet Iray will try to balance the iterations between the slow CPU and the fast GPU. Net result: a 25-minute GPU-only render might take 22 minutes with the CPU added in. Hardly worth the added power consumption, heat, and loss of use of the machine during the render.

     

    Reasonably high, yes, but not what rendering does: at or close to 100% for long periods of time. They are not designed for that. A game constantly varies in its demands on the GPU; rendering, as a rule, stays constant. The two are very different in how much they tax the graphics card.

    But with the cost difference, you can use GeForce cards and upgrade every year or two and come out cheaper.  Yeah, you're using it at 100% capacity; just don't keep it as long.  wink

  • Kevin SandersonKevin Sanderson Posts: 1,643
    edited August 2017

    Yes, Carrara has an Octane plug-in. Not many users, but it has one.

    In Carrara, it's easy to fake GI, and it can really crank out the frames if you know what you're doing. It comes network-render ready, too. There was/is a user who had networked multiple bare-bones PCs in his garage and rendered out some animations. Howie Farkes first developed his marvelous outdoor sets for Carrara before doing some for DAZ Studio. Unfortunately, there has been a slowdown in development of the software, but thanks to users you can get some of the recent figures in. The settings are fairly straightforward, with no spaghetti to deal with. But right now it's easier to do more realistic renders in DAZ Studio/Iray, so most of my free time goes there. Here are a couple of videos from 7 years ago showing just a little of what you can do in Carrara - a scene that renders in under a minute on an old computer.

     



     

    Post edited by Kevin Sanderson on
  • kyoto kidkyoto kid Posts: 42,117
    JamesJAB said:
    kyoto kid said:
    JamesJAB said:

    As far as I know, Dell Precision workstations will run any REG ECC DDR3 modules (provided that they are not too slow for the CPU).
    For "Legacy" workstation memory I look to eBay and get it secondhand.

     

    Oh, I just read your post about CPU upgrading again.  You have one of the first-gen i7 boards that can accept a Xeon X56xx series CPU, but you are still stuck with non-ECC memory.
    You could always do like me and keep an eye out for a Precision T7500 with a second CPU installed (so you get the CPU riser with 6 more RAM slots) on eBay for cheap.
    Doing it this way frees up a lot of your budget to spend on video cards. :P

    Here are a few bullet points about the T7500

    • 1100W power supply
    • 12 REG ECC DDR3 slots with the second CPU riser installed (up to 192GB, 12x16GB)
    • Dual CPUs up to Xeon X5690 for a total of 12 cores / 24 threads
    • Very robust cooling setup (and quiet at 100% CPU load)
    • 2 x16 PCI-E slots with enough space to install the longest video cards
    • Comes stock with Windows 7 Pro (even if you buy one with no OS, the BIOS will most likely have the flag to auto-activate it)

    ...not a big fan of Dell, or HP for that matter.  I prefer "white box" systems, which is what mine currently is.  Legacy hardware is going up in price as well.  The 24 GB kit I was considering for my current system went from $114 in December (when I didn't have the extra funds) to $194 today, a jump of $80 for "old tech". That's the cheapest I can find (Newegg) for a tri-channel DDR3 1333 kit that is compatible with my MB (Amazon and others have the same kit listed for $225 or more).

    Haswell Xeons also support more advanced features than Nehalem ones, which could come into play for more efficient GPU card compatibility.  The big point is I can find 128 GB quad-channel kits without having to go the "used" route.

    For the most part I prefer to do custom builds.  My issue is getting computer-building funds approved through the wife. cheeky
    I'm always on the lookout for crazy computer deals on eBay and Craigslist, because those are always easier to get funding approval on.  And older Xeons and REG ECC memory are always inexpensive on the used market (I can get 6x4GB for $45 or 3x8GB for $55 right now on eBay). But yes, even the used market is a little inflated for RAM right now (when I first got this box a year ago, I bought 6x4GB for $33).

    ...however, my current system's MB does not support registered ECC memory, so I am stuck paying the $194.
  • kyoto kidkyoto kid Posts: 42,117
    edited August 2017
    kyoto kid said:
    ebergerly said:

      

    kyoto kid said:
    ..Carrara will use as many cores/threads for rendering as you can throw at it. If you can afford to build a networked render box with dual Epyc 32-core CPUs, it will use all 128 threads.

    Okay, but given the choice between rendering with a many-core CPU and a GPU, isn't the GPU a clear choice on a cost-per-performance basis? I thought that CPU rendering, even with many cores, doesn't come close to the performance of a good GPU. But I may be wrong about that. I just recall testing my 8-core/16-thread Ryzen against a GTX 1070, and the Ryzen was nowhere close in overall render time.

    ...Carrara does not support GPU rendering. Neither does Vue Infinite natively.

    Can't you export scenes to an external renderer like Octane?

    I've never compared the cost, but it seems, with how powerful consumer GPUs are, that would be cheaper.

    ...not that I am aware of. Besides, Octane by itself with a plugin is around a $600 expenditure. Not really all that cheap a solution.
    Post edited by kyoto kid on
  • kyoto kidkyoto kid Posts: 42,117
    edited August 2017
    kyoto kid said:

    Won't the newer insanely high core count systems make those older workstations obsolete?  From the prices I saw, sure, you'd pay double to get an i9-7900X, but you're getting something like 4x or more the performance from a single chip, plus much higher single-thread performance.

    ...but then I get stuck having to run W10, which forces updates on you (that can have bugs or other instabilities) along with ridiculous, useless features like Cortana and Xbox Live (the former of which you need to hack the registry to completely disable so it doesn't come back after a new update). It will also require me to purchase new peripherals such as a printer, trackball, and such, and it reserves a noticeable portion of GPU VRAM.  No thanks.

    Topic for another thread, but Win 10 is nice.  You can disable the privacy stuff; it's easy to find out how online.  I didn't like the idea of it at first, but now I'll never go back to 7 from 10.  It is better.

     

    ..apologies, but features like Cortana are useless bloat that only serve to get in the way of my work. The force-fed updating is also a major flaw; not all of us are 80-year-old grandmas who watch silly cat vids on YouTube, tweet, or post to Facebook. Also not into paying for a high-VRAM GPU card to have its memory hamstrung by the OS either.

    To put it bluntly, I'm not into handing control of my system totally over to MS.

    Oh, and the discussion has relevance, as the new CPU technology is forcing people to adopt W10.

    Post edited by kyoto kid on
  • kyoto kidkyoto kid Posts: 42,117
    edited August 2017

    @ Kevin: ...but again, Daz Studio still does not support large environments, environment and terrain generation, or modelling. I prefer to use mesh-based environments and sets that I can alter, instead of stock HDRIs, which are the only way to get "big" environments to render efficiently in Daz Studio.

    Post edited by kyoto kid on
  • GatorGator Posts: 1,320
    nicstt said:
    Tobor said:
    drzap said:

    You fail to make your point.

    I did make my point, but you are under numerous misunderstandings, including about how Iray balances rendering load.

    1. GPUs *are* designed for high utilization for long periods of time, even without water cooling. The consumer models most people here use are made for gaming, and gamers play for extended periods at a time.

    2. nVidia GPUs will perfcap at thermal limits, managing fan speed and overclock speed, and throttling GPU clocks as needed. If heat *is* an issue, the card takes care of itself.

    3. It's very easy to see what the temps of a GPU are. On my machine, even at 100% for *hours*, they're still not more than 60-65% of rating. Conversely, on my replacement Precision with dual Xeons, 100% utilization for one hour will cause a temperature rise to at least 80% of rating.

    4. Assuming even a modest current-model GPU, total rendering time is not significantly reduced by including the CPU. This has been demonstrated here in numerous threads. Iray is designed for VCAs and professional appliances where loads are balanced more or less evenly across similar rendering hardware. The typical Pascal GPU with 2000+ cores will outpace whatever CPU is in the machine, yet Iray will try to balance the iterations between the slow CPU and the fast GPU. Net result: a 25-minute GPU-only render might take 22 minutes with the CPU added in. Hardly worth the added power consumption, heat, and loss of use of the machine during the render.

     

    Reasonably high, yes, but not what rendering does: at or close to 100% for long periods of time. They are not designed for that. A game constantly varies in its demands on the GPU; rendering, as a rule, stays constant. The two are very different in how much they tax the graphics card.

    But with the cost difference, you can use GeForce cards and upgrade every year or two and come out cheaper.  Yeah, you're using it at 100% capacity; just don't keep it as long.  wink

    kyoto kid said:
    kyoto kid said:
    ebergerly said:

      

    kyoto kid said:
    ..Carrara will use as many cores/threads for rendering as you can throw at it. If you can afford to build a networked render box with dual Epyc 32-core CPUs, it will use all 128 threads.

    Okay, but given the choice between rendering with a many-core CPU and a GPU, isn't the GPU a clear choice on a cost-per-performance basis? I thought that CPU rendering, even with many cores, doesn't come close to the performance of a good GPU. But I may be wrong about that. I just recall testing my 8-core/16-thread Ryzen against a GTX 1070, and the Ryzen was nowhere close in overall render time.

    ...Carrara does not support GPU rendering. Neither does Vue Infinite natively.

    Can't you export scenes to an external renderer like Octane?

    I've never compared the cost, but it seems, with how powerful consumer GPUs are, that would be cheaper.

     

    ...not that I am aware of. Besides, Octane by itself with a plugin is around a $600 expenditure. Not really all that cheap a solution.

    Moot point since it isn't available, but, for example, with Iray and Daz it would be way more cost- and power-efficient to render with GPUs rather than CPUs.  Gotta consider TCO; software is part of it.

  • $579 for Octane and the Carrara plugin.

  • kyoto kidkyoto kid Posts: 42,117
    edited August 2017
    nicstt said:
    Tobor said:
    drzap said:

    You fail to make your point.

    I did make my point, but you are under numerous misunderstandings, including about how Iray balances rendering load.

    1. GPUs *are* designed for high utilization for long periods of time, even without water cooling. The consumer models most people here use are made for gaming, and gamers play for extended periods at a time.

    2. nVidia GPUs will perfcap at thermal limits, managing fan speed and overclock speed, and throttling GPU clocks as needed. If heat *is* an issue, the card takes care of itself.

    3. It's very easy to see what the temps of a GPU are. On my machine, even at 100% for *hours*, they're still not more than 60-65% of rating. Conversely, on my replacement Precision with dual Xeons, 100% utilization for one hour will cause a temperature rise to at least 80% of rating.

    4. Assuming even a modest current-model GPU, total rendering time is not significantly reduced by including the CPU. This has been demonstrated here in numerous threads. Iray is designed for VCAs and professional appliances where loads are balanced more or less evenly across similar rendering hardware. The typical Pascal GPU with 2000+ cores will outpace whatever CPU is in the machine, yet Iray will try to balance the iterations between the slow CPU and the fast GPU. Net result: a 25-minute GPU-only render might take 22 minutes with the CPU added in. Hardly worth the added power consumption, heat, and loss of use of the machine during the render.

     

    Reasonably high, yes, but not what rendering does: at or close to 100% for long periods of time. They are not designed for that. A game constantly varies in its demands on the GPU; rendering, as a rule, stays constant. The two are very different in how much they tax the graphics card.

    But with the cost difference, you can use GeForce cards and upgrade every year or two and come out cheaper.  Yeah, you're using it at 100% capacity; just don't keep it as long.  wink

    kyoto kid said:
    kyoto kid said:
    ebergerly said:

      

    kyoto kid said:
    ..Carrara will use as many cores/threads for rendering as you can throw at it. If you can afford to build a networked render box with dual Epyc 32-core CPUs, it will use all 128 threads.

    Okay, but given the choice between rendering with a many-core CPU and a GPU, isn't the GPU a clear choice on a cost-per-performance basis? I thought that CPU rendering, even with many cores, doesn't come close to the performance of a good GPU. But I may be wrong about that. I just recall testing my 8-core/16-thread Ryzen against a GTX 1070, and the Ryzen was nowhere close in overall render time.

    ...Carrara does not support GPU rendering. Neither does Vue Infinite natively.

    Can't you export scenes to an external renderer like Octane?

    I've never compared the cost, but it seems, with how powerful consumer GPUs are, that would be cheaper.

     

    ...not that I am aware of. Besides, Octane by itself with a plugin is around a $600 expenditure. Not really all that cheap a solution.

    Moot point since it isn't available, but, for example, with Iray and Daz it would be way more cost- and power-efficient to render with GPUs rather than CPUs.  Gotta consider TCO; software is part of it.

    ...again, Daz Studio does not handle large, detailed environments very well.  Andrey's Forest and Polish's Cyberpunk Subway are two examples of heavy sets that cause the programme to run very sluggishly.  Carrara simply handles large environments much better and renders faster than Iray in CPU mode.  As it has GI, AO, and even volumetrics, it can produce pretty darn close to realistic results.  I don't just work in Daz Studio.

    Post edited by kyoto kid on
  • kyoto kidkyoto kid Posts: 42,117

    $579 for Octane and the Carrara plugin.

    ..more than I am able to spend considering I am putting away for the new workstation.  Also, doesn't Octane crash if a scene exceeds VRAM, or does it do like Iray and swap to the CPU?

    Octane can use "out of core" memory. When it's enabled, all textures and even the HDRI are loaded into system memory. Geometry must fit in the VRAM, though.
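    In other words, only texture data can spill to host RAM; if the geometry alone exceeds VRAM, the render still fails. A little Python sketch of that budgeting rule (asset sizes are made-up illustrations):

    # Octane's out-of-core rule as described above: textures may spill
    # to reserved system RAM, but geometry must fit in VRAM.
    def fits(vram_gb, sysram_reserved_gb, geometry_gb, textures_gb):
        if geometry_gb > vram_gb:
            return False                    # geometry can't spill
        spill = max(0.0, textures_gb - (vram_gb - geometry_gb))
        return spill <= sysram_reserved_gb  # overflow goes out-of-core

    # 8 GB card, 16 GB of system RAM reserved for out-of-core textures
    print(fits(8, 16, geometry_gb=5, textures_gb=12))  # True: 9 GB spills
    print(fits(8, 16, geometry_gb=9, textures_gb=1))   # False: no VRAM fit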

  • DustRiderDustRider Posts: 2,903
    edited August 2017

    Just reserve the amount of system RAM you think you will need, and Octane automatically does the rest (and no, it is not a hybrid renderer; all of the rendering is done on the GPU, even when using out-of-core memory). Note in the attached image that the Octane interface also includes a lot of useful information without needing a third-party utility (which may or may not be accurate). You can also set the amount of GPU memory for Octane to use, and reserve GPU memory for use in other applications (see attached).

    Carrara Octane.JPG
    1065 x 863 - 113K
    Post edited by DustRider on