Has anyone here bought a Threadripper rig and used it for Studio yet?

Gator Posts: 1,320
edited August 2017 in The Commons

Been thinking hard about getting one.  Doubt all the cores and threads will be used much, but the key things that have me looking at it are the 64 PCIe lanes and the fact that my current rig is a "lowly" quad core.  The recommendations for Daz Studio call for a 6-core CPU when rendering with Iray, and claim that dropping from 16 PCIe lanes to 8 gives only 80% of the performance from your video cards... currently I'm running 2, considering a third or more.

 

https://helpdaz.zendesk.com/hc/en-us/articles/207530513-System-Recommendations-for-DAZ-Studio-4-

To decide what computer to go with I would use the following guidelines:

  • Make sure your motherboard has the latest Intel chipset (currently the X99)
  • For graphics applications 32GB of RAM is a good minimum, most high end systems have 64GB but this will have little to no impact on your render time.
  • 6 core CPUs are enough if you are going to use Iray (or other GPU renderer), when you get into high core count GPUs, the CPU becomes less important.
  • Get as many large GPUs as you can… 4GB of memory is a minimum, over 8GB is probably not going to be needed. Core count is critical, so if two less expensive cards give you more cores than one expensive card, go for the two cards…

  • Make sure your motherboard offers at least 8 lanes of PCI per GPU installed (ideally 16 lanes per GPU). If you have 16 lanes per GPU you get the full performance; having only 8 lanes per GPU gets you around 80% of the GPU performance. This influences the decision to buy one large card over multiple smaller cards.

 

On the task manager pic it appears to be using all threads loading up a good size scene with many Genesis 3 figures with my quad core CPU.  Anyone update from a fairly new quad core CPU system?

Post edited by Gator on

Comments

  • kyoto kid Posts: 41,928
    edited August 2017
    ...well that makes my system pretty irrelevant. Nehalem i7 930 2.8GHz (4 core 8 thread). 12 GB DDR3 1333 tri-channel memory (expandable to a maximum of 24 GB). 1 x 250 GB 7200 rpm boot HDD. 1 x 1 TB 7200 rpm library drive. Nvidia GTX 460 GPU with 1 GB VRAM. The best I can do on my budget (without building a totally new system) is upgrade the CPU to a 6 core 980X Extreme (or an LGA 1366 6 core Xeon) and the memory to 24 GB. Even so it would still be barely adequate based on the minimum specs for Daz 4.x. The issue with SSDs is they degrade with repeated read/write operations, which would be fairly frequent on the library drive. As for a new GPU, the minimum I'd get would be a GTX 1070, however thanks to the cryptomining rush, the prices are still too volatile.
    Post edited by kyoto kid on
  • JamesJAB Posts: 1,766

    "Get as many large GPUs as you can… 4GB of memory is a minimum, over 8GB is probably not going to be needed. Core count is critical, so if two less expensive cards give you more cores than one expensive card, go for the two cards…"

    I will need to disagree with you on this point.  Iray has a diminishing returns curve when using multiple cards.  While you would, for example, have more CUDA cores in two GTX 1070 cards than in one GTX 1080 Ti, you only get a 60 to 70% render speed boost from the second card, and to make it worse they do not combine VRAM, so you can still only render a scene that will fit on an 8GB card.
    Not only will the faster GTX 1080 Ti still render faster than the above pair, but you also have 11GB for loading the scene.

    Maybe if you have high end Pascal-based Nvidia Quadro cards, NVLink allows both cards to combine VRAM and act as one card for workloads.  Not sure if or when that feature will be implemented in Iray.
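The two-cards-vs-one-big-card argument above can be sketched as back-of-the-envelope arithmetic. The 60-70% second-card boost is the figure quoted in the post; the 1080 Ti-to-1070 speed ratio of ~1.7x is an assumed round number purely for illustration, not a benchmark:

```python
# Rough throughput sketch of the multi-card argument.
# Assumptions (not benchmarks): a second identical card adds only
# ~60-70% of one card's speed in Iray, and a GTX 1080 Ti is taken
# as ~1.7x a GTX 1070 in raw render speed.

def dual_card_speed(single_card_speed, second_card_gain=0.65):
    """Effective speed of two identical cards with diminishing returns."""
    return single_card_speed * (1 + second_card_gain)

one_1070 = 1.0    # one GTX 1070 = 1.0 units of render speed
one_1080ti = 1.7  # assumed ratio, for illustration only

two_1070s = dual_card_speed(one_1070)  # ~1.65 units
print(two_1070s < one_1080ti)          # single big card wins under these numbers

# VRAM does not pool across cards: two 8 GB cards still cap the
# scene at 8 GB, while the single 1080 Ti offers 11 GB.
usable_vram_gb = {"2x GTX 1070": 8, "1x GTX 1080 Ti": 11}
```

Under these assumed numbers the single card edges out the pair on speed and clearly wins on usable VRAM; with a stronger second-card gain the comparison could flip.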

  • "If some's good, more is better"

  • ebergerly Posts: 3,255
    edited August 2017

    In deciding whether to put your money into a powerful CPU, I'd suggest the following approach:

    1. Make a list of all the computationally-intensive apps/plugins you use on a regular basis
    2. Determine which of those make effective use of the CPU threads, and which make effective use of the GPU
    3. Invest your money based upon the answer

    For me, the results look something like this:

    • DAZ Studio:
      • Iray: GPU
      • VWD: CPU multi-threaded (future: GPU)
    • Blender:
      • Cycles Render: GPU
      • Cloth Sim: single CPU thread
    • Davinci Resolve Video Editor:
      • CPU multi-threaded

    So at the end of the day, for me at least, I only get a real benefit from my 16 CPU threads in video editing and with the VWD cloth sim, though the rumor is VWD is being re-worked to use the GPU. As far as video editors in general, it seems that while some of the big name editors say they use GPUs, in practice there's less GPU use than you might expect. 

    So do I really need my 16 threads, or do I need a Threadripper?

    I suppose if I did lots of video editing then maybe. But it seems like more and more computationally intensive software is going to the much faster GPU's. 

    And honestly I can't imagine why anyone would rely on CPU threads for Iray or other rendering. The same scene might render in a little over 4 minutes with my GTX 1070, but takes over 23 minutes using the 16 threads of my CPU. Plus it locks up your computer while all the CPU threads are pegged at 100%. Makes more sense to invest in a better GPU than a better CPU. 

    Although, on the other hand, "Threadripper" sounds awesome, and it "blows away" all the other CPUs smiley

    Post edited by ebergerly on
  • Gator Posts: 1,320
    JamesJAB said:

    "Get as many large GPUs as you can… 4GB of memory is a minimum, over 8GB is probably not going to be needed. Core count is critical, so if two less expensive cards give you more cores than one expensive card, go for the two cards…"

    I will need to disagree with you on this point.  Iray has a diminishing returns curve when using multiple cards.  While you would, for example, have more CUDA cores in two GTX 1070 cards than in one GTX 1080 Ti, you only get a 60 to 70% render speed boost from the second card, and to make it worse they do not combine VRAM, so you can still only render a scene that will fit on an 8GB card.
    Not only will the faster GTX 1080 Ti still render faster than the above pair, but you also have 11GB for loading the scene.

    Maybe if you have high end Pascal-based Nvidia Quadro cards, NVLink allows both cards to combine VRAM and act as one card for workloads.  Not sure if or when that feature will be implemented in Iray.

    Actually that is straight from DAZ at the link provided.  I thought from other posters that Iray scales very well, with gains in the ninety-some percent range.

    For gaming I recall the boost being much lower, and returns really diminish with a 3rd and 4th card which is why they say they dropped support for 3 and 4 card SLI with the 10 series.

  • Gator Posts: 1,320

    "If some's good, more is better"

    Actually, not necessarily.  Applications that don't scale well with many threads favor a CPU with high IPC and raw clock speed.  Many games fall into that category. 

    Particularly with the Threadripper 1920X and 1950X, there are two dies interconnected.  The 1900X, being released at the end of the month, also sounds interesting as it uses the same X399 board with 64 PCIe lanes but presumably with a single die.  The multiple-die chips do suffer from higher memory latency to the far RAM modules.  64 PCIe lanes is very attractive for running two GPUs at x16 (or more at x8) and still having plenty of lanes for other devices.
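The lane budget in that last sentence can be sketched as simple arithmetic. The 64-lane total is the figure from the post; the per-device lane counts are assumed typical values, and real X399 boards reserve some CPU lanes for the chipset and onboard devices:

```python
# Back-of-the-envelope PCIe lane budget for a hypothetical X399 build.
# Per-device lane counts are assumed typical values, for illustration.
total_lanes = 64

devices = {
    "GPU 1": 16,        # first render card at x16
    "GPU 2": 16,        # second render card at x16
    "M.2 NVMe SSD": 4,  # one NVMe drive at x4
}

used = sum(devices.values())
spare = total_lanes - used
print(f"{used} lanes used, {spare} to spare")
```

Even with both GPUs at full x16 and an NVMe drive, roughly half the lanes remain for extra cards or drives, which is the attraction over a mainstream chipset that forces x8/x8.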

  • Gator Posts: 1,320
    ebergerly said:

    In deciding whether to put your money into a powerful CPU, I'd suggest the following approach:

    1. Make a list of all the computationally-intensive apps/plugins you use on a regular basis
    2. Determine which of those make effective use of the CPU threads, and which make effective use of the GPU
    3. Invest your money based upon the answer

    Of course, that's why this thread exists.  wink

    Seems from Daz's recommendation I could use a few more cores (although from other threads, more than a few more may be a waste).

    Current system is an i7 6700K which ain't bad, but the Intel chipset runs x8/x8 with two vidya cards.  It supports an M.2 drive, but I can't use it with the second GPU installed.  My system could use an upgrade; the question is what exactly.

  • ebergerly Posts: 3,255

    Current system is an i7 6700K which ain't bad, but the Intel chipset runs x8/x8 with two vidya cards.  It supports an M.2 drive, but I can't use it with the second GPU installed.  My system could use an upgrade; the question is what exactly.

    Yes, but without knowing exactly what you'll use your computer for, nobody can answer your question but you. We can throw detailed specs around until we turn purple, but those are somewhat irrelevant if they don't affect YOUR particular applications. 

  • "If some's good, more is better"

    Actually, not necessarily.  Applications that don't scale well with many threads favor a CPU with high IPC and raw clock speed.  Many games fall into that category. 

    Particularly with the Threadripper 1920X and 1950X, there are two dies interconnected.  The 1900X, being released at the end of the month, also sounds interesting as it uses the same X399 board with 64 PCIe lanes but presumably with a single die.  The multiple-die chips do suffer from higher memory latency to the far RAM modules.  64 PCIe lanes is very attractive for running two GPUs at x16 (or more at x8) and still having plenty of lanes for other devices.

    Heh.. yea, true.. it was kind of a throw-away comment.

    However in my own work environment, multi-tasking is always king.  So if I have a render going, I can work at the same time in Photoshop, while running a VM or two, while watching some YT videos.  Granted you end up paying more, but time is money.. and sanity, more importantly.

  • Gator Posts: 1,320
    ebergerly said:

    Current system is an i7 6700K which ain't bad, but the Intel chipset runs x8/x8 with two vidya cards.  It supports an M.2 drive, but I can't use it with the second GPU installed.  My system could use an upgrade; the question is what exactly.

    Yes, but without knowing exactly what you'll use your computer for, nobody can answer your question but you. We can throw detailed specs around until we turn purple, but those are somewhat irrelevant if they don't affect YOUR particular applications. 

    90% of productivity time is Daz Studio - that's what I'm looking to boost. 

    I also use Photoshop quite a bit, Zbrush and small extent Blender, Office, WinRAR, and other things...  But the performance at other things isn't what's really nagging at me to upgrade.  Gaming a little too, but that's not really in the equation (whatever CPU I get will be good enough).

    1950x could tempt me to re-encode my home library with H265.  laugh

  • Gator Posts: 1,320

    "If some's good, more is better"

    Actually, not necessarily.  Applications that don't scale well with many threads favor a CPU with high IPC and raw clock speed.  Many games fall into that category. 

    Particularly with the Threadripper 1920X and 1950X, there are two dies interconnected.  The 1900X, being released at the end of the month, also sounds interesting as it uses the same X399 board with 64 PCIe lanes but presumably with a single die.  The multiple-die chips do suffer from higher memory latency to the far RAM modules.  64 PCIe lanes is very attractive for running two GPUs at x16 (or more at x8) and still having plenty of lanes for other devices.

    Heh.. yea, true.. it was kind of a throw-away comment.

    However in my own work environment, multi-tasking is always king.  So if I have a render going, I can work at the same time in Photoshop, while running a VM or two, while watching some YT videos.  Granted you end up paying more, but time is money.. and sanity, more importantly.

    Biggest dig for my system multi-tasking I guess is that I "only" have two Titans for video, and both are used for rendering.  Some apps get sluggish while the cards are at 100% rendering... notably Word, Excel, and video playback.

    I recall you have a quad Titan rig, correct?  Do you leave one unchecked for rendering dedicated to your desktop display to do other stuff?

  • ebergerly Posts: 3,255

    I dunno, seems like a no-brainer to me...

    You need either more GPU power, or you need to make your scenes more lean so they render faster, if render speed is all you're after. 

    And maybe add a throw-away GPU and leave it unchecked in Studio so that other apps can use it for graphics. 

  • I recall you have a quad Titan rig, correct?  Do you leave one unchecked for rendering dedicated to your desktop display to do other stuff?

    Yep.. though tbh it was rendering often enough that I invested in a second rig to actually work on.. between the two it's a nice fluid setup.

  • Gator Posts: 1,320
    ebergerly said:

    I dunno, seems like a no-brainer to me...

    You need either more GPU power, or you need to make your scenes more lean so they render faster, if render speed is all you're after. 

    And maybe add a throw-away GPU and leave it unchecked in Studio so that other apps can use it for graphics. 

    Rendering speed isn't that much of an issue...  once it loads.

    Here's what really hampers my workflow:

    1. Loading scenes is slow now.  It happened about the time of the 117 update.  Not saying that caused it, just that it happened around that time.  Could be from adding more Genesis 3 morphs... there was a sale and I got like 6 morph expression packs about the same time.  (Maybe I need to uninstall ones I rarely or never use).

    2. The time it takes for the scene to start to render.  Usually it's lighting: make some tweaks, re-render... WAIT.  UGH.

     

    An M.2 drive will speed everything up; I'm looking for that in any event.  My current system can't take one with the video cards installed, and removing the 2nd is out of the question.  Upgrading to an X399 board will allow me to add M.2 drives and run both Titan X cards at x16; the latter, I hope, will get the scene fed to the cards faster.  I also render with an AMD FX-8320, which is noticeably slower than the i7-6700K although the cards are nearly the same speed (Titan X Pascal vs. 1080 Ti).

  • Gator Posts: 1,320
    I recall you have a quad Titan rig, correct?  Do you leave one unchecked for rendering dedicated to your desktop display to do other stuff?

    Yep.. though tbh it was rendering often enough that I invested in a second rig to actually work on.. between the two it's a nice fluid setup.

    Hmm... I was considering a third card for running the video output, a cheaper card like a GTX 1050 or 1060.  But that would drop the rendering cards from x16 down to x8.

    I do have a dedicated rendering rig too, that works well.  smiley

  • ebergerly Posts: 3,255
    But that would drop the rendering cards from x16 down to x8.

    Are you sure that makes much of a difference? I thought I saw something that said that x16 vs x8 is almost irrelevant. Maybe I'm mis-remembering.

    Anyway, as far as load times and waiting to start the Iray render...

    I dunno how much of that is hardware-limited. I mean, there's just a ton of stuff to do to read all the data off the drive, and then a lot of CPU work to get it ready to send to the GPU and so on. 

    I suppose you could do a simple test with all the scene resources installed on an SSD versus a mechanical drive and see what the difference is. As far as a SATA SSD vs. an M.2 or whatever, maybe that doesn't make much of a practical difference. Yeah, it's faster, but does that translate to something noticeable with Studio? Maybe not. 

     

  • Gator Posts: 1,320
    ebergerly said:
    But that would drop the rendering cards from x16 down to x8.

    Are you sure that makes much of a difference? I thought I saw something that said that x16 vs x8 is almost irrelevant. Maybe I'm mis-remembering.

    Anyway, as far as load times and waiting to start the Iray render...

    I dunno how much of that is hardware-limited. I mean, there's just a ton of stuff to do to read all the data off the drive, and then a lot of CPU work to get it ready to send to the GPU and so on. 

    I suppose you could do a simple test with all the scene resources installed on an SSD versus a mechanical drive and see what the difference is. As far as a SATA SSD vs. an M.2 or whatever, maybe that doesn't make much of a practical difference. Yeah, it's faster, but does that translate to something noticeable with Studio? Maybe not. 

     

    I've read elsewhere for gaming the performance difference is small.  But according to Daz at the link I provided in the first post, dropping to x8 is 80% of the performance.  I'll gladly take a 20% gain there.

  • ebergerly Posts: 3,255
    edited August 2017

    I just did a quick test of loading a fairly complex scene, and I don't see that there's much hardware limiting going on. Looks like only one of the CPU threads was used anywhere near 100%, and there was very little disk activity on my SATA SSD. I suppose if the CPU did multithreaded scene loading then a bunch of threads would be nice, but it doesn't look like it does. And yeah, an SSD is nice, though I'm not sure the type of SSD really makes that much difference. 

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    But according to Daz at the link I provided in the first post, dropping to x8 is 80% of the performance.  I'll gladly take a 20% gain there.

    Yes, but 80% of WHAT performance? 80% of 5 seconds isn't, in practical terms, of any real benefit. If transfer to GPU VRAM is only a small amount of time, then a 20% gain might not be of much use. 

    If I was you I'd start experimenting with various scene loads and watch the different performance monitors and see exactly where your slowdown is occurring. 
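The "80% of WHAT performance" point is essentially Amdahl's law: a speedup in one step only helps in proportion to that step's share of the total time. A minimal sketch, where the load-time breakdown is invented purely for illustration and not measured:

```python
# Amdahl-style sketch: speeding up one step helps only in proportion
# to that step's share of total time. The breakdown below is made up
# for illustration, not measured from Studio.

def total_time_after_speedup(step_times, step, speedup):
    """Total pipeline time if one named step gets `speedup`x faster."""
    return sum(t / speedup if name == step else t
               for name, t in step_times.items())

load_seconds = {
    "read assets from disk": 60,
    "CPU-side scene prep": 150,
    "PCIe transfer to VRAM": 10,
}

before = sum(load_seconds.values())  # 220 s total
after = total_time_after_speedup(load_seconds, "PCIe transfer to VRAM", 1.25)
# A 25% faster bus turns the 10 s transfer into 8 s: 220 s -> 218 s overall.
```

If the transfer step is only a small slice of the load, doubling its bandwidth barely moves the total, which is why measuring where the time actually goes comes first.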

  • Gator Posts: 1,320
    edited August 2017
    ebergerly said:

    I just did a quick test of loading a fairly complex scene, and I don't see that there's much hardware limiting going on. Looks like only one of the CPU threads was used anywhere near 100%, and there was very little disk activity on my SATA SSD. I suppose if the CPU did multithreaded scene loading then a bunch of threads would be nice, but it doesn't look like it does. And yeah, an SSD is nice, though I'm not sure the type of SSD really makes that much difference. 

    I posted screenshot before of CPU use during a scene load... It's not 100%, but from my two rendering systems it does appear that CPU speed does impact loading times.  There's a big difference between the two on large scenes.  Like 2 minutes or so vs. 5.  Also a minute or two faster when it starts to draw the render.

    It does appear to be using all 4 cores.

     

    Snip Load 05 Few Seconds After Completion.JPG
    943 x 759 - 108K
    Post edited by Gator on
  • Gator Posts: 1,320
    ebergerly said:
    But according to Daz at the link I provided in the first post, dropping to x8 is 80% of the performance.  I'll gladly take a 20% gain there.

    Yes, but 80% of WHAT performance? 80% of 5 seconds isn't, in practical terms, of any real benefit. If transfer to GPU VRAM is only a small amount of time, then a 20% gain might not be of much use. 

    If I was you I'd start experimenting with various scene loads and watch the different performance monitors and see exactly where your slowdown is occurring. 

    If it were 5 seconds this thread wouldn't exist.  wink

    The time to start rendering greatly depends on scene complexity.  Sure, a simple scene will go in only a few seconds but I hardly do that.  Large scenes with a number of Genesis 3 figures usually takes anywhere from a few to 5 minutes.

    Sometimes my scenes are at the edge of my VRAM, so I can't leave a window open to speed that up.

  • ebergerly Posts: 3,255

    It does appear to be using all 4 cores.

     

    Yes, but as with my test it's not using more than one core at anything over, say, 30%. Which implies that there is no hardware limit. If it's not using your 4 cores at anything near 100%, then why would adding more cores speed things up? There's a lot of headroom still available with your cores that isn't being used. It all depends upon how Studio is implementing the code to load stuff, and whether it can assign parts of the process to even more cores.  

  • Gator Posts: 1,320
    ebergerly said:

    It does appear to be using all 4 cores.

     

    Yes, but as with my test it's not using more than one core at anything over, say, 30%. Which implies that there is no hardware limit. If it's not using your 4 cores at anything near 100%, then why would adding more cores speed things up? There's a lot of headroom still available with your cores that isn't being used. It all depends upon how Studio is implementing the code to load stuff, and whether it can assign parts of the process to even more cores.  

    That's what you'd think, but as I posted before I have two rigs.  Both have the OS, Daz, and the library installed on SSD drives.  Both video cards have similar performance and are running at x8 PCI lanes.  Both DDR3 RAM.  The i7 6700 is significantly faster than the FX-8320.

    http://cpu.userbenchmark.com/Compare/Intel-Core-i7-6700K-vs-AMD-FX-8320/3502vs1983

  • ebergerly Posts: 3,255
    edited August 2017
    If it were 5 seconds this thread wouldn't exist.  wink

     

    I think you may be missing the point. Loading a scene into the GPU has many steps, and each step depends on different hardware as well as how the software/drivers/etc are implemented. We can't assume that by adding more hardware at any step would necessarily improve performance significantly. The reason is because it depends on how the code is written, as well as whether you can practically break the steps down into sub-steps that you can hand off to other threads.

    If you get a 20% improvement in one step of the process it doesn't mean the overall loading will be significantly faster. It depends. That's why I suggest you do some simple tests.  

    Post edited by ebergerly on
  • Gator Posts: 1,320
    edited August 2017
    ebergerly said:
    If it were 5 seconds this thread wouldn't exist.  wink

     

    I think you may be missing the point. Loading a scene into the GPU has many steps, and each step depends on different hardware as well as how the software/drivers/etc are implemented. We can't assume that by adding more hardware at any step would necessarily improve performance significantly. The reason is because it depends on how the code is written, as well as whether you can practically break the steps down into sub-steps that you can hand off to other threads.

    If you get a 20% improvement in one step of the process it doesn't mean the overall loading will be significantly faster. It depends. That's why I suggest you do some simple tests.  

    There is ALWAYS a bottleneck somewhere. 

    Hence this thread... I'm looking to see if anyone has a Threadripper and running Studio with it.

    We do what we can control.  For example, if I can't re-write the application (I can't), then I throw more hardware at it to get performance to where I like it to be, or the best I can get it with what I'm willing to spend.

     

    ETA: You seem to have missed my post above.  With most other factors the same or very similar, there's a substantial performance difference between the 2 different CPUs.

    Post edited by Gator on
  • ebergerly Posts: 3,255

        

    ETA: You seem to have missed my post above.  With most other factors the same or very similar, there's a substantial performance difference between the 2 different CPUs.

    Are you thinking it's a clock frequency thing? Looks like the two CPUs you referred to are 3.5GHz and 4.0GHz. But that's only a 12% difference. 

    I'd be interested to see if you come up with something to show how to improve the load times. I just tried an old Stonemason city scene with a few G3's and it took like 2 minutes to load, and maybe another minute to get the iray view. Would be nice to speed that up.

  • Gator Posts: 1,320
    ebergerly said:

        

    ETA: You seem to have missed my post above.  With most other factors the same or very similar, there's a substantial performance difference between the 2 different CPUs.

    Are you thinking it's a clock frequency thing? Looks like the two CPUs you referred to are 3.5GHz and 4.0GHz. But that's only a 12% difference. 

    I'd be interested to see if you come up with something to show how to improve the load times. I just tried an old Stonemason city scene with a few G3's and it took like 2 minutes to load, and maybe another minute to get the iray view. Would be nice to speed that up.

    I've timed them before but lost the data some time ago.

    Typically the Intel processors have higher IPC (instructions per cycle) as well as higher clock speeds.  So it's probably a combination of both.

    AMD has caught up a bit on IPC with the Ryzen line, I wish I could remember where I saw a chart comparing IPC on them and some Intel processors but I can't find it.  frown

  • nicstt Posts: 11,715

    I've started putting it together, well getting the components; case arrives tomorrow

     

  • Gator Posts: 1,320
    nicstt said:

    I've started putting it together, well getting the components; case arrives tomorrow

     

    Oooh nice.  What case did you order?  I'm looking at building one now.  Water cooling it?

  • FSMCDesigns Posts: 12,845

    LOL, too funny how the DAZ forums have become like the gaming forums with the inclusion of Iray. laugh
