Considering Building A 3D Workstation (Need Advice)

SubGeniusZero Posts: 61

Hey guys. I'm considering building a 3D workstation, and will be using DAZ Studio as part of my 3D production pipeline (and using Iray and Octane as my primary rendering engines). I wanted to know what you guys think of this configuration:

 

AMD Threadripper 2950X Processor (16 cores, 32 threads)

X399 MSI MEG Creation Motherboard

NVIDIA Titan RTX 24GB Graphics Card

Corsair Vengeance 64GB (4x16GB) DDR4-3200 memory

Samsung 970 EVO 1TB - NVMe PCIe M.2 2280 SSD

Seagate IronWolf NAS 7200RPM Internal SATA Hard Drive 10TB 6Gb/s

1200 Watt Thermaltake PSU

Thermaltake Floe CPU Cooler (3 fans liquid cooler/radiator design)

Thermaltake View 37 RGB Computer Case

Windows 10 Pro English USB Flash Drive 32/64 bit

 

Any thoughts on whether this will be a suitable build? I'm particularly curious about the inclusion of the Titan RTX. I know from reading other threads that the current iteration of Iray may or may not support the Tensor cores and ray-tracing cores on the Titan; it's fine by me if it just uses the CUDA cores. There are over 4,600 of them on the Titan RTX, after all. And if I render with the CPU + GPU, I should get a performance boost, right? Am I also right in thinking that the 2950X will give me lower memory latency than the 2990WX? I know the WX has 32 cores, and 32 cores beat 16, but I'm concerned about latency when reading from main memory, the number of PCIe lanes, and of course the price difference - and whether I'd see much of a performance gain at all, given that the WX has a much lower base clock than the X. Anyhow, just wanted everyone's thoughts on this.

Post edited by SubGeniusZero on

Comments

  • Do you need that powerful a CPU if your main computation applications (Iray and Octane) will use the GPU?

  • You aren't using anything like all the PCIe lanes a Threadripper CPU provides. Unless you have another application that makes use of all that CPU, you can do better with a Ryzen 7 2700. It will have a higher base clock and you won't need the AIO to cool it.

  • SubGeniusZero Posts: 61
    edited January 2019

    Well, I know Iray can use both the GPU and CPU to render; that's why I was going with the 2950X as the CPU. As far as Octane goes, well yeah, that's purely GPU-bound. But I was hoping to harness all those CPU cores for combining the GPU and CPU on render jobs. Bear in mind I also have a GTX 1080 Ti Founders Edition which I will be sticking in this thing, too, and that will take up some of the lanes.

    Post edited by SubGeniusZero on
  • If you check the benchmark test, even a CPU as fast as the Threadripper doesn't make a great deal of difference to render time compared to a good GPU - and with 24GB of RAM on the card you probably won't often drop back to CPU.

  • Hmm, a very good point. Still, it wouldn't hurt to have a multicore CPU in the box for when I need to render with something other than a ray-trace renderer. I like to have a backup plan, is all - if one piece of software breaks, I like to have another I can work with. (I was thinking of going with Lightwave 3D as my main 3D package instead of Maya, because even though Maya is the be-all-end-all 800-pound gorilla of 3D apps, Lightwave is FAR more affordable.)

  • Well then the rig looks fine.

    I should point out that the claims about the 2990WX and memory latency are mostly bogus. The 2990WX underperforming on benchmarks compared to lower-thread-count parts has been shown to be a Windows issue - it doesn't appear on Linux. It won't go away until Microsoft patches Windows 10, though.

  • Wow, thanks for letting me know that; I had no idea. I should've known - Microsoft causing problems, as always. None of this would be an issue for me in the first damn place if Apple would support Nvidia cards on macOS, or ship Nvidia cards in their Macs. If they did, I would happily invest in a macOS solution instead. I adore macOS and loathe Windows, to be honest.

  • nicstt Posts: 11,715
    edited January 2019

    I use a Threadripper; when a scene drops to CPU, if I can't be bothered spending time optimising it so it does fit, the CPU does the render in a reasonable amount of time.

    A 980 Ti is about three times quicker than the 1950X I have.

    I plan on upgrading when the next Threadripper appears. I do, however, have a use for the cores elsewhere.

    Post edited by nicstt on
  • That's excellent to hear. Yeah, I'm thinking about maybe upping the configuration to two Titan RTX cards linked with NVLink, since the reviews of NVLink I've read suggest there's resource sharing going on between the cards, and that may keep me from ever having to drop to CPU in the first place.

  • That's excellent to hear. Yeah, I'm thinking about maybe upping the configuration to two Titan RTX cards linked with NVLink, since the reviews of NVLink I've read suggest there's resource sharing going on between the cards, and that may keep me from ever having to drop to CPU in the first place.

    It's my understanding that resource sharing has to be enabled by the software in question, and Daz and the Iray render engine don't do it. With Nvidia being the developer of Iray, and not wanting to hurt Quadro sales, I wouldn't count on them ever enabling resource sharing on RTX cards in Iray.

  • Well, if I have two Titan RTX cards and they don't share their resources, will having the two cards actually speed up rendering times? I figured more CUDA cores = shorter render times; or is that not a reliable formula?

  • RayDAnt Posts: 1,159
    edited January 2019
    I have a Titan RTX (afaik I'm the only DS user so far with one.) I'm busy working at the moment, but later today I'll go through this thread and give you what relevant feedback I can.
    Post edited by RayDAnt on
  • Well, if I have two Titan RTX cards and they don't share their resources, will having the two cards actually speed up rendering times? I figured more CUDA cores = shorter render times; or is that not a reliable formula?

    They'll both render the scene, and that will speed up the render. What they won't do now, and likely will never do in DS, is pool VRAM.

  • So really, I'd maybe be better off finding a used price on a GV100 card, then, wouldn't I? Or would the GV100 be overkill? Or would two Titan RTXs render faster than one GV100 with 32GB of VRAM?

  • RayDAnt Posts: 1,159
    edited January 2019

    Any thoughts on whether this will be a suitable build?

    Absolutely.

    I'm particularly curious about the inclusion of the Titan RTX. I know from reading other threads that the current iteration of Iray may or may not support the Tensor cores and ray-tracing cores on the Titan; it's fine by me if it just uses the CUDA cores. There are over 4,600 of them on the Titan RTX, after all.

    As of right now, no Turing-architecture GPUs (i.e. the 20XX-series cards and the Titan RTX) are officially supported by any release version of Daz Studio. However, the current publicly available beta (Daz Studio 4.11 beta) does support Nvidia's top-of-the-line, research-oriented Volta-architecture GPUs. And since Turing is technically a real-time-rendering-optimized revision of Volta (rather than an incremental update to Pascal, the architecture of Nvidia's previous-generation 10XX-series gaming GPUs), this means you can use, e.g., a Titan RTX in DS 4.11 beta for rendering right now - no problem.

    However, performance is severely limited from what it should be, because the only parts of Turing currently being utilized by DS 4.11 beta are those directly inherited from Volta - i.e. CUDA cores, Tensor cores (for applicable workloads like denoising), and NVLink-based memory pooling between multiple GPUs (I can't personally vouch for that last one since I don't have two Turing cards to test with, but according to the Iray build notes it's been a supported feature for over a year). That unfortunately leaves the RT cores - the single most useful Turing feature from a DS perspective - unavailable for the time being, since they are a design original to Turing.

    But as you rightly point out, not only is the sheer quantity of CUDA cores in the Titan RTX enough to put it on the performance map, Turing-era CUDA cores are around 1.5x more efficient at traditional compute workloads than previous generations - hence the sometimes huge compute performance uplifts being seen on 20XX-series cards. Fwiw, here's a link to my (CUDA-core-only) Titan RTX benchmarks.

     

    And if I render with the CPU + GPU, I should get a performance boost, right?

    Technically yes, but when you factor in things like CPU pricing and power usage you're almost (if not outright) better off sticking to GPU-only rendering.

    Consider this: I currently have my Titan RTX paired with a 6-core/12-thread i7-8700K (@4.7GHz on all cores). The 8700K by itself gives me a CPU Mark score of 17,728 and a render time of around 24 minutes using Sickleyield's benchmarking scene in DS 4.11 beta. According to PassMark, a stock Threadripper 2950X gets a CPU Mark score of 25,691 - making it approximately 45% faster than my 8700K. Which, assuming render time scales with CPU Mark, would let it do the same benchmark in roughly 16.5 minutes.

    Meanwhile my Titan RTX does the same benchmark in 1 minute 5 seconds. That's about 22x faster than my 8700K, or about 15x faster than a hypothetical 2950X. And consider that the 2950X is an $800, 180W-max-TDP part, whereas a Titan RTX is a $2,500, 280W-max-TDP part. Meaning that adding a 2950X to a Titan RTX for rendering gets you roughly a 7% performance increase in exchange for a 30% increase in price and a 64% increase in overall power consumption. For those sorts of price/power/performance disparities you'd be far better off going with a cheaper CPU and an extra Titan RTX (or three).
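    The "roughly 7%" figure above can be sanity-checked with a quick back-of-the-envelope script. The benchmark times and CPU Mark scores are the ones quoted in this thread; treating render speeds as additive rates when devices work together is an assumption, not a guarantee of how Iray schedules work:

    ```python
    # Sanity check of the CPU-vs-GPU rendering numbers above.
    # Assumption: when devices render together, their speeds (1/time) add.

    def combined_render_time(times_s):
        """Render time when all devices work on the same scene in parallel."""
        return 1.0 / sum(1.0 / t for t in times_s)

    i7_8700k = 24 * 60                       # ~24 min benchmark time (from the post)
    tr_2950x = i7_8700k * (17728 / 25691)    # hypothetical 2950X, scaled by CPU Mark
    titan_rtx = 65.0                         # 1 min 5 s (from the post)

    together = combined_render_time([titan_rtx, tr_2950x])
    gain = (titan_rtx / together - 1) * 100
    print(f"GPU+CPU: {together:.0f} s vs GPU-only: {titan_rtx:.0f} s "
          f"({gain:.1f}% faster)")
    ```

    The CPU's rate is tiny next to the GPU's, so adding it barely moves the combined time - which is the whole point of the price/power comparison above.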

     

     

    Post edited by RayDAnt on
  • RayDAnt Posts: 1,159

     

    It's my understanding that resource sharing has to be enabled by the software in question, and Daz and the Iray render engine don't do it. With Nvidia being the developer of Iray, and not wanting to hurt Quadro sales, I wouldn't count on them ever enabling resource sharing on RTX cards in Iray.

    Straight from the Daz Studio Pro BETA update thread

    Iray 2017.1 beta, build 296300.616

    Added and Changed Features

    • Iray Photoreal
      • Texture sharing on NVLINK capable systems: A new render context option "iray_nvlink_peer_group_size" has been added. Enabled CUDA devices are divided into peer groups of the specified size. The group size needs to be a factor of the total number of enabled CUDA devices. Textures are subsequently shared between CUDA devices in a peer group and each texture is only uploaded to one of the devices in the peer group.

     

    It may still need to be implemented as some sort of toggle in the DS user interface, but the Iray core functionality has apparently been there since last July. There just weren't any consumer-friendly cards in existence at the time that would've made fully implementing it worth the effort.
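    The constraint in that changelog entry - "the group size needs to be a factor of the total number of enabled CUDA devices" - is easy to illustrate. This helper is purely hypothetical, not part of any Daz or Iray API:

    ```python
    def valid_peer_group_sizes(num_devices: int) -> list[int]:
        """Peer group sizes the Iray note would accept: divisors of the device count."""
        return [g for g in range(1, num_devices + 1) if num_devices % g == 0]

    # Two NVLinked Titan RTXs: a single group of 2 shares textures across both.
    print(valid_peer_group_sizes(2))   # [1, 2]
    print(valid_peer_group_sizes(4))   # [1, 2, 4]
    ```

    With a group size of 1, each card keeps its own copy of every texture (the current default behavior); with a group size of 2, each texture is uploaded to only one card in the pair.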

     
  • That's great news, RayDAnt! Thank you! I do have a use for the 16-core 2950X outside of rendering, however, so I will probably stick with that as my processor . . . unless maybe I could get by with the 1950X instead. Hmmm. Will have to think about that. AMD's X399 chipset makes that a possibility, since I could always upgrade later - all the way to the Threadripper 2990WX if I wanted - with that chipset and that motherboard (or a competitor to it). (That's what makes building your own system so attractive in the first place. :-) ) But yeah, maybe stick the extra money into another Titan RTX and chain them together with NVLink. That would be the smart thing. Do you think I could honestly even think about doing - gasp - Iray-rendered short animated films with a setup like that? Or would render times be prohibitive for producing a reasonable number of frames per day at, say, 4K resolution in 10-bit color?

  • 3141592654 Posts: 975
    edited January 2019

    I have an Nvidia Quadro M6000, which has 24GB of VRAM and 3,072 CUDA cores, on an MSI X299 board with a 14-core Intel i9. It's not an exact match to yours, but when I add the CPU to the GPU, my Iray rendering is about 30% faster.

    Also, pay strong attention to cooling. I have a water-cooled CPU, and on long sustained renders it will still get hot. It works fine on animations, since there's a short break between each frame, but on a single long render (maybe 20 minutes or more) I have to pause the render and let it rest a moment.

    EDIT: By the way, yes, Iray animations are possible with this setup. It would still mean hitting render before going to bed and then stopping/checking in the morning ... but it works.

    Post edited by 3141592654 on
  • RayDAnt Posts: 1,159
    edited January 2019

    Do you think I could honestly even think about doing - gasp - Iray-rendered short animated films with a setup like that? Or would render times be prohibitive for producing a reasonable number of frames per day at, say, 4K resolution in 10-bit color?

    The other day I decided (for shits and giggles) to attempt an 8K render of a complex outdoor environment (including water and a fully attired G8 figure) with realistic daytime lighting, and it took about 50 minutes. So right now that would probably be a stretch. However, once RT cores get fully implemented in the DS/Iray pipeline (which imo is only a matter of time) I wouldn't be surprised to see a 5x performance increase on all RTX cards in raytracing-heavy tasks.
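    On the frames-per-day question, the budget arithmetic is simple to sketch. The 10-minutes-per-4K-frame figure below is purely an assumed example for illustration, not a measured number from this thread:

    ```python
    def frames_per_day(seconds_per_frame: float) -> int:
        """How many frames fit in 24 hours of nonstop rendering."""
        return int(24 * 3600 // seconds_per_frame)

    one_card = frames_per_day(10 * 60)   # assume 10 min per 4K frame on one card
    two_cards = frames_per_day(5 * 60)   # two cards splitting frames halves it
    print(one_card, two_cards)           # 144 288
    ```

    At 24 fps, 144 frames is only about six seconds of footage per day, which is why the pending RT-core speedup matters so much for animation work.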

    Post edited by RayDAnt on
  • I have an Nvidia Quadro M6000, which has 24GB of VRAM and 3,072 CUDA cores, on an MSI X299 board with a 14-core Intel i9. It's not an exact match to yours, but when I add the CPU to the GPU, my Iray rendering is about 30% faster.

    Also, pay strong attention to cooling. I have a water-cooled CPU, and on long sustained renders it will still get hot. It works fine on animations, since there's a short break between each frame, but on a single long render (maybe 20 minutes or more) I have to pause the render and let it rest a moment.

    EDIT: By the way, yes, Iray animations are possible with this setup. It would still mean hitting render before going to bed and then stopping/checking in the morning ... but it works.

    Is that an AIO or a custom loop? If it's an AIO, what size rad do you have? I'm surprised the system even gets to steady state in 20 minutes.

  • Is that an AIO or a custom loop? If it's an AIO, what size rad do you have? I'm surprised the system even gets to steady state in 20 minutes.

    It is a custom-built desktop I made specifically for my Daz rendering. Five fans, one of them on the water cooler.

  • kenshaw011267 Posts: 3,805
    edited January 2019

    OK, you don't know the terminology. 

    Only one fan on the radiator makes it almost certainly a 120mm radiator. You bought the whole thing as a package and installed it, correct? The problem is that that's likely too little cooling for that CPU. Did you compare the rated cooling capacity of the cooler to the heat output of your CPU before you bought it? Those i9s put out a ton of heat, and I wouldn't cool them with less than a very beefy tower air cooler or a 240mm AIO - likely double the cooling capacity of what you have.

    Post edited by kenshaw011267 on
  • You bought the whole thing as a package and installed it, correct?

    Yes, the fan and radiator were prepackaged. Thanks for the tip ... if I have the money to get more RAM, I will consider getting a stronger cooler. It has not been a big issue as I set this up to do animation and that little break between each frame render allows the CPU to cool down quite a bit.

  • You bought the whole thing as a package and installed it, correct?

    Yes, the fan and radiator were prepackaged. Thanks for the tip ... if I have the money to get more RAM, I will consider getting a stronger cooler. It has not been a big issue as I set this up to do animation and that little break between each frame render allows the CPU to cool down quite a bit.

    Great. If you do, try to find basically the same one you have now, but with a 240mm radiator instead of a 120mm. It will have two fans and be basically twice the size; just make sure your case has the space to fit it. Pretty much every AIO comes in both sizes, and most also come in a 360mm, three-fan size as well. The three-fan size is usually really hard to fit, especially as a retrofit.

  • Hi Guys!

    I've been reading so many posts online about what's best for rendering with Iray, and I'm thoroughly confused about what to get.

    I would like a fast rendering workstation for working in Iray. What are your recommendations? I need to buy a new PC.

    thanks
