Has anyone here bought a Threadripper rig and used it for Studio yet?


Comments

  • Finally getting started; my living room looks like a computer shop.

    Here's the chassis.

    Thermaltake X9s? I have one of those on my wish list :D

    Yep, and I love it; all kinds of options.

  • nicstt Posts: 11,715

    I'm in the process of installing everything; just finished building it, but I'm low on time atm.

    Just built it to make sure all the parts worked.

    Considering the time I have, it's going to take me a couple of days to build a disk image.

  • nicstt said:

    I'm in the process of installing everything; just finished building it, but I'm low on time atm.

    Just built it to make sure all the parts worked.

    Considering the time I have, it's going to take me a couple of days to build a disk image.

    Looking forward to hearing how it does.

    Still building myself; made a couple of changes which required ordering another couple of items, but it shouldn't be long, I hope.

  • kyoto kid Posts: 41,925
    DustRider said:
    kyoto kid said:
    DustRider said:

    So do you guys really get that much of a performance boost to justify overclocking and the expense of water cooling? I mean, if you're going from say 3.5 GHz to 4.0 GHz by overclocking, does that really translate to a significant performance difference in what you're doing? I can't imagine it matters much with Studio, especially for rendering. 

    Or are you like me where you just like playing with cool new technology? :)

    I have a budget rig with an AMD FX 8350 Black in a mATX case that I water cooled, and I love it!! I've literally had this machine rendering for weeks without a break, and CPU temps never topped 48°C (the most so far was 5 weeks, the CPU running at 90-100% the whole time). The plus for me is how quiet the system is compared to any air-cooled system I've ever run, and that the temps stay so low under continuous heavy load. Also, the water cooler was only slightly more than a really good air cooler, but the water cooler removes almost all of the heat from the small case, so the GPU can stay much cooler as well, where air cooling would just add heat to the case.

    ...I'm running 5 fans on my Antec P-193 (including the big one on the left side panel) plus one on the CPU cooler and they are pretty much inaudible.

    Liquid cooling just seems too much to bother with, as I don't overclock anything since I am not into games.  During Iray rendering my CPU tops out at between 65 and 72°C.

     

    Your Antec P-193 is much larger than the micro ATX case I have the FX 8350 in. In fact, many of the large (i.e. slow/quiet-fan) aftermarket CPU air coolers won't even fit in my case, and the fan/cooler it came with simply would not work for me (I was afraid it was going to shake the motherboard apart, the vibration/noise was so bad). Also, comparing your system to this one is a bit of an apples-and-oranges thing for cooling. I have a much smaller case (max of 3 fans, of which I have one, plus the radiator/fan installed, and a small internal fan to cool the MB), a CPU with 8 cores vs your 4 cores, with all 8 cores running at 4.1 GHz. Plus it has 32 GB of RAM and a full-length GTX 960, all crammed into the small case. So I would guess it's generating a lot more heat under full load than your system is. It's not uncommon for the ambient temp where it sits to go over 90°F in the afternoon - yet the CPU stays at the 48°C mark; on very hot days it has gone to 52°C for an hour or so - but that's still an amazing temp for a CPU that is known for generating a lot of heat. IIRC you aren't able to use yours on hot days due to CPU heat??

    Anyhoo, my post wasn't intended to compare systems; it was to show that sometimes, like in my case, water cooling is actually beneficial, not just something "cool" (pun intended). Some other factors in choosing the Corsair H-75 for cooling were my wife's desire to have a quiet system (fan sound bothers her), the intended original use of the computer (extended hours/days under full load, including CPU and GPU), and the poor airflow of the case (the original cooler kept the CPU at around 80°C, but also increased the GPU temps). After about 10-12 hours of research, I decided to go with the water-cooled system. I wasn't a fan of water cooling either, but it seemed to be my best option. Now I'm a big fan of it. Pre-built liquid cooling systems are very simple to install, about the same as a new CPU cooler and case fan. If I ever get another desktop, it will probably have water/liquid cooling.

    ...for smaller cases, yeah, I can see that.  That is what, years ago, kept me away from off-the-shelf prebuilds, as they usually were in the smallest case (with only one rear fan) that could be found and had barely adequate PSUs to support what was installed inside.

    I'll stick to large air-cooled cases myself.  The new generation 10xx series GPUs consume less power than older generation cards (meaning less heat and less need for liquid cooling unless you intend to overclock them), as do Xeon CPUs.

    Wish I could get another P-193 for the new build, but that model has been discontinued in favour of the side-panel-windows-and-flashy-lights trend.  May have found a worthy case though, which has provisions for up to 11 fans (that are surprisingly whisper quiet) and, like the P-193, has a couple on the left side panel by the GPU(s) and CPU cooler, and is very spacious inside.

  • nicstt Posts: 11,715

    Yeah, the PSU I have is way too loud; going to return it and get a silent one (it only turns its fan on when needed).

  • kyoto kid Posts: 41,925
    ebergerly said:

    ...but it's about twice the card - nearly twice as many CUDA cores (most important for Iray performance) and memory throughput.  If you were just figuring that price on core performance it's a good deal, but as an added bonus you get 3 GB more of VRAM which is a huge deal since that fancy video card means nothing in Iray if you don't have enough VRAM and drop to CPU rendering.

     

    My only point is that it's easy to get dazzled by specs, but if those specs don't make a significant practical difference in the work YOU are doing, then all of those specs are irrelevant. Twice as many CUDA cores sounds awesome, but what actually matters is that in practice it translates to about a 33% improvement in Iray render speeds. Now if you have other software that takes advantage of all those cool specs, then fine. I'm only suggesting you first think about YOUR needs and YOUR software and decide what's best. 

    Regarding VRAM in the 1080 Ti... yeah, it's like 11 GB, and W10 takes a portion of that. However, I just tried loading the largest scene I have, which is a big Stonemason city scene with tons of textures and 2 G3 characters. And I added an additional 2 G3s for good measure.

    When I start Studio my GTX 1070 RAM usage is 400 MB (out of a total of 8 GB). When I load the monster scene and turn on the Iray viewport, the GPU memory used goes up to 6 GB. And when I do a render, it runs out of VRAM and the whole thing dumps to the CPU, and my 16 threads do the rendering, not the GPU.

    But if I remove a couple of the characters, the VRAM usage drops to 4.5 GB, and it renders fine with the GPU only. 

    Now, it took me some work to build a scene with enough stuff to make it run out of memory, and in general I don't get near that. So yeah, bigger can be better, but only if you really NEED it. 

    The point being, think before you decide. 

    ...yeah, I need all I can get, as many of my scenes can top out at around 7 GB.  Also staying with W7, using the basic desktop and no "gadgets", which helps conserve VRAM for rendering as well.  For myself, SOTA in this case comes with too big a price (and I'm not just talking in zlotys either).
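
    For anyone who wants to see where they stand before hitting Render, here's a minimal sketch of the idea, assuming the NVML Python bindings are installed (pip install nvidia-ml-py) and an NVIDIA driver is present. It just reports free VRAM per GPU and flags a rough scene estimate that won't fit, which is when Iray falls back to the CPU as described above; it's an illustration, not anything DAZ Studio does for you:

    ```python
    # Hypothetical helper (not part of DAZ Studio): check free VRAM per GPU so you
    # can guess whether a scene will fit or Iray will drop back to CPU rendering.
    # Assumes the NVML Python bindings are installed:  pip install nvidia-ml-py
    import pynvml

    def report_vram(estimated_scene_gb):
        """Print free/total VRAM for each GPU and warn if the estimate won't fit."""
        pynvml.nvmlInit()
        try:
            for i in range(pynvml.nvmlDeviceGetCount()):
                handle = pynvml.nvmlDeviceGetHandleByIndex(i)
                name = pynvml.nvmlDeviceGetName(handle)
                if isinstance(name, bytes):          # older bindings return bytes
                    name = name.decode()
                mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
                free_gb = mem.free / 1024**3
                total_gb = mem.total / 1024**3
                print(f"GPU {i} ({name}): {free_gb:.1f} GB free of {total_gb:.1f} GB")
                if estimated_scene_gb > free_gb:
                    print("  -> estimate exceeds free VRAM; expect a CPU fallback")
        finally:
            pynvml.nvmlShutdown()

    # e.g. the ~6 GB city scene discussed above
    report_vram(estimated_scene_gb=6.0)
    ```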

  • kyoto kid Posts: 41,925
    JamesJAB said:
    ebergerly said:

    Well, the fact that I'm showing only 6 GB of my 8 GB being used prior to doing a formal Iray render, and that as soon as I hit Render it dumps to CPU, makes me think there's a couple GB being taken by something, or else it should render fine. 

    But why that's not an issue with Iray doing the viewport render seems strange. 

    This is really weird, and I'm guessing there aren't a lot of people out there who really understand what's going on, including me. 

    When Iray does its preview window, it's not rendering at full quality.  Also, if you are rendering an image that's larger than your preview window, that will cause it to need more VRAM to render.
    Try switching back to the texture-shaded preview window before hitting render and see if that makes a difference.  Also, keeping multiple old render windows open will eat into your available VRAM, because each one is kept ready to continue its render.

    ...the downside of Texture Shaded view mode is that if you are using an HDRI you don't see the backdrop. 

    This is why I tend not to use HDRIs as I have to do a bunch of test renders to get the positioning where I want it.  Iray View mode on my system will cause Daz to crash to the Desktop as I only have 10.7 GB of available system memory and a 1 GB GPU card.
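
    As a very rough illustration of the point above that render size (and extra open render windows) eats VRAM, here's a back-of-the-envelope sketch of output-buffer memory by resolution. The buffer count and layout are assumptions made for illustration; Iray's actual allocation is more involved:

    ```python
    # Very rough, hypothetical estimate of output-buffer VRAM by render resolution.
    # Assumes RGBA float buffers and a working + accumulation copy; Iray's real
    # allocation differs, so treat these as order-of-magnitude numbers only.
    def framebuffer_mb(width, height, channels=4, bytes_per_channel=4, buffers=2):
        return width * height * channels * bytes_per_channel * buffers / 1024**2

    for w, h in [(1280, 720), (1920, 1080), (3840, 2160)]:
        print(f"{w} x {h}: ~{framebuffer_mb(w, h):.0f} MB in output buffers")
    # Every old render window left open keeps its own buffers alive on top of this.
    ```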

  • Gator Posts: 1,320
    Argh, really want to order, but so far having no luck finding a 64 GB 3200 MHz DDR4 RAM kit.
  • JamesJAB Posts: 1,766
    Argh, really want to order, but so far having no luck finding a 64 GB 3200 MHz DDR4 RAM kit.

    Then just get two 32 GB kits with the size you want for each individual stick.
  • BTW, someone mentioned earlier that RAM speed probably doesn't matter a whole lot. That couldn't be further from the truth with Ryzen (and thus Threadripper and Epyc). The interconnect speed of the CCXs and dies is directly tied to RAM speed. So the faster your RAM, the faster your entire system will be.
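
    To put rough numbers on that, here's a back-of-the-envelope sketch. It assumes the common 1:1 mode where the Infinity Fabric clock runs at half the DDR4 transfer rate; exact behaviour varies by board and BIOS, and these are nominal figures, not measured performance:

    ```python
    # Illustration only: on Ryzen/Threadripper the Infinity Fabric clock typically
    # runs at half the DDR4 transfer rate (the common 1:1 memory:fabric mode), so
    # faster RAM also speeds up the die-to-die/CCX interconnect.  Nominal figures,
    # not measured performance.
    ddr4_speeds = [2133, 2666, 2933, 3200]  # MT/s

    for mts in ddr4_speeds:
        fclk_mhz = mts / 2
        gain = (mts / ddr4_speeds[0] - 1) * 100
        print(f"DDR4-{mts}: ~{fclk_mhz:.0f} MHz fabric clock "
              f"({gain:+.0f}% vs DDR4-{ddr4_speeds[0]})")
    ```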

     

  • kyoto kid Posts: 41,925

    ...not worth the trouble of having to deal with all the rubbish of W10 though (and it's not just the VRAM issue either).

    If Daz supported one of the major Linux versions, it would be a different story as all three CPUs also support Linux (particularly Epyc).  Would be crazy fun rendering in Carrara with 64 CPU threads and 128 GB of fast 8 channel DDR4.

  • ebergerly Posts: 3,255

    BTW, someone mentioned earlier that RAM speed probably doesn't matter a whole lot. That couldn't be further from the truth with Ryzen (and thus Threadripper and Epyc). The interconnect speed of the CCXs and dies is directly tied to RAM speed. So the faster your RAM, the faster your entire system will be.

     

    If you can't back up all the tech specs and marketing hype with actual real-world performance data in apps that people actually use, then claims that a technology is faster or better or awesome or epic are pretty much useless. Saving 1 second in a 5-minute render is "faster", but does anyone really care?
  • jestmart Posts: 4,449

    It has been explained numerous times that there are a lot of pre-render calculations that take anywhere from 30 to 90 seconds every time a render begins, making renders in the 2-to-3-minute range statistically meaningless.
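
    To illustrate why that fixed setup cost muddies short benchmarks, here's a small sketch with made-up numbers (the 60-second overhead and render times are assumptions, not measurements):

    ```python
    # Illustration with made-up numbers: a fixed pre-render cost (scene load,
    # geometry/texture prep) skews comparisons on short renders.
    overhead_s = 60          # assumed fixed pre-render cost per run
    render_a_s = 120         # pure render time on system A (2 minutes)
    render_b_s = 90          # pure render time on system B (hardware ~1.33x faster)

    total_a = overhead_s + render_a_s
    total_b = overhead_s + render_b_s

    print(f"Pure render speedup: {render_a_s / render_b_s:.2f}x")   # 1.33x
    print(f"Wall-clock speedup:  {total_a / total_b:.2f}x")         # 1.20x
    # On a 2-3 minute render the wall-clock gap understates the hardware difference;
    # on an hour-long render the fixed overhead all but disappears.
    ```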

  • ebergerly Posts: 3,255
    jestmart said:

    It has been explained numerous times that there are a lot of pre-render calculations that take anywhere from 30 to 90 seconds every time a render begins, making renders in the 2-to-3-minute range statistically meaningless.

    Oh really? So if you and I render the identical scene, and both measure render time from AFTER the scene has loaded, the comparison is meaningless?
  • nicstt Posts: 11,715

    RAM speed doesn't add much; buying faster RAM, but not too fast, and reducing its speed is a great way of improving system performance. I got 4x16 GB at 3000 MHz; wouldn't have minded 128 GB, but 32 GB sticks are crazy expensive - even in a market that is, itself, crazy expensive.

  • kyoto kid Posts: 41,925

    ...what about 8 x 16 GB?  Or does the MB not have 8 DIMM slots?

    I also thought that Ryzen boards and the CPUs only support up to 64 GB.

  • Gator Posts: 1,320
    nicstt said:

    RAM speed doesn't add much; buying faster RAM, but not too fast, and reducing its speed is a great way of improving system performance. I got 4x16 GB at 3000 MHz; wouldn't have minded 128 GB, but 32 GB sticks are crazy expensive - even in a market that is, itself, crazy expensive.

    Yeah, really depends on the app.  Some see next to no benefit, some see a little bit.

    What I found interesting was that there are a few games with a substantial difference.  Typically not much in maximum framerates, as you'd expect, but a substantial difference in the minimum or average framerate.  IIRC, the difference with Threadripper was more dramatic, as the Infinity Fabric (tying the cores together) is synced with the memory speed.  To a lesser extent Ryzen, and, also IIRC, even less so the Intel chips.

    Of course, I'm not buying a Threadripper for gaming, but I do some gaming on the side.

  • Gator Posts: 1,320
    kyoto kid said:

    ...what about 8 x 16 GB?  Or does the MB not have 8 DIMM slots?

    I also thought that Ryzen boards and the CPUs only support up to 64 GB.

    Threadripper theoretically will go to 2 TB; there are no memory modules large enough to test that!

  • kyoto kid Posts: 41,925

    ...ahh so they at least have 8 DIMM slots then

    There are 128 GB modules available, but they are insanely expensive (you could build a pretty raging workstation for the price of just one stick) and primarily for enterprise servers.  With those it would take 16 modules to make up 2 TB (for a mere $108,000).

  • ebergerly Posts: 3,255

    I keep thinking we're approaching a point where future improvements in technology exceed what we can even use. Monitors are becoming such high resolution that our eyes can't even notice the difference when new monitors come out. And they're getting so large they can't fit in most living rooms. RAM is getting so large that few can even make use of it. HDDs are being replaced by SSDs, and once your stuff loads instantly, anything more than instantly isn't really noticeable. CPUs aren't really increasing in clock speed, but rather adding cores, and it seems less and less software really makes use of them. Even low-end GPUs now can play most games really well, unless people really need and can tell the difference with 4K or 8K or 16K or whatever. 

    Heck, without video games where would this technology really be needed? For most average users who aren't doing 3D or video editing or some professional technical stuff, is all of this going to be necessary? 

    I dunno. Maybe something amazing will come along in 5 years that really needs 48 cores, 96 threads, 1 TB RAM, 20 TB SSDs, and 16K monitors. I just can't imagine what that would be. 

  • nicstt Posts: 11,715
    kyoto kid said:

    ...what about 8 x 16 GB?  Or does the MB not have 8 DIMM slots?

    I also thought that Ryzen boards and the CPUs only support up to 64 GB.

    Yeah, could have done that.

    ... Didn't need 128, I don't think. :) I also don't like filling all slots; I would have been slightly more likely to get 128 if they weren't so crazily priced.

  • ebergerly said:

    I keep thinking we're approaching a point where future improvements in technology exceed what we can even use. Monitors are becoming such high resolution that our eyes can't even notice the difference when new monitors come out. And they're getting so large they can't fit in most living rooms.

    Hey. I want a razor-sharp LCD that covers my entire wall someday. Don't kill my dream.

    ebergerly said:

    RAM is getting so large that few can even make use of it.

    I use 16 GB easily while doing TG renders. I've recently upgraded to 32 just so that I can do other things at the same time without crashing my render, and I expect I'll easily break 16 now that I can.

    ebergerly said:

    HDDs are being replaced by SSDs, and once your stuff loads instantly, anything more than instantly isn't really noticeable.

    Ah, but they can still improve much on the space/money ratio.

    ebergerly said:

    CPUs aren't really increasing in clock speed, but rather adding cores, and it seems less and less software really makes use of them.

    Really, it isn't less and less. It's true that in gaming, for some reason, almost everything does not use multiple threads efficiently. But DS will use all my cores, TG will use all my cores, WM will use all my cores... granted, a lot of software isn't built to do this properly yet, and some algorithms will never be able to, so some software will never even be rewritten to use multiple threads, but a lot is, and the options are not growing less but more.

  • ebergerly Posts: 3,255

    Yeah, but I think most people are interested in graphics-intensive stuff, not CPU-intensive stuff that calculates rather than makes images. And a lot of CPU-intensive stuff seems to be transferring to the faster GPUs, like video encoding, etc. I suppose engineering stuff that just makes calculations might use the CPU, but I think the average user is more about visuals and graphics. 

  • CPU rendering is still a huge thing. In TG you can still only use CPU and this is not likely ever to change due to their architecture as far as I understand. And I very much like TG. Where is this CPUs are not for making images idea coming from?

  • ebergerly Posts: 3,255

    CPU rendering is still a huge thing. In TG you can still only use CPU and this is not likely ever to change due to their architecture as far as I understand. And I very much like TG. Where is this CPUs are not for making images idea coming from?

     A lot of places. Just look at D|S Iray rendering where CPU renders are pretty much useless compared to a decent graphics card. And even VWD the cloth sim is going to GPU to speed things up. And some of the video encoding software is advertising going to GPU to speed things up (although in practice many aren't quite there yet). 

    BTW, what's TG?

  • ebergerly said:

    CPU rendering is still a huge thing. In TG you can still only use CPU and this is not likely ever to change due to their architecture as far as I understand. And I very much like TG. Where is this CPUs are not for making images idea coming from?

     A lot of places. Just look at D|S Iray rendering where CPU renders are pretty much useless compared to a decent graphics card. And even VWD the cloth sim is going to GPU to speed things up. And some of the video encoding software is advertising going to GPU to speed things up (although in practice many aren't quite there yet). 

    BTW, what's TG?

    Terragen.

    GPU rendering is getting bigger every day but CPU rendering is not obsolete - definitely not to the point where "most people who want to render only want a GPU" is a thing. Even large studios still use CPU rendering because, eventually, it scales better. And there's nothing like the VRAM limitations which hit GPU rendering. Want a 25 GB scene - sure, have fun.

  • kyoto kid Posts: 41,925

    @ ebergerly:

    ...some good points there.

    When I built my system over 4 (almost 5) years ago, 12 GB of system memory and 1 GB of VRAM was considered  a "shredding" rig.  Of course we didn't have Iray yet, so memory, CPU cores and CPU clock speed were more important for rendering.  The GPU, well if you weren't into gaming, it just made the displays look and respond better.

    With the whole GPU rendering schtick since the introduction of Iray, suddenly a card's VRAM was more important than system memory or CPU cores, and this is where things began to get really costly. If, like myself, you create fairly "heavy" scenes on a regular basis, you need all the VRAM you can get your hands on.  Sadly for us, with Nvidia that tops out at 12 GB with the Titan Xp (for $1,200, or $1,500 for the integrated water-cooled version).  You really have to dig deep in the wallet to exceed that (a Quadro P5000 with 16 GB of VRAM retails for around $2,500). 

    Now along comes AMD with their relatively affordable 16 GB HBM2 Vega card priced at under $1,000 (which unfortunately is useless for Iray).  So the ball is now back in Nvidia's court, which still does not offer a prosumer card with faster HBM2 while AMD has two (there is also an 8 GB Vega).  As a matter of fact, Nvidia's top-of-the-line $5,000 Quadro P6000 still uses GDDR5X. The first Quadro available with HBM2 is the $6,500 - $8,000 (depending on vendor) Quadro GP100.  For that you could build a pretty nice workstation with a 16-core Threadripper, dual 1080 Ti GPUs, 128 GB of quad-channel DDR4 3000, several SSDs, and probably have a nice bit of change left in the pocket to buy more Daz goodies.

    However, what the GP100 does add is improved floating-point performance and Nvidia's new NVLink technology that connects cards, replacing the traditional SLI link.  Besides a fatter pipeline between the cards that translates to faster communication, NVLink reportedly also allows for a process called "memory pooling".  According to the hype from Nvidia, pooling memory between two GP100s will for the first time allow users access to the total memory of both linked cards (in the case of two GP100s, 32 GB).  This sort of sounds too good to be true, and I haven't been able to find much more detail on what this would mean for rendering.  If it indeed is the "holy grail" we've been looking for, it will only be affordable for professional production studios with big budgets, considering the card's steep price. There are also MBs with NVLink slots that accelerate data transfer between GPUs and CPUs, but so far those are not available on the consumer market and probably won't be for a while.

    As to CPU cores/threads, true, Daz doesn't utilise all available cores during the production phase. However, if by chance that mega scene, with a dozen posed & dressed G3 F/Ms, half of Stonemason's Urban Sprawl 3, volumetric haze, and several dozen emissive lights, exceeds the 1080 Ti's memory (a bit easier with Windows 10, as you actually have about 9 GB instead of 11 GB of available VRAM), then whatever CPU threads and memory you have will come into play (not sure what the maximum thread limit is for rendering in Daz/Iray). 

    Anyway, it really does seem to be escalating into somewhat of a technological "arms race" which pretty much looks to leave most of us behind, depending on what bread crumbs are allowed to fall from the table.

    For myself, the watershed is the Win10 requirement for the new generation CPUs. I'll choose to stay on the "lee side" for now, as over the last two years I've seen W10 as more a bust than benefit for a number of reasons besides its reserving VRAM.

  • kyoto kid Posts: 41,925
    ebergerly said:

    CPU rendering is still a huge thing. In TG you can still only use CPU and this is not likely ever to change due to their architecture as far as I understand. And I very much like TG. Where is this CPUs are not for making images idea coming from?

     A lot of places. Just look at D|S Iray rendering where CPU renders are pretty much useless compared to a decent graphics card. And even VWD the cloth sim is going to GPU to speed things up. And some of the video encoding software is advertising going to GPU to speed things up (although in practice many aren't quite there yet). 

    BTW, what's TG?

    Terragen.

    GPU rendering is getting bigger every day but CPU rendering is not obsolete - definitely not to the point where "most people who want to render only want a GPU" is a thing. Even large studios still use CPU rendering because, eventually, it scales better. And there's nothing like the VRAM limitations which hit GPU rendering. Want a 25 GB scene - sure, have fun.

    ...yeah, I'm still leaning towards just building a monster CPU-based workstation with dual 8- or 12-core Xeons, a boatload of memory, and a modest GPU for test renders, and letting the whole GPU arms race go where it wants.  For one, better scaling is important to my work as well, since I am looking to render in large format for high/gallery-quality printing.

    Carrara renders pretty darn fast when you have a lot of cores and memory available. One older thread I researched mentioned that people were getting day-plus render jobs down to 6 hours or less going to a dual-Xeon multi-core setup. One post even included a screenshot that showed 36 cores at work rendering.  Still trying to find the maximum single-system core limit for Carrara; I know that it will handle up to 100 cores, but not sure if that is only for networked rendering. Same for Daz.

  • kyoto kid said:
    ebergerly said:

    CPU rendering is still a huge thing. In TG you can still only use CPU and this is not likely ever to change due to their architecture as far as I understand. And I very much like TG. Where is this CPUs are not for making images idea coming from?

     A lot of places. Just look at D|S Iray rendering where CPU renders are pretty much useless compared to a decent graphics card. And even VWD the cloth sim is going to GPU to speed things up. And some of the video encoding software is advertising going to GPU to speed things up (although in practice many aren't quite there yet). 

    BTW, what's TG?

    Terragen.

    GPU rendering is getting bigger every day but CPU rendering is not obsolete - definitely not to the point where "most people who want to render only want a GPU" is a thing. Even large studios still use CPU rendering because, eventually, it scales better. And there's nothing like the VRAM limitations which hit GPU rendering. Want a 25 GB scene - sure, have fun.

    ...yeah, I'm still leaning towards just building a monster CPU-based workstation with dual 8- or 12-core Xeons, a boatload of memory, and a modest GPU for test renders, and letting the whole GPU arms race go where it wants.  For one, better scaling is important to my work as well, since I am looking to render in large format for high/gallery-quality printing.

    I am building an overclocked Ryzen 1700 box right now for my CPU rendering needs. 16 threads is going to blow my FX-8350 out of the water, and I'm going to be head-over-heels sending CPU renders over there while I do quick GPU stuff on my main box. I looked into building a dual Xeon at the same time just because I could have squeezed an extra couple of threads out of it, but ultimately the bang for the buck wasn't there for me. Really looking forward to seeing how this Threadripper build turns out.
