Has anyone here bought a Threadripper rig and used it for Studio yet?


Comments

  • Wow, looks like a monster setup there!  It appears you got two of those Thermaltake Core X9 cases?

    Also, it looks like the PC parts are on top and the water cooling is on the bottom?  You may want to look into that; it's naturally what I would do too, but my AIO coolers' instructions say the radiators should be mounted above the device.

    Also, I forgot a piece of info from a friend who is heavy into water cooling: he recommends against any sort of dyed fluid.  He said one UV dye separated on him, which was quite a PITA as he had to disassemble the whole setup.  IIRC a simple flush wasn't sufficient to clean the blocks either; he needed some special solution to get them clean.  Long story short, a lot of wasted time.

    Yep, two of them, and they're great to work with: lots of options and room.

    Will have the motherboard with the four GPUs in the top, but am considering putting the HDDs/SSDs in the bottom. Will be using two power supplies: one for the motherboard and GPUs, and one for the cooling system and HDDs/SSDs.

    I'll have to use an AIO for the CPU at the moment, as there are no waterblocks available yet for doing a custom loop with the TR4 socket, but the custom loop for the GPUs will be in the bottom.

    Had heard that about dyed fluid, but also that the problem came from non-factory-dyed fluid and that the newer factory fluids weren't prone to separation. The Thermaltake kit came with light blue tinted coolant and has all the anti-corrosion chemicals.


  • DustRider Posts: 2,888

    ebergerly said:

    So do you guys really get that much of a performance boost to justify overclocking and the expense of water cooling? I mean, if you're going from say 3.5 GHz to 4.0 GHz by overclocking, does that really translate to a significant performance difference in what you're doing? I can't imagine it matters much with Studio, especially for rendering. 

    Or are you like me where you just like playing with cool new technology? smiley

    I have a budget rig with an AMD FX 8350 Black in a mATX case that I water cooled, and I love it!! I've literally had this machine rendering for weeks without a break, and CPU temps never topped 48C (the longest so far was 5 weeks, with the CPU running at 90-100% the whole time). The plus for me is how quiet the system is compared to any air cooled system I've ever run, and that the temps stay so low under continuous heavy load. Also, the water cooler was only slightly more expensive than a really good air cooler, but it removes almost all of the CPU heat from the small case, so the GPU can stay much cooler as well, where air cooling would just add heat to the case.

  • Robert Freise Posts: 4,600
    edited September 2017
    ebergerly said:

    So do you guys really get that much of a performance boost to justify overclocking and the expense of water cooling? I mean, if you're going from say 3.5 GHz to 4.0 GHz by overclocking, does that really translate to a significant performance difference in what you're doing? I can't imagine it matters much with Studio, especially for rendering. 

    Or are you like me where you just like playing with cool new technology? smiley

    I haven't done any overclocking, but I have some programs that are CPU intensive, so the thread count on the CPU is a benefit. And with four GPUs in the system, water cooling is pretty much mandatory so the system doesn't melt everything in the room. And yeah, sometimes I like playing with the cool new tech IF I think it's something I'll benefit from. Besides, I'm almost 64, own my home and vehicle free and clear, am financially secure, and have no significant other to get upset about how I choose to spend money, so I'm able to splurge some if I want to. devil

    Post edited by Robert Freise on
  • Gator Posts: 1,320
    ebergerly said:

    So do you guys really get that much of a performance boost to justify overclocking and the expense of water cooling? I mean, if you're going from say 3.5 GHz to 4.0 GHz by overclocking, does that really translate to a significant performance difference in what you're doing? I can't imagine it matters much with Studio, especially for rendering. 

    Or are you like me where you just like playing with cool new technology? smiley

    IIRC, with Threadripper you won't get the max factory boost clocks on air cooling. 

    It does look cooler, but it also gives you much more room inside, with better airflow and cooling, and you don't have the weight of a gigantic air cooler pulling down on the board, since most mobos are mounted vertically.

    It's also really good for multiple GPUs.  My main box is currently running two Titan X Pascals, and the GPU temps hit 80C; the top card will only hold its base clock, and the lower card will run a bit higher, but not at the max factory boost clock.


    A bit of both, probably: playing with new tech, plus some practical purpose (especially the GPUs, as I noted above).

  • kyoto kid Posts: 41,928
    DustRider said:

    So do you guys really get that much of a performance boost to justify overclocking and the expense of water cooling? I mean, if you're going from say 3.5 GHz to 4.0 GHz by overclocking, does that really translate to a significant performance difference in what you're doing? I can't imagine it matters much with Studio, especially for rendering. 

    Or are you like me where you just like playing with cool new technology? smiley

    I have a budget rig with an AMD FX 8350 Black in a mATX case that I water cooled, and I love it!! I've literally had this machine rendering for weeks without a break, and CPU temps never topped 48C (the longest so far was 5 weeks, with the CPU running at 90-100% the whole time). The plus for me is how quiet the system is compared to any air cooled system I've ever run, and that the temps stay so low under continuous heavy load. Also, the water cooler was only slightly more expensive than a really good air cooler, but it removes almost all of the CPU heat from the small case, so the GPU can stay much cooler as well, where air cooling would just add heat to the case.

    ...I'm running 5 fans on my Antec P-193 (including the big one on the left side panel) plus one on the CPU cooler and they are pretty much inaudible.

    Liquid cooling just seems too much to bother with, as I don't overclock anything since I am not into games.  During Iray rendering my CPU tops out between 65 and 72C.


  • FWIW Posts: 320

    Less VRAM available or not, I think I am definitely aiming for the 1080 Ti, just because 8GB of available VRAM is like 8GB better than what I have now, lol. I'm on an older Envy 17 laptop running what GPU-Z says is an integrated Intel HD Graphics 4600. So I can't even imagine the boost I will get from that.

  • kyoto kid Posts: 41,928
    ...well, if you are still on W7 or 8.1 and have all the flashy stuff disabled, you shouldn't have to worry about losing much of your VRAM.
  • FWIW Posts: 320

    No, I am stuck with Windows 10, unfortunately. 

  • kyoto kid Posts: 41,928
    ...ugh. I'm doing my best to avoid it, as when I can finally afford a big-VRAM GPU card I don't want it seriously crippled.
  • FWIW Posts: 320

    Good luck. I have gotten used to it but I definitely preferred Win 7. 

  • Gator Posts: 1,320
    edited September 2017

    Less VRAM available or not, I think I am definitely aiming for the 1080 Ti, just because 8GB of available VRAM is like 8GB better than what I have now, lol. I'm on an older Envy 17 laptop running what GPU-Z says is an integrated Intel HD Graphics 4600. So I can't even imagine the boost I will get from that.

    I think the 1080 Ti is the best bang for your buck right now for Iray...  It's about the same performance-wise as the Titan X Pascal, which you had to pay $1,200 for last year.  Now you get that performance for only $700, and you only lose 1 GB of VRAM.

    A terrible decision by Nvidia as far as selling Titan cards goes, but the volume of 1080 Ti sales may balance that out.  Or not... I'm curious about those numbers.

    Post edited by Gator on
  • ebergerly Posts: 3,255

    I think the 1080 Ti is the best bang for your buck right now for Iray... 

    Yeah, I'm not sure I'd put "1080ti" and "best bang for the buck" in the same sentence. Or even paragraph. 

    A 1080ti right now costs about $750 at Newegg. A 1080 is going for $530. And a 1070 is going for $450. 

    The real indication of "bang for the buck" is something like "dollars per % improvement in Iray render times". I don't have the numbers off the top of my head, but I think if you compare the $$ per %, the 1080ti is way more expensive for what you get. I may be wrong, but I think the 1080 wins that battle easily. 

  • ebergerly Posts: 3,255

    Using the D|S Iray benchmark scene, the 1070 was taking 3 minutes to render it, and for the 1080ti a lot of people were quoting around 2 minutes for the same scene. So that's a 33% improvement from the 1070 to the 1080ti, because (3-2)/3 = 33%. And the price difference is $300. 

    So you're paying $300 for a 33% performance improvement. I dunno, sure seems like a lot of money for a relatively small improvement. 
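
    To make the "dollars per % improvement" idea concrete, here's a quick sketch using the rough figures above (illustrative only: the times and prices are the approximate ones quoted in this thread, not authoritative benchmarks):

    ```python
    # Rough bang-for-the-buck comparison using the figures quoted above.
    # Times are from the community benchmark scene; prices are approximate
    # September 2017 street prices -- treat every number as illustrative.

    cards = {
        # name: (render_time_minutes, price_usd)
        "GTX 1070":    (3.0, 450),
        "GTX 1080 Ti": (2.0, 750),
    }

    base_time, base_price = cards["GTX 1070"]

    for name, (time_min, price) in cards.items():
        improvement = (base_time - time_min) / base_time * 100  # % faster than the 1070
        extra_cost = price - base_price
        if improvement > 0:
            print(f"{name}: {improvement:.0f}% faster for ${extra_cost} more, "
                  f"i.e. ${extra_cost / improvement:.2f} per % of improvement")
    ```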

  • Gator Posts: 1,320
    ebergerly said:

    I think the 1080 Ti is the best bang for your buck right now for Iray... 

    Yeah, I'm not sure I'd put "1080ti" and "best bang for the buck" in the same sentence. Or even paragraph. 

    A 1080ti right now costs about $750 at Newegg. A 1080 is going for $530. And a 1070 is going for $450. 

    The real indication of "bang for the buck" is something like "dollars per % improvement in Iray render times". I don't have the numbers off the top of my head, but I think if you compare the $$ per %, the 1080ti is way more expensive for what you get. I may be wrong, but I think the 1080 wins that battle easily. 

    The Asus version is $709 at Newegg, which is also EVGA's price on the base model, but they are OOS at the moment.  The Nvidia Founders Edition is $699, also OOS.  It pays to be patient.

    The pricing is about right: it's about 60% more than the 1070, but it's about twice the card, with nearly twice as many CUDA cores (the most important factor for Iray performance) and nearly twice the memory throughput.  If you were figuring that price on core performance alone it's a good deal, but as an added bonus you get 3 GB more of VRAM, which is a huge deal, since that fancy video card means nothing in Iray if you don't have enough VRAM and drop to CPU rendering.

    Here is a chart of the specs:

    http://www.techadvisor.co.uk/feature/pc-components/nvidia-geforce-gtx-1080-ti-vs-1080-vs-1070-vs-1060-vs-1050-3640925/
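
    As a rough sanity check on the "twice the card" claim, here's a sketch comparing price per CUDA core, using Nvidia's published core counts and the approximate street prices quoted earlier in this thread:

    ```python
    # Price per CUDA core for the three cards, using Nvidia's published
    # specs and the approximate street prices quoted in this thread.

    cards = {
        # name: (cuda_cores, vram_gb, price_usd)
        "GTX 1070":    (1920,  8, 450),
        "GTX 1080":    (2560,  8, 530),
        "GTX 1080 Ti": (3584, 11, 700),
    }

    for name, (cores, vram_gb, price) in cards.items():
        print(f"{name}: {cores} cores, {vram_gb} GB VRAM, ${price} "
              f"-> ${price / cores * 100:.1f} per 100 CUDA cores")
    ```

    By that crude measure the 1080 Ti is actually the cheapest of the three per core, before you even count the extra VRAM.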

    And if you also do some gaming, SLI doesn't scale nearly as well as Iray does.  It's better to have one card with double the performance than two half-power cards in SLI.

  • DustRider Posts: 2,888
    edited September 2017
    ebergerly said:

    Using the D|S Iray benchmark scene, the 1070 was taking 3 minutes to render it, and for the 1080ti a lot of people were quoting around 2 minutes for the same scene. So that's a 33% improvement from the 1070 to the 1080ti, because (3-2)/3 = 33%. And the price difference is $300. 

    So you're paying $300 for a 33% performance improvement. I dunno, sure seems like a lot of money for a relatively small improvement. 

    Unless of course your scene needs more memory than is available on the 1070/1080; then the 1080ti is a bargain when looking at performance wink. If the cards had the same amount of available memory, then the pure performance increase would be a viable sole comparison benchmark. But when comparing these two (or three), you also need to take the additional memory into account; if you need it, the real comparison is the speed of the 1080ti versus whatever CPU you would fall back to.

    Post edited by DustRider on
  • DustRider Posts: 2,888
    kyoto kid said:
    DustRider said:

    So do you guys really get that much of a performance boost to justify overclocking and the expense of water cooling? I mean, if you're going from say 3.5 GHz to 4.0 GHz by overclocking, does that really translate to a significant performance difference in what you're doing? I can't imagine it matters much with Studio, especially for rendering. 

    Or are you like me where you just like playing with cool new technology? smiley

    I have a budget rig with an AMD FX 8350 Black in a mATX case that I water cooled, and I love it!! I've literally had this machine rendering for weeks without a break, and CPU temps never topped 48C (the longest so far was 5 weeks, with the CPU running at 90-100% the whole time). The plus for me is how quiet the system is compared to any air cooled system I've ever run, and that the temps stay so low under continuous heavy load. Also, the water cooler was only slightly more expensive than a really good air cooler, but it removes almost all of the CPU heat from the small case, so the GPU can stay much cooler as well, where air cooling would just add heat to the case.

    ...I'm running 5 fans on my Antec P-193 (including the big one on the left side panel) plus one on the CPU cooler and they are pretty much inaudible.

    Liquid cooling just seems too much to bother with, as I don't overclock anything since I am not into games.  During Iray rendering my CPU tops out between 65 and 72C.


    Your Antec P-193 is much larger than the micro ATX case I have the FX 8350 in. In fact, many of the large (i.e. slow/quiet fan) aftermarket CPU air coolers won't even fit in my case, and the fan/cooler it came with simply would not work for me (I was afraid it was going to shake the motherboard apart, the vibration/noise was so bad). Also, comparing your system to this one is a bit of an apples and oranges thing for cooling. I have a much smaller case (max of 3 fans; I have one, plus the radiator/fan installed, and a small internal fan to cool the MB), and a CPU with 8 cores vs. your 4 cores, all 8 running at 4.1GHz. Plus it has 32GB of RAM and a full length GTX 960, all crammed into the small case. So I would guess it's generating a lot more heat under full load than your system is. It's not uncommon for the ambient temp where it sits to go over 90 degrees F in the afternoon, yet the CPU stays at the 48C mark; on very hot days it has gone to 52C for an hour or so, but that's still an amazing temp for a CPU that is known for generating a lot of heat. IIRC you aren't able to use yours on hot days due to CPU heat??

    Anyhoo, my post wasn't intended to compare systems; it was to show that sometimes, like in my case, water cooling is actually beneficial, not just something "cool" (pun intended). Some other factors in choosing the Corsair H-75 for cooling were my wife's desire to have a quiet system (fan sound bothers her), the intended original use of the computer (extended hours/days under full load, including CPU and GPU), and the poor airflow of the case (the original cooler kept the CPU at around 80C, but also increased the GPU temps). After about 10-12 hours of research, I decided to go with the water cooled system. I wasn't a fan of water cooling either, but it seemed to be my best option. Now I'm a big fan of it. Pre-built liquid cooling systems are very simple to install, about the same as a new CPU cooler and case fan. If I ever get another desktop, it will probably have water/liquid cooling.

  • ebergerly Posts: 3,255
    edited September 2017

    ...but it's about twice the card, with nearly twice as many CUDA cores (the most important factor for Iray performance) and nearly twice the memory throughput.  If you were figuring that price on core performance alone it's a good deal, but as an added bonus you get 3 GB more of VRAM, which is a huge deal, since that fancy video card means nothing in Iray if you don't have enough VRAM and drop to CPU rendering.


    My only point is that it's easy to get dazzled by specs, but if those specs don't make a significant practical difference in the work YOU are doing, then all of those specs are irrelevant. Twice as many CUDA cores sounds awesome, but what matters in practice is that it gives you a 33% improvement in Iray render speeds. Now if you have other software that takes advantage of all those cool specs, then fine. I'm only suggesting you first think about YOUR needs and YOUR software and decide what's best. 

    Regarding VRAM in the 1080ti... yeah, it's like 11GB, and W10 takes a portion of that. However, I just tried loading the largest scene I have, which is a big Stonemason city scene with tons of textures and 2 G3 characters. And I added an additional 2 G3's for good measure.

    When I start Studio my GTX 1070 VRAM usage is 400MB (out of a total of 8GB). When I load the monster scene and turn on the Iray viewport, the GPU memory used goes up to 6GB. And when I do a render, it runs out of VRAM and the whole thing dumps to the CPU, and my 16 threads do the rendering, not the GPU.

    But if I remove a couple of the characters, the VRAM usage drops to 4.5GB, and it renders fine with the GPU only. 

    Now, it took me some work to build a scene with enough stuff to make it run out of memory, and in general I don't get near that. So yeah, bigger can be better, but only if you really NEED it. 

    The point being, think before you decide. 

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited September 2017

    BTW, something I still haven't figured out about GPU VRAM and W10 grabbing some of it...

    When I load D|S with an empty scene, my GPU VRAM usage is 400MB. Apparently W10 isn't taking any of it. Or maybe GPU-Z isn't reporting it. And when I load the big scene, the GPU memory used is 6GB. But it renders the Iray viewport fine, without dumping to CPU.

    Only when I hit RENDER and do a formal render does it run out of available GPU VRAM and dump to CPU. 

    I guess I'm still not clear on exactly what's going on with W10 grabbing VRAM. 

    Post edited by ebergerly on
  • JamesJAB Posts: 1,766
    ebergerly said:

    BTW, something I still haven't figured out about GPU VRAM and W10 grabbing some of it...

    When I load D|S with an empty scene, my GPU VRAM usage is 400MB. Apparently W10 isn't taking any of it. Or maybe GPU-Z isn't reporting it. And when I load the big scene, the GPU memory used is 6GB. But it renders the Iray viewport fine, without dumping to CPU.

    Only when I hit RENDER and do a formal render does it run out of available GPU VRAM and dump to CPU. 

    I guess I'm still not clear on exactly what's going on with W10 grabbing VRAM. 

    Don't worry, I don't understand how this supposed VRAM reservation works either.
    Here's my 4GB Quadro K5000M laptop for example (it behaves the same on my GTX 1060 6GB desktop, but that's temporarily out of order waiting on a motherboard):

    This is all based on VRAM usage as reported by the MSI Afterburner tray icon monitoring.

    Windows 10 running the desktop @ 4K resolution (3840x2160) with 12 Chrome tabs open: 431MB of VRAM in use.

    Open Daz Studio with the display set to Texture Shaded: 534MB of VRAM in use.

    Load a saved scene that includes 1 Genesis 8 character with hair and clothes, a car with Iray shaders, and Stonemason's Urban Future 5: 746MB of VRAM in use.

    Click Render: 2535MB of VRAM in use. (Cancel the render but leave the render window open.)

    Increase the Genesis 8 Render SubD minimum to 3 and click Render: 3450MB of VRAM in use. (Cancel the render and keep the most recent render window open.)

    Add Bethany 7 HD into the scene and click Render: 3782MB of VRAM in use. (Cancel the render and keep the most recent render window open.)

    Add hair with Iray mats to Bethany and click Render: 3918MB of VRAM in use. (Cancel the render and keep the most recent render window open.)

    Add a dress with Iray mats to Bethany and click Render: 3918MB of VRAM in use.

    I'm just not seeing this VRAM reservation issue in practice.
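
    If you'd rather log these numbers than watch the Afterburner tray icon, polling nvidia-smi works too (a minimal sketch, assuming an Nvidia driver with the nvidia-smi tool on the PATH; different tools may report slightly different figures):

    ```python
    # Poll nvidia-smi once per second and print VRAM usage while you load
    # scenes and start renders. Stop it with Ctrl+C.
    import subprocess
    import time

    while True:
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=index,memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        for line in out.strip().splitlines():
            idx, used, total = [field.strip() for field in line.split(",")]
            print(f"GPU {idx}: {used} MiB / {total} MiB in use")
        time.sleep(1)
    ```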

  • ebergerly Posts: 3,255

    Well, the fact that I'm showing only 6GB of my 8GB in use prior to doing a formal Iray render, yet as soon as I hit Render it dumps to the CPU, makes me think there's a couple of GB being taken by something, or else it should render fine. 

    But why that's not an issue when Iray renders the viewport seems strange. 

    This is really weird, and I'm guessing there aren't a lot of people out there who really understand what's going on, including me. 

  • JamesJAB Posts: 1,766
    ebergerly said:

    Well, the fact that I'm showing only 6GB of my 8GB in use prior to doing a formal Iray render, yet as soon as I hit Render it dumps to the CPU, makes me think there's a couple of GB being taken by something, or else it should render fine. 

    But why that's not an issue when Iray renders the viewport seems strange. 

    This is really weird, and I'm guessing there aren't a lot of people out there who really understand what's going on, including me. 

    When Iray renders its preview window, it's not rendering at full quality.  Also, if you are rendering an image that's larger than your preview window, that will cause it to need more VRAM to render.
    Try switching back to the Texture Shaded preview before hitting Render and see if that makes a difference.  Also, keeping multiple old render windows open will eat into your available VRAM, because Iray keeps that memory ready in case you continue the render.
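
    For a rough feel of how render size alone moves the needle, here's a back-of-the-envelope estimate (purely illustrative: the assumption of two float32 RGBA buffers per render target is mine, and Iray's actual buffer layout may differ):

    ```python
    # Back-of-the-envelope framebuffer size for a render target, assuming
    # two float32 RGBA buffers (display + accumulation) -- an assumption
    # for illustration only; Iray's real buffer layout may differ.

    def framebuffer_mb(width, height, channels=4, bytes_per_channel=4, buffers=2):
        return width * height * channels * bytes_per_channel * buffers / 2**20

    for w, h in [(1280, 720), (1920, 1080), (3840, 2160)]:
        print(f"{w}x{h}: ~{framebuffer_mb(w, h):.0f} MB just for render buffers")
    ```

    Under those assumptions a 4K render target alone wants a couple hundred MB more than a small preview window, on top of the scene's geometry and textures.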

  • Gator Posts: 1,320
    ebergerly said:

    ...but it's about twice the card, with nearly twice as many CUDA cores (the most important factor for Iray performance) and nearly twice the memory throughput.  If you were figuring that price on core performance alone it's a good deal, but as an added bonus you get 3 GB more of VRAM, which is a huge deal, since that fancy video card means nothing in Iray if you don't have enough VRAM and drop to CPU rendering.


    My only point is that it's easy to get dazzled by specs, but if those specs don't make a significant practical difference in the work YOU are doing, then all of those specs are irrelevant. Twice as many CUDA cores sounds awesome, but what matters in practice is that it gives you a 33% improvement in Iray render speeds. Now if you have other software that takes advantage of all those cool specs, then fine. I'm only suggesting you first think about YOUR needs and YOUR software and decide what's best. 

    Regarding VRAM in the 1080ti... yeah, it's like 11GB, and W10 takes a portion of that. However, I just tried loading the largest scene I have, which is a big Stonemason city scene with tons of textures and 2 G3 characters. And I added an additional 2 G3's for good measure.

    When I start Studio my GTX 1070 VRAM usage is 400MB (out of a total of 8GB). When I load the monster scene and turn on the Iray viewport, the GPU memory used goes up to 6GB. And when I do a render, it runs out of VRAM and the whole thing dumps to the CPU, and my 16 threads do the rendering, not the GPU.

    But if I remove a couple of the characters, the VRAM usage drops to 4.5GB, and it renders fine with the GPU only. 

    Now, it took me some work to build a scene with enough stuff to make it run out of memory, and in general I don't get near that. So yeah, bigger can be better, but only if you really NEED it. 

    The point being, think before you decide. 

    Keep in mind it's probably not just a 33% improvement; you're relying on one simple benchmark.

    And that benchmark removes all the time it takes to feed the scene to the card, which is affected by many other big variables in the host system, like CPU, system bus, and memory speed.  But it also removes many of the performance variables of the card itself. 

    It's like comparing an old 60's muscle car and a modern one and, to remove all other variables, basing it solely on how fast they go from 0-100 in a straight line.  By taking out the differences in driver skill, the modern car isn't that much faster.  But we also took out many important variables, like braking and steering.  Have the same driver run a number of laps in each on a closed road course, and you'll see a bigger discrepancy in performance.


    A much better benchmark would be the exact same system hardware, running a number of benchmarks with a 1070 and then a 1080 Ti.  I suspect you'd see a bigger delta, as theoretically (from all the info we have) the system should be able to feed the information into a 1080 Ti's memory much faster than into a 1070's.  I'm not saying Sickleyield's test is bad, but in that thread we're not benchmarking overall performance.

  • ebergerly Posts: 3,255

    Keep in mind it's probably not just a 33% improvement; you're relying on one simple benchmark.

    And that benchmark removes all the time it takes to feed the scene to the card, which is affected by many other big variables in the host system, like CPU, system bus, and memory speed.  But it also removes many of the performance variables of the card itself. 

    Well, if you're going to guess without real data, then clearly the 1080ti is the best bang for the buck. I'm sure there are some benchmarks out there that will show you get a 5,000% improvement in performance over the 1070. Maybe 10,000%. 

    So I guess we should all go out right now and buy one while they're relatively cheap. smiley 

  • Gator Posts: 1,320
    ebergerly said:

    Keep in mind it's probably not just a 33% improvement; you're relying on one simple benchmark.

    And that benchmark removes all the time it takes to feed the scene to the card, which is affected by many other big variables in the host system, like CPU, system bus, and memory speed.  But it also removes many of the performance variables of the card itself. 

    Well, if you're going to guess without real data, then clearly the 1080ti is the best bang for the buck. I'm sure there are some benchmarks out there that will show you get a 5,000% improvement in performance over the 1070. Maybe 10,000%. 

    So I guess we should all go out right now and buy one while they're relatively cheap. smiley 

    I think you missed the point.  While the benchmark thread here is helpful, it measures a subset of the card's Iray performance in Studio, not its overall performance.

  • JamesJAB said:

    Well, I was going to comment about thread usage and dynamic cloth, but I was just surprised and disappointed with Marvelous Designer...
    Daz Studio's Optitex Dynamic Cloth will use every single thread that you can throw at it (it will max out my 24 threads).
    Marvelous Designer, on the other hand, looks like it's locked down to 4 threads...

    In MD version 6.5 you can change the number of CPUs used in the simulation properties.

    (Attached screenshots: Clip.jpg, Clip_2.jpg)
  • ebergerly Posts: 3,255
    edited September 2017


    I think you missed the point.  While the benchmark thread here is helpful, it measures a subset of the card's Iray performance in Studio, not its overall performance.

    And you're *assuming* that if we had a lot more benchmarks, they would show "overall performance" to be significantly higher. But since you don't have those additional benchmarks, how can you assume performance would be higher? 

    So if we had 2 more benchmarks, is that enough? How about 3? And how specifically should the benchmarks be configured to be acceptable?  

    Get my point? 

    Post edited by ebergerly on
  • ebergerly Posts: 3,255

    Which raises the issue...

    Since there is concern that the existing benchmark scene that Sickleyield was kind enough to prepare and distribute is not really sufficient for some reason, perhaps Scott or someone would propose and prepare an updated version so those concerns could be addressed.

    Personally, I'm not quite sure why (or if...) the existing scene is insufficient, so maybe someone could list the concerns and throw together a scene that we could try. 

  • Gator Posts: 1,320
    edited September 2017
    ebergerly said:


    I think you missed the point.  While the benchmark thread here is helpful, it measures a subset of the card's Iray performance in Studio, not its overall performance.

    And you're *assuming* that if we had a lot more benchmarks, they would show "overall performance" to be significantly higher. But since you don't have those additional benchmarks, how can you assume performance would be higher? 

    So if we had 2 more benchmarks, is that enough? How about 3? And how specifically should the benchmarks be configured to be acceptable?  

    Get my point? 

    It's a data point, and all that we have.  It's not meant to dump on Sickleyield at all; I'm appreciative that she created the thread and of the info that is there.  smiley


    My information is based on input from Nvidia Iray developers and on what I have seen with a few different systems and cards.

    When I upgraded from two Titan X Maxwell cards to two Titan X Pascal cards, you'd say "it's not worth it, you'll spend $2400 to only see 60% improvement in Sickleyield's benchmark."

    I'll use that case since everything in the system remained the same; I simply swapped the video cards.  I saw scenes get fed to the card and start to render faster, which isn't a big deal when the final scene is set up: you just push Render and wait a few hours for it to finish... a two-minute loading difference? Big deal.  But it is a pretty big deal when you're working on a scene and making a number of lighting adjustments: render, tweak, render again, tweak, etc.  And the interface runs much more smoothly, making posing (including IK) and scene adjustments easier and faster.  All things that impact workflow, none of them captured in one limited benchmark, and very worthwhile to someone who uses Studio a fair bit. 
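
    To put the workflow point in numbers, here's a toy model of an iterative lighting session (every figure is hypothetical, chosen only to show how per-iteration overhead compounds; it's not a measurement of any real card):

    ```python
    # Toy model: iterative scene work costs iterations * (feed + preview),
    # so per-iteration overhead compounds. All numbers are hypothetical.

    def session_minutes(iterations, feed_min, preview_min):
        return iterations * (feed_min + preview_min)

    # Same 3:2 preview-speed ratio as the benchmark, but the faster card
    # is also assumed to get the scene fed to it in half the time.
    slow = session_minutes(iterations=20, feed_min=4.0, preview_min=3.0)
    fast = session_minutes(iterations=20, feed_min=2.0, preview_min=2.0)

    print(f"slow card: {slow:.0f} min, fast card: {fast:.0f} min, "
          f"end-to-end speedup: {(slow - fast) / slow:.0%}")
    ```

    Under those made-up numbers the end-to-end session speedup is 43%, noticeably more than the 33% the render-only benchmark suggests.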

    Post edited by Gator on
  • Finally getting started; my living room looks like a computer shop.

    Here's the chassis.

    Thermaltake X9's? I have one of those on my wish list :D

  • JamesJAB Posts: 1,766
    JamesJAB said:

    Well, I was going to comment about thread usage and dynamic cloth, but I was just surprised and disappointed with Marvelous Designer...
    Daz Studio's Optitex Dynamic Cloth will use every single thread that you can throw at it (it will max out my 24 threads).
    Marvelous Designer, on the other hand, looks like it's locked down to 4 threads...

    In MD version 6.5 you can change the number of CPUs used in the simulation properties.

    After I get my workstation back up, I'll investigate whether there is a hard-coded max that's less than my setup (12 cores, 24 threads).
