Comments
Yep, and I love 'em; all kinds of options.
I'm in the process of installing everything; just finished building it, but I'm low on time atm.
Just built it as soon as I could to make sure all the parts worked. Considering the time I have, it's going to take me a couple of days to build a disk image.
Looking forward to hearing how it does.
Still building myself; made a couple of changes which required ordering another couple of items, but it shouldn't be long, I hope.
...for smaller cases, yeah, I can see that. That is what, years ago, kept me away from off-the-shelf prebuilds, as they usually were in the smallest case that could be found (with only one rear fan) and had barely adequate PSUs to support what was installed inside.
I'll stick to large air-cooled cases myself. The new-generation 10xx-series GPUs consume less power than older-generation cards (meaning less heat and less need for liquid cooling unless you intend to overclock them), as do Xeon CPUs.
Wish I could get another P-193 for the new build, but that model has been discontinued in favour of the side-panel-windows-and-flashy-lights trend. May have found a worthy case though: it has provisions for up to 11 fans (which are surprisingly whisper quiet), and like the P-193 it has a couple on the left side panel by the GPU(s) and CPU cooler, and is very spacious inside.
Yeh, the PSU I have is way too loud; going to return it and get a silent one (one that only turns its fan on when needed).
...yeah, I need all I can get as many of my scenes can top out at around 7 GB. Also staying with W7 using the basic desktop and no "gadgets" which helps conserve VRAM for rendering as well. For myself, SOTA in this case comes with too big a price (and I'm not just talking in zlotys either).
...the downside of Texture Shaded view mode is that if you are using an HDRI, you don't see the backdrop.
This is why I tend not to use HDRIs, as I have to do a bunch of test renders to get the positioning where I want it. Iray view mode on my system will cause Daz to crash to the desktop, as I only have 10.7 GB of available system memory and a 1 GB GPU card.
BTW, someone mentioned earlier that RAM speed probably doesn't matter a whole lot. That couldn't be further from the truth with Ryzen (and thus Threadripper and Epyc). The interconnect speed of the CCXes and dies is directly tied to RAM speed, so the faster your RAM, the faster your entire system will be.
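To put rough numbers on that (a minimal sketch; the relationship shown is for first-generation Ryzen, where the Infinity Fabric clock is locked to the memory's real clock, i.e. half the DDR4 transfer rate):

```python
# Sketch: on first-gen Ryzen the Infinity Fabric clock equals the memory's
# real clock, which is half the DDR4 transfer rate (DDR = double data rate).
def fabric_clock_mhz(ddr4_rate: int) -> float:
    """Approximate Infinity Fabric clock for a DDR4-<rate> kit, in MHz."""
    return ddr4_rate / 2

for rate in (2133, 2666, 3200):
    print(f"DDR4-{rate}: fabric clock ~{fabric_clock_mhz(rate):.0f} MHz")
# DDR4-2133 -> ~1066 MHz; DDR4-3200 -> ~1600 MHz, a 50% faster interconnect.
```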
...not worth the trouble of having to deal with all the rubbish of W10 though (and it's not just the VRAM issue either).
If Daz supported one of the major Linux versions, it would be a different story as all three CPUs also support Linux (particularly Epyc). Would be crazy fun rendering in Carrara with 64 CPU threads and 128 GB of fast 8 channel DDR4.
Here are some links; Amazon shows there to be a total of 10 (9 of the Corsair and one of the G-SKILL):
https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100007611 600336949 600561668
https://smile.amazon.com/Corsair-CMR64GX4M4C3200C16-VENGEANCE-PC4-25600-Desktop/dp/B071JQJG13/ref=sr_1_10?ie=UTF8&qid=1505709575&sr=8-10&keywords=3200+MHz+DDR4+RAM+kit
https://smile.amazon.com/G-SKILL-TridentZ-PC4-25600-Platform-F4-3200C16Q-64GTZSK/dp/B01K8THPKA/ref=sr_1_39?ie=UTF8&qid=1505709677&sr=8-39&keywords=3200+MHz+DDR4+RAM+kit
It has been explained numerous times that there are a lot of pre-render calculations that take anywhere from 30 to 90 seconds every time a render begins, making benchmark renders in the 2-to-3-minute range statistically meaningless.
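To illustrate with the 90-second figure from above (the render lengths are my own, chosen for the example):

```python
# Fixed pre-render setup (30-90 s per the post) dominates short benchmarks.
def setup_share(setup_s: float, total_s: float) -> float:
    """Fraction of a timed render that is really fixed setup cost."""
    return setup_s / total_s

for total_s in (120, 180, 1800):
    print(f"{total_s:>5} s render, 90 s setup: "
          f"{setup_share(90, total_s):.0%} of the time is setup")
# 120 s -> 75%, 180 s -> 50%, 1800 s -> 5%: only long renders measure the hardware.
```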
RAM speed doesn't add much on its own; buying RAM rated faster than you need (but not too fast) and running it at a reduced speed is a great way of getting solid system performance. I got 4×16 GB at 3000 MHz; wouldn't have minded 128 GB, but 32 GB sticks are crazy expensive, even in a market that is itself crazy expensive.
...what about 8 x 16 GB? Or does the MB not have 8 DIMM slots?
I also thought that Ryzen boards and the CPUs only support up to 64 GB.
Yeah, really depends on the app. Some see next to no benefit, some see a little bit.
What I found interesting was that there are few games with a substantial difference. Typically not much change in maximum framerates, as you'd expect, but a substantial difference in the minimum or average framerate. IIRC, the difference with Threadripper was more dramatic, as the Infinity Fabric (tying the cores together) is synced with the memory speed. The effect was smaller on Ryzen and, also IIRC, smaller still on the Intel chips.
Of course, I'm not buying a Threadripper for gaming, but I do some gaming on the side.
Threadripper theoretically will go to 2 TB; there are no memory modules large enough to test that!
...ahh so they at least have 8 DIMM slots then
There are 128 GB modules available, but they are insanely expensive (you could build a pretty raging workstation for the price of just one stick) and primarily for enterprise servers. With those it would take 16 modules to make up 2 TB (for a mere $108,000).
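Checking the arithmetic on that (the total price is the figure quoted above; the per-module price falls out of it):

```python
modules = 16
gb_per_module = 128
kit_price_usd = 108_000                      # figure quoted above
total_tb = modules * gb_per_module / 1024    # 16 x 128 GB = 2048 GB
print(f"{total_tb:.0f} TB total, ~${kit_price_usd / modules:,.0f} per module")
# -> 2 TB total, ~$6,750 per module
```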
I keep thinking we're approaching a point where future improvements in technology exceed what we can even use. Monitors are becoming such high resolution that our eyes can't even notice the difference when new monitors come out. And they're getting so large they can't fit in most living rooms. RAM is getting so large that few can even make use of it. HDDs are being replaced by SSDs, and once your stuff loads instantly, anything more than instantly isn't really noticeable. CPUs aren't really increasing in clock speed, but rather increasing cores, and it seems less and less software really makes use of it. Even low-end GPUs can now play most games really well, unless people really need 4k or 8k or 16k or whatever, and can actually tell the difference.
Heck, without video games where would this technology really be needed? For most average users who aren't doing 3D or video editing or some professional technical stuff, is all of this going to be necessary?
I dunno. Maybe something amazing will come along in 5 years that really needs 48 cores, 96 threads, 1 TB of RAM, 20 TB SSDs, and 16k monitors. I just can't imagine what that would be.
Yeh could have done that.
... Didn't need 128, I don't think. :) I also don't like filling all slots; I would have been slightly more likely to get 128 if they weren't so crazily priced.
Hey. I want a razor-sharp LCD that covers my entire wall someday. Don't kill my dream.
I use 16 GB easily while doing TG renders. I've recently upgraded to 32 just so that I can do other things at the same time without crashing my render, and I expect I'll easily break 16 now that I can.
Ah, but they can still improve much on the space/money ratio.
Really, it isn't less and less. It's true that for gaming, for some reason, almost everything fails to use multiple threads efficiently. But DS will use all my cores, TG will use all my cores, WM will use all my cores... Granted, a lot of software isn't built to do this properly yet, and some algorithms will never parallelise, so some software will never even be rewritten to use multiple threads. But a lot is, and the options are growing, not shrinking.
Yeah, but I think most people are interested in graphics-intensive stuff, not CPU-intensive stuff that calculates rather than makes images. And a lot of CPU-intensive work seems to be moving to the faster GPUs, like video encoding, etc. I suppose engineering stuff that just makes calculations might use the CPU, but I think the average user is more about visuals and graphics.
CPU rendering is still a huge thing. In TG you can still only use the CPU, and this is not likely ever to change, due to their architecture, as far as I understand. And I very much like TG. Where is this "CPUs are not for making images" idea coming from?
A lot of places. Just look at D|S Iray rendering, where CPU renders are pretty much useless compared to a decent graphics card. And even VWD, the cloth sim, is going to the GPU to speed things up. And some video encoding software is advertising a move to the GPU to speed things up (although in practice many aren't quite there yet).
BTW, what's TG?
Terragen.
GPU rendering is getting bigger every day but CPU rendering is not obsolete - definitely not to the point where "most people who want to render only want a GPU" is a thing. Even large studios still use CPU rendering because, eventually, it scales better. And there's nothing like the VRAM limitations which hit GPU rendering. Want a 25 GB scene - sure, have fun.
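To put rough numbers on those VRAM limits (my figures, purely for illustration):

```python
# Uncompressed texture footprint: width x height x channels x bytes/channel.
def texture_mb(width: int, height: int, channels: int = 4, bpc: int = 1) -> float:
    """Size of one uncompressed texture in MiB (RGBA8 by default)."""
    return width * height * channels * bpc / 2**20

per_4k = texture_mb(4096, 4096)   # ~64 MB per 4K RGBA8 map
print(f"One 4K texture: ~{per_4k:.0f} MB; "
      f"300 of them: ~{300 * per_4k / 1024:.1f} GB before any geometry")
# -> ~64 MB each, ~18.8 GB for 300: far past a 12 GB card, fine for CPU RAM.
```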
@ ebergerly:
...some good points there.
When I built my system over 4 (almost 5) years ago, 12 GB of system memory and 1 GB of VRAM was considered a "shredding" rig. Of course we didn't have Iray yet, so memory, CPU cores, and CPU clock speed were more important for rendering. As for the GPU, well, if you weren't into gaming, it just made the displays look and respond better.
With the whole GPU rendering schtick since the introduction of Iray, suddenly a card's VRAM was more important than system memory or CPU cores, and this is where things began to get really costly. If, like myself, you create fairly "heavy" scenes on a regular basis, you need all the VRAM you can get your hands on. Sadly for us, with Nvidia that tops out at 12 GB with the Titan Xp ($1,200, or $1,500 for the integrated water-cooled version). You really have to dig deep in the wallet to exceed that (a Quadro P5000 with 16 GB of VRAM retails for around $2,500).
Now along comes AMD with their relatively affordable 16 GB HBM2 Vega card priced at under $1,000 (which unfortunately for Iray is useless). So the ball is now back in Nvidia's court, which still does not offer a prosumer card with the faster HBM2, while AMD has two (there is also an 8 GB Vega). As a matter of fact, Nvidia's top-of-the-line $5,000 Quadro P6000 still uses GDDR5X. The first Quadro available with HBM2 is the $6,500 to $8,000 (depending on vendor) Quadro GP100. For that you could build a pretty nice workstation with a 16-core Threadripper, dual 1080 Ti GPUs, 128 GB of quad-channel DDR4-3000, several SSDs, and probably have a nice bit of change left in the pocket to buy more Daz goodies.
However, what the GP100 does add is improved floating-point performance and Nvidia's new NVLink technology that connects between cards, replacing the traditional SLI link. Besides a fatter pipeline between the cards that translates to faster communication, NVLink reportedly also allows for a process called "memory pooling". According to the hype from Nvidia, pooling memory between two GP100s will for the first time allow users access to the total memory of both linked cards (in the case of two GP100s, 32 GB). This sort of sounds too good to be true, and I haven't been able to find much more detail on what this would mean for rendering. If it indeed is the "holy grail" we've been looking for, it will only be affordable for professional production studios with big budgets, considering the card's steep price. There are also MBs with NVLink slots that accelerate data transfer between GPUs and CPUs, but so far those are not available on the consumer market and probably won't be for a while.
As to CPU cores/threads, true, Daz doesn't utilise all available cores during the production phase. However, if by chance that mega scene, with a dozen posed and dressed G3 F/Ms, half of Stonemason's Urban Sprawl 3, volumetric haze, and several dozen emissive lights, exceeds the 1080 Ti's memory (a bit easier with Windows 10, as you actually have about 9 GB instead of 11 GB of available VRAM), then whatever CPU threads and memory you have will come into play (not sure what the maximum thread limit is for rendering in Daz/Iray).
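A quick check of that Windows 10 figure (the roughly one-fifth WDDM reservation is my assumption; it lands close to the 9 GB mentioned above):

```python
card_vram_gb = 11        # GTX 1080 Ti
wddm_reserve = 0.19      # assumed share of VRAM Windows 10's WDDM holds back
usable_gb = card_vram_gb * (1 - wddm_reserve)
print(f"~{usable_gb:.1f} GB usable of {card_vram_gb} GB")  # -> ~8.9 GB
```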
Anyway, it really does seem to be escalating into somewhat of a technological "arms race" which pretty much looks to leave most of us behind, depending on what bread crumbs are allowed to fall from the table.
For myself the watershed is the Win10 requirement for the new-generation CPUs; I'll choose to stay on the "lee side" for now, as over the last two years I've seen W10 as more bust than benefit, for a number of reasons besides its reserving VRAM.
...yeah, I'm still leaning towards just building a monster CPU-based workstation with dual 8- or 12-core Xeons, a boatload of memory, and a modest GPU for test renders, and letting the whole GPU arms race go where it wants. For one, better scaling is important to my work as well, since I am looking to render in large format for high/gallery-quality printing.
Carrara renders pretty darn fast when you have a lot of cores and memory available. One older thread I researched mentioned that people were getting day-plus render jobs down to 6 hours or less by going to a dual-Xeon multi-core setup. One post even included a screenshot that showed 36 cores at work rendering. Still trying to find the maximum single-system core limit for Carrara; I know that it will handle up to 100 cores, but not sure if that is only for networked rendering. Same for Daz.
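That kind of speedup is roughly what Amdahl's law predicts (a back-of-the-envelope sketch; the core counts and the 99% parallel fraction are my assumptions, only the day-plus-to-6-hours figure comes from the thread):

```python
# Ideal speedup over one core for a workload that is parallel_frac parallel.
def amdahl_speedup(cores: int, parallel_frac: float) -> float:
    return 1.0 / ((1.0 - parallel_frac) + parallel_frac / cores)

p = 0.99                       # assume the renderer is ~99% parallelisable
old_cores, new_cores = 8, 36   # assumed old box vs. the 36-core dual Xeon
gain = amdahl_speedup(new_cores, p) / amdahl_speedup(old_cores, p)
print(f"~{gain:.1f}x faster; a 24 h job drops to ~{24 / gain:.1f} h")
# -> ~3.6x, so a day-plus render lands around 6-7 hours, as reported.
```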
I am building an overclocked Ryzen 1700 box right now for my CPU rendering needs. 16 threads is going to blow my FX-8350 out of the water, and I'm going to be head over heels sending CPU renders over there while I do quick GPU stuff on my main box. I looked into building a dual Xeon at the same time, just because I could have squeezed an extra couple of threads out of it, but ultimately the bang for the buck wasn't there for me. Really looking forward to seeing how this Threadripper build turns out.