Comments
..OK, did a bit of research on DDR3 vs. DDR4, and for the most part performance isn't all that different; in some cases DDR3L memory can even be slightly faster than DDR4. The main difference is power. DDR4 runs at a flat 1.2 V, while DDR3 needs 1.5 V and DDR3L can operate at 1.35 V. Granted, in a server room or render farm this adds up, but for most individual systems the effect on the power bill would be almost negligible.
The other differences are maximum speed and latency. DDR4 tends to be superior on both, with clock speeds of up to 4,266 MHz available compared to 2,133 MHz for DDR3/3L.
The other thing I confirmed is that fewer large memory sticks, rather than more smaller ones, keeps the speed advantage and doesn't put as much strain on the memory bus. So for my design 4 x 32 GB would be preferable to 8 x 16 GB. The one rub is that the largest DDR3L sticks available in a 128 GB kit are 16 GB, whereas with DDR4 I can get 4 x 32 GB. Of course, the latter means purchasing the more expensive Haswell E5-2630 v3 or E5-2640 v3 CPUs instead of the Sandy Bridge E5-2690s.
So an interesting choice here. Thank you both, AllenArt and Silver Dolphin, for getting me to do a bit more research. As I have noticed, the performance difference between Haswell/DDR4 and Sandy Bridge/DDR3L is not as significant as I thought it would be; cost-wise, however, it is a different story. Need to plug some different configurations together and see what I can come up with. If I can build a system that is reasonably fast at CPU rendering, then an expensive big-memory GPU can wait, as I can always add it later. Who knows, by that time we may actually see HBM2 memory in consumer cards.
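For anyone plugging the same numbers together, here is a minimal sketch of that back-of-envelope comparison. The per-DIMM wattage figures are illustrative assumptions only (actual draw varies with die density, rank count, and load), not vendor specs.

```python
# Rough comparison of the two memory configurations discussed above.
# Per-DIMM power numbers are assumptions for illustration, not specs.

configs = {
    "DDR3L 8 x 16 GB @ 1.35 V": {"dimms": 8, "gb_per_dimm": 16, "watts_per_dimm": 4.0},
    "DDR4  4 x 32 GB @ 1.20 V": {"dimms": 4, "gb_per_dimm": 32, "watts_per_dimm": 5.0},
}

for name, c in configs.items():
    total_gb = c["dimms"] * c["gb_per_dimm"]
    total_w = c["dimms"] * c["watts_per_dimm"]
    print(f"{name}: {total_gb} GB total, ~{total_w:.1f} W for the DIMMs, "
          f"{c['dimms']} slots populated")
```

Either way the capacity is 128 GB; the interesting outputs are the slot count (fewer, larger sticks leave headroom and load the memory bus less) and the rough power delta, which is small for a single workstation.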
Great deal; a lot of workstations come off company use after about 3 years, and nobody is paying attention to resale value.
All Windows versions have, for quite some time. It's just that 10 does it the most and is the most noticeable...and this is above and beyond the 'eye candy' factor, which can consume a video card's memory too.
Indeed. This started with the first Windows kernel release after the first hot-plug drivers became available for Windows NT 4 and hot-swap monitors. We first noticed it on the stereoscopic displays we were using at the time. Back then 32 MB was a huge amount of VRAM, and we immediately saw the new framebuffer memory hit because we started getting tearing on the refreshes of the displays. Even though it was only VESA 640x480 @ 8-bit being allocated by default, it caused problems when you were trying to use TIFF heightfield images (computed directly by the cards) that were already taking almost all of the available buffer.
Kendall
Thanks, I couldn't quite remember when...but I knew it was quite some time ago.
I set my fan profiles to a pretty aggressive curve and it helped a lot. Much more than I expected it would.
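In case it helps anyone set up something similar, here is a minimal sketch of what an "aggressive" fan curve amounts to: a set of (temperature, fan %) points with linear interpolation between them. The numbers are example values, not a recommendation; the real curve is set in whatever vendor tool you use (Afterburner, etc.).

```python
# Example fan curve as (temperature in C, fan duty %) points.
# Values are illustrative only.
CURVE = [(30, 35), (50, 55), (65, 80), (75, 100)]

def fan_percent(temp_c: float) -> float:
    """Return the fan duty cycle for a given GPU temperature."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, f0), (t1, f1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:
            # Linear interpolation between the two surrounding points.
            return f0 + (f1 - f0) * (temp_c - t0) / (t1 - t0)

print(fan_percent(60))  # ~71.7% at 60 C with this curve
```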
Nope, although if I open Studio it starts to; there might be some variance in whether it does or not.
My default scene (a plane, an extra camera, and nothing else) uses 56 MB as soon as I open Studio. Adding other resources doesn't change it until I start to render.
..interesting. For me, opening Daz by itself increases the GPU memory load by 202 MB (GPU-Z 0.8.5). The base system at idle with no programmes open shows 63 MB used. On W7 Home Premium Ed, running a GTX 460 (1 GB) and dual displays.
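For anyone wanting to repeat that before/after comparison without GPU-Z, here is a minimal sketch that reads VRAM usage via nvidia-smi. It assumes an NVIDIA card with a driver recent enough to expose the query and nvidia-smi on the PATH.

```python
# Compare idle VRAM usage to usage after opening DAZ Studio.
# Assumes nvidia-smi is installed and on the PATH.
import subprocess

def vram_used_mb() -> int:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.splitlines()[0])  # first GPU only

baseline = vram_used_mb()
print(f"Idle VRAM: {baseline} MB")
input("Open DAZ Studio (and your scene), then press Enter...")
print(f"Delta after opening Studio: {vram_used_mb() - baseline} MB")
```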
The question that brought me here was: why does my GPU hang out at around 7% while Daz is rendering and I'm looking at the Windows 10 Task Manager GPU tab? I wanted to figure out why it was going so slow.
My Answer:
Windows is prioritizing the GPU to run the screen. As soon as the screen sleeps, GPU usage pops up to 97%+; thankfully, Task Manager has that timeline graph, so I can verify it after I move the mouse to wake the screen back up. So, lousy as it seems, try setting a long update interval, turning off the screen, and checking back after the interval is over. 5 minutes?
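If you'd rather not rely on the Task Manager graph after waking the screen, here is a minimal sketch that polls GPU utilization with nvidia-smi and writes it to a CSV you can read afterwards. The interval, sample count, and file name are arbitrary examples, and it assumes an NVIDIA card with nvidia-smi on the PATH.

```python
# Log GPU utilization every few seconds while a render runs, so the
# ramp-up after the display sleeps can be confirmed later from the CSV.
import csv
import subprocess
import time

INTERVAL_S = 5
SAMPLES = 60  # roughly 5 minutes of logging

with open("gpu_load_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "gpu_util_percent"])
    for _ in range(SAMPLES):
        util = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            text=True,
        ).strip()
        writer.writerow([time.strftime("%H:%M:%S"), util])
        f.flush()  # keep the file readable while logging continues
        time.sleep(INTERVAL_S)
```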
Or, perhaps you should select the correct compute engine from the GPU dropdown. Try "compute_0" rather than "3D".