Is this upgrade worth it?

Okay, I could use some DAZ forum wisdom to help decide whether this upgrade is worth it.

1. Can I safely run two GTX 1070s (or a 1070 and a 1070 Ti), 8GB VRAM each, on my current 750 watt power supply?

2. Will adding a second card and doubling my CUDA cores halve my rendering time?

3. What the heck is going on with my total VRAM available for each card according to the DS Render Log?

Background to question 1:

My power supply is Corsair CS750M ( https://www.corsair.com/us/en/Categories/Products/Power-Supply-Units/cs-series-config/p/CP-9020078-NA )

My system contains: MSI Z97 Gaming 5 (MS-7917) motherboard, 32GB DDR3 RAM @ 2400 MHz, Intel Core i7-4790K, Corsair H80i CPU water cooler, two 3.5” 4TB HDDs, a Samsung EVO 860 500GB SSD, and a Liteon DVD/Blu-ray ROM drive. The graphics cards are an ASUS Nvidia GTX 1070 8GB and an ASUS GTX 970 (3.5+0.5GB).

The power supply calculators seem to say yes, but they also put me over 80% usage when running at full load.

Background to question 2:

I know that in this setup the most VRAM I can use for Iray is 4GB, due to the GTX 970. I realize that not all of the rendering time is related to the CUDA cores; they only come into play once it begins calculating iterations.

Background to question 3:

I put the GTX 970 into my system tonight to see if it would work and it has been running for a couple of hours now with no apparent issue. My system is quiet. I ran Daz Studio 4.11.0.236 and loaded a small scene with Merlin’s Wild Borders, Eva 8, Linda Ponytail, DX Platform Boots, Long T-shirt, CityScapes background, and Sunlight, just to do a little test. Here is the weird part (from my log file):

First render using both cards and the OptiX setting:

2019-01-19 19:15:00.868 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1070): compute capability 6.1, 8 GiB total, 6.65893 GiB available, display attached

2019-01-19 19:15:00.868 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 970): compute capability 5.2, 4 GiB total, 3.32088 GiB available

Second render using only the GTX 1070 and the OptiX setting:

2019-01-19 19:35:54.886 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1070): compute capability 6.1, 8 GiB total, 2.7296 GiB available, display attached

Why the heck is less VRAM available to render when I am running the 1070 alone?

Running both cards together decreased render time by 33%, from 470 seconds to 316 seconds.
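For anyone who wants to compare these numbers across several test renders, the device lines can be pulled out of the DS log with a short script. This is only a sketch; the regular expression assumes the exact Iray log format quoted above:

```python
import re

# Matches the Iray device lines quoted above, e.g.
# "CUDA device 0 (GeForce GTX 1070): ... 8 GiB total, 6.65893 GiB available"
PATTERN = re.compile(
    r"CUDA device (\d+) \(([^)]+)\):.*?"
    r"([\d.]+) GiB total, ([\d.]+) GiB available"
)

def parse_iray_devices(log_text):
    """Return (device_id, name, total_gib, available_gib) for each device line."""
    return [
        (int(dev), name, float(total), float(avail))
        for dev, name, total, avail in PATTERN.findall(log_text)
    ]

log = (
    "rend info : CUDA device 0 (GeForce GTX 1070): compute capability 6.1, "
    "8 GiB total, 6.65893 GiB available, display attached\n"
    "rend info : CUDA device 1 (GeForce GTX 970): compute capability 5.2, "
    "4 GiB total, 3.32088 GiB available\n"
)
for dev_id, name, total, avail in parse_iray_devices(log):
    print(f"device {dev_id} {name}: {avail:.2f} of {total:.0f} GiB available")
```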

Comments

  • JazzyBear Posts: 798

    Where is your monitor connected? Which card? That affects how much VRAM Windows reserves for displays.

    If your power supply is a couple of years old it tends to lose efficiency, although if it's plugged into a UPS for clean, steady power that's less of an issue.

    Grab a program that monitors VRAM live so you can see how much Windows is taking and how much DAZ Studio uses during renders. It can also monitor CPU cores and temperatures so you can see how things are affected.
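One low-cost way to do that is to poll `nvidia-smi`, which ships with the Nvidia driver and can report per-card memory use as CSV. A minimal sketch (the query flags are standard `nvidia-smi` options; the sample string is made-up output for illustration, not a reading from this system):

```python
import csv
import io
import subprocess

# Standard nvidia-smi query for per-GPU memory use, values in MiB.
QUERY = [
    "nvidia-smi",
    "--query-gpu=name,memory.used,memory.total",
    "--format=csv,noheader,nounits",
]

def parse_smi_csv(text):
    """Parse nvidia-smi CSV output into (name, used_mib, total_mib) tuples."""
    return [
        (name.strip(), int(used), int(total))
        for name, used, total in csv.reader(io.StringIO(text))
    ]

def sample_vram():
    """Query the driver once; needs nvidia-smi on the PATH."""
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True).stdout
    return parse_smi_csv(out)

# Illustrative CSV in the shape nvidia-smi emits for a two-card system:
sample = "GeForce GTX 1070, 1375, 8192\nGeForce GTX 970, 700, 4096\n"
for name, used, total in parse_smi_csv(sample):
    print(f"{name}: {used} / {total} MiB in use")
```

Calling `sample_vram()` in a loop (or running `nvidia-smi -l 1` directly) before and during a render shows how much VRAM Windows and the render are actually taking.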

     

  • DustRider Posts: 2,691

    I would say yes, if you're replacing the 970 with either a 1070 or 1070 Ti. IIRC your 970 draws more power than the 1070 for sure, and I'm pretty sure the 1070 Ti will use less power as well (though it might be close to the same). If my memory is incorrect, I'm sure someone else will be along to correct me. :)

  • DustRider Posts: 2,691
    edited January 2019

    I'll correct my memory (just got back to the computer). The GTX 970 has a max power rating of 145W, the 1070 is 150W, and the 1070 Ti is 180W. So you'd have very little difference with the 1070 (an increase of 5 watts), and an increase of 35W with the 1070 Ti. Obviously the 1070 should be no problem; I'll let others give you their opinion on the 1070 Ti. Note that Nvidia recommends a 500W power supply for all three cards, though that assumes a single-card system, so the extra 35W with the 1070 Ti may not be a problem.
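Those TDP figures can be turned into a rough headroom check against the 750W supply. The non-GPU numbers below are assumptions (Intel's 88W TDP for the i7-4790K, plus guesses for the board, RAM, fans, and drives), so treat this as a sketch, not a measurement:

```python
# Back-of-envelope PSU headroom check using the TDP figures above.
# The non-GPU draws are rough assumptions, not measurements.
PSU_WATTS = 750

BASE_SYSTEM = {
    "i7-4790K CPU": 88,    # Intel's rated TDP
    "board/RAM/fans": 60,  # assumption
    "drives": 25,          # assumption
}
GPU_TDP = {"GTX 970": 145, "GTX 1070": 150, "GTX 1070 Ti": 180}

def headroom(*gpus):
    """Return (total watts, percent of PSU rating) for a GPU combination."""
    load = sum(BASE_SYSTEM.values()) + sum(GPU_TDP[g] for g in gpus)
    return load, 100 * load / PSU_WATTS

for combo in [
    ("GTX 1070", "GTX 970"),
    ("GTX 1070", "GTX 1070"),
    ("GTX 1070", "GTX 1070 Ti"),
]:
    load, pct = headroom(*combo)
    print(f"{' + '.join(combo)}: ~{load} W ({pct:.0f}% of the 750 W PSU)")
```

Under these assumptions every two-card combination stays well under the 80% mark the calculators were warning about.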

    Post edited by DustRider on
  • Taoz Posts: 9,735
    DustRider said:

    I would say yes, if you're replacing the 970 with either a 1070 or 1070 Ti. IIRC your 970 draws more power than the 1070 for sure, and I'm pretty sure the 1070 Ti will use less power as well (though it might be close to the same). If my memory is incorrect, I'm sure someone else will be along to correct me. :)

    My 1070 draws an average of about 105 W when rendering, according to GPU-Z.   

  • nicstt Posts: 11,714
    Taoz said:
    DustRider said:

    I would say yes, if you're replacing the 970 with either a 1070 or 1070 Ti. IIRC your 970 draws more power than the 1070 for sure, and I'm pretty sure the 1070 Ti will use less power as well (though it might be close to the same). If my memory is incorrect, I'm sure someone else will be along to correct me. :)

    My 1070 draws an average of about 105 W when rendering, according to GPU-Z.   

    The average is only one consideration; the spikes are more important.

  • First, I'd definitely pull the 970. Adding a 1070 would make it irrelevant, and it is definitely drawing more than its rated TDP. I also wouldn't buy the 1070 Ti, or a factory-overclocked 1070. Either of those would draw more than 150W and could push you close to your PSU's max draw.

    Finally, putting in a new PSU isn't that hard. Corsair sells this one for $140:

    https://www.newegg.com/Product/Product.aspx?Item=N82E16817139140&ignorebbr=1

  • hjake Posts: 812
    edited January 2019

    Thank you to everyone who replied.

    To answer all questions above and ask more questions:

    1. Currently my monitor is connected to the 1070 since it was the main card and I occasionally play games like Fallout and Elder Scrolls on it. The motherboard has Intel HD Graphics 4600 on it, and I have that hooked up to an HDMI switch which is connected to my Samsung 27" 4K and my AOC 27" QuadHD. I was thinking about using it as the display card when using DS. If I do that and change the settings in Windows 10 to display only on the Intel video, do I need to physically disconnect the 1070 from the display, or will setting it to not display on the 1070 be enough to free up memory in DS?

    2. My system has always been on an APC UPS. Speaking of that, I ran the PowerChute monitoring software while I rendered a simple Iray scene of a G3 model with an outfit, long hair, and the skydome on, with both cards running. It registered a peak "Load on Battery Backup" of 384 watts and typically ran at 369 watts. When not running 3D or games my system draws 115 to 187 watts with MS Office and a web browser going. Based on this additional info, it seems the power supply calculators are greatly overestimating my system's power draw. I would be interested in comments about power calculations versus what PowerChute says.

    3. My intention is to get a regular 1070 with 8GB VRAM, if one is still available. But my question about the performance increase is still not answered. I understand that by matching VRAM with my current 1070 I can handle larger scenes, but my bigger concern right now is reducing render time. Would I be better off waiting to buy a 2070 at its high price, or could I get the same benefit from a second 1070? Please assume that I have done all the other standard optimizations, such as reducing textures, rendering in layers, and removing unnecessary objects from the scene.

    4. Can anyone recommend VRAM monitoring software that will accurately show how much VRAM is available in real time?

    5. I would also appreciate an answer to question 3 above, about why less VRAM was available on the 1070 when I rendered with only the 1070.
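One note on those PowerChute numbers: the UPS measures draw at the wall, while the 750W rating is the PSU's DC output. Assuming roughly 88% conversion efficiency at this load (the CS750M is 80 Plus Gold certified; the exact figure here is an assumption), the measured peak implies a fairly light PSU load:

```python
# What the PowerChute wall-power readings imply about PSU loading.
# ASSUMPTION: ~88% conversion efficiency at this load (80 Plus Gold class).
PSU_RATED_DC = 750  # watts the PSU can deliver on its DC rails
EFFICIENCY = 0.88   # assumed wall-to-DC conversion efficiency
wall_peak = 384     # watts, peak measured at the UPS

dc_load = wall_peak * EFFICIENCY
print(f"~{dc_load:.0f} W DC load, about "
      f"{100 * dc_load / PSU_RATED_DC:.0f}% of the PSU's rating")
```

That gap between measured load and calculator estimates is expected: the calculators add up worst-case ratings, while real renders rarely hit every component's maximum at once.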

    Post edited by hjake on
  • Taoz Posts: 9,735
    nicstt said:
    Taoz said:
    DustRider said:

    I would say yes, if you're replacing the 970 with either a 1070 or 1070 Ti. IIRC your 970 draws more power than the 1070 for sure, and I'm pretty sure the 1070 Ti will use less power as well (though it might be close to the same). If my memory is incorrect, I'm sure someone else will be along to correct me. :)

    My 1070 draws an average of about 105 W when rendering, according to GPU-Z.   

    The average is only one consideration; the spikes are more important.

    I haven't seen any spikes over 125 W yet, with constant GPU load at 99-100%. That's with 3.7 GB VRAM in use; I can't max it out with only 8 GB of system RAM. And that's with a factory-overclocked card, btw.

  • hjake said:

    Thank you to everyone who replied.

    To answer all questions above and ask more questions:

    1. Currently my monitor is connected to the 1070 since it was the main card and I occasionally play games like Fallout and Elder Scrolls on it. The motherboard has Intel HD Graphics 4600 on it, and I have that hooked up to an HDMI switch which is connected to my Samsung 27" 4K and my AOC 27" QuadHD. I was thinking about using it as the display card when using DS. If I do that and change the settings in Windows 10 to display only on the Intel video, do I need to physically disconnect the 1070 from the display, or will setting it to not display on the 1070 be enough to free up memory in DS?

    2. My system has always been on an APC UPS. Speaking of that, I ran the PowerChute monitoring software while I rendered a simple Iray scene of a G3 model with an outfit, long hair, and the skydome on, with both cards running. It registered a peak "Load on Battery Backup" of 384 watts and typically ran at 369 watts. When not running 3D or games my system draws 115 to 187 watts with MS Office and a web browser going. Based on this additional info, it seems the power supply calculators are greatly overestimating my system's power draw. I would be interested in comments about power calculations versus what PowerChute says.

    3. My intention is to get a regular 1070 with 8GB VRAM, if one is still available. But my question about the performance increase is still not answered. I understand that by matching VRAM with my current 1070 I can handle larger scenes, but my bigger concern right now is reducing render time. Would I be better off waiting to buy a 2070 at its high price, or could I get the same benefit from a second 1070? Please assume that I have done all the other standard optimizations, such as reducing textures, rendering in layers, and removing unnecessary objects from the scene.

    4. Can anyone recommend VRAM monitoring software that will accurately show how much VRAM is available in real time?

    5. I would also appreciate an answer to question 3 above, about why less VRAM was available on the 1070 when I rendered with only the 1070.

    1) It is my understanding that Windows will reserve a certain amount of VRAM on every graphics card except "pro" cards like Quadros. So it won't matter whether the card is hooked up as the video output or not, except that the one being used for video will obviously carry that extra load as well.

    2) I'd trust the monitoring software over any load calculator. The load calculators make assumptions about full load under synthetic benchmarks and the like, which don't necessarily reflect the real world. They give you a nice cushion for designing a system, but if you have actual data to draw on, that's better.

    3) The 2070 is definitely faster than the 1070 at rendering. Is it worth the extra cost? That's up to you. Can you still find a 1070? Newegg has some cards at around $350, but who knows how long that will last; there are some refurbished ones for less than that too.

    4) To the best of my knowledge there is no software that will let you monitor VRAM usage in a meaningful way.

    5) No idea; not enough data.

  • hjake Posts: 812

    Thanks kenshaw011267.

  • Bendinggrass Posts: 1,367

    The 2xxx cards are supposed to be a clear advance in rendering capability, or so I read and so I am told. I don't have personal knowledge of it myself.

    However, two 1070 Ti cards would give you almost 5,000 CUDA cores, and according to the old wisdom that would have a great effect on speeding up renders.

  • CUDA cores are not equivalent across generations of cards. What the scaling factor is from the 10xx cards to the 20xx cards is not entirely clear to me, but if you look at the benchmarks thread here it seems clear it is significant.

  • hjake Posts: 812
    edited January 2019

    Bendinggrass, yes, two GTX 1070 cards give me more CUDA cores, but will they reduce my rendering time by approximately 50%? If I only get a 20-30% rendering time reduction then it won't be worth it, because I am not rendering animation at this time, where even a 10% reduction per frame can add up to some serious time savings.
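One way to put a number on that: model a render as a fixed serial part (scene load and upload) plus an iteration part that scales with combined GPU throughput, and back the split out of the 470 s and 316 s times from the first post. The 970-to-1070 throughput ratio below is an assumption, not a benchmark:

```python
# Rough Amdahl-style estimate of what a second card buys.
# Inputs: the two measured render times from the first post, plus an
# ASSUMED relative Iray throughput of a GTX 970 vs a GTX 1070.
T_SINGLE = 470.0  # seconds, GTX 1070 alone (measured)
T_DUAL = 316.0    # seconds, GTX 1070 + GTX 970 (measured)
R_970 = 0.6       # assumption: a 970 renders ~0.6x as fast as a 1070

# Model: T = serial + parallel          (one 1070)
#        T = serial + parallel/(1 + r)  (1070 + 970)
parallel = (T_SINGLE - T_DUAL) / (1 - 1 / (1 + R_970))
serial = T_SINGLE - parallel

# Predicted time with two matched 1070s: parallel work splits evenly.
t_two_1070 = serial + parallel / 2
print(f"serial ~{serial:.0f} s, parallel ~{parallel:.0f} s")
print(f"two 1070s: ~{t_two_1070:.0f} s "
      f"({100 * (1 - t_two_1070 / T_SINGLE):.0f}% faster than one)")
```

Under these assumptions a matched second 1070 lands closer to a ~44% reduction than the ideal 50%, because the serial part of the render doesn't shrink.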

    I just did my googling on the RTX 2070. The new RT cores and Tensor cores will not give any benefit in Iray at this time; Iray, and then Daz 3D, will have to update the render engine to take advantage of them. For Nvidia, reducing render time is a lower priority than implementing the new hybrid raytrace-rasterization that Microsoft's new API will introduce; they want to bring some aspects of raytracing to games. Certain AI/learning apps will also be able to take advantage of the new cores. CUDA cores will keep doing what they are doing, but they are now a smaller part of the new direction Nvidia is taking for consumer cards, because the hybrid approach benefits more when raytrace bounding-box calculations are handed off to the RT/Tensor cores and the CUDA cores are left to handle straight shading. So CUDA core counts won't be increasing much as the work gets balanced between the three core types. I think the 3000-series cards will focus on shrinking the die, lowering heat/power usage, and optimizing how the cards handle raytracing, based on Nvidia's experience with games on the 2000 series. The 1000 series (e.g. the 1070) brought CUDA to its potential, and now they are moving on to pseudo real-time raytracing.

     

    Post edited by hjake on
  • Don't count on that. CUDA is huge in the compute world, and Nvidia makes its real money selling Quadro cards, where CUDA is all that matters.

  • hjake Posts: 812
    edited January 2019

    Don't count on that. CUDA is huge in the compute world, and Nvidia makes its real money selling Quadro cards, where CUDA is all that matters.

    I guess we will see. I would have thought consumer cards brought in more revenue, not just from gaming but also from cryptocurrency mining.

    The articles I read seemed to indicate that they didn't increase CUDA cores because they weren't getting more performance from more cores, and that CUDA cores don't lend themselves well to real-time raytracing, but I could have misunderstood.

    In the end I have decided to forgo a second 1070 and buy a 2070 sometime in the "DAZ soon" future.

    Post edited by hjake on
  • They did increase CUDA counts, though.

    GTX 1070: 1,920 CUDA cores

    RTX 2070: 2,304

    GTX 1080: 2,560

    RTX 2080: 2,944

    GTX 1080 Ti: 3,584

    RTX 2080 Ti: 4,352
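Comparing those counts tier by tier, the raw core-count uplift from Pascal to Turing works out to roughly 15-21%. Note that this simple ratio ignores clock-speed and per-core efficiency differences between the generations:

```python
# Core-count uplift per tier, from the figures above. Clock and
# per-core architectural differences are deliberately not included.
CORES = {
    ("GTX 1070", "RTX 2070"): (1920, 2304),
    ("GTX 1080", "RTX 2080"): (2560, 2944),
    ("GTX 1080 Ti", "RTX 2080 Ti"): (3584, 4352),
}
for (old, new), (c_old, c_new) in CORES.items():
    print(f"{old} -> {new}: {100 * (c_new / c_old - 1):.0f}% more CUDA cores")
```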

    Nvidia definitely makes more money from Quadro than from gaming cards. They release every generation of architecture to the professional market 6 to 8 months before they release it to the gaming market, if it ever even comes to consumers; Volta didn't. They barely mention the gaming market in their revenue forecasts. When miners were buying up every gaming card out there, was it having any effect on the availability of Quadros? No. Nvidia didn't divert any of their Quadro production to the gaming market. They simply didn't care.

    Real-time raytracing is important to the professional rendering community, i.e. Hollywood, which buys large quantities of Quadros to render movies on. These new cards also have Tensor cores, which enable AI denoising that drastically speeds up renders, if the demonstrations are to be believed. Presumably that also means Tensor cores have other applications in AI and deep learning, which are super hot right now; we've had several large brokerages inquiring after the availability of Turing Quadro and Tesla cards. Someone out there is shopping some sort of AI/predictive app for the stock market that has the big guys interested enough to at least think about dropping big money on hardware.

  • hjake Posts: 812

    kenshaw011267, thank you for taking the time to offer useful feedback. Cheers :-)
