Second GPU
I've been on the fence about this for a long time, and I need someone to push me to one side or the other.
I have a single GTX 1070, and I'm trying to decide whether to get a GTX 1080 to run in parallel. The reason is mainly to improve my Iray 3D View response, and secondarily to improve render times, which I expect to be about a 40% improvement.
Now I'm not sure how much improvement in 3D View response I'd get. Right now, with the proper settings recommended by Havos, the response has improved drastically and is almost where I had hoped to get it with a second GPU.
But still I like the thought of a second GPU. Though the cost of $510 is kind of painful. BTW, looks like a 1070 is almost the cost of a 1080 now ($480?), so it's pretty much out of the question IMO.
Anyone have any thoughts? I'd love to hear from those with dual GPUs for some advice on what improvement I can expect overall.
Thanks.

Comments
Honestly, with the inflated GTX 1070 prices right now... ebay auctions for used 1070 cards are ending over the $400 mark.
My suggestion is (if you can afford to do it) go and buy a GTX 1080 ti, then immediately put your 1070 up on ebay and get paid back for over half the cost of the 1080 ti.
Thanks. Yeah, I wasn't thinking of getting a 1070 for the reason you mentioned...prices aren't much better than the price of a 1080, so I figured I wouldn't even consider a 1070 and go for a 1080.
The 1080ti, on the other hand, is WAY out of reasonable price range IMO. Something like $720? Yeah, it's better performance than a 1080, but over $200 better?
I was thinking that adding a 1080 to the 1070 would be as good as just a single 1080ti, and cost less out of pocket.
But the big question is whether any of this is worth it...
The 1080 is considerably more powerful than the 1070. You could always buy the 1080, then sell the 1070 and buy a matching 1080 as a second card.
Although, in my opinion the 1080 ti is worth the extra $200 over the regular 1080. First off you will get a big viewport performance jump, then a big render speed improvement, and finally you will have an extra 3GB of VRAM for Iray to use. (The VRAM is a big one when you work with more complex scenes, because the whole thing needs to fit or that device will not be included in the render job.)
Now there's an idea....sell the 1070 and buy a 1080 for about the same price.
I'll have to check on ebay and see what used 1070's are going for.
6 min left 13 bids $430 (ended at $435 at a non-optimal ending time)
51 min left 37 bids $405
2 hours left 8 bids $432
These are the three closest to ending on ebay as of when I posted this
Cool. Thanks.
So if I'm lucky I can replace the 1070 with a 1080 for maybe $70 when it's all said and done, then I can buy either a 1080 or 1080ti.
Awesome.
The second card won't improve your normal viewport response. That's handled via the openGL driver, not to mention your second GPU won't be connected to a monitor.
Since I have three monitors connected to my 1070, will a second GPU help anything?
Thanks JamesJAB, that's about the perfect solution since the prices of 1070's and 1080's are so close right now.
Now I just have to remember how to sell on eBay
The last time I sold something was like...forever ago.
Sell your 1070 for a decent amount then the difference isn't that great.
The 1080 Ti is definitely worth $200 over the 1080: 3584 vs 2560 CUDA cores. About 40% more cores for $200 more... yeah it's worth it. CUDA cores are what you want for performance in Iray. Also no small detail is jumping from 8 GB of VRAM to 11 GB.
http://www.techadvisor.co.uk/feature/pc-components/nvidia-geforce-gtx-1080-ti-vs-1080-vs-1070-vs-1060-vs-1050-3640925/
Easy to say, but if you look at the raw numbers, the cost per performance is a lot higher with the 1080ti. You're paying a lot for the improvement you get. As I recall the 1080ti gives about a 30-40% improvement in render times for the Studio benchmark over the 1070 (2 minutes vs. 3 minutes), and the cost difference is about $250. So if dropping from 3 minutes to 2 minutes, or 30 minutes to 20 minutes, is worth $250 to you, then yeah, it's worth it.
For some of us, we're off doing other stuff during rendering, so the difference might not be a big deal.
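To make the cost-per-performance point above concrete, here's a quick sketch of the arithmetic. The benchmark times (3 minutes on a 1070, 2 minutes on a 1080 Ti) and the roughly $250 price gap are the figures quoted in this thread, not official numbers:

```python
# Rough cost-per-performance arithmetic using the numbers quoted above.
t_1070 = 3 * 60      # seconds, GTX 1070 benchmark time (thread figure)
t_1080ti = 2 * 60    # seconds, GTX 1080 Ti benchmark time (thread figure)
price_gap = 250      # USD, approximate price difference

# Percent reduction in render time, and what each percent costs.
speedup_pct = (t_1070 - t_1080ti) / t_1070 * 100
dollars_per_pct = price_gap / speedup_pct
print(f"{speedup_pct:.1f}% faster")             # 33.3% faster
print(f"${dollars_per_pct:.2f} per % of speedup")  # $7.50 per percent
```

Whether $7.50 per percent of speedup is "worth it" is exactly the judgment call being debated here.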
3 more GB of VRAM is huge.
Even if you're not using it right away, that extra mem lets you keep a render window open, so that a follow-up render after tweaking only takes a few seconds, saving a lot of time.
I have to imagine that your preview speed is going to improve as well. Jumping from Titan X Maxwells to Titan X Pascals there was a noticeable improvement in the preview window performance.
Also keep in mind that many of the performance numbers now are measured with the scene pre-loaded, to factor out things like CPU speed... but that time to get the scene loaded into the video card really impacts your workflow. The memory bandwidth of the 1080 Ti blows away the 1070. In short, yeah it's worth it...
"huge"....."noticeable improvement"..."blows away"....
Those are probably the most commonly used, yet least defined phrases in internet computer enthusiast forums
But I'm more of a numbers guy. "Blows away" is pretty meaningless to me. I've seen people refer to a 10% improvement as "blows away". To me, that's closer to "tolerance of error".
So yeah, for you it may be a no-brainer to spend the money. For me and others it's only worth it if the numbers back it up.
The link of mine a few posts above explains the differences in the technical specs of the 10 series cards...
8 GB of VRAM to 11 GB.
(From link) 1070: 256 GB/s memory bandwidth, 1080 Ti: 484 GB/s. Nearly double. That blows away.
No good way to quantify the preview responsiveness, I didn't record video of it. You'll just have to trust us.
I don't know why you'd doubt it, for Iray DAZ & Nvidia state the number of CUDA cores is most important. The 1080 Ti has nearly double the 1070.
2 minutes vs 3 minutes...
Looking at the benchmark thread, it's more like 115 to 197 seconds. That's over 40% improvement in render times, and it's not taking into account the size of the scene in that benchmark (fairly small). With a scene that takes much longer to hit 95% convergence, I believe the difference would be even wider since the non-GPU aspects of rendering (loading geometry, textures, etc) would become a much smaller fraction of the total time.
Still, you're right. It's not a big deal if you're off doing other stuff. But it has minor benefits -- room temperature doesn't jump 5 degrees, it's actually done rendering after a good night's rest instead of at 73%, less wear/tear on gpu, etc. And major benefits - 3 GB extra memory.
So while the 1080 ti isn't necessary by a long shot, the price difference is entirely justified.
I have a second GPU so I can use my computer during rendering. GTX 970 and GTX 1080. I only use the GTX 1080 for rendering and then my computer works for other tasks while rendering. If I have both rendering, then my computer slows to a crawl
Okay I'm embarrassed, but I still haven't decided whether to buy a second GPU. I have the money for a 1080ti, but I just can't force myself to push the button and buy a $750 GPU. Especially since I don't really need it. And I don't want to sell my GTX 1070 because I keep thinking that if I have that plus a 1080ti, then the 1070 will drive my 3 monitors and take part in rendering, and the 1080ti will do all the rendering while not losing the VRAM that Windows takes for the monitors. Though I'm not sure Windows won't grab 2 GB from both GPUs. Does anyone know for sure?
The other thing is the new NVIDIA Volta cards that are coming. Some say in 1Q 2018. And I worry if I plop down $750 for a 1080ti, the next day they'll release the Volta (2080ti?), and the price of the 1080ti will drop to $20 and I'll kick myself.
On the other hand I haven't bought myself a good present in a long time, so I deserve it. And I recently got a big chunk of unexpected money, so I should spend it. That's what money is for, right? And somebody posted results with a 1080ti plus a 1070 and said they did the benchmark in 1 minute, vs. 3 minutes with just a 1070. Compared to just one 1080ti which gives 2 minutes vs. 3 minutes with just a 1070.
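A quick sanity check on those dual-GPU numbers: Iray splits iterations across cards, so to a first approximation the sample rates add. Using the single-card times quoted above (3 minutes on a 1070, 2 minutes on a 1080 Ti), the ideal combined time works out close to the "1 minute" that was reported. Real scenes carry per-card overhead, so treat this as an upper bound on the benefit:

```python
# Ideal two-GPU scaling: sample rates add, so the combined time is
# the harmonic combination of the per-card times. Times below are
# the thread's figures, not official benchmarks.
t_1070 = 180.0     # seconds, GTX 1070 alone
t_1080ti = 120.0   # seconds, GTX 1080 Ti alone

combined = 1 / (1 / t_1070 + 1 / t_1080ti)
print(f"Ideal combined time: {combined:.0f} s")  # 72 s, ~1.2 minutes
```

That lines up with the posted "about 1 minute" result for the 1070 + 1080ti pair.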
But a Volta will render the same scene in less than a second or something like that. Well, maybe not that good, but still...
Can't decide....................................................
Well, I can't comment on a lot of that (still struggling along with a 750Ti) but I don't think you'd usually need to worry about a Ti model in the initial round of releases - there is generally a gap of several months or a year before those appear.
My finger is getting closer to the 1080ti BUY button................................
Don't do it! Next week there will be the announcement of the 1280Ti Daz Edition with internal memory for your whole content library. It won't allow you to render nude however but that's a small price to pay.
If you're talking about Windows 10 memory allocation grabbing memory - the bad news is that it grabs memory on all video cards that have ports for monitors. As I understand it, this has been going on for some time - it's just that prior to Windows 10 the buffers were for 1024 x 768 x 16-bit color. But a lot of new monitors no longer support this standard, so the space reserved matches the highest resolution the card supports for EACH port on the card. This is to prevent a blue-screen crash if you plug a new monitor into an unused port.
Do it... !!!
I went the same way you did ... or you wanted to ...
First I rendered on a 980ti. Then I added a 1070ti (wrong decision). I had to sell the 980ti and rendered alone on the 1070ti. The limitations (in comparison to a 1080ti) are really huge. Speed... limited RAM... really not worth it.
Do yourself a favor.... buy a 1080ti and be happy!
Darn. Thanks.
But what's strange, and I've never understood, is that according to GPU-Z that 2GB isn't taken until I open Studio and load a scene. I would have thought it would grab the 2GB as soon as Windows boots up.
Like right now, with Studio loaded but in Texture Shaded mode with no scene loaded the VRAM usage is 650 MB. When I load my heaviest scene (still in Texture Shaded mode), the VRAM usage goes up to 1 GB. Then when I go into Iray view mode it goes up to 6 GB as soon as the first grainy iray image comes up. And then when I Render, it goes up to 7.5 GB, with 99% GPU load (CPU load only around 12%).
So it seems like it's on the brink of dumping to CPU (since the 1070 has 8GB of VRAM), but it doesn't.
So I'm trying to figure where the Windows VRAM grabbing is occurring.
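Putting the GPU-Z observations above into a rough budget shows how close to the edge that render is. The 8 GB capacity is the 1070's spec; the 7.5 GB peak is the usage reported above, and the per-texture figure assumes an uncompressed 4K RGBA texture:

```python
# Rough VRAM headroom for the usage reported above.
total_vram_gb = 8.0    # GTX 1070 capacity
peak_render_gb = 7.5   # GPU-Z reading during the render (thread figure)

headroom_gb = total_vram_gb - peak_render_gb
print(f"Headroom: {headroom_gb:.1f} GB")  # 0.5 GB

# One uncompressed 4K RGBA texture is 4096 * 4096 * 4 bytes = 64 MB,
# so only a handful of new textures would push the render to CPU.
tex_4k_mb = 4096 * 4096 * 4 / 2**20
textures_until_cpu = headroom_gb * 1024 / tex_4k_mb
print(f"~{textures_until_cpu:.0f} more 4K textures until CPU fallback")  # ~8
```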
The VRAM thing really won't help much if I have a 1070 with a 1080ti since the 8GB 1070 will be the limit. And I don't really have those big heavy scenes. But yeah, I guess if I do get a big one I can just disable the 1070 and use the 1080ti.
But on the other hand I won't have to buy a bigger power supply. Even with both GPUs at their absolute maximum I won't get above maybe 400 watts total (250 plus 150), plus a bit for the rest of the computer. Maybe 500 watts absolute max on a 750 watt power supply.
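The PSU math above can be sketched quickly. The 250 W and 150 W figures are Nvidia's reference TDPs for the 1080 Ti and 1070; the allowance for the rest of the system is an assumption, not a measurement:

```python
# PSU headroom sketch for a 1080 Ti + 1070 combo.
gpu_watts = 250 + 150   # reference TDPs: 1080 Ti + 1070
system_watts = 100      # assumed CPU/board/drives allowance
psu_watts = 750

total = gpu_watts + system_watts
print(f"{total} W peak on a {psu_watts} W PSU "
      f"({total / psu_watts:.0%} load)")  # 500 W, 67% load
```

Staying around two-thirds load leaves comfortable margin, which matches the poster's conclusion that no PSU upgrade is needed.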
Well.... and what is the question? :-)
Windows takes some VRAM... depending on how many displays are connected to the GFX card and on your screen res. Even apps like Firefox, Chrome and so on "eat" some VRAM... depending on things like open tabs and stuff...
Then there is DAZ, which reserves some VRAM at startup... more when you load a scene... and the most at the beginning of an Iray render. After rendering for a few minutes DAZ frees up some VRAM again.
Normal behaviour :-)
And 7.5GB is nearly 8GB... a little texture here and there more... and you render on CPU.
I was running the 980ti and the 1080(which needs far less power) on a 600Watts "Be Quiet" PSU. No problems :-)
??? The 1070ti has not released yet. Which BTW, ebergerly, is yet one more option that hasn't been mentioned. The 1070ti is very nearly a full 1080 with just one SM disabled. It is releasing in just a week or so.
1070ti specs
2432 CUDA cores
1607 MHz base clock (This might be locked to prevent overclocking, but you really should not be overclocking for Iray anyway.)
8 GB GDDR5 (not the GDDR5X that the 1080 has.)
~$429
That adds a very intriguing option to the lineup. And yeah, Volta is coming. But there is ALWAYS something coming. It is a matter of how long you want to wait. I'd really like to know what Volta can do myself, especially what the VRAM specs will be.
As for the 11GB vs 8GB that the 1080ti has, I think that would be worth it. Yes, people use "huge" and the like to describe the difference. Well, this is something that is hard to define for Daz Iray because everybody uses it differently. So it is hard to define just how "huge" that 3GB is to you. I can tell you that it really, really sucks to run out of VRAM and drop to CPU mode. When that happens, I have to make changes to make a scene fit, reduce render size, or try to render it in parts and hope I can Photoshop them together later. When you look at the products that are coming out lately, many of them are pushing a lot of high resolution textures. Like clothes with tons of different surface areas and all of them are 4K textures. And now you have dForce, and when I look over the thread, it looks like dForce eats up your VRAM big time. Who knows what upcoming features Daz is looking at adding that could increase VRAM use even more? That is why you want as much as you can buy. Because nothing sucks more than knowing you could have bought more, but didn't, and then your scene will not render because of that decision.
Obviously, a 1080ti is not for everybody. We can't all afford one. But if you can afford one, and this is something you want to do, it is well worth getting, IMO.
Darn. $770 for the highly rated MSI GTX 1080ti on newegg.
So outrider, did you mean the 1070ti is almost a 1080, or almost a 1080ti? Cuz I don't think a 1080 is really that much faster than a 1070, is it? I don't recall any benchmarks with the 1080 on Sickleyield's scene.