GTX 1080 Ti 11GB + GTX 970 4GB (Good idea to use both?)
in The Commons
SLI MODE IS DISABLED
Primary GPU: GTX 1080 Ti 11GB (CUDA cores: 3584)
Secondary GPU: GTX 970 4GB (CUDA cores: 1664)
Combined CUDA cores: 5248
My question is whether my GTX 970 4GB card will cap the VRAM of the 1080 Ti down from 11GB to 4GB. If anyone has experience with this, I'd appreciate your advice on whether I'm better off just using the 1080 Ti without the 970.
Thanks,

Comments
Iray can drop the 4GB card if it can't load the scene, without dropping the 11GB card. I have seen reports that the speed of the faster card may be throttled, though that isn't what is meant to happen as far as I know, and others have had different experiences, so you may want to test it.
...hmm, on the 1080Ti vs 1080 thread, someone mentioned that you would be limited to the memory on the smaller card.
So if that were true, and I used my old 1 GB card just to run the displays and a 1080 Ti for rendering, I'd only have 1 GB of VRAM available.
The initial statement is incorrect, and has been set right on numerous occasions. The cards are used, or not, independently.
...thank you for the clarification, saves me having to go to a tech forum.
I have a 970 and a 1070, but after several crashed or frozen scenes I now switch off the 970 in the Iray advanced panel and just use it for driving the display. I'm certainly not limited to 4GB, though, as most of my scenes exceed that.
The only time I drop back to CPU is when I exceed the 8GB of the 1070. However, that can happen after several trial renders even if the VRAM requirement is within the 8GB. It seems that Iray or DAZ Studio fails to release the VRAM even if I make sure the old trial render windows are closed. This problem is not consistent, though, and I get confusing VRAM figures from GPU-Z when I would expect the same figure for each render of the same scene.
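One way to check whether VRAM is actually being released between trial renders is NVIDIA's `nvidia-smi` command-line tool (a sketch, assuming `nvidia-smi` is on the PATH; the `parse_gpu_memory` helper is just for illustration, not part of any Daz tooling):

```python
import subprocess

# nvidia-smi's CSV query mode: one line per GPU.
QUERY = ["nvidia-smi",
         "--query-gpu=index,name,memory.used,memory.total",
         "--format=csv,noheader,nounits"]

def parse_gpu_memory(csv_text):
    """Parse nvidia-smi CSV output into (index, name, used_MiB, total_MiB) tuples."""
    rows = []
    for line in csv_text.strip().splitlines():
        idx, name, used, total = [field.strip() for field in line.split(",")]
        rows.append((int(idx), name, int(used), int(total)))
    return rows

# Run between trial renders to see whether memory.used drops back down:
# print(parse_gpu_memory(subprocess.check_output(QUERY, text=True)))
```

If `memory.used` stays high after closing the render windows, the VRAM really isn't being freed, which would match the behaviour described above.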
...there is a script that allows you to purge memory in Daz. Not sure if that works for GPUs.
Isn't it correct if both cards are enabled?
If both cards are enabled, and the scene can't be loaded then the 4 GB card is automatically disabled?
That was me, judging from RH's reply I may be parroting bad info.
Yes, if you have a 4GB card and an 11GB card then a scene that fits into 4GB will use both, a scene that fits into 11GB but not 4GB will use only the 11GB card, and a scene that needs more than 11GB won't use either (where the requirements have to take account of other demands on the VRAM, of course). Once a card has been dropped due to running out of memory it won't be used until (at least) DS is restarted, regardless of scene size; some people find they need to restart the system to regain the use of a GPU that was dropped for a memory overflow.
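The per-card rule described above can be sketched as a few lines of Python (a hypothetical model for illustration only, not actual Iray code):

```python
def cards_used(scene_vram_gb, card_vram_gb):
    """Model of the behaviour described above: each card is considered
    independently, and is used only if the whole scene fits in its VRAM.
    If no card qualifies, the render falls back to CPU."""
    return [vram for vram in card_vram_gb if scene_vram_gb <= vram]

# With a 4 GB and an 11 GB card:
print(cards_used(3, [4, 11]))   # small scene: both cards render
print(cards_used(8, [4, 11]))   # mid-size scene: only the 11 GB card
print(cards_used(12, [4, 11]))  # oversized scene: neither (CPU fallback)
```

Note this models only the initial selection; as mentioned above, a card dropped for a memory overflow stays dropped until DS (or the system) is restarted.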
I use a 970 and a 980ti; I don't use the 970 for rendering - it drives 3 x 2560x1440 monitors.
On occasion, if I'm away from the computer, I do add it to the render; it's a useful boost, but not a big one, and alongside a 1080 Ti it would matter even less.
The extra electricity, noise, and heat usually stop me from adding it.
My advice is to try it out and see how it works for you; many of us in these forums would be curious about the results. Get GPU-Z, and you can check what each card is doing.
On the two "unbalanced" cards: as Richard notes, there has been the suggestion that renders run at the speed of the slowest GPU. This is unrelated to memory; it's about architecture. I don't know whether this is still true of the latest Pascal-enabled drivers; it may be old pre-Pascal news. It would be interesting if some folks with this type of arrangement posted some new benchmarks.
I'd personally put the 970 to work as the display monitor, then experiment with turning it off or on to see how well your renders progress. Or you could always give it to me. I don't mind hand-me-downs.
Thanks Richard! I have been using a 980 TI and recently installed a 1070. This might explain some of the behaviour that I have been seeing. When in doubt restart!
Working with a GTX 980 and a few Titan Xs (with a similar VRAM differential), I found the 980 would tend to drop out on most sets that involved 3 or more actors, due to the 4GB limitation. In many cases I had to hide actors and use spot-rendering techniques to stay within that limit. That said, it was absolutely still worthwhile including those extra CUDA cores when possible. "Good idea to use both?" Situationally, yes.
I'm glad someone else brought this up, as this memory leak after numerous renders is a constant issue for me as well. Restarting Daz isn't always a workaround for me either, as the GoZ/ZBrush bridge tends to break and stops importing morphs after several restarts; it requires a full reboot.