Comments
Huh? The link is to Nvidia's Visual Computing Appliance. This thing is a rack server with 8 (COUNT THEM, EIGHT) current-generation 24 GB Quadro or Tesla cards for making movies, very high-end professional rendering, deep learning, and other intensive uses. It costs close to $100 grand. I was being facetious.
The NVLink connectors for the current 2080 RTX & Ti cards, the TITAN RTX, and the Quadro RTX 6000 & 8000 cards are all the same, but the internal electronics of the Quadro, Titan, and 2080 NVLink bridges themselves are very different (assumed, given the large difference in price between the 2080 & Quadro bridges).
I found this article about NVLink compatibility: https://www.pugetsystems.com/labs/articles/NVIDIA-NVLink-Bridge-Compatibility-Chart-1330/
To the OP, your best bet is to get the newest, most powerful Nvidia GPU you can afford. That will give you the most bang for the buck and will speed up your rendering. Be sure to try to keep your scene under 8GB.
Just remember this: you could have a million CUDA cores, but they are useless if you exceed VRAM and the scene gets dumped to CPU mode. So VRAM is a major factor if you really want to go with larger scenes. A 1080 is a pretty decent card. It was the fastest card available in 2016. There are only a few GPUs in the world that are actually faster than a 1080. Additionally, there are even fewer GPUs that go beyond 8GB, which is the real bottleneck you have.
Get your scenes under 8GB, and all is well. You can certainly get another GPU; you can run as many GPUs as you want. As long as the scene fits into each individual GPU's VRAM, they will combine their powers like Voltron and give you super powers. But any GPU without enough VRAM for the scene will not be used at all, leaving your Voltron missing a leg or arm, which is decidedly less cool.
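To make that rule concrete, here is a minimal Python sketch of the behaviour described above: a card joins the render only if the whole scene fits in its own VRAM, otherwise it sits the job out. The card names, VRAM sizes, and scene size are made-up illustration values, not anything Iray actually reports.

```python
# Minimal sketch of the "Voltron" rule: a GPU participates only if the
# entire scene fits in its own VRAM. All numbers below are hypothetical.

gpus = {"GTX 1080": 8.0, "RTX 2080 Ti": 11.0, "GTX 1050": 2.0}  # VRAM in GB
scene_size_gb = 7.5

def rendering_devices(gpus, scene_size_gb):
    """Return the cards that can hold the whole scene; the rest sit out."""
    return [name for name, vram in gpus.items() if vram >= scene_size_gb]

active = rendering_devices(gpus, scene_size_gb)
print("Rendering on:", active)            # ['GTX 1080', 'RTX 2080 Ti']
print("Falls back to CPU:", not active)   # only if no card fits the scene
```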
HUH? I was telling people not to buy the pro NVLink bridge because actual testing has shown that there doesn't appear to be any difference, at least internally.
https://www.pugetsystems.com/labs/articles/NVIDIA-NVLink-Bridge-Compatibility-Chart-1330/
Oh, okay, I completely misunderstood. DOH
While all of this applies 100% to gaming, Iray does not leverage these capabilities.
nVidia recommends disabling SLI for Iray, and in my own testing, enabling SLI on my two 980 Tis slows them down noticeably when rendering.
The closest proxy for what nVidia used to call "TurboCache" (reserving system RAM to the GPU) would be memory pooling (treating all the VRAM as a single entity). This only works for the latest 2000-series cards, and only for the models equipped with NVLink (and with the bridge actually installed). This is also not yet implemented in Daz Studio, nor in the Windows driver (it was suggested on this forum that it works with a Linux box running Iray Server).
So are you saying that if I buy an RTX 2080 with 8GB VRAM, that doesn't mean I now have 16GB if I still have my 8GB 1080 installed? So Daz will only recognise up to the VRAM of the biggest single GPU rather than combining them?
It's more like...
Iray will attempt to load the scene into each installed video card's onboard RAM. If a card has enough RAM to fit the entire scene, it will participate in the render. If a card does not have enough RAM it will be left out of the render job.
Oh, so if I have an 11GB card and an 8GB card, and need, say, 10GB, then the 11GB card will still render on GPU rather than the whole thing going to CPU, while scenes under 8GB would render on both? I'm asking because I have a 1070 with 8GB and was wondering what would happen if I added an 11GB card.
That is exactly right. Keep in mind that it is the textures that kill you, not vertices. You don't need a 4K texture for things far away from the camera, but most people don't bother because even an 8GB scene is pretty complex. If you are exceeding that, I'd say you need to make better use of instances, or smaller textures, and I think there are addons for both of those problems. And then, there's always the 2080 Ti with 11GB.
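A rough back-of-the-envelope calculation shows why textures dominate VRAM. The sketch below assumes uncompressed RGBA at 4 bytes per texel and ignores compression and mipmaps, and the map counts are invented for illustration, so real Iray usage will differ:

```python
# Rough texture memory estimate. Assumes uncompressed RGBA (4 bytes per
# texel) and ignores mipmaps/compression; map counts are illustrative only.

def texture_gb(resolution, maps, bytes_per_texel=4):
    """Approximate VRAM for `maps` square textures of `resolution` px."""
    return resolution * resolution * bytes_per_texel * maps / 1024**3

# One clothed figure can easily carry dozens of maps (diffuse, normal, etc.).
print(f"40 maps at 4K: {texture_gb(4096, 40):.1f} GB")   # ~2.5 GB
print(f"40 maps at 2K: {texture_gb(2048, 40):.1f} GB")   # ~0.6 GB
```

Dropping distant or off-camera surfaces from 4K to 2K maps cuts their footprint by roughly a factor of four, which is often the difference between fitting in 8GB and dropping to CPU.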
But honestly, I am holding off on upgrading my video cards because AMD is releasing their new architecture at the high end next year. Even if you don't want an AMD card, they'll force NVidia to try harder...
Most of the salient points have already been made. But just to mention/go into further detail on a couple of things:
We just recently had someone benchmark their $330 Ryzen 7 3700X over in the updated benchmarking thread, and it scored a total of 0.633 iterations per second in overall performance. Which is only slightly faster than what my own 1050 non-Ti embedded laptop GPU managed to achieve (0.621 iterations per second.) Even at launch a 1050 was no more than $110 (1/3 the price of a 3700X.)
All of this is to say that even modern high-end CPU performance is negligible when it comes to relative performance in CUDA GPU accelerated applications.
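Restating those benchmark numbers as price/performance makes the gap clearer. The figures below are simply the ones quoted above; the arithmetic is just an illustration:

```python
# Benchmark figures and prices as quoted in the post above; this only
# restates them as iterations per second per dollar spent.

ryzen_3700x = {"price_usd": 330, "iters_per_sec": 0.633}
gtx_1050    = {"price_usd": 110, "iters_per_sec": 0.621}

for name, d in [("Ryzen 7 3700X", ryzen_3700x), ("GTX 1050 (laptop)", gtx_1050)]:
    per_1000_usd = d["iters_per_sec"] / d["price_usd"] * 1000
    print(f"{name}: {per_1000_usd:.2f} iterations/s per $1000")

# Ryzen 7 3700X:     ~1.92 iterations/s per $1000
# GTX 1050 (laptop): ~5.65 iterations/s per $1000 (roughly 3x better value)
```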
SLI is redundant and somewhat in conflict with the way Iray already works internally whenever you are using it on a system with multiple GPUs or a single GPU + CPU active for rendering. SLI works by having the CPU divide up complex graphics workloads into separate chunks for multiple GPUs to work on separately and then recombining the results of those operations (framebuffer outputs) over a physically separate data line (SLI connectors/bridges) for instantaneous direct output to the framebuffer on whichever GPU is home to the system's primary display. Which makes perfect sense for motion based gaming.
In contrast, Iray - completely independently from SLI hardware or software support on a system - already works under multiple active rendering device conditions by breaking up complex graphics workloads into separate chunks on the CPU, sending those chunks to each device, and then recombining those chunks for iterative direct output to disk for safe keeping. Which makes perfect sense for stills-based graphics rendering. In a best-case scenario, mixing Iray with SLI results in no change to underlying rendering performance. In a worst-case scenario, having SLI enabled leads to worse rendering performance in Iray because the system now has the added step of needing to constantly copy data from the framebuffer of whichever GPU drives the main display back to disk. Ergo why SLI is not your friend when it comes to tasks like Iray rendering.
Using virtually any two Nvidia graphics cards in a system for Iray rendering is already functionally identical to having SLI enabled by default in a game really well optimized for it. So this is where you can have a ball.
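As a toy illustration of that difference (not the actual Iray scheduler), the sketch below has each device contribute full-frame iterations independently, which the host simply accumulates; there is no SLI-style per-frame split and framebuffer hand-off. Device names and per-tick speeds are invented for the example:

```python
# Toy model of multi-device progressive rendering: every device produces
# complete full-frame iterations on its own, and the host just accumulates
# them. Device names and speeds are made up for illustration.

import random

def render_iteration(device_name):
    """Stand-in for one progressive, full-frame sample from one device."""
    return random.random()  # pretend this is an image buffer

devices = {"GPU0": 3, "GPU1": 2}  # full iterations each finishes per "tick"
samples = []
for tick in range(10):
    for dev, iters_per_tick in devices.items():
        samples.extend(render_iteration(dev) for _ in range(iters_per_tick))

# The accumulated result converges as iterations arrive from any device;
# a slower card simply contributes fewer samples, it never blocks the rest.
print(f"{len(samples)} iterations accumulated from {len(devices)} devices,")
print(f"running average = {sum(samples) / len(samples):.3f}")
```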