Decreasing Maximum Iteration Samples Reduces Multi-GPU Advantage
Just noticed that if I decrease the maximum number of samples for a multi-GPU render, it really diminishes the performance advantage. For example, running Sickfield's benchmarking scene at the default value of 5000 samples, 2 x 2080 Tis are twice as fast as a single 2080 Ti. But if I drop it down to only 200 samples, the difference between 1 and 2 GPUs is barely noticeable. Is this just the nature of converging algorithms?

Comments
I think the issue is that 200 iterations only takes a few seconds whether you have one 2080 Ti or two. The dual cards definitely have an advantage once the rendering process begins, but both cards need the scene loaded into VRAM, which takes more time than loading it onto one. This is why adding cards doesn't scale directly to a reduction in render time.
A large part of it will be the render preparation time. It doesn't matter if you're rendering 10 samples or 10,000; it will still take the same time to load in the render data (although that time will vary depending on the scene). That is something two cards can't help with, as they can't start working without the data. It just becomes a less noticeable overhead when rendering for longer.
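To put rough numbers on that fixed-overhead point, here is a small Python sketch. The prep time and iteration rates below are made-up placeholders, not measured 2080 Ti figures; the point is only the shape of the result.

    # Back-of-envelope model: total render time = fixed prep time + iterations / throughput.
    # All numbers are illustrative assumptions, not benchmarks.

    PREP_TIME = 40.0        # seconds to load/translate the scene (about the same for 1 or 2 cards)
    RATE_ONE_GPU = 25.0     # assumed iterations per second on a single card
    RATE_TWO_GPUS = 50.0    # assumed ideal 2x scaling of the rendering phase only

    def render_time(iterations, rate):
        """Fixed preparation overhead plus the time to compute the requested iterations."""
        return PREP_TIME + iterations / rate

    for iterations in (200, 5000):
        one = render_time(iterations, RATE_ONE_GPU)
        two = render_time(iterations, RATE_TWO_GPUS)
        print(f"{iterations:>5} iterations: 1 GPU {one:6.1f}s, 2 GPUs {two:6.1f}s, "
              f"overall speedup {one / two:.2f}x")

With those placeholder numbers the second card only gives about a 1.1x overall speedup at 200 iterations, but roughly 1.7x at 5000, which is the same pattern described in the question.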
Thanks guys. Running a scene with a crap load of chrome, carbon fibre, and a bunch of LEDs. Using 500 samples for improved quality, and the second card is making a noticeable impact.
Any test is going to be more accurate the longer it is run; I'm trying to think of exceptions to that statement but can't come up with any offhand.
The multi-GPU performance advantage really ramps up when rendering at 4K with low sample values.
When rendering animation with Iray, you want to keep your iterations as low as possible to speed up render time without sacrificing the quality of the animation.
I rendered this Iray animation with 35 iterations; other than animation, I can't think of any other reason for low iteration settings.
[Embedded animation]