At what point will we see a jump in rendering?
I don't know much about Iray or how fast it renders, or whether it is efficient or slow by today's standards. Regardless, where do we stand with the future of rendering? To me, rendering a still image is totally fine. I have a 2080 Ti and it does more than enough for a render. If I want a short few-second test clip, I can usually pump out a minimal scene at 1080p, 60 FPS overnight if I keep most things to a bare minimum. Maybe 250 iterations per frame?
But what's to come? Are we stuck relying on faster GPUs, or will the next jump come from the software end?
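For a sense of scale, here is the arithmetic behind an overnight clip like the one described above; the clip length and render window are assumptions, so treat this as a minimal Python sketch rather than a measurement:

# Back-of-envelope frame budget for an overnight animation render.
# Clip length and render window are assumed values, not measured figures.
clip_seconds = 5        # assumed length of the test clip
fps = 60                # frames per second, as in the post above
overnight_hours = 8     # assumed time available overnight

frames = clip_seconds * fps                        # 300 frames
budget_per_frame = overnight_hours * 3600 / frames

print(f"{frames} frames, ~{budget_per_frame:.0f} s per frame")
# -> 300 frames, ~96 s per frame; at 250 iterations that is well under
#    half a second per iteration for a bare-minimum 1080p scene.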

Comments
You've already got just about the best GPU there is for Iray rendering at the moment. NVIDIA will (reportedly) be releasing the 3000-series cards later this year, so there should be a performance boost with the RTX 3080 Ti over the 2080 Ti.
Rendering animations always takes a long time... even for studios with render farms. I have no idea of anything big around the corner, software or hardware, that might dramatically reduce render times.
At one point, George Lucas's special effects company, Industrial Light and Magic, reputedly had more computing power than NASA!
Cheers,
Alex.
But even for stills, the window is narrow. By the time the 3080 Ti rolls around, 8K displays might already be a thing. The 2080 Ti holds up well for 4K rendering, but for real-time ray tracing at 4K it doesn't perform all that well. My guess is the 3080 Ti will hold up better with 4K ray tracing but will not be ready for 8K ray tracing. And for 8K stills it will probably perform about the same as the 2080 Ti does at 4K. So there goes the performance gain.
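To put rough numbers on that, pixel counts quadruple at each resolution step, and path-tracing work scales roughly linearly with pixel count at a fixed sample count; a quick sketch:

# Pixel counts for common output resolutions. At a fixed number of
# samples per pixel, render work scales roughly linearly with pixels,
# so each step up roughly quadruples the load.
resolutions = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}
base = 1920 * 1080
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} MPix, {w * h // base}x the pixels of 1080p")
# 1080p: 2.1 MPix, 1x | 4K: 8.3 MPix, 4x | 8K: 33.2 MPix, 16x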
Better hardware only means faster rendering if the requirements stay equal. But they change over time, too. Eight years ago FHD was amazing. Nowadays it's merely smartphone quality, and even that is waning...
The issue with GPU rendering is that everything has to be loaded onto your graphics card, so you are always going to be limited by memory size (a rough estimate of how fast textures eat VRAM follows below).
Things like out-of-core textures can help.
The advantage, of course, is that graphics cards are cheaper than powerful CPUs.
But if money is no object, most professional production companies will use render farms rather than racks of graphics cards.
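As an illustration of why VRAM runs out so quickly, here is a sketch of how uncompressed texture memory adds up; the map sizes and counts are assumptions, not figures from any specific scene:

# Rough estimate of how quickly uncompressed textures fill VRAM.
# Map resolution and counts are assumptions for illustration only.
def texture_mb(width, height, channels=4, bytes_per_channel=1):
    return width * height * channels * bytes_per_channel / 1024**2

maps_per_figure = 8          # diffuse, normal, roughness, ... per surface set
figures_in_scene = 4
per_map_mb = texture_mb(4096, 4096)          # a single 4K RGBA map
total_gb = maps_per_figure * figures_in_scene * per_map_mb / 1024

print(f"~{per_map_mb:.0f} MB per 4K map, ~{total_gb:.1f} GB of textures")
# -> ~64 MB per map, ~2.0 GB for the scene's textures before geometry,
#    instancing data and the framebuffer are even counted.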
Software will improve, but you cannot expect massive improvements from that side. If there were truly significant gains to be had from better algorithms or optimization, the big VFX houses would already be paying to get that done.
Hardware improvements are where you can expect the biggest speedups. To start with, CPU compute power for this sort of workload has nearly tripled in less than 5 years. In 2016 the absolute best CPU you could get was a Broadwell Xeon with 22c/44t in a dual-socket system (so 44c/88t total). Now Epyc Rome can give you a 64c/128t CPU, so a dual-socket setup is 128c/256t (both CPUs run at roughly the same base clock, and the Epyc has an IPC advantage as well).
The improvement in GPU compute over the same timespan has been equally impressive. The 980 Ti was the flagship Nvidia GPU, with roughly 2800 CUDA cores at a base clock of, IIRC, 1.1 GHz and 6 GB of VRAM. Today you can get the 2080 Ti with about 4300 CUDA cores, a 1.3 GHz clock and 11 GB of VRAM. The Turing cards also give you the Tensor and RT pipelines, which can both be leveraged for significant gains in rendering (a rough comparison follows below).
Just look at those numbers and then consider what it would be like rendering a Genesis 2 scene today on modern HW.
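Plugging the figures quoted above into a very crude throughput ratio (threads for CPUs, CUDA cores times clock for GPUs) gives a ballpark for the jump; it ignores IPC, memory bandwidth and the RT/Tensor pipelines, so these are rough lower bounds rather than benchmarks:

# Crude throughput ratios from the numbers in the post above.
threads_2016 = 44 * 2            # dual Broadwell Xeon, 44c/88t total
threads_2020 = 128 * 2           # dual Epyc Rome, 128c/256t total
print(f"CPU threads: {threads_2020 / threads_2016:.1f}x")         # ~2.9x

core_ghz_980ti  = 2800 * 1.1     # CUDA cores x base clock (GHz)
core_ghz_2080ti = 4300 * 1.3
print(f"GPU core-GHz: {core_ghz_2080ti / core_ghz_980ti:.1f}x")    # ~1.8x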
You're from a gaming background?
Just curious, as there have been continual improvements; RTX and AMD's alternative should give some decent gains over time.
... And the more they do, the more folks add 'features'.
How can you use other tools with Daz content, given that we cannot export high-resolution models (with skinning and weight maps)? I've played with Unreal Engine just to see what's up, and I just couldn't get the look I wanted with FBX exports going through Maya. I played with various smoothing options in Maya to increase the poly count, but you lose the weight mapping.
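One workaround worth trying, sketched here with placeholder node names and assuming the FBX comes in already skinned: duplicate the mesh, subdivide the copy, bind it to the same joints, and transfer the original weights with copySkinWeights. This is only an outline of the approach, not a tested pipeline, and it runs only inside Maya's Python environment:

# Sketch: keep skinning after subdividing in Maya. Node names are
# placeholders for whatever the FBX import actually created.
import maya.cmds as cmds

src_mesh = "Genesis8Figure"        # original skinned mesh from the FBX
joints = cmds.skinCluster("skinCluster1", query=True, influence=True)

# Duplicate and subdivide the copy (the duplicate loses its skinCluster).
hi_mesh = cmds.duplicate(src_mesh, name=src_mesh + "_hires")[0]
cmds.polySmooth(hi_mesh, divisions=1)

# Re-bind the high-res copy to the same skeleton, then copy the weights
# across by closest point / closest joint so the original maps survive.
new_cluster = cmds.skinCluster(joints, hi_mesh, toSelectedBones=True)[0]
cmds.copySkinWeights(sourceSkin="skinCluster1", destinationSkin=new_cluster,
                     noMirror=True, surfaceAssociation="closestPoint",
                     influenceAssociation="closestJoint")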
AI implemented in hardware. All your 2080s are going to be replaced by clockless ASICs that draw 20 watts.
The future is here! REAL-TIME RAY TRACING at 30 frames per second in HD.

RT cores on the GPU ARE the jump in rendering:
I waited 30 years for real-time rendering with ray tracing. I started in the '90s with a RenderMan license and an IBM 286. Just the graphics card and a 19" Eizo monitor were 16,000 USD... The whole system with software license was 60,000 USD... One decent indoor render took about a week for a single image.
In my eyes the future is already here... and it just made me invest in a new Ryzen X570 PC with a 2070 Super RTX for 1,200 USD, because I really want to know if I can do this at home now.
And after 2 weeks of testing in Unreal Engine I can say: YES! 24 frames in HD can be done with my hobbyist setup! But the time needed for baking large scenes and lightmaps will still make me upgrade my system to at least 16 cores... or maybe a Threadripper?
So... the future is MORE CPU CORES AND MORE RT CORES... and people with talent can make cinematic movies at home...