Comments
There's a BUNCH of features (turnkey simultaneous stereoscopic rendering and AI-based convergence completion assessment, to name a few) that Iray either already supports or is in the early stages of supporting that I'd personally love to see Daz Studio fully implement - hopefully sooner rather than later.
Motion blur was another one I've seen requested that Iray has support for.
No Out-of-Core feature yet, then??
Iray explicitly does not directly support out-of-core rendering (there's a paragraph in the official Iray design document explaining exactly why).
In short, the load balancing mechanism Iray uses that allows it to achieve almost perfectly linear speed increases when adding additional GPUs to a render job directly conflicts with how OOC works. Adding OOC directly into the mix would eliminate this multi-GPU performance advantage. Hence its absence.
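To make that concrete, here's a rough host-side sketch (plain CUDA runtime API; the pickRenderDevices helper is my own illustration, not anything from Iray's actual code) of what "each GPU holds a full copy of the scene" implies in practice: devices whose free VRAM can't hold the whole scene are simply dropped from the render job rather than paged out-of-core, which is roughly the behaviour Daz Studio users see when a scene doesn't fit and the render falls back to CPU.

```cpp
// Rough sketch, not Iray's actual code: because each GPU must hold a full copy
// of the scene, only devices whose free VRAM exceeds the scene footprint can
// join the render job; everything else is dropped rather than paged out-of-core.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

std::vector<int> pickRenderDevices(size_t sceneBytes) {
    std::vector<int> usable;
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        size_t freeBytes = 0, totalBytes = 0;
        cudaMemGetInfo(&freeBytes, &totalBytes);   // VRAM currently free on this GPU
        if (freeBytes > sceneBytes) {
            usable.push_back(dev);                 // full scene copy fits: GPU joins the job
        } else {
            std::printf("GPU %d skipped: scene needs %zu MB, only %zu MB free\n",
                        dev, sceneBytes >> 20, freeBytes >> 20);
        }
    }
    return usable;  // the scheduler then only splits *work* across these, never scene *data*
}
```

Because every participating GPU sees identical data, any bucket of samples can be handed to any card, which is what gives the near-linear multi-GPU scaling described above; a card that had to page scene data in and out would break that symmetry.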
So they want us to buy more GPUs and/or the models with the most VRAM. OK.
The guy behind the Coreteks YouTube channel has stated that in the future, GPUs will probably become integrated into the motherboard or something of the sort, and become a standardized part of all desktop machines. That is, all GPUs will be the same... which probably means the limit-you-to-multi-GPU-cards approach Nvidia wants to keep us locked into will fall by the wayside down the road.
Here's the full quote with key points highlighted (The Iray Light Transport Simulation and Rendering System, page 36):
To simplify data access and maximize performance, Iray is not required to support out of core scene data, i.e. each GPU holds a full copy of all scene data. This may limit the size of supported scenes on low-end and older generation GPUs.
However, this limitation does not apply to the outputs of Iray, as framebuffers can be of arbitrary size and there can be multiple outputs enabled at the same time (see Sec. 4.1).
As current GPUs feature memory sizes of up to 24GB and the core also supports instancing of objects, scenes originating from the application domain of Iray so far also did not exceed this constraint. For more complex scenes such as those seen in the visual effects industry, this is of course insufficient. Possibilities to overcome this limitation include unified virtual memory (UVM) or NVLINK.
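For anyone curious what the "unified virtual memory" option in that quote looks like at the API level, here's a minimal host-side sketch (standard CUDA runtime calls; the 12 GB figure and the scene buffer are purely illustrative, and this is not how Iray actually stores scenes): a managed allocation gives one pointer that is valid on both CPU and GPU, and on Pascal-and-newer cards under Linux it can even exceed a single GPU's physical VRAM, with pages migrated on demand.

```cpp
// Minimal UVM illustration (not Iray code): one managed allocation, one pointer,
// usable from host and device, optionally larger than the GPU's physical VRAM.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const size_t bytes = size_t(12) << 30;          // 12 GB: may exceed VRAM on many cards
    float* scene = nullptr;
    if (cudaMallocManaged(&scene, bytes) != cudaSuccess) {
        std::printf("managed allocation failed (oversubscription needs Pascal+ and, in practice, Linux)\n");
        return 1;
    }

    // Host writes land in system memory first; pages migrate to the GPU on demand.
    for (size_t i = 0; i < bytes / sizeof(float); i += 4096) scene[i] = 1.0f;

    // Optionally hint the driver to move part of the data toward GPU 0 up front;
    // with oversubscription the driver evicts older pages instead of failing.
    cudaMemPrefetchAsync(scene, size_t(1) << 30, 0 /* device */, 0 /* default stream */);
    cudaDeviceSynchronize();

    cudaFree(scene);
    return 0;
}
```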
Now if only NVLink-based memory pooling worked on anything other than Linux...
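Whether that kind of pooling is even available on a given box can be probed directly from the runtime; this little check (my own illustration, nothing Iray-specific) just reports which GPU pairs can map each other's memory over NVLink or PCIe:

```cpp
// Peer-access probe (illustration only): reports whether each pair of GPUs can
// directly address one another's memory, which is the prerequisite for the kind
// of NVLink memory pooling discussed above. On Windows WDDM this usually says "no".
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int a = 0; a < count; ++a) {
        for (int b = 0; b < count; ++b) {
            if (a == b) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, a, b);
            std::printf("GPU %d -> GPU %d peer access: %s\n", a, b, canAccess ? "yes" : "no");
            if (canAccess) {
                cudaSetDevice(a);
                cudaDeviceEnablePeerAccess(b, 0);   // map GPU b's memory into GPU a's address space
            }
        }
    }
    return 0;
}
```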
That doesn't make even the slightest bit of sense to me. Perhaps you are thinking of APUs, which have a GPU and CPU die on the same chip. Using multiple dies on a chip is generally accepted as where the market will go as it gets harder to shrink nodes. It is easier to make a multi-chip CPU with several small dies than it is one large die. At any rate, this also makes it easier to mix and match dies from CPUs and GPUs together in a single chiplet package. Both of the upcoming consoles from Sony and Microsoft will feature APU designs. But unless they make APUs that can use CUDA, it doesn't help Iray.
For us, this doesn't change how GPUs work. GPUs that have multiple chips will simply be sold as one big GPU like they are today, and the PC will see them this way. Just like how a PC sees the Ryzen 3950X, which has two 8-core chiplets, as a single 16-core monster.
Well, I think the main thing he was saying was that GPUs would become standardized in the coming years, such that they all settle on one common design that everyone uses, rather than one major maker of GPUs making theirs function ONE way and another making theirs do things in a different way. That is, a GPU would be a GPU would be a GPU. I may not have heard him quite right, though. He was also making reference to how at one point sound cards were a separate doodad installed into a computer, via a card, but that at some point they stopped doing it that way, and the sound card became part of the main board, and standardized. Sounded like he was suggesting something like that might happen in a few years with GPUs.
As long as there is competition, there will always be something that each brand wants to do that gives them an edge. The day that everything becomes standardized would be the day competition dies (and probably mean Intel has monopolized the market).
Also, any kind of new tech will face some kind of format war, like VHS vs. Beta, or G-Sync vs. FreeSync. VR is facing some of that with how certain companies force customers exclusively into their store. Even phones lack a standard: you either have Android or you are embedded in Apple's ecosystem. Ray tracing is the current new thing, but there will be something else in a number of years.
The video I saw was about ray tracing, and some of the facts he presented were straight-up wrong. He stated that Quake II RTX was not using RT cores, which is totally false. There were some fundamental misunderstandings of how ray tracing even works. He had some valid points, and some things I've even suggested here, like how RT cores could backfire on Nvidia, but some of his predictions are also misinformed and a real reach. And at the end of the day it's only his opinion. All 3 companies are working on their own version of hardware-based ray tracing, and all 3 will be different in some way.
The current situation in the GPU market is already at that point where a GPU is a GPU. It's actually more standardized than ever, even with the ray tracing stuff. Even before ray tracing came along, Nvidia had CUDA and nobody else did, which is why Iray only works with Nvidia. And that will not change no matter what happens with ray tracing.
Not familiar with the guy, but his knowledge of PC history is woeful.
There were standardized GPUs; that's where the terms CGA, EGA and VGA come from. People wanted better, and so the GPU makers went far beyond the standards.
GPUs were integrated into motherboards not all that long ago. These were most recently low-end Nvidia GPUs. There were engineering issues with this approach and it was not satisfactory to many buyers, so this went away. Nvidia actually sued about this and the case is still ongoing, but there is effectively no chance that GPUs will go back to being integrated into motherboards. Full-featured flagship GPUs simply take more power and space than is available on ATX motherboards, and for things like rendering, simulations, machine learning and AAA games, the high-end GPUs are much preferred. There is also the case that the enthusiast market, which drives basically all PC desktop design, hates integrated GPUs, as they feel they're being made to pay for something they don't use. Intel went the route of putting a terrible GPU on most of their CPUs, so people make fun of it and avoid using it. AMD makes APUs, which are more capable but are still only aimed at the budget end of the market.
Will GPUs become standardized? Not even remotely possible. There are soon to be three major GPU manufacturers: AMD, Nvidia and, starting next year, Intel. Nvidia is not about to get in with any standards process because they absolutely dominate the market, both the consumer and professional ones, and having a standard all GPUs adhere to would take away their advantages. AMD has tried with such initiatives as OpenCL and OpenGL, but they've gone pretty much nowhere. Intel is not getting into the market to be just a nameplate on some commodity item. They're getting in because GPUs are the major growth market in datacenters. The one I work in had less than 1% of racks with GPUs 4 years ago, when I started. Now we're on track to have more GPUs installed than CPUs by the end of 2020. Intel wants to offer something, no one outside Intel knows what, to convince the people who make datacenter purchase decisions and the people who write server software to use their offerings, so again there is no way Intel wants any sort of standard.
Well, OpenCL has been here for quite a while now, so Nvidia didn't have to come out with CUDA or OptiX at all. As I see it, that was mainly a choice for marketing protection. It is also interesting that the Nvidia and AMD OpenCL drivers overwrite each other, so you have to fix it yourself in the registry if you want them to work together.
https://community.amd.com/message/2909519
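If you want to verify that the registry fix took, the quickest check is to enumerate OpenCL platforms through the standard ICD loader and see whether both vendors show up. A minimal sketch (plain OpenCL host API, linked against OpenCL; nothing here is Daz- or driver-specific):

```cpp
// Minimal check (illustration): list every OpenCL platform the ICD loader can see.
// If the registry entries are intact, both the NVIDIA and AMD platforms should appear.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);            // how many ICDs are registered?
    cl_platform_id platforms[16];
    cl_uint toQuery = numPlatforms < 16 ? numPlatforms : 16;
    clGetPlatformIDs(toQuery, platforms, nullptr);
    for (cl_uint i = 0; i < toQuery; ++i) {
        char name[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, nullptr);
        std::printf("Platform %u: %s\n", i, name);
    }
    return 0;
}
```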
I believe you are correct. IIRC the specifications for OpenCL were first released December 8, 2008, initially developed by Apple in conjunction with several other interested parties, including AMD. It was in direct response to Nvidia's CUDA. Nvidia started CUDA sometime in 2006, IIRC.
I've seen a couple of references to OpenGL in the past week that seem to confuse it with OpenCL. So for those who haven't been around since the days of punch cards and modems that used the telephone handset (or fast forward a bit to the days of SGI graphics workstations), I thought I might take a moment to clear up the confusion.

OpenGL has been around since 1992 and is/was designed to provide a cross-language, cross-platform application programming interface (API) for the real-time rendering of 2D and 3D graphics. OpenGL was a huge step forward in unifying real-time graphics visualization programming on the various flavors of Unix and other operating systems. OpenGL is not related to OpenCL, which is/was designed for GPU computing and is not limited to visual data display.

OpenGL was initially developed by Silicon Graphics Incorporated (SGI - they made the hottest graphics workstations of the time) and is currently managed by the Khronos Group (which is probably where some of the confusion between OpenGL and OpenCL comes from, as OpenCL is also managed by the Khronos Group). While there seems to be waning support/use of OpenGL, it's interesting to note that Eevee, Blender's real-time render engine, is based on OpenGL. So it obviously still has a lot left in the tank.
As has been pointed out, CUDA came first. OpenCL was a direct response to CUDA. CUDA is far more capable and widely used. I have an entire datacenter full of servers and not one Radeon Pro. It's not like we're opposed to them; it's just that no one ever comes asking for an OpenCL machine. OTOH we're actually backlogged on deploying Quadro racks, as we're back-ordered on them.
So, being the novice that I am, perhaps I could insert a more simplistic inquiry here. For someone who is starting out with a basic 1080 GPU, would it be best to play the waiting game for some of the advancements mentioned here to come to pass within a year or so, or would just investing in a 2080 Ti be a worthwhile venture? I generally use DS for animations which have up to 5 characters and a theater, concert or night club environment, most of the time with the number of frames exceeding 8000. Currently I can get frames rendering in about 12 seconds at 1080x782 resolution with one character and an environment (stage, room, a few seating props).
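For scale, the arithmetic on those numbers: at roughly 12 seconds a frame, an 8000-frame pass works out to 8000 × 12 s = 96,000 s, or about 26.7 hours of rendering per animation, so any per-frame speedup gets multiplied accordingly.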
Most of your animations are over 8000 frames? Are you rendering entire 5-minute videos in uninterrupted shots?
Gordig, I convert MikuMikuDance animations into DS and they range from 6000 to 11000 frames. I use image series rendering so I can break it up somewhat and not render straight through. I really want to add better environments and a few more characters, so that's why I really need to start looking for a bit more horsepower.