NVidia iray and Daz3D... so... when can we expect to have GPU and CPU work together?

This may sound petty, but it's something that would vastly increase my productivity and make me incredibly happy. Is there a reason that iray can't be made to function simultaneously with the CPU in large renders? I mean... luxrender already does this, so it's at least possible in concept...

Comments

  • agent unawares Posts: 2,127

    It already does. CPU effect on the render time is negligible.

  • th3Digit Posts: 16,000

    It uses out-of-core textures, AFAIK: I often get a verbose message about compiling textures, accompanied by 100% CPU use on my desktop meter gadget, just before the geometry is loaded onto my card (as seen in GPU-Z), with a fair bit of RAM usage too.

    I am no geek but this suggests to me both are used just not simultaneously.

  • It already does. CPU effect on the render time is negligible.

    Hm. I may have been misinformed, then, but I was under the impression that if a scene required more VRAM than my card had (8GB in my case, 4 in the case of a friend's), then the entire scene was dumped to the CPU. While I can handle pretty big scenes, my friend is stymied with scenes with more than two models... which seems a bit stingy, all things considered.

  • th3Digit said:

    It uses out-of-core textures, AFAIK: I often get a verbose message about compiling textures, accompanied by 100% CPU use on my desktop meter gadget, just before the geometry is loaded onto my card (as seen in GPU-Z), with a fair bit of RAM usage too.

    I am no geek but this suggests to me both are used just not simultaneously.

    Heh. I'm not a proper geek, either; it just seems to me that there ought to be a more efficient blending of CPU and GPU usage than what we're currently experiencing.

  • jestmart Posts: 3,296

    So what you really want is for Iray to be able to use the GPU(s) as math co-processor(s) even if the scene exceeds video RAM capacity.

  • It already does. CPU effect on the render time is negligible.

    Hm. I may have been misinformed, then, but I was under the impression that if a scene required more VRAM than my card had (8GB in my case, 4 in the case of a friend's), then the entire scene was dumped to the CPU. While I can handle pretty big scenes, my friend is stymied with scenes with more than two models... which seems a bit stingy, all things considered.

    That wasn't your original question. You can, in the Advanced tab of Render Settings, enable both the nVidia GPU(s) and the CPU. However, as you say, any GPU that cannot load the needed data (or runs out of memory during the render) will be dropped - that has nothing to do with utilising the CPU or not. This is something that would require a change on nVidia's end - I believe the newest/forthcoming GPUs have or may have NVLink (or something like that), which at least allows multiple GPUs to pool their memory, though I don't think it gives access to system RAM.
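    The drop-a-GPU-then-fall-back behaviour described above can be sketched as a toy model. This is illustrative Python only - none of these names come from Iray's actual API, and the fit-in-VRAM test is a deliberate simplification:

```python
# Hypothetical model of the device-selection behaviour discussed in the
# thread: each enabled GPU participates only if the whole scene fits in
# its VRAM; any GPU that can't hold the scene is dropped, and if no GPU
# survives, the entire render falls back to the CPU.

def select_render_devices(scene_size_gb, gpus_vram_gb, cpu_enabled=True):
    """Return the list of devices that would actually render the scene."""
    # Keep only GPUs whose VRAM can hold the whole scene.
    usable_gpus = [f"GPU{i} ({vram} GB)"
                   for i, vram in enumerate(gpus_vram_gb)
                   if vram >= scene_size_gb]
    if usable_gpus:
        # GPUs render; the CPU may assist if enabled, but its share is small.
        return usable_gpus + (["CPU"] if cpu_enabled else [])
    # Every GPU was dropped: the scene renders on the CPU alone.
    return ["CPU"]

# A 6.5 GB scene on an 8 GB card and a 4 GB card: the 4 GB card is dropped.
print(select_render_devices(6.5, [8, 4]))   # → ['GPU0 (8 GB)', 'CPU']
# A 10 GB scene drops both cards and dumps the whole render to the CPU.
print(select_render_devices(10.0, [8, 4]))  # → ['CPU']
```

    Note the all-or-nothing shape of the model: a card either holds the full scene or it contributes nothing, which matches the "stingy" behaviour complained about earlier in the thread.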

  • SpottedKitty Posts: 5,516

    I was under the impression that if a scene required more VRAM than my card had (8GB in my case, 4 in the case of a friend's), then the entire scene was dumped to the CPU.

    Yes, that's right. The "negligible" comment means that a render that drops back to the CPU is much slower than one that stays running in the graphics card — even if some way of combining CPU/GPU rendering were to be developed, the CPU contribution to the render wouldn't improve the speed all that much.
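    The rough arithmetic behind "negligible" can be made concrete. The throughput numbers below are assumptions picked for illustration, not measurements of any particular card or chip:

```python
# Assumed throughputs (illustrative only): a GPU path tracer is commonly
# an order of magnitude or two faster than a CPU at the same work.
gpu_rate = 100.0  # assumed GPU throughput, Msamples/s
cpu_rate = 5.0    # assumed CPU throughput, Msamples/s

# Adding the CPU to a working GPU render barely moves the needle...
combined_speedup = (gpu_rate + cpu_rate) / gpu_rate
print(f"GPU+CPU vs GPU alone: {combined_speedup:.2f}x")    # → 1.05x

# ...while losing the GPU entirely is catastrophic by comparison.
fallback_slowdown = gpu_rate / cpu_rate
print(f"CPU-only fallback is {fallback_slowdown:.0f}x slower than GPU")
```

    Under these assumed numbers, the CPU's help is a ~5% speedup, while the fallback is a 20x slowdown - which is why the two things feel so different even though both involve "the CPU rendering".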

  • It already does. CPU effect on the render time is negligible.

    Hm. I may have been misinformed, then, but I was under the impression that if a scene required more VRAM than my card had (8GB in my case, 4 in the case of a friend's), then the entire scene was dumped to the CPU. While I can handle pretty big scenes, my friend is stymied with scenes with more than two models... which seems a bit stingy, all things considered.

    That wasn't your original question. You can, in the Advanced tab of Render Settings, enable both the nVidia GPU(s) and the CPU. However, as you say, any GPU that cannot load the needed data (or runs out of memory during the render) will be dropped - that has nothing to do with utilising the CPU or not. This is something that would require a change on nVidia's end - I believe the newest/forthcoming GPUs have or may have NVLink (or something like that), which at least allows multiple GPUs to pool their memory, though I don't think it gives access to system RAM.

    Like I said - not a proper geek. But thank you, yes, that is what I was thinking; and your answer is, if not hopeful, at least enlightening. It would be nice if there were a software fix down the road that didn't require all-new hardware, 'cause, gosh, I don't just grow money in my backyard, but... meh.

  • Padone Posts: 330

    I mean... luxrender already does this

    Also, you can use luxrender with DAZ Studio if you want. Personally, I export to Blender via obj/fbx and then mostly use Cycles.

    https://www.daz3d.com/reality-4-daz-studio-edition

    https://www.daz3d.com/luxus

