Using multiple PCs for rendering

Is it possible to use more than one PC to render with Iray (using Daz Studio)? If so, can you point me to instructions on how to set it up?

Comments

  • mjc1016 Posts: 15,001

    No, not really...

    There are network rendering capabilities, but they are geared toward dedicated Nvidia machines. 

  • Thanks--I was hoping I was overlooking something.  sad 

    I've been exporting to Blender and using Cycles because I just can't get Iray to render on my PC.  A farm probably wouldn't help anyway, but I was hoping to build a new PC and still use my existing one for "support".  I hate PC upgrades...so many choices and in the end, I remember it's just a hobby and I don't really need any of it!

     

  • mjc1016 Posts: 15,001
    edited December 2016

    Unlike Luxrender where it is dead easy to add a machine to render on (I've had every machine in the house capable of running Luxrender running it for renders), Iray is much more difficult to 'network' without a boatload of special hardware or farming it out.

    Post edited by mjc1016 on
  • mjc1016 Posts: 15,001
    mjc1016 said:

    No, not really...

    There are network rendering capabilities, but they are geared toward dedicated Nvidia machines. 

    I've been asked to pass on that it's more of a "yes, but...": the current version has rudimentary support for connecting to other machines, and in the change log, one of the latest builds in the Private Build channel (the first link in the chain) expands those capabilities.

    http://docs.daz3d.com/doku.php/public/software/dazstudio/4/change_log#4_9_3_163

  • Havos Posts: 5,575

    Check out this thread where someone has discussed how to use Iray Server for rendering across multiple PCs:

    http://www.daz3d.com/forums/discussion/132951/tutorial-iray-server-render-farm-batch-rendering-for-daz-studio#latest

  • Thanks--I'll take a look at the links!

  • marble Posts: 7,500
    edited December 2016

    Copied from another thread where it was posted by mistake:

    Seeing as a second machine would need a capable NVidia card, I'd be more interested in NVidia overcoming the inability to combine the VRAM of two cards in the same machine effectively. By that I mean that if I had a card with 6GB and another with 2GB of VRAM, I would currently be unable to use the second card if my scene happened to be larger than 2GB (as it almost certainly would be). More to the point for myself, I have a 4GB 970 and would like at some point to add a 1070 and use all of the VRAM, either up to the limit of the larger card or the sum of both cards. Having to be able to load the whole scene on both cards seems the worst of all solutions.
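    As an illustration only, here is a minimal Python sketch (made-up numbers, nothing from Iray itself) of the rule as it currently stands: the whole scene has to fit on each participating card, so a card smaller than the scene simply drops out instead of contributing its VRAM.

        def usable_gpus(scene_gb, card_vram_gb):
            # Current Iray behaviour as described above: the full scene is loaded
            # onto every participating GPU, and VRAM is never pooled across cards.
            return [vram for vram in card_vram_gb if vram >= scene_gb]

        # Hypothetical example: a 3 GB scene with a 6 GB card and a 2 GB card
        print(usable_gpus(3.0, [6.0, 2.0]))  # -> [6.0]; the 2 GB card is left out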

    Post edited by marble on
  • hphoenix Posts: 1,335
    marble said:

    Copied from another thread where it was posted by mistake:

    Seeing as a second machine would need a capable NVidia card, I'd be more interested in NVidia overcoming the inability to combine the VRAM of two cards in the same machine effectively. By that I mean that if I had a card with 6GB and another with 2GB of VRAM, I would currently be unable to use the second card if my scene happened to be larger than 2GB (as it almost certainly would be). More to the point for myself, I have a 4GB 970 and would like at some point to add a 1070 and use all of the VRAM, either up to the limit of the larger card or the sum of both cards. Having to be able to load the whole scene on both cards seems the worst of all solutions.

    That's not really possible.  The way Iray works, the CUDA program (that is running on the individual CUDA cores, i.e., the real Iray rendering code) has to have access to the whole scene (since it's a global illumination rendering algorithm), so the only way this could work is with bus transfers whenever the object currently being tested resides in another card's memory.  That would slow Iray to a crawl, because on EVERY iteration it would have to stop and load from the other card multiple times.  It's not that it's the 'worst of all solutions' but that it is required to achieve reasonable speed given the nature of the algorithm.
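    As a rough, order-of-magnitude illustration (assumed, approximate bandwidth figures, not Iray measurements), here is why those bus transfers would be so costly:

        # Approximate, assumed numbers for a GTX 970's VRAM and a PCIe 3.0 x16 slot.
        local_vram_bandwidth_gb_s = 224.0  # roughly, the card's on-board memory bandwidth
        pcie3_x16_bandwidth_gb_s = 16.0    # theoretical peak over the bus

        slowdown = local_vram_bandwidth_gb_s / pcie3_x16_bandwidth_gb_s
        print(f"Reading scene data over the bus is ~{slowdown:.0f}x slower than local VRAM")
        # ~14x, before any latency or synchronisation overhead is counted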

     

  • Ivy Posts: 7,165
    edited December 2016

    I've never used 2 computers to render the same scene. But I do use external hard drives as my content source when rendering animation with 2 computers: it allows me to render one scene on one computer and then start working on another scene on the other computer, all in the same project folder. Using an external HDD gives you more versatility with your content.

    Sometimes, if I have a lot of scenes to get rendered, I use 2 computers and a laptop: rendering on my 2 desktop computers and building scenes on my laptop, all plugged into the external hard drive that contains the project I'm working on. It speeds up my workflow for rendering animation. But if you're going to render animation with Iray, investing in a render farm would be the only way to go; I prefer 3DL for rendering animation. As far as rendering one scene at the same time with 2 computers, it can't be done with Daz Studio.

    Post edited by Ivy on
  • Ghosty12 Posts: 2,080
    edited December 2016

    About the only other way that could work is via CPU mode: since you would not be using the video cards of those machines, it could be done, but it would have to be enabled in the software. And I can't see why it couldn't be done, since programs like Carrara, Poser with its queue manager, and Luxrender with its network option can do it, as far as I know.

    Post edited by Ghosty12 on
  • marble Posts: 7,500
    hphoenix said:
    marble said:

    Copied from another thread where it was posted by mistake:

    Seeing as a second machine would need a capable NVidia card, I'd be more interested in NVidia overcoming the inability to combine the VRAM of two cards in the same machine effectively. By that I mean that if I had a card with 6GB and another with 2GB of VRAM, I would currently be unable to use the second card if my scene happened to be larger than 2GB (as it almost certainly would be). More to the point for myself, I have a 4GB 970 and would like at some point to add a 1070 and use all of the VRAM, either up to the limit of the larger card or the sum of both cards. Having to be able to load the whole scene on both cards seems the worst of all solutions.

    That's not really possible.  The way Iray works, the CUDA program (that is running on the individual CUDA cores, i.e., the real Iray rendering code) has to have access to the whole scene (since it's a global illumination rendering algorithm), so the only way this could work is with bus transfers whenever the object currently being tested resides in another card's memory.  That would slow Iray to a crawl, because on EVERY iteration it would have to stop and load from the other card multiple times.  It's not that it's the 'worst of all solutions' but that it is required to achieve reasonable speed given the nature of the algorithm.

     

    I'm not very clued up on the technology, so I'm probably making all the wrong connections here, but doesn't Octane with its Out-of-Core feature allow some of the scene (the textures) to be stored in system RAM? How much VRAM that might save I have no idea, but I believe there is about a 25% increase in render times when OOC kicks in. That would still be a lot faster than CPU rendering. I think Octane uses CUDA cores too, right?

  • cridgit Posts: 1,765
    edited May 2022

    Redacted

    Post edited by cridgit on
  • There are some entries on batch rendering in the change log: http://docs.daz3d.com/doku.php/public/software/dazstudio/4/change_log#4_9_3_163

  • hphoenix Posts: 1,335
    marble said:
    hphoenix said:
    marble said:

    Copied from another thread where it was posted by mistake:

    Seeing as a second machine would need a capable NVidia card, I'd be more interested in NVidia overcoming the inability to combine the VRAM of two cards in the same machine effectively. By that I mean that if I had a card with 6GB and another with 2GB of VRAM, I would currently be unable to use the second card if my scene happened to be larger than 2GB (as it almost certainly would be). More to the point for myself, I have a 4GB 970 and would like at some point to add a 1070 and use all of the VRAM, either up to the limit of the larger card or the sum of both cards. Having to be able to load the whole scene on both cards seems the worst of all solutions.

    That's not really possible.  The way Iray works, the CUDA program (that is running on the individual CUDA cores, i.e., the real Iray rendering code) has to have access to the whole scene (since it's a global illumination rendering algorithm), so the only way this could work is with bus transfers whenever the object currently being tested resides in another card's memory.  That would slow Iray to a crawl, because on EVERY iteration it would have to stop and load from the other card multiple times.  It's not that it's the 'worst of all solutions' but that it is required to achieve reasonable speed given the nature of the algorithm.

     

    I'm not very clued up on the technology, so I'm probably making all the wrong connections here, but doesn't Octane with its Out-of-Core feature allow some of the scene (the textures) to be stored in system RAM? How much VRAM that might save I have no idea, but I believe there is about a 25% increase in render times when OOC kicks in. That would still be a lot faster than CPU rendering. I think Octane uses CUDA cores too, right?

    Octane uses a very different rendering algorithm.  While it does share some similarities, the way it handles the scene doesn't restrict memory access the way Iray does.  It's a question of the algorithms.  Calculated Global Illumination (as opposed to simulated GI) requires all the objects in the scene to be available for testing energy contributions; Radiosity-style algorithms need access to every object for the diffuse interreflection calculations.  Simulated GI can be more localized, permitting objects to be batched by area, which allows memory swapping that isn't continuous: it only happens when you cross the boundaries of those volumes.

    Just because something uses CUDA or OpenCL doesn't mean the algorithms that are implemented to run ON those cores are the same.
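    As a toy counting sketch (every number here is an assumption, just to show the scale of the difference), this is roughly why paging textures out of VRAM, as Octane's Out-of-Core mode does, is so much cheaper than paging geometry out:

        # Made-up, illustrative counts per camera sample.
        bounces_per_sample = 4           # length of a typical path
        geometry_tests_per_bounce = 40   # BVH nodes/triangles touched per intersection test
        texture_reads_per_bounce = 1     # one filtered lookup at each shading point

        geometry_touches = bounces_per_sample * geometry_tests_per_bounce
        texture_touches = bounces_per_sample * texture_reads_per_bounce
        print(geometry_touches, texture_touches)  # e.g. 160 vs 4 off-card fetches to hide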

     

  • stevenroberts Posts: 8
    edited April 2020

    Am I missing something here? I have 2 PCs on the same network, both with Daz Studio installed. My idea is to copy my .duf with the animation to the second PC and have it render the odd-numbered files of the image sequence (.png) while the 1st PC renders the even-numbered files, have both drop those images into one of my network folders, and then composite in Hitfilm.

    EDIT: Well, crud. I assumed Daz Studio had Nth-frame options similar to 3D Studio Max, but no, just a range, so I'm reduced to having one PC work on frames 1-30 while PC 2 works on 31-60. Still, I can use both PCs to drop those frames off in a shared network folder.

    [Attachment: 20200413_112458.jpg]
    Post edited by stevenroberts on
  • kenshaw011267 Posts: 3,805

    Am I missing something here? I have 2 PCs on the same network, both with Daz Studio installed. My idea is to copy my .duf with the animation to the second PC and have it render the odd-numbered files of the image sequence (.png) while the 1st PC renders the even-numbered files, have both drop those images into one of my network folders, and then composite in Hitfilm.

    EDIT: Well, crud. I assumed Daz Studio had Nth-frame options similar to 3D Studio Max, but no, just a range, so I'm reduced to having one PC work on frames 1-30 while PC 2 works on 31-60. Still, I can use both PCs to drop those frames off in a shared network folder.

    As long as each copy of DS can find all the assets, that works fine.
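    If anyone wants to script the split, a small helper can hand each machine its frame range and check the shared folder before compositing. This is only a sketch; the folder path, filename pattern, and frame count are made-up examples, not anything Daz Studio produces by default.

        import os

        TOTAL_FRAMES = 60
        SHARED_DIR = r"\\NAS\renders\shot01"   # hypothetical shared network folder
        PATTERN = "frame_{:04d}.png"           # whatever the DS render settings are set to write

        def frame_range(machine_index, machine_count=2, total=TOTAL_FRAMES):
            # Split frames 1..total into contiguous ranges, one per machine.
            per_machine = total // machine_count
            start = machine_index * per_machine + 1
            end = total if machine_index == machine_count - 1 else start + per_machine - 1
            return start, end

        def missing_frames(total=TOTAL_FRAMES, folder=SHARED_DIR):
            # List frames neither PC has dropped off yet, so they can be re-rendered.
            return [f for f in range(1, total + 1)
                    if not os.path.exists(os.path.join(folder, PATTERN.format(f)))]

        print(frame_range(0))    # PC 1 -> (1, 30)
        print(frame_range(1))    # PC 2 -> (31, 60)
        print(missing_frames())  # check this is empty before compositing in Hitfilm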
