Why is this scene grainy?

Been trying to get this scene to stop being grainy but I'm not sure what's wrong.

Screenshot 2023-04-14 222712.jpg
2497 x 1387 - 624K

Comments

  • It is probably dropping to CPU, which will slow it down a lot, so it may be timing out. It also looks as if the environment model is not getting a lot of direct light, which also slows convergence.

  • pectabyte Posts: 41

    I need a suggestion on what to do.

  • TimberWolf Posts: 233
    edited April 2023

    Short answer - render it for longer. I'm sure Richard is right and that you've maxed out your GPU to the point where it's given up and transferred the rendering over to the CPU. Which will.... take... ages. The fact that the denoiser doesn't appear to have kicked in pretty much proves the point.

    Tell us which GPU you have and we can start to make some suggestions but that many characters with that many hair models and that many clothing models is a big ask for most consumer grade GPUs. Not without a ton of optimisation and even then...

    You could render the environment as an HDRI and then add the characters (plenty of YouTube videos on how to do this - www.youtube.com/watch?v=JeLdxBQzDdw) which will help enormously but maybe your GPU wouldn't be able to do that. I just can't tell without more info.

    Post edited by TimberWolf on
  • pectabyte Posts: 41

    I have two GTX 2800s. One is for my regular computer use and games. My other card is for rendering. I can literally render in Daz while I play a video game. This image renders rather quickly and it has 9200 max samples.

  • TimberWolf Posts: 233

    Ok. I'll take your word that the scene renders on an 8Gb card, so if that's the case you need more iterations. Simple as that. A suggestion to speed it up is to set the denoising to kick in much later in the process - if you're needing 10k+ iterations to render then you'd really want the denoiser to kick in at around 9950 or thereabouts. Booting it in after 8 samples really will slow your process down. Another solution is to use an external denoiser after your render has finished, such as the one produced by Intel which, imho, is better than the Nvidia one built into Daz anyway.

    https://declanrussell.com/portfolio/intel-open-image-denoiser-2/
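    If you want to script that post-render step, here is a minimal sketch of driving an external command-line denoiser over a finished render from Python. The executable path and the -i/-o flags below are assumptions for illustration only - check the denoiser's own documentation or --help output for the real options:

    # Hypothetical wrapper around an external command-line denoiser.
    # The executable path and flags are placeholders, not confirmed options.
    import subprocess
    from pathlib import Path

    DENOISER = Path(r"C:\Tools\Denoiser\Denoiser.exe")  # placeholder install path

    def denoise(render_in: Path, render_out: Path) -> None:
        """Run the external denoiser on a finished Daz render."""
        cmd = [str(DENOISER), "-i", str(render_in), "-o", str(render_out)]
        subprocess.run(cmd, check=True)

    # Example: denoise(Path("render_raw.png"), Path("render_denoised.png"))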

    Under Render Settings, set 'Rendering Quality Enable' to Off and then set the max samples and max time to the longest you are prepared to wait. Whichever of those conditions is reached first will end the render. Or... set the sliders to max to end the render when you choose.

    At the end of the day you'll just have to render it for longer or use a post-render process to remove the grain, which will inevitably result in the loss of some small detail.

  • pectabyte Posts: 41

    I wish I could render it from Daz. I have a total of 15000 max samples and they don't make a difference.

    Screenshot 2023-04-16 202342.jpg
    2433 x 1296 - 512K
  • crosswind Posts: 4,740
    edited April 2023

    You meant GTX 2080... I wonder how you could render such a scene with one 2080. Did you tick 1 or 2 cards for rendering? Any scene optimization in there?

    It seems that you used an HDRI + 1 or 2 harsh point lights in the scene, and with multiple emissive light sources, DOF, etc., grain is easily seen in such a 'big scene' loaded with 7 people. You can do some testing: hide 5 - 6 people / hide the cabin / optimize or turn off DOF / switch to mesh lights, etc. You might see the improvement accordingly and understand why... You may also try to use Canvases if you cannot resolve this issue at the end of the day...

    ps: RTX 2080, typo~

    Post edited by crosswind on
  • PerttiA Posts: 9,457

    There is no GTX 2800 or GTX 2080. There is an RTX 2080, though, but without seeing the log it is impossible to say what is happening.

    I can render 7 fully clothed G8 characters with a complete western town in a little over 5 minutes on a 12GB RTX 3060.

  • crosswind Posts: 4,740
    edited April 2023

    It really depends. I've tested lots of scenes, and even rendered 15 - 20 fully-equipped chars + env. with my old Quadro 6000, but so what, they aren't apples-to-apples comparable...

    Post edited by crosswind on
  • PerttiA Posts: 9,457

    crosswind said:

    It really depends. I've tested lots of scenes, and even rendered 15 - 20 fully-equipped chars + env. with my old Quadro 6000, but so what, they aren't apples-to-apples comparable...

    My point was, without knowing what OP is doing, with which hardware and settings, there is no way to give any advice.

  • pectabyte Posts: 41

    Here is the latest render. Even though Max Samples is 15000 and Denoise is activated, it's still very grainy. I included an image of my video cards. I have two of them. I can render on one while I game on the other. There is NO slowdown.

    gfdsgdf.jpg
    3432 x 1270 - 2M
  • PerttiA Posts: 9,457

    Help->Troubleshooting->View Log file

    Save the file to a location you can easily find and Attach the file to your post with the "Attach a file" link above the "Post Comment" button.

    That file will help us in helping you

  • pectabyte Posts: 41

    PerttiA said:

    Help->Troubleshooting->View Log file

    Save the file to a location you can easily find and Attach the file to your post with the "Attach a file" link above the "Post Comment" button.

    That file will help us in helping you

    Here ya go. :)

    PectabytesLog.txt
    7M
  • crosswind Posts: 4,740

    PerttiA said:

    crosswind said:

    It really depends. I've tested lots of scenes, and even rendered 15 - 20 fully-equipped chars + env. with my old Quadro 6000, but so what, they aren't apples-to-apples comparable...

    My point was, without knowing what OP is doing, with which hardware and settings, there is no way to give any advice.

    OK~ got it.

  • crosswind Posts: 4,740
    edited April 2023

    A long log... but in most cases our gut feeling is correct:

    Your last rendering attempt was still using the CPU, which means you ran out of VRAM on a single 2080 card when initiating the render of this 'big scene'. The rendering fell back to CPU and it took 2 hours to generate such a 'grainy result', because the Denoising function is deactivated when rendering on the CPU...

    Besides, even though you set max samples to 15000, you did not change the max time of 7200 secs, so your render ended the moment max time was reached (2 hours) and only 4544 samples had been iterated and converged...

    Line 34532 - CUDA out of memory...(device 1)
    Line 38078 - denoiser is not available on CPU and will be disabled.
    Line 38086 - 3407 texture maps ~ (in which 270+ 4K maps, 107+ 2K maps, etc. These maps probably consumed around 13 GB - 15 GB VRAM at least, even after compression)
    Line 38256 - Maximum render time exceeded ~ (4544 iterations updated on Canvas after 7200 secs.)

    If you wanna use the GPU to render this scene, better to first tick both cards in the devices list. If it still falls back to CPU, you may try Scene Optimizer...
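    For illustration, a quick back-of-the-envelope check of the numbers quoted from the log above (4544 iterations in 7200 seconds, a 15000 max-samples target, and the 4K/2K texture counts); the bytes-per-texel figure is an assumption for uncompressed 8-bit RGBA maps:

    # Rough arithmetic based on the log numbers quoted above.
    iterations_done = 4544      # iterations reached when Max Time was hit
    max_time_s = 7200           # the 2-hour Max Time left at its default in Render Settings
    max_samples = 15000         # the Max Samples the OP set

    rate = iterations_done / max_time_s            # ~0.63 iterations/second on CPU
    hours_for_max_samples = max_samples / rate / 3600
    print(f"{rate:.2f} it/s on CPU -> ~{hours_for_max_samples:.1f} h to reach {max_samples} samples")

    # Very rough texture memory estimate (uncompressed 8-bit RGBA assumed):
    bytes_4k = 4096 * 4096 * 4  # ~64 MB per 4K map
    bytes_2k = 2048 * 2048 * 4  # ~16 MB per 2K map
    est_gb = (270 * bytes_4k + 107 * bytes_2k) / 1024**3
    print(f"270 x 4K + 107 x 2K maps ~ {est_gb:.1f} GB before compression")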
     

    Post edited by crosswind on
  • pectabyte Posts: 41

    crosswind said:

    A long log... but in most cases our gut feeling is correct:

    Your last rendering attempt was still using the CPU, which means you ran out of VRAM on a single 2080 card when initiating the render of this 'big scene'. The rendering fell back to CPU and it took 2 hours to generate such a 'grainy result', because the Denoising function is deactivated when rendering on the CPU...

    Besides, even though you set max samples to 15000, you did not change the max time of 7200 secs, so your render ended the moment max time was reached (2 hours) and only 4544 samples had been iterated and converged...

    Line 34532 - CUDA out of memory...(device 1)
    Line 38078 - denoiser is not available on CPU and will be disabled.
    Line 38086 - 3407 texture maps ~ (mostly 4K , 2K)
    Line 38256 - Maximum render time exceeded ~ (4544 iterations updated on Canvas after 7200 secs.)

     

    So what do you recommend I do? Talk to me like I'm 12 years old. :D

  • PerttiA Posts: 9,457

    Ok, the card has just 8GB of VRAM and you are running W10; that means you have around 4GB of VRAM available for Iray rendering. The log shows that DS is not even trying to use the GPU.

    Based on my own renderings, one G8 character with clothing and hair requires some 700MB to 1.5GB of VRAM, and I mostly use hair and clothing that are light on resources.
    Remove 4 characters from the scene and see if it renders on the GPU this time.

    The RTX 3060 12GB could probably handle that scene, depending on how resource-heavy the clothing and hair you have used are.
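    To put those figures together, here is a small sketch of the VRAM budget this post is describing, using only the estimates given in this thread (roughly 4GB usable for Iray on an 8GB card under W10, and 0.7 - 1.5GB per clothed, haired G8 figure); these are the thread's rough estimates, not measured values:

    # VRAM budget sketch using the estimates from this post (not measured values).
    usable_vram_gb = 4.0                     # ~4 GB left for Iray on an 8 GB card under W10
    per_char_low, per_char_high = 0.7, 1.5   # GB per clothed/haired G8 figure (estimate)

    for chars in range(1, 8):
        low, high = chars * per_char_low, chars * per_char_high
        verdict = "fits" if high <= usable_vram_gb else "probably will not fit"
        print(f"{chars} character(s): {low:.1f}-{high:.1f} GB for figures -> {verdict}")

    # With 7 characters the figures alone are ~4.9-10.5 GB, before the cabin,
    # its textures, and the Iray working set -- which is why the render drops to CPU.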

  • crosswind Posts: 4,740
    edited April 2023

    pectabyte said:

    crosswind said:

    A long log... but in most cases our gut feeling is correct:

    Your last rendering attempt was still using the CPU, which means you ran out of VRAM on a single 2080 card when initiating the render of this 'big scene'. The rendering fell back to CPU and it took 2 hours to generate such a 'grainy result', because the Denoising function is deactivated when rendering on the CPU...

    Besides, even though you set max samples to 15000, you did not change the max time of 7200 secs, so your render ended the moment max time was reached (2 hours) and only 4544 samples had been iterated and converged...

    Line 34532 - CUDA out of memory...(device 1)
    Line 38078 - denoiser is not available on CPU and will be disabled.
    Line 38086 - 3407 texture maps ~ (in which 270+ 4K maps, 107+ 2K maps, etc. These maps probably consumed around 13 GB - 15 GB VRAM at least, even after compression)
    Line 38256 - Maximum render time exceeded ~ (4544 iterations updated on Canvas after 7200 secs.)

    If you wanna use the GPU to render this scene, better to first tick both cards in the devices list. If it still falls back to CPU, you may try Scene Optimizer...

     

    So what do you recommend I do? Talk to me like I'm 12 years old. :D

    I just tried to give you more clues and info. :D If you can judge the situation well and understand these mechanisms, you're gonna have a better chance of making it render on your 2 GPUs ~

    - The Iray engine will consume 2 - 2.2GB of VRAM on your display card (device 1), so roughly you'll have 13GB of VRAM left when you tick 2 cards for rendering
    - Install GPU-Z and use it to frequently monitor VRAM used, GPU load, etc. (a command-line alternative is sketched below)
    - Optimize your scene by all means - try Scene Optimizer, Iray Resource Saver, etc.; avoid using HD figures and high-definition wearables / environments...
    - If you still cannot make it work, I'm afraid you'll have to cut the number of characters in your crew...
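    As a command-line alternative to GPU-Z, here is a minimal sketch of polling per-GPU VRAM usage with nvidia-smi from Python. nvidia-smi ships with the NVIDIA driver; the query fields used below are standard, but verify them against nvidia-smi --help-query-gpu on your own system:

    # Poll per-GPU VRAM usage while a render runs, via nvidia-smi.
    import subprocess
    import time

    QUERY = ["nvidia-smi",
             "--query-gpu=index,name,memory.used,memory.total",
             "--format=csv,noheader,nounits"]

    def report_vram() -> None:
        out = subprocess.run(QUERY, capture_output=True, text=True, check=True).stdout
        for line in out.strip().splitlines():
            idx, name, used, total = [field.strip() for field in line.split(",")]
            print(f"GPU {idx} ({name}): {used} / {total} MiB used")

    if __name__ == "__main__":
        while True:            # print a snapshot every 30 seconds
            report_vram()
            time.sleep(30)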

    Post edited by crosswind on
  • PerttiA Posts: 9,457
    edited April 2023

    crosswind said:

    If you wanna use GPU to render this scene, better first tick 2 cards in devices list. If it's still rolled back to CPU, you may try Scene Optimizer...

    Two cards don't help unless they are connected with NVLink; that is the only way to get the cards to 'combine' their VRAM, and even then only for the textures.

    Post edited by PerttiA on
  • crosswind Posts: 4,740
    edited April 2023

    PerttiA said:

    crosswind said:

    If you wanna use the GPU to render this scene, better to first tick both cards in the devices list. If it still falls back to CPU, you may try Scene Optimizer...

    Two cards don't help unless they are connected with NVLink; that is the only way to get the cards to 'combine' their VRAM, and even then only for the textures.

    Of course 2 cards will help, as the VRAM consumption will be allocated to both of them. I've been using 3 cards for a long time... NVLink is just for mem pooling, nothing else is really helpful~ I tested a lot and then sold it~~

    SNAG-2023-4-19-0031.png
    458 x 965 - 49K
    SNAG-2023-4-19-0032.png
    458 x 965 - 45K
    SNAG-2023-4-19-0033.png
    458 x 965 - 45K
    Post edited by crosswind on
  • PerttiA Posts: 9,457

    crosswind said:

    PerttiA said:

    crosswind said:

    If you wanna use the GPU to render this scene, better to first tick both cards in the devices list. If it still falls back to CPU, you may try Scene Optimizer...

    Two cards don't help unless they are connected with NVLink; that is the only way to get the cards to 'combine' their VRAM, and even then only for the textures.

    Of course 2 cards will help, as the VRAM consumption will be allocated to both of them. I've been using 3 cards for a long time... NVLink is just for mem pooling, nothing useful.

    The scene has to fit on each of the GPUs used independently. If both cards have only 8GB of VRAM, that doesn't increase the total VRAM to 16GB; you simply have 8GB on each of two different cards.

  • crosswind Posts: 4,740
    edited April 2023

    All texture-related consumption will be allocated across all cards on average; Iray engine and geometry consumption will be on the main card (display card) ~

    Post edited by crosswind on
  • PerttiA Posts: 9,457

    Several cards will render faster than one card alone, but it will not increase the available VRAM unless the cards are joined with NVLink.

  • crosswind Posts: 4,740

    PerttiA said:

    Several cards will render faster than one card alone, but it will not increase the available VRAM unless the cards are joined with NVLink.

    Funny thing ~ I've already posted screenshots above ~ Why don't you check some threads or videos on YouTube to understand more~

    like this: https://www.daz3d.com/forums/discussion/617831/is-it-worth-having-2x3090s-in-nvlink-when-it-does-not-work-in-daz#latest and his video ~~

  • crosswind Posts: 4,740
    edited April 2023

    PerttiA said:

    crosswind said:

    PerttiA said:

    crosswind said:

    If you wanna use the GPU to render this scene, better to first tick both cards in the devices list. If it still falls back to CPU, you may try Scene Optimizer...

    Two cards don't help unless they are connected with NVLink; that is the only way to get the cards to 'combine' their VRAM, and even then only for the textures.

    Of course 2 cards will help, as the VRAM consumption will be allocated to both of them. I've been using 3 cards for a long time... NVLink is just for mem pooling, nothing useful.

    The scene has to fit on each of the GPUs used independently. If both cards have only 8GB of VRAM, that doesn't increase the total VRAM to 16GB; you simply have 8GB on each of two different cards.

    Wrong ~ It's a calculation done by the engine: the total tex. consumption will be allocated across all cards on average, just simple mathematics... Have you ever done any similar test?

    Post edited by crosswind on
  • PerttiA Posts: 9,457

    crosswind said:

    PerttiA said:

    Several cards will render faster than one card alone, but it will not increase the available VRAM unless the cards are joined with NVLink.

    Funny thing ~ I've already posted screenshots above ~ Why don't you check some threads or videos on YouTube to understand more~

    like this: https://www.daz3d.com/forums/discussion/617831/is-it-worth-having-2x3090s-in-nvlink-when-it-does-not-work-in-daz#latest and his video ~~

    Your screenshots only show that all the cards were used; I wasn't saying they would not be used. DS does use all the cards available if the scene fits independently in the VRAM of each card, and that makes rendering faster, but if all the cards have too little VRAM to fit the scene, the scene renders on CPU.

    As I said on that thread, there were posts on the benchmarking thread from people with working setups with NVLink. Sometimes one just needs to find the right combination of drivers, OS and software. Insisting on using the latest of everything is not always the best choice, or even workable.

  • crosswind Posts: 4,740
    edited April 2023

    PerttiA said:

    crosswind said:

    PerttiA said:

    Several cards will render faster than one card alone, but it will not increase the available VRAM unless the cards are joined with NVLink.

    Funny thing ~ I've already posted screenshots above ~ Why don't you check some threads or videos on YouTube to understand more~

    like this: https://www.daz3d.com/forums/discussion/617831/is-it-worth-having-2x3090s-in-nvlink-when-it-does-not-work-in-daz#latest and his video ~~

    Your screenshots only show that all the cards were used; I wasn't saying they would not be used. DS does use all the cards available if the scene fits independently in the VRAM of each card, and that makes rendering faster, but if all the cards have too little VRAM to fit the scene, the scene renders on CPU.

    As I said on that thread, there were posts on the benchmarking thread from people with working setups with NVLink. Sometimes one just needs to find the right combination of drivers, OS and software. Insisting on using the latest of everything is not always the best choice, or even workable.

    It's not that the scene has to fit independently in the VRAM of each card; the engine will first split the total tex. consumption for allocation. As long as the average share does not exceed the remaining VRAM of any single card, it'll go ahead and render on all GPUs. That's why he may have a chance to render a scene with 12-13GB of VRAM consumption by using two 2080s.

    In other words, as long as the total VRAM consumption (engine + geo + tex.) does not exceed the total remaining VRAM of the ticked cards, normally the rendering will not fall back to CPU. And that's why we'd better observe the VRAM used from time to time and do some simple calculation beforehand ~
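    To make the disagreement concrete, here is a small sketch of the two fit rules being argued over in this exchange: the 'whole scene must fit on every card' rule on one side, and the 'textures split across cards, engine + geometry on the main card' model described above on the other. It only illustrates the two claims as stated in this thread; it is not a statement of how Iray actually allocates memory:

    # The two competing fit rules from this thread, expressed as functions.
    def fits_whole_scene_per_card(engine_gb, geo_gb, tex_gb, cards_free_gb):
        """Rule A: the full scene must fit on every card used."""
        scene = engine_gb + geo_gb + tex_gb
        return all(free >= scene for free in cards_free_gb)

    def fits_with_texture_split(engine_gb, geo_gb, tex_gb, cards_free_gb):
        """Rule B: textures are split across cards on average;
        engine + geometry stay on the main (display) card, index 0."""
        tex_share = tex_gb / len(cards_free_gb)
        main_ok = cards_free_gb[0] >= engine_gb + geo_gb + tex_share
        others_ok = all(free >= tex_share for free in cards_free_gb[1:])
        return main_ok and others_ok

    # Example: two cards with ~7 GB free each, 2 GB engine, 1 GB geometry, 6 GB textures.
    cards = [7.0, 7.0]
    print(fits_whole_scene_per_card(2.0, 1.0, 6.0, cards))  # False: 9 GB > 7 GB per card
    print(fits_with_texture_split(2.0, 1.0, 6.0, cards))    # True: main card needs 6 GB, second 3 GB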

    Post edited by crosswind on
  • crosswind said:

    PerttiA said:

    crosswind said:

    PerttiA said:

    Several cards will render faster than one card alone, but it will not increase the available VRAM unless the cards are joined with NVLink.

    Funny thing ~ I've already posted screenshots above ~ Why don't you check some threads or videos on YouTube to understand more~

    like this: https://www.daz3d.com/forums/discussion/617831/is-it-worth-having-2x3090s-in-nvlink-when-it-does-not-work-in-daz#latest and his video ~~

    Your screenshots only show that all the cards were used; I wasn't saying they would not be used. DS does use all the cards available if the scene fits independently in the VRAM of each card, and that makes rendering faster, but if all the cards have too little VRAM to fit the scene, the scene renders on CPU.

    As I said on that thread, there were posts on the benchmarking thread from people with working setups with NVLink. Sometimes one just needs to find the right combination of drivers, OS and software. Insisting on using the latest of everything is not always the best choice, or even workable.

    It's not that the scene has to fit independently in the VRAM of each card; the engine will first split the total tex. consumption for allocation. As long as the average share does not exceed the remaining VRAM of any single card, it'll go ahead and render on all GPUs. That's why he may have a chance to render a scene with 12-13GB of VRAM consumption by using two 2080s.

    In other words, as long as the total VRAM consumption (engine + geo + tex.) does not exceed the total remaining VRAM of the ticked cards, normally the rendering will not fall back to CPU. And that's why we'd better observe the VRAM used from time to time and do some simple calculation beforehand ~

    Each card works independently: the main Iray engine sends different passes to each card as a complete set of data but with different starting vectors for the paths, and combines the results just as it does for the passes sent sequentially to a single card. Memory is shared, for materials, only when cards are paired via NVLink.

  • crosswind Posts: 4,740
    edited April 2023

    VRAM on multiple cards can be shared for materials / textures whether there's NVLink or not... plenty of tests have proved that; the difference may just be in the way of 'memory pooling' + a performance drop. But it seems that 'memory pooling' cannot be seen on consumer cards nowadays, even with NVLink attached / SLI on~, and the performance issue depends on various factors...

    I've been using workstation cards and have never seen 'significantly lower performance' without NVLinks. I finally sold the links 1.5 years ago ~ Peer-to-peer over PCIe 4 / 5 is fast.. and NVLink is gone on Ada L.. This doc. needs to be brought up to date as well..

     

    Post edited by crosswind on
  • PerttiA Posts: 9,457

    https://forums.developer.nvidia.com/t/rtx-a6000-ada-no-more-nv-link-even-on-pro-gpus/230874

    "NVLink is far from dead. If you check out the Information on our pages 70 you will see that the technology is still in use and developing.
    The fact that it is not supported on Desktop or Workstation GPUs any longer does not mean the technology is discontinued."

    Returning to what I said about using the latest of everything...

    But all of this is beside the point, which is that the OP doesn't have enough VRAM on their GPUs to render that heavy scene on them.
