Are renders only iteration-based or are there different processes involved?

Still fairly new to Daz here, and wondering if "fully" rendering an image is always necessary for the small details.

My question comes from observing my renders: in the first iterations, the hair on my models tends to have "white dots" on it. These dots disappear after more iterations, however.

Is this merely the result of more iterations, or does Daz run a separate process to smooth the hair out?

I'm asking because I've started rendering at very large resolutions. Looking at the render and the render time, I sometimes feel that before the progress bar even moves from 0% to 1%, the render preview is already of fairly high quality.

I've considered capping the render time at 20 minutes, because at higher resolutions I'm finding that the render just hangs for hours on end without any noticeable increase in quality.

So my question is: if I cap render times at 20 minutes, could I potentially lose out on any smoothing processes, or anything else of that sort?

Comments

  • TheKD Posts: 2,674

    If it looks good enough to you, it's done. The more complex the calculations are, the longer it will potentially take to clear up. Things like hair have translucency; that's more complex to calculate than something that just has reflection. The more iterations you run, the more diminishing the returns are. It's rare that I need more than 5000 iterations for a finished image.
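
    Those diminishing returns have a simple origin: progressive path tracers like Iray refine the image stochastically, and Monte Carlo noise falls off roughly as one over the square root of the iteration count. A minimal sketch of that relationship (the 1/sqrt(N) falloff is generic Monte Carlo behavior, not an Iray-specific figure):

```python
import math

def relative_noise(iterations):
    # Monte Carlo error shrinks as 1/sqrt(N): noise level
    # relative to a single-iteration render.
    return 1.0 / math.sqrt(iterations)

# Each halving of the remaining noise costs 4x the iterations,
# hence the diminishing returns.
for n in [100, 400, 1600, 6400]:
    print(f"{n:>5} iterations -> relative noise {relative_noise(n):.4f}")
```

    Going from 100 to 400 iterations halves the noise, but buying the same halving again from 1600 takes 6400 iterations.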

  • onix Posts: 282
    edited July 2019

    You are talking about iray render, 3delight works in one pass.

    In essence, you can think of Iray as taking a photo under very, very low light. If you try to take a photo with your camera in the darkness, you will get a similarly noisy picture. You can improve it by increasing the exposure of a single picture or by stacking multiple pictures on top of each other.

    Current Iray with the AI denoiser fixes the noise problem, so you only need to render long enough for the picture to become detailed enough. Big pictures usually don't contain that much detail, so there's no need for a long render time. Small pictures, on the other hand, are much more detailed, so they need more time, but they also render faster, which evens out the total time.

    If your render time is too short on a big picture, it will still look pretty good, but you will lose some details like reflections, shadows, and highlights, or even parts of the picture in the dark areas. So hair looks pretty dull.

    In theory, you can't have a universal render time, because every picture needs a different amount depending on its detail and lighting. So there's no point in limiting render time; just stop when you believe it looks good enough. Sometimes that's 3 minutes, sometimes it will take 30.

     

  • Alright, thanks for the answers!

  • "Are renders only iteration-based or are there different processes involved?"

    Iterations/samples are simply a measure of the number of times a render is refined; they have no bearing on the final render unless set too low, in which case the render will stop before reaching convergence, if Render Quality is enabled. Disable Render Quality and the render will continue until it reaches the iteration count.

    "Still fairly new to Daz here, and wondering if """fully""" rendering an image is always necessary for the "small"-details."

    This will depend on the "quality" of render you want. While a lower convergence/iteration count can result in an image that is "good enough", it will still have flaws, and they may be evident enough to detract from the overall render, especially to someone who knows what to look for.

     

    White Dots and why they disappear.

    The white dots you are seeing are commonly referred to as "fireflies", and they generally disappear with more iterations/higher convergence as those samples are recalculated.

    They are not limited to hair and may actually reappear in later iterations, depending on whether the render engine messes up a calculation.

     

    "Looking at the render and the time render, sometimes I feel like that before the progression bar even moves from 0% to 1%, the render preview is already of fairly high quality"

    Not to discount your experiences, but that's a bit hard to believe.

    I've attached a couple of quick test renders, all default settings, with a G8F, Toulouse hair, and the H&C school uniform.

    At 1% convergence, viewed at 100% scale, the marked areas make it fairly evident that the image hasn't cooked long enough.

    At 95% convergence you have to zoom to about 200% to see the flaws in certain areas.

    At 100% convergence, certain flaws are still present, but you have to zoom to ~400% to really find them.

     

    What you may be experiencing is related to screen size and resolution. Take a 1080p image and put it on a phone, then put the same image on a 40" monitor. The phone screen, even at the same resolution, will hide many errors, whereas the 40" will show every error.

    Just spitballing.

     

    "I've considered maximizing the render time to 20 minutes, because i'm experiencing (at higher resolutions) that the render just hangs for hours rendering on end without any noticable increase in quality."

    Test it out and see whether it works. Set up a scene like the one attached, render at the 20-minute cap, and note the convergence and iteration counts. Then set the time to 0 (unlimited) and let it render to 95% convergence and to 100% convergence. Put the renders side by side, look for flaws in each and differences between the three, and see which looks best to you. Be sure you are viewing at 100% of the render size; anything smaller will hide the errors.

    It'll take some time to train your eye to spot these things but it'll be well worth the investment so you can produce better renders in the future.

     

    "So my question is, if I max render times to 20 minutes, could i potentially lose out on any smoothening processes or anything else of those sorts, by doing so?"

    Yes, no and maybe.

    Your scene composition will be the biggest determining factor, with render settings, shader settings, and lighting also playing a part. A single character lit with an HDRI may not even take that long, whereas a bar scene with 20-30 characters may take days, if not a week or more, to render.

     

    TheKD's response:

    "Things like hair have translucency it's more complex to calculate than something that just has reflexion."

    Did you mean transparency instead of translucency? Translucency is kind of hit or miss on hair products as to whether it's used, whereas transparency is fairly common (fiber-mesh hair excluded) and does affect render times due to the complexity.

    Even if you do mean translucency, I find it to be less computationally intensive than reflections.

    Of course, it's always about the scene composition, the complexity of the items being reflected, and the lighting, render, and shader settings of the scene.

     

    "The more iterations you run, the more diminishing the returns are. It's rare I need more than 5000 iterations for a finished image."

    This is going to be completely scene-dependent. While a single character lit with an HDRI might not benefit from higher iteration counts, a larger scene, whether in pixel count and/or number of assets, may require a much, much higher iteration count just to produce something that isn't a pixelated mess.

    My 4K renders average in the mid-20k to high-30k iterations to reach 100% convergence. 95% convergence is generally still too pixelated for my taste.

     

    onix's response:

    "You are talking about iray render, 3delight works in one pass."

    Unless you turn on progressive render in the render settings, in which case 3delight works just like iray.

    It's a bit better to have it set to progressive, as you get more of the scene rendered sooner, which helps you decide whether to spend the time rendering or adjust something first.

    And it doesn't, generally, add anything to the overall render time.

     

    "In essence, you can understand Iray as making photo under very very low light. "

    Not even close to true.

    The render settings, specifically tone mapping, are consistent with exterior, overcast conditions.

    F/stop: 8, ISO: 100, Shutter: 128.

    For low-light situations, i.e. interiors without natural light:

    F/stop: 2-4, ISO: 400-800, Shutter: 60-80
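
    For reference, those camera settings map to a scene brightness via the standard photographic exposure-value formula, EV = log2(N² × shutter) − log2(ISO/100), with the shutter expressed as a reciprocal speed the way the tone-mapping pane shows it. A quick sketch (the EV interpretation is standard photography and the "overcast"/"indoor" readings come from the usual EV charts, not from anything Daz documents):

```python
import math

def exposure_value(f_stop, shutter, iso):
    # shutter is the reciprocal shutter speed (128 means 1/128 s),
    # matching how the tone-mapping settings express it.
    return math.log2(f_stop ** 2 * shutter) - math.log2(iso / 100)

# Default tone mapping: f/8, ISO 100, 1/128 s
print(exposure_value(8, 128, 100))   # 13.0 -> bright overcast / open shade
# A plausible interior setup: f/2.8, ISO 400, 1/60 s
print(exposure_value(2.8, 60, 400))  # ~6.9 -> typical indoor lighting
```

    So the defaults really do sit at outdoor-daylight exposure, which is why unlit interiors come out dark until you open up the virtual camera.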

    If you test with tone mapping off, all you get is a mostly white image that will never hit convergence.

    See attached.

    To get any kind of result I had to reduce the intensity to 0.0003.

    See attached.

     

    "Current Iray with AI denoiser fixes problems of noisy pictures, so you only need to render long enough until the picture becomes detailed..."

    The denoiser's usefulness is completely dependent on the scene composition and individual requirements.

    In the testing I've been doing, with the included denoiser as well as the standalone Intel and NVIDIA denoisers, the results are way too soft and too many details are lost.

    Hard surfaces seemed to produce better results, while humans just resulted in a smudged looking image.

    The other problem is deciding when to stop the render.

    I tested various iteration counts (100, 200, etc.), and just in testing those, it took longer to find a point where the denoiser worked reasonably well than the render by itself would have taken.

     

    "... big pictures usually do not contain that much of detail so there is no need for long render time. Small pictures, on the other hand, are way more detailed so they need more time but they also render faster kinda evening the total time."

    If you are referring to the number of items in a scene as opposed to just the render size: sorta, but not always.

    A single character lit with an HDRI may take less time than a street scene with 20+ characters, buildings, vehicles, and various other objects, or it may not.

    You have to take everything into consideration: render settings, shader settings, lighting, camera, etc. Changing one aspect of a particular scene can cause the render time to increase exponentially.

    Render that single character at 16K (15360x8640) and it will, more than likely, take longer than the street scene rendered at 1000x1000.

     

    When it comes to render size, the more pixels, the longer it will take. The general rule of thumb is that with each doubling of dimensions there is a 4x increase in render time. It varies with scene composition, but holds fairly true.
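
    Since that rule of thumb is just linear scaling in pixel count (doubling both dimensions quadruples the pixels), you can ballpark a big render from a small test render. A rough sketch (the linearity is the rule of thumb itself; real scenes deviate):

```python
def estimate_render_seconds(base_seconds, base_px, new_px):
    # Rule of thumb: render time scales with pixel count, so doubling
    # both dimensions (4x the pixels) costs ~4x the time.
    return base_seconds * (new_px[0] * new_px[1]) / (base_px[0] * base_px[1])

# A 1000x1000 test render took 10 minutes (600 s);
# the same scene at 2000x2000 should take around 40 minutes.
print(estimate_render_seconds(600, (1000, 1000), (2000, 2000)) / 60)  # 40.0
```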

     

    "If your render time is too short on the big picture, it will still look pretty good, but you will lose some details like reflections, shadows highlights or even parts of the picture in the dark areas"

    Again, what do you mean by "Big Picture"?

    Honestly, I'm still trying to figure out what you may be doing to produce the results you're describing.

    At worst, the image may be blown out (bloom filter enabled) or very pixelated (grainy), but I've never seen results like what you're describing.

     

    "... it can be sometimes 3 minutes sometimes it will take 30."

    I hope you meant 30 days on that, because that can happen.

    Depending on your hardware, scene composition, and every other factor, you could be looking at anywhere from minutes to weeks.

     

     

    In closing.

    Take everything presented in this and every other forum post with a grain of salt. We are all basing our suggestions on our personal experience, training, hardware, and taste.

    What works in one case, won't work in another. Try out the suggestions and see what does and doesn't work for you and your personal style.

    This is art, make a pretty picture.

     

     

    1%edit.png
    2000 x 2000 - 2M
    95%edit.png
    2000 x 2000 - 2M
    100%.png
    2000 x 2000 - 1M
    tone map off.png
    2000 x 2000 - 443K
    0.0003 intensity.png
    2000 x 2000 - 1M
  • dijitul Posts: 146

    To add to all this, your Iray render times will increase significantly if you enable Depth of Field (which introduces blur, and blur needs more samples to smooth out) or if you enable Caustics. The denoiser can be helpful, but it is not discriminating: it will smooth out more than just noise, especially slight details in textures.

    I do wish the Iray engine could discern areas where blur is introduced and prioritize rendering those areas later in the process, or that DAZ could add a feature where you marquee areas of a stopped render and continue rendering only there.

    Hope this helps.

  • Paradigm Posts: 421

    [...] or if DAZ could add a feature where you can marquee areas to a stopped render and continue rendering only in that area.

    Hope this helps.

    You can do essentially that with just the smallest amount of postwork. Cancel the render but do not close the render window, and save the last render, which will be where it stopped. This is also saved in "C:/Users/yourname/AppData/Roaming/DAZ 3D/Studio4/temp/render".

    From there, continue the render, and when you're happy, just merge them in Photoshop or GIMP with all the feathering and marqueeing you want!

  • onix Posts: 282

    "Are renders only iteration-based or are there different processes involved?"

    Itterations/samples are simply a measure of the number of times a render is refined, they have no bearing on the final render, unless set too low, in which case the render will cease before reaching convergence, if render quality is enabled. Disable render quality and the render will continue until it reaches the itteration count.

    "Still fairly new to Daz here, and wondering if """fully""" rendering an image is always necessary for the "small"-details."

    This will depend on the "quality" of the render you want. While a lower convergence/itteration count can result in an image that is "good enough", it will still have flaws and may be evident enough to detract from the overall render. Especially to someone that knows what to look for.

     

    White Dots and why they disappear.

    The white dots you are seeing are commonly referred to as "FireFlies" and they, generally, disappear with more itterations/higher convergence as they have been recalculated.

    They are not limited to hair only and may actually reappear in later itterations, depending on if the render engine messes up a calculation.

     

    "Looking at the render and the time render, sometimes I feel like that before the progression bar even moves from 0% to 1%, the render preview is already of fairly high quality"

    Not to discount your experiences, but that's a bit hard to believe.

    I've attached a couple of quick test renders, all default settings with a G8F, toulouse hair and H&C school uniform.

    At 1% convergence and 100% scale for viewing, the areas marked are failry evident that they haven't cooked long enough.

    At 95% convergence you have to zoom to about 200% to see the flaws in certain areas.

    At 100% convergence, certain flaws are still present, but you have to zoom to ~400% to real find them.

     

    what you may be experiencing is related to screen size and screen resolution. take a 1080 image and put it on a phone, then put the same image on a 40" monitor. The phone screen, even if at the same resolution, will hide many errors, whereas the 40" will show every error.

    just spitballing.

     

    "I've considered maximizing the render time to 20 minutes, because i'm experiencing (at higher resolutions) that the render just hangs for hours rendering on end without any noticable increase in quality."

    Test it out and see if it works or not. Set up a scene, like the attached, render at the 20 min, note the convergence and itteration counts, then 0 the time(unlimited) and let it render to 95% convergence and 100% convergence, then put the renders side by side and look for flaws in each and differences between the three and see which looks the best to you. Be sure you are at 100% size of the render, anything smaller will hide the errors.

    It'll take some time to train your eye to spot these things but it'll be well worth the investment so you can produce better renders in the future.

     

    "So my question is, if I max render times to 20 minutes, could i potentially lose out on any smoothening processes or anything else of those sorts, by doing so?"

    Yes, no and maybe.

    Your scene composition will be the biggest determining factor, with render settings and shader settings and lighting also playing factors. A single character lit with an HDRI may not even take that long, where as a bar scene with 20-30 characters may take days if not a week or more to render.

     

    The kd response:

    "Things like hair have translucency it's more complex to calculate than something that just has reflexion."

    Did you mean transparency instead of translucency? Translucency is kind of hit or miss on hair products, as to whether it's used or not, where as transparency is fairly common, fiber mesh excluded, and does effect render times due to the complexity.

    Even if you do mean translucency, i find it to be less computationally intensive than reflections.

    of course it's always about the scene composition and the complexity of items being reflected and the lighting, render and shader settings of the scene.

     

    "The more iterations you run, the more diminishing the returns are. It's rare I need more than 5000 iterations for a finished image."

    This is going to be completely scene dependent. While a single character lit with an hdri might not benefit from higher itterations, a larger scene, whether pixel count and/or number of assets in said scene, may require a much much higher itteration count just to produce something that isn't a pixelated mess.

    My 4k renders are in the mid 20k to high 30k itterations on average to reach 100% convergence. 95% convergence is still too pixelated for my taste in general.

     

    ONiX response:

    "You are talking about iray render, 3delight works in one pass."

    Unless you turn on progressive render in the render settings, in which case 3delight works just like iray.

    It's a bit better to have it set to progressive as you get more of the scene rendered faster to determine if you want to spend the time rendering or need to adjust something.

    And it doesn't, generally, add anything to the overall render time.

     

    "In essence, you can understand Iray as making photo under very very low light. "

    Not even close to true.

    The render settings, specifically tone mapping, are consistent with exterior, overcast conditions.

    F/stop:8, ISO:100 Shutter:128.

    For low light situations, i.e. interiors without natural light:

    F/stop:2-4, ISO:400-800 shutter:60-80

    If you test with tone mapping off, all you get is a mostly white image, that will never hit convergence.

    See attached.

    To get any kind of results i had to reduce the intensity to .0.0003

    see attached

     

    "Current Iray with AI denoiser fixes problems of noisy pictures, so you only need to render long enough until the picture becomes detailed..."

    The denoiser's usefullness is compeletely dependent on the scene composition and individual requirements.

    In the testing i've been doing, with the included denoiser as well as the stand alone intel and nvidia, the results are way too soft and too many details are lost.

    Hard surfaces seemed to produce better results, while humans just resulted in a smudged looking image.

    The other problem is deciding when to stop the render.

    I tested various itteration counts, 100, 200 etc, and just in testing those it took longer to find a point that was even viable for the denoiser to work somewhat well than the render by itself would take.

     

    "... big pictures usually do not contain that much of detail so there is no need for long render time. Small pictures, on the other hand, are way more detailed so they need more time but they also render faster kinda evening the total time."

    If you are referring to the number of items in a scene as opposed to just the render size, sorta, but not always.

    A single character lit with an HDRI may take less time than a street scene with 20+ characters, buildings, vehicles and various and sundry other objects,, or it may not.

    You have to take everything into consideration, render settings, shader settings, lighting, camera, etc. etc. changing one aspect of a particular scene can cause the scene to exponentially increase in render time.

    Render that single character at 16k(15360x8640) and it, more than likely, will take longer than the street scene rendered at 1000x1000.

     

    When it comes to render size, the more pixels the longer it will take. The general rule of thumb is that with each doubling of size there is a 4x increase in render time. It is variable depending on scene composition, but holds fairly true.

     

    "If your render time is too short on the big picture, it will still look pretty good, but you will lose some details like reflections, shadows highlights or even parts of the picture in the dark areas"

    Again, what do you mean by "Big Picture"?

    Honestly i'm still trying to figure out what you may be doing that is producing the results you're talking about.

    At worst the image may be blown out(bloom filter enabled) or it will be very pixelated(grainy), but i've never seen results like what you're talking about.

     

    "... it can be sometimes 3 minutes sometimes it will take 30."

    I hope you meant 30 days on that, cause that can happen.

    Depending on your hardware, scene composition and every other factor, you could be looking at minutes, to weeks.

     

     

    In closing.

    Take everything presented in this and every other forum post, with a grain of salt. We are all basing our suggestions on our personal experience, training, hardware and taste.

    What works in one case, won't work in another. Try out the suggestions and see what does and doesn't work for you and your personal style.

    This is art, make a pretty picture.

     

     

    Looks like you are using some other terminology:

    The Iray renderer essentially replicates the physical process of emitting and capturing photons (it's probably traced in the reverse direction), so each of those grains is essentially one virtual photon that bounced into the camera. The more photons you capture, the more quality you get, same as in real photography.

    You can ramp the sensitivity up to any level and the result will be a brighter or darker picture, but it will be equally noisy.

    Convergence is a fairly blunt measure, as it probably just tracks how much the picture is changing over time; once it stops changing, the render is assumed to be done.

    It is also pretty tricky to use, because if your picture is mostly lit by an HDRI, which converges in one go, but has one badly lit item, you will quickly reach 90% convergence even though that one item still looks pretty bad, and then it will be stuck at that percentage for ages. So the better way is not to rely on convergence, but to decide visually when it is enough.
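
    That "stuck at 90%" behavior is easy to see with a toy model in which each pixel counts as converged once its estimate stops changing. The iteration thresholds below are invented purely for illustration; Iray's actual convergence test is internal:

```python
def converged_fraction(easy_pixels, hard_pixels, iterations):
    # Easy pixels (direct HDRI light) settle almost immediately;
    # hard pixels (a badly lit object) keep fluctuating for thousands
    # of iterations. Both thresholds are made up for illustration.
    done = easy_pixels if iterations >= 5 else 0
    done += hard_pixels if iterations >= 5000 else 0
    return done / (easy_pixels + hard_pixels)

# A frame that is 90% sky/HDRI and 10% badly lit object:
for n in [10, 100, 1000, 5000]:
    print(n, converged_fraction(900, 100, n))
```

    The readout jumps to 0.9 almost instantly and then sits there until the hard pixels finally settle, matching the "quickly reach 90%, then stuck for ages" experience.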

    Even without the denoiser, a picture can converge quickly but still be missing detail. Light can bounce in several steps: one-step bounces are very quick and rapidly produce a clean picture that resembles 3Delight, but it will be missing all those reflections and subtle details. If you render for longer, you start noticing more details appearing, like reflections in the eyes and in the hair.

    To make rendering faster, you can strategically place light sources so that every point of a visible surface gets a one-step bounce, and rendering will be super quick, but the result will be very plain and primitive. Using indirect light is where you get all that beauty, but also extreme render times.

    If your picture takes more than 30 minutes to render on a typical 1080 card, you probably need to adjust the lighting and introduce more light sources. If you don't have a proper GPU, it is better to stick with 3Delight, which will take about 3-5 minutes per picture.

    Sometimes you can cheat a bit and use emissive surfaces, which, like an HDRI, render almost instantly noise-free.

    "Picture size" here refers to the proportion between the picture's size and the size of the meaningful 2D features in it. In the same way, you can take some reference and scale it up, making it big without changing its complexity, or scale it down and start losing information.

    Now, about the AI denoiser:

    The denoiser does not care about 3D items or geometry; it cares about 2D picture complexity, in terms of lines, color gradients, and the other things you would be required to draw if you were drawing it by hand.

    It essentially recognizes various features in the picture and preserves them while removing the stuff it cannot understand. So if you have a picture with one line in empty space, the AI can recognize that line pretty well from just a few dots, but if you have many intersecting lines, they are much harder to reconstruct, because you don't know which dot belongs to which line.

    So if you make a small picture, it will come out washed out, but if you make a big picture, denoise it, and then scale it down, it will look better.

    Another problem is with color gradients. If we have a square with a color gradient from one side to the other, that gradient will be hard to recognize in a noisy picture, especially if it is a weak gradient. So the denoiser will recognize that it is a square and paint it all in a single color, but if you let it render for a longer time, you will start noticing that it is not a single color.

    So picture quality greatly depends on how much of that kind of complexity it has.

    If you have a picture with strong gradients and big uniform surfaces, it will be very quick to render, but if you have a picture with many lines and weak gradients, it will take a long time to render in full detail, even if it looks clean very quickly.

     

    The images that look decent at 1% often have incorrect lighting. There is a fast pass, generated from a quick sample of the lighting and just the textures, which is what you first see filling in quickly in a render. Given time, you will see shadows become more correct, along with reflections, distortions, and the blending of edges and colors. The white you see in hair is due to a single photon (light ray) hitting the glossy/reflective sub-pixel component and beaming pure white light back at you. It takes many passes of light to get the actual "pixel" value as you would see it, rather than just that one quickly calculated perfect reflection angle within the pixel.

    The red, green, and blue dots are "fireflies" (mostly red). They are failed photons: incorrect values devoid of the other colors, essentially mathematical errata where a formula fails and the only output is a locked pure-red, pure-green, or pure-blue value. That is what the firefly filter attempts to remove, though some manual post-work is much faster. Eventually, correct light values will replace the erroneous R, G, or B value, unless it is in a place that is dark and no more light enters that pixel space. (White is R + G + B at 100%, which is commonly just a pure reflection surface. If that is the only sub-pixel surface, then that is what you will see; normally it is just one element of a sub-pixel surface.)

    E.g., in one pixel you may have three triangles with a 512x512 texture. A photon just happened to hit a shiny spot first, but eventually more photons will hit non-shiny spots, yielding the true color within that single pixel. This is the situation with hair, which packs a lot of sub-pixel data into a single rendered pixel, since you see through several layers, or you are rendering an HD hair texture but representing it in only a 30x30 pixel space. It takes a few hundred photons to get that pixel's true color, which is why darkly shaded areas have more noise and take longer to reach the "true" color/shade, as do multi-layer transparent areas like hair.
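
    That per-pixel averaging is easy to simulate. Below, a pixel covers mostly dark diffuse hair plus a tiny glossy speck; the brightness numbers are invented for illustration:

```python
import random

random.seed(42)

def sample_pixel(n_samples):
    # 99% of rays hit dark diffuse hair (brightness 0.2); 1% hit a tiny
    # glossy speck (brightness 50.0). One virtual photon per sample.
    total = 0.0
    for _ in range(n_samples):
        total += 50.0 if random.random() < 0.01 else 0.2
    return total / n_samples

# True expected brightness: 0.99 * 0.2 + 0.01 * 50 = 0.698
for n in [1, 10, 100, 100_000]:
    print(n, round(sample_pixel(n), 3))
```

    With a handful of samples the pixel is either dull (0.2) or a blown-out firefly; with many samples it settles near the true 0.698, which is exactly why fireflies fade as iterations accumulate.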

  • nicstt Posts: 11,714

    You are the ultimate arbiter of when an image is done.

    Personally, I don't like 4.11 for its longer render times, and loss of favourites.

    I use the 4.12 beta. It seems to start rendering faster and to complete renders faster too; this is nothing to do with having an RTX card, as I use a 980 Ti for rendering. I get about a 30% speedup.

    ... and quite often more, as I stop some renders earlier than I used to if I'm around during the render. I avoid the AI denoiser due to (IMO) poor skin renders, though it certainly has its uses.

    You'll decide what works for you. What works for others may very well be moot.
