General GPU/testing discussion from benchmark thread


Comments

  • outrider42 Posts: 3,679
    edited July 2019

    Iray has a setting for how many times a light ray can bounce. The default is -1, which is the software's value for infinity. Because the default is infinite, you can sometimes create a scene that seems to take forever to converge as the light just keeps bouncing around. You can reduce this setting for faster renders, but dropping it too low will obviously cause problems, especially with transparent materials.
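    A minimal sketch of how such a bounce cap typically works in a path tracer, using the same -1 = "no limit" convention described above (purely illustrative Python; the scene and material calls are hypothetical placeholders, not Iray's actual API):

        def trace_path(ray, scene, max_path_length=-1):
            # -1 follows the convention described above: no explicit bounce cap.
            throughput = 1.0          # simplified: brightness tracked as one float
            depth = 0
            while max_path_length < 0 or depth < max_path_length:
                hit = scene.intersect(ray)            # hypothetical scene API
                if hit is None:                       # ray escaped the scene
                    return throughput * scene.background(ray)
                throughput *= hit.material.albedo     # each bounce absorbs some light
                ray = hit.material.bounce(ray, hit)   # pick the next direction
                depth += 1
            return 0.0                                # path cut off by the limit

    With the default of -1 the loop only ends when a ray leaves the scene (or is absorbed), which is why heavily enclosed or refractive scenes can take a very long time to converge.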

    Each individual light also has multiple settings that affect the length of the ray being cast, and even how the ray diffuses over distance. You'll find these in the light's parameters tab.

    Everything is based on mathematical equations. Light does not bounce randomly. If it did, we would not be able to simulate it with any accuracy. The reason that light might appear to bounce randomly is because of tiny imperfections in the surface the light strikes. Every piece of matter in existence is simply a glob of other smaller particles, going down to the molecular level. Even the most shiny and smooth surface you can see is not as perfectly smooth as you think it is.

    That's where things like subsurface scattering come into play: these material settings attempt to replicate how light scatters when it strikes a surface. Make no mistake here, it is not random.

    The famous noise seen in Iray is not because of random number games. There are several things that cause it, from your quality setting to white balance and tone.

  • Takeo.Kensei Posts: 1,303
    ebergerly said:
    RayDAnt said:

     

    You don't seem to understand what an Iteration is. By definition, an iteration is a single full completion of some sort of cyclical process. In the case of 3D rendering, the process in question is calculating one sample pixel value for each and every pixel location in a scene's framebuffer - i.e. the same number of samples as there are pixels in the final rendered image. So in the case of the benchmarking scene I constructed (which renders out to a 900x900 pixel square) each and every iteration works out to be:

    900 * 900 = 810,000 pixels = 810,000 samples.

    I tend to agree about what an iteration is, but to expand a bit on samples... 

    In ray tracing/path tracing you send out a ray for each and every pixel in your image to determine a color for that pixel. In its simplest form you just determine what object the ray hit and look up its material color at that point. For a 1920 x 1080 image that comes out to over 2 million pixels. Initially, you send out one ray at the center of each pixel into the scene, and it bounces all over the scene to determine what final color to use for that pixel, based on effects of scene lights and shadows, etc. 

    However, as you can see if you turn the samples down to "1" or something like that, the resulting image looks like garbage, because even a very small pixel still covers a part of an object in the scene that is probably a range of colors. So for example, if "behind" pixel # 1,234,786 in the scene is a flower with a brown stem and bright blue leaf, and the first ray sees the brown stem and colors the pixel brown, it won't reflect the "average" color that pixel should be (ie, mostly bright blue).

    So the idea is to, inside each pixel, send out more "sample" rays at random directions/locations inside that pixel and into the scene. And you get a color from each of those "samples", and when you're done making the set number of samples per pixel (I think Studio/Iray defaults to 5,000?) you average them together to get the single, final pixel color for your image. In its simplest form it's just a "for" loop that calculates random rays inside the bounds of the pixel and sends them into the scene, depending on the number of samples you specify. And with 5,000 samples and 2 million pixels, that's 10 BILLION initial rays, and that doesn't even include the bounces. Of course in real renderers there are things like BVH's to greatly optimize all those ray calculations, but that's the general idea. 
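    A minimal sketch of that per-pixel "for" loop (toy Python; camera.ray_through and trace_ray are hypothetical helpers used for illustration, not Iray functions):

        import random

        def render_pixel(x, y, camera, scene, samples_per_pixel=5000):
            # Average many randomly jittered sample rays inside one pixel.
            total = [0.0, 0.0, 0.0]
            for _ in range(samples_per_pixel):
                u = x + random.random()          # random point inside the pixel
                v = y + random.random()
                ray = camera.ray_through(u, v)   # hypothetical camera helper
                color = trace_ray(ray, scene)    # hypothetical: color seen along one ray
                total = [t + c for t, c in zip(total, color)]
            return [t / samples_per_pixel for t in total]

        # Rough count of initial (camera) rays for a 1920 x 1080 image at 5,000 spp:
        # 1920 * 1080 * 5000 = 10,368,000,000 -- the "10 billion" figure above,
        # before counting any bounces.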

    There's a terminology difference in Iray. In Iray the samples parameter is the number of iterations. If you turn it down to 1, you just get the result of one render pass.

    The samples parameter doesn't control the number of bounces. You have "Max Path Length" for that.

    There is no way to know how many rays were shot unless you have a log from Iray, because it depends on the scene geometry and shaders, and you can't predict the number of bounces on each path. You can only know it for camera rays. If that's what you call initial rays, your estimate is a bit too high.

    ebergerly said:

    Of course, in a real app like Iray there are many optimizations and software decisions made that might affect whether the software decides it needs another sample for any particular pixel or not. For example, if half the pixels are still changing color significantly between samples, but the rest have negligible changes, why bother sending more rays for the non-changing ones? I've never looked inside the Iray code, so I have no idea what it actually does, but I suppose it's possible they may use some logic to stop sampling those pixels that have converged, and maybe use some of those GPU cores for other pixels?

    According to some people, Iray always calculates every pixel even if it has converged, because the Iray programmers thought it was better to conform to some benchmarking rules.
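    For what it's worth, a per-pixel stopping rule of the kind speculated about above would look something like the sketch below (purely illustrative Python; as just noted, Iray reportedly keeps sampling every pixel regardless):

        def update_and_check(running_mean, new_sample, n, threshold=0.001):
            # Fold one more sample into a pixel's running average and report
            # whether the average barely moved (a crude "converged" test).
            updated = running_mean + (new_sample - running_mean) / n
            return updated, abs(updated - running_mean) < threshold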

    Aala said:
    As for your second point, in the latest Iray patch notes we have this:
    • Added a Deep Learning based postprocessing method to determine the quality of a partially converged image. See Programmers Manual section 3.11. It optionally outputs a heatmap visualizing convergence in the rendered image.

    I'm not exactly sure if this is used in DAZ though.

    It's just for the denoiser. Not in DS for the moment. It's certainly an attempt at a quality criterion similar to SSIM for video.

    ebergerly said:

     

    Aala said:

    Shouldn't a ray bounce as many times as it can until it loses energy? A ray of light can't bounce around forever since some of its energy is absorbed each time it's reflected, especially by dark colors.

    I don't think the average (or even non-average) raytracer models/simulates light energy.

    Of course it does. The law of energy conservation is fundamental to PBR rendering. Just look at Iray Uber, for example the Glossy Color Effect: you can set it to scatter only, scatter transmit, or scatter transmit intensity.

    If you do a Google search for "PBR energy conservation" plus the engine of your choice, you'll get more or less the same explanation.
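    As a hedged illustration of what "energy conservation" means in practice (toy Python, not the actual Iray Uber implementation): a surface cannot return more light than it receives, so the reflected and transmitted fractions are clamped or renormalized so they never sum to more than 1.

        def conserve_energy(reflected, transmitted):
            # If the reflected + transmitted fractions exceed 1, scale them back
            # so the surface never returns more light than arrived.
            total = reflected + transmitted
            if total > 1.0:
                reflected /= total
                transmitted /= total
            absorbed = 1.0 - (reflected + transmitted)
            return reflected, transmitted, absorbed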

     

    Aala said:

    Just to be clear BTW, when I'm talking about light rays and energy, I'm not talking about the real-world physics of how light bounces, but rather how the renderer simulates it. When I say 'losing energy', I mean the average rate of absorption for a particular color, not the rate of kinetic energy loss.

    Iray is a bidirectional path tracer. You initially shoot rays from the camera, but you also shoot light rays from the light sources.

  • Takeo.Kensei Posts: 1,303
    RayDAnt said:
    RayDAnt said:
    RayDAnt said:

    Even in the hypothetical case where the Iray and driver versions were frozen, there's still variance.

    Of course there's always going to be a certain base level of underlying variance in the rendering process when you're dealing with an unbiased rendering engine (which is what Iray is). But is that variance going to be enough to throw things off statistically using a given testing methodology? Let's find out.

    Here are the 4.12.0.033 Beta render times (in seconds) for 5 completed runs of the benchmarking scene I created, rendered on my Titan RTX system, all completed under completely identical testing conditions (1,800 iterations, 72°F ambient room temperature, custom water-cooled components, identical DAZ/Iray/driver versions, etc.):

    Run #1: 231.281s render
    Run #2: 231.370s render
    Run #3: 231.397s render
    Run #4: 231.389s render
    Run #5: 231.368s render

    Which gives us the following descriptive statistics for rendering time.
    Min: 231.281s (Run #1)
    Mean: 231.361s (all Runs averaged together)
    Max: 231.397s (Run #3)

    And the following descriptive statistics for variance between rendering times.
    Min: 0.002s (Run #2 - Run #5)
    Max: 0.116s (Run #3 - Run #1)

    The two most popular thresholds for statistical significance are 5% and 1% of the total. What percentage of 231.281s (the minimum render time) is 0.116s (the largest variance observed here)? Let's do the math:
    (0.116 / 231.281) * 100 ≈ 0.05%

    The variance you are talking about here is literally insignificant. At least, it is if you are adhering to the testing/reporting methodology I set up.
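    (As an aside, those descriptive statistics are easy to reproduce from the five run times with nothing but the Python standard library:)

        import statistics

        runs = [231.281, 231.370, 231.397, 231.389, 231.368]  # seconds

        mean = statistics.mean(runs)
        spread = max(runs) - min(runs)       # largest run-to-run difference
        print(f"mean:   {mean:.3f}s")        # 231.361s
        print(f"spread: {spread:.3f}s")      # 0.116s
        print(f"spread as % of mean: {spread / mean * 100:.2f}%")  # ~0.05%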

    No, you still don't get it. Prove to me that every iteration has the same number of samples.

    You don't seem to understand what an Iteration is. By definition, an iteration is a single full completion of some sort of cyclical process. In the case of 3D rendering, the process in question is calculating one sample pixel value for each and every pixel location in a scene's framebuffer - i.e. the same number of samples as there are pixels in the final rendered image. So in the case of the benchmarking scene I constructed (which renders out to a 900x900 pixel square) each and every iteration works out to be:

    900 * 900 = 810,000 pixels = 810,000 samples.

    I do know.

    ...You know that iterations, by definition, consist of a finite number of pixel value samples in any given scene (which anyone can determine via a very simple math equation)? Then why would you be asking me to prove that to you?

    So that you come to the right conclusion by yourself and stop bending your definition to fit your needs. An iteration is just a calculation loop. See my post above for an additional hint.

    And as I said, I have little interest in continuing this subject.

    RayDAnt said:
     

    Rendering Quality is what determines convergence, not Rendering Converged Ratio. Rendering Converged Ratio just controls how many pixels (as a percentage) in a scene's framebuffer need to have been judged "finished" according to the Rendering Quality factor provided, in order for the overall scene to be classified as "finished".

    In order to dial in the closest thing possible to true 100% convergence, the values needed would be:

    • Render Quality = 1.0840011e+18 (i.e. much more than the maximum of 10,000 that DS supports by default)
    • Rendering Converged Ratio = 100%

    Which, as anyone still following this particular conversation thread can guess, would take a mighty long time.
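    A rough sketch of that stopping rule as described (illustrative Python only; the per-pixel quality metric Iray actually uses is not public, and the 1/quality threshold here is a made-up stand-in):

        def render_is_finished(pixel_errors, render_quality, converged_ratio):
            # A pixel counts as converged once its estimated error is small
            # enough for the requested quality; the frame counts as finished
            # once the requested fraction of pixels has converged.
            threshold = 1.0 / render_quality
            converged = sum(1 for err in pixel_errors if err <= threshold)
            return converged / len(pixel_errors) >= converged_ratio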

    Uninteresting, since the initial point was the usability of the convergence ratio as a control value, as some others suggested. No point was raised about "true convergence", whatever that would mean, and I find it pretty much meaningless. Does calculating the color of a white pixel 1.0840011e+18 times make it a true white pixel, or will it make it whiter than white (choose your color)?

    Since you want to stick to iterations as the control value in your benchmark, this discussion has no meaning either.

  • ebergerly Posts: 3,255
    edited July 2019
    Regarding the PBR/energy/physics aspect of ray tracing, this is yet another case where complexities and terminology can result in a never-ending rabbit hole of discussion. At its core, a ray tracer is a solver of geometric problems. A ray is a dumb mathematical vector whose only properties are magnitude and direction: no energy or other physical properties. When it hits a scene object, it basically asks the surface's material description "what color are you, and where do I go next?". And you don't want complex physics calculations because they would slow things to a crawl.

    However, you DO need to know about the surface's physical properties, and how light responds to them, when you design the material description. But you're still just telling a dumb vector a color and where to go next. PBR just means using more smarts about material properties and how light responds when making that determination. A simple diffuse model says to the ray "my surface color is red, and your next bounce should be in random direction (0.9, 0.3, 0.86)". More complex models certainly take more accurate physical models of the surface into account before they tell the vector what to do. But at the end of the day it's a geometry calculator, not a physics simulator. So I agree, clearly physics and how light responds to surfaces are a major consideration in any ray tracer.
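    To make that concrete, here is a toy sketch of a ray as nothing more than an origin and a direction, with a diffuse material answering the "what color are you, and where do I go next?" question by picking a random bounce direction (illustrative Python, not any particular renderer's API; a real diffuse model would restrict the bounce to the hemisphere above the surface normal):

        import math, random
        from dataclasses import dataclass

        @dataclass
        class Ray:
            origin: tuple      # a ray is just geometry: a point...
            direction: tuple   # ...and a direction. No energy, no physics.

        class DiffuseMaterial:
            def __init__(self, color):
                self.color = color

            def scatter(self, hit_point):
                # "My surface color is X, and your next bounce is in a random direction."
                d = [random.uniform(-1.0, 1.0) for _ in range(3)]
                length = math.sqrt(sum(c * c for c in d)) or 1.0
                return self.color, Ray(hit_point, tuple(c / length for c in d))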
  • Richard Haseltine Posts: 107,885

    Since these topics do not appear to lend themselves to civil discussion, the thread is locked.

This discussion has been closed.