Daz Studio Pro BETA - version 4.21.0.5! (*UPDATED*)


Comments

  • Richard Haseltine Posts: 99,519

    Torquinox said:

    Richard Haseltine said:

    Torquinox said:

    I hope someone at Daz is evaluating the information @outrider42 provided in this thread and is doing what must be done to improve future versions of DS. @outrider42  thank you for your diligence and caring to take the time and trouble to do these tests. The work you're doing is a credit to you and a boon to the community. It's pretty cool! yes

    Daz does not write Iray (or 3Delight)

    And? Daz implements it. If they can work with Apple folks to get DS to run on current Apple machines, they can probably have a meet with Nvidia and/or 3DL folks to see what's what and to see what can be done.

    With Apple it was, as far as I know, a matter of getting help with modifying DS - not getting Apple to modify MacOS.

  • Dolce Saito Posts: 165
    edited May 2022

    Is it only me, or does applying (any) G8.1 figure on top of a G8/8.1 figure result in fingernails two or three times longer than usual every time? I somehow can't find a workaround for this.

    Post edited by Dolce Saito on
  • RobotHeadArt Posts: 916

    I tried a scene with one character, one hair, clothes, and an HDRI.

    4.15: Rendering Time: 34.10 seconds
    4.20.1.38 beta: Total Rendering Time: 30.81 seconds

    So in my case, the beta version of Iray is faster.

  • Richard Haseltine Posts: 99,519

    Dolce Saito said:

    Is it only me, or does applying (any) G8.1 figure on top of a G8/8.1 figure result in fingernails two or three times longer than usual every time? I somehow can't find a workaround for this.

    That sounds like a bad character set-up somewhere - if it's new behaviour it is probably down to a newly installed character or morph set.

  • nicstt Posts: 11,715

    I too have been comparing 4.15 (4.15.1.91) and 4.20.

    A few render passes, and although 4.15 has a very slight edge, the difference is statistically insignificant.

  • Leonides02 Posts: 1,379
    edited May 2022

    I'm receiving this error when trying to render in the Beta:

    Iray [WARNING] - API:DATABASE ::   1.0   API    db   warn : Transaction is released without being committed or aborted. Automatically aborting.

    The render then quits, and I can't start again because the "renderer is in use" (says another error)

    I don't get this with the general release.

    Post edited by Leonides02 on
  • prixat Posts: 1,588
    edited May 2022

    outrider42 said:

    The next big question is: is the picture quality worth the extra time and iterations that 4.20 ran to hit 95%? Well, here you go. Judge for yourself.

     

    I think I've found some improvement... it seems to help a bit with Iray's "Dull Eye" problem:
    (4.15 on the left, 4.20 on the right, you can see more detail when played on YouTube)

     

    Post edited by prixat on
  • Dolce Saito Posts: 165

    Richard Haseltine said:

    Dolce Saito said:

    Is it only me, or does applying (any) G8.1 figure on top of a G8/8.1 figure result in fingernails two or three times longer than usual every time? I somehow can't find a workaround for this.

    That sounds like a bad character set-up somewhere - if it's new behaviour it is probably down to a newly installed character or morph set.

    Might be. While loading some chars, I always get "Duplicate IDs found". Then I click okay.

    I remember one of you guys having a script to fix this duplicate ID somewhere in the past. How can I fix it?

  • Richard Haseltine Posts: 99,519

    Duplicate IDs probably just indicates a clash in name between two characters that would be fine on their own, but it might possibly be related to the nail issue.

  • I have been trying to render an animation. I have tried several times and get the same error message. Has anyone ever run across this?

    Using an iMac with an M1 processor, Daz Studio 4.20 Beta fully updated to the newest build.

    16GB of RAM with over 500GB of hard drive space.

    [Attachment: Screen Shot 2022-05-27 at 7.29.03 PM.png (754 x 660)]
  • lutherdan_f779727b4f said:

    I have been trying to render an animation. I have tried several times and get the same error message. Has anyone ever run across this?

    Using an iMac with an M1 processor, Daz Studio 4.20 Beta fully updated to the newest build.

    16GB of RAM with over 500GB of hard drive space.

    I've seen this on both my Intel and M1 Macs.  You need to save the animation as individual renders and combine them into a movie later.  I believe QuickTime player will import a series of images and make a .mov file from them.
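    For anyone who would rather script that step than use QuickTime, here is a minimal sketch using OpenCV (cv2); the folder, filename pattern, output name, and frame rate are placeholders to adjust for your own output:

    # Sketch only: stitch a numbered PNG sequence into an MP4 with OpenCV.
    # "renders/frame_*.png", "animation.mp4", and 30 fps are assumptions, not Daz defaults.
    import glob
    import cv2

    frames = sorted(glob.glob("renders/frame_*.png"))
    height, width = cv2.imread(frames[0]).shape[:2]

    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    writer = cv2.VideoWriter("animation.mp4", fourcc, 30.0, (width, height))
    for path in frames:
        writer.write(cv2.imread(path))  # every frame must have the same dimensions
    writer.release()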

    Lee

     

  • I will give that a try, thanks.

  • outrider42 Posts: 3,679

    Richard Haseltine said:

    Torquinox said:

    I hope someone at Daz is evaluating the information @outrider42 provided in this thread and is doing what must be done to improve future versions of DS. @outrider42  thank you for your diligence and caring to take the time and trouble to do these tests. The work you're doing is a credit to you and a boon to the community. It's pretty cool! yes

    Daz does not write Iray (or 3Delight)

    History has proven Daz doesn't necessarily have to change Iray to impact rendering performance. There was no new version of Iray in 4.14, yet that version was the one that introduced "more efficient handling of normal maps", which at the time made rendering much faster than 4.12.

    So while it might well be something Iray did...there is that possibility that something is off in how the plugin is handled.

    I have seen some scenes not change too dramatically in render time. But others do. It clearly depends on what the exact content is, and what material settings are used in that content. My observations are certainly not unique. Every single benchmark posted after 4.20 is slower than it was on the same hardware previous to 4.20 in the benchmark thread. Every single one.

    So there is something used in that scene that has been hurt by the changes in 4.20. We just have not found exactly what it is.

  • Richard Haseltine Posts: 99,519

    outrider42 said:

    Richard Haseltine said:

    Torquinox said:

    I hope someone at Daz is evaluating the information @outrider42 provided in this thread and is doing what must be done to improve future versions of DS. @outrider42  thank you for your diligence and caring to take the time and trouble to do these tests. The work you're doing is a credit to you and a boon to the community. It's pretty cool! yes

    Daz does not write Iray (or 3Delight)

    History has proven Daz doesn't necessarily have to change Iray to impact rendering performance. There was no new version of Iray in 4.14, yet that version was the one that introduced "more efficient handling of normal maps", which at the time made rendering much faster than 4.12.

    The 4.12 General Release was 4.12.1.118 (see http://docs.daz3d.com/doku.php/public/software/dazstudio/4/change_log_4_12_1_118).

    The 4.14 General Release was 4.14.0.10 (see http://docs.daz3d.com/doku.php/public/software/dazstudio/4/change_log_4_14_0_10), which included builds 4.12.2.9, 4.12.2.31 and 4.12.2.50... each of which integrated a newer version of Iray.

    See https://www.daz3d.com/forums/discussion/449311/daz-studio-pro-4-14-nvidia-iray for details of the new Iray versions included.

    So yes, 4.14 had an Iray update over 4.12 - though it was incorporated in 4.12.x.x builds after the last general release of 4.12.x.x.

    So while it might well be something Iray did...there is that possibility that something is off in how the plugin is handled.

    I have seen some scenes not change too dramatically in render time. But others do. It clearly depends on what the exact content is, and what material settings are used in that content. My observations are certainly not unique. Every single benchmark posted after 4.20 is slower than it was on the same hardware previous to 4.20 in the benchmark thread. Every single one.

    So there is something used in that scene that has been hurt by the changes in 4.20. We just have not found exactly what it is.

  • johndoe_36eb90b0 Posts: 235
    edited June 2022

    I can confirm @outrider42's claims about considerable slowdown and worse image quality:

    DAZ Studio 4.16.0.3 General Release

    2022-06-01 18:30:16.337 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 18:30:16.337 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 776 iterations, 3.012s init, 12.532s render
    2022-06-01 18:30:05.193 Total Rendering Time: 18.62 seconds

    DAZ Studio 4.20.1.38 Public Beta

    2022-06-01 18:37:31.338 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 18:37:31.338 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 837 iterations, 1.899s init, 19.265s render
    2022-06-01 18:37:25.737 [INFO] :: Total Rendering Time: 24.11 seconds

    Both instances were launched with Iray preview off, the same scene (a single G8F character on a black backdrop, default DTHDR-RuinsB-500.hdr environment map, dome rotation 135 degrees), and the same render settings (95% convergence); the result is a 32% longer render time and 7.73% more iterations.

    Just from looking at the resulting PNG file sizes (731,574 bytes for 4.16 and 747,833 bytes for 4.20, a 2.22% larger file) it is apparent that 4.20 rendered longer per iteration, did more iterations, and the result still has more noise (because noise is harder to compress due to the randomness it introduces).

    A simple check with IrfanView says the image produced with 4.16 has 84,205 unique colors, while the image produced with 4.20 has 93,283 unique colors, which confirms there is more noise despite the tremendous slowdown - baffling considering the extremely low scene complexity (one G8F figure with one geograft and two geoshells, and only environment HDRI lighting).
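    For anyone who wants to repeat that check without IrfanView, a rough Python equivalent using Pillow and NumPy (the filenames are placeholders, not my actual files):

    # Sketch only: report PNG file size and unique color count for two renders.
    import os
    import numpy as np
    from PIL import Image

    for path in ("render_4.16.png", "render_4.20.png"):  # placeholder filenames
        rgb = np.asarray(Image.open(path).convert("RGB"))
        unique = np.unique(rgb.reshape(-1, 3), axis=0).shape[0]
        print(f"{path}: {os.path.getsize(path):,} bytes, {unique:,} unique colors")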

    To prove that quality is indeed worse I did further testing by rendering the same scene in 4.16 with 15,000 samples, time limit set to 0, and rendering quality set to off to get a (reasonably) fully converged image (resulting PNG file size is 573,834 bytes).

    I then computed the PSNR between that reference image, and the images produced by both 4.16 and 4.20 using default render settings. The results are as follows:

    Daz Studio 4.16.0.3 General Release

    PSNR = 49.96 dB

    Daz Studio 4.20.1.38 Public Beta

    PSNR = 43.25 dB

    Obviously, a bigger number means better quality.
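    For reference, PSNR between two 8-bit images is only a few lines of NumPy; a sketch of the calculation described above (the filenames are placeholders, not the actual test files):

    # Sketch only: PSNR between a fully converged reference render and a test render.
    import numpy as np
    from PIL import Image

    def psnr(ref_path, test_path):
        ref = np.asarray(Image.open(ref_path).convert("RGB"), dtype=np.float64)
        test = np.asarray(Image.open(test_path).convert("RGB"), dtype=np.float64)
        mse = np.mean((ref - test) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    print(psnr("reference_15000_iterations.png", "default_settings_render.png"))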

    Something has to be done and soon, because new Iray (and by extension Daz Studio 4.20 which uses it) is apparently worse on all possible metrics. It not only renders considerably slower, but it produces images with worse peak signal-to-noise ratio.

    Test configuration

    Windows 10 (Version 10.0.19044.1706)
    NVIDIA Studio Driver 512.59
    GeForce RTX 3090
    Intel Core i7-9800X
    64 GB DDR4-3200 RAM
    Samsung 970 Pro 1TB NVME SSD

    Post edited by Richard Haseltine on
  • Richard Haseltine Posts: 99,519
    edited June 2022

    johndoe_36eb90b0 said:

    I can confirm @outrider42's claims about considerable slowdown and worse image quality:

    DAZ Studio 4.16.0.3 General Release

    2022-06-01 18:30:16.337 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 18:30:16.337 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 776 iterations, 3.012s init, 12.532s render
    2022-06-01 18:30:05.193 Total Rendering Time: 18.62 seconds

    DAZ Studio 4.20.1.38 Public Beta

    2022-06-01 18:37:31.338 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 18:37:31.338 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 837 iterations, 1.899s init, 19.265s render
    2022-06-01 18:37:25.737 [INFO] :: Total Rendering Time: 24.11 seconds

    Both instances were launched with Iray preview off, the same scene (a single G8F character on a black backdrop, default DTHDR-RuinsB-500.hdr environment map, dome rotation 135 degrees), and the same render settings (95% convergence); the result is a 32% longer render time and 7.73% more iterations.

    Just from looking at the resulting PNG file sizes (731,574 bytes for 4.16 and 747,833 bytes for 4.20, a 2.22% larger file) it is apparent that 4.20 rendered longer per iteration, did more iterations, and the result still has more noise (because noise is harder to compress due to the randomness it introduces).

    A simple check with IrfanView says the image produced with 4.16 has 84,205 unique colors, while the image produced with 4.20 has 93,283 unique colors, which confirms there is more noise despite the tremendous slowdown - baffling considering the extremely low scene complexity (one G8F figure with one geograft and two geoshells, and only environment HDRI lighting).

    While those could indicate noise, they could also indicate more successful rendering of subtle distinctions.

    To prove that quality is indeed worse I did further testing by rendering the same scene in 4.16 with 15,000 samples, time limit set to 0, and rendering quality set to off to get a (reasonably) fully converged image (resulting PNG file size is 573,834 bytes).

    I then computed the PSNR between that reference image, and the images produced by both 4.16 and 4.20 using default render settings. The results are as follows:

    Daz Studio 4.16.0.3 General Release

    PSNR = 49.96 dB

    Daz Studio 4.20.1.38 Public Beta

    PSNR = 43.25 dB

    Obviously, a bigger number means better quality.

    Something has to be done and soon, because new Iray (and by extension Daz Studio 4.20 which uses it) is apparently worse on all possible metrics. It not only renders considerably slower, but it produces images with worse peak signal-to-noise ratio.

    Test configuration

    Windows 10 (Version 10.0.19044.1706)
    NVIDIA Studio Driver 512.59
    GeForce RTX 3090
    Intel Core i7-9800X
    64 GB DDR4-3200 RAM
    Samsung 970 Pro 1TB NVME SSD

    Post edited by Richard Haseltine on
  • jbowler Posts: 779
    edited June 2022

    Richard Haseltine said:

    johndoe_36eb90b0 said:

    Just from looking at the resulting PNG file sizes (731,574 bytes for 4.16 and 747,833 bytes for 4.20, a 2.22% larger file) it is apparent that 4.20 rendered longer per iteration, did more iterations, and the result still has more noise (because noise is harder to compress due to the randomness it introduces).

    A simple check with IrfanView says the image produced with 4.16 has 84,205 unique colors, while the image produced with 4.20 has 93,283 unique colors, which confirms there is more noise despite the tremendous slowdown - baffling considering the extremely low scene complexity (one G8F figure with one geograft and two geoshells, and only environment HDRI lighting).

    While those could indicate noise, they could also indicate more successful rendering of subtle distinctions.

    That's very very unlikely; color count might increase but file size is very unlikely to increase.  It might be that there is some change in the PNG compressor settings, but that possibility can be eliminated by running pngcrush --brute on both files (https://pmt.sourceforge.io/pngcrush/, readily available as a pre-compiled package).  I've never seen a case where PNG file size increases during a DAZ Studio render; that is a pre-requisite of the hypothesis.
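    If pngcrush is not to hand, a weaker but quick way to rule out compressor-setting differences is to re-save both files through the same encoder with identical settings and compare the re-encoded sizes; a sketch using Pillow (this only normalises one encoder's settings - pngcrush's brute-force search is the stronger test, and the filenames are placeholders):

    # Sketch only: re-encode both renders with identical PNG settings so sizes are comparable.
    import os
    from PIL import Image

    for path in ("render_4.16.png", "render_4.20.png"):  # placeholder filenames
        out = path.replace(".png", "_recompressed.png")
        Image.open(path).save(out, optimize=True, compress_level=9)
        print(f"{path}: {os.path.getsize(path):,} -> {os.path.getsize(out):,} bytes")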

    It is also possible to diff the files (PhotoShop diff etc) and this will reveal the surfaces which have changed - it won't be all of them.

    When running tests like this I always use 100% convergence; 95% rarely shows anything since 1/20th of the pixels in the image are in some unknown, unconverged, state.  Lower the render quality if the render time is unacceptably long, do not lower the convergence!  (In fact I always use 100% convergence for everything; I don't want randomly unconverged pixels.)

    My own recent renders, only three so far, suggest the same degradation as @johndoe_36eb90b0 documents.  I always render at 100% convergence (variable render quality) with a canvas.  It seems that the Iray convergence calculation has changed yet again; immediately prior to the recent 4.20 builds (4.20.0 and before, I think) convergence depended fundamentally on the "tone mapping" EV even if tone mapping was not enabled.  Lowering the EV (causing more intensities >1 in the image) caused render times to increase massively.  Raising the EV to make the image dark (so that no, or very few, pixels were likely to exceed RGB(1,1,1)) resulted in massive speed increases (and more fireflies).  This seems to have changed: I don't see the speed change I saw before, though I haven't confirmed that yet.

    In my experience the noise, and the lack of convergence, is in the hair, particularly strand based hair.  Strand based hair tends to have very bright specular highlights (though I assume it depends on the precise surface settings), resulting in some pixels with a very high intensity.  I notice that although the effect of EV on convergence seems to have dropped, the effect of distance remains; making the hair occupy less of the scene produces faster convergence.  That's not new and is to be expected if the hair is the major contributor to render time.

     

    Post edited by jbowler on
  • Richard Haseltine said:

    While those could indicate noise they could also indicate more successful rendering of subtle distinctions.

    Both renders at 95% convergence still have somewhat visible fireflies if you go pixel-hunting, they are just in different places on character skin. If rendering was more successful then those would be gone in 4.20.

    That is actually why I bothered to calculate PSNR.

    Of course, NVIDIA can claim (by using some other image quality metric such as SSIM) that the image quality is now much better because there are more distinct "colors".

    However, even if the quality is measurably better by some other image quality metric, sacrificing so much performance to get there while still not reaching better convergence should, in my opinion, not be an acceptable tradeoff.
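    For what it's worth, SSIM is just as easy to check alongside PSNR if anyone wants to run that comparison; a sketch using scikit-image on grayscale versions of the images (the filenames are placeholders):

    # Sketch only: SSIM between a converged reference and a default-settings render.
    import numpy as np
    from PIL import Image
    from skimage.metrics import structural_similarity

    ref = np.asarray(Image.open("reference_15000_iterations.png").convert("L"))
    test = np.asarray(Image.open("default_settings_render.png").convert("L"))
    print(structural_similarity(ref, test, data_range=255))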

  • outrider42 Posts: 3,679

    Richard Haseltine said:

    outrider42 said:

    Richard Haseltine said:

    Torquinox said:

    I hope someone at Daz is evaluating the information @outrider42 provided in this thread and is doing what must be done to improve future versions of DS. @outrider42  thank you for your diligence and caring to take the time and trouble to do these tests. The work you're doing is a credit to you and a boon to the community. It's pretty cool! yes

    Daz does not write Iray (or 3Delight)

    History has proven Daz doesn't necessarily have to change Iray to impact rendering performance. There was no new version of Iray in 4.14, yet that version was the one that introduced "more efficient handling of normal maps", which at the time made rendering much faster than 4.12.

    The 4.12 General Release was 4.12.1.118 (see http://docs.daz3d.com/doku.php/public/software/dazstudio/4/change_log_4_12_1_118).

    The 4.14 General Release was 4.14.0.10 (see http://docs.daz3d.com/doku.php/public/software/dazstudio/4/change_log_4_14_0_10), which included builds 4.12.2.9, 4.12.2.31 and 4.12.2.50... each of which integrated a newer version of Iray.

    See https://www.daz3d.com/forums/discussion/449311/daz-studio-pro-4-14-nvidia-iray for details of the new Iray versions included.

    So yes, 4.14 had an Iray update over 4.12 - though it was incorporated in 4.12.x.x builds after the last general release of 4.12.x.x.

    So while it might well be something Iray did...there is that possibility that something is off in how the plugin is handled.

    I have seen some scenes not change too dramatically in render time. But others do. It clearly depends on what the exact content is, and what material settings are used in that content. My observations are certainly not unique. Every single benchmark posted after 4.20 is slower than it was on the same hardware previous to 4.20 in the benchmark thread. Every single one.

    So there is something used in that scene that has been hurt by the changes in 4.20. We just have not found exactly what it is.

    Fine, whatever, lets just blame it all on the Iray Dev Team. But that doesn't solve your problem one bit. You can play the blame game all you want, the fact remains that Daz Studio Iray is slower than before with a number of different people not only saying so, but providing hard statistical proof that it is slower.

    Also, I have eyeballs. I can use my eyeballs to see that 4.20 images are no better than previous versions. My eyeballs also tell me that in a number cases it actually is worse, in spite of rendering longer. And I provided proof in pictures of this.

    You guys can try sweeping this under the rug, but in the long term this is going to catch up to your company. Iray is too damn slow. Does anybody remember the Star Wars Reflections video and demo? My 3090 can play this free to download scene at 48 frames PER SECOND, this with 3 characters in a room full of light and reflections. The image was 1440p with DLSS enabled, so the internal resolution was a little lower. But Iray can't replicate this anywhere near that speed. At about 60 seconds, running 48 renders for every second is 2,880 images. Does anybody in this room get shudders thinking of trying to render 2880 images of ANYTHING with Iray? Maybe the Iray images would have better quality, but the time investment is not worth it.

    If anybody wants to try the demo for themselves, here it is. You can run benchmarks during the demo while it plays. https://www.techpowerup.com/download/nvidia-star-wars-rtx-raytracing-tech-demo/

    Keep in mind this is pretty old now, the demo was created in 2017 on old versions of Unreal and the new Unreal can do way more. Because you see, Unreal is actually improving their software. <.<

  • jbowler said:

    That's very very unlikely; color count might increase but file size is very unlikely to increase.

    That's incorrect. Higher color count (assuming pixels are not sorted by color) means higher data entropy. Higher data entropy always results in worse compression.

    jbowler said:

    I've never seen a case where PNG file size increases during a DAZ Studio render; that is a pre-requisite of the hypothesis.

    You can test it yourself -- render a scene with fewer iterations, save a PNG, then render the same scene with more iterations and save a PNG. The PNG file rendered with more iterations will be smaller.

    jbowler said:

    It is also possible to diff the files (PhotoShop diff etc) and this will reveal the surfaces which have changed - it won't be all of them.

    I did that; since it is a figure on a black background, only the figure shows the (noisy) difference.

    jbowler said:

    When running tests like this I always use 100% convergence; 95% rarely shows anything since 1/20th of the pixels in the image are in some unknown, uncoverged, state.

    The goal was a comparison with default, out-of-the-box Iray settings, but even 100% convergence will appear noisier unless you also increase the number of iterations to compensate for the slowdown in rendering.

    jbowler said:

    In my experience the noise, and the lack of covergence, is in the hair, particularly strand based hair.

    That's why I didn't use any hair on the figure; it only had fibermesh brows and eyelashes.

    By the way, I just did the same 15000 iterations, render quality off, 0 seconds max time render in 4.20 and the resulting PNG file is again larger compared to 4.16 -- 603,843 bytes, 58,952 unique colors in 4.20 vs. 573,834 bytes, 48,572 unique colors in 4.16.

    What bothers me the most is the rendering time though:

    4.16.0.3
    2022-06-01 20:16:54.155 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 20:16:54.155 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 15000 iterations, 3.125s init, 228.329s render
    2022-06-01 20:16:49.112 Total Rendering Time: 3 minutes 54.61 seconds

    4.20.1.38
    2022-06-01 22:02:26.413 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 22:02:26.413 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 15000 iterations, 1.659s init, 337.565s render
    2022-06-01 22:02:03.756 [INFO] :: Total Rendering Time: 5 minutes 42.25 seconds

    That's a huge performance hit (~46%) for the same number of iterations, even if the resulting image quality is really better on some perceptual (but hard to actually perceive by a human) quality metric.

    Seeing that made me do more testing and I realized my PSNR test was not valid.

    Namely, I had spectral rendering on in both versions; the difference is that the new one defaults to the Rec.709 colorspace, a setting the old one doesn't even have. So I redid the renders with spectral rendering off and recalculated the PSNR. The results are now directly comparable, and the difference is much less pronounced:

                                15000 iterations 4.16    15000 iterations 4.20
    100% converged 4.16         53.18 dB                 52.29 dB
    100% converged 4.20         52.01 dB                 52.84 dB

     

     

     

    If you compare 4.16 and 4.20, 4.20 does indeed have a bit worse PSNR (more noise). My apologies for making a mistake and not taking spectral rendering into account when comparing image quality.

    As for rendering time, turning off spectral rendering does not seem to make it better:

    4.20.1.38 (spectral rendering off)
    2022-06-01 23:20:58.372 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 23:20:58.372 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 15000 iterations, 1.613s init, 359.513s render
    2022-06-01 23:18:26.530 [INFO] :: Total Rendering Time: 6 minutes 4.8 seconds

    My conclusion is that I will have to stay on 4.16.0.3 General Release for as long as I can.

  • @outrider42

    Your criticism is otherwise valid, but comparing Iray to a raytracing demo is not really fair -- demos and games use much lower polygon count geometry and various tricks to achieve a high frame rate. Also, the only raytraced parts in that demo are area light shadows, reflections, and ambient occlusion (probably calculated at much lower resolution); everything else is rasterized, not raytraced.

    To conclude -- images rendered with 4.20 are a bit noisier, which in itself wouldn't be such a big deal were it not for the slowdown, so I think that focusing on figuring out what makes it slower would be a good first step.

    We can continue to gripe about broken ghost lights (and I agree mesh and light should have separate opacity values like you said), and about thin film changes, but at this point I have no illusions that NVIDIA will ever reconsider undoing that damage (I implored them to reconsider personally through my developer contacts and the only thing I heard back were crickets) so frankly I don't see a point in bringing that up.

    At this point, the only thing that people who are fed up with Daz issues can do is use one of the numerous Daz to X tutorials (where X is Blender, Maya, etc.), learn the new program and workflow, convert their legally owned assets to the new format, and switch without looking back.

  • jbowler Posts: 779

    johndoe_36eb90b0 said:

    Richard Haseltine said:

    While those could indicate noise they could also indicate more successful rendering of subtle distinctions.

    Both renders at 95% convergence still have somewhat visible fireflies if you go pixel-hunting, they are just in different places on character skin. If rendering was more successful then those would be gone in 4.20.

    Not at 95% because that says "ignore 5% of the pixels, your choice."  I have, in the past, tried 100% convergence with render quality of around 10 just to see what would happen; this was on a scene which converged rapidly anyway; no emissive surfaces in frame, minimal surface reflection, no HD textures, no complex geometry like strand based hair, minimal subdivision.  The PNG file size got consistently smaller but I really couldn't see any improvement in the image.  I do always use the firefly filter, geometrically correct Gaussian pixel filter, sampling radius 1.5 (though no one has explained what that means to my knowledge.)

    That is actually why I bothered to calculate PSNR.

    That only works with a Beauty canvas; i.e. with the full original dynamic range and no tone mapping, but the noise on the brightest pixels will swamp that in the darker ones even using the logarithmic scale (i.e. noise in dB).  The challenge for NVidia is that they have the full precision, floating point, values in their hands and have to make some guess as to what the tone mapping will do.  I think in the past it must have simply ignored values above the tone-mapped peak (i.e. 1.0) because of all the comments about making sure the scene is "fully lit", but for certain it wasn't doing that in recent versions prior to 4.20.1.

    Of course, NVIDIA can claim (by using some other image quality metric such as SSIM) that the image quality is now much better because there are more distinct "colors".

    However, even if the quality is measurably better by some other image quality metric, sacrificing so much performance to get there while still not reaching better convergence should, in my opinion, not be an acceptable tradeoff.

    Changing "Render Quality" or "Rendering Converged Ratio" (did that name just change?) makes absolutely no difference to the quality of the output; that's why they can be changed on the fly during a render.  They do exactly the same thing as "max time" (which I always set to 0) and max samples (which I always set to -1); they just stop the render at some point.  One approach is to ignore them all: disable "Render Quality Enable", set max time to 0 and max samples to -1 (turn off limits), then just stop the render when it looks good enough.  Previously (prior to 4.16) it was possible, with complete reliability, to change any of the four settings mid-render (cancel, change the settings, restart), but 4.16 seemed to break the quality/convergence changes, at least with a canvas.  Cancelling the render saves the output PNG or JPEG and the canvases, so it should always be possible to cancel, change nothing, examine the saved PNG/JPEG and/or canvases, and restart if they are not good enough.  I find this is a good general approach to using DAZ.

    DAZ can certainly be dinged for allowing a setting in the render window to be changed mid-render in a way that causes a restart, but the only thing apart from that which I see as an issue is that the behavior of the controls should be both consistent and documented.  It doesn't matter if the behavior depends on NVidia; the UI is DAZ's and therefore the onus is on DAZ not to change the behavior of UI controls without extensive warning and, preferably, workarounds.  It's the same as ghost lights.

  • prixat Posts: 1,588

    Viewing it from the other side, nVidia are actually fixing long overdue faults in Iray and adding more features.

    Since they're putting Iray 2022 as a renderer in Omniverse, customers expect it to render correctly before they get the speed back up (by buying the next generation of GPU).

  • jbowler Posts: 779

    johndoe_36eb90b0 said:

    jbowler said:

    That's very very unlikely; color count might increase but file size is very unlikely to increase.

    That's incorrect. Higher color count (assuming pixels are not sorted by color) means higher data entropy. Higher data entropy always results in worse compression.

    PNG doesn't compress the colors, it compresses one of five transforms of the colors.  The transforms are chosen on a per-row (constant Y ordinate) basis.  So the entropy being compressed is also changed on each row; it's not simply the entropy of the colors (though one of the filters just tries to compress the colors).  So a random sequence of white then black pixels results in a very large PNG, while a gray scale shade from left to right results in a surprisingly small PNG; a very small PNG (it approaches the limit of zlib, IIRC 1:4096).

    The reason both of you are wrong with regard to PNG (though the general statements are correct) is that introducing noise invariably increases the size of a PNG, whereas removing the noise while introducing colors almost invariably decreases the size of the PNG.  Your original data was about PNG size; you included the color information, but that was irrelevant to @richard's comment because the PNG size is what matters.

    I've never seen a case where PNG file size increases during a DAZ Studio render; that is a pre-requisite of the hypothesis.

    You can test it yourself -- render a scene with fewer iterations, save a PNG, then render the same scene with more iterations and save a PNG. The PNG file rendered with more iterations will be smaller.

    You misread my comment; I said exactly what you did.  I have verified this on many occasions by cancelling a render, recording the PNG (r.png in the temp directory) and restarting the render.  It's tedious but it gives a lot of data :-)

  • jbowler Posts: 779

    johndoe_36eb90b0 said:

    By the way, I just did the same 15000 iterations, render quality off, 0 seconds max time render in 4.20 and the resulting PNG file is again larger compared to 4.16 -- 603,843 bytes, 58,952 unique colors in 4.20 vs. 573,834 bytes, 48,572 unique colors in 4.16.

    What bothers me the most is the rendering time though:

    4.16.0.3
    2022-06-01 20:16:54.155 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 20:16:54.155 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 15000 iterations, 3.125s init, 228.329s render
    2022-06-01 20:16:49.112 Total Rendering Time: 3 minutes 54.61 seconds

    4.20.1.38
    2022-06-01 22:02:26.413 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 22:02:26.413 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 15000 iterations, 1.659s init, 337.565s render
    2022-06-01 22:02:03.756 [INFO] :: Total Rendering Time: 5 minutes 42.25 seconds

    That's a huge performance hit (~46%) for the same number of iterations, even if the resulting image quality is really better on some perceptual (but hard to actually perceive by a human) quality metric.

    That explains everything I've experienced; there may be a good justification for slowing down the iteration time that much; that's a 50% (45.8%, 3dp) increase in iteration time, but it has to be really good.  Like halving the number of iterations required.

    "Perceptual" means perceived by a real human being; if it's hard to perceive it isn't perceptual.  Image quality is not defined scientifically; most famously (for me) Ulichney observed that adding blue noise to an image with limited dynamic range increases perceptual quality [Robert Ulichney, Digital Halftoning].  Nevertheless that is post processing; all I want is an EXR that I can halftone ("tonemap" as it is called these days) in another program.  Having DAZVidia take twice as long to produce something that is half as good (as they say; multiply by .75) is a bug.

  • nonesuch00 Posts: 18,032

    johndoe_36eb90b0 said:

    jbowler said:

    That's very very unlikely; color count might increase but file size is very unlikely to increase.

    That's incorrect. Higher color count (assuming pixels are not sorted by color) means higher data entropy. Higher data entropy always results in worse compression.

    jbowler said:

    I've never seen a case where PNG file size increases during a DAZ Studio render; that is a pre-requisite of the hypothesis.

    You can test it yourself -- render a scene with fewer iterations, save a PNG, then render the same scene with more iterations and save a PNG. The PNG file rendered with more iterations will be smaller.

    jbowler said:

    It is also possible to diff the files (PhotoShop diff etc) and this will reveal the surfaces which have changed - it won't be all of them.

    I did that; since it is a figure on a black background, only the figure shows the (noisy) difference.

    jbowler said:

    When running tests like this I always use 100% convergence; 95% rarely shows anything since 1/20th of the pixels in the image are in some unknown, uncoverged, state.

    The goal was a comparison with default, out-of-the-box Iray settings, but even 100% convergence will appear noisier unless you also increase the number of iterations to compensate for the slowdown in rendering.

    jbowler said:

    In my experience the noise, and the lack of covergence, is in the hair, particularly strand based hair.

    That's why I didn't use any hair on the figure; it only had fibermesh brows and eyelashes.

    By the way, I just did the same 15000 iterations, render quality off, 0 seconds max time render in 4.20 and the resulting PNG file is again larger compared to 4.16 -- 603,843 bytes, 58,952 unique colors in 4.20 vs. 573,834 bytes, 48,572 unique colors in 4.16.

    What bothers me the most is the rendering time though:

    4.16.0.3
    2022-06-01 20:16:54.155 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 20:16:54.155 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 15000 iterations, 3.125s init, 228.329s render
    2022-06-01 20:16:49.112 Total Rendering Time: 3 minutes 54.61 seconds

    4.20.1.38
    2022-06-01 22:02:26.413 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 22:02:26.413 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 15000 iterations, 1.659s init, 337.565s render
    2022-06-01 22:02:03.756 [INFO] :: Total Rendering Time: 5 minutes 42.25 seconds

    That's a huge performance hit (~46%) for the same number of iterations, even if the resulting image quality is really better on some perceptual (but hard to actually perceive by a human) quality metric.

    Seeing that made me do more testing and I realized my PSNR test was not valid.

    Namely, I had spectral rendering on in both versions; the difference is that the new one defaults to the Rec.709 colorspace, a setting the old one doesn't even have. So I redid the renders with spectral rendering off and recalculated the PSNR. The results are now directly comparable, and the difference is much less pronounced:

                                15000 iterations 4.16    15000 iterations 4.20
    100% converged 4.16         53.18 dB                 52.29 dB
    100% converged 4.20         52.01 dB                 52.84 dB

     

     

     

    If you compare 4.16 and 4.20, 4.20 does indeed have a bit worse PSNR (more noise). My apologies for making a mistake and not taking spectral rendering into account when comparing image quality.

    As for rendering time, turning off spectral rendering does not seem to make it better:

    4.20.1.38 (spectral rendering off)
    2022-06-01 23:20:58.372 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2022-06-01 23:20:58.372 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 15000 iterations, 1.613s init, 359.513s render
    2022-06-01 23:18:26.530 [INFO] :: Total Rendering Time: 6 minutes 4.8 seconds

    My conclusion is that I will have to stay on 4.16.0.3 General Release for as long as I can.

    A lot of Iray bugs have been fixed between 4.16 and 4.20. Apparently those bugs had a lot to do with the quicker renders in 4.16. Most of the bugs were related to opacity & transparency, I believe - e.g. the behaviour that enabled the Kindred Arts "ghost lights" products to function was itself a bug.

  • @jbowler

    We agree on many points but...

    I know how PNG compression works. I don't see how adding more unique colors to an image can ever result in a smaller file size if those colors represent either noise or texture details, which in themselves are high-frequency image components and usually look like noise with a varying degree/type of randomness.

    This image of a photograph broken down into bitplanes should help:

    In the above image, the distribution of pixel values in the lowest (0th, bottom right) bitplane, which represents the highest-frequency components of the original photograph (texture detail or noise), approaches full randomness, and it is almost impossible to reduce its entropy using any current compression algorithm.

    TL;DR -- to represent more colors (which is a pre-requisite to have either more noise or more texture detail in an image) you always need more bits, not less.

    Unless of course the colors are neatly sorted in a gradient, but such images tend not to be very useful :-)
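    A quick way to reproduce that kind of bitplane breakdown on one of your own renders, assuming NumPy and Pillow (the input filename is a placeholder):

    # Sketch only: split an 8-bit grayscale version of a render into its 8 bitplanes.
    import numpy as np
    from PIL import Image

    gray = np.asarray(Image.open("render.png").convert("L"))
    for bit in range(8):  # bit 0 = lowest plane (mostly noise), bit 7 = highest
        plane = ((gray >> bit) & 1) * 255  # scale 0/1 to black/white for viewing
        Image.fromarray(plane.astype(np.uint8)).save(f"bitplane_{bit}.png")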

  • jbowler Posts: 779

    johndoe_36eb90b0 said:

    I know how PNG compression works.

    The bit planes are irrelevant because PNG doesn't compress them.  It doesn't compress the image either; it compresses a transform of the image.  My simple example is a 255 pixel wide image where each pixel is a byte (8bpp grayscale or color mapped) and the pixel values in the image rows increase from 1 to 255.  This will be optimally compressed in PNG by transforming the first row with the SUB filter and the subsequent rows with the UP filter.  The result is a sequence of blocks of 256 bytes for each row: the first row consists of just the byte value "1", and the subsequent rows consist of the value 2 followed by 255 bytes with the value 0.  A more contrived example where each row has pixel values 2 greater than the previous row actually ends up with all the rows after the first having the value 2.  The result compresses really well - close to the Zlib limit.

    Now suppose the original image is color-reduced in an attempt to reduce its size.  The result will be larger.  I think this was the point that Richard was making - the original image actually had very low entropy (it would compress well in JPEG too), but an approximation has higher entropy so compresses less well.  This is what happens while rendering a single image: the size of the PNG goes down with iterations.  At some point the decrease ceases to be perceptual, but the decrease still happens with ridiculous numbers of iterations.  This is because of the reduction of entropy/noise in the image.

    But entropy is not a function of the number of colors; it's a function of the absence of patterns in the image.  My example has a lot of colors, all the colors in the palette but one, but it has very very low entropy.  It also has a pattern that PNG identifies; other images with low entropy can't be readily compressed by PNG.  JPEG works better but depends on having low levels of blue noise (in the audio sense.)
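    That gradient example is easy to check empirically; a small sketch that writes a left-to-right gradient (256 shades, strong pattern) and a black-and-white noise image (2 values, no pattern) and prints the resulting PNG sizes and unique value counts (Pillow and NumPy assumed):

    # Sketch only: many colors with a clear pattern compress far better than two colors of noise.
    import os
    import numpy as np
    from PIL import Image

    h, w = 256, 256
    gradient = np.tile(np.arange(w, dtype=np.uint8), (h, 1))            # 256 unique values
    noise = np.random.randint(0, 2, size=(h, w), dtype=np.uint8) * 255  # 2 unique values

    for name, data in (("gradient.png", gradient), ("noise.png", noise)):
        Image.fromarray(data).save(name)
        print(name, os.path.getsize(name), "bytes,", len(np.unique(data)), "unique values")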

  • innes53_4e67625942 Posts: 88
    edited June 2022

    IceCrMn said:

    Hmm, I'm seeing a 10 second shutdown with this new beta.

    I've noticed this for a long time... there is a solution. After closing Daz Studio, open Task Manager (I have a shortcut to Task Manager on the task bar) and you'll see a Daz process still running; just close it in Task Manager. Then reopen Daz if you want.

    Post edited by Richard Haseltine on
  • johndoe_36eb90b0 Posts: 235
    edited June 2022

    @jbowler

    And what do you think the SUB filter does, if not encoding discrete pixel values in the range from 0 to 255 (so 8 bits) as some sort of filter selection code, repetition count, and (in your contrived example) a small numerical difference between subsequent pixels, which for the value of 1 can be represented with the shortest (1-bit) symbol by the encoder if it is repeated most often?

    I mean, slice the data any way you want: if you split your hypothetical 256-pixel-wide left-to-right gray gradient into 8 bitplanes you will get this (ordered from highest to lowest bitplane):

    128 x 01
      64 x 0011
      32 x 00001111
      16 x 0000000011111111
       ...

    So you don't have to use any SUB filter to encode that image when you can encode it well enough with any other method including RLE.

    The point I am making is -- the amount of randomness in the lowest 3-4 planes determines how well the image can be compressed. Your contrived example is a bunch of repeating patterns, so despite having more "colors" (or shades of gray) it is a well-known exception to the rule I mentioned.

    Post edited by johndoe_36eb90b0 on