General GPU/testing discussion from benchmark thread
Comments
Yes. There are areas where you will never be able to achieve hyper-accuracy. Therefore, in areas where you can achieve hyper-accuracy (like this one), you do. Again, it's statistical best practice.
Richard is right.
I know very little about this specific stuff, and I like to come here and learn something.
But I see the discussion has become really aggressive, and this is not pleasant to read anymore.
Guys, there are maybe a dozen of us here making suggestions; I doubt anything we say is gonna change the world market: take it easy! xD
I read all this and really wonder how we ever found out that a 2080 TI renders roughly twice as fast as a 1080 TI, and how people were able to make up their mind that this was enough value to hit the buy button.
People don't need a cost vs performance number. If they did, hardware sites out there would provide such things, but they don't. Instead they present the benchmark numbers, draw some conclusions and maybe give a rough verbal recommendation like "this is incredible value" or "buy if you don't mind spending a lot of money and want to have the absolute fastest" or "buy only if you do a lot of this because that wasn't improved much". People have things like intelligence, intuition and common sense. Everyone knows their own financial situation and use cases best. No arbitrary number will ever accomplish anything more than what is already happening on the internet today: people deciding if some piece of hardware is worth it by browsing related web sites, watching a million YouTube videos and talking to other people in forums. And believe it or not, people trying out the hardware for themselves and then sharing their own findings. The internet allows people to return items if they don't want them or sell them on various marketplaces. There is so little actual risk in buying goods today that this "value fatalism" is really not warranted by any stretch of the imagination.
Only that we already know exactly what we get with a 2080 TI, because people bought it and tested it. The only reason you even get to calculate anything is because others took the plunge. This is far more valuable than any arbitrary number that you may come up with.
So I guess, if you really wanna help, then get all the latest hardware first and benchmark it. More action, less math.
I'm not talking about benchmarks, that's useful. I'm talking about your "bang for the buck" number that will never reflect the actual value people see with various amounts of money at their disposal, various use cases (i.e. Iray only, other renderers, gaming, other 3d software), various requirements for VRAM etc. etc.
Again, if you really wanna help, spend the cash and start benchmarking. Your math isn't going to do anything. People can understand simple conclusions and deduce the value of something on their own. They read opinions and facts and gather all the information they need to make a purchasing decision. It does not take your magic number, nor your warnings and cautioning or anything of the sort, for people to make up their minds about some piece of hardware.
It is time to draw a line under this aspect of the discussion, thank you.
For those who don't venture into the Technical Help forum, Padone and kenshaw011267 and others have done some great work that I was unaware of. Apparently 4.11 is (or can be) much slower than 4.10 in rendering for some reason, but proper use of the denoiser (certainly not at the default setting of 8) can make a huge difference in render times for certain scenes where the user can accept the results. And there's also talk of increased VRAM usage.
My guess is that this is just one more indication that all of this RTX-related stuff is a huge work in progress, and probably won't be ironed out for months. But as far as benchmarking is concerned, I'm thinking that denoising may become an integral part of rendering for many, especially in the near term as folks try to deal with the hugely increased 4.11 render times (though I haven't verified this myself, and I'm not sure it's a universal phenomenon). But I've tried a couple of scenes and it seems to be the case.
Here's the link for anyone interested:
https://www.daz3d.com/forums/discussion/337916/anyone-had-any-luck-bringing-down-4-11-iray-rendertimes#latest
Thanks. Do you know what's causing it?
Wasn't it a bug in 4.10 that was corrected in 4.11, where some rays were not calculated, the end effect being that 4.10 was quicker?
outrider42, yeah, I'm starting to see your point. I just happened to render an indoor scene, cuz I was curious to see if there was a significant difference in render time for the identical scene on the identical computer, and it took over 27 minutes to render. And my fuzzy recollection was that this scene used to render in like 5 minutes or maybe 10. I kind of dismissed it until I read that thread, and realized there may be something strange going on.
However, after monkeying with the denoiser as they recommended (cranking it up to maybe 100-200+) I was surprised at the speed and quality. After just a few minutes it was looking almost like the final render, although some of the tough-to-converge spots looked a bit crummy.
Unfortunately this is just one more area that's gonna take a lot of fiddling and research, and honestly I'm pretty drained by all of this nonsense that's been dragging on since last year. And I don't see any hope we'll get any resolution until maybe 1Q next year. All the RTX-20xx Supers probably won't be out and available until mid-August, and who knows when Iray will fully implement optimized versions of all the RTX technologies (not just "support RTX").
So back to the topic of this thread: personally, I won't waste my time on even thinking about any sort of benchmarking-related stuff until maybe the end of 2019. Which is fine, cuz all of this just gives me a big headache.
Although I'm contemplating buying an RTX 2080 Super or one of those others maybe in September so I can embark on a journey to convert my simple CUDA/C++ raytracer to a DXR (DirectX Raytracing) or maybe NVIDIA OptiX raytracer so I can better understand how RTX works.
Based on my experiences with the 4.11 beta I would qualify this. It depends on your scene setup and the user's end goal.
There are times where 600-700 iterations render out a pretty good quality simple portrait. No, not 99% converged; if I remember right it was more like 70-85%, depending on scene setup, for a very simple test scene involving a character, hair and some outfit. The time cost for the extra crispness is just too much for me with the current state of the technology.
And under that setup, if I used 500 iterations as the denoiser kick-in point, the hair detail would look muddy and smeared. In fact I don't even use the denoiser for that. But I have the lucky luxury of an RTX 2080 Ti.
For complex scenes, using for example JackTomalin's luxurious sets with 3+ characters, I find 95 iterations is just about right with a pixel radius of 1.03 to quickly deal with 99%-100% of the green fireflies. Detail loss at 1800-2800 iterations @ 2560 x 1440 px is minimal that way across the whole render. But then again, I rarely get to 95% converged because that extra time isn't worth it to me for the extra detail a lot of the time.
My main point, again, is just that I would qualify that statement based on different scene compositions and the user's end-quality goal.
Indeed, the best use of the denoiser is to NOT finish the render to 100%. The best use case is that the render looks good enough to end it early, which can save a lot of time on rendering. It depends entirely on the scene, but for what I have done, when I render a 3D environment, the denoiser can produce a great quality image in just a few iterations, and I can stop the render even though it may be only 5% converged. The savings on time are hard to really gauge here, because again it varies, but my denoised images only take a few minutes to produce. However, rendering the same scene without denoising to near 100% might take an hour or more. That is a huge difference.
But as I was saying, you can actually turn the denoiser on and off during the render, to see where the render is at.
So what is the denoiser doing? The denoiser is taking the image and essentially guessing what the unrendered pixels will be. When you start a render, and the denoiser starts at say 8 iterations, then the denoiser will look very abstract when it kicks in. Almost like a painting; it's actually kind of a cool effect, LOL. As the render converges, and more pixels are produced, the denoiser updates itself along with the current progress of the render. So the more converged the render is, the better the denoiser gets at guessing the remaining pixels. At a certain point, which again can vary wildly from scene to scene, you may reach a point where the denoiser is able to make a very good guess as to the remaining pixels. It is at this point you can probably stop the render and have an image on par with a fully converged image...but in much less time.
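Purely to illustrate that workflow, here's a toy sketch of the control flow (in Python, with random sampling standing in for the renderer and a simple blur standing in for the AI denoiser; none of this is Iray's or Nvidia's actual code): keep accumulating iterations, denoise the current result once the start iteration is reached, and stop as soon as successive denoised previews stop changing noticeably.

```python
# Toy sketch of "denoise the in-progress render and stop early once it settles".
# The renderer and denoiser here are stand-ins (noisy samples + Gaussian blur),
# NOT Iray or the Nvidia AI denoiser; only the control flow is the point.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128))   # pretend this is the fully converged image
accum = np.zeros_like(ground_truth)

DENOISER_START = 8      # analogous to the default "start at iteration 8" setting
MAX_ITERATIONS = 2000
STOP_DELTA = 1e-3       # arbitrary "looks good enough" threshold

previous_preview = None
for iteration in range(1, MAX_ITERATIONS + 1):
    # One progressive iteration: a noisy sample of the true image.
    sample = ground_truth + rng.normal(scale=0.5, size=ground_truth.shape)
    accum += (sample - accum) / iteration   # running mean of all samples so far

    if iteration < DENOISER_START:
        continue                            # denoiser hasn't kicked in yet

    preview = gaussian_filter(accum, sigma=1.0)   # stand-in for the AI denoiser
    if previous_preview is not None:
        delta = np.abs(preview - previous_preview).mean()
        if delta < STOP_DELTA:
            print(f"Stopping early at iteration {iteration} (delta={delta:.5f})")
            break
    previous_preview = preview
```

In a real render the stand-ins are the Iray engine and the AI denoiser, but the idea of ending the render as soon as the denoised preview stops improving is the same.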
That is all it is. I have tested this numerous times, and for me, I have never seen a difference based on when the denoiser starts. I have tried different scenes, with and without people, and it just does not matter if I start the denoiser at 8 or if I start it in the hundreds. Of course, this is my experience, but there is a quote from Richard in one of these threads stating that Nvidia has told them it does not matter when the denoiser starts as far as the final quality of the image at 100% is concerned. Here it is, from the thread linked above:
"I believe we did see an nVidia statement that the end result did not depend on when the denoiser was started, but obviously it is going to have an impact on the tiem taken to reach that state."
For the quality of the denoiser, in my testing, the more complex the surface material is, the better the chance is that it will lose some detail. Some people HATE the denoiser because of this. But again, it all depends on what they are doing in their scenes.
Nvidia claims the denoiser is based on AI they trained. To do this, they basically feed the algorithm tons of images. Most of these are of inorganic things, like brick. It is logical to assume that the denoiser struggles with human skin because it does not know the human being rendered. Perhaps if we could somehow train the algorithm on images of our Daz characters it might be better at filling in the pixels. But these are just my thoughts on it.
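For anyone curious what "feeding the algorithm tons of images" means in practice, here's a toy sketch of the general supervised idea: noisy renders go in, clean (converged) renders are the target, and the network is nudged to map one to the other. The tiny network, fake data and numbers are all made up for illustration; this is not Nvidia's actual pipeline or architecture.

```python
# Toy illustration of training an image-to-image denoiser: show it noisy images,
# ask it to reproduce the clean versions, penalize the difference. Not Nvidia's
# pipeline; just the general supervised-learning idea described above.

import torch
import torch.nn as nn

# A deliberately tiny convolutional network standing in for the real model.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Fake data: random "clean" images plus noise as the "noisy" renders. A real
# dataset would pair low-sample renders with fully converged ones, ideally
# covering the content you care about (e.g. skin and hair, not just brick).
clean = torch.rand(16, 3, 64, 64)
noisy = clean + 0.2 * torch.randn_like(clean)

for step in range(100):
    optimizer.zero_grad()
    restored = model(noisy)
    loss = loss_fn(restored, clean)
    loss.backward()
    optimizer.step()
```

If the training set never contains characters like ours, the network has nothing to fall back on when it meets them, which is the point being made above.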
This also gives us hope that the denoiser will improve with time. Each time the denoiser updates, hopefully it has more information behind it to perform better.
One of the more recent 4.11 releases added Tensor support for the denoiser. But it only speeds up the process a bit from what I have heard (I don't have an RTX card). If you have an RTX card, I would be interested in seeing some images with Tensor enabled vs. without, and the times it took to render to a set iteration count.
A denoiser that loses detail is a useless piece of tech to me. I'm still shocked that this so called artificial intelligence denoiser is lacking any kind of intelligence. At least RayDAnt explained what the problem is. Those algorithms were simply not trained for the scenarios we most typically render in Daz Studio. Characters. Lots and lots of characters.
But even so, if I have a texture on some wall, and the detail of it gets washed out, the denoiser has failed. And I'd say that is going to be the norm more than the exception for realistic rendering. There aren't that many perfectly flat and clean surfaces in the real world.
I was under the impression that an AI denoiser integrated into a renderer would take the geometry and maybe the textures into account and intelligently decide what is likely noise and what isn't. Instead I'm getting blurred eyelashes in a portrait. The denoiser is apparently oblivious to what I'm rendering and just wipes the whole image with a washcloth. I see no other explanation than sheer stupidity when an intelligent denoiser cannot differentiate between noise and eyelashes. Why does a renderer even need a denoiser when all it does is purely a post effect anyway? And then we needed a new piece of hardware for that?
There, my denoiser rant.
If memory serves, the Nvidia AI denoiser actually does have built-in support for doing just this. The issue is (once again) in the training process. For the denoiser to intelligently enhance e.g. human skin, not only would it need to be trained on a set of images containing human skin, it would also need to have been told (via human-generated metadata) which specific portions of those training images were human skin. So that it can then use those pixel patterns specifically for enhancing the skin portions (identified, once again, by human-generated metadata) of your renders and doing its magic. In other words, lots of time and effort on the human side is needed (at least during the early stages) in the training process. And obviously no one wants to spend time doing that.
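The human-labeled metadata idea would look something like this in a training loop: each clean/noisy pair also carries a mask marking which pixels are (say) skin, and that mask is fed in as an extra channel and/or used to weight the loss so mistakes in those regions cost more. Again a toy sketch with made-up data and a made-up network, not Nvidia's actual training workflow.

```python
# Toy sketch of training with human-labeled region masks, as described above.
# Made-up data and network; not Nvidia's workflow.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),  # 3 color + 1 mask channel
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(8, 3, 64, 64)
noisy = clean + 0.2 * torch.randn_like(clean)
skin_mask = (torch.rand(8, 1, 64, 64) > 0.7).float()  # pretend this is hand-labeled

for step in range(50):
    optimizer.zero_grad()
    restored = model(torch.cat([noisy, skin_mask], dim=1))
    per_pixel = (restored - clean).abs()
    # Weight labeled skin pixels 5x so the network prioritizes detail there.
    loss = (per_pixel * (1.0 + 4.0 * skin_mask)).mean()
    loss.backward()
    optimizer.step()
```

The expensive part is not the loop, it's producing those masks for enough images, which is exactly the human effort nobody wants to pay for.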
I can't say I have had blurry eyelashes. Just some minor skin detail loss.
That being said, I'm sorry it was this denoiser that took your virginity. Now you will be biased against all denoisers.
Not all denoisers are equal. Probably the best denoiser on the market (especially for animating) is Altus. It gets incredible results most of the time and cuts render times in half. Unfortunately, Nvidia's AI denoiser hasn't quite lived up to its potential yet. AI denoisers need to be trained for their particular jobs. It is the renderer manufacturer's job (Nvidia for Iray) to train the denoisers for their engine and application, and I doubt Daz3d is doing that (even though they have built up a substantial library of training material in the form of the Daz Studio forums and galleries). I was under the impression that Nvidia's denoiser would learn as it goes, therefore getting better and better as you complete more renders. I'm not sure how that would work, if my understanding of that function is correct.
From my POV, the problem is elsewhere.
It feels like the denoiser and the renderer are two separate products that were not meant to work together. Proof of that is that it is strictly used as a post process with Iray, although it could work in a better integrated way by being fed normal, albedo and other extra data on the fly. But since there is no albedo output from Iray, the denoiser doesn't work at its best.
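To make the "normal, albedo + extra data" point concrete: a denoiser that accepts guide buffers simply takes those noise-free channels alongside the noisy image, which is what lets it tell real texture and geometry detail apart from noise. This sketch only shows the shape of that interface; the buffers and the toy network are made up, and it is not Iray's or OptiX's actual API.

```python
# Sketch of the "guide buffer" idea: feed the denoiser albedo and normals
# (which are essentially noise-free) next to the noisy beauty pass. Made-up
# buffers and network; not Iray's or OptiX's real interface.

import torch
import torch.nn as nn

H, W = 64, 64
beauty = torch.rand(1, 3, H, W)   # noisy rendered image
albedo = torch.rand(1, 3, H, W)   # flat surface color, essentially noise-free
normal = torch.rand(1, 3, H, W)   # surface orientation, essentially noise-free

# A guided denoiser just takes more input channels: 3 + 3 + 3 = 9.
guided_denoiser = nn.Sequential(
    nn.Conv2d(9, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
)

denoised = guided_denoiser(torch.cat([beauty, albedo, normal], dim=1))
print(denoised.shape)  # torch.Size([1, 3, 64, 64])
```

Without an albedo buffer the denoiser only ever sees the noisy beauty pass, so blurring fine texture is the safe guess for it, which would fit the detail loss people are reporting.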
As for the loss of detail, I'm not quite sure that extra training will change the behaviour of the denoiser. I tend to think that it's rather the result of the chosen algorithms combined with the lack of data. So if things stay as they are, you'll have to choose between speed and accuracy.
There is also another use that comes to mind for the denoiser, which is to get a quicker preview in real time, especially if RT cores kick in, to get a better idea of the final render.
If I were a scientist, I would choose RTX.
This reminds me of Photoshop's content-aware option, where the patch tool synthesizes nearby content for seamless blending with the surrounding content. Except Photoshop works better.
So since this is "General GPU/testing discussion..."
Something interesting has happened involving Nvidia GPUs because of how high Nvidia set the pricing scale for the new Geforce RTX cards.
The price range for a new 11GB RTX 2080 Ti is from about $1100 - $1400 depending on who makes it, the cooling setup, and overclock settings.
A 16GB Quadro P5000 goes for around $1250 used and $1400 new on eBay.
You can get the professional version of the GTX 1080 with 5GB more VRAM than the RTX 2080 Ti for about the same price. I don't know about you, but I would jump on the extra VRAM in a heartbeat. That will give you a bunch of extra headroom to set instancing optimization to Speed instead of Memory and turn on OptiX Prime on scenes that struggle to fit into 11GB of VRAM. Or it would just be 5GB more for creating larger, more complex scenes.
On a side note, you can also get a used 24GB Quadro M6000 (the pro version of the Maxwell-based Geforce GTX Titan X) for around $1150 on eBay. Sure, it will render quite a bit slower than the RTX 2080 Ti, but it has more than double the VRAM...
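Just to put the VRAM side of that comparison into a single number, here's the rough dollars-per-GB arithmetic using the ballpark prices quoted above (street/eBay prices obviously move around, and this deliberately ignores render speed):

```python
# Rough dollars per GB of VRAM using the approximate prices quoted above.
# Ignores render speed entirely; it's only the memory side of the tradeoff.

cards = [
    ("RTX 2080 Ti", 11, [1100, 1400]),
    ("Quadro P5000 (used/new)", 16, [1250, 1400]),
    ("Quadro M6000 (used)", 24, [1150]),
]

for name, vram_gb, prices in cards:
    per_gb = "-".join(f"${p / vram_gb:.0f}" for p in prices)
    print(f"{name}: {per_gb} per GB of VRAM ({vram_gb} GB)")
```

That works out to roughly $100-127 per GB for the 2080 Ti, $78-88 for the P5000 and about $48 for the M6000, with the obvious caveat that the 2080 Ti renders much faster.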
I think this highlights the fact that with much of this computer technology it's fun and easy to focus on shiny new hardware, but the real issues and challenges are around software. And few of us have much clue about what's really involved and required under the hood. Especially, like I said, with proprietary software that is millions of lines of code, and one mistake in one of those lines can throw everything off.
I recall everyone in the tech community last year getting so excited about RTX "tensor cores", like we'd just press a Tensor Core button and our world would light up with awesomeness.
And now we're 9 months later and just starting to realize how the hardware is kind of a side issue, and the real functioning of the denoiser is in how, and if, and how well it's implemented and optimized in software. And in fact it might be many more months before the real impacts and benefits are understood. And for some they may be way less than awesome.
Like I've always said, this RTX stuff is insanely complex. Each component is an endless rat's nest of tiny technical details that most of us can only guess at, or just repeat what we saw on the internet. And while that might give the tech enthusiast community endless hours of speculation, for those of us who just want to get somewhere it's incredibly annoying and headache-inducing. I recently saw a presentation from some NVIDIA engineers that sounded more like all of this RTX-related stuff is still a work in progress, and folks are still coming up with ways to improve denoising algorithms and stuff.
Great. And for the rest of us, we now have to figure out the intricacies of the new denoiser, and how we can get it to speed up the dreadfully slow 4.11 renders.
BTW, let me predict that we'll be going thru this same rat's nest of endless speculation when folks start to realize that one of the other RTX features, physics simulation (PhysX/Flex/CUDA 10), is also included and can probably do some awesome stuff for hair/fluid/rigid body/whatever simulations. If and when it gets fully implemented and optimized.