General GPU/testing discussion from benchmark thread


Comments

  • ebergerly Posts: 3,255
    RayDAnt, do you really think that striving for hyper-accuracy is warranted here? Given all the variables in hardware, software, reporting, etc, why not just assume the results are ballpark numbers, plus or minus 15 seconds or whatever?
  • RayDAnt Posts: 1,154
    edited July 2019
    ebergerly said:
    RayDAnt, do you really think that striving for hyper-accuracy is warranted here?

    Yes. There are areas where you will never be able to achieve hyper-accuracy. Therefore, in areas where you can achieve hyper-accuracy (like this one), you do. Again, it's statistical best practice.

    Post edited by RayDAnt on
  • LenioTG Posts: 2,118

    Richard is right.

    I know very little about this specific stuff, and I like to come here and learn something.

    But I see the discussion has become really aggressive, and it isn't pleasant to read anymore.

    Guys, there's only a dozen or so of us here making suggestions; I doubt anything we say is gonna change the world market: take it easy! xD

  • ebergerly Posts: 3,255
    edited July 2019
    RayDAnt said:
    ebergerly said:
    RayDAnt, do you really think that striving for hyper-accuracy is warranted here?

    Yes. There are areas where you will never be able to achieve hyper-accuracy. Therefore, in areas where you can achieve hyper-accuracy (like this one), you do. Again, it's statistical best practice.

    But with a tiny sample population of like 2 or 3 people reporting render times with a particular configuration how can you get any statistical accuracy? Yeah, if we had a sample population of hundreds or thousands or millions you could take statistical averages and get high accuracy, but with only a handful of user reports?
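
    For a rough sense of how wide the uncertainty is with only a handful of reports, here is a small Python sketch (the render times are made up, not from this thread):

    ```python
    # Rough illustration with made-up render times (seconds) for one hypothetical card.
    from statistics import mean, stdev
    from scipy import stats  # assumes SciPy is available

    def ci95(times):
        """95% t-based confidence interval for the mean render time."""
        n = len(times)
        m, s = mean(times), stdev(times)
        return stats.t.interval(0.95, n - 1, loc=m, scale=s / n ** 0.5)

    print(ci95([182, 195, 176]))                      # 3 reports -> very wide interval
    print(ci95([185 + (i % 7) for i in range(30)]))   # 30 reports -> much tighter
    ```
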
    Post edited by ebergerly on
  • bluejaunte Posts: 1,990

    I read all this and really wonder how we ever found out that a 2080 TI renders roughly twice as fast as a 1080 TI, and how people were able to make up their mind that this was enough value to hit the buy button.

    People don't need a cost vs performance number. If they did, hardware sites out there would provide such things, but they don't. Instead they present the benchmark numbers, conclude some things and maybe give a rough verbal recommendation like "this is incredible value" or "buy if you don't mind spending a lot of money and want to have the absolute fastest" or "buy only if you do a lot of this because that wasn't improved much". People have things like intelligence, intuition and common sense. Everyone knows their own financial situation and use cases best.

    No arbitrary number will ever accomplish anything more than what is already happening on the internet today. People decide whether some piece of hardware is worth it by browsing related websites, watching a million YouTube videos and talking to other people in forums. And, believe it or not, by trying out the hardware for themselves and then sharing their own findings. The internet allows people to return items if they don't want them, or sell them in various marketplaces. There is so little actual risk in buying goods today that this "value fatalism" is really not warranted by any stretch of the imagination.

     

  • ebergerly Posts: 3,255
    edited July 2019
    Yeah bluejaunte you raise a good point. There are many diverse reasons why people buy new technology. Everything from "OOOOHHH, that's new and shiny" (which I do on occasion) to a detailed business justification. Though for a forum of iray renderers, it seems like it can't hurt to maintain info that will tell users how much bang (iray-wise) they can expect for their hard-earned $$$. And I don't think Jay or Linus or GN do any iray benchmarking in Studio (AFAIK).
    Post edited by ebergerly on
  • bluejaunte Posts: 1,990
    ebergerly said:
    Yeah bluejaunte you raise a good point. There are many diverse reasons why people buy new technology. Everything from "OOOOHHH, that's new and shiny" to a detailed business justification. Though for a forum of iray renderers, it seems like it can't hurt to maintain info that will tell users how much bang (iray-wise) they can expect for their hard-earned $$$. And I don't think Jay or Linus or GN do any iray benchmarking in Studio (AFAIK).

    Only that we already know exactly what we get with a 2080 TI, because people bought it and tested it. The only reason you even get to calculate anything is because others took the plunge. This is far more valuable than any arbitrary number that you may come up with.

    So I guess, if you really wanna help, then get all the latest hardware first and benchmark it. More action, less math laugh

  • ebergerly Posts: 3,255
    edited July 2019
    The purpose of this benchmarking exercise was to compile data in a form where users had a one-stop shop to compare new cards with their own cards, and to provide a simple metric to help them understand all the complex data. Apparently now it's morphing into something far more complex and less user-friendly, IMO, but it should still hopefully help users decide. Not sure why that's a bad and unwanted thing.
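
    For what it's worth, the kind of metric being talked about could be as simple as the sketch below (made-up prices and benchmark times, not actual data):

    ```python
    # Sketch of a simple "bang for the buck" metric: render speed per dollar.
    # The seconds and prices below are placeholders, not real benchmark results.
    cards = {
        "GTX 1080 Ti": {"seconds": 240, "price": 700},
        "RTX 2080 Ti": {"seconds": 120, "price": 1200},
    }

    for name, c in cards.items():
        speed = 1.0 / c["seconds"]               # benchmark renders per second
        value = speed / c["price"] * 1_000_000   # scaled up for readability
        print(f"{name}: {value:.1f} (higher = more speed per dollar)")
    ```
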
    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited July 2019
    BTW, I'd encourage folks not to lose sight of the fact that a lot of users have put in a lot of their personal time to report and sift thru and consolidate all this data, even though they have no obligation and are not paid tech support. And the sole purpose was to help others make some sense of a huge complexity. I know RayDAnt is spending a lot of time sorting thru a bunch of complex new technology. So personally I tend to be thankful rather than dismissive of their efforts. Of course we can provide input or help with the effort, but I'd hate to lose the well-intentioned support.
    Post edited by ebergerly on
  • bluejaunte Posts: 1,990

    I'm not talking about benchmarks, that's useful. I'm talking about your "bang for the buck" number that will never reflect the actual value people see with various amounts of money at their disposal, various use cases (i.e. Iray only, other renderers, gaming, other 3d software), various requirements for VRAM etc. etc.

    Again, if you really wanna help, spend the cash and start benchmarking. Your math isn't going to do anything. People can understand simple conclusions and deduce the value of something on their own. They read opinions and facts and gather all the information they need to make a purchasing decision. It does not take your magic number, nor your warnings and cautioning or anything of the sort, for people to make up their minds about some piece of hardware.

  • Richard Haseltine Posts: 107,922

    It is time to draw a line under this aspect of the discussion, thank you.

  • ebergerly Posts: 3,255
    edited July 2019

    For those who don't venture into the Technical Help forum, Padone and kenshaw011267 and others have done some great work that I was unaware of. Apparently 4.11 is (or can be) much slower than 4.10 in rendering for some reason, but proper use of the denoiser (certainly not at the default setting of 8) can make a huge difference in render times for certain scenes where the user can accept the results. And there's also talk of increased VRAM usage. 

    My guess is that this is just one more indication that all of this RTX-related stuff is a huge work in progress, and probably won't be ironed out for months. But as far as benchmarking is concerned, I'm thinking that denoising may become an integral part of rendering for many, especially in the near term as folks try to deal with the hugely increased 4.11 render times (though I'm not sure it's a universal phenomenon). I've only tried a couple of scenes myself, but it seems to be the case. 

    Here's the link for anyone interested:

    https://www.daz3d.com/forums/discussion/337916/anyone-had-any-luck-bringing-down-4-11-iray-rendertimes#latest

     

     

    Post edited by ebergerly on
  • outrider42 Posts: 3,679
    Daz 4.11 uses the same version of Iray 2018 that the 4.11 beta has had for a long time. It has nothing to do with RTX yet. Only Iray 2019 will fully support RTX.

    And I reported the beta being slower than 4.10 in certain scenes ages ago. Just look at numbers on the benchmark I created.
  • ebergerly Posts: 3,255
    Daz 4.11 uses the same version of Iray 2018 that the 4.11 beta has had for a long time. It has nothing to do with RTX yet. Only Iray 2019 will fully support RTX.

    And I reported the beta being slower than 4.10 in certain scenes ages ago. Just look at numbers on the benchmark I created.

    Thanks. Do you know what's causing it? 

  • outrider42 Posts: 3,679
    No, I don't. Some people have guessed that it may be because how much "work" Iray does during an iteration has changed. Going back to migenius, they state that different versions of Iray can change how iterations are calculated, which is why you cannot directly compare benchmarks from different versions.
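
    To make that caveat concrete, here is a small sketch (made-up entries) of why any comparison should stay within one Iray version:

    ```python
    # Group benchmark reports by Iray version and only compute speedups within a
    # group, since the "work" per iteration can differ between versions.
    # All numbers below are made up.
    from collections import defaultdict

    results = [
        {"iray": "2017 (DS 4.10)", "card": "GTX 1080 Ti", "seconds": 260},
        {"iray": "2017 (DS 4.10)", "card": "RTX 2080 Ti", "seconds": 130},
        {"iray": "2018 (DS 4.11)", "card": "GTX 1080 Ti", "seconds": 310},
        {"iray": "2018 (DS 4.11)", "card": "RTX 2080 Ti", "seconds": 150},
    ]

    by_version = defaultdict(list)
    for r in results:
        by_version[r["iray"]].append(r)

    for version, rows in by_version.items():
        slowest = max(rows, key=lambda r: r["seconds"])
        for r in rows:
            factor = slowest["seconds"] / r["seconds"]
            print(f'{version}: {r["card"]} is {factor:.2f}x the slowest card in this group')
    ```
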

    It's also why I am a very vocal critic of Daz policy...they do not offer a way to go back after updating. They do not offer past versions of Studio for download. I think this policy sucks, plain and simple. If Iray is slower, whatever the reason may be, customers should have the CHOICE of which version they prefer. If not for the denoiser, I would NEVER use 4.11. Ever.
  • Takeo.Kensei Posts: 1,303

    Wasn't it a bug in 4.10, corrected in 4.11, where some rays were not calculated, the end effect being that 4.10 was quicker?

  • ebergerly Posts: 3,255
    edited July 2019

    outrider42, yeah, I'm starting to see your point. I just happened to render an indoor scene, cuz I was curious to see if there was a significant difference in render time for the identical scene on the identical computer, and it took over 27 minutes to render. And my fuzzy recollection was that this scene used to render in like 5 minutes, or maybe 10. I kind of dismissed it until I read that thread and realized there may be something strange going on. 

    However, after monkeying with the denoiser as they recommended (cranking it up to maybe 100-200+) I was surprised at the speed and quality. After just a few minutes it was looking almost like the final render, although some of the tough-to-converge spots looked a bit crummy.

    Unfortunately this is just one more area that's gonna take a lot of fiddling and research, and honestly I'm pretty drained by all of this nonsense that's been dragging on since last year. And I don't see any hope we'll get any resolution until maybe 1Q next year. All the RTX 20xx Supers won't be out and available until probably mid-August, and who knows when Iray will fully implement optimized versions of all the RTX technologies (not just "support RTX"). 

    So back to the topic of this thread: personally, I won't waste my time on even thinking about any sort of benchmarking-related stuff until maybe the end of 2019. Which is fine, cuz all of this just gives me a big headache. 

    Although I'm contemplating buying an RTX-2080 Super or one of those others maybe in September so I can embark on a journey to convert my simple CUDA/c++ raytracer to a DXR (Direct X raytracer) or maybe NVIDIA Optix raytracer so I can better understand how RTX works. 

    Post edited by ebergerly on
  • outrider42 Posts: 3,679
    The one bug I know of in 4.10 is the chromatic SS bug. This only happens when rendering a surface with chromatic SS without a background; the render would have missing pixels. Iray 2018 fixes that bug. If you used surfaces without chromatic SS, you never saw this issue.

    Beyond that, as far as I recall the only other changes are minor. A scene rendered in 4.10 might look slightly different in 4.11.
  • outrider42 Posts: 3,679
    edited July 2019
    The denoiser is strictly a post-render process, even though it does work during a render. Also, it makes absolutely no difference to the final 100% render when you start the denoiser. It will look the same whether you start at the default 8 iterations or 500 iterations. It might look wildly different during the render, but just wait it out to full convergence. Here is a little-known fact: you can even enable and disable the denoiser right in the middle of an active render!

    Notice the denoiser has 2 toggles in the settings. To do this, first the denoiser needs to be toggled as "available." The other option to turn the denoiser on can be off. Start a render. Take careful notice of a small notch on the left side of the render window. When you click this you get a pop-out menu with several options from the filtering settings. You can change these on the fly and see the result in sort of real time. (It depends on your hardware. It will take several iterations for the new settings to be visible in the render.)

    So you can turn the denoiser on and off as often as you wish and watch the result. However, once the render is done you can no longer play with these settings. So be aware of how close to 100% you are if this render is something you wish to keep, so the settings are what you want.

    Additionally, you can also adjust tone mapping and bloom this same way. These are all post effects and NOT a part of the actual Iray render process.

    Since these are post effects, you can always use other software to denoise a finished image. There is a thread that discusses using a different denoiser outside of Daz on finished images.
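
    For example (not the specific workflow from that thread, just a sketch assuming Python and OpenCV are installed), denoising a saved render outside of Daz Studio can be as simple as:

    ```python
    # Denoise a finished render outside Daz Studio with OpenCV's non-local-means filter.
    # "finished_render.png" is a placeholder file name.
    import cv2

    img = cv2.imread("finished_render.png")
    denoised = cv2.fastNlMeansDenoisingColored(img, None, h=5, hColor=5,
                                               templateWindowSize=7,
                                               searchWindowSize=21)
    cv2.imwrite("finished_render_denoised.png", denoised)
    ```
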

    I like the denoiser myself, but it has limits. It can wash out fine detail, especially in skin. For 3D environments, like houses and rooms and whatever, the denoiser is fantastic.

    Post edited by outrider42 on
  • Saxa -- SD Posts: 880
    edited July 2019
    It will look the same whether you start at the default 8 iterations or 500 iterations.

    Based on my experiences with the 4.11 beta I would qualify this. It depends on your scene setup and the user's end goal.

    There are times where 600-700 iterations renders out a pretty good quality simple portrait. No, not 99% converged. If I remember right it was more like 70-85% depending on scene setup, for a very simple test scene involving a character, hair and some outfit. The time cost for the extra crispness is just too much for me with the current state of the technology. 

    And under that setup, if I used 500 iterations as the denoiser kick-in time, the hair detail would look muddy and smeared. In fact I don't even use the denoiser for that. But I have the lucky luxury of an RTX 2080 Ti.

    For complex scenes, using for example JackTomalin's luxurious sets with 3+ characters, I find 95 iterations is just about right with a pixel radius of 1.03 to quickly deal with 99%-100% of the green fireflies. Detail loss at 1800-2800 iterations @ 2560 x 1440px is minimal that way across the render. But then again, I rarely get to 95% converged because that extra time isn't worth it to me for the extra detail a lot of the time. 

    Main point again: I would just qualify that statement based on different scene compositions and the user's end-quality goal.

    Post edited by Saxa -- SD on
  • outrider42 Posts: 3,679

    Indeed, the best use of the denoiser is to NOT finish the render to 100%. The best use case is that the render looks good enough to end it early, which can save a lot of time on rendering. It depends entirely on the scene, but for what I have done, when I render a 3D environment, the denoiser can produce a great quality image in just a few iterations, and I can stop the render even though it may be only 5% converged. The savings on time are hard to really gauge here, because again it varies, but my denoised images only take a few minutes to produce. However, rendering the same scene without denoising to near 100% might take an hour or more. That is a huge difference.

    But as I was saying, you can actually turn the denoiser on and off during the render, to see where the render is at.

    So what is the denoiser doing? The denoiser is taking the image and essentially guessing what the unrendered pixels will be. When you start a render and the denoiser starts at, say, 8 iterations, the result will look very abstract when it kicks in. Almost like a painting; it's actually kind of a cool effect, LOL. As the render converges and more pixels are produced, the denoiser updates itself along with the current progress of the render. So the more converged the render is, the better the denoiser gets at guessing the remaining pixels. At a certain point, which again can vary wildly from scene to scene, you may reach a point where the denoiser is able to make a very good guess as to the remaining pixels. It is at this point you can probably stop the render and have an image on par with a fully converged image...but in much less time.
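
    Nothing like this is built into Daz Studio, but as a purely conceptual sketch, "good enough to stop" could be judged by watching how much the denoised preview still changes between checks:

    ```python
    # Conceptual only: decide the denoised preview has settled when successive
    # previews (8-bit RGB arrays) barely differ. Not a Daz Studio feature.
    import numpy as np

    def has_settled(prev_frame: np.ndarray, new_frame: np.ndarray,
                    threshold: float = 1.0) -> bool:
        """True when the mean absolute per-pixel change drops below the threshold."""
        diff = np.abs(new_frame.astype(np.float32) - prev_frame.astype(np.float32))
        return float(diff.mean()) < threshold

    # Usage idea: grab the denoised preview every N iterations and stop the render
    # once has_settled(...) stays True for a couple of checks in a row.
    ```
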

    That is all it is. I have tested this numerous times, and for me, I have never seen a difference based on when the denoiser starts. I have tried different scenes, with and without people, and it just does not matter if I start the denoiser at 8 or if I start it in the hundreds. Of course, this is my experience, but there is a quote from Richard in one of these threads stating that Nvidia has told them that when the denoiser starts does not affect the final quality of the image at 100%. Here it is, from the thread linked above:

    "I believe we did see an nVidia statement that the end result did not depend on when the denoiser was started, but obviously it is going to have an impact on the tiem taken to reach that state."

    For the quality of the denoiser, in my testing, the more complex the surface material is, the better the chance is that it will lose some detail. Some people HATE the denoiser because of this. But again, it all depends on what they are doing in their scenes.

    Nvidia claims the denoiser is based on AI they trained. To do this, they basically fed the algorithm tons of images. Most of these are of inorganic things, like brick. It is logical to assume that the denoiser struggles with human skin because it does not know the human being rendered. Perhaps if we could somehow train the algorithm on images of our Daz characters, it might be better at filling in the pixels. But these are just my thoughts on it.

    This also gives us hope that the denoiser will improve with time. Each time the denoiser updates, hopefully it has more information behind it to perform better.

    One of the more recent 4.11 releases added Tensor support for the denoiser. But it only speeds up the process a bit from what I have heard (I don't have an RTX card). If you have an RTX card, I would be interested in seeing some images with Tensor enabled vs without, and the times it took to render to a set iteration count.

  • bluejaunte Posts: 1,990

    A denoiser that loses detail is a useless piece of tech to me. I'm still shocked that this so called artificial intelligence denoiser is lacking any kind of intelligence. At least RayDAnt explained what the problem is. Those algorithms were simply not trained for the scenarios we most typically render in Daz Studio. Characters. Lots and lots of characters.

    But even so, if I have a texture on some wall, and the detail of it gets washed out, the denoiser has failed. And I'd say that is going to be the norm more than the exception for realistic rendering. There aren't that many perfectly flat and clean surfaces in the real world.

    I was under the impression that an AI denoiser integrated into a renderer would take the geometry and maybe the textures into account and intelligently decide what is likely noise and what isn't. Instead I'm getting blurred eyelashes in a portrait. The denoiser apparently oblivious to what I'm rendering and just wiping the whole image with a wash cloth. I see no other explanation than sheer stupidity when an intelligent denoiser cannot differentiate between noise and eyelashes. Why does a renderer even need a denoiser when all it does is purely a post effect anyway? And then we needed a new piece of hardware for that?

    There, my denoiser rant. laugh

  • RayDAnt Posts: 1,154
    edited July 2019

    I was under the impression that an AI denoiser integrated into a renderer would take the geometry and maybe the textures into account and intelligently decide what is likely noise and what isn't. Instead I'm getting blurred eyelashes in a portrait. The denoiser apparently oblivious to what I'm rendering and just wiping the whole image with a wash cloth.

    If memory serves, the Nvidia AI denoiser actually does have in-built support for doing just this. The issue is (once again) in the training process. For the denoiser to intelligently enhance e.g. human skin, not only would it need to be trained on a set of images containing human skin, it would also need to have been told (via human-generated metadata) which specific portions of those training images were human skin. So that it can then use those pixel patterns specifically for enhancing the skin portions (identified - once again - by human-generated metadata) of your renders and doing its magic. In other words, lots of time and effort on the human side is needed (at least during the early stages) in the training process. And obviously no one wants to spend time doing that.
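
    Very roughly, and purely as my own illustration (this is not Nvidia's actual pipeline), the idea looks something like the sketch below: the human-labeled skin mask rides along as an extra input channel so the network can learn skin-specific behaviour.

    ```python
    # Toy sketch of training a denoiser that also sees a human-labeled "skin" mask.
    # Everything here (network, data) is a stand-in, not Nvidia's denoiser.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 3 RGB channels + 1 mask channel
        nn.Conv2d(32, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in tensors; a real pipeline would load noisy/clean render pairs plus masks.
    noisy = torch.rand(8, 3, 64, 64)
    clean = torch.rand(8, 3, 64, 64)
    skin_mask = (torch.rand(8, 1, 64, 64) > 0.5).float()   # the human-generated metadata

    for step in range(100):
        opt.zero_grad()
        pred = model(torch.cat([noisy, skin_mask], dim=1))
        loss = loss_fn(pred, clean)
        loss.backward()
        opt.step()
    ```
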

    Post edited by RayDAnt on
  • outrider42 Posts: 3,679

    I can't say I have had blurry eyelashes. Just some minor skin detail loss.

  • drzap Posts: 795
    edited July 2019

    A denoiser that loses detail is a useless piece of tech to me. I'm still shocked that this so called artificial intelligence denoiser is lacking any kind of intelligence. At least RayDAnt explained what the problem is. Those algorithms were simply not trained for the scenarios we most typically render in Daz Studio. Characters. Lots and lots of characters.

    But even so, if I have a texture on some wall, and the detail of it gets washed out, the denoiser has failed. And I'd say that is going to be the norm more than the exception for realistic rendering. There aren't that many perfectly flat and clean surfaces in the real world.

    I was under the impression that an AI denoiser integrated into a renderer would take the geometry and maybe the textures into account and intelligently decide what is likely noise and what isn't. Instead I'm getting blurred eyelashes in a portrait. The denoiser apparently oblivious to what I'm rendering and just wiping the whole image with a wash cloth. I see no other explanation than sheer stupidity when an intelligent denoiser cannot differentiate between noise and eyelashes. Why does a renderer even need a denoiser when all it does is purely a post effect anyway? And then we needed a new piece of hardware for that?

    There, my denoiser rant. laugh

    That being said, I'm sorry it was this denoiser that took your virginity. Now you will be biased against all denoisers wink. Not all denoisers are equal. Probably the best denoiser on the market (especially for animating) is Altus. It gets incredible results most of the time and cuts render times in half. Unfortunately, Nvidia's AI denoiser hasn't quite lived up to its potential yet. AI denoisers need to be trained for their particular jobs. It is the renderer manufacturer's job (Nvidia for Iray) to train the denoisers for their engine and application, and I doubt Daz3d is doing that (even though they have built up a substantial library of training material in the form of the Daz Studio forums and galleries). I was under the impression that Nvidia's denoiser would learn as it goes, therefore getting better and better as you complete more renders. I'm not sure how that would work, if my understanding is correct about that function.

    Post edited by drzap on
  • Takeo.Kensei Posts: 1,303

    A denoiser that loses detail is a useless piece of tech to me. I'm still shocked that this so called artificial intelligence denoiser is lacking any kind of intelligence. At least RayDAnt explained what the problem is. Those algorithms were simply not trained for the scenarios we most typically render in Daz Studio. Characters. Lots and lots of characters.

    But even so, if I have a texture on some wall, and the detail of it gets washed out, the denoiser has failed. And I'd say that is going to be the norm more than the exception for realistic rendering. There aren't that many perfectly flat and clean surfaces in the real world.

    I was under the impression that an AI denoiser integrated into a renderer would take the geometry and maybe the textures into account and intelligently decide what is likely noise and what isn't. Instead I'm getting blurred eyelashes in a portrait. The denoiser apparently oblivious to what I'm rendering and just wiping the whole image with a wash cloth. I see no other explanation than sheer stupidity when an intelligent denoiser cannot differentiate between noise and eyelashes. Why does a renderer even need a denoiser when all it does is purely a post effect anyway? And then we needed a new piece of hardware for that?

    There, my denoiser rant. laugh

    From my POV, the problem is elsewhere.

    It feels like the denoiser and the renderer are two separate products that were not meant to work together. Proof of that is that it is strictly used as a post process with Iray, although it could work in a better-integrated way by providing normal, albedo + extra data on the fly. But since there is no albedo output from Iray, the denoiser doesn't work at its best.

    As for the loss of details, I'm not quite sure that extra training will change the behaviour of the denoiser. I tend to think that it's rather the result of the chosen algorithms combined with the lack of data. So if things stay as they are, you'll have to choose between speed and accuracy.

    There is also another use that comes to my mind for the denoiser, which is to get a quicker preview in real time, especially if RT cores kick in, to get a better idea of the final render.
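
    To illustrate what "providing normal, albedo + extra data" buys you (just a sketch of the general guide-buffer idea, not how Iray or the Nvidia denoiser is actually wired up), a cross-bilateral filter uses those buffers to decide which neighbours are safe to average:

    ```python
    # Sketch of guide-buffer denoising: smooth the noisy beauty pass, but only across
    # pixels that look similar in the albedo/normal guide, so texture and geometry
    # edges survive. Toy implementation (wraps at image edges via np.roll); real
    # denoisers are far more sophisticated.
    import numpy as np

    def cross_bilateral(beauty, guide, radius=2, sigma_s=2.0, sigma_g=0.1):
        acc = np.zeros_like(beauty)
        wsum = np.zeros(beauty.shape[:2] + (1,), dtype=beauty.dtype)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted_b = np.roll(beauty, (dy, dx), axis=(0, 1))
                shifted_g = np.roll(guide, (dy, dx), axis=(0, 1))
                spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                similar = np.exp(-((shifted_g - guide) ** 2).sum(-1, keepdims=True)
                                 / (2 * sigma_g ** 2))
                wt = spatial * similar
                acc += wt * shifted_b
                wsum += wt
        return acc / wsum

    # Made-up buffers standing in for render AOVs (beauty pass plus albedo/normals).
    h, w = 256, 256
    beauty  = np.random.rand(h, w, 3).astype(np.float32)
    albedo  = np.random.rand(h, w, 3).astype(np.float32)
    normals = np.random.rand(h, w, 3).astype(np.float32)
    guide = np.concatenate([albedo, normals], axis=-1)
    denoised = cross_bilateral(beauty, guide)
    ```
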

  • ArtAngel Posts: 1,942
    RayDAnt said:
    LenioTG said:
    LenioTG said:

    I'm not understanding.

    So we're not going to see a speed improvement in render times with Iray 2019.1.0.0?

    They are discussing the tensor cores, used for de-noising, not the RTX cores, which should have an impact on render speeds once we have a version of Iray that uses them.

    Until now I had thought that RTX cores and tensor cores were the same thing :O

    What's the difference?

    They both accelerate certain kinds of mathematical equations. However the type of equations they each accelerate is completely different. RTCores accelerate ray-tracing (which is heavily used in graphics rendering) whereas Tensor Cores accelerate deep learning algorithms. They serve completely different (although sometimes indirectly related) purposes.

    If I were a scientist, I would choose RTX.

    Indeed, the best use of the denoiser is to NOT finish the render to 100%. The best use case is that the render looks good enough to end it early, which can save a lot of time on rendering. It depends entirely on the scene, but for what I have done, when I render a 3D environment, the denoiser can produce a great quality image in just a few iterations, and I can stop the render even though it may be only 5% converged. The savings on time are hard to really gauge here, because again it varies, but my denoised images only take a few minutes to produce. However, rendering the same scene without denoising to near 100% might take an hour or more. That is a huge difference.

    But as I was saying, you can actually turn the denoiser on and off during the render, to see where the render is at.

    So what is the denoiser doing? The denoiser is taking the image and essentially guessing what the unrendered pixels will be. When you start a render and the denoiser starts at, say, 8 iterations, the result will look very abstract when it kicks in. Almost like a painting; it's actually kind of a cool effect, LOL. As the render converges and more pixels are produced, the denoiser updates itself along with the current progress of the render. So the more converged the render is, the better the denoiser gets at guessing the remaining pixels. At a certain point, which again can vary wildly from scene to scene, you may reach a point where the denoiser is able to make a very good guess as to the remaining pixels. It is at this point you can probably stop the render and have an image on par with a fully converged image...but in much less time.

    That is all it is. I have tested this numerous times, and for me, I have never seen a difference based on when the denoiser starts. I have tried different scenes, with and without people, and it just does not matter if I start the denoiser at 8 or if I start it in the hundreds. Of course, this is my experience, but there is a quote from Richard in one of these threads stating that Nvidia has told them that when the denoiser starts does not affect the final quality of the image at 100%. Here it is, from the thread linked above:

    "I believe we did see an nVidia statement that the end result did not depend on when the denoiser was started, but obviously it is going to have an impact on the tiem taken to reach that state."

    For the quality of the denoiser, in my testing, the more complex the surface material is, the better the chance is that it will lose some detail. Some people HATE the denoiser because of this. But again, it all depends on what they are doing in their scenes.

    Nvidia claims the denoiser is based on AI they trained. To do this, they basically fed the algorithm tons of images. Most of these are of inorganic things, like brick. It is logical to assume that the denoiser struggles with human skin because it does not know the human being rendered. Perhaps if we could somehow train the algorithm on images of our Daz characters, it might be better at filling in the pixels. But these are just my thoughts on it.

    This also gives us hope that the denoiser will improve with time. Each time the denoiser updates, hopefully it has more information behind it to perform better.

    One of the more recent 4.11 releases added Tensor support for the denoiser. But it only speeds up the process a bit from what I have heard (I don't have an RTX card). If you have an RTX card, I would be interested in seeing some images with Tensor enabled vs without, and the times it took to render to a set iteration count.

    This reminds me of Photoshop's content-aware option, where the patch tool synthesizes nearby content for seamless blending with surrounding content. Except Photoshop works better. 

  • JamesJAB Posts: 1,766

    So since this is "General GPU/testing discussion..."

    Something interesting has happened involving Nvidia GPUs because of how high Nvidia set the pricing scale for the new GeForce RTX cards.

    The price range for a new 11GB RTX 2080 Ti is from about $1100 - $1400 depending on who makes it, the cooling setup, and overclock settings.

    The price range for a 16GB Quadro P5000 is around $1250 used and $1400 new on eBay.

    You can get the professional version of the GTX 1080 with 5GB more VRAM than the RTX 2080 Ti for about the same price.  I don't know about you, but I would jump on the extra VRAM in a heartbeat.  That will give you a bunch of extra headroom to turn on Speed vs Memory instancing and OptiX Prime on scenes that struggle to fit into 11GB of VRAM.  Or it would just be 5GB more for creating larger, more complex scenes.

    On a side note, you can also get a used 24GB Quadro M6000 (the pro version of the Maxwell-based GeForce GTX Titan X) for around $1150 on eBay.  Sure, it will render quite a bit slower than the RTX 2080 Ti, but it has more than double the VRAM...

  • ebergerly Posts: 3,255
    edited July 2019

    I think this highlights the fact that with much of this computer technology it's fun and easy to focus on shiny new hardware, but the real issues and challenges are around software. And few of us have much clue about what's really involved and required under the hood. Especially, like I said, with proprietary software that is millions of lines of code, and one mistake in one of those lines can throw everything off. 

    I recall everyone in the tech community last year getting so excited about RTX "tensor cores", like we'd just press a Tensor Core button and our world would light up with awesomeness. laugh

    And now we're 9 months later and just starting to realize how the hardware is kind of a side issue, and the real functioning of the denoiser is in how, and if, and how well it's implemented and optimized in software. And in fact it might be many more months before the real impacts and benefits are understood. And for some they may be way less than awesome. 

    Like I've always said, this RTX stuff is insanely complex. Each component is an endless rat's nest of tiny technical details that most of us can only guess at, or just repeat what we saw on the internet. And while that might give the tech enthusiast community endless hours of speculation, for those of us who just want to get somewhere it's incredibly annoying and headache-inducing. I recently saw a presentation from some NVIDIA engineers that sounded more like all of this RTX-related stuff is still a work in progress, and folks are still coming up with ways to improve denoising algorithms and stuff. 

    Great. And for the rest of us, we now have to figure out the intricacies of the new denoiser, and how we can get it to speed up the dreadfully slow 4.11 renders. 

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited July 2019

    BTW, let me predict that we'll be going thru this same rat's nest of endless speculation when folks start to realize that one of the other RTX features, physics simulations (PhysX/Flex/CUDA 10), is also included and can probably do some awesome stuff for hair/fluid/rigid-body/whatever simulations. If and when it gets fully implemented and optimized. 

    Post edited by ebergerly on
This discussion has been closed.