General GPU/testing discussion from benchmark thread


Comments

  • LenioTG Posts: 2,118

    bluejaunte said:

    So my 2080 Ti is pretty much dying. It has gotten progressively worse, to the point where I had to underclock it just to start a game. Now it has started to hang even just in Windows, or to "reset" itself to a lower clock right after reboot. Guess I'm gonna have to go through warranty procedures.

    I'm sorry to hear this :(
    I hope you'll get a refund!

  • outrider42 Posts: 3,679
    Ouch. Which vendor is it?
  • RayDAnt Posts: 1,120

    Yeah, sorry to hear that bluejaunte. If you don't mind my asking, how long have you had it? Apart from the usual DOA/dead-by-week's-end cases immediately after the Turing launch, I can't say I've heard much about failing RTX cards. Assuming the culprit is indeed a sub-standard card succumbing to the rigors of long-term rendering use, knowing how long it took to manifest could be very useful to others for knowing when to consider their cards fully "burned in".

  • Paradigm Posts: 421

    Can someone give me an answer whether Daz 3D's Iray is using those Tensor Cores now or not?

    The answer you're looking for is "yes, kinda", but if you mean to speed up rendering it's still a "no." The tensor cores are used in the new post denoiser, but that doesn't speed up completion time.

    They do help speed up what I like to call Time-To-Cancel, i.e. the point where I'll stop the render because it's "good enough."

  • LenioTG Posts: 2,118
    Paradigm said:

    Can someone give me an answer whether Daz 3D's Iray is using those Tensor Cores now or not?

    The answer you're looking for is "yes, kinda", but if you mean to speed up rendering it's still a "no." The tensor cores are used in the new post denoiser, but that doesn't speed up completion time.

    They do help speed up what I like to call Time-To-Cancel, i.e. the point where I'll stop the render because it's "good enough."

    Yes, but the Nvidia denoiser is too aggressive.

    I use the Intel one after the render is done.
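
    For anyone wondering what that looks like in practice, here is a minimal sketch of the post-render pass, assuming the standalone example tool that ships with Intel Open Image Denoise; the executable path and the flags are placeholders from memory and may differ between OIDN versions, so check the tool's --help.

    ```python
    # Sketch only: run a finished (but still noisy) render through Intel's
    # Open Image Denoise as a separate post step. The install path and the
    # command-line flags are placeholders; verify them against your OIDN build.
    import subprocess
    from pathlib import Path

    OIDN_TOOL = Path("C:/tools/oidn/bin/oidnDenoise.exe")  # hypothetical install path

    def denoise_render(noisy_image: Path, denoised_image: Path) -> None:
        # The OIDN example tool works on PFM images; converting the saved Daz
        # render to PFM beforehand (e.g. with ImageMagick) is assumed here.
        subprocess.run(
            [str(OIDN_TOOL), "--hdr", str(noisy_image), "-o", str(denoised_image)],
            check=True,
        )

    if __name__ == "__main__":
        denoise_render(Path("render_noisy.pfm"), Path("render_denoised.pfm"))
    ```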

  • fred9803 Posts: 1,558

    Personally, I'm not sold on denoising in general, as any denoising method is only an approximation of the final image; an actual rendering will be more accurate.

    I can see the point if one needs to render many images in a limited time, or render images that are inherently noisy, but for hobbyists like most of us, rendering relatively simple scenes or portraits, noise artifacts aren't really an issue... we just wait a bit longer. Not saying it isn't useful for those aforementioned cases, but for most of us amateurs, denoising an image by whatever method isn't something necessary.

  • nicstt Posts: 11,714

    It seems to be incredibly effective in architectural scenes, but in my own experience, when characters are involved, it is almost useless.

  • LenioTG Posts: 2,118
    nicstt said:

    It seems to be incredibly effective in architectural scenes, but in my own experience, when characters are involved, it is almost useless.

    Do you mean the denoiser?

    IMHO, it depends on the lights you're using. If they are hard lights, you'll still see a little bit of noise in darker areas after 2000 iterations.

    The built-in denoiser would blur everything out, while the external Intel one will just remove that little bit of noise and leave everything else untouched. Actually, it surprises me that not everyone is already using it nowadays.

  • fred9803 Posts: 1,558

    There wouldn't be many people rendering architectural scenes unless there are some characters in them. I just don't think that the average user gets much advantage from denoising the scenes they do, compared to just letting the scene render out. Unless, as I said before, they're in a particular hurry to do them. In which case there would be a definite advantage to artificially denoising them... quality vs quantity.

  • LenioTG Posts: 2,118
    fred9803 said:

    There wouldn't be many people rendering architectural scenes unless there are some characters in them. I just don't think that the average user gets much advantage from denoising the scenes they do, compared to just letting the scene render out. Unless, as I said before, they're in a particular hurry to do them. In which case there would be a definite advantage to artificially denoising them... quality vs quantity.

    Have you used the Intel denoiser?

    Yes, I do many renders, but I hit stop when I feel they're up to the task. If a tool lets you reach that stage sooner it doesn't mean you're sacrificing quality.

  • nicstt Posts: 11,714
    edited June 2019
    LenioTG said:
    nicstt said:

    It seems to be incredibly effective in architectural scenes, but in my own experience, when characters are involved, it is almost useless.

    Do you mean the denoiser?

    IMHO, it depends on the lights you're using. If they are hard lights, you'll still see a little bit of noise in darker areas after 2000 iterations.

    The built-in denoiser would blur everything out, while the external Intel one will just remove that little bit of noise and leave everything else untouched. Actually, it surprises me that not everyone is already using it nowadays.

    Yes.

    The denoiser takes away skin characteristics; basically the stuff that stops them looking toony. Blemishes and sub-surface features seem to vanish, unless one waits nearly as long as it would have taken anyway. The exact amount varies between renders, and yes, lighting does play a part, but it too varies.

    It just might save up to 10% of the time (at the most), but I have to spend time saving that time...

    ... Yes that does sound a bit like an oxymoron to me too. :)

    Post edited by nicstt on
  • LenioTG Posts: 2,118
    nicstt said:
    LenioTG said:
    nicstt said:

    It seems to be incredibly effective in architectural scenes, but in my own experience, when characters are involved, it is almost useless.

    Do you mean the denoiser?

    IMHO, it depends on the lights you're using. If they are hard lights, you'll still see a little bit of noise in darker areas after 2000 iterations.

    The built-in denoiser would blur everything out, while the external Intel one will just remove that little bit of noise and leave everything else untouched. Actually, it surprises me that not everyone is already using it nowadays.

    Yes.

    The denoiser takes away skin characteristics; basically the stuff that stops them looking toony. Blemishes and sub-surface features seem to vanish, unless one waits nearly as long as it would have taken anyway. The exact amount varies between renders, and yes, lighting does play a part, but it too varies.

    It just might save up to 10% of the time (at the most), but I have to spend time saving that time...

    ... Yes that does sound a bit like an oxymoron to me too. :)

    I have tested out the denoisers extensively, and my experience and opinion are completely different, but I've already said what my conscience wanted me to say.
    Everyone is free to do whatever he wants :)

  • Daz Jack Tomalin Posts: 13,120
    RayDAnt said:

    Yeah, sorry to hear that bluejaunte. If you don't mind my asking, how long have you had it? Apart from the usual DOA/dead-by-week's-end cases immediately after the Turing launch, I can't say I've heard much about failing RTX cards. Assuming the culprit is indeed a sub-standard card succumbing to the rigors of long-term rendering use, knowing how long it took to manifest could be very useful to others for knowing when to consider their cards fully "burned in".

    I lost one of mine a few weeks back - same kinda thing, 8 months old. That said, the RMA process was a breeze, so not really too fussed if it should happen again.

  • bluejaunte Posts: 1,861
    Ouch. Which vendor is it?

    ZOTAC GeForce RTX 2080 Ti Gaming Twin Fan, which was just the cheapest ass one I could find at the time.

     

    RayDAnt said:

    Yeah, sorry to hear that bluejaunte. If you don't mind my asking, how long have you had it? Apart from the usual DOA/dead-by-week's-end cases immediately after the Turing launch, I can't say I've heard much about failing RTX cards. Assuming the culprit is indeed a sub-standard card succumbing to the rigors of long-term rendering use, knowing how long it took to manifest could be very useful to others for knowing when to consider their cards fully "burned in".

    It's well within warranty, so not a huge problem other than the hassle. Bought in January. I have ordered a replacement that should arrive today because I obviously can't work without a graphics card. Depending on what the vendor finds is wrong with it, I might get a replacement that I then have to sell (more hassle) or get a refund. The replacement one is a Gigabyte GeForce RTX 2080 Ti Gaming OC, so we'll see how that goes.

  • RayDAnt Posts: 1,120

    Posted this a couple of months ago in one of the now-defunct DS beta threads. Reposting it here because it's relevant.

    bluejaunte said:

    I always thought AI means it would actually analyze geometry and textures. If not that then at least recognize what are patterns and what is noise. This denoiser to me does none of that. Look at the arm texture in the above image. Shouldn't it be absolutely obvious to any intelligent algorithm that this is not noise? In my own tests it blurs out even the eyelashes.

    Deep-learning based applications are only useful for a given workload if the assets used to initially train them are representative of that particular workload. In the case of the Nvidia OptiX denoiser currently included with DS 4.11+, the initial training was done using a large cache of actual project renders provided to Nvidia by developers of the 3D space planning program 3DVIA (e.g. what powers the website HomeByMe). Here's a representative sampling (original source: Nvidia) of what these scenes typically looked like:

    Notice what's almost completely lacking here: people, much less closeups of people sufficient to show skin or hair textures. One of the simplest ways to describe what a deep-learning neural network actually does when you run it is to say that it takes an input, compares it to a bunch of pre-existing outputs, modifies it to be consistent with those pre-existing outputs, and then outputs the result. Hence why the denoiser (right now) tends to treat people like chairs or hubcaps - because all it knows about is chairs and hubcaps. In order for the AI denoiser currently included with DS to be really useful for typical (i.e. people included) Daz Studio renders, it would need to be trained with LOTS of renders of actual Daz Studio content. All this is to say that Daz Studio's use of Nvidia's proof-of-concept AI denoiser (because that's all it really is without an application-specific dataset to work from) is very much a beta feature - not so much in terms of whether the code itself crashes or not (which would really be an alpha-level issue anyway), but in terms of whether what it does do is really the right thing for the situation.

    Assuming that Daz Studio's developers do intend to fully capitalize on AI denoising as a final render enhancement solution (rather than just a viewport enhancement technique - which is also a valid use case btw) in the future (which would indeed seem to be the case), my expectation is that once Turing RTCore support gets worked out, there will be some sort of effort (perhaps even crowd-sourced) to put together a big enough collection of Daz Studio content renders for DS developers to have a big enough dataset to get the denoiser properly trained by Nvidia for them. For that matter, it may even be possible right now (hadn't thought to even look it up until now) for anyone with a big enough dataset and a powerful enough GPU (one with Tensor cores) to train the denoiser using their own personal render library for Daz Studio content-specific results.
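
    For the curious, a rough sketch of what "training a denoiser on your own render library" would actually involve. Nvidia's real training pipeline for the OptiX denoiser isn't public, so this is only a toy convolutional network in PyTorch trained on hypothetical pairs of early-iteration (noisy) and fully converged renders; the folder names and network size are made up for illustration.

    ```python
    # Toy denoiser training sketch: learn a residual correction that maps a
    # noisy early-iteration render toward its fully converged counterpart.
    import glob
    import numpy as np
    import torch
    import torch.nn as nn
    from PIL import Image

    def load_image(path):
        # Load an 8-bit render and scale it to [0, 1] float, channels first.
        arr = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
        return torch.from_numpy(arr).permute(2, 0, 1)

    class TinyDenoiser(nn.Module):
        # A few conv layers standing in for the U-Net style networks real
        # denoisers use; it predicts a correction added to the noisy input.
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, noisy):
            return torch.clamp(noisy + self.body(noisy), 0.0, 1.0)

    def train(noisy_dir="renders/noisy", clean_dir="renders/converged", epochs=10):
        # Hypothetical layout: matching filenames in the two directories.
        pairs = list(zip(sorted(glob.glob(f"{noisy_dir}/*.png")),
                         sorted(glob.glob(f"{clean_dir}/*.png"))))
        model = TinyDenoiser()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.L1Loss()
        for epoch in range(epochs):
            for noisy_path, clean_path in pairs:
                noisy = load_image(noisy_path).unsqueeze(0)
                clean = load_image(clean_path).unsqueeze(0)
                opt.zero_grad()
                loss = loss_fn(model(noisy), clean)
                loss.backward()
                opt.step()
            print(f"epoch {epoch}: last loss {loss.item():.4f}")
        return model

    if __name__ == "__main__":
        train()
    ```

    A production-grade version would use a full U-Net and also feed in albedo and normal buffers as auxiliary inputs (which is what the OptiX and Intel denoisers do), but the dataset question is the same: the network only learns to preserve the kind of detail it has seen.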

  • LenioTG Posts: 2,118

    Every one of your posts is enlightening, RayDAnt :D

    I guess you do this for work too.

    I hadn't thought about the viewport functionality!

    And I had no idea about the people stuff.

  • outrider42 Posts: 3,679

    That's pretty much what I mentioned before. If we could somehow "train" the AI on the characters we render, then it would work a lot better.

    As it stands, it works great for 3D backgrounds. It can work with people depending on the circumstances. The 2 renders of Kala 8 I posted used the denoiser...nobody ever noticed.

  • bluejaunte Posts: 1,861

    I find it disappointing that the Iray denoiser is purely a post thing. Octane's denoiser somehow works in tandem with AI Lights. Don't know how that works, but it sounds smarter.

    Of course I missed the express delivery guy today. They always have a talent for showing up exactly in the 30 minutes you're not around, making sure that you paid extra for absolutely no reason whatsoever.

  • outrider42 Posts: 3,679
    Well, even the gaming iteration of denoising is a post process, applied at the end of each frame generated.

    I really wish the tensor cores could be written to add a little help to active rendering, like how the RT cores can. It really should be possible. It's silly that so much of the chip die can sit idle. For most people, only one third of the chip (the CUDA cores) is actually being used.
  • RayDAnt Posts: 1,120
    edited June 2019
    I really wish the tensor cores could be written to add a little help to active rendering, like how the RT cores can. It really should be possible.

     Afraid there simply isn't anything for them to contribute to such a process. Tensor Core is just a fancy name for Nvidia's implementation of an ASIC that accelerates certain kinds of low precision matrix math operations - specifically very large batches of simultaneous multiply-add operations. And that sort of self-contained math operation simply doesn't exist as part of the typical 3D rendering workload.
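
    To make that concrete, here is a rough numpy illustration of the one operation a Tensor Core performs: D = A x B + C over small matrix tiles, with low-precision inputs and higher-precision accumulation. The 16x16 tile size echoes the WMMA tile shapes CUDA exposes; the batch count is arbitrary, and numpy is only standing in for the hardware.

    ```python
    # Illustration of a tensor-core-style batched fused multiply-add:
    # many independent small tiles of D = A @ B + C, float16 in, float32 out.
    import numpy as np

    def mma_tile(a, b, c):
        # One tile: multiply in higher precision and accumulate into c.
        return a.astype(np.float32) @ b.astype(np.float32) + c

    rng = np.random.default_rng(0)
    batch = 1024  # many tiles processed "simultaneously" on real hardware
    a = rng.standard_normal((batch, 16, 16)).astype(np.float16)
    b = rng.standard_normal((batch, 16, 16)).astype(np.float16)
    c = np.zeros((batch, 16, 16), dtype=np.float32)

    d = np.stack([mma_tile(a[i], b[i], c[i]) for i in range(batch)])
    print(d.shape)  # (1024, 16, 16): one accumulated tile per input pair
    ```

    Anything that can be phrased as that one shape of math runs extremely fast on a Tensor Core; anything that can't simply has nowhere to run there, which is the point above.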

    ETA: Imo it's actually quite interesting to look at the overall trends that have emerged in GPU hardware development over recent decades. From the mid 90s to mid 2000s, GPU design was primarily about what were essentially graphics-rendering-specific collections of ASICs. Then, starting in the late 2000s to mid 2010s, the design focus shifted to collections of GPGPUs where graphics (while still the primary expected workload) was just one use out of many. Now, starting with the late 2010s, the focus has shifted to diverse sets of ASICs in a sort of callback to where the tech was two decades prior. I expect it's one of those cyclical processes (like colors going in/out of style) that's just fated to switch around on people (due to technological advancements) every decade or so:

    1. Develop specialized tech to address specific processing tasks
    2. Re-develop that tech for generalized processing
    3. (Re)discover that other specific processing tasks benefit more from differently specialized tech
    4. Repeat
    Post edited by RayDAnt on
  • ArtAngel Posts: 1,506
    RayDAnt said:
    I really wish the tensor cores could be written to add a little help to active rendering, like how the RT cores can. It really should be possible.

     Afraid there simply isn't anything for them to contribute to such a process. Tensor Core is just a fancy name for Nvidia's implementation of an ASIC that accelerates certain kinds of low precision matrix math operations - specifically very large batches of simultaneous multiply-add operations. And that sort of self-contained math operation simply doesn't exist as part of the typical 3D rendering workload.

    ETA: Imo it's actually quite interesting to look at the overall trends that have emerged in GPU hardware development over recent decades. From the mid 90s to mid 2000s, GPU design was primarily about what were essentially graphics-rendering-specific collections of ASICs. Then, starting in the late 2000s to mid 2010s, the design focus shifted to collections of GPGPUs where graphics (while still the primary expected workload) was just one use out of many. Now, starting with the late 2010s, the focus has shifted to diverse sets of ASICs in a sort of callback to where the tech was two decades prior. I expect it's one of those cyclical processes (like colors going in/out of style) that's just fated to switch around on people (due to technological advancements) every decade or so:

    1. Develop specialized tech to address specific processing tasks
    2. Re-develop that tech for generalized processing
    3. (Re)discover that other specific processing tasks benefit better from differently specialized tech
    4. Repeat

    There's gold in dem der hills... black gold, Texas tea... truth in dem der words. Back in 1999, a novel was written about various marketing ploys. In that novel, one marketing exec doubled sales and became an overnight sensation when he convinced a shampoo brand to add the words "rinse and repeat" to the bottle.

  • RayDAnt Posts: 1,120
    edited June 2019
    ArtAngel said:
    RayDAnt said:
    I really wish the tensor cores could be written to add a little help to active rendering, like how the RT cores can. It really should be possible.

     Afraid there simply isn't anything for them to contribute to such a process. Tensor Core is just a fancy name for Nvidia's implementation of an ASIC that accelerates certain kinds of low precision matrix math operations - specifically very large batches of simultaneous multiply-add operations. And that sort of self-contained math operation simply doesn't exist as part of the typical 3D rendering workload.

    ETA: Imo it's actually quite interesting to look at the overall trends that have emerged in GPU hardware development over recent decades. From the mid 90s to mid 2000s, GPU design was primarily about what were essentially graphics-rendering-specific collections of ASICs. Then, starting in the late 2000s to mid 2010s, the design focus shifted to collections of GPGPUs where graphics (while still the primary expected workload) was just one use out of many. Now, starting with the late 2010s, the focus has shifted to diverse sets of ASICs in a sort of callback to where the tech was two decades prior. I expect it's one of those cyclical processes (like colors going in/out of style) that's just fated to switch around on people (due to technological advancements) every decade or so:

    1. Develop specialized tech to address specific processing tasks
    2. Re-develop that tech for generalized processing
    3. (Re)discover that other specific processing tasks benefit more from differently specialized tech
    4. Repeat

    There's gold in dem der hills... black gold, Texas tea... truth in dem der words. Back in 1999, a novel was written about various marketing ploys. In that novel, one marketing exec doubled sales and became an overnight sensation when he convinced a shampoo brand to add the words "rinse and repeat" to the bottle.

    Evil corporate marketing-ploy conspiracy theories are fun and all (and oftentimes do coexist with, or pre-empt, actually worthwhile technical advancements). But at least in the case of GPU/CPU computing hardware, the true explanation is much less... Mulder-ish: the management of bottlenecking.

    Technological advancement in computing is a wicked problem of doing the most amount of work in the least amount of time. And every computing job actually covers a mix of generalized and specialized computing tasks. A particular hardware design can only favor one or the other (generalized or specialized computing), leaving the unfavored one as the platform's bottleneck. Nvidia's Turing introduced Tensor and RT cores in order to specifically address the two biggest hardware-level computational bottlenecks in GPGPU and graphics found in modern workloads (working with multi-dimensional arrays and performing ray-tracing, respectively). The ultimate upshot is going to be that bottlenecking will simply manifest somewhere else in the computational chain, thereby giving chip developers new marching orders for what to focus on next generation.

    Post edited by RayDAnt on
  • outrider42 Posts: 3,679
    But really, it's just a matter of coding. Everything is a matter of code. The program has to be written to take advantage of the specialized RT cores, which is why we are sitting here waiting on Daz Iray to get support for them. If Nvidia really wanted to, they could find a way to get Tensor cores involved in some rendering process as well.

    Just think if denoising could be applied to the actual shading calculation of a render. Or perhaps Tensor could be written to focus on shadows. These are the things that take the longest to render, and any kind of help would be a boost.
  • RayDAnt Posts: 1,120
    edited June 2019
    But really, it's just a matter of coding. Everything is a matter of code.

    Not in the case of ASICs. ASICs, by definition, are custom-designed at the hardware level to perform one specific computing application extremely well, at the price of being useless for anything else. And unfortunately, no amount of coding can change that. It's the major downside of ASICs in general, and a big part of why chip designers are always so keen on moving away from ASIC-oriented design despite all the benefits they seem to offer (ASICs go against the grain of generalized computing as a whole).

     

    The program has to be written to take advantage of the specialized RT cores, which is why we are sitting here waiting on Daz Iray to get support for them.

    Technically it's actually the other way 'round. The way you incorporate ASIC-style processors into a program that previously did whatever the ASIC does in-house is more a matter of removing large chunks of embedded code and replacing them with short calls to your specialized processing units (in this case, RTCores), rather than adding to it.

    The end result is a simpler, more elegant code base. Thus how Nvidia gets away with claiming that RTCores streamline the coding process for ray-tracing equipped applications. If you're writing a ray-tracing app from scratch for the first time, then having RTCores at your disposal will make it a breeze. If, however, you are dealing with a veteran app like Iray (with an approximately 30-year-old code base), then getting RTCores fully implemented is gonna mean essentially having to re-write/cut down 30 years of coding from scratch. Hence the long wait.
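
    As a conceptual sketch of that trade, the first function below is the sort of hand-written intersection math a renderer carries around in-house, and the second is the short call it gets replaced with once an RT unit does the traversal. hardware_trace() is a made-up placeholder, not a real OptiX or RTCore API.

    ```python
    # Sketch: in-house ray/triangle intersection vs. delegating to hardware.
    import numpy as np

    def intersect_triangle(origin, direction, v0, v1, v2, eps=1e-7):
        # Moller-Trumbore ray/triangle test: returns hit distance t, or None.
        edge1, edge2 = v1 - v0, v2 - v0
        pvec = np.cross(direction, edge2)
        det = np.dot(edge1, pvec)
        if abs(det) < eps:
            return None  # ray is parallel to the triangle
        inv_det = 1.0 / det
        tvec = origin - v0
        u = np.dot(tvec, pvec) * inv_det
        if u < 0.0 or u > 1.0:
            return None
        qvec = np.cross(tvec, edge1)
        v = np.dot(direction, qvec) * inv_det
        if v < 0.0 or u + v > 1.0:
            return None
        t = np.dot(edge2, qvec) * inv_det
        return t if t > eps else None

    def trace_in_software(origin, direction, triangles):
        # The "large chunk of embedded code" path: walk the geometry ourselves.
        hits = [intersect_triangle(origin, direction, *tri) for tri in triangles]
        hits = [t for t in hits if t is not None]
        return min(hits) if hits else None

    def hardware_trace(scene_handle, origin, direction):
        # Placeholder for the driver/RT-core call; it does not exist for real.
        raise NotImplementedError("stand-in for a hardware ray-trace call")

    def trace_with_rt_hardware(origin, direction, scene_handle):
        # The replacement: one short call hands the whole traversal to the RT unit.
        return hardware_trace(scene_handle, origin, direction)
    ```

    The first path is the kind of code a veteran renderer has accumulated everywhere; the second is what it shrinks to once the hardware takes over, which is why the migration means cutting code out rather than bolting it on.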

     

    If Nvidia really wanted to, they could find a way to get Tensor cores involved in some rendering process as well.

    Just think if denoising could be applied to the actual shading calculation of a render. Or perhaps Tensor could be written to focus on shadows. These are the things that take the longest to render, and any kind of help would be a boost.

    Again, general computing principles do not apply to ASIC-level hardware. Tensor cores are extremely specific in their functionality. They are designed to take large, functionally identical sets of data (e.g. pre-rendered pixel values in a comparison matrix for DLSS) as input and condense them down to smaller sets of data as output. 3D rendering at its core is about taking functionally diverse sets of data (e.g. texture, light intensity, positional data values), processing each one of those elements differently, and then deriving a single pixel value from them. There really is nothing whatsoever, at a basic hardware level, that Tensor cores can directly offer to the 3D rendering process. Although they are extremely useful for doing things like transformations on a rendered image (because once you get to that stage of the process you are dealing with large sets of functionally identical data in the form of pixel values...)

    Post edited by RayDAnt on
  • LenioTG Posts: 2,118

    I'm not understanding.

    So we're not going to see a speed improvement in render times with Iray 2019.1.0.0?

  • LenioTG said:

    I'm not understanding.

    So we're not going to see a speed improvement in render times with Iray 2019.1.0.0?

    They are discussing the tensor cores, used for de-noising, not the RTX cores, which should have an impact on render speeds once we have a version of Iray that uses them.

  • LenioTG Posts: 2,118
    LenioTG said:

    I'm not understanding.

    So we're not going to see a speed improvement in render times with Iray 2019.1.0.0?

    They are discussing the tensor cores, used for de-noising, not the RTX cores, which should have an impact on render speeds once we have a version of Iray that uses them.

    Until now I had thought that RTX cores and tensor cores were the same thing :O

    What's the difference?

  • RayDAnt Posts: 1,120
    edited June 2019
    LenioTG said:
    LenioTG said:

    I'm not understanding.

    So we're not going to see a speed improvement in render times with Iray 2019.1.0.0?

    They are discussing the tensor cores, used for de-noising, not the RTX cores, which should have an impact on render speeds once we have a version of Iray that uses them.

    Until now I had thought that RTX cores and tensor cores were the same thing :O

    What's the difference?

    They both accelerate certain kinds of mathematical equations. However the type of equations they each accelerate is completely different. RTCores accelerate ray-tracing (which is heavily used in graphics rendering) whereas Tensor Cores accelerate deep learning algorithms. They serve completely different (although sometimes indirectly related) purposes.

    Post edited by RayDAnt on
  • LenioTG Posts: 2,118
    RayDAnt said:
    LenioTG said:
    LenioTG said:

    I'm not understanding.

    So we're not going to see a speed improvement in render times with Iray 2019.1.0.0?

    They are discussing the tensor cores, used for de-noising, not the RTX cores, which should have an impact on render speeds once we have a version of Iray that uses them.

    Until now I had thought that RTX cores and tensor cores were the same thing :O

    What's the difference?

    They both accelerate certain kinds of mathematical equations. However the type of equations they each accelerate is completely different. RTCores accelerate ray-tracing (which is heavily used in graphics rendering) whereas Tensor Cores accelerate deep learning algorithms. They serve completely different (although sometimes indirectly related) purposes.

    Ah okay, thanks for the explanation! :D

  • RayDAnt Posts: 1,120

    It's already been picked up by others here on the forum, but according to the most recent Daz Studio changelogs:

    • Update to NVIDIA Iray RTX 2019.1 (317500.1752)

    • Source maintenance; IK

    DAZ Studio : Incremented build number to 4.12.0.5

    and 

    • Source maintenance

    • Extended DzCreateNewItemDlg SDK API; added setNewItemName(), getNewItemName(); added parameter to addOption()

    • Extended DzObject public API; added isBuildingGeom(), isBuildingGeomValid()

    • Updated SDK version to 4.12.0.10; SDK min is 4.5.0.100

    • Updated SDK API documentation; DzCreateNewItemDlg

    • Update to NVIDIA Iray RTX 2019.1.1 (317500.2554)

    • Fixed a timing/update issue in DzMeshSmoother

    DAZ Studio : Incremented build number to 4.12.0.10 

    Iray with full RTX acceleration support is already at its second iteration on the private alpha release channel, so odds are that the next public beta (the first official update of 4.12 Pro) is gonna have RTCore support. So if you've been waiting for that - soon (fingers crossed).

This discussion has been closed.