2060 to launch Jan 15 for $349


Comments

  • LenioTG Posts: 2,118

    Well...I currently have no interest at all in those scenes, since my scenes have 2-4 characters...so 6 GB seems fine to me, but maybe that's because I currently only have 3...

  • kyoto kid said:

    ...however "in-render" optimisations (like using the Scene Optimiser) do compromise on final render quality, particularly at larger render resolutions. Manually doing so in a very involved scene can become a case of diminishing returns time-wise.

    My railway station scene with all the characters (8), effects, and emissive lighting involved, at 2,000 x 1,500 resolution and a high quality setting, would definitely dump to the CPU with only 4 or even 6 GB (and that isn't my most complex work).

    As I recall the railway station has no figures taking up the whole height of the image, so you could almost certainly cut character textures to a quarter of their usual size, and possibly clothing textures too - that, even if you couldn't push it further, would drop the pixel count to 1/16 of its initial value. The more distant figures could probably be trimmed further, or even stripped of their maps.
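
    For anyone who wants to experiment with that kind of texture trim by hand, outside of Studio or the Scene Optimiser, here is a minimal sketch of the idea in Python. It is purely illustrative: it assumes Pillow is installed, the folder name is made up, and it should only ever be run on copies of the maps, never the originals.

    ```python
    # Minimal sketch (not a Daz/Scene Optimiser tool): batch-downscale texture maps
    # to a quarter of their linear size, i.e. 1/16 of the pixel count.
    # Assumes Pillow is installed; "textures_copy" is a hypothetical folder name -
    # point it at COPIES of your maps, never the originals.
    from pathlib import Path
    from PIL import Image

    SRC = Path("textures_copy")   # hypothetical folder of duplicated texture maps
    SCALE = 0.25                  # quarter of the original width/height

    for tex in sorted(SRC.glob("*.jpg")):
        img = Image.open(tex)
        new_size = (max(1, int(img.width * SCALE)), max(1, int(img.height * SCALE)))
        img.resize(new_size, Image.LANCZOS).save(tex)
        print(f"{tex.name}: {img.width}x{img.height} -> {new_size[0]}x{new_size[1]}")
    ```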

  • nicstt Posts: 11,715
    scorpio said:
    kyoto kid said:
    Robinson said:

    I'm surprised nobody has mentioned rendering speed here. You've all been focused on RAM. The advantage of buying an RTX card, even a 2060, is once the RT cores get involved (when NVIDIA release the next iteration of Optix), you'll be getting a 6x - 10x boost in performance. That's HUGE, especially if you make animations. Remember that iRay doesn't yet make use of RT. It's all CUDA based.

    ...all that speed is moot if a scene and render process exceeds the card's VRAM. 6 GB is the low end for GPU rendering. Fine for portraits and relatively simple scenes, but throw in several characters with clothing, hair, and HD morphs, along with transmaps, reflectivity and emissive lights, and the process could likely dump to the CPU.

    Not if you handle the scene and optimise it properly. I had a 4GB card and managed to do quite well with it.

    Your statement, whilst correct, doesn't render (forgive the pun) kyoto kid's point moot; if it drops to CPU, the card is useless. Further, said optimisation can take a lot of time; note: I said can, not will.

  • kyoto kid Posts: 42,159
    edited January 2019
    kyoto kid said:

    ...however "in-render" optimisations (like using the Scene Optimiser) do compromise on final render quality, particularly at larger render resolutions. Manually doing so in a very involved scene can become a case of diminishing returns time-wise.

    My railway station scene with all the characters (8), effects, and emissive lighting involved, at 2,000 x 1,500 resolution and a high quality setting, would definitely dump to the CPU with only 4 or even 6 GB (and that isn't my most complex work).

    As I recall the railway station has no figures taking up the whole height of the image, so you could almost certainly cut character textures to a quarter of their usual size, and possibly clothing textures too - that, even if you couldn't push it further, would drop the pixel count to 1/16 of its initial value. The more distant figures could probably be trimmed further, or even stripped of their maps.

    ...those are just tests.

    The final render will be at a much larger resolution and higher quality for printing.

    Post edited by kyoto kid on
  • Robinson Posts: 751
    edited January 2019

    We will?

    Please point me towards the data to back up such a claim.

     

    Simple. Previous generation (1080 Ti) managed around 1.2 gigarays per second. The RTX generation (2080 Ti) will give you 10 gigarays per second (primary rays). I've downloaded and run the DirectX RT demos to get a rough value for my own 2070 of over 4 gigarays per second. My 970 is around 10x slower than that. That's actual running code, though it's DXR not iRay.

    My expectation is that new renderers are going to appear this year, perhaps in beta form, that will completely blow the older generation of cards to pieces.  If that doesn't happen with iRay (though I know NVIDIA are working on a new iteration of Optix that will use RT cores and iRay can make use of Optix), Daz will absolutely have to consider changing renderer, or writing their own because iRay is going to die.  Remember that the biggest hit in any ray tracer is BVH traversal.  BVH traversal is what the RT cores do really well.  They will replace the several thousand clocks a CUDA invocation does against the data with far fewer (by an order of magnitude).  I'm actually quite optimistic about this technology, if you hadn't noticed.

    I've heard similar things with respect to the next Octane Renderer, currently under development.  

     

    Post edited by Robinson on
  • bluejaunte Posts: 2,036
    Robinson said:

    We will?

    Please point me towards the data to back up such a claim.

     

    Simple. Previous generation (1080 Ti) managed around 1.2 gigarays per second. The RTX generation (2080 Ti) will give you 10 gigarays per second (primary rays). I've downloaded and run the DirectX RT demos to get a rough value for my own 2070 of over 4 gigarays per second. My 970 is around 10x slower than that. That's actual running code, though it's DXR not iRay.

    My expectation is that new renderers are going to appear this year, perhaps in beta form, that will completely blow the older generation of cards to pieces.  If that doesn't happen with iRay (though I know NVIDIA are working on a new iteration of Optix that will use RT cores and iRay can make use of Optix), Daz will absolutely have to consider changing renderer, or writing their own because iRay is going to die.  Remember that the biggest hit in any ray tracer is BVH traversal.  BVH traversal is what the RT cores do really well.  They will replace the several thousand clocks a CUDA invocation does against the data with far fewer (by an order of magnitude).  I'm actually quite optimistic about this technology, if you hadn't noticed.

    I've heard similar things with respect to the next Octane Renderer, currently under development.  

     

    May wanna read this though, specifically about how each rendered frame requires time for not just ray casting but shading too, where RT cores won't help.

    https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

  • Robinson Posts: 751
    May wanna read this though, specifically about how each rendered frame requires time for not just ray casting but shading too, where RT cores won't help.

    https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

    Yes, but what he didn't say is that with the RT cores handling BVH traversal, there are more CUDA cores available for shading. Obviously the more complex your lighting model, the more time it will take.

  • bluejaunte Posts: 2,036
    Robinson said:
    May wanna read this though, specifically about how each rendered frame requires time for not just ray casting but shading too, where RT cores won't help.

    https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

    Yes, but what he didn't say is that with the RT cores handling BVH traversal, there are more CUDA cores available for shading. Obviously the more complex your lighting model, the more time it will take.

    Absolutely, but if you were to render a scene that only spends 20% of a frame on ray casting, you are not going to see a 6x - 10x boost in performance. If anything that may be a very best case scenario.
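
    To put a rough number on that point, here is a back-of-the-envelope Amdahl's-law style estimate. The fractions and the 10x figure are invented purely for illustration, not measured from Iray or any real scene.

    ```python
    # Illustrative Amdahl's-law estimate: overall frame speedup when only the
    # ray-casting portion of a frame is accelerated by RT cores.
    # All numbers below are assumptions for illustration, not measurements.
    def overall_speedup(raycast_fraction: float, raycast_speedup: float) -> float:
        return 1.0 / ((1.0 - raycast_fraction) + raycast_fraction / raycast_speedup)

    # Ray casting at 20% of the frame, assumed 10x faster with RT cores:
    print(round(overall_speedup(0.20, 10), 2))  # ~1.22x overall - nowhere near 6x-10x
    # Ray casting dominating at 90% of the frame instead:
    print(round(overall_speedup(0.90, 10), 2))  # ~5.26x overall
    ```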

  • Robinson Posts: 751

    Absolutely, but if you were to render a scene that only spends 20% of a frame on ray casting, you are not going to see a 6x - 10x boost in performance. If anything that may be a very best case scenario.

    We'll see later on in the year.  But even if "only" 3x - 5x faster, the difference between this new generation of cards and the previous one is huge.

  • thepenguin99 Posts: 62
    edited January 2019

    Aren't some people already using the ray tracing for rendering and getting like a 30% boost? I don't think we are going to see them being multiple times faster. Honestly even a 30% boost vs what they currently do rendering Iray would make them interesting.

    Post edited by thepenguin99 on
  • bluejaunte Posts: 2,036
    Robinson said:

    Absolutely, but if you were to render a scene that only spends 20% of a frame on ray casting, you are not going to see a 6x - 10x boost in performance. If anything that may be a very best case scenario.

    We'll see later on in the year.  But even if "only" 3x - 5x faster, the difference between this new generation of cards and the previous one is huge.

    For sure. Even now the 2080 TI seems to render about twice as fast in Iray as a 1080 TI, and that's without any RT cores being used at all. That can only be explained by the other things they upgraded beyond just better/more CUDA cores and RT - like the new integer and floating point pipeline described here, maybe.

    https://www.pcworld.com/article/3305717/components-graphics/nvidia-turing-gpu-geforce-rtx-2080-ti.html

  • Robinson Posts: 751

    Aren't some people already using the ray tracing for rendering and getting like a 30% boost? I don't think we are going to see them being multiple times faster. Honestly even a 30% boost vs what they currently do rendering Iray would make them interesting.

    I'm not aware of any big engine that's RT enabled yet. The only "real-world" usage I've seen is BFV, which isn't a use case that would really interest us (obviously games have real-time constraints), and the DirectX RT samples, which are on their GitHub and, being samples, have extremely simple shading models.

  • kyoto kid Posts: 42,159

    ...I wish these reviews would include straight 3D rendering benchmarks.

  • kyoto kid said:

    ...I wish these reviews would include straight 3D rendering benchmarks.

    There are some rendering benchmarks out there. Check Gamers Nexus or LTT's review of the 2080 Ti - one of them had some.

  • nicstt Posts: 11,715
    edited January 2019
    Robinson said:

    We will?

    Please point me towards the data to back up such a claim.

     

    Simple. Previous generation (1080 Ti) managed around 1.2 gigarays per second. The RTX generation (2080 Ti) will give you 10 gigarays per second (primary rays). I've downloaded and run the DirectX RT demos to get a rough value for my own 2070 of over 4 gigarays per second. My 970 is around 10x slower than that. That's actual running code, though it's DXR not iRay.

    My expectation is that new renderers are going to appear this year, perhaps in beta form, that will completely blow the older generation of cards to pieces.  If that doesn't happen with iRay (though I know NVIDIA are working on a new iteration of Optix that will use RT cores and iRay can make use of Optix), Daz will absolutely have to consider changing renderer, or writing their own because iRay is going to die.  Remember that the biggest hit in any ray tracer is BVH traversal.  BVH traversal is what the RT cores do really well.  They will replace the several thousand clocks a CUDA invocation does against the data with far fewer (by an order of magnitude).  I'm actually quite optimistic about this technology, if you hadn't noticed.

    I've heard similar things with respect to the next Octane Renderer, currently under development.  

     

    Gigarays: yeah, a new metric, with very little in the way of information on how it's measured. This is based on Nvidia's marketing department? Well, they've got it wrong before. But there is still no data, and certainly not in sufficient numbers to lend it validity.

    "My expectation", "I've heard" - this is not data; data is only reliable when there is enough of it.

    It's a new card; I really hope that the prices being paid yield comparable performance gains.

    ... And what bluejaunte said; again, speculation leaning towards guesswork based on limited testing (and no testing, as it isn't yet available).

    You've spent the cash, you're seeing nice improvements, and you hope you'll see still more; me too. But I'm still seeing unfulfilled promises.

    Post edited by nicstt on
  • Robinson Posts: 751
    nicstt said:

    Gigarays: yeah, a new metric, with very little in the way of information on how it's measured.

    It's measuring dispatch of primary rays into the scene.  It's not really "new" as such, except for the prefix "giga", which hasn't been seen before in RT hardware.
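
    To give a feel for what those numbers mean in practice, here is some purely illustrative arithmetic; only the 10 gigarays per second figure comes from the discussion above, while the resolution, sample count and rays per path are invented for the example.

    ```python
    # Purely illustrative arithmetic (assumed sample/bounce counts): roughly how
    # long the ray casting alone for one frame would take at a given gigaray rate.
    width, height = 2000, 1500      # render size mentioned earlier in the thread
    samples_per_pixel = 1000        # assumed iteration count, not a measured figure
    rays_per_path = 4               # assumed: one primary ray plus a few bounces
    gigarays_per_second = 10        # NVIDIA's quoted figure for the 2080 Ti

    total_rays = width * height * samples_per_pixel * rays_per_path
    seconds = total_rays / (gigarays_per_second * 1e9)
    print(f"{total_rays / 1e9:.0f} gigarays -> {seconds:.1f} s of pure ray casting")
    # -> 12 gigarays -> 1.2 s  (shading and everything else comes on top of that)
    ```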

  • nicstt Posts: 11,715
    edited January 2019

    I stand by my statements; in effect we don't know, we can hope, but so far...

    Edit:

    And if I'm understanding that correctly, the new cards can fire ten times as many rays into a scene as previously; great! Not big on information, and zero data.

    Post edited by nicstt on
  • outrider42 Posts: 3,679

    There is some data. Some people got Battlefield V's ray tracing to work on the Titan V. The V did OK, but only on certain maps.

    You will see many headlines claim the V does "surprisingly well" with ray tracing enabled. But they may not mention that the maps where this "surprisingly well" performance happens are maps that do not have anything ray traced, or you have to read far to see it. But on other maps with lots of ray tracing, the Titan V stumbles and loses 30 frames vs the 2080 Ti. To be exact, the numbers were 67 vs 97. That kind of difference is massive in a video game. The V also has major advantages in other areas, with a ton more CUDA cores than the 2080 Ti. Plus the V has a big advantage in asynchronous compute, which is something that ray tracing and DirectX 12 benefit from. Another important note is that Battlefield V's ray tracing is actually very light. They only trace reflections. That's it, not even shadows are ray traced. So it stands to reason the gap would grow with shadows and global illumination.

    Obviously this is only a video game, but it does all of the same things. The rays get cast in the first part of the frame and then the engine resolves the rest of the frame.

    OptiX Prime uses more memory than standard OptiX. This is part of why turning OptiX Prime Acceleration ON in Iray is faster for everything besides Turing. So if you want to save a little VRAM, try turning OptiX off.

    Another fun fact, Optix Prime can fall back to CPU, but standard Optix CANNOT. Standard Optix is a pure GPU renderer. So if Nvidia was to ditch Optix Prime in Iray...there might be a problem for CPU users. So Prime is still needed in Iray. We have to hope the solution they come up with can make us all happy.

    Also, AMD has fired the first high VRAM gaming card shot. I'm surprised nobody mentioned this yet, but AMD is releasing the Radeon 7. The Radeon 7 will have 16GB of HBM2 VRAM. BWAHAHA!

    When you guys thought no game needs more than 8 or 11, boom, AMD just broke that barrier. I did state in another thread that I thought AMD might release a GPU with 16gb in order to make their card stand out. Unlike the Vega Frontier, the R7 is a true gaming card. It will have some perks for content creators, but it is a gaming card first.

    And right on cue, The Division 2 is showing that to play the game at 4K Ultra, they recommend 11gb of VRAM. I told you so.

    And right before the R7 was announced, they had Phil Spencer of Xbox on stage. He didn't say much, other than the next Xbox will be using AMD...which we all already knew. But the R7 having 16gb is no coincidence. I believe the next Xbox will have either 12 or 16gb of dedicated VRAM. And this will signal the start of higher capacity gaming cards. So the next series that Nvidia releases will bump up the VRAM spec across the board like I said earlier in the thread.

  • kyoto kid Posts: 42,159

    ...the disadvantage of the Titan V: the $3,000 price tag.

    Regarding the Radeon 7 series, the unfortunate part is Nvidia wants you to pay $2,300 for a Quadro RTX 5000 to get 16 GB of VRAM instead of $700. Most likely anything Nvidia competes with will be more expensive than its AMD counterpart.

  • More expensive but not that much more. Nvidia will have little choice but to cut the prices of the 20xx cards.

  • I've been reading this thread with an eye toward building a rendering dream machine, but not simply throwing money away in the process. But while there are still a lot of questions regarding the value of the RT cores of the 2000 series boards, I am wrestling with a more fundamental question that I bet some of you already know the answer to. If multiple GPUs are utilized in DS IRAY rendering, I understand that they are both utilized even if they are different boards. More to the point, if they have different amounts of VRAM, only one board's VRAM will be utilized in my understanding. But which card prevails? The one with more or the one with less VRAM?

  • I've been reading this thread with an eye toward building a rendering dream machine, but not simply throwing money away in the process. But while there are still a lot of questions regarding the value of the RT cores of the 2000 series boards, I am wrestling with a more fundamental question that I bet some of you already know the answer to. If multiple GPUs are utilized in DS IRAY rendering, I understand that they are both utilized even if they are different boards. More to the point, if they have different amounts of VRAM, only one board's VRAM will be utilized in my understanding. But which card prevails? The one with more or the one with less VRAM?

    No, each card is separate - if the scene fits on both, both will be used, if it fits on one but not the other then only the one that can hold the scene will be used, if it doesn't fit into either card then Iray will fall back to the CPU.
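
    Put another way, the behaviour is roughly this kind of independent per-card check. This is a simplified sketch of what is described above, not Iray's actual code, and the card names and scene size in the example are made up.

    ```python
    # Simplified sketch of the per-card behaviour described above - NOT Iray's
    # actual implementation. Each card is checked independently against the
    # scene's VRAM needs; if no card fits, the render falls back to the CPU.
    def pick_render_devices(scene_vram_gb, cards):
        usable = [name for name, vram_gb in cards if vram_gb >= scene_vram_gb]
        return usable if usable else ["CPU"]

    # Hypothetical example: an 8 GB scene with a 6 GB and an 11 GB card installed.
    print(pick_render_devices(8, [("RTX 2060 (6 GB)", 6), ("GTX 1080 Ti (11 GB)", 11)]))
    # -> ['GTX 1080 Ti (11 GB)']  (the smaller card sits idle for this render)
    ```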

  • I've been reading this thread with an eye toward building a rendering dream machine, but not simply throwing money away in the process. But while there are still a lot of questions regarding the value of the RT cores of the 2000 series boards, I am wrestling with a more fundamental question that I bet some of you already know the answer to. If multiple GPUs are utilized in DS IRAY rendering, I understand that they are both utilized even if they are different boards. More to the point, if they have different amounts of VRAM, only one board's VRAM will be utilized in my understanding. But which card prevails? The one with more or the one with less VRAM?

    No, each card is separate - if the scene fits on both, both will be used, if it fits on one but not the other then only the one that can hold the scene will be used, if it doesn't fit into either card then Iray will fall back to the CPU.

    So to be clear.  If the entire scene can fit in each of two cards, then both will be used, speeding the render by ~2x over the single card speed (assuming they are identical cards).  But if they are of unequal size and the scene only fits on one of the two, the "smaller" card is ignored during the rendering process?

  • nicstt Posts: 11,715

    There is some data. Some people got Battlefield V's ray tracing to work on the Titan V. The V did OK, but only on certain maps.

    You will see many headlines claim the V does "surprisingly well" with ray tracing enabled. But they may not mention that the maps where this "surprisingly well" performance happens are maps that do not have anything ray traced, or you have to read far to see it. But on other maps with lots of ray tracing, the Titan V stumbles and loses 30 frames vs the 2080 Ti. To be exact, the numbers were 67 vs 97. That kind of difference is massive in a video game. The V also has major advantages in other areas, with a ton more CUDA cores than the 2080 Ti. Plus the V has a big advantage in asynchronous compute, which is something that ray tracing and DirectX 12 benefit from. Another important note is that Battlefield V's ray tracing is actually very light. They only trace reflections. That's it, not even shadows are ray traced. So it stands to reason the gap would grow with shadows and global illumination.

    Obviously this is only a video game, but it does all of the same things. The rays get cast in the first part of the frame and then the engine resolves the rest of the frame.

    OptiX Prime uses more memory than standard OptiX. This is part of why turning OptiX Prime Acceleration ON in Iray is faster for everything besides Turing. So if you want to save a little VRAM, try turning OptiX off.

    Another fun fact, Optix Prime can fall back to CPU, but standard Optix CANNOT. Standard Optix is a pure GPU renderer. So if Nvidia was to ditch Optix Prime in Iray...there might be a problem for CPU users. So Prime is still needed in Iray. We have to hope the solution they come up with can make us all happy.

    Also, AMD has fired the first high VRAM gaming card shot. I'm surprised nobody mentioned this yet, but AMD is releasing the Radeon 7. The Radeon 7 will have 16GB of HBM2 VRAM. BWAHAHA!

    When you guys thought no game needs more than 8 or 11, boom, AMD just broke that barrier. I did state in another thread that I thought AMD might release a GPU with 16gb in order to make their card stand out. Unlike the Vega Frontier, the R7 is a true gaming card. It will have some perks for content creators, but it is a gaming card first.

    And right on cue, The Division 2 is showing that to play the game at 4K Ultra, they recommend 11gb of VRAM. I told you so.

    And right before the R7 was announced, they had Phil Spencer of Xbox on stage. He didn't say much, other than the next Xbox will be using AMD...which we all already knew. But the R7 having 16gb is no coincidence. I believe the next Xbox will have either 12 or 16gb of dedicated VRAM. And this will signal the start of higher capacity gaming cards. So the next series that Nvidia releases will bump up the VRAM spec across the board like I said earlier in the thread.

    Personally, such statements as "no game needs X RAM", if true, would mean that games still ran using normal RAM on the most basic of graphics chips - think the one in the Amiga. In other words (I'll be polite), a very short-sighted statement.

    ... And you are correct, there is some data, just not enough to form reliable opinions.

  • I've been reading this thread with an eye toward building a rendering dream machine, but not simply throwing money away in the process. But while there are still a lot of questions regarding the value of the RT cores of the 2000 series boards, I am wrestling with a more fundamental question that I bet some of you already know the answer to. If multiple GPUs are utilized in DS IRAY rendering, I understand that they are both utilized even if they are different boards. More to the point, if they have different amounts of VRAM, only one board's VRAM will be utilized in my understanding. But which card prevails? The one with more or the one with less VRAM?

    No, each card is separate - if the scene fits on both, both will be used, if it fits on one but not the other then only the one that can hold the scene will be used, if it doesn't fit into either card then Iray will fall back to the CPU.

    So to be clear.  If the entire scene can fit in each of two cards, then both will be used, speeding the render by ~2x over the single card speed (assuming they are identical cards).  But if they are of unequal size and the scene only fits on one of the two, the "smaller" card is ignored during the rendering process?

    Exactly. 

     

  • nicstt Posts: 11,715

    I've been reading this thread with an eye toward building a rendering dream machine, but not simply throwing money away in the process. But while there are still a lot of questions regarding the value of the RT cores of the 2000 series boards, I am wrestling with a more fundamental question that I bet some of you already know the answer to. If multiple GPUs are utilized in DS IRAY rendering, I understand that they are both utilized even if they are different boards. More to the point, if they have different amounts of VRAM, only one board's VRAM will be utilized in my understanding. But which card prevails? The one with more or the one with less VRAM?

    No, each card is separate - if the scene fits on both, both will be used, if it fits on one but not the other then only the one that can hold the scene will be used, if it doesn't fit into either card then Iray will fall back to the CPU.

    So to be clear.  If the entire scene can fit in each of two cards, then both will be used, speeding the render by ~2x over the single card speed (assuming they are identical cards).  But if they are of unequal size and the scene only fits on one of the two, the "smaller" card is ignored during the rendering process?

    Rendering scales well between cards, but not double; if (note IF) VRAM sharing comes to Studio, there will be a performance hit for that memory sharing; this is from the various posts I've seen. As some of them are from Nvidia, I presume they are accurate, but time will tell what actual difference it makes.

  • edited January 2019

    I've been reading this thread with an eye toward building a rendering dream machine, but not simply throwing money away in the process. But while there are still a lot of questions regarding the value of the RT cores of the 2000 series boards, I am wrestling with a more fundamental question that I bet some of you already know the answer to. If multiple GPUs are utilized in DS IRAY rendering, I understand that they are both utilized even if they are different boards. More to the point, if they have different amounts of VRAM, only one board's VRAM will be utilized in my understanding. But which card prevails? The one with more or the one with less VRAM?

    No, each card is separate - if the scene fits on both, both will be used, if it fits on one but not the other then only the one that can hold the scene will be used, if it doesn't fit into either card then Iray will fall back to the CPU.

    So to be clear.  If the entire scene can fit in each of two cards, then both will be used, speeding the render by ~2x over the single card speed (assuming they are identical cards).  But if they are of unequal size and the scene only fits on one of the two, the "smaller" card is ignored during the rendering process?

    Exactly.

    Thanks to all of you that helped clarify this!  I feel a LOT more confident laying down the thousands of dollars to put something in place that will, in fact, work.

    Mod edit: To sort out the quotations

     

    Post edited by Chohole on
  • Hmph!  I guess my inexperience in these forums is showing!

    Thanks to all of you that helped clarify this!  I feel a LOT more confident laying down the thousands of dollars to put something in place that will, in fact, work.

  • kyoto kid Posts: 42,159

    ...just got an email from Newegg this morning mentioning they are now in stock.
