OT: New Nvidia cards to come in RTX and GTX versions?! First whispers of an RTX Titan.


Comments

  • jonascs Posts: 35
    edited November 2018

    Well despite all the negative press I'm happy!

    How so...

     

    Well, I think of it like this: time is money.

    And through my collaboration with an American company, we're doing a lot of renders, 800 or so.

    With my purchase of two 2080 Tis, the question is: is it worth it?

    Well, at my ordinary job I get about $100 a day; that's about $12.50 an hour.

    If a render that usually takes 1 hour now takes 0.5 hours, I have saved $6.25.

    I spend most of my free time doing renders or other things on the computer, but let's say I only render for one hour a day; that works out to 480 days for the cards to pay for themselves in saved time.
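
    As a rough sketch of that math (Python; the $3,000 total card cost is inferred from the post's own numbers, 480 days x $6.25 a day, so treat it as an assumption):

        # Back-of-the-envelope payback estimate for the upgrade described above.
        hourly_wage = 100 / 8            # ~$12.50/hour at about $100 a day
        saved_hours_per_render = 0.5     # a 1-hour render now takes 0.5 hours
        saved_per_render = saved_hours_per_render * hourly_wage  # $6.25
        cards_cost = 3000.0              # two 2080 Tis, total inferred from the post
        renders_per_day = 1              # "only render for one hour a day"
        days_to_payback = cards_cost / (saved_per_render * renders_per_day)
        print(f"Saved per render: ${saved_per_render:.2f}")
        print(f"Days to pay back: {days_to_payback:.0f}")  # -> 480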

    So it's a matter of how you look at it! ;)

    Post edited by jonascs on
  • nicstt Posts: 11,715
    ghosty12 said:
    nicstt said:
    Artini said:

    Are there any thoughts of RTX 2070 cards?

    They have just started appearing and some of them have quite good prices, like the MSI RTX 2070 ARMOR 8G or the Palit GeForce RTX 2070 Dual.

    Still, one can find good deals on a GTX 1080 or GTX 1080 Ti, but if one would like to go for the RTX cards, maybe it is an option.

     

    Yeah, don't buy it.

    Personally, I think the 20 series cards are ridiculously overpriced. We're expected to pay now for something still not available... No thanks.

    The only 20 series I can sort of understand folks buying is the Ti version; as silly as the price is, there is no card available that does what it does for the price.

    Not to forget that some who bought the new cards are now having to RMA them due to the cards going kaput.

    All new tech has some of those; there is no evidence to suggest this is different from usual. Forum, YouTube, and similar posts are not evidence, just a small number of data points for those collecting them, and likely not useful.

  • outrider42 Posts: 3,679
    I'm sure some people are happy, and that's totally fine! I'm glad that some people are happy with their purchase. I bet you'll be even happier when Octane adds support for the ray tracing cores, which is very exciting.

    But a lot of people are experiencing sticker shock with Turing. If the prices weren't so high everything would be different. Even if they were closer to the supposed MSRP that would be a big difference.
  • Ghosty12 Posts: 1,988
    nicstt said:
    ghosty12 said:
    nicstt said:
    Artini said:

    Are there any thoughts of RTX 2070 cards?

    They have just started appearing and some of them have quite good prices, like the MSI RTX 2070 ARMOR 8G or the Palit GeForce RTX 2070 Dual.

    Still, one can find good deals on a GTX 1080 or GTX 1080 Ti, but if one would like to go for the RTX cards, maybe it is an option.

     

    Yeah, don't buy it.

    Personally, I think the 20 series cards are ridiculously overpriced. We're expected to pay now for something still not available... No thanks.

    The only 20 series I can sort of understand folks buying is the Ti version; as silly as the price is, there is no card available that does what it does for the price.

    Not to forget that some who bought the new cards are now having to RMA them due to the cards going kaput.

    All new tech has some of those; there is no evidence to suggest this is different from usual. Forum, YouTube, and similar posts are not evidence, just a small number of data points for those collecting them, and likely not useful.

    That is true, I went a little overboard there, as the tech channels have jumped on that bandwagon. As has been said, it is a shame that the new cards cost so much for what they are.

  • DAZ_Rawb said:

    Just a few more benchmarks from a laptop: we threw the 2080 (at the factory default clock) into an external enclosure to see how well it would work:

    Laptop + eGPU 2080: 6m10s

    Laptop built-in 1070: 11m34s

    Laptop built-in 1070 + eGPU 2080: 4m16s

     

    So, good news for those of you wanting to do external GPU stuff: not only does the enclosure not seem to slow things down much, it will even work together with a built-in card to get a render done faster.

     

    Let's not get distracted from what my "benchmarks" are here for: letting you all know that the 20-series cards work with the latest public beta of Daz Studio. That was not the case with the 10-series, where it took some time after those cards were released before they were anything more than a very expensive paperweight in Iray. Also, in case any of you were looking forward to the 6x speed improvement from the RTX functionality, I wanted to show that the current build of Daz Studio doesn't have any acceleration for the RTX cards' additional hardware functionality. So the fact that these cards are faster than their 10-series counterparts (the 2080 is faster than the 1080, and maybe a touch ahead of a non-OC 1080 Ti) even without those features enabled should make people very optimistic about the potential speed increase once the RTX hardware is fully utilized.

    Which external enclosure did you use? I was going to build a desktop, but now that I have started researching eGPUs I am heavily leaning toward going that route, as my current laptop has Thunderbolt 3 capability. I was very happy to see that you were able to use the laptop's existing GPU alongside this solution; seeing your results, that alone has pretty much solidified my decision. I am looking at the Akitio Node Pro, as that seems to be what most people are reviewing, so I wanted to see what you used in your setup for those results.

    Thanks
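
    As an aside on the times quoted above: if render rates simply add across GPUs (an assumption, not something the post claims), the combined result can be sanity-checked from the solo times with a few lines of Python:

        # If each GPU contributes rate 1/t, the combined time is the
        # reciprocal of the summed rates.
        def secs(m, s):
            return m * 60 + s

        t_2080 = secs(6, 10)    # eGPU 2080 alone: 6m10s
        t_1070 = secs(11, 34)   # built-in 1070 alone: 11m34s
        t_pred = 1 / (1 / t_2080 + 1 / t_1070)
        print(f"Predicted combined time: {t_pred:.0f}s")  # ~241s, i.e. ~4m01s

    The measured combined time was 4m16s (256s), so the dual-GPU setup loses only around 6% to overhead, which squares with "doesn't slow it down much".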

  • DAZ_Rawb Posts: 817
    DAZ_Rawb said:

    Just a few more benchmarks from a laptop: we threw the 2080 (at the factory default clock) into an external enclosure to see how well it would work:

    Laptop + eGPU 2080: 6m10s

    Laptop built-in 1070: 11m34s

    Laptop built-in 1070 + eGPU 2080: 4m16s

     

    So, good news for those of you wanting to do external GPU stuff: not only does the enclosure not seem to slow things down much, it will even work together with a built-in card to get a render done faster.

     

    Let's not get distracted from what my "benchmarks" are here for: letting you all know that the 20-series cards work with the latest public beta of Daz Studio. That was not the case with the 10-series, where it took some time after those cards were released before they were anything more than a very expensive paperweight in Iray. Also, in case any of you were looking forward to the 6x speed improvement from the RTX functionality, I wanted to show that the current build of Daz Studio doesn't have any acceleration for the RTX cards' additional hardware functionality. So the fact that these cards are faster than their 10-series counterparts (the 2080 is faster than the 1080, and maybe a touch ahead of a non-OC 1080 Ti) even without those features enabled should make people very optimistic about the potential speed increase once the RTX hardware is fully utilized.

    Which external enclosure did you use? I was going to build a desktop, but now that I have started researching eGPUs I am heavily leaning toward going that route, as my current laptop has Thunderbolt 3 capability. I was very happy to see that you were able to use the laptop's existing GPU alongside this solution; seeing your results, that alone has pretty much solidified my decision. I am looking at the Akitio Node Pro, as that seems to be what most people are reviewing, so I wanted to see what you used in your setup for those results.

    Thanks

    For our testing we just used the Akitio Node (not the Pro). Additionally, we were able to get it running on a new Mac laptop by swapping the card out for a 1080 and working through a fairly complicated guide online.

  • outrider42 Posts: 3,679
    Didn't you guys blow up a GPU one time in an enclosure? I would be terrified of putting one of these monster Turing cards in an enclosure.
  • Didn't you guys blow up a GPU one time in an enclosure? I would be terrified of putting one of these monster Turing cards in an enclosure.

    You just have to increase the cooling for these enclosures, or point a desk fan at it so it doesn't overheat.

  • Artini Posts: 8,971
    edited November 2018
    DAZ_Rawb said:
    DAZ_Rawb said:

    Just a few more benchmarks from a laptop: we threw the 2080 (at the factory default clock) into an external enclosure to see how well it would work:

    Laptop + eGPU 2080: 6m10s

    Laptop built-in 1070: 11m34s

    Laptop built-in 1070 + eGPU 2080: 4m16s

     

    So, good news for those of you wanting to do external GPU stuff: not only does the enclosure not seem to slow things down much, it will even work together with a built-in card to get a render done faster.

     

    Let's not get distracted from what my "benchmarks" are here for: letting you all know that the 20-series cards work with the latest public beta of Daz Studio. That was not the case with the 10-series, where it took some time after those cards were released before they were anything more than a very expensive paperweight in Iray. Also, in case any of you were looking forward to the 6x speed improvement from the RTX functionality, I wanted to show that the current build of Daz Studio doesn't have any acceleration for the RTX cards' additional hardware functionality. So the fact that these cards are faster than their 10-series counterparts (the 2080 is faster than the 1080, and maybe a touch ahead of a non-OC 1080 Ti) even without those features enabled should make people very optimistic about the potential speed increase once the RTX hardware is fully utilized.

    Which external enclosure did you use? I was going to build a desktop, but now that I have started researching eGPUs I am heavily leaning toward going that route, as my current laptop has Thunderbolt 3 capability. I was very happy to see that you were able to use the laptop's existing GPU alongside this solution; seeing your results, that alone has pretty much solidified my decision. I am looking at the Akitio Node Pro, as that seems to be what most people are reviewing, so I wanted to see what you used in your setup for those results.

    Thanks

    For our testing we just used the Akitio Node (not the Pro). Additionally, we were able to get it running on a new Mac laptop by swapping the card out for a 1080 and working through a fairly complicated guide online.

    It looks like the RTX 2080 could be something to consider, especially once the SDK and drivers from Nvidia appear that support all the new features.

     

    Post edited by Artini on
  • DAZ_Rawb said:
    DAZ_Rawb said:

    Just a few more benchmarks from a laptop: we threw the 2080 (at the factory default clock) into an external enclosure to see how well it would work:

    Laptop + eGPU 2080: 6m10s

    Laptop built-in 1070: 11m34s

    Laptop built-in 1070 + eGPU 2080: 4m16s

     

    So, good news for those of you wanting to do external GPU stuff: not only does the enclosure not seem to slow things down much, it will even work together with a built-in card to get a render done faster.

     

    Let's not get distracted from what my "benchmarks" are here for: letting you all know that the 20-series cards work with the latest public beta of Daz Studio. That was not the case with the 10-series, where it took some time after those cards were released before they were anything more than a very expensive paperweight in Iray. Also, in case any of you were looking forward to the 6x speed improvement from the RTX functionality, I wanted to show that the current build of Daz Studio doesn't have any acceleration for the RTX cards' additional hardware functionality. So the fact that these cards are faster than their 10-series counterparts (the 2080 is faster than the 1080, and maybe a touch ahead of a non-OC 1080 Ti) even without those features enabled should make people very optimistic about the potential speed increase once the RTX hardware is fully utilized.

    Which external enclosure did you use? I was going to build a desktop, but now that I have started researching eGPUs I am heavily leaning toward going that route, as my current laptop has Thunderbolt 3 capability. I was very happy to see that you were able to use the laptop's existing GPU alongside this solution; seeing your results, that alone has pretty much solidified my decision. I am looking at the Akitio Node Pro, as that seems to be what most people are reviewing, so I wanted to see what you used in your setup for those results.

    Thanks

    For our testing we just used the Akitio Node (not the Pro). Additionally, we were able to get it running on a new Mac laptop by swapping the card out for a 1080 and working through a fairly complicated guide online.

    Ok awesome, thanks for the info!

  • bluejaunte Posts: 1,861

    So, seeing as used 1080 Tis still sell for about 600-700 bucks, should I maybe sell my two and get one 2080 Ti now?

  • outrider42 Posts: 3,679

    So, seeing as used 1080 Tis still sell for about 600-700 bucks, should I maybe sell my two and get one 2080 Ti now?

     

    That makes me really happy I got my 1080ti when I did.

    It's a very tough call. One thing to consider is that these cards will only work on the beta, so you will be stuck on the beta until a general release comes out. My testing with the beta has not gone very well; it is clearly slower for me. Do you use other applications that would benefit?

  • bluejaunte Posts: 1,861

    So, seeing as used 1080 Tis still sell for about 600-700 bucks, should I maybe sell my two and get one 2080 Ti now?

     

    That makes me really happy I got my 1080ti when I did.

    It's a very tough call. One thing to consider is that these cards will only work on the beta, so you will be stuck on the beta until a general release comes out. My testing with the beta has not gone very well; it is clearly slower for me. Do you use other applications that would benefit?

    I'm constantly on the beta as a PA, actually. Other applications, well, games for sure, plus the 3D production software I use, but that's not even really the point. It just seemed that at current prices it may be a good opportunity to get the new card, which seems to be roughly twice as fast and is just new tech, with the possibility of RTX features being supported at some point. And only one card instead of two.

  • outrider42 Posts: 3,679

    So, seeing as used 1080 Tis still sell for about 600-700 bucks, should I maybe sell my two and get one 2080 Ti now?

     

    That makes me really happy I got my 1080ti when I did.

    It's a very tough call. One thing to consider is that these cards will only work on the beta, so you will be stuck on the beta until a general release comes out. My testing with the beta has not gone very well; it is clearly slower for me. Do you use other applications that would benefit?

    I'm constantly on the beta as a PA, actually. Other applications, well, games for sure, plus the 3D production software I use, but that's not even really the point. It just seemed that at current prices it may be a good opportunity to get the new card, which seems to be roughly twice as fast and is just new tech, with the possibility of RTX features being supported at some point. And only one card instead of two.

    Well, if there is no downside for what you do, I don't see why not. My concern is whether having two 1080 Tis is better for other apps that make use of multi-GPU, since in normal operation the 2080 Ti is only 35% or so faster. Thus, for normal operations, two 1080 Tis should beat a single 2080 Ti. And while my and SY's benches show roughly double speed over a 1080 Ti, I am unsure if that holds true across all rendering scenes. I have a feeling it does, but I have no way to confirm it.
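
    To put numbers on that (a sketch only; the 35% figure is from the paragraph above, and perfect two-card scaling is an assumption):

        # Relative rendering throughput, normalizing one 1080 Ti to 1.0.
        one_1080ti = 1.0
        two_1080ti = 2.0 * one_1080ti    # assumes near-perfect multi-GPU scaling
        one_2080ti = 1.35 * one_1080ti   # "only 35% or so faster" in normal operation
        print(f"Two 1080 Tis vs one 2080 Ti: {two_1080ti / one_2080ti:.2f}x")  # ~1.48x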

    The other thing is that, for me at least, the beta is sometimes much slower at rendering the same scenes. For the SY scene I was able to match the time, but I have not been able to match my times from 4.10 in any other scene. Sometimes it is not even close; the beta sometimes takes nearly twice as long to render the exact same scene. I am not sure what is going on here: why is it sometimes slower by just a little, and sometimes by a lot? Have you not experienced this? Just for testing, have you tried some different scenes in both versions of Daz to compare? I'm very curious. And if there are some scenes where you lose time in the beta, you would not be able to switch back to 4.10 to render them.

    However, you are correct: it is almost always better to be using one card if the render speeds are indeed the same. Less power, less heat, and I think the render can start up a bit faster on one card. And it is a big difference for gaming, since SLI is almost nonexistent now. And if you plan on using Octane, that renderer is getting full RTX support for sure. Right now we still are not entirely sure Iray OptiX Prime will... or if it even can. Iray has yet to release a new SDK post-Turing.

  • bluejaunte Posts: 1,861

    So, seeing as used 1080 Tis still sell for about 600-700 bucks, should I maybe sell my two and get one 2080 Ti now?

     

    That makes me really happy I got my 1080ti when I did.

    It's a very tough call. One thing to consider is that these cards will only work on the beta, so you will be stuck on the beta until a general release comes out. My testing with the beta has not gone very well; it is clearly slower for me. Do you use other applications that would benefit?

    I'm constantly on the beta as a PA, actually. Other applications, well, games for sure, plus the 3D production software I use, but that's not even really the point. It just seemed that at current prices it may be a good opportunity to get the new card, which seems to be roughly twice as fast and is just new tech, with the possibility of RTX features being supported at some point. And only one card instead of two.

    Well, if there is no downside for what you do, I don't see why not. My concern is whether having two 1080 Tis is better for other apps that make use of multi-GPU, since in normal operation the 2080 Ti is only 35% or so faster. Thus, for normal operations, two 1080 Tis should beat a single 2080 Ti. And while my and SY's benches show roughly double speed over a 1080 Ti, I am unsure if that holds true across all rendering scenes. I have a feeling it does, but I have no way to confirm it.

    The other thing is that, for me at least, the beta is sometimes much slower at rendering the same scenes. For the SY scene I was able to match the time, but I have not been able to match my times from 4.10 in any other scene. Sometimes it is not even close; the beta sometimes takes nearly twice as long to render the exact same scene. I am not sure what is going on here: why is it sometimes slower by just a little, and sometimes by a lot? Have you not experienced this? Just for testing, have you tried some different scenes in both versions of Daz to compare? I'm very curious. And if there are some scenes where you lose time in the beta, you would not be able to switch back to 4.10 to render them.

    However, you are correct: it is almost always better to be using one card if the render speeds are indeed the same. Less power, less heat, and I think the render can start up a bit faster on one card. And it is a big difference for gaming, since SLI is almost nonexistent now. And if you plan on using Octane, that renderer is getting full RTX support for sure. Right now we still are not entirely sure Iray OptiX Prime will... or if it even can. Iray has yet to release a new SDK post-Turing.

    Honestly, I haven't done any comparisons. Maybe I should; perhaps I've been rendering slow all this time. Although, didn't I recently post results of 2x 1080 Ti in that thread where Rawb posted some 2080 Ti benchmarks? That seemed pretty normal, and that would have been done in the beta.

    I have no other application that even uses the second 1080 Ti; I only use it for Iray rendering. It's not even connected by SLI for gaming, as my mainboard doesn't support that. And I read mostly bad things about SLI gaming anyway.

  • outrider42 Posts: 3,679

    On the subject of failing 2080 Tis, GamersNexus has taken the step of actively troubleshooting people's cards. They are asking people to send their failing 2080 Tis in to them for testing, and they will attempt to fix them, or at least diagnose and document the issue. To be clear, this does not void the warranty. They are starting to get some of these failed cards in. Here are the first two videos covering what they have found so far.

    There is one common theme with these failing cards: they all use the reference board from Nvidia, even the EVGA model. So if you are thinking about a 2080 Ti like bluejaunte, perhaps consider one of the third-party custom-board variants. These tend to be the more expensive models, but maybe, just maybe, there is value in going with them over the reference design.

  • Artini Posts: 8,971
    edited November 2018

    For me it was interesting to see that even the RTX 2080 easily beats the 1080 Ti in OctaneBench, even with the current drivers.

    The comparison is at the 21-minute mark of "Beyond Turing - Ray Tracing and the Future of Computer Graphics".

    [Attached image: rtx2080at21m8s.jpg, 1479 x 1057]
    Post edited by Artini on
  • I found this interesting article about 2080 and 2080 Ti tests that were run using V-Ray; it mentions the potential performance hit from using the NVLink bridge, and how you can gain some shared memory with it, at least in these V-Ray tests. From all I am seeing, there is really no support yet for the RT cores, but it is coming soon to some of the main render engines such as V-Ray and Octane.

    https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards

    Also found this article noteworthy:

    https://techgage.com/article/nvidia-geforce-rtx-performance-in-octanerender-redshift-v-ray/

  • bluejaunte Posts: 1,861

    I found this interesting article about 2080 and 2080 Ti tests that were run using V-Ray; it mentions the potential performance hit from using the NVLink bridge, and how you can gain some shared memory with it, at least in these V-Ray tests. From all I am seeing, there is really no support yet for the RT cores, but it is coming soon to some of the main render engines such as V-Ray and Octane.

    https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards

    Also found this article noteworthy:

    https://techgage.com/article/nvidia-geforce-rtx-performance-in-octanerender-redshift-v-ray/

    This is pretty interesting:

    For NVLink to work on Windows, GeForce RTX cards must be put in SLI mode from the NVIDIA control panel (this is not required for Quadro RTX cards, nor is it needed on Linux, and it’s not recommended for older GPUs). If the SLI mode is disabled, NVLink will not be active. This means that the motherboard must support SLI, otherwise you will not be able to use NVLink with GeForce cards. Also note that in an SLI group, only monitors connected to the primary GPU will work. Additionally, if two GeForce GPUs are linked in SLI mode, at least one of them must have a monitor attached (or a dummy plug) so that Windows can recognize them (this is not required for Quadro RTX cards nor is it necessary on Linux).

    ...

    Note that the available memory for GPU rendering is not exactly doubled with NVLink; V-Ray GPU needs to duplicate some data on each GPU for performance reasons, and it needs to reserve some memory on each GPU as a scratchpad for calculations during rendering. Still, using NVLink allows us to render much larger scenes than would fit on each GPU alone.

    It sounds like they didn't have to implement anything to get NVLink, and even memory pooling, to work in V-Ray, as long as the cards are connected and in SLI mode. This contradicts what we heard earlier, doesn't it?
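
    For anyone wanting to check the link state on their own machine, one way (a sketch; it assumes the Nvidia driver's nvidia-smi utility is installed and on the PATH) is to print the GPU topology, where NVLink-bridged pairs show up as NV# entries:

        # Query the GPU interconnect topology; NVLink-connected pairs appear
        # as "NV1"/"NV2" entries in the matrix, PCIe-only pairs as "PHB", "SYS", etc.
        import subprocess

        result = subprocess.run(
            ["nvidia-smi", "topo", "-m"],
            capture_output=True, text=True, check=True,
        )
        print(result.stdout)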

  • I found this interesting article about 2080 and 2080 Ti tests that were run using V-Ray; it mentions the potential performance hit from using the NVLink bridge, and how you can gain some shared memory with it, at least in these V-Ray tests. From all I am seeing, there is really no support yet for the RT cores, but it is coming soon to some of the main render engines such as V-Ray and Octane.

    https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards

    Also found this article noteworthy:

    https://techgage.com/article/nvidia-geforce-rtx-performance-in-octanerender-redshift-v-ray/

    This is pretty interesting:

    For NVLink to work on Windows, GeForce RTX cards must be put in SLI mode from the NVIDIA control panel (this is not required for Quadro RTX cards, nor is it needed on Linux, and it’s not recommended for older GPUs). If the SLI mode is disabled, NVLink will not be active. This means that the motherboard must support SLI, otherwise you will not be able to use NVLink with GeForce cards. Also note that in an SLI group, only monitors connected to the primary GPU will work. Additionally, if two GeForce GPUs are linked in SLI mode, at least one of them must have a monitor attached (or a dummy plug) so that Windows can recognize them (this is not required for Quadro RTX cards nor is it necessary on Linux).

    ...

    Note that the available memory for GPU rendering is not exactly doubled with NVLink; V-Ray GPU needs to duplicate some data on each GPU for performance reasons, and it needs to reserve some memory on each GPU as a scratchpad for calculations during rendering. Still, using NVLink allows us to render much larger scenes than would fit on each GPU alone.

    It sounds like they didn't have to implement anything to get NVLink, and even memory pooling, to work in V-Ray, as long as the cards are connected and in SLI mode. This contradicts what we heard earlier, doesn't it?

    Yeah, that is what I am taking away from it as well. I suppose it will remain in the hands of the render engine developers more than anything else. I am glad that Octane is actively working to implement these features; I am not really sure how much will be done for DAZ's Iray on this front, and Octane is a somewhat streamlined alternative for taking advantage of the new features if Iray in DAZ doesn't evolve with the RTX tech.

  • nicstt Posts: 11,715
    DAZ_Rawb said:

    Just as an update for you all:

    We got an RTX 2080 in the office today and gave it a try with the latest Daz Studio public beta and the latest drivers from Nvidia.

    Good news is that it works right out of the box. Currently the performance is pretty good but we don't believe that the hardware raytracing functionality has been enabled in the currently shipping version of Iray.

     

    In the attached test scene, with the RTX 2080 in an older computer (compared to the machine with the 980 Ti), here is what we found:

    980 Ti: 10 minutes and 30 seconds

    RTX 2080: 6 minutes and 3 seconds.

     

    I wasn't able to test with a 1080 or 1080Ti machine today but hopefully the attached scene file (which can be rendered with only free assets you get when you sign up on our site) will let you all judge for yourself.

    Those results seem to be in line with the expected raw performance boost when going from a 980 Ti to a 2080 (based on the number of CUDA cores, faster RAM, and GPU clocks).

    I'd expect ~25% raw performance increase when going from 1080 Ti to 2080 Ti.
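
    For what it's worth, here is that first-order estimate spelled out (a sketch only: approximate CUDA core counts and reference boost clocks, and it deliberately ignores per-core architectural differences, which is why it is just a rough guide):

        # First-order throughput estimate: CUDA cores x boost clock (MHz).
        cards = {
            "GTX 980 Ti":  (2816, 1075),
            "GTX 1080 Ti": (3584, 1582),
            "RTX 2080":    (2944, 1710),
            "RTX 2080 Ti": (4352, 1545),
        }

        def speedup(old, new):
            c_old, f_old = cards[old]
            c_new, f_new = cards[new]
            return (c_new * f_new) / (c_old * f_old)

        print(f"980 Ti -> 2080:     {speedup('GTX 980 Ti', 'RTX 2080'):.2f}x")      # ~1.66x
        print(f"1080 Ti -> 2080 Ti: {speedup('GTX 1080 Ti', 'RTX 2080 Ti'):.2f}x")  # ~1.19x

    That puts the 1080 Ti to 2080 Ti jump in the ~20-25% ballpark, consistent with the estimate above, and the 980 Ti to 2080 jump close to the measured 10m30s vs 6m03s.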

    The real question, however, is how much we gain when the Iray SDK starts using the RT cores.

    V-Ray Lead GPU Developer Blago recently posted this in V-Ray forums:
     


    There is no public NVIDIA SDK that supports RT Cores with CUDA yet. We expect such in 2019.

    It seems that we will have to wait a bit longer to get the performance improvements from RT cores.

    @DAZ_Rawb:

    Could you please benchmark AI denoising on 2080 Ti? That should be a lot faster with Tensor cores.

    Comparing the number of CUDA cores across different generations is largely useless.

  • I found this interesting article about 2080 and 2080 Ti tests that were run using V-Ray; it mentions the potential performance hit from using the NVLink bridge, and how you can gain some shared memory with it, at least in these V-Ray tests. From all I am seeing, there is really no support yet for the RT cores, but it is coming soon to some of the main render engines such as V-Ray and Octane.

    https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards

    Also found this article noteworthy:

    https://techgage.com/article/nvidia-geforce-rtx-performance-in-octanerender-redshift-v-ray/

    This is pretty interesting:

    For NVLink to work on Windows, GeForce RTX cards must be put in SLI mode from the NVIDIA control panel (this is not required for Quadro RTX cards, nor is it needed on Linux, and it’s not recommended for older GPUs). If the SLI mode is disabled, NVLink will not be active. This means that the motherboard must support SLI, otherwise you will not be able to use NVLink with GeForce cards. Also note that in an SLI group, only monitors connected to the primary GPU will work. Additionally, if two GeForce GPUs are linked in SLI mode, at least one of them must have a monitor attached (or a dummy plug) so that Windows can recognize them (this is not required for Quadro RTX cards nor is it necessary on Linux).

    ...

    Note that the available memory for GPU rendering is not exactly doubled with NVLink; V-Ray GPU needs to duplicate some data on each GPU for performance reasons, and it needs to reserve some memory on each GPU as a scratchpad for calculations during rendering. Still, using NVLink allows us to render much larger scenes than would fit on each GPU alone.

    It sounds like they didn't have to implement anything to get NVLink, and even memory pooling, to work in V-Ray, as long as the cards are connected and in SLI mode. This contradicts what we heard earlier, doesn't it?

    It depends on what tech you rely on. V-Ray is using DXR, so Windows has to see the link between the cards; that's why they must use SLI mode as for gaming, because DirectX is used. That may not be the case when using OptiX or Vulkan directly. The drawback of DXR is that you need at least Windows 10 build 1809 (I've been testing W10 lately and I'm really not convinced, particularly after the recent October update mess).

    Anyway, be it OptiX, Vulkan or DXR, these technologies are not mature and need time for development. We'll have to wait and see how they evolve.

    What I rather find interesting in the benchmarks is the CUDA vs RT Core comparison, which shows the RT cores' speed-up potential. It's a pity there is no 2070 in the bench, because I'm sure it would also beat the GTX 1080 Ti with the RT core advantage, which would make these cards really interesting in terms of performance/price.

  • Ghosty12 Posts: 1,988
    edited December 2018

    Just some more info on the RTX Titan, which going by the article is not much at all:

    https://videocardz.com/79200/nvidia-rtx-titan-teased-by-influencers

    Post edited by Ghosty12 on
  • kyoto kid Posts: 40,656

    ...only 12 GB, and probably $500 or more above the price of the 2080 Ti.

    Yep, I knew they were not about to release a 16 GB enthusiast card. 

  • outrider42 Posts: 3,679

    They would if AMD did, but...

  • kyoto kid Posts: 40,656
    edited December 2018

    ...AMD did, in June of last year, when they released the Radeon Vega Frontier with 16 GB HBM2 memory, 4096 stream processors (cores), a 2048-bit memory bus, and an MSRP of $999, compared to $1,200 for the then top-of-the-line Titan Xp.

    Post edited by kyoto kid on
  • Kevin Sanderson Posts: 1,643
    edited December 2018

    Paperspace has a 24 GB Quadro P6000 option on their virtual servers: $1.10 per hour plus storage starting at $5. https://www.paperspace.com/pricing Never buy a workstation again. A user here has found that Splashtop and Paperspace will work with DAZ Studio; he was running into mouse problems with other Paperspace configurations. I don't know whether user StealthWorks was using the paid or free version of Splashtop; it starts at $5. https://www.daz3d.com/forums/discussion/293261/daz-studio-unusable-on-cloud-server

    Paperspace has Xeon CPUs.
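
    As a rough rent-vs-buy comparison (my own sketch: the $1.10/hour is from the pricing page above, the card price is an assumed figure, and storage and electricity costs are ignored):

        # Break-even hours between renting a GPU instance and buying a card.
        hourly_rate = 1.10      # P6000 instance, $/hour (from the pricing page)
        card_price = 1200.00    # assumed up-front cost of a high-end card
        print(f"Break-even: {card_price / hourly_rate:.0f} rendering hours")  # ~1091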

    Post edited by Kevin Sanderson on
  • kyoto kid Posts: 40,656
    edited December 2018

    ...do you need a monthly/annual subscription, or is it a single-use "as needed" service? This would be great for the final finished render process if you are looking to do large-format, gallery-quality work.

    Post edited by kyoto kid on
  • Kevin Sanderson Posts: 1,643
    edited December 2018

    KK, you can pay hourly, or go with the more expensive monthly plan. Storage is quite affordable and billed monthly. You can cancel or upgrade at any time from the console. The Pricing and Billing FAQ covers that at the bottom of the page I linked above. I may do this eventually. Here's another page about billing: https://support.paperspace.com/hc/en-us/articles/216609067-How-does-Paperspace-billing-work-Monthly-Hourly-

     

    Post edited by Kevin Sanderson on