Iray - Titan X, Titan V, 1080Ti vs 2080/1180 RTX? (Turing Discussion)

Kitana Aeon Posts: 17

I am building a new computer and am wondering what kind of performance difference there is between the Titan offerings, the 1080Ti, and the recently announced Turing RTX architecture. Since the 2080/1180 RTX (Turing) cards are not released yet, I know part of this is just speculation. How do Daz3D and Iray work with the existing cards: Titan X, Titan V, and the 1080Ti?

Since there will be a generational leap in technology here with the RTX series of cards, I have been wondering: are the Titan X or Titan V worth grabbing if they go on sale? Is there a significant reduction in render times over a 1080Ti FTW or SC2?

Also, is the Daz3D developer team going to be optimizing for the Turing generation of technology? If so, how long might we expect to wait before the benefits of this new technology make it down to us users? Would it be wiser to just wait until we have concrete information about the RTX lineup of consumer cards? (That may be months away, though rumors suggest the first GeForce RTX cards could come to market as early as August 30th.)

My goal is to get the best bang for my buck. I expect that it will be a couple of years before we see any real games that can properly utilize the RTX technology, but how might this affect working in Daz3D? Will we see updates to Iray that utilize this real-time ray tracing? Should I be eyeing an entry-level $2,300 Quadro RTX 5000? Or will it still be a long time before the benefits of using it can be felt by a Daz3D user?

https://wccftech.com/nvidia-geforce-rtx-2080-turing-tu104-gpu-pcb-exposed/

Edited to change iRay to Iray


Comments

  • ebergerly Posts: 3,255
    edited August 2018

    Nobody knows the answer to those questions. You can start with the Iray/Studio benchmark times shown below to get a feel for the relative performance of a bunch of different GPUs, but as far as the newer stuff goes, nobody knows. I think it mostly depends on your budget, how long you're willing to wait, and what kind of performance you need. If a single 1080Ti will render your scenes in an acceptable amount of time, why worry about spending tons of $$ and waiting forever to get the latest and greatest?

    [Attached image: BenchmarkNewestCores.jpg]
  • I don't think you understood the question. What I was asking is... A. Is Daz3D currently working on integrating the RTX SDK into Daz3D? If so, how long can we expect to wait to see this update? B. Is getting a Titan series card worth it over a GeForce? Two separate questions... I know that no one knows the actual performance of the next-gen tech right now. But I figured it was worth starting a discussion about the new tech anyway, in a thread that looks at last-gen tech and gets feedback that might help people who are looking to upgrade.

  • Pack your patience. Last time a new card hit we had to wait for Nvidia to upgrade Iray so it would work. Once they finally had it done, it didn't take DAZ long to release it, but, DAZ had to wait for Nvidia. You don't know how long it will take this time. There were frustrated early purchasers of new cards who had to wait a long time last time. But it's up to Nvidia. 

  • Richard Haseltine Posts: 96,718
    edited August 2018

    I don't think you understood the question. What I was asking is... A. Is Daz3D currently working on integrating the RTX SDK into Daz3D? If so, how long can we expect to wait to see this update?

    It wouldn't be Daz, at least for Iray, it would be nVidia. I haven't checked the changelogs posted in the 4.11 beta thread to see if the new family is already supported in that version of Iray. Actually, it would also be down to nVidia providing OpenCL support for the new family (if relevant) as far as using them for dForce is concerned.

     

  • ebergerly Posts: 3,255
    edited August 2018

    I don't think you understood the question. 

    Or perhaps you didn't understand the answer. 

    As I said, nobody knows the answer to your questions. The best you'll get is the table I posted if you want to see relative performance for cards tested so far. There are no performance numbers for cards that haven't been released yet, or cards that nobody here has been willing to spend big bucks for, or cards that nobody has cared to do benchmark tests for. Nobody knows when an update to Iray will be implemented in Studio. Nobody knows whether the new cards will be "worth it over a GeForce", especially since it all depends on YOU and what you need and what you're willing to accept and what you're willing to pay for. 

     

  • To be honest, it's the first time I've heard about the 2080... IMO it all depends on what you need it for, what the purpose is. If it's job related, then probably yes; if it's a side job, hobby, or even a long-term project, then I would say no. I would rather have two reliable GPUs than something very new and not always good. The new 2080 sounds like a commercial that is a bit too hyped for no apparent reason: a bit better than a 1080 but not by that much, and you pay extra cash. Since I can plug in 4 GPUs, I would rather have 4x hybrid 1080s than 4x 2080s.

  • Kitana Aeon Posts: 17
    edited August 2018

    Ok, so the adoption of new Iray features seems to be relatively quick for the DAZ team. That is good to know. Yeah, I know that Iray is an Nvidia product and as such we will ultimately be waiting for their SDK release. Has the DAZ team got their hands on the early SDK yet? I know that many companies like Autodesk and Adobe are already working with the technology in partnership with Nvidia.

    How about the question of Titan cards compared to GeForce cards? I will be using DAZ as part of my design pipeline for my own company, so speed and quality are significant factors for me.

    The idea of getting a $2,300 Turing RTX card for real-time ray tracing (RT-RT) is not out of the realm of possibility. What I have to wonder is whether Iray will be continued or replaced by this technology. It does seem to be in line with the idea, albeit with a major update required. What are your thoughts on this new architecture?

  • CGHipster Posts: 241

    The current speculation is that the new 2080 card is 50% faster than the current 1080Ti. This is pure speculation based on the card's reported speeds and timings, which can be found via Google, YouTube, and various gaming sites.

    http://www.cgchannel.com/2018/08/nvidia-unveils-quadro-rtx-5000-6000-and-8000-gpus/

    If someone doesn't like their hard-earned cash and wants to spend the foreseeable future complaining about crash logs in all of their modeling software, then jumping on those new cards when they first release is a good way to punish the wallet and mind. A wiser man might wait for the hype to settle down and let others go through the headaches of software glitches and empty pockets, then watch eBay and the local markets, including online stores, become saturated with unwanted Titans and GTX 1080Tis as those great cards' prices drop steeply due to the rush for the new architecture.

    I know I will be upgrading my two 1070s; buying resale cards will be like stealing, and I expect the resale supply of Titans and 1080Tis to be plentiful and fruitful.

  • I am reading that the 2060 is at least on par with 1080Ti performance, which is kind of shocking. They usually don't match a previous generation's top-performing card with the next generation's lowest-tier card. I guess this technological jump is so great they simply cannot hold it back, unless all of these reports are totally misleading or untrue. I can say it is an exciting time to be in the industry. This kind of tech is quite literally the holy grail of digital art tech.

  • CGHipster Posts: 241

    I am reading that the 2060 is at least on par with 1080Ti performance, which is kind of shocking. They usually don't match a previous generation's top-performing card with the next generation's lowest-tier card. I guess this technological jump is so great they simply cannot hold it back, unless all of these reports are totally misleading or untrue. I can say it is an exciting time to be in the industry. This kind of tech is quite literally the holy grail of digital art tech.

    The majority of reports are speculation. I think the actual focus for Nvidia is on the Quadro RTX and deep-learning cards, and those definitely offer a massive improvement for any 3D professional or data scientist who can afford them. For a hobbyist, I personally feel that the 1080Ti and 1070 will still be the mainstream picks.

  • Kitana Aeon Posts: 17
    edited August 2018

    I just watched the Nvidia Twitch live stream where they announced the launch of the GeForce RTX 2070 ($499), RTX 2080 ($699), and RTX 2080Ti ($999) on September 20th (pre-orders open now), and according to their infographics the RTX 2070 is light-years ahead of the current-gen 1080Ti and Titan series. I know real-life benchmarks will tell a slightly different tale, but I can't imagine that they would blatantly lie so boldly. They showed that it was between 4-6x faster than a 1080Ti.

    Note that the claim is 4-6 TIMES, not percent, faster... :O

  • CGHipster Posts: 241

    I just got my "Pre-Order" marketing email from Nvidia. Looks like the pricing on this is $1,200 USD for the 2080Ti. So it's not a bad price at all if you consider the current 1080Ti is almost the same price. I will eagerly await feedback from those who spring for one.

  • prixat Posts: 1,585

    I just watched the Nvidia Twitch live stream where they announced the launch of the GeForce RTX 2070 ($499), RTX 2080 ($699), and RTX 2080Ti ($999) on September 20th (pre-orders open now), and according to their infographics the RTX 2070 is light-years ahead of the current-gen 1080Ti and Titan series. I know real-life benchmarks will tell a slightly different tale, but I can't imagine that they would blatantly lie so boldly. They showed that it was between 4-6x faster than a 1080Ti.

    Note that the claim is 4-6 TIMES, not percent, faster... :O

    You have to read those slides very carefully. They were talking about a specific function; in this case, 4-6x faster at ray tracing.

    They did that several times in the presentation, where the slide showed a specific feature but the speech only highlighted the impressive sounding numbers.

  • Note that the claim is 4-6 TIMES, not percent, faster... :O

    Nvidia published fps figures for games that support ray tracing. With CUDA cores simulating ray tracing they got around 2 fps; with the new RTX generation, around 10-12 fps. That sounds right: the new cores are specialised for real-time ray tracing and can give it a big boost. Roughly 25% of the chip die is covered by tensor and ray-tracing cores. People who only want to play games will see little benefit, because most games don't support ray tracing. They will get 10-20% higher fps in games, which is not worth upgrading from GTX to RTX cards for.

     

    The best approach would be to wait 2-3 months to learn the timeline for integration into Iray and get a statement from Nvidia about the speed impact on their render engine. Without that information there is a risk of buying the old GTX generation only to realize that the new RTX generation is much faster in Iray because it has specialised cores.

  • CGHipster Posts: 241

    I have to admit, I'm excited for the RTX cards, although I may not spring for one in the first 6 months. I have two 1070s that I will sell, and I can spring for the difference once I see all the happy posts and not the angry, frustrated posts about bugs and incompatibility. cheeky

  • I will probably wait for Black Friday before making a purchase, especially with the release cards all being sold out.

  • If you're building a commercial venture where Daz Studio Iray rendering factors into the workflow, then the smart move is to go with proven results and wait for the future to prove itself.

    I run a Titan X Pascal and a 1080Ti simultaneously for Iray rendering; they work together well and are essentially the same card. However, I understand the Titan Xp (the X Pascal's replacement) has a few things that make it slightly different from the X Pascal.

    Ultimately, you are going to have to look at 2 primary factors: CUDA count and VRAM. If you're serious about making money with rendering, then you're going to need serious hardware and software, and frankly, Autodesk has been doing it right for a lot longer than Daz Studio, and has an Iray plugin. You can still use GTX cards, but you're going to need a lot of them, and a lot of cooling.

    It's not going to be like Robert Rodriguez with a rented camera and a few cases of beer and some close friends making a cult blockbuster. If you expect to make money with 3D rendering, expect to spend money on the hardware to do it. In that case, your primary goal should be an Nvidia VCA loaded with Titan Vs with HBM2 so you get the benefit of pooled VRAM as well as pooled CUDA cores. Even then, keep your staff low, and don't quit the job that pays the light bill chasing a will o' the wisp.

  • JD_Mortal Posts: 758
    edited September 2018

    Best bang for your buck... are you counting "operational costs", such as the price of electricity to run hours of rendering (including air-conditioning costs from the generated heat, and messed-up re-renders) and supporting hardware costs (power supplies)?

    I use Titan X (Maxwell), Titan Xp (Pascal), and Titan V (Volta) cards, for the 12GB of VRAM and the core counts vs. power consumption.

    The only things you need to focus on are CUDA core count and VRAM size. The use of tensor cores (Volta) is still a future technology that is not in "this game" yet. (It is in the near future, as are RT cores. When it rains, it pours. All it takes is one person to figure out how to use them, and another person to steal the ideas and add them to the programs.)

    Just for the record, "Titans" are GeForce cards.

    The more cores and RAM you can cram into one card, the better... at any initial cost. A single card uses about 200-300 watts of power when fully loaded in a render. (It only renders if there is enough VRAM to fit the whole project; otherwise you are not using it at all, for Iray.) Thus, having fewer cards will yield the best bang for the buck long-term.

    Having 8 cards with half the CUDA cores each, because they are $100 per card, when they will demand up to 2,400 watts... will cost you 2x more in electric bills, 2x more in hardware to run them, and may limit future version updates on that "older hardware".

    It is better to have only the 4 cards, with a max of 1,200 watts, managed by one power supply and one computer: half the electricity to render, half the electricity to cool the room with your air conditioner, and equipment that will be supported for years to come instead of sitting at the bottom end of expiration.

    However, it makes no sense to save $500 or $2,000 on a card when you will easily spend double that in power costs over the lifetime of owning it. Ultimately, if you are not rendering that much, you shouldn't be looking at these cards anyway.

    My two machines (once Daz3D eventually works with Titan Vs... I use them with other rendering engines just fine) pull up to 2,400 watts total, more if I game with them. (That includes the processors and cooling.) Those are operated outside in summer, because they produce too much heat to waste air conditioning on. That is roughly 8,189 BTU/h that needs cooling, and thus another 1,200 watts to power the AC unit...

    Heat from wattage, converted to BTUs: https://www.rapidtables.com/convert/power/Watt_to_BTU.html

    Air-conditioners BTU power consumption in watts: https://www.generatorjoe.net/html/wattageguide.html

    Rendering 24 hours a day at 3.6 kW (3,600 watts) = 86.4 kWh per day... times 28 days a month = 2,419.2 kWh... times 12 months = 29,030.4 kWh... times a rate of $0.12 per kWh ≈ $3,484 per year.
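
    For anyone who wants to redo that math with their own numbers, here is a minimal Python sketch of the same back-of-the-envelope calculation. The 2,400 W and 3,600 W draws, the 24/7 duty cycle, and the $0.12/kWh rate are just the figures quoted above, not universal values; plug in your own.

    WATTS_TO_BTU_PER_HOUR = 3.412  # 1 watt of draw dumps roughly 3.412 BTU/h of heat into the room

    def heat_btu_per_hour(watts):
        # Heat load the air conditioner has to remove, in BTU/h
        return watts * WATTS_TO_BTU_PER_HOUR

    def yearly_cost(watts, hours_per_day=24, days_per_month=28, rate_per_kwh=0.12):
        # Electricity cost per year for the given duty cycle and rate
        kwh_per_year = (watts / 1000.0) * hours_per_day * days_per_month * 12
        return kwh_per_year * rate_per_kwh

    print(heat_btu_per_hour(2400))  # ~8,189 BTU/h, the figure quoted above
    print(yearly_cost(3600))        # ~$3,484 per year at $0.12 per kWh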

    That would be double with the cards that have half the CUDA cores, which may not work, or be supported, after two or three years. Granted, you may not be rendering 24/7/365... but you may not want to sit around and wait 2x to 4x longer for each individual render, and having renders done in a week, as opposed to two weeks or a month, may be what you need to earn money over someone else. Waiting 2 hours to find out that it needs to be redone is horrible. Waiting only one hour, or 15 minutes... or only a few seconds, is a blessing! (Biggest time saver, by far.)

  • ebergerly Posts: 3,255
    edited September 2018

    As mentioned before, since the average user has one or maybe two GPUs, and doesn't render anywhere near 24/7/365, "operational costs" are for the most part somewhat irrelevant. The additional cost of electricity for a single GPU is barely noticeable for most users. And a 250-watt GPU generates about the same heat as two 100-watt light bulbs, or two humans standing in your room. I doubt any of us could prove a measurable difference in the running time of our air conditioners solely due to the addition of a GPU. It depends on a LOT of things (insulation of the house, outside temps, size of room, render time, thermostat settings, etc.).

    Regarding core count, if you look at the chart I posted, I also include core counts for some of the pre-RTX devices, and as you can see there is no direct relationship between cores and render times. More cores does NOT mean shorter render times. And that is especially true with the new Turing architecture, where the jobs are divided up between the different components, all with different software controlling them. There are RT cores, tensor cores, and CUDA cores; there's OptiX ray-tracing software, NGX AI software for the tensor cores, new shaders, new CUDA software, new PhysX software, etc. You can't just say "oh, it has more cores so it's faster".

  • Reality1 Posts: 115
    edited September 2018

    I don't think you understood the question. What I was asking is... A. Is Daz3D currently working on integrating the RTX SDK into Daz3D? If so, how long can we expect to wait to see this update? B. Is getting a Titan series card worth it over a GeForce? Two separate questions... I know that no one knows the actual performance of the next-gen tech right now. But I figured it was worth starting a discussion about the new tech anyway, in a thread that looks at last-gen tech and gets feedback that might help people who are looking to upgrade.

    I can't speak for DAZ. But if my understanding of the technical issues involved here is accurate, these are not so much DAZ questions as they are nVidia questions.

    DAZ could eventually develop software hooks into Studio to use the Turing cores, but I don't see any compelling reason why they would even want to do this, since DAZ Studio is not technically doing the rendering. If DAZ made an SDK available, it would not allow developers to speed up Iray renders, because Iray is nVidia's code. nVidia could try to put this on DAZ, but gee, that would be interesting. It should be nVidia that provides DAZ with a new Iray SDK.

    So when will the new GeForce cards be capable of rendering in real time in Studio? Hard to say without the inside scoop. It seems certain that nVidia will eventually update the Iray code they give DAZ to take advantage of the Turing cores, and when they do, I'm sure it will be a top priority for DAZ to integrate it into Studio, making the new render improvements available.

    These things usually take quite a while, but there is some fierce competition in the industry at present that might speed things up. It could also make things buggy and unusable because the job was rushed.

    Also, I'm wondering if the new real-time ray tracing will look as good as current Iray renders. frown

  • Pack your patience. Last time a new card hit we had to wait for Nvidia to upgrade Iray so it would work. Once they finally had it done, it didn't take DAZ long to release it, but, DAZ had to wait for Nvidia. You don't know how long it will take this time. There were frustrated early purchasers of new cards who had to wait a long time last time. But it's up to Nvidia. 

    I really regretted upgrading to the 1070. I was able to limp along with a beta version of Studio, but it completely crippled my version of Octane. Interesting that. It seems really short-sighted that nVidia has not devised a way to abstract the hardware at the driver level considering how many products they offer and how often they are upgraded.

  • Reality1 Posts: 115
    edited September 2018
    JD_Mortal said:
    Air-conditioners BTU power consumption in watts: https://www.generatorjoe.net/html/wattageguide.html
     

    You could always move, and heat with them (in the winter). wink

  • It seems to me that, at any given time, you have three choices with graphics cards:

    Card A - The latest and greatest card that currently works with Daz Studio

    Card B - A newer and more powerful card than Card A. It's available now, but doesn't yet work in Daz because the appropriate drivers etc. have not been produced.

    Card C - A card currently under development that will be even more powerful than Card B, and will be released at some point in the next few months (probably).

    Unless you have a really compelling reason to wait, I'd always buy card A. If you wait for B or C, there'll be cards D, E, F and the rest of the alphabet tempting you to wait even longer for the next big thing.

     

  • ebergerly Posts: 3,255
    edited September 2018
    FWIW, I could be wrong, but I assume that DAZ's part in this is pretty much limited to doing the user interface (any user options/inputs needed for rendering) and, more importantly, making sure the scene elements are in a format that can be used as input to the Iray/OptiX renderer. That includes putting materials in the right format so they can be used by the RTX shaders, etc. But other than that, I assume Studio isn't even concerned with or aware of the hardware architecture. That's what CUDA, Iray, OptiX, the drivers, etc. are concerned with.
  • It seems to me that, at any given time, you have three choices with graphics cards:

    Card A - The latest and greatest card that currently works with Daz Studio

    Card B - A newer and more powerful card than Card A. It's available now, but doesn't yet work in Daz because the appropriate drivers etc. have not been produced.

    Card C - A card currently under development that will be even more powerful than Card B, and will be released at some point in the next few months (probably).

    Unless you have a really compelling reason to wait, I'd always buy card A. If you wait for B or C, there'll be cards D, E, F and the rest of the alphabet tempting you to wait even longer for the next big thing.

    B was an option for a while with the Pascal cards, but it isn't usually an option (though the usual warning against buying version 1.0 of anything of course applies).

  • ebergerly Posts: 3,255
    Richard, are you saying that the finalized drivers and related software are usually available for new generations?
  • ebergerly said:
    Richard, are you saying that the finalized drivers and related software are usually available for new generations?

    Certainly the length of the delay with Pascal was very unusual.

  • ebergerly Posts: 3,255
    edited September 2018
    Yeah I haven't been following that. I assumed that they usually don't release everything finalized and perfect all at once due to the complexity, but rather introduce incremental features behind the scenes as they get them finalized.
  • I bought an Nvidia RTX 2080 and tested whether the ray-tracing cores bring Iray to real-time performance, or at least help the editor's Iray mode feel almost real-time.

    Test setup: CPU Intel Core i7 8700K, Z370 chipset, 32GB RAM, SSD with 2GB/s transfer rate (that matters because texture loading is much faster), DAZ 4.10 on Windows 10 64-bit, CUDA 9.1.
    Rendering the Iray starter scene, OptiX Prime on:

    CPU + GPU: >8 min. How disappointing! With the 1070 it rendered in 3:11! But I think maybe the Intel security patches slowed down the performance.
    8700K + 2080 + 1070: First Picture: 0:20, 90%: 0:34, 95%: 2:00, 100%: 3:10

    But with the DAZ 4.11 Beta it's much, much faster - it uses an Iray codebase that requires at least a Kepler-generation chip:
    8700K + 2080:             First Picture: 0:05, 90%: 0:09, 95%: 0:59, 100%: 1:21
    8700K + 2080 + 1070: First Picture: 0:05, 90%: 0:06, 95%: 0:08, 100%: 0:46

    CUDA 10 had the same performance as CUDA 9. Adding the CPU to the render tends to reduce performance.

    So the conclusion is: software improvement has more effect than the new graphics cards do. The performance also scales well when you add a GTX 1070 to the RTX 2080.
    It's obvious: currently neither CUDA 9 nor CUDA 10 nor DAZ Studio uses the new ray-tracing cores. So it's just evolution, not revolution. As I read here in the forum, only OptiX will support the ray-tracing cores, but Iray uses OptiX Prime, which only uses the normal cores. It's a pity! Nvidia failed to get the software ready. So we can only hope that it will come soon, but for Iray, nobody has committed to it yet :-(

    At least if you use the newest graphics cards with DAZ 4.11 you get nice performance. When using the 2080+1070, the Iray shading in the editor feels almost smooth, and you can see a picture that gives you an impression of the scene in about 3 seconds.
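
    To put those times in perspective, here is a small, assumed Python helper (not part of the benchmark itself) that converts the posted mm:ss completion times to seconds and works out the speedup factors; the timings are just the ones reported above:

    def seconds(t):
        # Convert an "m:ss" time string to seconds
        m, s = t.split(":")
        return int(m) * 60 + int(s)

    # 100% completion times reported above for the Iray starter scene
    t_410_both = seconds("3:10")  # DAZ 4.10, 2080 + 1070
    t_411_2080 = seconds("1:21")  # DAZ 4.11 beta, 2080 only
    t_411_both = seconds("0:46")  # DAZ 4.11 beta, 2080 + 1070

    print(t_410_both / t_411_2080)  # ~2.3x faster on 4.11, even with one card fewer
    print(t_410_both / t_411_both)  # ~4.1x faster on 4.11 with both cards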

  • NewVision said:

    I bought an Nvidia RTX 2080 and tested whether the ray-tracing cores bring Iray to real-time performance, or at least help the editor's Iray mode feel almost real-time.

    Test setup: CPU Intel Core i7 8700K, Z370 chipset, 32GB RAM, SSD with 2GB/s transfer rate (that matters because texture loading is much faster), DAZ 4.10 on Windows 10 64-bit, CUDA 9.1.
    Rendering the Iray starter scene, OptiX Prime on:

    CPU + GPU: >8 min. How disappointing! With the 1070 it rendered in 3:11! But I think maybe the Intel security patches slowed down the performance.
    8700K + 2080 + 1070: First Picture: 0:20, 90%: 0:34, 95%: 2:00, 100%: 3:10

    But with the DAZ 4.11 Beta it's much, much faster - it uses an Iray codebase that requires at least a Kepler-generation chip:
    8700K + 2080:             First Picture: 0:05, 90%: 0:09, 95%: 0:59, 100%: 1:21
    8700K + 2080 + 1070: First Picture: 0:05, 90%: 0:06, 95%: 0:08, 100%: 0:46

    CUDA 10 had the same performance as CUDA 9. Adding the CPU to the render tends to reduce performance.

    So the conclusion is: software improvement has more effect than the new graphics cards do. The performance also scales well when you add a GTX 1070 to the RTX 2080.
    It's obvious: currently neither CUDA 9 nor CUDA 10 nor DAZ Studio uses the new ray-tracing cores. So it's just evolution, not revolution. As I read here in the forum, only OptiX will support the ray-tracing cores, but Iray uses OptiX Prime, which only uses the normal cores. It's a pity! Nvidia failed to get the software ready. So we can only hope that it will come soon, but for Iray, nobody has committed to it yet :-(

    At least if you use the newest graphics cards with DAZ 4.11 you get nice performance. When using the 2080+1070, the Iray shading in the editor feels almost smooth, and you can see a picture that gives you an impression of the scene in about 3 seconds.

    So it worked for you? I have an RTX 2080 as well and I haven't been able to get it to render anything in Iray at all (using DAZ 4.10 on Windows 10). Did you have to mess with any settings or anything?
