2080 Ti Hallelujah!
Because products are becoming higher in quality, and G8 takes longer to render, my 980 was struggling. I spent a lot of time outdoors over the summer, came back from Spain keen to get back into rendering, and had to face up to tediously long renders.
I thought about getting a 1080Ti ... it is both the right time and the wrong time. They are still expensive here in the UK. I didn't fancy spending on a product that is essentially obsolete. I should have bought one a year ago.
The 2080 is too slow. I wanted a decent increment in performance. The price of the 2080Ti is breathtakingly, stupidly high, but I am retired and I spend a lot of time working on 3D ideas. I decided to take a gamble that the real-time ray tracing might amount to something in the next couple of years (discussions suggest that OptiX Prime might not get any benefit, but who knows).
So the 2080Ti dropped straight in. I had a panic because I hadn't seated one of my PCI-E power connectors fully, and the board got sniffy and told me to get more dilithium crystals. But I figured it out. I am already using Studio 4.11 beta, so a quick driver update was all it took to get going.
It is all good. It just works. I'm so pleased. I'm poor, and will be eating bread and peanut paste till Xmas, but hey, quick renders.
I have attached a quirky image that I have been working on. It's a complex scene with 2 x G8 and 1 x G3, plus water, and it took 20 minutes to reach 90% convergence.

Comments
Cool. Personally I'll be waiting, since nearly every article I've read says the performance gain over the 1080 doesn't come close to justifying the 2080 Ti's launch price, and many tech outlets say they can't recommend it at its current price.
Here is a good article: https://www.extremetech.com/gaming/278454-nvidia-rtx-2080-and-rtx-2080-ti-review-you-cant-polish-a-turing
I also love this quote: "ExtremeTech, therefore, does not recommend purchasing either the RTX 2080 or RTX 2080 Ti for their hoped-for performance in future games. Ray tracing may be the future of gaming, but that future isn’t here yet".
I am about to purchase a new vehicle, so I can't justify spending on a new GPU; I'm envious of you. Enjoy the rendering!
Yes, all you say is true. It is not a rational purchase at this time. However (speaking personally), this isn't an entirely rational hobby, and it brings me so much pleasure that I have been able to persuade myself :-)
I hope your vehicle is a thing of joy and splendour.
@Valkeerie, did you replace the 980? Or did you just add the 2080Ti?
I'm asking for two reasons. 1) Having both might affect the speed of the render, and 2) I've got a 1080 in a rig with room for two more video cards and I'm curious if it will play nice with later cards, like the 1080Ti, or even better, a 2080Ti. No idea when I'll be able to afford a second card, though.
Congratulations on the new card. Hopefully you can still afford some jam to go with that peanut paste, for a bit of variety in your diet.

I replaced the 980. With its 4GB I figured (based on reading here) that it might cause more trouble with textures than it was worth. I've read that GPU memory should match across multiple cards - I don't know how true that is.
No kidding, my GF looks at me with that WTH look when I talk about rendering or how much I have spent on it, LOL
My i7 system is almost 3 years old, so maybe I can get a 2080ti or better once I upgrade in a year; guess we'll see.
Yeah, the new vehicle won't be a thing of joy or splendour, just a used way to get from point A to point B comfortably and affordably, LOL. I stopped buying new a while back, and it makes no sense to drive the car of my dreams in Dallas traffic, but I appreciate the thought.
Keep us informed on how the new card performs.
Interesting. I've not read that, but I have read that Daz/Iray will "drop" any card that doesn't have enough memory to hold the entire scene. So, as an example, if one card has 8GB and one has 11GB, then in theory only the 8GB card will be ignored if the scene takes 9GB, but both will be ignored if the scene takes 12GB.
I'll have to do some real research on mixing cards: whether or not different memory capacities are a problem, and whether or not it's feasible to mix architectures (Maxwell and Pascal, for example). Or wait until the same card I already own drops low enough at Newegg to fit my budget! (Blasted cryptominers!)
That is how it works, yes - there is no issue with having cards with different capacities installed and marked for use with Iray.
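To make the rule concrete, here is a toy Python sketch of the behaviour described above - not actual Iray code, just the per-card logic: each GPU is kept or dropped independently, depending on whether the whole scene fits in its VRAM.

```python
# Toy sketch of the per-card rule described above - not actual Iray code.
# Each GPU is kept or dropped independently, based on whether the whole
# scene fits in that card's VRAM.
cards_gb = {"8GB card": 8, "11GB card": 11}

def cards_used(scene_gb):
    return [name for name, vram in cards_gb.items() if scene_gb <= vram]

print(cards_used(9))   # ['11GB card'] - the 8GB card drops out
print(cards_used(12))  # [] - nothing fits, so Iray falls back to the CPU
```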
It is very important to note that those are all gaming benchmarks, and as such have little to do with actual Iray performance. The people who have tried the two Iray benchmarks posted by sickleyield and me are seeing almost double the performance of the 1080Ti. Not quite double, but certainly far higher than the 30% or so you see in gaming. And this is without Iray being optimized for Turing yet; it stands to reason that it could get faster with more updates.
Here is the Daz user benchmark thread, which is also now in my sig. https://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks#latest
Keep in mind that no video game is making use of Turing's new features, the Tensor cores and the Ray Tracing cores. So of course video games are only going to show relatively small improvements. Right now Iray does not support the Ray Tracing cores (and there is a question as to whether it can, but that's another topic). But I believe that Iray is making use of the Tensor cores to some degree, and that is where the extra performance is coming from. Right now it is believed that Turing is running off the Volta drivers for Iray, and Volta does have Tensor cores, so this is logical. (The only Volta card available is the $3000 Titan V.)
The 2080ti is about twice the price of the 1080ti, but again, it also gets you about twice the performance. So it is not such a bad investment for Iray, and as I said, it might yet improve even more with driver updates.
I read the ExtremeTech article and it makes some very sound points about the way Nvidia introduces new technology. I found myself nodding in agreement - the advantages of real-time ray-tracing may not be realised within the lifetime of the card.
It's good to see that you are experiencing decent speedups in your benchmarking, and prior to purchase I did read the thread where you discussed them. I'll mention two things I forgot to say in my original post.
The first is that my PC (Win7) has been stable and reliable for about three years, and I had serious doubts about sticking two 1080Tis in the chassis. I didn't think my power supply would handle it (rough numbers sketched below), and I would have had to tear the machine down to fit a new one. I can do that, I built it from scratch, but I didn't want to. The other factors were cooling and noise, and again, a single 2080Ti made more sense. If I were planning a chassis for dual 1080Tis I'd probably go for water cooling.
The second is that I have the Asus dual-fan 2080Ti, and it is quieter on full render than the 980 was at idle. I'm so accustomed to the sound of VTOL aircraft coming in to land that I don't even realise the 2080Ti is busy.
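For anyone curious about the power-supply reasoning, the back-of-envelope sum looks roughly like this. The 250 W figure is Nvidia's rated TDP for the 1080Ti; the rest-of-system allowance and PSU size are assumptions for illustration.

```python
# Back-of-envelope PSU check for a dual-1080Ti build.
# 250 W is Nvidia's rated TDP per 1080Ti; the other figures are assumptions.
gpu_tdp_w = 250
num_gpus = 2
rest_of_system_w = 200   # CPU, drives, fans - a rough allowance (assumed)
psu_w = 650              # a typical single-GPU build's PSU (assumed)

load_w = num_gpus * gpu_tdp_w + rest_of_system_w
print(f"estimated load: {load_w} W against a {psu_w} W PSU")  # 700 W - too tight
```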
Like you, my next upgrade is likely going to be a single card rather than two. It's either that, or break the bank, get 2 or 3 cards, and do a custom water-cooling loop.
My current rig has twin Titan Xs plus a 770 to drive my displays. With 3 cards in there, even with a good case designed for airflow and cards with very good air coolers, the cards are so close together that they just feed each other heat, and it heat-soaks the case, so I have to take the side off to vent it. Even doing that, and using MSI Afterburner to create custom fan profiles so the fans work harder and sooner, they still hit their thermal-throttling thresholds. And of course there is the noise on top of that.
When I am convinced there is a single card that will give me a reasonable performance boost over running the twin Titan Xs, I will probably end up doing that; much easier to keep cool.
Great posts here. I also have an old computer with Windows 7, but I bought a GTX 1080 just before they released the GTX 1080Ti.
If the drivers for the 2080 Ti become more mature and Iray becomes much faster with it, then I will consider a purchase.
Right now I'm just watching the threads about the 2080 Ti, so thanks for yours.
I'm glad it was useful. Just to say, I've spent about 7 hours in Studio 4.11 beta since I installed the card, lots of test renders, and it has all been solid. Not one flaky moment. I was very apprehensive about the purchase and I can say that I'm delighted with the result. In the past I've had to avoid atmosphere effects (mist, fog, clouds etc) because they were so slow, but now it's fog in everything :-) More fog!
It would be great if there could be one thread on this forum where the people who purchased a 2080Ti could focus on sharing their experience of how it works in DAZ Studio, without having to justify their purchase.
- - -
I shared some test scene results last week and wonder whether others who purchased a 2080 Ti see similar behavior:
- - -
Can anyone else confirm that currently an RTX 2080 Ti is not using OptiX Prime acceleration at all?
I was under the impression that OptiX Prime acceleration is based on CUDA.
https://en.wikipedia.org/wiki/OptiX
- - -
To use dForce, the 2080Ti had to be initialized when running it for the first time, but it worked after that.
Would still be great to be able to use both 2080 Tis for dForce...
- - -
According to a former Nvidia employee who posted in one of these threads, OptiX uses CUDA; OptiX Prime is a different thing and doesn't. Iray uses OptiX Prime.
The 1080 Ti is used to run the display, OpenGL, and PhysX.
The two 2080 Tis are assigned as CUDA devices.
That way it is possible to render and edit in Photoshop at the same time...
I did consider upgrading to an i9 and an Asus Sage mainboard.
But because it is currently not quite clear which direction this is all headed, I am happy with the speed increase I get by just swapping the rendering GPUs to 2080 Tis.
At this time all signs point to Turing not using OptiX. The former Nvidia employee posted that OptiX Prime may not be able to be updated to use the ray-tracing cores. But that does not mean there will never be an OptiX Prime update for Turing at all. I believe Turing will get much faster if/when it gets proper OptiX Prime updates, even if it never gets to use the ray-tracing cores. Pretty much all cards run faster with OptiX on, and by a noticeable margin.
It's too bad nobody has run a Titan V through these benchmarks. I'd like to see how it compares.
The 2080ti is closer to being a Titan than a Ti. Even the price is exactly what the last Titan cost.
Unless you need a new card right now, one option worth considering is to wait and see whether Nvidia releases a "real Ti" next year. The 2080Ti is also a bit limited on memory for my taste, considering the high price of what used to be a Titan.
Re: OptiX Prime
Yes, I've just unticked the OptiX Prime box and the difference seems negligible.
One thing I have noticed, both with the 980 and now the 2080 Ti is that GPU memory seems either to leak or become fragmented. After a few hours of rendering it will drop out into CPU rendering. I then have to reboot, and it is fine again. Memory leaks are legendary in complex programs, so this would not be completely unexpected.
I experience the same on my GTX 1070. I render simple scenes for a while, make some adjustments to poses, render & save, adjust, render & save ... and then suddenly it's using the CPU and taking forever. Same scene, same content, but the memory goes from "fits" to "doesn't fit" after a while.
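If anyone wants to watch for the creep, here is a minimal sketch that logs VRAM use while you render, using the pynvml Python bindings (pip install nvidia-ml-py). The device index and polling interval are placeholder choices.

```python
# Minimal VRAM logger using the pynvml bindings (pip install nvidia-ml-py).
# Device index 0 and the 5-second interval are placeholder choices.
import time

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 1024**2:,.0f} / {mem.total / 1024**2:,.0f} MiB")
        time.sleep(5)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```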
Could that be an issue with the new Daz Studio 4.11 beta?
I still use the 4.10 beta and it seems pretty stable.
I have been experiencing the memory leak/fragmentation thing for a long time. I thought 4.10 was better than 4.9 (but it still did it in 4.10), and it is quite possible that 4.11 beta still needs a little honing.
I'm not so much talking about a new Titan. Previous Titans have always been expensive, with the justification of more power and far more RAM. But the RAM advantage disappeared with the last gen, so the Titan isn't that interesting anymore if this continues:
Titan      6GB    Feb 2013    $999
780Ti      3GB    Nov 2013    $699
Titan X    12GB   Mar 2015    $999
980Ti      6GB    Jun 2015    $649
Titan Xp   12GB   Apr 2017    $1200
1080Ti     11GB   Mar 2017    $699
The release cycle of the last couple of years has been: first a Titan for $1000+, then an x80 for $550-$650, and some time later a Ti, which was more or less a cheap Titan for $650-$700.
Now we got the x80 and the Ti together, with the Ti at a Titan price and in the Titan release slot, but nothing named Titan in sight.
Not sure what's going on. It looks like the 2080Ti is what used to be the Titan. Maybe there will still be a classic Ti equivalent next year? A full year without any release would be a bit strange; they have been releasing a high-end GPU every year for some time now. But who knows, maybe they will just sit back and wait for Intel GPUs in 2020.
Yes, the shake-up and the uncertainty are why it might not be a bad idea to wait if you are only considering the upgrade. Have they dropped the Titan, or will it come later? Will there be a cheaper high-end card like the classic Ti? Is the Ti replacing the Titan?
The classic Ti didn't cost much more than the x80: the 780Ti was $50 more, the 980Ti $100 more, and the 1080Ti the same price. Something in the $800 to $850 range would still work.
There is also the possibility of Intel releasing a high-end card in 2020. If it arrives with reasonably good performance and a decent price, Nvidia would most likely lower their prices again with the next gen.
My first guess would be that it won't change much, at least at the moment, as it's still very limited. On the other hand, with Daz Iray there is a big change in memory consumption for the same scene when you just increase the resolution.
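As a rough illustration of why resolution matters: a renderer keeps several per-pixel buffers on the GPU, so their memory scales with pixel count, and doubling both dimensions quadruples it. The buffer count and bytes per pixel below are assumptions for illustration, not Iray's actual internals.

```python
# Rough illustration: per-pixel render buffers scale with pixel count.
# The buffer count and bytes per pixel are assumed, not Iray's real numbers.
def buffer_mib(width, height, buffers=4, bytes_per_pixel=16):
    return width * height * buffers * bytes_per_pixel / 1024**2

print(f"{buffer_mib(1280, 960):.0f} MiB")   # ~75 MiB at 1280x960
print(f"{buffer_mib(2560, 1920):.0f} MiB")  # ~300 MiB - 4x the pixels, 4x the memory
```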
Thanks for posting this. I was under the impression that NVLink would pool the memory of multiple 2080 Tis; I'm glad I found out now and did not go out to buy 4 ;-)
I'm still torn between a 2950X and a 2990WX + 64GB RAM, but will most likely get dual RTX 2080 Tis. Most of the images I render are in the 960x640 to 1280x960 range - quite small compared with the sizes most people aim for (I render for retro console graphics, so I don't need the extra size). Combining the 32 Ryzen cores with dual RTX 2080 Tis, do you believe I will end up with renders that take roughly a minute or two at most? For comparison's sake, I'm using a 960 with 2GB of VRAM (crunched and slow) with a 4790K that has 16GB of RAM. Even a 640x640 render takes almost 2 hours with a lot of lighting and shadows (fireplace + a Gen 8 character) - then again, that could be pushing into CPU-only render mode, since the card is so limited on VRAM.
Thanks!
It really depends on what you are trying to render. Iray seems to scale very well: if a card is 3 times faster in one scene, it is probably 3 times faster in just about all scenes. With that in mind, check out the bench thread in my sig and compare your 960 to those who have tested the 2080ti or any other card. There are two benches, SY's and mine, so be sure you are comparing the right ones. Then it is a simple matter of calculating the differences against your current render speeds. A 2080ti will certainly be a lot faster than a 960, but some things can still push render times up.
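As a sketch of that calculation - every number below is a placeholder, so substitute the actual times from the bench thread and your own renders:

```python
# Every number here is a placeholder - plug in real benchmark times.
bench_960_s = 1800.0     # seconds a GTX 960 takes on the benchmark scene (assumed)
bench_2080ti_s = 140.0   # seconds a 2080Ti takes on the same scene (assumed)
speedup = bench_960_s / bench_2080ti_s

current_render_s = 2 * 60 * 60           # your ~2-hour render on the 960
estimate_s = current_render_s / speedup
print(f"~{speedup:.1f}x speedup, so roughly {estimate_s / 60:.0f} minutes")
```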
BTW, if you render that small, try using the beta 4.11 with its new denoiser enabled. I think for you the denoiser will feel like magic! Give it a try.
One more thing, just to blow some minds. It has been tested and proven that NVLink CAN POOL VRAM on the 2080ti. The people at Vray have tested this and verified that it works. There is a performance penalty in NVLink mode, but it is still faster than a single 2080ti. The other caveat is that nobody actually knows how much VRAM gets pooled; the tools that report VRAM are not reporting it correctly. But they can render a scene that would not fit on a single 2080ti.
Here is the link that I mentioned. Judging by how they did this, I would think this is possible for Iray as well. But nobody has tested this yet. I hope any Daz users who do have two 2080tis can see this and try it. I know some of them did buy Nvlink.
https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards
If anyone does have the hardware to test this, I'd love to see them also run my and SY's benchmarks to see how much of a difference the render speeds are.
Additionally, another Vray post tested 2080tis with the QUADRO version of NVLink and got the VRAM to pool even more easily. My bet is that it also performs better than the gaming NVLink. Of course the Quadro NVLinks are super expensive. Also note that the 2080ti can only link two cards via NVLink.
See that test on the far right? That is the one that would only work with NVLink mode enabled.
Do you mean "The 980 is too slow"?