In my honest opinion, I would cancel the preorder if all you're gonna use it for is DAZ Studio. From experience, DAZ 3D is really slow to update Iray with the newest features, and since there isn't anything new yet about RTX support for Iray itself, I expect support for Iray in DAZ Studio to be even further out.
Last time, the delay was NVIDIA - after NVIDIA spent months getting Iray ready for the new cards, DAZ got it out in Studio quickly. The positive this time is that DAZ already has the new denoiser in the beta.
So I am slogging around with a GTX 745 with like 384 cores... so if I went to just a GTX 1070 it should fit in my system and give me like a 5x-7x speed jump?
Don't compare cores.
No mention of memory pooling. Surely they would tout that if it was a feature. So yeah I think it'll just act as a better SLI and not double memory, hence the lower cost too.
That page and almost every other preorder page also fails to mention anything about Tensor cores. But we now know that Tensor is indeed a part of the die, and the Tensor cores have a big role in what is taking place. So the fact that this page lacks that info is not an indication of a lack of memory pooling. If you watch the conference, the CEO mentions NVLink and memory pooling right before jumping to name prices of the RTX cards. That's some weird timing to talk about NVLink if it isn't going to actually do what he says for the gaming cards. He pretty much said the same words I did, that the NVLink'ed GPUs are like a single big GPU. We don't have confirmation, but we also don't have a denial. We need to get this question answered outright.
...the only time he mentions pooling memory resources between two cards is when he brings up the Quadro 8000 ($10,000 ea). As I mentioned, there is different pricing between the consumer RTX NVLinks and the Quadro/Tesla NVLinks (see attachments below). For the GeForce series it mentions multiple card bridges which pretty much seem to be configured like the old SLI ones. They may allow faster data transfer between cards than SLI, but there is nothing about pooling memory in the description.
It makes perfect sense why the GeForce series would not have memory pooling as, one, do gamers really need that much VRAM (?), and two, it would seriously undercut the Quadro series (particularly the $2,300 RTX 5000: two RTX 2070s would provide more CUDA and Tensor cores than the single Quadro while offering the same combined amount of VRAM [16 GB]).
I was kind of amused by his comments comparing 2070 performance to the Titan-X/Xp as the latter has 4 more GB of VRAM and again, I see VRAM as the single most important feature for GPU based rendering (the more the better). If your scene winds up being say 9 GB, suddenly that old 12 GB Titan-X or Xp is looking pretty good even if it is "slower" than the 2070.
So if I was going to upgrade from my 970 would it be worth it to get one of the new 2080s or just stick with the 1080TI?
I was hoping for... more?
...well if you are a bit strapped I would say the 1080 Ti as, like mentioned above, they are now around 600$. That gives you 3 more GB of overhead than the standard 2080, and it is 400$ (or more) less than the 2080 Ti. Heck, I'm still running with a Maxwell Titan X and it works just fine compared to Iray CPU rendering.
I was looking so forward to this and expected at least the 2080 ti to have 16 GB VRAM. But 11??? Guess they saved me a lot of money because I'll still be sitting on the 1080 ti for a while it seems.
It's a gaming card, so you don't need it. Not to mention the Titan V having 12 GB.
Too bad Titan V doesn't look like it's aimed at us rendering folks. 12 GB for 3K?
do gamers really need that much VRAM (?)
Nope. Would make very little sense from that point of view. Games need to run on commonly used hardware; devs cannot cater to a tiny minority that might be memory-pooling. To even get anywhere close to 11GB you'd need to be gaming at 8k resolution, which is not feasible if you want a decent frame rate. Some time in the future 8k gaming is not unlikely; by then GPUs will be fast enough to handle it and probably also have more VRAM to go along with it. So, I think memory pooling today will do nothing for gamers. It would really just be useful to the ultra-niche GPU rendering crowd that I doubt Nvidia is very concerned with. To the pros they wanna sell Quadros, which covers the VRAM problem; the rest is a group of indies or hobbyists, and they can render well enough with 11GB. Not to mention out-of-core rendering, which only Iray really seems to lack.
It's a gaming card, so you don't need it. Not to mention the Titan V having 12 GB.
I disagree. I determine what I need, or want.
Hard to say what is best for you; you are the best placed to answer that.
Personally, we can't yet make a decision; we know what they have, but not what it (meaning what they have) can do.
I am neither a Renderer nor a Gamer; I'm a hybrid of the two, although I do far more Rendering.
I also hate being pigeonholed... Especially by companies hoping to flog me something.
Well, I agree, I'll do what I want with the card.
But Nvidia markets it as a gaming card... which is kinda nuts with the price tag. And the gap now to a $3K Titan. Raytracing will be nice to have in games, but not with the 2xxx generation cards, they're too expensive. Maybe with the 3xxx generation it can get widespread adoption.
I wouldn't speak so fast on that. 4K gaming is only getting started. Just look at the underwhelming response of gamers to Turing...they expect more.
But it may very well be that NVLink does not pool memory. The consumer one is about half the speed of the Quadro line's. That may be enough to eliminate the feature. But then it also makes one wonder, what is the point? Gamers have largely given up on the idea of SLI as fewer games even support it now.
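For what it's worth, an application can at least ask CUDA whether two boards can address each other's memory (peer access). Here's a generic sketch using the standard runtime calls - nothing specific to Turing, and it only reports peer access; it doesn't tell us whether GeForce NVLink will actually present the pair as one big pool to a renderer:
```
// Generic sketch: query CUDA peer-to-peer access between every pair of GPUs.
// This reports whether GPU a can map GPU b's memory (over NVLink or PCIe);
// it does not prove or disprove "memory pooling" on the new GeForce cards.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int a = 0; a < n; ++a)
        for (int b = 0; b < n; ++b) {
            if (a == b) continue;
            int ok = 0;
            cudaDeviceCanAccessPeer(&ok, a, b);   // can GPU a access GPU b's memory?
            printf("GPU %d -> GPU %d peer access: %s\n", a, b, ok ? "yes" : "no");
        }
    return 0;
}
```
Even if pooling does show up on GeForce, it would still be up to software like Iray to actually take advantage of it.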
It doesn't encroach on Quadro regardless. If you do CAD seriously, you really want Quadro. Look at the benchmarks Linus does in this video: even the very cheapest Quadro cards totally stomp the best consumer cards here because of the specialized software Quadro uses.
Meanwhile, back to the subject of RTX and Iray, check out this article on render engines that are looking to use RTX and Tensor:
https://evermotion.org/articles/show/11111/nvidia-geforce-rtx-performance-in-arch-viz-applications
Otoy OctaneRender: GPU-accelerated, unbiased, physically correct renderer is demonstrating performance improvements of 5-8x with Octane 2019’s path-tracing kernel — running at 3.2 billion rays/second on NVIDIA Quadro RTX 6000, compared with 400 million rays/second on Quadro P6000.
The P6000 is not exactly a slouch! It has 3840 Pascal CUDA cores.
So...are you guys still convinced that Iray, the render engine created by Nvidia, will not see much of an impact from Nvidia RTX? I believe it will be a pretty substantial increase.
I also believe the wait on the Iray SDK will not be too long. Remember that Turing has been held back a little bit; it was originally projected to launch early this year, and the crypto boom messed everything up. They've had over 6 months to polish this launch and its software. So I expect the Iray SDK to be released much sooner than it was for Pascal.
I'd say if NVIDIA does not eventually make good use of RTX in Iray, that would be pretty strange indeed. And once they do make good use of it, and all they are saying about the technology is true, then yes, there should be rather massive performance gains. Sadly, it never seems to me like Iray is of very much interest to NVIDIA because the market for it is teensy weensy. I'm not aware Iray is actually used as a production renderer much. They throw it into applications like Substance and Daz Studio so that anyone uses the damn thing at all. At least that's what it feels like to me.
I've got two EVGA GeForce GTX 980 Ti's, which I bought about a year apart from each other.
I've decided to replace them with one NVIDIA RTX 2080 Ti Founders Edition. Maybe a year from now I'll look at getting another; hopefully after a price drop.
My primary reason is the jump in CUDA processing power for about half the juice I'm using now. I'm hoping a lot of the ray tracing benefits ultimately show up.
Oh I know what you are saying. Iray seems pretty niche, and I almost never hear anyone talk about it. It's so niche that I have worried that Nvidia might one day give up and just kill it off, and where would that leave Daz Studio? (One reason why I never liked being tied so much to one renderer.) However, Iray is owned and developed by Nvidia, and if they have any desire to push their Iray software forward, it would have to come now. If it does not do the trick, well, I really don't know. But if Nvidia does pull this off, if they do make Tensor and RTX and Iray sing, it could give them a big advantage over other GPU render engines. Because while others might use RTX, Nvidia should know how to best use their own hardware.
The Migenius group I have spoken about doing benchmarks makes RealityServer, which like Daz Studio uses Iray as its renderer. So there's that. I have thought it would be interesting if Nvidia made a version of Iray for games. I guess that's not happening. It certainly would be nice if Iray had a real time mode, which might still happen.
...part of why I am patiently waiting for the release of Octane 4, which will include a more affordable subscription track. With out-of-core rendering, the need for memory pooling is not that crucial, and even an old Titan X or a 1080 Ti would be more than sufficient.
I somewhat doubt though that Nvidia will pull the plug on Iray as they open sourced their MDL SDK.
Otoy is just restating what nVidia said in the presentation.
To put that into context, a fast Intel CPU can give us ~100 Mrays/s (via Embree).
OptiX on a 1080ti gives us ~400 Mrays/s. Multiply that by the 4-6x everyone is quoting and you get into the Gigaray range of the RT cores (0.4 Gigarays/s x 4-6 ≈ 1.6-2.4 Gigarays/s).
One of the block diagrams was interesting though: it showed the raytracing being done asynchronously, in parallel to the render! That might save a bit of time, though I don't know if Iray already does this.
The other thing in the same block diagram was that the denoiser is still post-render! Does that mean Iray will need a new stop condition, since the three existing conditions have no way of evaluating 'Denoise Quality'?
Again, I would caution folks not to immediately assume that more is better when it comes to hardware specs. More cores or more gigapaths/second or “a beast of a GPU” doesn’t mean that your particular applications will benefit. Which is why NVIDIA says carefully crafted marketing stuff like the performance is “UP TO six times more” than previous generations or whatever.
The “up to” is the important thing. My car’s speedometer says it can go up to 150 mph. But in real world benchmarks I’ve compiled over the last year of driving it’s never exceeded like 80mph. And in some cases, as much as I wish it would, it never exceeds 20 mph. Or even 0 mph.
Keep in mind that when you’re doing computer stuff there are TWO aspects to solving problems. One is the capabilities of the solver (hardware, and software algorithms, and gigapaths and cores and unleashing the beast and such), but just as important is the type of problem you need to solve. Just like in the real world I can’t take advantage of my car’s top speed (due to real world stuff like stop signs and heavy traffic and winding roads), in the software world the parameters of the problem define how (or if) I can take full advantage of the solver.
For example, let’s assume I have a very low res image (200x100), and for every pixel I want to remove the red component. So for each of 20,000 pixels I want to do a very simple operation, which is replace the Red in the RGB values for that pixel with 0. So if I have a GPU with 20,000 cores I can do all of those at the same time. Cool.
But what if my GPU has 80,000 cores? Will throwing more cores at the problem make it solve faster? Probably not. There’s a point at which the problem doesn’t really benefit from more workers working on the problem.
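Just to make that concrete, here's a rough CUDA sketch of that exact "zero out the red" job, with one thread per pixel (purely illustrative, nothing to do with Iray's actual code):
```
// Illustrative sketch only: one CUDA thread per pixel zeroes the red channel
// of a 200x100 RGB image (the 20,000-pixel example above). Build with: nvcc zero_red.cu
#include <cstdio>
#include <cuda_runtime.h>

__global__ void zeroRed(unsigned char *rgb, int numPixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread = one pixel
    if (i < numPixels)
        rgb[3 * i + 0] = 0;                          // replace the R of pixel i with 0
}

int main()
{
    const int w = 200, h = 100, n = w * h;           // 20,000 pixels
    unsigned char *img;
    cudaMallocManaged(&img, 3 * n);                  // unified memory, keeps the demo short
    for (int i = 0; i < 3 * n; ++i) img[i] = 255;    // start with an all-white image

    zeroRed<<<(n + 255) / 256, 256>>>(img, n);       // 20,000 independent little jobs
    cudaDeviceSynchronize();

    printf("pixel 0 = (%d, %d, %d)\n", img[0], img[1], img[2]);   // (0, 255, 255)
    cudaFree(img);
    return 0;
}
```
Every thread does the same trivial thing on its own pixel, which is exactly the kind of problem where more cores help - right up until you run out of pixels.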
Also keep in mind that some problems don’t lend themselves to the parallel/simultaneous solutions that GPU’s are so good at. A very simple example:
Let’s say I need to add 2 + 3 + 16. I can have one core add 2 + 3. But a second core has to wait for that result before it adds the 16. So I can’t do those operations in parallel (simultaneously). It requires a “serial” solution, where you solve one thing, then the next, and so on.
And as I’m finding as I investigate ray tracers in more detail, some of the equations you need to solve with ray tracing look like this (a serial equation to solve whether a ray is hitting an object or not):
B^2 – 4 * A * C
In this case you have to solve 4 things in series before you get the final answer. More cores or more hardware or a bigger beast may not help. So another approach in addition to adding more parallel processors (and one that NVIDIA has taken apparently) is to design the hardware specifically to handle this particular type of equation. So you design the hardware to have the right registers, storage locations, math operators, etc., so that these particular equations are solved very fast and efficiently. Instead of getting a million workers who simply replace a number with “0”, you decide to form a small group of experts to solve this part of the problem. You get some multiplier guys, a subtracter guy, and someone to make sure all the guys are coordinated and have the right values (A, B & C) to work on, and someone to hold the intermediate answers. You form a team called the “RT Cores”.
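If you want to see what that looks like written out, here's a little sketch of the textbook ray/sphere test (my own toy code, not RT core internals). Notice that A, B and C have to exist before the discriminant can be formed, and the discriminant has to exist before you can say hit or miss:
```
// Toy sketch of the ray/sphere hit test discussed above (not actual RT core code).
// Plugging the ray into the sphere equation gives a quadratic whose
// discriminant B^2 - 4AC tells us whether the ray hits the sphere at all.
#include <cstdio>

struct Vec3 { float x, y, z; };

__host__ __device__ float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
__host__ __device__ Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

__host__ __device__ bool hitsSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius)
{
    Vec3  oc = sub(origin, center);
    float A = dot(dir, dir);                  // step 1
    float B = 2.0f * dot(oc, dir);            // step 2 (needs oc first)
    float C = dot(oc, oc) - radius * radius;  // step 3
    float disc = B * B - 4.0f * A * C;        // step 4: can't start until A, B and C exist
    return disc >= 0.0f;                      // real roots => the ray hits the sphere
}

int main()
{
    Vec3 origin = {0.f, 0.f, 0.f}, dir = {0.f, 0.f, 1.f}, center = {0.f, 0.f, 5.f};
    printf("hit: %s\n", hitsSphere(origin, dir, center, 1.0f) ? "yes" : "no");  // "hit: yes"
    return 0;
}
```
Each line depends on the one before it, so throwing thousands more general-purpose cores at this one ray doesn't make it finish any sooner; dedicated hardware that walks this chain quickly does.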
And keep in mind that’s just for what’s called the “primary ray”, from the camera to the object. Then (in series) you need to do more calculations after that for the secondary (shadow) ray. But you can’t do the second without first calculating the primary ray. And more cores probably ain’t gonna help.
It's complicated.
Well, the new Turing is designed for raytracing, and it would seem Iray should get a big benefit from it. According to the information we have, the Turing series is a monster for rendering, and I'm guessing they plan on Iray gaining ground with it. It has the advantage of native support - if Nvidia gets more software to use Iray & offers performance enhancements with it, they have their customers locked to their hardware.
While it's cool for gaming, I find it hard to believe Nvidia would think it's going to be mainstream for a while - RTX starts at the 2070 with a $600 price tag. Consoles AFAIK don't have the tech either.
It's not that complicated. Performance will either increase enough to justify an upgrade or it won't. We will see.
O.M.G!!!
I truly love what I am hearing. Like Bluejuante, I think I'm getting revved up by the hype.
As if I didn't already have enough reasons to love Octane. However, even with this huge leap in RTX performance, one cannot help but question whether being an early adopter is very worthwhile. Will we see cards released within the next couple of months/years that will exceed the performance of these first generation cards by huge leaps? Because if so, then it's hard to justify buying in now only to have your card outperformed in under 6 months.
So I guess the question is twofold.
1. We can surely expect improvements in RAM and software utilization of RTX and Tensor tech, but what about basic numbers like core counts and clock speeds? Is there much room to improve on the current RTX hardware technology itself? Like we've seen with CPU clock speeds that steadily improved until physical limits stalled further clock speed improvements for the past 8 years? Just how far can this go?
2. Will we see anywhere near the same types of performance boost from consumer level cards? Lovely though that Octane bench may have been, it was a Quadro and not a GeForce. Do I really expect a 2080ti to outperform the Pascal 1080ti by 500 - 800 percent?
3. Last question...who in their right mind will buy these non-RTX cards once we all migrate away from the GTX line and need to recoup some cash?
It's exciting, except the prices are putting a real damper on the excitement. The high-end gaming card jumped from $800 to $1200 (1080 Ti to 2080 Ti).
Then there is the fact that the 7nm process is coming soon - will we see a 2nd gen 2xxx series soon? Or new professional Turing chips?
https://wccftech.com/nvidia-7nm-next-gen-gpus-tsmc/
Actually I'm also getting pretty jazzed about the RTX cards. I mean, just look at the list of pros and cons:
RTX 2080ti:
Oh, wait….nevermind.
If the Iray performance is 100x better then I'm pretty sure you won't mind ;)
Anyway, why so negative? We simply don't know yet, let's wait and see what happens.
So...are you guys still convinced that Iray, the render engine created by Nvidia, will not see much of an impact from Nvidia RTX? I believe it will be a pretty substantial increase.
I don't think anyone is saying Iray won't see a substantial impact from RTX. Rather, I think people are saying "we don't know what the impact will be until someone can actually test it in Studio/Iray". Obviously render times will improve over a 1080ti. The question is "how much?" Will they drop from, say, 10 minutes to 9 minutes, or from 10 minutes to 5 minutes, or from 10 minutes to 10 seconds? Nobody has any idea.
Exactly. So, why not just wait and see? Isn't the mere possibility kinda exciting though? Doesn't happen every year that an entirely new piece of tech appears. I think what the Octane guy said sounds pretty great. Even though it was an NVIDIA video, so maybe they're just completely screwing us over, but I don't know why they would do that, knowing that there will be tests and benchmarks left and right soon.
That's what I'm doing...waiting. But I'm also being practical and listing the facts we have before us. And IMO, right now there's not a lot to be excited about other than, well, excitement.
And keep in mind that at that $1,200 price tag the 2080ti would have to, at a minimum, cut render times in half just to be comparable to the 1080ti on price/performance (it costs roughly twice as much, so it needs roughly twice the speed). That's about the same jump as from a GTX-1060 to a 1080ti. That's a lot.
Now maybe it can cut a 33 minute render down to 11 minutes. Maybe at that point I'd start to be interested.
But what about my existing cards? Would I spend $1,200 for a 2080ti and trash the existing 1080ti and 1070 because they don't quite work well with the 2080ti? Who knows? And what about VRAM stacking? Will that ever be possible with a 2080ti?
Too many questions right now for me to be even a little excited. How is that negative?
And by that you mean.... ?
Okay, well if there's no VRAM stacking then the only way I'll drop $1,200 on one of these is if it's so fast that you no longer need a Render button in Studio cuz it renders everything in realtime with the same quality as a real render. Now THAT would be cool.
...well real time rendering is one thing Nvidia's been touting with the new generation. Not good enough though to get me to bite given the price jump and same memory limits as the previous generation.