From Redshift dev:
https://www.redshift3d.com/forums/viewthread/20597/P45/

At some point I’ll prepare a longer post than this, but I just wanted to quickly offer some insight on ray tracing hardware acceleration and ensure that user expectations are reasonable.
Most renderers work by executing the following three basic operations: 1) They generate rays (initially from the camera), 2) They shoot these rays into the scene (i.e. they do the ray tracing), 3) They run shaders at the intersection points of the rays. Shading typically spawns new rays for reflection/refraction/GI/etc. purposes, which means going back to step 1. This 1-2-3 process happens as many times as there are ray bounces.
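To make the loop concrete, here is a minimal, self-contained sketch of that 1-2-3 structure in Python. Every function name and probability in it is made up for illustration; this is not Redshift's (or any real renderer's) API:

```python
import random

# Toy skeleton of the generate -> trace -> shade loop described above.
# All names and numbers are stand-ins; a real renderer intersects real
# geometry and runs material shaders on the GPU.

def generate_camera_ray(pixel):
    # Step 1: ray generation (on RTX hardware this runs on the CUDA cores).
    return {"origin": (0.0, 0.0, 0.0), "direction": pixel}

def trace(ray):
    # Step 2: ray/scene intersection -- the part RT cores accelerate.
    # Pretend roughly 70% of rays hit something.
    return {"point": ray["direction"]} if random.random() < 0.7 else None

def shade(hit):
    # Step 3: shading (CUDA cores again). It may spawn a secondary ray
    # for reflection/refraction/GI, which sends us back around the loop.
    contribution = 0.5
    secondary = {"origin": hit["point"], "direction": hit["point"]} if random.random() < 0.5 else None
    return contribution, secondary

def render_pixel(pixel, max_bounces=4):
    color = 0.0
    ray = generate_camera_ray(pixel)
    for _ in range(max_bounces):  # the 1-2-3 cycle, once per bounce
        hit = trace(ray)
        if hit is None:
            break
        contribution, ray = shade(hit)
        color += contribution
        if ray is None:
            break
    return color

print(render_pixel((0.1, 0.2, 1.0)))
```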
Hardware-accelerated ray tracing primarily speeds up the second step, i.e. the ‘core’ ray tracing. If the renderer uses really simple shading, the ray tracing step becomes the most expensive part of the render. For example, if you use extremely simple shaders that (say) just read a flat texture and return it, you could easily find that the ray tracing step takes 99% of the entire render time and shading just takes 1%. In that case, accelerating ray tracing 10 times means that the frame renders nearly 10 times faster, since ray tracing takes almost all of the time.
Unfortunately, production scenes do not use quite as simple shading as that.
Both we and other pro renderer vendors have found cases where shading takes a considerable chunk of the render time. I remember reading a Pixar paper (or maybe it was a presentation) claiming that their (obviously complicated) shaders were actually taking *more* time than ray tracing! Let’s say that, in such a scenario, shading takes 50% of the entire frame time and tracing takes the other 50% (I intentionally ignore ray generation here). In that scenario, speeding up the ray tracer a hundred million times makes that 50% ray tracing time go away, but you are still left with shading taking the other 50% of the frame! So even though your ray tracer became a hundred million times faster, your entire frame only rendered twice as fast!
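This is just Amdahl's law: when only a fraction of the frame time is accelerated, the rest caps the overall speedup. A quick sketch of the arithmetic, using the 99%/1% example from above and this 50/50 production scenario:

```python
def overall_speedup(traced_fraction, trace_speedup):
    # Amdahl's law: only the traced fraction of the frame time gets faster.
    rest = 1.0 - traced_fraction
    return 1.0 / (rest + traced_fraction / trace_speedup)

# Simple shading: tracing is 99% of the frame, accelerated 10x.
print(overall_speedup(0.99, 10))            # ~9.2x overall

# Production shading: tracing is 50% of the frame. Even a deliberately
# absurd hundred-million-fold tracing speedup only doubles the frame rate.
print(overall_speedup(0.50, 100_000_000))   # ~2.0x overall
```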
All this is to say that when you read claims about a new system making rendering several times faster, you have to ask yourself: was this with simple shading, like the kind you see in videogames? Or was it in a scene which (for whatever reason) was spending a lot of time tracing and not shading?
In more technical terms: the RT cores accelerate ray tracing, while the CUDA cores accelerate shading and ray generation. The RT hardware cannot do volume rendering and, I think, no hair/curve tracing either - so these two techniques would probably fall back to the CUDA cores too, which means no benefit from the RT hardware there.
All this is not to say that we’re not excited to see developments on the ray tracing front! On the contrary! But, at the same time, we wanted to ensure that everyone has a clear idea of what they’ll be getting when ray tracing hardware (and the necessary software support) arrives. We have, as explained in other forum posts, already started supporting it by re-architecting certain parts of Redshift. In fact, this is something we’ve been doing quietly (for RS 3.0) during the last few months, in between other tasks. Hopefully, not too long from now, we’ll get this all working and will have some performance figures to share with you.
Thanks
Also an interesting thought later on:
Thanks for the explanation, Panos. In theory though, if the RT cores can handle some of these tasks, does that not leave the CUDA cores open to do more shading tasks? I don't think anyone here expects a 600% increase in average performance, but overall, if the RT cores can help out even by 20%, that is still a massive leap on top of the other performance enhancements, especially when talking about animations. I am assuming you guys plan on somehow dividing up the work so the RT cores can tackle “some” aspects they're good at while the CUDA cores tackle the rest and the tensor cores can just denoise faster. Either way, I think the cumulative increase is what has people excited mostly. Really looking forward to testing out what you guys make in the future.
Makes sense: if the RT cores take over some work, the CUDA cores can focus on other stuff and also do those tasks faster without having to worry about ray tracing. Dev answer:
Yes, if the RT cores can run completely in parallel with the other tasks, that would (in itself) provide a performance benefit. But the entire render time will still be dominated by the things that the RT cores cannot do, i.e. ray generation, shading, volume tracing, etc. So if these exist and take the same time as ray tracing, then the maximum speedup would be 2x. If they take more time than ray tracing, the speedup will be even less than that. So the final time will heavily depend on the scene and shading complexity.
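A back-of-the-envelope sketch of that ceiling, with made-up timings (time units are arbitrary):

```python
def overlapped_frame_time(trace_time, other_time):
    # If tracing runs fully in parallel with everything else,
    # the frame costs whichever side takes longer.
    return max(trace_time, other_time)

serial = 5.0 + 5.0                                # tracing + (ray gen, shading, volumes, ...)
print(serial / overlapped_frame_time(5.0, 5.0))   # 2.0 -> the 2x ceiling for a 50/50 split

# If the non-RT work dominates (say 8 of 10 time units), overlap helps even less:
print(10.0 / overlapped_frame_time(2.0, 8.0))     # 1.25x
```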
Very interesting post @bluejaunte. Good to see a developer finally telling us what the deal really is all about. If I understood it correctly, it's pretty much what I thought it would be. It's only 8x faster in optimal conditions, but that's marketing ABC here, folks. That being said, it's still good to know that even with complex shaders RT cores still help. If I manage to get a 2x performance increase from RT cores, that would be amazing. I was already assuming that the tensor cores are for AI denoising, so I have to wait and see how good that denoising algorithm really is. I have to admit that I do have some serious doubts about that. Also, it still remains to be seen how good the Turing CUDA cores are for rendering tasks (CUDA cores seem to get better and better in every new architecture), so we probably have to wait a little longer before the final verdict.
As you say, it's marketing. This means we don't know yet.
Interesting posts and points.
But it basically boils down to, we don't yet know. :)
Can't you get a 2x performance increase by just adding a 1080ti with your Titan X?
"... even though your ray tracer became a hundred million times faster, your entire frame only rendered twice as fast!"
Boy, he's no fun at all....
Almost sounds like you're saying twice as fast isn't great
Keep in mind also that this sounded like a worst-case scenario where shading takes 50% of the time. And this is just the RT cores; there are also more CUDA cores, and tensor cores for denoising. Then on top of that, I think it's not unreasonable to assume that the software to make use of this stuff will mature and that renderers themselves will find new ways to exploit these new hardware possibilities.
When was the last time we had such massive potential to boost render speeds?
Twice as fast means a 50% improvement in render times. A 10 minute scene would render in 5 minutes if twice as fast, and that's a 50% improvement.
Like I posted before, the norm between GTX generations has hovered around 33%. And if a 2080ti costs twice as much as a 1080ti, a 50% improvement barely makes up for the difference in cost.
Twice as fast means 100% improvement.
I suggest you ask Mr. Google "how to calculate percent improvement".
You take the difference of the two render times, then divide that by the longer time to get percent improvement. So a 10 minute render becoming a 5 minute render is (10-5)/10 or 50% improvement.
A 100% improvement means it renders instantaneously.
If you were traveling 60 mph, 10% faster would be 66 mph. Right? 50% faster is 90 mph. Guess what 100% faster is.
https://sciencing.com/calculate-improvement-percentage-8588140.html
If you drive 10% faster than 60 mph, how fast are you driving?
You're talking about percent change in raw speed, not percent improvement in results.
If you still don't believe after a science website tells you differently, then try checking some render benchmark sites. They'll say the same thing. Or ask your local 9th grader.
Uh huh, read this:
http://www.legitreviews.com/nvidia-geforce-rtx-2080-performance-50-percent-faster-than-gtx-1080_20747
50% faster = 1.5 times as fast; 100% faster = 2 times as fast (the latter isn't mentioned because "twice as fast" sounds better).
Your math isn't wrong, obviously, but no marketing department will ever say "90% more performance" and mean a 10x speed increase.
Interesting, but this does not include the tensor cores in the equation. We know that they do denoising, that is clear, but that is not all they can do. There has been talk of the tensor cores aiding with tasks like Nvidia HairWorks. I believe the tensor cores have an active role in the ray tracing process, and that they would in fact help step 3, so you actually have 2 of 3 steps getting accelerated. No, I do not have proof of this. And of course, the 2080ti has a lot of CUDA cores to throw at the shading as well, so there will be a boost in pure shading regardless. You get a boost from CUDA + a boost from ray tracing + a boost from the tensor cores doing their thing. With these powers combined, they form the Voltron of rendering.
Back to gaming: ray tracing may not seem like an obvious benefit in competition, but it can be. If you have true reflective surfaces, then there is a better chance for a surface to reflect an oncoming enemy approaching from behind or around a corner. For example, the car that is frequently shown in Battlefield 5 footage would be great for this. If you are behind the car looking at it, there is a chance you may see the reflection of an enemy sneaking up on you or just hiding out somewhere. Or a wet ground surface could alert you to enemies coming around the corner of a building. So in cases like these, a player with RTX enabled could in fact have a legit competitive advantage over a player who does not see the reflections. Another example: let's look at Rainbow Six Siege. In this game players can place traps and things to impede or outright kill intruders. But if a surface is reflective, you might have a chance to see a reflection of a trap that you could not see otherwise. Players can also deploy mobile cameras. A crafty player could place a camera facing a reflection, so that an approaching player cannot directly see the camera. So there are tons of things that could be done with reflections in games thanks to ray tracing.
It is reflections that could really change things. Shadows... I don't think many players have ever cared about shadows, so Nvidia really spent way too long talking about just shadows. That's gaming 101: many people almost instinctively turn down shadows to get better framerates because shadows do not have a real benefit. No, reflections are what could potentially excite players. You can have real mirrors and other really cool reflective surfaces in games, and as I said, reflections could actually give an advantage if there are shiny surfaces around.
And believe it or not, there is a list of video games that have working mirrors:
https://www.giantbomb.com/functional-mirrors/3015-4618/
I can remember thinking how cool it was that there was a mirror in Metal Gear Solid that worked. And then later on Prey had mirrors, and it had really cool portals which was a neat trick in itself.
This is quite interesting. Our friends at OTOY say that they have been told the gaming NVLink is fully functional. The Quadro NVLink bridge may cost hundreds of dollars, but those GPUs also cost thousands, even $10,000, so the link costing over $400 is not at all surprising. I'm not sure if a tweet will show in these forums. Let's see.
"V-Ray GPU was the first render engine to support NVLink. We haven't tested the gaming RTX series yet. I am being told from guys familiar with the topic, that NVLink will work fine on the gaming RTX, so 2 x 2080ti will be 22GB. To be 100% sure, better wait a month or so :)" — Blagovest Taskov (@savage309), August 20, 2018 (https://twitter.com/savage309/status/1031662991622721536)
...still not buying it until I see actual proof, not just what "someone says".
Were that true, it would mean a 4-slot NVLink bridge should allow for 44 GB with four 2080 Tis.
"Twice as fast means a 50% improvement in render times. A 10 minute scene would render in 5 minutes if twice as fast, and that's a 50% improvement."
You're confused about how percentages work. I don't blame you for that, it can be really hard to wrap one's head around it, and many people never manage to do it. I've worked with percentages a lot throughout my career, so to me, your example seems pretty clear-cut.
In your example of going from 10 to 5 minutes, there is a 100% improvement in speed. To put it in different terms, the new speed is 200% of the old speed, making the difference 100%. You said it yourself when you said it's twice as fast. That always, without exception, means a 100% improvement in speed, no matter what numbers you are using. My guess is that what's confusing you is the fact that the total render time becomes 50% of the original render time in this equation. Yeah, depending on which direction you do the calculation, you end up with different percentages.
In other words, doubling rendering speed means your render times are 100% faster than before and take 50% of the time.
If you still disagree with this, then I can only assume that the way you described the problem is incorrect. As long as you're using the phrasing "twice as fast" however, what I wrote above will always be true.
Edit: When these things get too confusing, it can be helpful to use absolute percentages instead of relative ones. In this case, that means the new rendering speed is 200% of the original one, while the new rendering time is 50% of the original one.
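For anyone following along, the arithmetic in this exchange is easy to check (plain Python, no renderer required):

```python
old_time = 10.0  # minutes per frame
new_time = 5.0

old_speed = 1.0 / old_time  # frames per minute
new_speed = 1.0 / new_time

print(new_speed / old_speed)                      # 2.0   -> "twice as fast" (200% of the old speed)
print((new_speed - old_speed) / old_speed * 100)  # 100.0 -> a 100% improvement in speed
print(new_time / old_time * 100)                  # 50.0  -> the new render time is 50% of the old one
```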
Read what I actually said.... You're confusing a percentage raw increase in speed with percentage "relative improvement", which is an inverse of raw increase.
No. Percentage "relative improvement" is 100% when you double something. Not that the phrase "relative improvement" means anything in the first place. An improvement is always relative since it has to improve something that it is then related to. You cannot have an absolute improvement, which would be the counterpart to a relative one. I read what you said multiple times, including trying to see if there was some hidden meaning I might have been missing. Unless you are leaving out important information, you are flat out wrong.
Actually, to make it perfectly clear what's going on, let me make a tiny adjustment to what you originally said so that it becomes correct. It won't take much:
"Twice as fast means 50% of the render time. A 10 minute scene would render in 5 minutes if twice as fast, and that's a 100% improvement."
Is it becoming clearer now? You're trying to use the same percentage to describe two different relations, but it's not the correct way to do this kind of calculation.
Again, if you're talking about relative improvements in render times (NOT speed), you use the difference in render times as a percentage of the initial render time. You're certainly free to define another metric if you want, just don't be surprised if it's not accepted by others.
And do you not trust the website I referenced, which even provides a calculator to show you how to make the calculation?
Speed. Going from 1 render per 10 minutes to 2 renders per 10 minutes. Seems like speed and time are being conflated in this discussion.
- Greg
In my experience, a tool is only as valuable as the skill of the person using it. No offense intended.
I did check that website just now, and now I understand why you're confused. The formula you've been using is intended for moving from one value to a higher value. You've been using it backward. When you calculate 10 minutes versus 5 minutes, what you're ending up with is obviously 50%. But since the formula (and the naming of it) on that site says "improvement", you're expecting the result to mean one thing when it actually means something else. What you just calculated on that site is that 5 is half of 10 (i.e. 50%), not that 5 is a 50% improvement on the value 10. For the formula you've been using to have any meaning, you have to quantify the speed in a way where higher numbers are better. In fact, the result you got was negative 50%, which should tell you everything you need to know.
Let's use some make-believe numbers where this approach would actually work. Imagine that GPU A can do 1,000 calculations per second while GPU B can do 2,000 calculations per second. We can obviously tell at a glance that B is twice as fast as A. The formula tells us that B is a 100% improvement over A. Feel free to calculate this yourself.
If anything, this fascinating argument of ours illuminates how important it is to make sure that one is using the correct tool for the job. :)
Edit: I have to correct myself here. You didn't calculate that 5 is 50% of 10. You calculated that 5 is 50% as good as 10. See, I get confused too. ;)
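And here is the GPU A/B example pushed through the same difference-over-original formula, showing why the direction matters: feed it throughput (higher is better) and you get +100%; feed it render times (lower is better) and the sign flips:

```python
def percent_change(old, new):
    # Difference over the original value, as in the linked calculator.
    return (new - old) / old * 100

# Throughput (higher is better): GPU B doubles GPU A's calculations per second.
print(percent_change(1000, 2000))  # 100.0 -> a 100% improvement

# Render times (lower is better): the sign flips.
print(percent_change(10, 5))       # -50.0 -> time dropped by 50%; not a "50% improvement" in speed
```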
Unlike the Apple of Discord, the GPU of Discord seems able to provoke arguments even before it exists. That's an achievement, but not an admirable achievement. Rather than launch an eleven-year siege, please drop the argument.
Haha, thanks Richard.
Here's what the NVIDIA website says: "The GeForce RTX NVLink™ bridge connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies. This means you can count on super-smooth gameplay at maximum resolutions with ultimate visual fidelity in GeForce RTX 2080 Ti and 2080 graphics cards." And nothing about memory stacking.
Hmm, OK, so the SLI term is here to stay? Guess I had the wrong idea. SLI has such a negative connotation, I would have thought they would replace it. But maybe the underlying tech is still SLI, just much faster with NVLink.
SLI stands for "Scalable Link Interface", which sounds like a rather generic term that can be used in many different contexts.