RTX 2060 to launch Jan 15 for $349
Nvidia officially unveiled the RTX 2060 today. According to Nvidia, it meets or exceeds the 1070 Ti in gaming, and even matches the 1080 in three of their benchmarks. Here are the full specs, along with the other Turing cards for comparison.
| Graphics Card Name | NVIDIA GeForce RTX 2060 | NVIDIA GeForce RTX 2070 | NVIDIA GeForce RTX 2080 | NVIDIA GeForce RTX 2080 Ti |
|---|---|---|---|---|
| GPU Architecture | Turing GPU (TU106) | Turing GPU (TU106) | Turing GPU (TU104) | Turing GPU (TU102) |
| Process | 12nm NFF | 12nm NFF | 12nm NFF | 12nm NFF |
| Die Size | 445mm2 | 445mm2 | 545mm2 | 754mm2 |
| Transistors | 10.6 Billion | 10.6 Billion | 13.6 Billion | 18.6 Billion |
| CUDA Cores | 1920 Cores | 2304 Cores | 2944 Cores | 4352 Cores |
| TMUs/ROPs | 120/48 | 144/64 | 192/64 | 288/96 |
| GigaRays | 5 Giga Rays/s | 6 Giga Rays/s | 8 Giga Rays/s | 10 Giga Rays/s |
| Cache | 4 MB L2 Cache | 4 MB L2 Cache | 4 MB L2 Cache | 6 MB L2 Cache |
| Base Clock | 1365 MHz | 1410 MHz | 1515 MHz | 1350 MHz |
| Boost Clock | 1680 MHz | 1620 MHz (1710 MHz OC) | 1710 MHz (1800 MHz OC) | 1545 MHz (1635 MHz OC) |
| Compute | 6.5 TFLOPs | 7.5 TFLOPs | 10.1 TFLOPs | 13.4 TFLOPs |
| Memory | Up To 6 GB GDDR6 | Up To 8 GB GDDR6 | Up To 8 GB GDDR6 | Up To 11 GB GDDR6 |
| Memory Speed | 14.00 Gbps | 14.00 Gbps | 14.00 Gbps | 14.00 Gbps |
| Memory Interface | 192-bit | 256-bit | 256-bit | 352-bit |
| Memory Bandwidth | 336 GB/s | 448 GB/s | 448 GB/s | 616 GB/s |
| Power Connectors | 8 Pin | 8 Pin | 8+8 Pin | 8+8 Pin |
| TDP | 160W | 185W (Founders) 175W (Reference) | 225W (Founders) 215W (Reference) | 260W (Founders) 250W (Reference) |
| Starting Price | $349 US | $499 US | $699 US | $999 US |
| Price (Founders Edition) | $349 US | $599 US | $799 US | $1,199 US |
| Launch | January 2019 | September 2018 | September 2018 | September 2018 |
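One quick sanity check on the table: memory bandwidth is just the effective memory speed times the bus width. A minimal Python sketch, using only the figures from the table above:

```python
# Memory bandwidth = effective speed (Gbps per pin) * bus width (bits) / 8 bits per byte
cards = {
    "RTX 2060":    (14.0, 192),
    "RTX 2070":    (14.0, 256),
    "RTX 2080":    (14.0, 256),
    "RTX 2080 Ti": (14.0, 352),
}

for name, (gbps, bus_bits) in cards.items():
    print(f"{name}: {gbps * bus_bits / 8:.0f} GB/s")
# -> 336, 448, 448, and 616 GB/s, matching the Memory Bandwidth row
```

So the 2060's 336 GB/s is purely a consequence of the narrower 192-bit bus; the memory chips themselves run at the same 14 Gbps as the rest of the lineup.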
A link to one of many sources covering it:
https://wccftech.com/nvidia-geforce-rtx-2060-official-launch-349-usd/
Similar to the last-gen Pascal 1060, there will be multiple VRAM configurations available, from 3 to 6 GB.
And here's a surprise: no Founder's Edition price increase over the base MSRP.
I think the 2060 is certainly a stronger product than the rest of Turing so far. It is odd, though: if the 2060 approaches 1080 benchmarks, where does that leave the 2070? The 2060 is fairly close to the 2070, and costs at least $150 less, and that gap increases to $250 if you consider the 2070 Founder's Edition.
However, as nice as this all is, it is still a hefty price increase over the 1060. The 1070 Ti can also be purchased for less than the 2060, so that performance had better be real.
It will also be interesting to see how prices of the 1060 through 1080 adjust to this in the new and used markets.
Also, an important note for any Daz users: currently only the Daz 4.11 beta supports Turing cards. So if you upgrade to the 2060 or any other 2000-series card, you will need to use the Daz beta!

Comments
Still 6GB, so of limited interest for us, particularly at that price. If someone is sitting on $350 to buy a GPU right now, they would do better to find a 1070 if they can.
Maybe I am missing the point, but for Iray users the main goal is to have a scene fit onto the GPU so it doesn't fall back to the CPU for rendering, and the GB amounts are going down, not up. Wake me when we get a 2000-series card with 12 GB or more and I'll get excited.
I would like to buy it, mainly for Daz Studio. Is it a good purchase, in your opinion?
To give you some context: currently in Italy (this is important: we don't have your wonderful prices! I'm going to convert to USD for you) you could find for
6GB is definitely an issue for Iray users. But it is doable for some users. The question will be what kind of scenes they do, and more importantly, how large the renders are. I never had a 6GB card, so I cannot comment. I went from 2 to 4 to 11 GB as I went from a 670 to a 970 to a 1080 Ti. FYI, all of these were used when I bought them. The 4GB was certainly better; I was able to get 3 models in a scene with an HDRI background at about 1800 to 2500 pixels square. I used an HDRI because a full 3D background took too long for me, and that is assuming it fit. So you might be able to squeeze a bit more into 6GB, but your mileage will vary. Some of the newer stuff in the store is extremely memory hungry, even for the larger GPUs.
Personally, and this is only my opinion, I would feel more comfortable with 8GB. Nobody has tested the 2060 yet, obviously, so we don't know how fast it is at Iray. I think there's only one 2070 benched in the Iray bench thread, and it was pretty fast, a bit shy of the 1080 Ti but well ahead of the 1080 and 1070 Ti. The 2060 has the same CUDA count as the original 1070, so it will certainly be faster than the 1070. It could be faster than the 1070 Ti and 1080 as well, but we need a test to be sure. I'm sure somebody from here will buy one. The bench thread is in my sig, so stay tuned.
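For anyone trying to guess in advance whether a scene will fit, a rough lower bound is just the uncompressed size of the texture maps, since Iray keeps them on the card. Here is a minimal, purely illustrative Python sketch; the map counts and sizes are assumptions for the example, not measurements from any real product:

```python
# Very rough lower-bound estimate of texture VRAM for a scene.
# Geometry, HD morphs, and the renderer's own working memory all come on top of this.
def texture_mb(width, height, channels=3, bytes_per_channel=1):
    """Uncompressed size of one texture map, in MB."""
    return width * height * channels * bytes_per_channel / (1024 ** 2)

maps_per_figure = 10                      # assumed: diffuse, bump, normal, specular, ... at 4K
per_figure_mb = maps_per_figure * texture_mb(4096, 4096)

figures = 3
hdri_mb = texture_mb(8192, 4096)          # one 8K HDRI environment map

total_gb = (figures * per_figure_mb + hdri_mb) / 1024
print(f"~{total_gb:.1f} GB of texture data before geometry and overhead")
```

With those made-up numbers, a three-figure scene already eats roughly 1.5 GB in textures alone, which is why 4 GB cards force compromises and why 6 GB is workable but tight once you add clothing, hair, and a detailed environment.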
Did you miss the Titan RTX? 24GB for only $2,500! There will probably not be another 2000-series card with more than 12GB.
Awesome, I was hoping it wouldn't be too long before such rendering power in desktop PCs became affordable.
Haha. +1
The Titan isn't a 20-series card, but something else; unless you would also call the RTX cards that are Quadros 20-series cards?
If this gen had been 8GB, 10GB, 12GB, and 16GB respectively, they would have been so much more interesting, but as it is... meh.
Actually, yes, it is a gaming card. Titans have historically been part of the GTX line and even had "GTX" in their names. You can go look up Nvidia's gaming drivers and find all of the Titans listed alongside the gaming cards. You know what you don't see on that list? Quadro and Tesla. Just because the Titan has a ridiculous price does not mean it is not a gaming card. Nvidia released the Titan Z back in the day for $3,000. The Titan RTX is the fastest single gaming card you can buy. This is not true of Quadro; the $10,000 Quadro 8000 does not beat the Titan RTX at gaming.
Umh...interesting, thanks! :)
Let's see in a few months how this leaked "GTX 1160" turns out, and if the prices of the new 1070 are gonna go down significantly!
I don't currently have the money, so it's okay to wait and see how things are gonna evolve! Since I'll do this upgrade mainly for Iray, if 6GB is not enough, maybe I'll have to get a 1070! I can't really spend more than that; this is not my job! ^^
....but it does beat it for CG production and rendering, with twice the VRAM.
Though it is about time the Titan broke the 12 GB barrier. At almost twice the price of a 2080Ti, it better offer something more than one extra GB of VRAM.
Good news!
I plan to buy it for my new computer in July.
Looks attractive for gamers on a budget; for iRay, though, a max of 6GB can still be too limiting. These days I wouldn't want to run with less than 8GB if using iRay. It would not be such a big deal if Nvidia would implement out-of-core rendering functionality in iRay, but given iRay can already dump to the CPU if VRAM is exceeded, it seems to be a low priority for Nvidia. Out-of-core rendering in both Redshift and Octane is still light years faster than CPU rendering in iRay. Assuming, of course, that you don't have some crazy dual Xeon system.
True about the VRAM.
Cost-wise, I think it looks about right for the performance it will have.
When I built my current machine, I spent about what the Titan RTX costs (maybe a little less) on multiple GPUs. I am fairly sure one Titan RTX is going to comfortably outperform my current setup, particularly once renderers are updated to support the RT/tensor cores. And to be honest, given the cooling headache that comes with having a case crammed with GPUs, I'd prefer to have one GPU with equal or better performance over three less powerful GPUs. Or really, I'd have the Titan RTX and one smaller card to run my displays.
What does surprise me is that they are still sticking with GDDR6.
...indeed, with the 2019 release of Octane 4, that 6 GB would not be so much of a limiting factor. Out-of-core rendering uses system memory when VRAM runs out but does not dump the entire process to the CPU like Iray does. There will also be a free subscription track along with the one for $20 a month (the free version restricts you to only 2 GPUs; not sure if it includes access to the Daz plugin or not). Given the price increase on the 20xx series, and dwindling supplies of higher-end 10xx cards, that may be an alternative for some.
As for CPUs, look out: AMD will be releasing the Ryzen 3000 series that tops out at 16 cores/32 threads for $499 (3.9 GHz/4.7 GHz Turbo) and $599 (4.3 GHz/5.1 GHz Turbo). The next-generation Threadripper series will include a whopping 32-core/64-thread CPU at 3.0 GHz/4.2 GHz Turbo, though a bit expensive at $1,800.
I finally managed to upgrade my very respectable GTX 980 Ti and got the NVIDIA GeForce RTX 2080. Still putting it through its paces, but it is certainly an UPGRADE! I wanted the 2080 Ti, but the store was out, and their online store was out and had perhaps discontinued that particular model, so I finally just bought the last 2080 they had in the store. I have been watching for an upgrade price I could handle for almost two years now. Glad I waited; wish I could have gotten the 2080 Ti, but I am lovin' the 2080 Founder's Edition so far! (I can still return it if I find something better in the next week...)
Yes, of course, and that is what the Quadro is made for, along with special support for things like CAD, which the Titan RTX cannot do so well, though it does do better than previous GTX cards thanks to improved async compute. But this improved async compute is more of a necessity for DirectX 12 and ray tracing, which is why it is in the RTX. Still, the 2000-series cards are gaming cards first, and this cannot be forgotten, and that even includes the Titan. Quadro is not for gaming. It can game if you really want it to, but the cards are clocked slower than gaming cards and you will not see the frame rates that the same chip delivers in a gaming card. The Titan RTX absolutely can be for gaming and has the drivers to prove it. It can do some other things, but again it lacks CAD support. I do not doubt that the Titan RTX is targeted at people in VFX on a "budget". It has as much VRAM as a low-end Quadro. However, there are gamers who buy Titans, even insanely expensive Titans like the V and RTX, just to say they have the fastest gaming card on the planet. Obviously this is a small group, but this group does exist, and you will not see gamers buying any Quadro for that bragging right.
If you're looking for gaming cards to get more VRAM, then you want to see games push VRAM harder so that they need more. It is a chicken-and-egg situation. At this time, games are not pushing VRAM that hard, in part because few GPUs have much to offer. And I see this as a major roadblock for gaming. The lack of VRAM is hurting game design, as games target low-end products first. While the "open world" game has become a thing, and the seamless open world (a game that has an open world with no loading times) has become popular, these games are still restricted by VRAM. They stream data on the fly to fill in information as you move, but you still get pop-up. You will see trees and plants suddenly pop into existence out of nothing. More VRAM would allow games to keep more data active to prevent this.

There are many other instances where games are restricted. You might have a game that throws a bunch of things on the screen at once, like a Musou-style game (Dynasty Warriors and similar). If you ever look at Dynasty Warriors in action, you will see that most of the people on screen are clones. Some of this can be a lack of time dedicated to designing more varied enemies, but a lot of it is for performance. Just like using instances in Daz can save VRAM, video games reuse textures all the time. Anybody who sits there and says "well, video games don't need or use a lot of VRAM" is full of it, because it is purely by necessity that games do that. The next wave of consoles may provide a big leap. If the next consoles ship with a lot more VRAM, then PC gaming will quickly follow, and Nvidia and AMD will respond with larger cards.
Look at the past.
2005: The Xbox 360 launches. It has exactly 512 MB of total memory, though not all of it could be VRAM. The PS3 would launch the next year and match that, though only 256 MB of it was VRAM. The best GPUs of 2005 mostly had 256 MB. Want a fun fact? A new technology had just arrived on PC: PCI Express came out in 2004 and was just starting to take hold.
These consoles hung around for a VERY long time. Perhaps too long. This had far-reaching effects. The PC gaming industry was struggling a bit when this era started, and so many studios shifted focus to consoles. Remember, though, both consoles still only had 256 to 512 MB of memory.
2013: The Xbox One launches. And this time the PS4 followed in the very same month. Both systems had 8 GB of system memory; HOWEVER, only about 4 GB could be used for games as VRAM, though this number was slightly flexible. The Xbox would release updates to allow the system to use more, and I believe Sony did the same. Yes, the PS4 has 8 GB of GDDR5, but it still has an operating system, and only about 5.5 GB can be allocated to a game. Now look at PC GPUs of this time. Most GPUs were still only 2 GB. Yes, there were 4 GB variants of the 670 and 680, but these were more rare. Remember, the PS4 and One only used about 4GB of VRAM at launch. I bought a 670 2 GB myself because at the time I felt 2 GB was all I'd need (this was long before I knew what this Daz 3D thing was). Many mid-tier GPUs had 1 GB or even just 512 MB...notice that this matches the aging previous-generation consoles! This is NOT a coincidence! However, after the new consoles launched, the 900 series, Maxwell, would be more generous with VRAM. The 970 and 980 came with 4GB standard, the 980 Ti had 6GB, and the new Titan would have 12GB.
2016: The PS4 Pro releases. The system still has 8 GB of GDDR5, but now also adds 1 GB of DDR3 for system resources, allowing much more of the GDDR5 to be used for games, though I am not certain exactly what that number is. Nvidia Pascal also released in 2016, a few months before the consoles. The 1000 series made huge leaps in performance and VRAM. Most of the tiers got upgrades. The 1070 and 1080 have 8 GB, and the 1080 Ti has 11. The 1060 is a funny one, with variants at 3, 5, and 6 GB. Pascal was released first, and you might think that consoles are just catching up, but you would be wrong. Consoles are in development for years, so Nvidia and AMD both know what is coming at least months in advance.
2017: The Xbox One X (Scorpio) releases. This unit has 12 GB of GDDR5 in total, of which 9 can be used for games. However, the X is far, far behind the PS4 in sales.
2018: Turing launches, but every tier has the exact same amount of memory as its Pascal predecessor. The Xbox Scorpio has 9 GB usable as VRAM, which is pretty close to the 8 GB Nvidia offers in the 2070 and 2080. It's also important to note that GDDR6 is reportedly around 70% more expensive than GDDR5, which may also have played a part in Nvidia's hesitance to use more.
As it stands, we may not see a new console until 2020. 2019 is possible, but with a huge lead, Sony has no need, and MS just released the X, so they likely won't either. Any specs at this point are mostly speculation, though we do know that AMD will power both MS and Sony. However, I do think it is safe to say that we will see more VRAM in these new consoles. 12 GB, possibly even 16, if we are very lucky. And if consoles are using 12GB of VRAM, then PC games will, too!
So you heard it from me, Outrider42; this is my prediction: the next GPU launch in 2020 will offer some VRAM upgrades. The 3070 and 3080, or whatever Nvidia wants to call them, will have more than 8GB, and the 3080 Ti will have more than 11GB. How much I cannot say, but after the troublesome Turing launch, I believe the 3000 series will serve as a needed rebound for Nvidia.
Octane had out-of-core textures long before Octane 4 was released, so it could store textures in system RAM, leaving geometry for VRAM. The release of Octane 4 expanded out-of-core functionality to include the ability to store geometry in system RAM too. That basically removes any limitation on scene size/density, etc. Using out-of-core comes at the cost of some render speed, but it is still loads faster than CPU renders.
...true, I became interested in Octane when I heard about the subscription track as well as the advances with 4. I don't have nearly $600 to plunk down in one lump for a perpetual licence and the plugin. However, $20 a month is much easier to budget for on my income (it would take me almost three years putting $20 a month away to get a standalone licence and the plugin, and that isn't taking into account any price increases).
Iray is primarily targeted towards architectural and product promo rendering, not characters, which is why vehicles, props, and surroundings in scenes can look so much better than the characters. Octane has a more robust materials and shader system than Iray in Daz does. Yes, it takes more time to work with, but the results are much better. Again, even with everything going to memory, it is still much faster than when Iray dumps to the CPU (which in my case is an old Xeon 6-core/12-thread), and I don't need to find a way to scrape up $2,500 for a Titan RTX to ensure every scene I create stays in VRAM, or make other compromises.
...Outrider42, good information there.
As I am not into gaming, frame rate means nothing to me. Yeah, I may be a pariah here, but I would rather spend my time creating art instead of just amassing arbitrary, meaningless numbers. Most games are pretty much combat-related, which is kind of boring to me. About the only "game-related" software that would interest me are simulators, but you need a lot of expensive peripherals to really become immersed in the experience. Flying a TU-114, Pitts Special, or Concorde, as well as driving a Lotus BRM (in the Monaco Grand Prix), 427 Cobra (at Laguna Seca), or cafe racer (in the Isle of Man TT) from a keyboard or a gaming controller just doesn't do it. Loved Gran Turismo on the PlayStation 2.
When I built my current system 6 years ago, we pretty much still only had CPU render engines (3DL, Carrara, Bryce, Vue, and Firefly). So I just got a 1 GB card to drive the single display I had. CPU core/thread count, clock speed, and memory (particularly multi-channel memory) were much more important. I dabbled with Reality 2.5 and then 4.0 but found it glacially slow (as well as the new interface buggy). However, I could stop and resume render jobs, so when I was done working on a new project, ready to sleep, or heading out for the day, I'd resume the render job.

When the Daz 4.8 update was released with Iray, at first it was like Christmas: a spanking brand new render engine built into Daz that offered photoreal quality like the big expensive pro software used. The novelty eventually wore off, though, when I came to the realisation that it was better for GPU rendering than CPU rendering (render times often were longer than 3DL with UE, Carrara (which could produce some very lifelike results), or even Bryce sometimes). The unfortunate part was, first, the letdown with the 980 Ti, as all the pre-release hype pointed to a doubling of the VRAM to 8 GB yet it ended up with only 6 (even then I saw 8 GB as being more optimal for the "epic" type scenes I created). When the new high-VRAM Pascal GPUs came out I began saving for a 1070 (which was $200 less than a 980 Ti), but then prices became stupid expensive thanks to not only the cryptomining rush but a shortage of memory chips. I finally gave up as $800-$900 was way out of my budget.
Anyway, if I had the funds, yes, a Titan RTX would be perfect for my needs. But I don't. I'm content enough to have a Titan-X (Maxwell) I came into, which is dedicated totally to rendering (using a 2 GB 560 Ti for driving the displays).
Will be interesting when I get all my stuff rebuilt after the HDD crash and open an Octane subscription to compare the two engines directly.
Also been playing off and on with the latest Blender 2.8 Beta. I must say (I know this may come as a shock to some) I do like what they've done to the workspace (I can finally move around and use tools without having to remember keyboard commands, and the tab setup is a bit reminiscent of Carrara). It will also be interesting to try out Eevee with that Titan-X (I will have to hook up the displays to it for that).
I have to admit I do not understand the advantages of the new Nvidia video cards, so please excuse what must be a question that displays my ignorance.
I do understand that the number of CUDA cores is very important in producing Iray images. So please tell me, from that perspective, would not a rig with 2x 1070 Ti cards be more effective than the lower-end new cards from Nvidia? I ask because with those 1070 Ti cards you would have almost 5000 CUDA cores.
What do the new features of the RTX cards add to Iray rendering performance and capacity?
Thanks.
It is not just CUDA cores that drive performance during a render. Clock speed is also a factor, and I believe one or two other things as well, like firmware, etc. There are some benchmark results around that show two cards of different generations with the same or close to the same CUDA cores performing significantly differently (a rough back-of-the-envelope comparison is sketched after this post).
At the present time, the RTX cards do not add anything more to rendering performance than the usual improvements in CUDA count, clock speed, etc. They do, however, have 'RT' cores, which are new and designed specifically for ray-tracing tasks like rendering. Once the render engines are updated to support and use those RT cores, then, in theory, they will unlock major speed improvements.
The team behind OctaneRender (and I am sure every other GPU renderer) is working hard to implement support for the RT cores, and the Octane team has posted some early benchmarks from their internal testing of an experimental version that supports the RT cores, which show a 35% performance boost compared to not using the RT cores. That is a pretty significant increase, especially given that it is an early, unrefined version, and it is likely to be more by the time it makes it to a stable release.
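To put a rough number on the cores-versus-clock point above, peak FP32 throughput scales as cores × 2 (one fused multiply-add per clock) × clock speed. A minimal Python sketch using published reference boost clocks; this is only a paper figure, since architecture and software support shift real render times considerably:

```python
# Peak FP32 throughput (TFLOPs) ~= CUDA cores * 2 ops/clock (FMA) * boost clock in GHz / 1000
cards = {
    "GTX 980  (2048 cores @ 1216 MHz)": (2048, 1.216),
    "GTX 1070 (1920 cores @ 1683 MHz)": (1920, 1.683),
    "RTX 2060 (1920 cores @ 1680 MHz)": (1920, 1.680),
}

for name, (cores, boost_ghz) in cards.items():
    print(f"{name}: {cores * 2 * boost_ghz / 1000:.1f} TFLOPs")
# The 980 has more cores than the 1070 yet trails it on paper because of its lower clock,
# while the 2060 matches the 1070 on paper, so any real-world gap comes from Turing's
# architectural changes (and, eventually, the RT cores).
```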
Thanks, Joseft.
That adds a lot.
R
I'm surprised nobody has mentioned rendering speed here. You've all been focused on RAM. The advantage of buying an RTX card, even a 2060, is that once the RT cores get involved (when NVIDIA releases the next iteration of OptiX), you'll be getting a 6x-10x boost in performance. That's HUGE, especially if you make animations. Remember that iRay doesn't yet make use of RT. It's all CUDA based.
Actually I've said that too! ^^
Yes, let's hope they're going to implement the use of tensor cores too...but I've noticed on Nvidia's website that they're not distributing Iray themselves anymore...maybe they don't think it's important :'(
That keeps getting said, but it's not true. iRay remains under very active development by Nvidia.
We will?
Please point me towards the data to back up such a claim. (Yes, I've seen posts suggesting there might be an improvement; your post states it as unequivocal fact.) And it must be said, I hope you are correct; I'll hold off purchasing a graphics card until they come out of the equivalent of a software beta.
Nvidia made lots of claims on release, which have yet to be fulfilled all these months later.
What has been said, and what seems to confuse people, is that nVidia is no longer developing the application plug-ins that it used to produce (for 3D Studio Max, Maya, etc.). Those are the equivalent of the Daz Studio plug-in, for which Daz has always been responsible; they handle the connection between the application and the Iray render engine itself (which is developed by nVidia).
...all that speed is moot if a scene and render process exceeds the card's VRAM. 6 GB is the low end for GPU rendering. Fine for portraits and relatively simple scenes, but throw in several characters with clothing, hair, and HD morphs, along with transmaps, reflectivity, and emissive lights, and the process could likely dump to the CPU.
Not if you handle the scene and optimise it properly. I had a 4GB card and managed to do quite well with it.
...however "in-render" optimisations (like using the Scene Optimiser) do compromise on final render quality, particularly at larger resolution sizes. Manually doing so in a very involved scene can become a case of diminishing returns time-wise.
My railway station scene with all the characters (8), effects, and emissive lighting involved, at 2,000 x 1,500 resolution and high quality setting, would definitely dump to the CPU with only 4 or even 6 GB (and that isn't my most complex work).
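To put rough numbers on that optimisation trade-off: texture memory scales with the square of the map resolution, so halving a map's size frees about three quarters of its footprint, which is exactly the saving (and the texel-density loss) that tools like Scene Optimiser trade on. A small, purely illustrative Python sketch, assuming uncompressed 8-bit RGB maps:

```python
# Texture memory scales with the square of resolution, so each halving frees ~75% per map.
BYTES_PER_TEXEL = 3  # assumed: 8-bit RGB, uncompressed

for size in (4096, 2048, 1024):
    mb = size * size * BYTES_PER_TEXEL / (1024 ** 2)
    print(f"{size} x {size}: {mb:.0f} MB per map (uncompressed)")
# 4096 -> 48 MB, 2048 -> 12 MB, 1024 -> 3 MB: big VRAM savings, but the reduced texel
# density is what shows up as softness at larger render resolutions.
```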