2060 6GB Now Officially $300 US
The lowest-priced card in the RTX lineup has officially dropped from $350 to $300 US. This card originally launched just over a year ago, on January 15, 2019. Meanwhile the 2060 Super (which has 8GB) is still $400 US. This makes me wonder if the 2060 Super will see a drop any time soon, as $100 is a rather large gap between the two cards.
EVGA released the 2060 KO, which is basically a 2060 6GB, but it is actually a different chip and a different board from the original 2060. This card is also $300, but has a $20 rebate, making it $280 for now. The 2060 KO uses the TU104 die, while the 2060 uses the TU106 die. These dies have different specs, as the TU104 is what the 2070 was based on. What Nvidia has done is disable more cores on the TU104 so it matches the regular 2060's spec of 1920 CUDA cores, and it matches the clock speeds as well. Then the chip is paired with the same board design used for the 1660ti. MADNESS! The result is a card that performs functionally the same as the original 2060, even though it uses different parts. It is likely that the KO is using chips that did not meet the requirements for a 2070, i.e., defective 2070s.
A review of the 2060 KO here.
https://www.techpowerup.com/review/evga-geforce-rtx-2060-ko/
But there is a difference: the temps and fan speeds on the 2060 KO are higher than on the 2060. This is because the TU104 die is larger than the TU106. The actual temperature will be around 74C, but to maintain that temp the fans have to spin HARD. That is the only real negative; otherwise the KO performs just fine. The KO is factory overclocked, so it can actually perform a little faster than the 2060. Not to be confused with the 2060 Super, which is a totally different and more powerful GPU. (I rather hate these silly naming schemes.) If I had a choice between the 2060 and the 2060 KO, I would personally go with the 2060 (non-KO) since it is more efficient and will be quieter.
Anyway, this is what good competition can do. AMD is releasing their 5600XT soon, and here Nvidia drops the price on their 2060 just before it launches. That is why I am excited for the next generation, as AMD is releasing its "big Navi" sometime this year. This card could be a big factor in how Nvidia prices its next lineup. 2020 is going to be a fun year for PC hardware.

Comments
The Zotac nVidia RTX 2070 Mini with 8GB RAM was $399.99 in November on Amazon.com. It was about three weeks before the seller's supply ran out, and then the next best price was about $454 or something like that.
Things look very promising for getting a relatively good price on a real-time ray tracing nVidia GPU with 8+ GB RAM by this autumn, that's for sure.
Oh, I bought an AMD Radeon RX 570 MK2 8GB OC on Amazon for $132 during Cyber Monday week, and while it can't do Iray it handles Blender, Eevee, Cycles, and AMD's own ProRender ray tracer just fine. True, I got an exceptional price, and they quickly went up to $200 as it got closer to Christmas, but I think they are back down to $149 now.
Supposedly AMD is going to have real-time ray tracing competition ready against nVidia in 2020, and they do seem to offer more RAM at cheaper price points than nVidia. Just as Intel was forced to drastically change its overhyped, overpriced strategy, so probably will nVidia. They've already started, but I can't call their efforts serious yet.
It was sort of interesting building a PC for the first time in 17 years and watching prices fluctuate so wildly for absolutely no reason except unrealistic speculation. I don't think such pricing games were played 17 years ago.
After a power supply upgrade (only 450 watts now, but way more than enough for one GPU given the reduced power these new AMD components use), my motherboard is supposed to be able to run dual video cards, so it's likely I'll run an AMD card in one slot and the newest nVidia, or at least the Zotac 2070 8GB Mini, in the other PCIe x16 slot. The nVidia will go in the main PCIe x16 slot; even though both slots are x16, I think one is considered primary and the other secondary. I'll have to read up on that.
I've been reluctant to pay the Nvidia card tax, and have been waiting to see what happens - and if getting studio scenes or items into Blender becomes more straightforward; it still sucks.
Are you using Diffeomorphic? I was actually planning on trying to get decently competent at getting DAZ scenes into Blender between now and when the relevant 2020 nVidia & AMD GPUs become available. Who knows, I could save the price of an nVidia GPU if that turns out to be a reasonable amount of work.
AMD will indeed have ray tracing this year. Both of the new consoles will have ray tracing abilities, and AMD is building them both. I totally agree that AMD's competition will affect Nvidia's prices; rather, I am counting on it. They have proven it over and over. We got the whole Super series in 2019 at the same time the 5700 and 5700XT came along. Now this year the 2060 drops in price just before the 5600 launches. I don't think those are coincidences. Nvidia will move along with AMD.
Current rumors are saying AMD actually has a GPU that can beat the 2080ti. That is the Big Navi I was talking about. If this happens and AMD prices it right, oh man, will all hell break loose, LOL. Even if Nvidia turns around and smashes it with a 3080ti, they will still have to counter AMD's pricing. And that would benefit us all. For me, I really want whatever the 3080ti is going to be, because I really do believe it will be a monster. Turing is only the first generation of ray tracing, so it is quite logical that gen 2 will be a big leap in ray tracing performance. Sources say that Nvidia is literally doubling down on ray tracing, as they may offer double the ray tracing cores of Turing. That alone would be a pretty big deal, especially for Iray. We can already see just how well Turing does with Iray in the bench thread, where a 2060 can match a 1080ti, which is just bonkers. I think Daz Iray users are going to really love the next gen Nvidia. They'll love it even more if Nvidia starts increasing VRAM. Next gen consoles will also have a lot of VRAM, and like I said before, high end PC gamers would be quite irritated if a console had more VRAM than their brand new high end GPUs. It would seriously hurt their pride, LOL.
BTW, we don't actually know what they will call it. Not only is the 3000 number unconfirmed (though likely), it might not be called "Ampere", either. It turns out there is a tech company called Ampere, so Nvidia may avoid the name for that reason. They might go with "Hopper" instead. And if you recall, the name "Turing" was a bit of a surprise; some were thinking it was going to be called something else, and it wasn't until nearly the last minute that Turing got leaked as a possible name.
Here's what I'm hearing...
Blah, blah, blah, blah, blah and blah... $300.
Seriously, I like to read the techie talk. I don't understand a lot of it, but I do understand the value of a bargain. When a thread like this goes up to highlight a price change in computer hardware, after my head stops spinning, it's nice to know that prices for current tech are getting more approachable. I'd be looking for 8GB minimum to replace the 1070ti (which cost more than $400).
Thanks all.
My big hesitation in spending $400 on an nVidia RTX 2070 GPU with 8GB/11GB/12GB RAM is that I've always CPU rendered and so haven't had RAM problems for a while. Now, though, I actually had to build a new desktop with 32GB system RAM because 16GB was not big enough anymore for a DAZ scene I made. That means I can't render that scene on an nVidia GPU either, unless it has 32GB, or rather, something more than the 16GB the scene wouldn't fit in. Forget it, I'm not paying that sort of money. That means I have to render multiple images and composite; not hard, but not as enjoyable or fast either, plus I need image editing SW to do that as well.
Pricing is certainly a factor, but I hope AMD's offering this year will also affect the performance of an upcoming 3080ti. Usually we get an increase of 30-70% overall performance over the last generation. That's a bad thing for those who already have a 2080ti, and a bad argument for paying 1000+ for a 30-70% gain. If AMD's new flagship beats the 2080ti and is priced lower than the 1000-1200 Nvidia asks for the 2080ti, then Nvidia will adjust the price, sure. What I also hope to see is that they will realize they can't offer a flagship card for 1000 that's only 30-70% faster than the former generation, and especially that they can't charge a premium price if the card is only marginally faster than AMD's and AMD is in the 800-1000 range (here's hoping).
Nvidia has a major stake in winning the gamers willing to spend on premium hardware. They have to convince the 2080ti users to upgrade. I can't imagine this is irrelevant to them if they have competition. They can still price their flagship cards high but then they have to offer performance. Which is what I hope we will see when the 3080ti rolls around. I already have someone who wants to inherit my 2080ti card. I'm only willing to upgrade if the performance gain seems right to me though.
Please forgive the n00b questions, but I only started looking into Nvidia cards just before Christmas. Does anyone know what the cheapest Nvidia card that can do Iray rendering is? Is it true that Daz/Iray needs a Graphics card of (at least) 8GB?
No, that isn't true. The cheapest current generation card is the GTX 1650. The cheapest card overall depends on what you can find used. I personally would try to stay at or above 6GB as a minimum, so the 6GB 1060 would be the best used card, if you can find one for a good price.
There is a benchmark thread where you can compare a wide variety of cards to each other.
https://www.daz3d.com/forums/discussion/341041/daz-studio-iray-rendering-hardware-benchmarking/p1
This bench (and any bench) only factors in rendering speed, not how much VRAM a card has. How much VRAM you need is largely dependent on you, and there is no easy way to make a general recommendation. HOWEVER, I would suggest trying to get 8GB because of how demanding Iray is, and because many Daz products just eat VRAM for breakfast. Some clothing or hair items really are VRAM hogs. But that is just me. You do not need 8GB unless, well, you actually need that much. Some people get by with a lot less. VRAM is vital because it is your absolute capacity: if you exceed the VRAM, the GPU will not run at all, meaning it is a paperweight for that render. If you can delete stuff or optimize the scene, then maybe you can get back under your limit and render.
So... I like recommending the 2060 Super because it offers 8GB and has RT cores. But if you wish to go real, real cheap, then you can buy a used card instead. There you may find a last generation 1070 or 1080, which each have 8GB. But these lack the awesome RT cores. They will work, just not as fast as the 2060 Super.
If you only render 1 or 2 characters in a scene, and the scene has a fairly simple background, then you can use a lot less VRAM, maybe even 4GB. Still, you could be pushing it depending on how large a render you want and which products you use.
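To put some rough numbers on the "VRAM hogs" point, here is a back-of-the-envelope sketch. The 4-bytes-per-texel figure and the mipmap overhead are generic GPU rules of thumb, not anything Iray-specific, so treat the output as a ballpark only:

```python
# Back-of-the-envelope texture VRAM estimate. Assumes uncompressed RGBA8
# (4 bytes per texel) and ~1/3 extra for a full mipmap chain -- generic
# rules of thumb, not Iray internals.
def texture_vram_mb(width, height, bytes_per_texel=4, mipmaps=True):
    size = width * height * bytes_per_texel
    if mipmaps:
        size *= 4 / 3  # a full mip chain adds roughly one third on top
    return size / (1024 ** 2)

one_map = texture_vram_mb(4096, 4096)
print(f"One 4K map:       {one_map:6.1f} MB")       # ~85 MB
# A single hair or clothing product often ships several 4K maps
# (diffuse, normal, roughness, translucency...):
print(f"Four 4K maps:     {4 * one_map:6.1f} MB")   # ~341 MB
print(f"Three such items: {12 * one_map:6.1f} MB")  # ~1 GB, before geometry
```

Which is how a couple of characters with fancy hair can blow past 4GB before you've even added a background set.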
The absolute cheapest card that works with Iray might be a GTX 650, which has 1 GB of VRAM and 384 CUDA cores, from the year 2012. You should be able to find one for less than $50. However, I would advise staying away from this card as much as you can. It might work, but with just 1 GB of VRAM you may as well be using the CPU instead. I am not sure anything will fit into that small amount of VRAM. And it would render so slow that, again, you might just be better off using a freakin' CPU to render. It's that bad. There is also the possibility that Daz Iray will stop supporting this GPU with its next version of Iray. All 600 and 700 series cards are marked for end of support soon in Iray's release notes, so they could literally be killed off at any moment. So I would avoid anything below the 900 series. (Technically the 750ti is a Maxwell based card and thus not part of the list of Kepler 600-700 series cards that will be losing support. However, the 750ti is so old and has only 2GB of VRAM that it's not worth touching. Just ask Richard; he had a 750ti until recently.)
So, well, you get what you pay for. You could maybe get a 4GB 970 or 980 on the cheap. I'm seeing 970's for just under $100 now. That is not so bad. I used to have a 970, so I know what it can do. I see a 980 for $130. But 1070's are now dropping below $200 into the $180 range. Those would be better. Many of these are listed in the bench thread, so you can compare them directly.
Hmm. So I could have bought eight 2060s for what I just paid for my new RTX Titan 24.
And I don't care! 24 GB VRAM!!! Bwa ha ha ha ha ha ha!!!!
Yes, no doubt it just feels better to have an AMD CPU and GPU together, so the massive popularity of the Ryzen CPUs has to have boosted their GPUs tremendously. It's what swayed me, for sure. And now Intel is going to get into discrete GPUs.
Pretty sure the cheapest current model is the GeForce 1660 at about $200 USD. That's on Amazon.com. It has 6GB RAM, which I'd say is enough for most scenes most people make, which seem to be Iray portraiture.
https://www.amazon.com/ZOTAC-GeForce-192-bit-Graphics-Zt-T16600K-10M/dp/B07XV7FNR2/
You can search at PCPartPicker.com for cheaper ones. Sometimes there are and sometimes not. And electronics/computers are generally very safe to buy as returns from Amazon Warehouse because you get a 30-day return on all the electronic/computer items. I've ordered that way. It's only worth it, though, if they give you $20 or more in savings for taking the return off their hands.
The 2060 KO is a card anyone on a budget should look at.
Gamers Nexus just published a review with a card teardown and production benchmarks.
It strongly appears that Nvidia did not lock out all of the additional hardware on the TU104 chip. This thing is up to 25% faster at CUDA rendering in Cycles than the original 2060, and that carries through into SpecViewPerf's entire suite of tests.
Ha ha ha, yeah! I agree, too much money for something not worth that much. Everyone is trying to get rich overnight. No one wants to give value for money, so I agree: blah blah blah, $300. I waited and bought a used Titan X, and it was cheap because everyone was jumping on all the new stuff and wanted to get rid of perfectly good hardware. Hahaha.
I doubt this is some scheme by Nvidia, beyond the price drop. Nvidia is selling these chips, which clearly failed validation as 2080's or 2070 Super's, as 2060 chips so as not to wind up throwing them away. But that decision was made months ago. They offered these chips for sale; EVGA and maybe other partners bought them and built 2060's with them. Since they are physically larger than the TU106 chips that went on the original 2060's, that meant designing a new PCB. That whole process took months.
According to techpowerup, the KO board is exactly the same board as the 1660ti's. There are no changes to the board; they did not design a new board at all. See this paragraph:
"The EVGA GeForce RTX 2060 KO is made possible because of the way NVIDIA engineered three of its key "Turing" family GPUs, the TU104, TU106, and TU116. The three feature very different die-sizes, but the size of their fiberglass substrate (package) is the same. This is because NVIDIA decided to give them a common pin-map. This enables graphics card designers to share PCB designs among the three GPUs, reducing R&D costs, all while NVIDIA gets to better harvest its GPU dies."
So that is why this is possible: Nvidia engineered it this way from the very start. That means it is possible Nvidia considered this move from the start as well; they had to have, in order to engineer the chips like this. But consideration and actual action are different things. There is little doubt in my mind that the 5600XT is a major reason why Nvidia is doing this right now.
It's a smart move to take the chips that failed to be 2080s and use them in a lesser card. It certainly beats the alternative of trashing them. But the way they did it this time is pretty unique. Typically a TU104 that doesn't cut it would become a 2070; it's a bit strange for them to drop two full tiers into the x60 range. It's possible these chips might not have met the 2070 spec either, but there could be other factors involved, like market demand. The 2070 and 2080, at least in their original non-Super versions, were not very exciting cards. Shoot, it might have even made sense to turn these into a 2060 equivalent of a Quadro, considering the performance gains show up only in production software, the kind whose users are more willing to pay more money for hardware.
This got me thinking. A while back there were reports about how well the Titan V was performing at ray tracing in spite of not having RT cores. In fact, if you look at our benchmark list, you will find that the Titan V straight up beats a 2080ti, and even the mighty Titan RTX, at Iray. And this is the Iray with RT cores enabled. It is not a fluke, either. If you search for Migenius Iray Benchmarks, you will find the Titan V above the 2080ti there as well. (And I would like to point out that people here in these forums thought I was being silly for quoting those Migenius benchmarks all the time... just sayin'.) I finally came across an article with a direct answer from Nvidia to explain the performance:
"Volta has about 4x lower latency on the cache with more bandwidth and twice the low level cache."
So...I suspect what is happening with the KO comes down to something similar. While the KO may have its core counts and clocks in line with the 2060, the TU104 chip still has certain fundamental elements embedded in its design that help it with tasks like Blender.
At any rate, the bottom line is that the 2060 KO has suddenly become a very compelling GPU for Daz Iray users on a budget. Though I stress we don't know how it will perform exactly with Iray since we don't have any data.
But in that video they did cover a piece of software that uses both CUDA and OptiX, just like Iray does. That bench saw the KO perform 19% faster than a regular 2060. If that holds true for Iray, it is big news, because a KO running 19% faster in Iray would place it ABOVE the 2060 Super and the original 2070, which cost $100 more. So if a user is OK with just 6GB of VRAM, this would be pretty fantastic.
However, it would be a good idea to see somebody test it on the benchmark before Daz users start snatching up 2060 KO's.
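For context on why +19% would leapfrog those cards, the core-count math is suggestive. The core counts below are the published Turing specs; the assumption that render throughput roughly tracks core count at similar clocks is mine, and real Iray results will vary:

```python
# Published Turing CUDA core counts (clocks and memory ignored -- the
# assumption that throughput scales with core count is a rough
# approximation, not an Iray measurement).
cores = {"2060": 1920, "2060 Super": 2176, "2070": 2304}

base = cores["2060"]
ko_throughput = 1.19  # the +19% figure from the production bench above

for card, n in cores.items():
    print(f"{card}: {n / base:.3f}x a stock 2060 (by core count)")
print(f"2060 KO: {ko_throughput:.3f}x (by the bench result)")
# 2060 Super: ~1.133x, 2070: ~1.200x -- so a +19% KO would clear the
# Super and land right at the original 2070, for $100+ less.
```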
Keep in mind that this was only found to be true for scenes which do not benefit significantly from RTCore based raytracing acceleration (like the test scene I created for the thread in question - inadvertently. Since the goal was 0-day benchmarking of RTX benefits, the test scene had to be created in advance of Iray RTX's debut, meaning it was a guess about what sort of scene composition would take best advantage of the new features. A guess that unfortunately turned out to be wrong.) If you compare findings between the Titan RTX and Titan V for a test scene where RTCore acceleration does contribute significantly (like this one), the story is very different.
My initial results for this scene with a Titan RTX were:
Titan RTX, 4.11.0.383 x64, OptiX Prime ON, 431.60 WDDM, 70.724 = 08.484 mega-samples per second
Titan RTX, 4.12.0.047 x64, OptiX Prime NA, 431.60 WDDM, 26.797 = 22.391 mega-samples per second
So that would be an apparent speed increase of 2.639 times with RTCore support for this scene.
Whereas @JD_Mortal's initial results with a single Titan V were:
Titan-V, 4.11.0.383 x64, Optix ON, GRD 436.15, WDDM, 76.910sec = 07.801 mega-samples/sec
Titan-V, 4.12.0.67 x64, Optix ON, GRD 436.15, WDDM, 74.143sec = 08.092 mega-samples/sec
Which would be an apparent speed increase of 1.0374 times, or only +4%, versus the +164% of the Titan RTX.
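If anyone wants to sanity-check that math: in every row above, time multiplied by rate comes out to roughly 600 mega-samples, so the scene appears to be a fixed ~600 Msample workload (an inference from the posted figures, not a documented constant), and the speedups are simple ratios of the times:

```python
# Recompute the figures quoted above from the render times alone.
TOTAL_MSAMPLES = 600.0  # inferred: seconds * Msamples/s ~= 600 in every row

runs = {
    "Titan RTX, 4.11 (pre-RTX Iray)": 70.724,  # seconds
    "Titan RTX, 4.12 (Iray RTX)":     26.797,
    "Titan V,   4.11":                76.910,
    "Titan V,   4.12":                74.143,
}

for name, secs in runs.items():
    print(f"{name}: {TOTAL_MSAMPLES / secs:6.3f} Msamples/s")

# Speedup = old time / new time
print(f"Titan RTX: {70.724 / 26.797:.3f}x  (~+164%)")
print(f"Titan V:   {76.910 / 74.143:.4f}x  (~+4%)")
```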
Pretty much. Here's a quick and dirty rendition of what logic dictates the GPU die topology of a vanilla spec 2060 (1920 CUDA cores) built from a cut-down TU106 chip should look like:
And here's the same for what logic dictates the GPU die topology of a vanilla spec 2060 (1920 CUDA cores) built from a cut-down TU104 chip should look like:
As you can see, there is no logical way for the TU104 die to host 1920 active CUDA cores (30 SMs - the small green rectangles bisected with reddish lines) without at least one additional GPC (the larger rectangles containing groupings of green SMs), since GPCs contain fewer SMs (8 vs 12) on TU104 chips. That means more active lesser-known Turing die components like Raster Engines and TPCs in play, and less fighting over shared resources between SMs, since they exist in smaller groupings. *
* See pages 7-23, Appendix A and Appendix B of this document for larger versions of these graphics as well as what all of these graphics/acronyms mean.
Hence the TU104 based 2060 KO's superior performance in certain GPU compute tasks where CUDA cores aren't the primary thing being stressed.
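The arithmetic behind those renditions is easy to verify, using the per-die figures already cited above (64 CUDA cores per Turing SM, 12 SMs per GPC on TU106 versus 8 on TU104):

```python
import math

# Sanity-check the SM/GPC arithmetic. Figures as cited in the post:
# 64 CUDA cores per Turing SM; 12 SMs per GPC on TU106, 8 on TU104.
CUDA_PER_SM = 64
SMS_PER_GPC = {"TU106": 12, "TU104": 8}

target_cores = 1920                       # vanilla 2060 spec
sms_needed = target_cores // CUDA_PER_SM  # = 30 SMs

for die, per_gpc in SMS_PER_GPC.items():
    gpcs = math.ceil(sms_needed / per_gpc)
    print(f"{die}: {sms_needed} SMs -> at least {gpcs} GPCs active")
# TU106: 3 GPCs. TU104: 4 GPCs -- one extra GPC's worth of Raster
# Engines and TPCs in play, with the SMs spread into smaller groups.
```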