Nvidia Ampere (2080 Ti, etc. replacements) and other rumors...


Comments

  • tj_1ca9500b Posts: 2,057

    Here's the discussion about the 2 vs 4 sticks of memory on Ryzen 5000 over at the Level 1 Techs forum:

    https://forum.level1techs.com/t/ryzen-5000-memory-layout-2x8gb-vs-4x8gb-vs-2x16gb-single-rank-vs-dual-rank/163529/4

    I don't think Wendell has weighed in on the thread yet as I type this, but apparently Buildzoid had looked into this behaviour with Zen 2 (see thread for link to video).

  • The weird thing is the memory controller is supposed to be unchanged from Zen 2 so I just don't get this. When I go to work Monday I'll have to see what I have in the way of a matched set of 4 sticks of DDR4 RAM to bring home to test this myself.

    Also, this will be somewhat awkward for the average user to take advantage of. 4 x 4GB kits are essentially unavailable right now (I could not find any), so if you just want 16GB of RAM you'd be out of luck.

    This may be a result of a number of improvements; the ones jumping out at me ATM are that they increased the number of table walkers from 2 to 6, which reduces that bottleneck, and that they increased the store queue depth from 48 to 64. There are other improvements as well.

    https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-dive-review-5950x-5900x-5800x-and-5700x-tested/4

    Speculating here, but AMD MAY have already built the IO die in such a way that the CPU can directly handle either slot in a channel, with the IO die just being the 'bridge'.  The 'cross ccd' thing in Zen 1-2 may have incurred some sort of penalty here before, which was resolved somehow when the CCX/D grew to 8 cores.  I honestly don't know, but I look forward to Wendell's investigation into this subject.

    It certainly could be any number of things that only showed up now. It's just puzzling. Due to never really bothering with 4x kits for desktops and having generally fairly slow ECC kits at work I'm not sure how much validating of this I'll be able to do.

  • tj_1ca9500b Posts: 2,057


    Definitely check out Buildzoid's video. He rambles a bit, but the short form is that having dual ranks in the channel seems to help, even with Zen 2, at least in the benches. You may sacrifice a bit of max memory speed with dual ranks, but the performance bump looks to more than offset the loss in MHz, at least in the benchmarks he picked. Latency also goes up slightly when going dual rank, but the scores are higher despite this, at least in his testing, and that was with Zen 2...
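    A back-of-envelope sketch of that MHz tradeoff (the speeds below are illustrative picks, not figures from the video): peak DDR4 bandwidth is just transfers per second x 8 bytes per transfer x channels, so dropping from 3600 to 3200 MT/s only costs about 11% of raw bandwidth, which rank interleaving can apparently more than claw back.

```python
# Rough peak-bandwidth math for dual-channel DDR4.
# Speeds are illustrative assumptions, not benchmark figures.

def peak_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    """Theoretical peak in GB/s: transfers/s * bytes/transfer * channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

faster_single_rank = peak_bandwidth_gbs(3600)  # e.g. 2x8GB single rank
slower_dual_rank = peak_bandwidth_gbs(3200)    # e.g. 4x8GB, 2 ranks/channel

print(f"3600 MT/s: {faster_single_rank:.1f} GB/s")  # 57.6 GB/s
print(f"3200 MT/s: {slower_dual_rank:.1f} GB/s")    # 51.2 GB/s
```

    Real throughput then depends on timings and on rank interleaving, which is exactly the effect the benchmarks are measuring.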

  • nonesuch00 Posts: 18,729
    edited November 2020

    I have 32GB as 2x16GB DDR4 2666MHz because at the time the motherboard only supported 4 slots totalling 64GB. Since then Gigabyte has updated that B450 motherboard's BIOS to support 4x32GB for a total of 128GB RAM. Given the direct-access-to-SSD speed enhancements and the Infinity Fabric multi-GPU-on-a-single-card style improvements said to be coming to nVidia & AMD video cards and motherboards in the next couple of years, I think I will be cheap and continue to get by on 32GB RAM until the new tech is ready.

    GPUs are getting tantalizingly close to being able to run a renderer that creates complex PBR/iRay animations in what seems like realtime to the end user, so of course I don't want to spend thousands right before they achieve that, only to need to upgrade again afterwards to get that last leg up when they do.

  • I think you're misunderstanding.

    The discussion from GN is about the difference between dual and quad rank, not between single and dual rank. Extreme overclockers have quite frequently found benefit in a single rank of RAM. That isn't what is under discussion here.

  • tj_1ca9500b Posts: 2,057
    edited November 2020


    Per the GSkill forum, at least according to a senior forum member (post 3), all 4x8GB GSkill kits (which is apparently what Steve has) are single rank. The 16GB sticks are probably dual rank, but the 8GB sticks are single rank. If I'm understanding Buildzoid's benchmarks and other sources correctly, you end up with 'dual rank' when you pair two of those single-rank sticks in the same channel.

    https://www.gskill.us/forum/forum/product-discussion/ddr4/162477-ddr4-3200mhz-cl14-dual-rank

    When pressed, a GSkill Admin linked to 2x16GB kits on Newegg, indicating that THOSE were dual rank (see post 6).
  • tj_1ca9500b Posts: 2,057
    edited November 2020

    Also, definitely read the reply from Mirzad Redsovic a couple of replies below Steve's post on the YouTube video. He covers the subject pretty extensively, mentioning that the 4 x 8GB sticks should add up to 4 ranks, or 256 bits. There may be some older kits out there that have dual-rank 8GB sticks, but almost all DDR4 sticks manufactured recently are single rank, according to him.

    He does caution against using 4 dual-rank modules in an AMD system (4 x 16GB dual rank), but these days there are single-rank 16GB sticks on the market to help address that issue.
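    Mirzad's rank arithmetic is easy to sanity-check: each DDR4 rank presents a 64-bit data interface, so the rank and bus totals fall straight out (a quick sketch using the stick counts from the thread):

```python
# Each DDR4 rank is a 64-bit interface; ranks in a channel interleave.

def total_ranks(sticks, ranks_per_stick):
    return sticks * ranks_per_stick

def total_bus_bits(sticks, ranks_per_stick, bits_per_rank=64):
    return total_ranks(sticks, ranks_per_stick) * bits_per_rank

# 4 x 8GB single-rank sticks: 4 ranks, 256 bits, 2 ranks per channel
assert total_ranks(4, 1) == 4
assert total_bus_bits(4, 1) == 256

# 2 x 16GB dual-rank sticks land in the same place: 4 ranks, 256 bits
assert total_ranks(2, 2) == 4
assert total_bus_bits(2, 2) == 256
```

    Which is presumably why a 2x16GB dual-rank kit and a 4x8GB single-rank kit behave similarly in these benchmarks.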

  • I just noticed on NewEgg and some other sites that pretty much all of the 3090 models are now showing "Sold Out" rather than just "Out of Stock", and no longer list MSRP prices for the various models. Does this mean that NVidia has stopped making this card, or should we expect some inventory to start showing up in the future? It makes me nervous to hear about a possible switch of chip fab (from Samsung to TSMC) and whether this means that production has simply been discontinued and there is no hope of obtaining the card. I agree with other commenters above that all the rumors of 30xx Ti cards or higher-VRAM variants of the 3070/3080 are nonsense if the current models cannot be obtained anywhere, because there are no new units to sell other than the ones the scalpers scooped up.

  • Kevin Sanderson Posts: 1,643
    edited November 2020

    3080 and 3090 cards have been in high demand and they've been selling faster than they can make them. The rumor guys have been saying more are on the way, but not for a few weeks. People have been getting 3090 cards as soon as they come in at Best Buy online, according to online posts. But they go fast. The big problem has been the scalpers snatching up what would've been a normal supply in any other year. Quit being nervous. Save your money. Nvidia has said supply will be better next year.

  • It is entirely likely that some of those specific models are sold out. Some of the cards were always meant to have very small production runs; the Kingpin and FTW cards, for instance, are always limited runs.

    Also, now that the AIBs have the real drivers in hand, they have seen that the power delivery on any 3090 with only 2x 8-pin connectors is not adequate for these cards. So they may have dropped those models out of production and may be redesigning them.

    But by all accounts Samsung is still making chips, Nvidia is still delivering those chips to the AIBs, and the AIBs are still making cards as fast as they get chips. They'd all be somewhat crazy to drop production right before Christmas.

  • nonesuch00 Posts: 18,729
    edited November 2020


    No, they switched from "Out of Stock" to "Sold Out" to throw off the bots that kept excessively polling the site, trying to get to any new stock before normal customers had a chance to buy the cards. The people configuring the bot scripts will soon adjust to that, though. It costs them nothing as long as the IPs they use aren't banned.

  • nicstt Posts: 11,715

    I've stopped bothering. I'm not so desperate to spend my cash that I must give it to someone else in exchange for something I don't actually NEED. Sure, it will be faster than my 980 Ti and Threadripper 1950X, but I can do what I need to do now; faster is a WANT, not a NEED.

  • The Nvidia strangeness continues: (plus some other stuff)

  • billyben_0077a25354 Posts: 771
    edited November 2020

    Here's something interesting: a mid/low-end card with more memory than a 3080.
  • Nvidia is not going to sell a 3060 with 12GB of VRAM for $250; they'd invalidate their entire product stack. Not that they're going to sell the 3060 at $250 anyway, based on the GN story.

  • Asari Posts: 703
    One of the biggest hardware retailers in my country has both the 3080 and the 3090 in stock. The 3090 costs 1900€ and the 3080 costs 1100€.
  • The story said the 3050 with 6GB would be $250.00. They were saying that the 3060 with 12GB would be at least $300.00. I would say take it with a grain of salt; the 3060 will be using slower GDDR6, not GDDR6X, and while the sources he used are usually pretty reliable, these are still rumors. We will have to see.

  • nonesuch00 Posts: 18,729
    edited November 2020


    Well, the units (CUDA, tensor, RT) in the 3060 with 12GB were said to be only about half as many too, so $250 - $300 is sort of in line with the next card up costing $500. There's a lot more to a 30X0 card than the name that makes it faster or slower.

  • JamesJAB Posts: 1,766

    Nvidia needs to do better than these little 8 and 10 GB cards for me to consider dumping my GTX 1080 Ti 11GB card, with AMD setting the new normal for high-tier gaming cards at 16GB...

    Just waiting for the new stack of Nvidia cards now that there is going to be much-needed real competition in the GPU market. (The whole stack of Nvidia cards, 16xx, 20xx, and 30xx, was designed and priced around a performance monopoly.)

  • The GN news story said that Nvidia wants the AIBs to sell the 3060 for $250. GN is far more reliable than "shout at camera guy." Either way, Nvidia is not making a card cheaper than the 3070 that invalidates the 3070 and 3080. They aren't that stupid.

    It has no reason to exist; it would be a 1080p card being sold with VRAM for 8K. How does that make any sense? Beyond that, the AIBs have flat-out said they won't sell whatever the real specs of the card are at the price Nvidia wants, which has to be a lot lower than you guys think, since Nvidia thinks it can be cooled with the equivalent of an Intel stock cooler: a $4 flower heat sink with no heat pipes, no vapor chamber, nothing but a heat sink right on the GPU and one very cheap fan. That definitely means no 12 VRAM modules (it probably means 6 or fewer).

  • tj_1ca9500b Posts: 2,057
    edited November 2020

    So we are starting to get a picture of Apple's upcoming 'in house' notebook and desktop chips:

    https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive

    We'll have to wait a bit for the first batch of laptops and such to reach the hands of the reviewers, but so far the numbers look rather impressive, roughly on par with the latest AMD chips, in some cases better.  No idea what their GPU comparison point is ATM, but they are talking a big game on the GPU front as well.

    Of course, whether Daz Studio will be compatible with the new Apple chips remains to be seen. At the very least, I'd imagine that some tweaks will need to be made, and of course there's the chance that the new chips may break Daz compatibility entirely in favor of Apple's own 3D rendering software.

    Could be interesting...

  • Robinson Posts: 751

    Have I missed a thread?  Anyone managed to get hold of a 3070 and bench it?  I'm curious.

  • tj_1ca9500b Posts: 2,057
    edited November 2020
    Robinson said:

    Have I missed a thread?  Anyone managed to get hold of a 3070 and bench it?  I'm curious.

    Keep an eye on the Iray Benchmarking thread. There are a few results there for the RTX 3000 series cards. Not seeing the 3070 specifically (may have missed it), but I'm sure it'll show up soon.

    https://www.daz3d.com/forums/discussion/341041/daz-studio-iray-rendering-hardware-benchmarking#latest

    Also, not Daz Studio, but here are a few rendering benchmarks showing the 3070, 3080 and 3090 and a few other cards in Octane Render and Blender and such:

    https://techgage.com/article/nvidia-geforce-rtx-3070-rendering-performance/

  • tj_1ca9500b Posts: 2,057

    BTW, for those not in the know already, you probably should avoid the UserBenchmark database as it's still heavily biased towards Intel for no good reason, other than maybe being supported by Intel money...

    https://www.notebookcheck.net/Final-nail-in-the-coffin-Bar-raising-AMD-Ryzen-9-5950X-somehow-lags-behind-four-Intel-parts-including-the-Core-i9-10900K-in-average-bench-on-UserBenchmark-despite-higher-1-core-and-4-core-scores.503581.0.html

    Anandtech has decent reviews if you need to check out CPU comparisons, as do a few other sites.

  • tj_1ca9500b Posts: 2,057

    Here's a decent video covering memory timings with the Ryzen 5000 series from Hardware Unboxed, including the single vs dual rank performance bump.

    Short form, having dual ranks in a channel helps both Intel and Ryzen CPUs (Zen 2 & 3).  The memory speed and timings comparison is worth looking at in the video as well.

  • tj_1ca9500b Posts: 2,057

    In other news, Intel has launched a server based card with 4 GPUs on it.  Each GPU sports 8 GB of LPDDR4 VRAM.

    It's targeted at cloud gaming, but one does wonder how it might do with non-Iray-based rendering...

    https://wccftech.com/intel-server-gpu-h3c-xg310/

  • Keeping in mind that I've yet to see a full spec sheet on these (Intel apparently didn't even try selling them into the general-purpose datacenter market), there are some major red flags about how they'd do in anything besides mobile gaming and video transcoding.

    First, they only have DDR4, not GDDR6. That's a very significant difference in memory speed, no matter what the speed of the modules is. Next, the memory bus is only 128 bits; I checked all the way back to the 3GB 1060 and even it had a 192-bit bus. So you're talking about a very narrow bus to very slow memory. As memory-intensive as rendering is, that alone would make me doubt how well these would do. But they are low-power cards, and apparently reasonably cheap, so you could cram shedloads of them into a rack and let them run as a render farm, if the TCO calculations work out. But there'd need to be a renderer that knew what they were and supported distributed rendering. Maybe Octane or V-Ray?
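    To put rough numbers on that (peak = bus width in bytes x per-pin data rate; the per-pin speeds here are assumed typical values, not published specs for these exact cards):

```python
# Peak memory bandwidth: (bus_bits / 8) bytes * per-pin rate in Gbps = GB/s.
# Per-pin rates below are assumed typical values, not confirmed specs.

def peak_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

intel_server_gpu = peak_gbs(128, 4.266)  # 128-bit LPDDR4 at ~4266 MT/s
gtx_1060_3gb = peak_gbs(192, 8.0)        # 192-bit GDDR5 at 8 Gbps, for scale

print(f"128-bit LPDDR4: {intel_server_gpu:.0f} GB/s")  # ~68 GB/s
print(f"192-bit GDDR5:  {gtx_1060_3gb:.0f} GB/s")      # 192 GB/s
```

    Roughly a 3x gap against a four-year-old mid-range card, which is why the narrow DDR4 bus looks like the limiting factor for rendering workloads.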

     

  • RayDAnt Posts: 1,154
    Robinson said:

    Have I missed a thread?  Anyone managed to get hold of a 3070 and bench it?  I'm curious.

    Still waiting for one to appear in the wild, I'm afraid.

  • Gr00vus Posts: 372

    Yeah, I'd love to buy a particular 3090 build somewhere near retail. Can't find one.

  • Gr00vus said:

    Yeah, I'd love to buy a particular 3090 build somewhere near retail. Can't find one.

    Best Buy online had a few Nvidia 3090s for a short time a couple of days ago, around 11:15 am, for $1,499.99. You had to wait and click again to put one in your cart, and it didn't work out for me. Nvidia has said more will be available at the beginning of next year.
