Nvidia Ampere (2080 Ti, etc. replacements) and other rumors...


Comments

  • nicstt Posts: 11,715

    Just a quick heads up.  This doesn't really relate to GPUs, but for those of you that like to buy used server grade CPUs occasionally, you should know about this:

    https://www.servethehome.com/amd-psb-vendor-locks-epyc-cpus-for-enhanced-security-at-a-cost/

    The short form, for those that won't check the link right now, according to the article above, is that EPYC CPUs can apparently be 'vendor locked' to specific ecosystems, via a 'fuse' that is built into the CPUs.  Once those security fuses are set, you won't be able to port the CPUs over to motherboards from other vendors.  Right now it looks like only HPE and Dell EMC are doing this, but that's not to say that other vendors won't take advantage of it as well.

    So this is more of a heads up if you should see used EPYC CPUs on eBay or something.  STH discovered this in their lab testing: once a CPU had been used on certain specific motherboards, it would no longer work in other platforms it had previously been tested in.  Again, something to keep in mind as you shop for used EPYC hardware...

    IMHO, it's a bit of a 'dick move' but apparently for security reasons it might have advantages.  I think it's more of a 'we want you to buy only our ecosystem hardware' move, but that's a matter of opinion...  So for you DIY workstation types, just be aware...

    This is a good heads up, cheers.

  • nicstt Posts: 11,715

    Well if anyone has any cash left over from getting their new card, there is always this:

    https://wccftech.com/ps5-24k-gold-edition-pre-orders-open-september-10/

  • nonesuch00 Posts: 18,762

    OK, the AMD thing today was to confirm the unveil dates for Vermeer Zen 3 CPUs and Radeon RX 6000 GPUs

    October 8th for Vermeer Zen 3, October 28th for the RX 6000 series GPUs.

    https://wccftech.com/amd-confirms-radeon-rx-6000-rdna-2-gpus-ryzen-4000-vermeer-cpus-october-unveil/

    So just a tease, but we have official dates for the unveilings now.  These of course could be subject to change, but since it's an official announcement, that's unlikely.  Now, unveiling isn't necessarily the same as launching, of course!

    So, nothing much to see here (yet), carry on!

    Awesome, as those will be supported by my B450 motherboard, which I didn't really expect.

    https://www.techpowerup.com/265850/amd-ryzen-4000-series-vermeer-cpus-to-be-compatible-with-b450-motherboards

     

  • Just a quick heads up.  This doesn't really relate to GPUs, but for those of you that like to buy used server grade CPUs occasionally, you should know about this:

    https://www.servethehome.com/amd-psb-vendor-locks-epyc-cpus-for-enhanced-security-at-a-cost/

    The short form, for those that won't check the link right now, according to the article above, is that EPYC CPUs can apparently be 'vendor locked' to specific ecosystems, via a 'fuse' that is built into the CPUs.  Once those security fuses are set, you won't be able to port the CPUs over to motherboards from other vendors.  Right now it looks like only HPE and Dell EMC are doing this, but that's not to say that other vendors won't take advantage of it as well.

    So this is more of a heads up if you should see used EPYC CPUs on eBay or something.  STH discovered this in their lab testing: once a CPU had been used on certain specific motherboards, it would no longer work in other platforms it had previously been tested in.  Again, something to keep in mind as you shop for used EPYC hardware...

    IMHO, it's a bit of a 'dick move' but apparently for security reasons it might have advantages.  I think it's more of a 'we want you to buy only our ecosystem hardware' move, but that's a matter of opinion...  So for you DIY workstation types, just be aware...

    This is not a dick move. I know it makes some things unpleasant in the used world, but it makes my life so much easier. The secure coprocessor does things like encrypt individual pages of RAM with separate encryption keys, so even if one virtual machine is compromised it cannot gain access to the rest of the live data, because the keys it needs to decrypt that RAM are nowhere that process can ever get to. I can kill the process and figure out why the security patches weren't up to date, or how it got penetrated, at my leisure. There's no panic about other clients' data or that there is some malware still live on the box somewhere.

    I'm actually disappointed that all the vendors weren't binding the CPUs to the boards upon install. That feature is an explicit part of AMD's marketing of PSB and I had assumed it was active by default.

    BTW the article describes an Intel behavior, releasing off-roadmap CPUs only to big buyers who then manually upgrade their racks, that as far as I'm aware AMD has not done. I'm not saying AMD won't. EPYC is very young as a server architecture and there aren't that many chassis deployed. So far they have not, and to date they have shown they aren't yet able to fulfill all the demand for Rome, much less supply tens of thousands of new Rome chips to the big guys while still supplying the rest of us what we want. Anyway, Rome is just over a year old. Intel usually does this once chips are in the 2 to 3 year old range and do need replacement, as they are in the 50 to 75% of MTBF range.
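
    For anyone who wants to check whether their own Linux box exposes these memory-encryption features, the kernel advertises them as CPU flags. A minimal sketch, assuming a Linux host and that the usual flag names ("sme", "sev", "sev_es") are reported for your kernel/BIOS combination; this is not an AMD tool, just a quick look at /proc/cpuinfo:

        # Minimal sketch: look for AMD memory-encryption CPU flags in /proc/cpuinfo.
        # Assumes Linux; whether "sev"/"sev_es" show up depends on the kernel and BIOS settings.
        def read_cpu_flags(path="/proc/cpuinfo"):
            with open(path) as f:
                for line in f:
                    if line.startswith("flags"):
                        # "flags : fpu vme de ..." -> set of flag names
                        return set(line.split(":", 1)[1].split())
            return set()

        if __name__ == "__main__":
            flags = read_cpu_flags()
            for feature in ("sme", "sev", "sev_es"):
                print(f"{feature}: {'present' if feature in flags else 'not reported'}")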

  • NylonGirl Posts: 2,236

    Just a quick heads up.  This doesn't really relate to GPUs, but for those of you that like to buy used server grade CPUs occasionally, you should know about this:

    https://www.servethehome.com/amd-psb-vendor-locks-epyc-cpus-for-enhanced-security-at-a-cost/

    The short form, for those that won't check the link right now, according to the article above, is that EPYC CPUs can apparently be 'vendor locked' to specific ecosystems, via a 'fuse' that is built into the CPUs.  Once those security fuses are set, you won't be able to port the CPUs over to motherboards from other vendors.  Right now it looks like only HPE and Dell EMC are doing this, but that's not to say that other vendors won't take advantage of it as well.

    So this is more of a heads up if you should see used EPYC CPUs on eBay or something.  STH discovered this in their lab testing: once a CPU had been used on certain specific motherboards, it would no longer work in other platforms it had previously been tested in.  Again, something to keep in mind as you shop for used EPYC hardware...

    IMHO, it's a bit of a 'dick move' but apparently for security reasons it might have advantages.  I think it's more of a 'we want you to buy only our ecosystem hardware' move, but that's a matter of opinion...  So for you DIY workstation types, just be aware...

    This is not a dick move. I know it makes some things unpleasant in the used world, but it makes my life so much easier. The secure coprocessor does things like encrypt individual pages of RAM with separate encryption keys, so even if one virtual machine is compromised it cannot gain access to the rest of the live data, because the keys it needs to decrypt that RAM are nowhere that process can ever get to. I can kill the process and figure out why the security patches weren't up to date, or how it got penetrated, at my leisure. There's no panic about other clients' data or that there is some malware still live on the box somewhere.

    I'm actually disappointed that all the vendors weren't binding the CPUs to the boards upon install. That feature is an explicit part of AMD's marketing of PSB and I had assumed it was active by default.

    BTW the article describes an Intel behavior, releasing off-roadmap CPUs only to big buyers who then manually upgrade their racks, that as far as I'm aware AMD has not done. I'm not saying AMD won't. EPYC is very young as a server architecture and there aren't that many chassis deployed. So far they have not, and to date they have shown they aren't yet able to fulfill all the demand for Rome, much less supply tens of thousands of new Rome chips to the big guys while still supplying the rest of us what we want. Anyway, Rome is just over a year old. Intel usually does this once chips are in the 2 to 3 year old range and do need replacement, as they are in the 50 to 75% of MTBF range.

    While I was at Valvoline, the card swipe device wasn't working. They had to go to another one that was at an inconvenient location to use a debit card. The worker asked why they don't just connect that card swipe machine to the cash register in the convenient location. The manager explained that the card swipe machines are bound to a specific cash register before they even get delivered, to ensure that nobody can connect one to their own device and make fraudulent transactions.

  • ...if one virtual machine is compromised it cannot gain access to the rest of the live data, because the keys it needs to decrypt that RAM are nowhere that process can ever get to. I can kill the process and figure out why the security patches weren't up to date, or how it got penetrated, at my leisure. There's no panic about other clients' data or that there is some malware still live on the box somewhere.

    That's interesting. If you are lucky enough to know you've been compromised, shouldn't you at the very least:

    Immediately isolate the system and create an offline image for analysis, as part of a greater Incident Response?

    Assume all your diagnostic programs are compromised as well and will report "nope, no compromise here..."?

    Make sure you meet your incident reporting obligations, which are required by law and vary from state to state?

    I think I remember something in the SANS SEC504 course saying things to that effect, and at no time using language like "at your leisure".

     

  • ...if one virtual machine is compromised it cannot gain access to the rest of the live data, because the keys it needs to decrypt that RAM are nowhere that process can ever get to. I can kill the process and figure out why the security patches weren't up to date, or how it got penetrated, at my leisure. There's no panic about other clients' data or that there is some malware still live on the box somewhere.

    That's interesting. If you are lucky enough to know you've been compromised, shouldn't you at the very least:

    Immediately isolate the system and create an offline image for analysis, as part of a greater Incident Response?

    Assume all your diagnostic programs are compromised as well and will report "nope, no compromise here..."?

    Make sure you meet your incident reporting obligations, which are required by law and vary from state to state?

    I think I remember something in the SANS SEC504 course saying things to that effect, and at no time using language like "at your leisure".

    My reporting obligations when a website (which is almost always what's running in a virtual machine that accepts outside connections) gets hacked are almost zero if no financial data is compromised. I notify the owner if they aren't the ones who notified me. We have very stringent rules about separating web servers from DB servers, so a compromised web site should not also directly compromise any financial data. If it does, that isn't on us but on the client. But if we do believe financial data was stolen, we are required to report it.

    My problem starts and ends at why/how the VM got hacked. Did a security patch fail to propagate? That sort of thing. The owner of the site may have very big problems and may be in a big rush to tell people, but for me, I am not, at least on these Rome systems.

  • kyoto kid Posts: 41,925
    edited September 2020
    nicstt said:
    Robinson said:
    nicstt said:
    ... I would really like to know what it really can do.

    Unfortunately the reviewers will concentrate on games and perhaps Blender.  We'll have to wait for a Daz person to buy one before finding out how they do with Iray.

    I'm not bothered about Iray, but Blender is just fine.

    ...my major concern is having to translate all the materials. Reality made it easy for LuxRender. I don't do well with node-based shader tools, which is why I never bother with the Shader Mixer or Shader Builder in Daz. I love Carrara's system, but that seems to be unique to that software.

    So, on the main topic.  Will the 3090 run off of a PCIe 2.0 x16 slot?

    Post edited by kyoto kid on
  • kyoto kid Posts: 41,925

    No one's posted about all the miners snapping up the first lot of 3080's?

    We're in for a rough start, peeps!

    ..is that rubbish still going on?  I thought most of those crypto currencies lost value, particularly after the big hack of Bitcoin in May of last year.

  • kyoto kid said:

    No one's posted about all the miners snapping up the first lot of 3080's?

    We're in for a rough start, peeps!

    ..is that rubbish still going on?  I thought most of those crypto currencies lost value, particularly after the big hack of Bitcoin in May of last year.

    Apparently so, Ethereum and Bitcoin are still going...?

  • nicstt Posts: 11,715
    kyoto kid said:
    nicstt said:
    Robinson said:
    nicstt said:
    ... I would really like to know what it really can do.

    Unfortunately the reviewers will concentrate on games and perhaps Blender.  We'll have to wait for a Daz person to buy one before finding out how they do with Iray.

    I'm not bothered about Iray, but Blender is just fine.

    ...my major concern is having to translate all the materials. Reality made it easy for LuxRender. I don't do well with node-based shader tools, which is why I never bother with the Shader Mixer or Shader Builder in Daz. I love Carrara's system, but that seems to be unique to that software.

    So, on the main topic.  Will the 3090 run off of a PCIe 2.0 x16 slot?

    Well, it is backward compatible with 3, but I've not seen any info on 2.

  • nicstt said:
    kyoto kid said:
    nicstt said:
    Robinson said:
    nicstt said:
    ... I would really like to know what it really can do.

    Unfortunately the reviewers will concentrate on games and perhaps Blender.  We'll have to wait for a Daz person to buy one before finding out how they do with Iray.

    I'm not bothered about Iray, but Blender is just fine.

    ...my major concern is having to translate all the materials. Reality made it easy for LuxRender. I don't do well with node-based shader tools, which is why I never bother with the Shader Mixer or Shader Builder in Daz. I love Carrara's system, but that seems to be unique to that software.

    So, on the main topic.  Will the 3090 run off of a PCIe 2.0 x16 slot?

    Well, it is backward compatible with 3, but I've not seen any info on 2.

    I think it's fine... but personally, I don't see the point in putting something so high-end into an old PC.

    There are so many nice features on newer hardware, it's probably better to invest in that, rather than just the GPU.. IMVHO. 

  • nicstt Posts: 11,715
    nicstt said:
    kyoto kid said:
    nicstt said:
    Robinson said:
    nicstt said:
    ... I would really like to know what it really can do.

    Unfortunately the reviewers will concentrate on games and perhaps Blender.  We'll have to wait for a Daz person to buy one before finding out how they do with Iray.

    I'm not bothered about Iray, but Blender is just fine.

    ...my major concern is having to translate all the materials. Reality made it easy for LuxRender. I don't do well with node-based shader tools, which is why I never bother with the Shader Mixer or Shader Builder in Daz. I love Carrara's system, but that seems to be unique to that software.

    So, on the main topic.  Will the 3090 run off of a PCIe 2.0 x16 slot?

    Well, it is backward compatible with 3, but I've not seen any info on 2.

    I think it's fine... but personally, I don't see the point in putting something so high-end into an old PC.

    There are so many nice features on newer hardware, it's probably better to invest in that, rather than just the GPU.. IMVHO. 

    I would agree, but someone using a Xeon system would likely get decent benefit from a good GPU, although I expect they would lose out somewhat.

     

  • i53570k Posts: 235

    PCIe 2.0 x16 is a bit slower than PCIe 3.0 x8.  In the benchmarks I've seen, the RTX 2080 is like 1-5% slower on PCIe 3.0 x8 than PCIe 3.0 x16, almost statistically insignificant.  The RTX 3090 is more than twice as fast as the 2080 and needs its memory to go much faster, so the penalty will be larger.  You are probably getting 3080 performance at 3090 price by putting it in PCIe 2.0 x16.  Of course you are still getting the 24GB vs. 10GB VRAM, but IMO it still doesn't make financial sense to skip a CPU/MB upgrade if you are buying a 3090.
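
    To put rough numbers on that (back-of-the-envelope only; real-world throughput is lower once protocol overhead is counted), the per-lane data rates work out to roughly 0.5 GB/s for Gen 2, ~0.985 GB/s for Gen 3 and ~1.97 GB/s for Gen 4, so PCIe 2.0 x16 and PCIe 3.0 x8 land in the same ballpark. A small Python sketch of the arithmetic:

        # Back-of-the-envelope PCIe bandwidth, one direction, ignoring protocol overhead.
        # Gen 2: 5 GT/s with 8b/10b encoding; Gen 3/4: 8/16 GT/s with 128b/130b encoding.
        PER_LANE_GBPS = {
            "2.0": 5 * 8 / 10 / 8,       # 0.5 GB/s per lane
            "3.0": 8 * 128 / 130 / 8,    # ~0.985 GB/s per lane
            "4.0": 16 * 128 / 130 / 8,   # ~1.97 GB/s per lane
        }

        def link_bandwidth(gen: str, lanes: int) -> float:
            """Approximate one-direction bandwidth of a PCIe link in GB/s."""
            return PER_LANE_GBPS[gen] * lanes

        for gen, lanes in [("2.0", 16), ("3.0", 8), ("3.0", 16), ("4.0", 16)]:
            print(f"PCIe {gen} x{lanes}: ~{link_bandwidth(gen, lanes):.1f} GB/s")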

  • jmtbank Posts: 187

    I got a bit bored of waiting and just spent $300 on a 1080 Ti now that the prices have dropped. Should get half back for the 1070.  It's been a couple of months since any render fit in 8GB for me.  I'll pick up a 20GB card as and when they show up now.

  • i53570k said:

    PCIe 2.0 x16 is a bit slower than PCIe 3.0 x8.  In the benchmarks I've seen, the RTX 2080 is like 1-5% slower on PCIe 3.0 x8 than PCIe 3.0 x16, almost statistically insignificant.  The RTX 3090 is more than twice as fast as the 2080 and needs its memory to go much faster, so the penalty will be larger.  You are probably getting 3080 performance at 3090 price by putting it in PCIe 2.0 x16.  Of course you are still getting the 24GB vs. 10GB VRAM, but IMO it still doesn't make financial sense to skip a CPU/MB upgrade if you are buying a 3090.

    I was sceptical about the PCIe 4 stuff... but wow, going from a Gen3 to a Gen4 NVMe drive is a whole new world.

  • jmtbank Posts: 187
    i53570k said:

    PCIe 2.0 x16 is a bit slower than PCIe 3.0 x8.  In the benchmarks I've seen, the RTX 2080 is like 1-5% slower on PCIe 3.0 x8 than PCIe 3.0 x16, almost statistically insignificant.  The RTX 3090 is more than twice as fast as the 2080 and needs its memory to go much faster, so the penalty will be larger.  You are probably getting 3080 performance at 3090 price by putting it in PCIe 2.0 x16.  Of course you are still getting the 24GB vs. 10GB VRAM, but IMO it still doesn't make financial sense to skip a CPU/MB upgrade if you are buying a 3090.

    We already know that the gaming penalty of PCIe 3 vs 4 is less than the 'penalty' of dropping from the Intel i9 9900K Nvidia used in their benchmarks to an AMD PCIe 4 machine.

  • i53570k Posts: 235
    edited September 2020
    jmtbank said:
    i53570k said:

    PCIe 2.0 x16 is a bit slower than PCIe 3.0 x8.  In the benchmarks I've seen, the RTX 2080 is like 1-5% slower on PCIe 3.0 x8 than PCIe 3.0 x16, almost statistically insignificant.  The RTX 3090 is more than twice as fast as the 2080 and needs its memory to go much faster, so the penalty will be larger.  You are probably getting 3080 performance at 3090 price by putting it in PCIe 2.0 x16.  Of course you are still getting the 24GB vs. 10GB VRAM, but IMO it still doesn't make financial sense to skip a CPU/MB upgrade if you are buying a 3090.

    We already know that the gaming penalty of PCIe 3 vs 4 is less than the 'penalty' of dropping from the Intel i9 9900K Nvidia used in their benchmarks to an AMD PCIe 4 machine.

    To be clear, I wasn't advocating that going from PCIe 3 to PCIe 4 is a no-brainer.  I was referring to the question of whether it makes sense to put a 3090 in a PCIe 2 machine, a two-generation jump to PCIe 4.

    Post edited by i53570k on
  • kyoto kid said:
    nicstt said:
    Robinson said:
    nicstt said:
    ... I would really like to know what it really can do.

    Unfortunately the reviewers will concentrate on games and perhaps Blender.  We'll have to wait for a Daz person to buy one before finding out how they do with Iray.

    I'm not bothered about Iray, but Blender is just fine.

    So, on the main topic.  Will the 3090 run off of a PCIe 2.0 x16 slot?

    If your system is that old, then you will probably be CPU limited.  Not sure if it will be that big of an issue when rendering static images.

  • jmtbank Posts: 187

    There might be some reviews with bandwidth-limited testing done, but I would guess they will just slow down a PCIe 3/4 mobo rather than actually test it in a 2.0 machine.

    It might actually be months before you get that level of confirmation that someone has tested it for sure in a 2.0 machine.  But as said by others above, I would be surprised if it didn't work.

  • nonesuch00 Posts: 18,762

    I have yet to buy the 2TB NVMe SSD module I want for this computer, but I guess I need to seriously consider getting the PCIe 4 version instead of the PCIe 3 one, even though the price bump is quite high.

  • kyoto kid Posts: 41,925
    edited September 2020
    nicstt said:
    kyoto kid said:
    nicstt said:
    Robinson said:
    nicstt said:
    ... I would really like to know what it really can do.

    Unfortunately the reviewers will concentrate on games and perhaps Blender.  We'll have to wait for a Daz person to buy one before finding out how they do with Iray.

    I'm not bothered about Iray, but Blender is just fine.

    ...my major concern is having to translate all the materials. Reality made it easy for LuxRender. I don't do well with node-based shader tools, which is why I never bother with the Shader Mixer or Shader Builder in Daz. I love Carrara's system, but that seems to be unique to that software.

    So, on the main topic.  Will the 3090 run off of a PCIe 2.0 x16 slot?

    Well, it is backward compatible with 3, but I've not seen any info on 2.

    I think it's fine... but personally, I don't see the point in putting something so high-end into an old PC.

    There are so many nice features on newer hardware, it's probably better to invest in that, rather than just the GPU.. IMVHO. 

    ...can't afford to build a new system on top of purchasing a new GPU. 24 GB of VRAM, though, at $1,000 less than a Titan RTX ($3,500 less than a Quadro 6000) is just too hard to pass up, particularly with over 10,000 cores.  As I understood it, the drawback of using a PCIe 3.0 card in a PCIe 2.0 slot is that loading the scene into VRAM takes a bit longer, but once the render process begins, there is little to no lag.

    No point in spending money on building a new rig and then having to settle for a newer, less capable GPU for my needs. Would be nice to rarely if ever have scenes dump to the CPU. Crikey, I can make even a Titan-X break a sweat.

    kyoto kid said:
    nicstt said:
    Robinson said:
    nicstt said:
    ... I would really like to know what it really can do.

    Unfortunately the reviewers will concentrate on games and perhaps Blender.  We'll have to wait for a Daz person to buy one before finding out how they do with Iray.

    I'm not bothered about Iray, but Blender is just fine.

    So, on the main topic.  Will the 3090 run off of a PCIe 2.0 x16 slot?

    If your system is that old, then you will probably be CPU limited.  Not sure if it will be that big of an issue when rendering static images.

    ...yeah that is all I do.  I already know my system doesn't have quite the horsepower (nor I the patience to wait days...weeks for even a short animation to render).

    Post edited by kyoto kid on
  • PCIe generations are fully interoperable. Gen 3 devices worked on Gen 2 motherboards. Gen 4 devices work on Gen 3 motherboards. I have no idea if a Gen 4 device has actually been tested on a Gen 2 board, but there is zero reason it should not work.

  • nicstt Posts: 11,715
    edited September 2020

    If 4 works on 3, and 3 works on 2, then surely 4 should work on 2? (EDIT: corrected 3 to 2)

    It makes sense; it even seems logical.

    I'd ask the supplier I was getting it from (in writing) if it would work, and if they would take it back if it didn't work - presuming they thought it would.

    Post edited by nicstt on
  • nicstt said:

    If 4 works on 3, and 3 works on 2, then surely 4 should work on 3?

    It makes sense; it even seems logical.

    I'd ask the supplier I was getting it from (in writing) if it would work, and if they would take it back if it didn't work - presuming they thought it would.

    If you mean 4 on 2, the standard requires that it work. If the device conforms to the standard, it should work fine. Gen 4 should even work on a Gen 1 motherboard, if any of those are still around.

    The only thing that changes from one generation to the next, at least as far as this sort of thing is concerned, is how fast data can be sent. But data can be sent more slowly for any number of reasons, so the device has to be able to wait until the receiving device says it has read the data before sending more. On Gen 2 the max bandwidth is simply lower, so it should not matter.
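
    Once the card is actually installed, you can see what link it negotiated rather than guessing. A rough sketch using nvidia-smi's query-gpu fields (assuming a recent NVIDIA driver; note that cards often drop to a lower link generation at idle, so check under load):

        # Sketch: report the PCIe generation/width the GPU has negotiated, via nvidia-smi.
        # Assumes the NVIDIA driver is installed and these query fields exist on your version.
        import subprocess

        FIELDS = ("name,pcie.link.gen.current,pcie.link.gen.max,"
                  "pcie.link.width.current,pcie.link.width.max")

        result = subprocess.run(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        for line in result.stdout.strip().splitlines():
            name, gen_cur, gen_max, width_cur, width_max = [s.strip() for s in line.split(",")]
            print(f"{name}: running PCIe gen {gen_cur} x{width_cur} "
                  f"(card supports up to gen {gen_max} x{width_max})")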

  • TheMysteryIsThePoint Posts: 3,242
    edited September 2020
    nicstt said:

    If 4 works on 3, and 3 works on 2, then surely 4 should work on 3?

    It makes sense; it even seems logical.

    I'd ask the supplier I was getting it from (in writing) if it would work, and if they would take it back if it didn't work - presuming they thought it would.

    Sure, there should be a transitive law logically speaking, but this is exactly the kind of thing that I would not trust to just work... I don't feel the need to be first, so I'm going to just wait and let others figure everything out. Gives me more time to budget, too.
    Post edited by TheMysteryIsThePoint on
  • nicstt Posts: 11,715
    nicstt said:

    If 4 works on 3, and 3 works on 2, then surely 4 should work on 3?

    It makes sense; it even seems logical.

    I'd ask the supplier I was getting it from (in writing) if it would work, and if they would take it back if it didn't work - presuming they thought it would.

     

    Sure, there should be a transitive law logically speaking, but this is exactly the kind of thing that I would not trust to just work... I don't feel the need to be first, so I'm going to just wait and let others figure everything out. Gives me more time to budget, too.

    Which is why I said I would ask who I was buying it from, and if I could return it - in writing.

  • nicstt Posts: 11,715
    nicstt said:

    If 4 works on 3, and 3 works on 2, then surely 4 should work on 3?

    It makes sense; it even seems logical.

    I'd ask the supplier I was getting it from (in writing) if it would work, and if they would take it back if it didn't work - presuming they thought it would.

    If you mean 4 on 2, the standard requires that it work. If the device conforms to the standard, it should work fine. Gen 4 should even work on a Gen 1 motherboard, if any of those are still around.

    The only thing that changes from one generation to the next, at least as far as this sort of thing is concerned, is how fast data can be sent. But data can be sent more slowly for any number of reasons, so the device has to be able to wait until the receiving device says it has read the data before sending more. On Gen 2 the max bandwidth is simply lower, so it should not matter.

    Correct, I meant 2.

  • marble Posts: 7,500

    So, experts - are we being advised that upgrading to an Ampere GPU needs to be accompanied by a whole system upgrade? I have a 4-year-old Intel i7 6700, an LGA 1151 (PCIe 3.0) motherboard and 32 GB of DDR4 RAM.

  • outrider42 Posts: 3,679

    There should be no issue using these cards with PCIe 2.0 for Iray and GPU-based rendering. And I assume we are GPU rendering, considering this thread is about Ampere. The only issues you need to worry about are the physical size and power draw. These are going to be monsters. If you have a case dating back that long, you might have some things that block the card. The 3090s are over a foot long.

    It would only be a bottleneck for gaming and certain software. Iray does not function like video games at all. This is the whole reason why we need so much VRAM in the first place! The entire scene is sent to VRAM. Then it basically stays in VRAM and the GPU does its thing.

    If Iray ever gets an "out of core" rendering type of feature then yes, it would make a difference. In that situation, the CPU and GPU must trade data with each other. But Iray has no such mode, and I doubt it ever will. They haven't added one after all these years, so why would they add one now? So PCIe 4.0 might help Octane out-of-core renders. It would be really cool if somebody could test this.

    Now there could be one possible situation that might see a benefit: if you use CPU and GPU to render together, which is still an option with Iray. There could be some kind of improvement in that situation. But for strictly GPU-based renders, which most of us Daz users do, you will not see any difference whatsoever. Maybe loading times, but if you are already using PCIe 2.0, the load times should be the same as what you are used to. (There's a quick VRAM sanity-check sketch at the end of this post.)

    Stressing the size, here is a pic of a Nvidia 3090 in a "standard" case.

    The 3090 is actually resting on the hard drive bays. Why the bays would stand out so much in the first place is my immediate thought, but lots of older cases have weird tight spots and design choices. Some designs often overlook airflow a bit, too. In this particular case, I would be a little concerned. While the fan on the GPU is not blocked, if we assume the front of the case brings in air like most do, then the drive bays have to be blocking a lot of air from reaching the underside of this card. That could pose a problem here. We don't see the rest of the case, so it is hard to say. The CPU looks to be water cooled, but I don't think that's going to help much of anything. The unique push-pull fans on FE Ampere will be tested in this case for sure.

    But the point being, many cases might appear to have enough room, but in reality they may have things blocking the area needed. So make sure to double-check the dimensions of whatever you get, and don't just factor in space for the card. You should factor in space around the card so it can breathe properly.
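
    On the VRAM point above: since the whole scene has to fit on the card, a quick sanity check of free memory before hitting render can save a fallback to CPU. A minimal sketch using the NVML Python bindings (assuming the NVIDIA driver and the pynvml / nvidia-ml-py package are installed):

        # Sketch: report total/free VRAM per GPU via NVML.
        # Assumes the NVIDIA driver and the pynvml (nvidia-ml-py) package are installed.
        import pynvml

        pynvml.nvmlInit()
        try:
            for i in range(pynvml.nvmlDeviceGetCount()):
                handle = pynvml.nvmlDeviceGetHandleByIndex(i)
                name = pynvml.nvmlDeviceGetName(handle)
                if isinstance(name, bytes):      # older pynvml versions return bytes
                    name = name.decode()
                mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
                gib = 1024 ** 3
                print(f"GPU {i} ({name}): {mem.free / gib:.1f} GiB free of {mem.total / gib:.1f} GiB")
        finally:
            pynvml.nvmlShutdown()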
