Semi OT: Next-gen wafer-on-wafer die stacking technology for GPUs - TSMC is working on it.

tj_1ca9500b Posts: 2,057
edited May 2018 in The Commons

Just saw this:

https://wccftech.com/next-generation-radeon-geforce-tsmc-wafer-on-wafer-die-stacking/

We won't be seeing this immediately, but this might be a significant game changer for GPU design... We are already reaping the benefits of die stacking on the memory front (HBM) and more significantly in the storage arena, so this just makes sense.  If TSMC is able to bring this up to speed without too many issues, yeah, this could indeed be a game changer for GPUs, and also CPUs I'd guess...

With 7nm ramping up to speed in the next year, and if this technology becomes an option for extending the life of larger nodes... yeah things could get VERY interesting in the next couple of years on the GPU front...

Edit: there's also this news:

In related news, TSMC also announced that 7nm (7FF) has entered volume production, and the node improvements (7FF+) are going through the validation steps now.  There are significant density and power efficiency increases associated with 7nm, and 5nm apparently isn't too far away either...

https://wccftech.com/tsmc-details-7nm-manufacturing-process-euv/

As I said, it should get rather exciting in the next year or two...


Comments

  • nonesuch00 Posts: 18,729

    Yeah, stacking & 5nm would bring better compute power to mobile tablets than desktop PCs get now, I bet.

  • kyoto kid Posts: 41,857
    edited May 2018

    ...the downside is that tablets have terrible cooling so rendering could result in a puddle of plastic and silicon as there is no way to enhance airflow.

  • nonesuch00 Posts: 18,729
    kyoto kid said:

    ...the downside is that tablets have terrible cooling so rendering could result in a puddle of plastic and silicon as there is no way to enhance airflow.

    Less power usage means cooler chips. That's where all that is heading.

  • kyoto kid Posts: 41,857
    edited May 2018

    ...rendering puts quite a load on graphics processors and the CPU which generates waste heat that needs to be dissipated. I have a notebook for which I now need to use an auxiliary keyboard because the heat from rendering burned out the switches for several keys on the upper part of the keyboard (and this was with Intel Integrated graphics, not a dedicated GPU).

    Tablets only employ passive, not active cooling

  • nonesuch00 Posts: 18,729
    kyoto kid said:

    ...rendering puts quite a load on graphics processors and the CPU which generates waste heat that needs to be dissipated. I have a notebook for which I now need to use an auxiliary keyboard because the heat from rendering burned out the switches for several keys on the upper part of the keyboard (and this was with Intel Integrated graphics, not a dedicated GPU).

    Tablets only employ passive, not active cooling

    Yes, but you are talking about today and today's technology. I am talking about tomorrow's tech and the direction it has already been going. In my home, the various tablets, phones, and laptops give me as much compute capability at less than 500 W as about 50 SPARC workstations did in 1998, when the equivalent capability drew thousands of watts and created enough heat to make a sound-insulated 20'x40'x15' room about 110 degrees. Those are big power-efficiency improvement trends. I replaced my furnace 3 years ago for efficiency reasons too.

  • kyoto kid Posts: 41,857

    ..it doesn't matter. Intense calculations like rendering (particularly photo-real rendering) will require a high level of power if the task is to be performed in a reasonable amount of time.  Take a 1080 Ti, which consumes 250 W at peak output. Even in a standard PC "box" you still want liquid cooling to mitigate heat production, unless you have at least a half dozen fans or more to promote decent airflow.  Such capability is beyond the tightly closed architecture of a tablet, which doesn't even have a single active fan like a notebook has.

    Electronics produce heat.  Making electronics perform highly intensive work in a short amount of time forces them to create far more heat. Unless there is an unforeseen breakthrough in processor chip design (or advanced closed loop cryogenics is employed), this just isn't going to happen.

  • nonesuch00 Posts: 18,729
    kyoto kid said:

    ..it doesn't matter. Intense calculations like rendering (particularly photo-real rendering) will require a high level of power if the task is to be performed in a reasonable amount of time.  Take a 1080 Ti, which consumes 250 W at peak output. Even in a standard PC "box" you still want liquid cooling to mitigate heat production, unless you have at least a half dozen fans or more to promote decent airflow.  Such capability is beyond the tightly closed architecture of a tablet, which doesn't even have a single active fan like a notebook has.

    Electronics produce heat.  Making electronics perform highly intensive work in a short amount of time forces them to create far more heat. Unless there is an unforeseen breakthrough in processor chip design (or advanced closed loop cryogenics is employed), this just isn't going to happen.

    Oh yes it does matter; a calculation that took 5 W of power in the past but takes only 0.0000000001 W of power today (and less in the future) is using less power and generating less heat. There is no way around it.

  • hphoenix Posts: 1,335
    kyoto kid said:

    ..it doesn't matter. Intense calculations like rendering (particularly photo-real rendering) will require a high level of power if the task is to be performed in a reasonable amount of time.  Take a 1080 Ti, which consumes 250 W at peak output. Even in a standard PC "box" you still want liquid cooling to mitigate heat production, unless you have at least a half dozen fans or more to promote decent airflow.  Such capability is beyond the tightly closed architecture of a tablet, which doesn't even have a single active fan like a notebook has.

    Electronics produce heat.  Making electronics perform highly intensive work in a short amount of time forces them to create far more heat. Unless there is an unforeseen breakthrough in processor chip design (or advanced closed loop cryogenics is employed), this just isn't going to happen.

    Electronics don't produce heat per-se.  Work produces heat.  In Electronics, Work is a question of Power and time.  And power is (basically, though it's more complex than this) volts x amps (though you can use Ohm's Law to rearrange a bit to get power = amps² x resistance, and so on.)

    So why does reducing the process size in a chip help with power?  Simple.  The shorter the distance between gates/junctions, the lower the resistance.  Using more complex silicon doping and such can reduce the active voltage range.  So instead of TTL (5v), or CMOS (5v/3v) or such, we now have gates that operate on 1.35v logic.  Reducing the voltage and the current requirements together, produces a geometrically smaller power consumption PER GATE.  Now, if you increase the number of gates on the chip, you see reduced benefits (power-wise) since more gates equals more power consumption.

     

    kyoto kid said:

    ..it doesn't matter. Intense calculations like rendering (particularly photo-real rendering) will require a high level of power if the task is to be performed in a reasonable amount of time.  Take a 1080 Ti, which consumes 250 W at peak output. Even in a standard PC "box" you still want liquid cooling to mitigate heat production, unless you have at least a half dozen fans or more to promote decent airflow.  Such capability is beyond the tightly closed architecture of a tablet, which doesn't even have a single active fan like a notebook has.

    Electronics produce heat.  Making electronics perform highly intensive work in a short amount of time forces them to create far more heat. Unless there is an unforeseen breakthrough in processor chip design (or advanced closed loop cryogenics is employed), this just isn't going to happen.

    Oh yes it does matter; a calculation that took 5 W of power in the past but takes only 0.0000000001 W of power today (and less in the future) is using less power and generating less heat. There is no way around it.

    Unfortunately, there has NOT been that much of a gain in power reductions.  And calculations would be rated in Work, not power.  So your statement is a massive exaggeration (unless you are counting back to Univac/Eniac days and vacuum tubes, and even then I think it's off by a few orders of magnitude).

    The biggest power consumption in transistor gate digital logic circuits is when states CHANGE.  It's a very sudden shift that requires a LOT of power (relatively speaking) compared to simply maintaining their existing state.  And the faster it has to change those levels (i.e., clock speed dependent) the more power it takes, and the more heat is generated.

    The real problem isn't that there's too much heat or it's generated too quickly.  The real problem is that the heat that is generated is being concentrated into a smaller and smaller area.  So we need cooling that can cool even a tiny fraction of its total active area very quickly.  Water cooling is much better at this than air-cooling systems.  But even that has limits.  But we're a ways from those yet.
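    To put very rough numbers on that, here is a quick back-of-the-envelope sketch of the standard CMOS dynamic-power relation (P ≈ a·C·V²·f). The activity factor, capacitance, and clock values below are made up purely for illustration; the point is just how strongly the V² term rewards dropping from 5 V logic to 1.35 V logic:

        # Rough sketch of the CMOS dynamic (switching) power model.
        # All numbers are illustrative, not measurements of any real chip.

        def dynamic_power(activity, capacitance_f, voltage_v, frequency_hz):
            """Approximate switching power in watts: P ~= a * C * V^2 * f."""
            return activity * capacitance_f * voltage_v ** 2 * frequency_hz

        old_gate = dynamic_power(0.2, 1e-9, 5.0, 500e6)    # ~2.5 W
        new_gate = dynamic_power(0.2, 1e-9, 1.35, 500e6)   # ~0.18 W

        print(f"5.0 V: {old_gate:.2f} W  1.35 V: {new_gate:.2f} W  "
              f"ratio ~{old_gate / new_gate:.0f}x")

    With those made-up values the lower-voltage gate burns roughly (5/1.35)² ≈ 14x less switching power at the same clock, which is the effect described above, before the savings get spent on more gates.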

  • tj_1ca9500b Posts: 2,057
    edited May 2018

    Just want to point out that my HP 13.3" x360 2-in-1 laptop has a fan.  I rendered with it for several months (CPU only, it had integrated AMD graphics) without issue.

    Sure, it got a little toasty at times on the underside, but I simply elevated it slightly (with two sticks of wood) to make sure the cooling slots weren't blocked, and also didn't put it on my thighs directly.  It wasn't unbearably hot, just detectably hot.  Say 65-70°C on the CPU temps.

    As a tablet/laptop hybrid, I got a lot of use out of it before I moved on to this 18.1" beast I have now, and I still use it occasionally.  I like having an actual keyboard, as opposed to typing on the touchscreen, while still having tablet functionality (by folding the clamshell completely open so that the keyboard folds against the back of the screen).  Plus having a slightly larger screen (13.3") than the iPad I was initially using was quite nice.  Especially for these older eyes.  13.3" is a very nice size for a tablet.

    So yeah, some tablets, with more powerful CPUs, DO come with fans.  Not many of them, but they do exist.  And the 2-in-1's often have fans.

    Y'all have the wattage/TDP discussion going quite well on your own.  I'll just add that you can get a lot more done with the same wattage these days on a CPU than you used to be able to.  And they do design low power CPUs.  Sure, these CPUs are 'throttled' to run slower, and hence generate much less heat, but my point is it's not an apples to apples comparison.  And NVidia does make low power GPUs as well.  Again, slower due to throttling/design/etc., but they do exist.

    And, as hphoenix pointed out, power gating is much more sophisticated now than it used to be.  There are a buttload of sensors in the latest CPUs (many more than in previous generations) that monitor heat, usage, etc. and throttle as necessary to minimize wasting power, and also to prevent overheating.  Ryzen's base clock vs. boost clock, and the fact that it throttles back from the max boost clock at some point under load (unless you make changes in the BIOS, etc.), is a good example of this.

    As for TDP envelopes, a 5W CPU, put simply, is going to generate a lot less heat when under load than a 95W one; it's simple physics, people!  It'll also be a lot slower (assuming the same CPU design family/generation), but it'll get the job done eventually...
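    To make the "simple physics" concrete: essentially all of the electrical power a CPU draws ends up as heat, so total heat is just power times time. The render length below is an arbitrary example:

        # Heat produced = electrical power drawn x time (render length is made up).
        RENDER_HOURS = 2.0
        for tdp_w in (5, 95):
            joules = tdp_w * RENDER_HOURS * 3600
            print(f"{tdp_w:>2} W CPU over {RENDER_HOURS} h -> {joules / 1000:.0f} kJ of heat "
                  f"(~{joules / 1055:.0f} BTU)")

    The 5 W part produces heat at a far lower rate, which is what matters for cooling, even if the slower chip takes longer overall to finish the same job.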

  • nonesuch00 Posts: 18,729
    hphoenix said:
    kyoto kid said:

    ..it doesn't matter. Intense calculations like rendering (particularly photo-real rendering) will require a high level of power if the task is to be performed in a reasonable amount of time.  Take a 1080 Ti, which consumes 250 W at peak output. Even in a standard PC "box" you still want liquid cooling to mitigate heat production, unless you have at least a half dozen fans or more to promote decent airflow.  Such capability is beyond the tightly closed architecture of a tablet, which doesn't even have a single active fan like a notebook has.

    Electronics produce heat.  Making electronics perform highly intensive work in a short amount of time forces them to create far more heat. Unless there is an unforeseen breakthrough in processor chip design (or advanced closed loop cryogenics is employed), this just isn't going to happen.

    Electronics don't produce heat per-se.  Work produces heat.  In Electronics, Work is a question of Power and time.  And power is (basically, though it's more complex than this) volts x amps (though you can use Ohm's Law to rearrange a bit to get power = amps² x resistance, and so on.)

    So why does reducing the process size in a chip help with power?  Simple.  The shorter the distance between gates/junctions, the lower the resistance.  Using more complex silicon doping and such can reduce the active voltage range.  So instead of TTL (5v), or CMOS (5v/3v) or such, we now have gates that operate on 1.35v logic.  Reducing the voltage and the current requirements together, produces a geometrically smaller power consumption PER GATE.  Now, if you increase the number of gates on the chip, you see reduced benefits (power-wise) since more gates equals more power consumption.

     

    kyoto kid said:

    ..it doesn't matter. Intense calculations like rendering (particularly photo-real rendering) will require a high level of power if the task is to be performed in a reasonable amount of time.  Take a 1080 Ti, which consumes 250 W at peak output. Even in a standard PC "box" you still want liquid cooling to mitigate heat production, unless you have at least a half dozen fans or more to promote decent airflow.  Such capability is beyond the tightly closed architecture of a tablet, which doesn't even have a single active fan like a notebook has.

    Electronics produce heat.  Making electronics perform highly intensive work in a short amount of time forces them to create far more heat. Unless there is an unforeseen breakthrough in processor chip design (or advanced closed loop cryogenics is employed), this just isn't going to happen.

    Oh yes it does matter; a calculation that took 5 W of power in the past but takes only 0.0000000001 W of power today (and less in the future) is using less power and generating less heat. There is no way around it.

    Unfortunately, there has NOT been that much of a gain in power reductions.  And calculations would be rated in Work, not power.  So your statement is a massive exaggeration (unless you are counting back to Univac/Eniac days and vacuum tubes, and even then I think it's off by a few orders of magnitude).

    The biggest power consumption in transistor gate digital logic circuits is when states CHANGE.  It's a very sudden shift that requires a LOT of power (relatively speaking) compared to simply maintaining their existing state.  And the faster it has to change those levels (i.e., clock speed dependent) the more power it takes, and the more heat is generated.

    The real problem isn't that there's too much heat or it's generated too quickly.  The real problem is that the heat that is generated is being concentrated into a smaller and smaller area.  So we need cooling that can cool even a tiny fraction of its total active area very quickly.  Water cooling is much better at this than air-cooling systems.  But even that has limits.  But we're a ways from those yet.

    I enjoyed your write-up.

    Yes, I know I exaggerated (maybe not really - vacuum tube computers used massive amounts of power and had high failure rates), but I wanted to highlight that less power per calculation means cooler devices. And even if the number I gave was made up as a simple example, it is not so big an exaggeration, e.g. vacuum tube computers:

    https://en.wikipedia.org/wiki/Vacuum_tube_computer

    Calculations are work, but work needs power, so you're just splitting hairs there.

    And having worked in SPARC server rooms, compared to a room full of tablets the SPARCs generated much more heat, and no, the CPU dies haven't shrunk anywhere near as much as the heat generated has been reduced.

  • kyoto kid Posts: 41,857
    edited May 2018

    ...of course, so far die stacking is how HBM memory is configured, not GPU processors.

    Adding multiple GPU processors would help with some functions (paging and frame rate for games, and floating-point performance) but would do little to assist GPU rendering, as memory size is what governs how big a scene can be rendered on the card. (Dual-GPU cards like the Titan Z were effectively 6 GB cards for rendering purposes even though they had 12 GB of memory split between the two processors.)

    Also, how would one go about cooling each processor wafer in a stack? It would require some form of exotic setup to keep one GPU from heating the ones stacked above it.  As we have seen, HBM memory is expensive, more so than standard GDDR, and not as readily available.  Considering how much more expensive cards with HBM2 are, imagine the cost of a card with stacked GPU processors.  If it is even possible, we might see it in the next generation or two of the Tesla series for use in supercomputers and datacentres, where floating-point performance and accuracy are extremely important.

  • mtl1 Posts: 1,508

    Reducing gate sizes is not straightforward. The biggest problem is that there's quantum tunneling through the gate oxide, leading to substantial leakage currents and power consumption issues.

    There's also the other problem of gate turn-on voltages being limited by the gate size, hence why non-planar transistors (i.e. FinFETs) were created to increase the effective gate size. And that's not even considering all of the other active properties of a *single* transistor.

    And, to be on topic, stacking is extremely difficult as well, as there are massive heat dissipation issues associated with computational workloads and devices -- and that's not even counting the semiconductor processes that go into making a *single* layer of a stacked device.

     

     

    ... aaaand, I probably revealed which field I work in with that post so... yeah, perhaps I'll stop.

  • Greymom Posts: 1,139

    Lot of good information here - thanks to all!

    I wonder what the lifetime of these very small scale components will be compared to the current generation?  We have to be getting close to the point where solid-phase diffusion starts to be a problem....

    The liquid nitrogen cooling system on our Dynamic Mechanical Analyzer/Spectrometer at work was a pain in the rear to use/maintain, would not look forward to having to cool my graphics cards this way. : )

  • Greymom Posts: 1,139
    mtl1 said:

    Reducing gate sizes is not straightforward. The biggest problem is that there's quantum tunneling through the gate oxide, leading to substantial leakage currents and power consumption issues.

    There's also the other problem of gate turn-on voltages being limited by the gate size, hence why non-planar transistors (i.e. FinFETs) were created to increase the effective gate size. And that's not even considering all of the other active properties of a *single* transistor.

    And, to be on topic, stacking is extremely difficult as well, as there are massive heat dissipation issues associated with computational workloads and devices -- and that's not even counting the semiconductor processes that go into making a *single* layer of a stacked device.

     

     

    ... aaaand, I probably revealed which field I work in with that post so... yeah, perhaps I'll stop.

    No need to stop, this is really interesting!   Particularly to hear from someone in the field!

  • tj_1ca9500b Posts: 2,057
    edited May 2018

    I had a very interesting conversation once with a guy who worked on satellite and space probe validation.  He mentioned that one reason NASA liked the larger nodes was that they usually translated to more 'space' between the transistors, so the chips used were less impacted by cosmic rays, etc. flipping bits.  I.e., since the transistors were farther apart, fewer of them would be affected by a given ray or particle.  And of course redundancy, but redundancy is a tradeoff when you are trying to keep weight to a minimum.

    He also mentioned that they don't shield the entire satellite equally (again due to weight constraints), and that even the structure/layout/positioning of the shielding pieces could vary its protectiveness greatly, due to 'deflected electrons' and such.  I'm not explaining it the best, but it was a rather interesting conversation.  It left me even more impressed that the Voyager probes are still chugging along after decades in a hostile cosmic ray environment.  They don't build 'em like they used to...

  • tj_1ca9500b Posts: 2,057

    This is at best considered a snapshot, but AMD hit near parity with Intel on CPU sales at a German retailer last month...

    https://wccftech.com/amd-intel-cpu-market-share-ryzen-2000-up-intel-8th-gen-dominates/

    A 47%/53% AMD/Intel CPU sales split isn't shabby at all!  It'll be interesting to see the numbers in the broader market as they eventually become available, but Ryzen is certainly getting lots of love from buyers these days!

  • nonesuch00 Posts: 18,729

    This is at best considered a snapshot, but AMD hit near parity with Intel on CPU sales at a German retailer last month...

    https://wccftech.com/amd-intel-cpu-market-share-ryzen-2000-up-intel-8th-gen-dominates/

    A 47%/53% AMD/Intel CPU sales split isn't shabby at all!  It'll be interesting to see the numbers in the broader market as they eventually become available, but Ryzen is certainly getting lots of love from buyers these days!

    Pretty interesting given that AMD CPU buyers have to buy a GPU too. 

  • tj_1ca9500b Posts: 2,057

    This is at best considered a snapshot, but AMD hit near parity with Intel on CPU sales at a German retailer last month...

    https://wccftech.com/amd-intel-cpu-market-share-ryzen-2000-up-intel-8th-gen-dominates/

    A 47%/53% AMD/Intel CPU sales split isn't shabby at all!  It'll be interesting to see the numbers in the broader market as they eventually become available, but Ryzen is certainly getting lots of love from buyers these days!

    Pretty interesting given that AMD CPU buyers have to buy a GPU too. 

    Keep in mind that the Ryzen 2200G and 2400G have a Vega GPU integrated with the processor.  They came out earlier this year and are one of the drivers of the sales spike highlighted in the article.

  • kyoto kid Posts: 41,857

    ...yeah, but it's pretty weak for rendering.

  • Artini Posts: 10,310
    edited May 2018

    Pushing GPU development further is great for rendering, but it will not speed up working in the graphics programs themselves all that much; you still need a fast CPU for that.

    I only hope that the recent advances in the development of terahertz processors go into production in the next few years, because that could be the real game changer in pushing the limits of graphics software in general. Terahertz technology promises processing around 100 times faster than today's CPUs, for both stationary and mobile devices, and since it is based on light it would not require going to sub-nanometer chip designs. I hope it happens in the not-so-distant future.

  • outrider42 Posts: 3,679
    edited May 2018
    Die stacking will create a lot of problems for heat. When we think of die shrinks as using less heat, it's always been a single-layer die. Now if you add a layer on top, all sorts of new issues can pop up. For example, how do you keep the bottom layer cool??? The layer on the bottom will almost certainly run much hotter than the top, as it will have this top layer acting as a blanket. I believe cooling these chips will be a nightmare. There is a lot of potential, and if they pull it off, processing can make a huge leap. But I predict any chip that comes to market will be vastly downclocked compared to single-layered chips because of heat.
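    As a toy illustration of that bottom-layer problem, here is a one-dimensional series thermal-resistance sketch. All of the resistances and wattages are made-up example values, not real silicon figures:

        # Toy 1-D thermal model of a two-die stack under one heatsink.
        # Heat from the bottom die must pass through the die above it
        # before it ever reaches the heatsink. Values are illustrative only.

        T_AMBIENT = 25.0     # deg C
        R_SINK = 0.20        # top die to ambient via heatsink, K/W
        R_STACK = 0.15       # bottom die through the die above it, K/W
        P_TOP = P_BOTTOM = 100.0   # watts dissipated per die

        t_single = T_AMBIENT + P_TOP * R_SINK              # lone die: ~45 C
        t_top = T_AMBIENT + (P_TOP + P_BOTTOM) * R_SINK    # stacked, top die: ~65 C
        t_bottom = t_top + P_BOTTOM * R_STACK              # stacked, bottom die: ~80 C

        print(f"single die ~{t_single:.0f} C, stacked top ~{t_top:.0f} C, "
              f"stacked bottom ~{t_bottom:.0f} C")

    Even in this crude model both stacked dies run hotter than a single die on the same cooler, and the buried die is the worst off, which is why downclocked stacked parts seem likely.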

    This in turn creates a new problem, as some applications may run worse without getting updated to use the extra cores.

    It's going to be like when multicore CPUs were created all over again. Remember that? Most people thought CPUs would just get faster and faster. 5 GHz? 6, 7, 8??? It would all happen in time, LOL. Nope. Instead the multicore evolution happened, and instead of higher frequencies, they had much lower frequencies. This wreaked havoc on programs that were not made for multiple cores. I was watching a video about the original Crysis game, which released in 2007. It was a landmark game, and it spawned its own meme: "But can it run Crysis?" Incredibly, over 10 years later Crysis can STILL bring modern hardware to its knees. Though a big reason for that is that Crysis is designed around single-core frequency and does not use multiple cores. And that's because the assumption was frequencies would only get faster. Oddly enough, a multicore version was produced... but only for consoles. The PC version never got updated for multiple cores.

    Anyway, we know why high-core-count CPUs always have slower base frequencies: they have to balance the heat they produce. That is how I see stacked dies going too. They will have to make compromises to work around the heat issue. Plus they will be difficult and costly to produce, for the simple fact that they require multiple wafers, so it will take literally double the production to meet the same quota. Just look at HBM: it is only stacked memory, and it has run into numerous production issues. That's why it's so hard to find, and why so few GPUs use it. I think stacked dies will be even more complicated and face more obstacles in development.
  • kyoto kid Posts: 41,857
    edited May 2018

    ...+1

    Daz during scene assembly, posing, dForce draping and such only uses a single CPU core. It would be nice if it used all threads, as that would help mitigate the programme "bogging down" when working on a large scene.  The only time all cores come into play is during CPU-based/assisted rendering.  My system had one of the first-generation i7s with a clock speed of 2.8 GHz/thread.  I don't overclock, as that produces more heat, so that is all I had when setting up a scene or running any process that involved the animation timeline.  It now has a 6-core 2.8 GHz Xeon, but again only one of those 12 threads is used by Daz (and, if I am not mistaken, Carrara as well) during scene production.  Not sure what is involved in updating a single-core programme to use all available cores, as I am not a software developer.
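    For what it's worth, "using all cores" generally means restructuring the work into independent tasks first, which is the hard part. Below is a tiny, generic Python sketch of that idea; it is not how Daz Studio or Carrara actually work internally, and the function name is just a placeholder for some CPU-heavy per-object step:

        # Generic illustration: the same independent tasks run serially on one
        # core, then spread across all cores with a process pool.
        from concurrent.futures import ProcessPoolExecutor

        def prepare_object(obj_id):
            # Stand-in for a CPU-heavy, independent per-object computation.
            return obj_id + sum(i * i for i in range(200_000))

        if __name__ == "__main__":
            object_ids = range(12)
            serial = [prepare_object(i) for i in object_ids]       # one core
            with ProcessPoolExecutor() as pool:                     # all cores
                parallel = list(pool.map(prepare_object, object_ids))
            assert serial == parallel

    The catch for something like scene assembly or posing is that the steps typically depend on each other, which is exactly what makes them difficult to split up like this.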

  • Karibou Posts: 1,325
    mtl1 said:

    Reducing gate sizes is not straightforward. The biggest problem is that there's quantum tunneling through the gate oxide, leading to substantial leakage currents and power consumption issues.

    There's also the other problem of gate turn-on voltages being limited by the gate size, hence why non-planar transistors (i.e. FinFETs) were created to increase the effective gate size. And that's not even considering all of the other active properties of a *single* transistor.

    And, to be on topic, stacking is extremely difficult as well, as there are massive heat dissipation issues associated with computational workloads and devices -- and that's not even counting the semiconductor processes that go into making a *single* layer of a stacked device.

     

     

    ... aaaand, I probably revealed which field I work in with that post so... yeah, perhaps I'll stop.

    I love it when I stumble upon a conversation that involves quantum tunneling.  I'm married to an electrical engineer.  Actually, now that I think about it, I'm kind-of a physics fan-girl. (phan-girl?) 

    I feel like I read something once about the potential to utilize quantum tunneling as a way of boosting processing speed. I don't remember the particulars, but I think the main hurdle was that today's chip materials are not suited to the process, which would make the technology commercially unattractive to current manufacturers. 

    Of course, I'm uncertain if I've remembered that correctly.  (Get it??  Uncertain!!)

    I'm such a dork.

  • nonesuch00 Posts: 18,729

    I have read that 5nm was about the limit for that technology, but I don't know if that is still true or not. I'm not in that field and really don't know the sizes, except that since these iCores came along they've gone from 32nm to 22nm to whatever, and I guess 7nm and 5nm are next. I do remember reading in the 90s that they might never be able to do nm sizes on those dies, so things change. And I am old enough to remember writing programs on paper with a lead pencil on bubble cards in high school, just before they went away.

  • kyoto kid Posts: 41,857
    edited May 2018

    ...hmm my old Nehalem i7 930 is listed at 45 nm in the specs.

  • nonesuch00 Posts: 18,729
    kyoto kid said:

    ...hmm my old Nehalem i7 930 is listed at 45 nm in the specs.

    And I think the process size before that was 65nm.

  • kyoto kid Posts: 41,857

    ..ah, the old LGA 775 Core2 Quads.

  • tj_1ca9500b Posts: 2,057
    edited May 2018

    My K6-III came from a 250nm node, and my K6-III+ came from a 180nm node, and they both had just over 21 million transistors.  Sure, that was just shy of two decades ago, but it is worth noting.  The node size has shrunk by a factor of 20 since 1999 (12nm, not counting 10 or 7nm yet, as those aren't fully up to speed)...  BTW, an 8-core Ryzen's transistor count is 4.8 billion, or about 225x that of the K6-III.

    BTW, the max TDP for a K6-III was just under 30 watts, which is about 1/2 (65W) to 1/3 (95W) of where Ryzen is right now.  So they've figured out how to run 225 times the transistors on just double the power.  That's roughly a 100x increase in power efficiency, and that's before you start comparing clock speeds, which multiplies that efficiency by a lot more (roughly up to 800x, assuming 500 MHz to 4.0 GHz).
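    Quick sanity check on that arithmetic, using the figures quoted above (about 21 million transistors at ~30 W for the K6-III versus 4.8 billion at 65-95 W for 8-core Ryzen):

        # Rough transistors-per-watt comparison from the numbers in this post.
        K6_TRANSISTORS, K6_TDP_W = 21e6, 30
        RYZEN_TRANSISTORS = 4.8e9

        for ryzen_tdp_w in (65, 95):
            t_ratio = RYZEN_TRANSISTORS / K6_TRANSISTORS   # ~229x the transistors
            p_ratio = ryzen_tdp_w / K6_TDP_W               # ~2.2x or ~3.2x the power
            print(f"at {ryzen_tdp_w} W: {t_ratio:.0f}x transistors on "
                  f"{p_ratio:.1f}x power -> ~{t_ratio / p_ratio:.0f}x per watt")

    So "roughly 100x" holds at the 65 W TDP (about 106x), and it's still around 70x at 95 W, before any clock-speed comparison.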

    The fact that a CPU now has billions of transistors, or even the millions of transistors they had at the turn of the millennium, still blows my mind.  That is so far past what a human being could build by hand; yeah, we've become VERY reliant on automated technology to make all of this possible.  Heck, even drawing up the 'blueprints' for that many transistors is another exercise in having the software map everything out precisely, transistor by transistor.  Yeah, it's amazing that CPUs work so flawlessly with that many subcomponents/transistors.  And CPUs are very tiny, which amazes me even more (that little square has BILLIONS of transistors?).  I'm sure you older guys who remember working with vacuum tubes are even more amazed than I am.  The only vacuum tubes I ever tinkered with were for an old TV, but I wasn't able to revive it.  That and older Marshall amps.

    I remember reading around a decade ago that they were thinking they'd hit some hard limits at just below 10nm, so the fact that they've apparently figured out 5nm is noteworthy. Them engineers are pretty smart, being able to innovate us down to such absolutely tiny nodes...

     

  • Karibou Posts: 1,325

    According to Wikipedia, transistors smaller than 7 nm will experience quantum tunnelling through the gate oxide layer.  5nm seems to be the hard limit for current materials.  And, with a little memory-prodding from my husband, I dug up info on the transistor I'd read about.  It's called a Tunnel field-effect transistor (TFET) and is structurally similar to a MOSFET, but utilizes quantum tunneling -- instead of controlling movement of electrons by raising/lowering an energy barrier, it encourages tunneling through the barrier.  As I said earlier, the challenge in developing them appears to be finding appropriate materials.

    Totally OT, but kind-of cool... I've been reading up on quantum tunneling because there's now hot debate on whether biological enzymes utilize QT in catalyzing reactions much faster than traditional models would predict. There's also research being done on whether QT is responsible for small numbers of DNA mutations, when hydrogen atoms tunnel/shift from one base to another during replication. 
