OT Update 2: Nvidia & AMD about to lose a lot of sales from cryptominers?


Comments

  • tj_1ca9500b Posts: 2,057
    edited April 2018

    Cash is replenishable.  Time is not...

    Post edited by tj_1ca9500b on
  • kyoto kid Posts: 41,851

    ...same here, I'd rather spend my time concentrating on creating the scene.

  • kyoto kid Posts: 41,851

    Cash is replenishable.  Time is not...

    ...yes 

    Even when on a tight budget as I am.  Considering I am getting on in years and am diabetic, time is actually far more valuable.

  • kyoto kid Posts: 41,851

    ...well upped my bid as high as I can afford. still about 90 min to go. Probably will get outbid on this one.

  • agent unawares Posts: 3,513
    kyoto kid said:

    ...well upped my bid as high as I can afford. still about 90 min to go. Probably will get outbid on this one.

    On online auctions, you should try to snipe if you have a hard price in mind. Just wait until a minute or so before the auction is due to close and bid whatever your max price is. If you outbid someone who would have pushed their bid higher, they may not have time to respond, which they would have if you had outbid them earlier.

  • outrider42 Posts: 3,679

    I'm wondering if at this point it's better to wait and see what Volta is doing. From what it sounds like to me, Nvidia is going to include some Tensor cores on gaming GPUs. That's because Nvidia has been saying that their ray tracing will work better on Volta hardware, not just in software. DirectX 12 will support ray tracing, so most DX12-capable GPUs are going to be able to do this, but Volta will be able to ray trace faster because of Tensor cores.

    When asked about it directly, CEO Huang stated that yes, Tensor cores could improve gaming. He was noncommittal as to whether they would include them.

    If Daz Iray ever gets AI denoising, I think there is a good chance that it will get Tensor core support, too. We don't know how much of a difference Tensor cores make, but I want to wait and see. I think Nvidia is actually doing a good job innovating right now, and they are motivated. That motivation is to crush Intel, and they may very well do that one day.
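
    To make the denoising idea concrete, here is a minimal stand-in sketch (my own illustration, not Iray's actual pipeline): render with fewer iterations, then clean up the noise in a post pass. A real AI denoiser uses a trained neural network; the Gaussian filter below is only a placeholder for that step.

        # Toy "render fewer samples, then denoise" sketch (Python).
        # The Gaussian blur stands in for a trained denoiser network.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        clean = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(256, axis=0)  # fake "converged" image
        noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0.0, 1.0)   # stand-in for a low-sample render
        denoised = gaussian_filter(noisy, sigma=2.0)                           # placeholder for the AI denoise pass

        # Error vs. the "converged" image drops after the cleanup pass.
        print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())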

    Aside from all this, I think GPU prices are going to keep dropping, and eventually get to where they should be at this point in their life cycles. Pascal is almost 2 years old! The 1070 should not cost anywhere near $400 two years into its life. It should be $300 or less by now.

    nicstt said:
    kyoto kid said:

    ...buses and trams are not sexy either, but they are more efficient at moving a larger number of people in a smaller footprint and for less fuel consumption per person than the equivalent number of sleeker designed single occupant cars.

    If I could afford one, I'd still go with a Quadro just for the higher VRAM, lower power requirements for the horsepower they offer, and drivers that make them more efficient at what we do.

    Crikey even having 16 GB, let alone 32, would be a major leap for people like me who create very large involved scenes.

    Doesn't take many characters to fill up a scene.

    ... And tbh, the more tweaking I have to do to get it to fit on a card, the more annoyed I get. My time is as valuable as my cash. :) Or nearly so.

    If you optimize the scene, it will not only load faster, but render faster, too. So there are benefits to making use of optimizing techniques. Since you can save any kind of preset, it is easy to create optimized presets for your favorite models for future use. And oftentimes, rendering parts of a scene as separate renders is still faster than rendering the whole thing in one go, even counting the postwork.

    I mean, if you do create a scene that requires a Quadro to render it all at say 32 GB, I'm scared to know how long that 32 GB scene would take to render...even on the $9,000 Quadro. It would almost certainly be faster to break it down and render it in pieces instead. I'd take bets if somebody tested this idea.
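
    As a rough illustration of why optimizing pays off, here is a back-of-the-envelope sketch of my own (it assumes textures load uncompressed as 8-bit RGBA, which is only an approximation of what Iray actually allocates):

        # Rough texture-memory estimate for a scene, in MB.
        def texture_mb(width, height, channels=4, bytes_per_channel=1):
            return width * height * channels * bytes_per_channel / (1024 ** 2)

        # A single 4096x4096 map is ~64 MB; a character carrying a dozen such
        # maps is already around 0.75 GB before geometry is even counted, so
        # resizing maps that are never seen up close frees VRAM quickly.
        maps = [(4096, 4096)] * 12
        print(sum(texture_mb(w, h) for w, h in maps))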

  • tj_1ca9500b Posts: 2,057
    edited April 2018

    While 2018 isn't going to be as exciting CPU-wise as 2017 was, yeah, I'm looking forward to the new and improved graphics cards this year on the smaller nodes.  I'm really curious to see how 7nm shakes out for AMD with their next-gen graphics card (that's end of the year/next year, basically), but of course that doesn't help us Daz users...

    12nm Ryzen+ is a bit faster than first-gen Ryzen, of course, but it's just an incremental improvement.  So while this is welcome, it's not overly exciting...  Intel might be slightly more exciting with 10nm landing, but again, it feels incremental to me, at least compared to last year - the 18-core parts being one example of what I mean...

    I'm definitely curious to see where the VRAM sizes will end up on the upcoming Nvidia cards.  The $9,000 GV100 is a bit out of the 'hobbyist' range, but since 8 GB of VRAM doesn't seem to be enough for some of the pieces I'm working on (GTX 1080), where the stats, price, etc. end up on the upcoming Ti card is something I'm waiting to see.

    I can't really complain with my dual 1080's, so yeah it's a waiting game to see what's next at this point.

    Post edited by tj_1ca9500b on
  • kyoto kid Posts: 41,851

    ...from what I have been reading, Volta is going to be reserved for the top-end Quadro and Tesla lines, as well as the new Titan V (which is more of an entry-level scientific/AI card than a gaming card).

    For the consumer line we are looking at GDDR6 memory, not HBM-2, and a different GPU architecture code named "Turing". 

  • kyoto kid Posts: 41,851
    edited April 2018

    I'm wondering if at this point it's better to wait and see what Volta is doing. From what it sounds like to me, Nvidia is going to include some Tensor cores on gaming GPUs. That's because Nvidia has been saying that their ray tracing will work better on Volta hardware, not just in software. DirectX 12 will support ray tracing, so most DX12-capable GPUs are going to be able to do this, but Volta will be able to ray trace faster because of Tensor cores.

    When asked about it directly, CEO Huang stated that yes, Tensor cores could improve gaming. He was noncommittal as to whether they would include them.

    If Daz Iray ever gets AI denoising, I think there is a good chance that it will get Tensor core support, too. We don't know how much of a difference Tensor cores make, but I want to wait and see. I think Nvidia is actually doing a good job innovating right now, and they are motivated. That motivation is to crush Intel, and they may very well do that one day.

    Aside from all this, I think GPU prices are going to keep dropping, and eventually get to where they should be at this point in their life cycles. Pascal is almost 2 years old! The 1070 should not cost anywhere near $400 two years into its life. It should be $300 or less by now.

    nicstt said:
    kyoto kid said:

    ...buses and trams are not sexy either, but they are more efficient at moving a larger number of people in a smaller footprint and for less fuel consumption per person than the equivalent number of sleeker designed single occupant cars.

    If I could afford one, I'd still go with a Quadro just for the higher VRAM, lower power requirements for the horsepower they offer, and drivers that make them more efficient at what we do.

    Crikey even having 16 GB, let alone 32, would be a major leap for people like me who create very large involved scenes.

    Doesn't take many characters to fill up a scene.

    ... And tbh, the more tweaking I have to do to get it to fit on a card, the more annoyed I get. My time is as valuable as my cash. :) Or nearly so.

    If you optimize the scene, it will not only load faster, but render faster, too. So there are benefits to making use of optimizing techniques. Since you can save any kind of preset, it is easy to create optimized presets for your favorite models for future use. And oftentimes, rendering parts of a scene as separate renders is still faster than rendering the whole thing in one go, even counting the postwork.

    I mean, if you do create a scene that requires a Quadro to render it all at say 32 GB, I'm scared to know how long that 32 GB scene would take to render...even on the $9,000 Quadro. It would almost certainly be faster to break it down and render it in pieces instead. I'd take bets if somebody tested this idea.

    ...let me win a lotto first and then I can post some benchmarks.

    With 5,120 CUDA cores and 32 GB of HBM2 (faster than the P5000's GDDR5X), one GV100 packs two Quadro P5000s' worth of cores and memory into a single card. I think it would do pretty well with a large scene, particularly if your render engine has AI denoising, thanks to the 640 Tensor cores. Slap two in with the NVLink bridges and you have 64 GB and 10,240 cores. That's like having the resources of an eight-M6000 VCA for less than half the cost and with 5x the VRAM.
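
    For what it's worth, here is the back-of-the-envelope math behind that comparison (assumed published specs: Quadro GV100 = 5,120 CUDA cores, 640 Tensor cores, 32 GB HBM2; Quadro P5000 = 2,560 CUDA cores, 16 GB GDDR5X; whether a given renderer can actually pool memory over NVLink is a separate question):

        # Simple spec arithmetic for one and two Quadro GV100s (assumed specs).
        gv100_cuda, gv100_vram = 5120, 32          # cores, GB per card
        p5000_cuda, p5000_vram = 2560, 16

        pair_cuda = 2 * gv100_cuda                 # 10,240 cores with two cards
        pair_vram = 2 * gv100_vram                 # 64 GB if NVLink pools memory
        p5000_equivalent = pair_cuda / p5000_cuda  # ~4 P5000s' worth of cores
        print(pair_cuda, pair_vram, p5000_equivalent)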

    Post edited by kyoto kid on
  • Greymom Posts: 1,139
    edited April 2018
    kyoto kid said:

    I'm wondering if at this point it's better to wait and see what Volta is doing. From what it sounds like to me, Nvidia is going to include some Tensor cores on gaming GPUs. That's because Nvidia has been saying that their ray tracing will work better on Volta hardware, not just in software. DirectX 12 will support ray tracing, so most DX12-capable GPUs are going to be able to do this, but Volta will be able to ray trace faster because of Tensor cores.

    When asked about it directly, CEO Huang stated that yes, Tensor cores could improve gaming. He was noncommittal as to whether they would include them.

    If Daz Iray ever gets AI denoising, I think there is a good chance that it will get Tensor core support, too. We don't know how much of a difference Tensor cores make, but I want to wait and see. I think Nvidia is actually doing a good job innovating right now, and they are motivated. That motivation is to crush Intel, and they may very well do that one day.

    Aside from all this, I think GPU prices are going to keep dropping, and eventually get to where they should be at this point in their life cycles. Pascal is almost 2 years old! The 1070 should not cost anywhere near $400 two years into its life. It should be $300 or less by now.

    nicstt said:
    kyoto kid said:

    ...buses and trams are not sexy either, but they are more efficient at moving a larger number of people in a smaller footprint and for less fuel consumption per person than the equivalent number of sleeker designed single occupant cars.

    If I could afford one, I'd still go with a Quadro just for the higher VRAM, lower power requirements for the horsepower they offer, and drivers that make them more efficient at what we do.

    Crikey even having 16 GB, let alone 32, would be a major leap for people like me who create very large involved scenes.

    Doesn't take many characters to fill up a scene.

    ... And tbh, the more tweaking I have to do to get it to fit on a card, the more annoyed I get. My time is as valuable as my cash. :) Or nearly so.

    If you optimize the scene, it will not only load faster, but render faster, too. So there are benefits to making use of optimizing techniques. Since you can save any kind of preset, it is easy to create optimized presets for your favorite models for future use. And oftentimes, rendering parts of a scene as separate renders is still faster than rendering the whole thing in one go, even counting the postwork.

    I mean, if you do create a scene that requires a Quadro to render it all at say 32 GB, I'm scared to know how long that 32 GB scene would take to render...even on the $9,000 Quadro. It would almost certainly be faster to break it down and render it in pieces instead. I'd take bets if somebody tested this idea.

    ...let me win a lotto first and then I can post some benchmarks.

    With 5,120 CUDA cores and 32 GB of HBM2 (faster than the P5000's GDDR5X), one GV100 packs two Quadro P5000s' worth of cores and memory into a single card. I think it would do pretty well with a large scene, particularly if your render engine has AI denoising, thanks to the 640 Tensor cores. Slap two in with the NVLink bridges and you have 64 GB and 10,240 cores. That's like having the resources of an eight-M6000 VCA for less than half the cost and with 5x the VRAM.

    The new Tesla V100 with 640 tensor cores is rated at 110 teraflops for deep learning applications.   Not 3 or 8 or even 20 or so like most of the previous boards.  110!   Nine of these have about the same theoretical rating as a one-petaflop supercomputer!   Not sure how that would translate to rendering, as the TFLOPS rating for more standard calculations is much lower.   You'd have to rewrite the rendering software to use the tensor cores.
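
    A quick sanity check on those figures (a sketch using commonly quoted V100 numbers, roughly 14-16 TFLOPS FP32 and 110-125 "tensor" TFLOPS; exact values vary by board and clocks):

        # The tensor-core rating only applies to work recast as dense FP16
        # matrix math, which is why a renderer needs rewriting to benefit.
        fp32_tflops = 15.0        # standard single-precision throughput
        tensor_tflops = 110.0     # mixed-precision matrix-multiply throughput

        cards_per_petaflop = 1000 / tensor_tflops        # ~9 cards, as noted above
        best_case_speedup = tensor_tflops / fp32_tflops  # ~7x, and only for tensor-friendly work
        print(round(cards_per_petaflop, 1), round(best_case_speedup, 1))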

     

    Post edited by Greymom on
  • agent unawares Posts: 3,513
    Bitcoin spiked about $1,000 today (something like an $800 gain is holding now), with other cryptos following suit. https://cointelegraph.com/news/bitcoins-price-jumps-1000-in-30-minutes-of-market-growth-across-the-board
  • kyoto kid Posts: 41,851

    ...as mentioned by others, Bitcoin isn't the real worry anymore; however, ETH spiked by nearly $50 this morning as well.

    Meanwhile, on Nvidia's site, standard GTX 1070 FEs are still out of stock. Only the $1,200 Titan Xp is available, and that's $250 more than my monthly SS benefit...

  • agent unawares Posts: 3,513
    kyoto kid said:

    ..as mentioned by others, Bitcoin isn't the real worry anymore

    Bitcoin spiked about $1,000 today (something like an $800 gain is holding now), with other cryptos following suit. https://cointelegraph.com/news/bitcoins-price-jumps-1000-in-30-minutes-of-market-growth-across-the-board

    Bitcoin tends to be a gauge of the crypto market overall.

  • kyoto kid Posts: 41,851

    ...been watching auctions on eBay for a while and noticing prices slipping down a bit for used cards.  Saw a couple of 1070s that auctioned for under $400, as well as a few "Buy It Now" ones in the low $400s, though again it is hard to know whether they were used in mining rigs or not.

  • Greymom Posts: 1,139
    edited April 2018

    Prices on Newegg are slipping too.   They have a 1070ti for "just" $50 over the NVIDIA price, vs. $300-$400 higher a few weeks ago.

    I could not resist bidding on a used EVGA Titan X 12GB Hydro Copper (their top of the line Titan X) that was only $316 one minute from the end of the auction, and got the high bid at $342!.....buuuut....."RESERVE NOT MET".   So it was a hollow victory.  I really don't see the point of the "super secret minimum price I will take".  Just start the bidding there.   It looks like fewer of the higher priced auctions are resulting in sales.  So it is still a waiting game.

    "The oxen are slow, but the Earth is patient" - High Road to China (1983)

    Post edited by Greymom on
  • kyoto kid Posts: 41,851
    edited April 2018

    ...just went there and the lowest price I saw was $510 for a Zotac "mini". Even refurbished and "open box" ones are priced in the mid-$500s.

    Yeah that reserve thing is a pain.  If they set a minimum bid, that should be the minimum accepted.

    Watching a couple of 1080s right now; if they stay reasonable I may try to snag one. Yeah, I imagine a lot of people use that "autobid" feature. Reading the details, you set the maximum amount you are willing to go to; however, that can accelerate the bidding and hit your cap early, since it automatically places a counter-bid the moment someone else bids, which can prompt your competition to keep pace by outbidding you again.  Several auctions I watched were relatively quiet early on (even when down to about an hour or so); then, within the last 40 seconds or so, the furious bidding starts.
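
    For anyone curious, here is a toy model of how that kind of proxy/autobid system generally works (an illustration only, not eBay's exact rules): the site bids on your behalf up to your maximum, so the final price lands just above the second-highest maximum. Bidding early doesn't raise that final price by itself, but it advertises interest and gives rivals time to raise their maximums, which is the escalation described above; sniping denies them that time.

        # Toy eBay-style proxy auction (assumes at least two bidders):
        # everyone submits a maximum, and the highest maximum wins at one
        # increment over the runner-up, capped at the winner's own maximum.
        def proxy_auction(max_bids, increment=5):
            ranked = sorted(range(len(max_bids)), key=lambda i: max_bids[i], reverse=True)
            winner, runner_up = ranked[0], ranked[1]
            price = min(max_bids[winner], max_bids[runner_up] + increment)
            return winner, price

        print(proxy_auction([400, 380]))   # -> (0, 385): bidder 0 wins at $385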

    Post edited by kyoto kid on
  • tj_1ca9500b Posts: 2,057

    So, Nvidia's GPP program continues to generate ire... maybe this will mean better GPU availability for us due to some people jumping off the Nvidia bandwagon/boycotting the brand...

    https://wccftech.com/nvidia-gpp-opposition-grows-hp-dell-say-no-intel-mulling-legal-action/

    When you already own over three-quarters of the graphics card market, maybe you shouldn't be greedy...

  • kyoto kid Posts: 41,851
    edited April 2018

    ...easier for the gaming community to opt for a different brand, as games are not "GPU specific" the way some render engines are.

    Post edited by kyoto kid on
  • outrider42 Posts: 3,679

    So, Nvidia's GPP program continues to generate ire... maybe this will mean better GPU availability for us due to some people jumping off the Nvidia bandwagon/boycotting the brand...

    https://wccftech.com/nvidia-gpp-opposition-grows-hp-dell-say-no-intel-mulling-legal-action/

    When you already own over three-quarters of the graphics card market, maybe you shouldn't be greedy...

    The threat of so-called boycotts is laughable at best. I'll believe that when I see it. But the possible legal action is very much real, and from everything I have heard they have a legit case here.

    The boycott is laughable because of infamous instances like this:

    There are interesting caveats to this always-online era, LOL.

    The bigger threat to Nvidia is Intel's possible foray into the GPU market. They are indeed looking at building a gaming GPU. We probably won't see the results until 2020 or so. It's also probably very bad news for AMD unless they pull a Ryzen out of their hat for GPUs.

  • Kitsumo Posts: 1,221

    So, Nvidia's GPP program continues to generate ire... maybe this will mean better GPU availability for us due to some people jumping off the Nvidia bandwagon/boycotting the brand...

    https://wccftech.com/nvidia-gpp-opposition-grows-hp-dell-say-no-intel-mulling-legal-action/

    When you already own over three-quarters of the graphics card market, maybe you shouldn't be greedy...

    The threat of so-called boycotts is laughable at best. I'll believe that when I see it. But the possible legal action is very much real, and from everything I have heard they have a legit case here.

    The boycott is laughable because of infamous instances like this:

    There are interesting caveats to this always-online era, LOL.

    The bigger threat to Nvidia is Intel's possible foray into the GPU market. They are indeed looking at building a gaming GPU. We probably won't see the results until 2020 or so. It's also probably very bad news for AMD unless they pull a Ryzen out of their hat for GPUs.

    Intel could definitely make a dent in the market with this. During the early 2000s Intel had the highest market share if you count all their integrated GPUs. So if they start making real gaming GPUs and ship them in all their integrated chipsets, that would ruin Nvidia and AMD's day. The majority of gamers are still using 1920x1080 resolution, which even a modest GPU can handle nowadays, so would a casual gamer really buy a dedicated card if their onboard graphics already did the job? Plus, if Intel creates a GPU with shader units (the equivalent of CUDA cores) they could integrate those into their processors. No easier way to get a user hooked on something new than to give them a sample for free. Plus, if the blockchain madness continues, miners could start buying Intel CPUs if they have decent mining performance (a block of GPU shaders with access to all system RAM). Nvidia and AMD might just compete for the scientific and enthusiast gamer crowd while Intel would own the casual gamer market.

    AMD probably doesn't have much to worry about as far as being put out of business. If Nvidia is the only GPU manufacturer or Intel is the only CPU manufacturer, the FTC and antitrust committees are going to start gunning for them. It's in their best interest to let AMD survive, and maybe even help them occasionally.

  • kyoto kid Posts: 41,851

    ...unfortunately that would do little to help us. Iray's GPU acceleration requires CUDA, which is Nvidia-only, so on anyone else's hardware we would still be stuck with glacially slow CPU rendering.

  • Kitsumo Posts: 1,221

    I guess the best thing for us all would be to have an alternative to CUDA/Iray. OpenCL works OK, but it's effectively community property. Every company can use it, but no one wants to put any money into developing it because they can't get any money back out of it. If Intel had their own API, they could pump money into it the way Nvidia does with CUDA and build up a user base.

    It's just wishful thinking on my part. First Intel would have to make GPUs that are at least close to competing with Nvidia/AMD. Then they'd have to come up with an API like CUDA/OpenCL. Then create a renderer like Iray. Then work with (bribe) software companies to get them to integrate it into their programs. So, yeah, it's a long shot, but I can dream.

  • nicstt Posts: 11,715
    edited April 2018

    I'm wondering if at this point it's better to wait and see what Volta is doing. From what it sounds like to me, Nvidia is going to include some Tensor cores on gaming GPUs. That's because Nvidia has been saying that their ray tracing will work better on Volta hardware, not just in software. DirectX 12 will support ray tracing, so most DX12-capable GPUs are going to be able to do this, but Volta will be able to ray trace faster because of Tensor cores.

    When asked about it directly, CEO Huang stated that yes, Tensor cores could improve gaming. He was noncommittal as to whether they would include them.

    If Daz Iray ever gets AI denoising, I think there is a good chance that it will get Tensor core support, too. We don't know how much of a difference Tensor cores make, but I want to wait and see. I think Nvidia is actually doing a good job innovating right now, and they are motivated. That motivation is to crush Intel, and they may very well do that one day.

    Aside from all this, I think GPU prices are going to keep dropping, and eventually get to where they should be at this point in their life cycles. Pascal is almost 2 years old! The 1070 should not cost anywhere near $400 two years into its life. It should be $300 or less by now.

    nicstt said:
    kyoto kid said:

    ...buses and trams are not sexy either, but they are more efficient at moving a larger number of people in a smaller footprint and for less fuel consumption per person than the equivalent number of sleeker designed single occupant cars.

    If I could afford one, I'd still go with a Quadro just for the higher VRAM, lower power requirements for the horsepower they offer, and drivers that make them more efficient at what we do.

    Crikey even having 16 GB, let alone 32, would be a major leap for people like me who create very large involved scenes.

    Doesn't take many characters to fill up a scene.

    ... And tbh, the more tweaking I have to do to get it to fit on a card, the more annoyed I get. My time is as valuable as my cash. :) Or nearly so.

    If you optimize the scene, it will not only load faster, but render faster, too. So there are benefits to making use of optimizing techniques. Since you can save any kind of preset, it is easy to create optimized presets for your favorite models for future use. And oftentimes, rendering parts of a scene as separate renders is still faster than rendering the whole thing in one go, even counting the postwork.

    I mean, if you do create a scene that requires a Quadro to render it all at say 32 GB, I'm scared to know how long that 32 GB scene would take to render...even on the $9,000 Quadro. It would almost certainly be faster to break it down and render it in pieces instead. I'd take bets if somebody tested this idea.

    Agreed, there could be instances where the time taken to optimise is worth it; I have a scene that keeps dropping to CPU, so I'm going to time how long the optimising takes and factor that in if I do get it onto the card.

    Of course, time spent optimising is lost, whereas if I just hit render and it drops to CPU, I'm off doing other stuff; there could also be an increase in power used, and thus in cost, to factor in, but that is harder to calculate.

    Post edited by nicstt on
  • Charlie Judge Posts: 13,248

    EVGA is starting to show various GTX 1080ti cards as available again, albeit at about $200 more than I paid for mine a year ago.

  • Greymom Posts: 1,139

    Newegg has a Gigabyte  GTX 1080 for $549, same list price as the NVIDIA reference (still out of stock).

  • tj_1ca9500b Posts: 2,057

    I haven't been looking lately, but what I want is a single slot liquid cooled 1080 Ti.  Most/pretty much all 1080 Ti's are designed with one of the ports requiring a second slot.  You can remove it with some tools if you have mad skillz, but of course that'd void your warranty...

  • Greymom Posts: 1,139

    I haven't been looking lately, but what I want is a single slot liquid cooled 1080 Ti.  Most/pretty much all 1080 Ti's are designed with one of the ports requiring a second slot.  You can remove it with some tools if you have mad skillz, but of course that'd void your warranty...

    The EVGA Geforce GTX 1080 Ti KiNGPiN Hydro Copper was supposed to be a true single-slot water-cooled 1080ti.   I have seen pictures, but have never actually seen one for sale, even on the EVGA website.  Maybe they delayed it due to the GPU shortages.

  • tj_1ca9500b Posts: 2,057
    edited April 2018

    Yeah the EVGA Kingpin one has been pretty nonexistent since it was announced in November/December... when I've googled it, no one has had it.

    At this point, I might as well wait for the next round of Nvidia cards...  The 1180 Ti's are probably 3-5 months away at this point.

    Post edited by tj_1ca9500b on
  • tj_1ca9500b Posts: 2,057

    Not really GPU related (although if you mess with this sort of thing, yeah you could LN2 your GPU too) but it's always fun to see what can be done with the latest toys using extreme methods...

    https://wccftech.com/amd-ryzen-7-2700x-ryzen-5-2600x-5-88-ghz-ln2-oc/

    Just thought I'd share...

  • outrider42 Posts: 3,679
    kyoto kid said:

    ...unfortunately that would do little to help us. Iray's GPU acceleration requires CUDA, which is Nvidia-only, so on anyone else's hardware we would still be stuck with glacially slow CPU rendering.

    Again, you've got to think about the end game, and competition gets us there. If Intel broke into the GPU market and put a lot of pressure on Nvidia, a new price war could ensue. Intel is not the market leader here, or even in the market, so they have a lot to prove if they want to break into it.

    Competition is not just limited to hardware, either. Iray as a product is also facing competition. Even video game engines pose a threat to Iray, as game engines can now do ray tracing, and Unity is getting better every year (plus Unity has its very own asset store). Iray has a lot of work to do, and if it gets AI denoising and some form of instant option, then you may be able to do more with Iray with less hardware. Nvidia cannot stand still on Iray...and neither can Daz.
