OT Update 2: Nvidia & AMD about to lose a lot of sales from cryptominers?


Comments

  • kyoto kid Posts: 41,851

    ...so how? Tesla cards are made for raw computing, not graphics production.  I know they can assist rendering with a Quadro (not a GTX series) adding the extra cores for speed as they use the same driver set.

  • Gazukull Posts: 96
    edited March 2018

    Ideally your render cards are connected to nothing, as they can concentrate on rendering.  No memory taken up / no resources spent on Windows.

    EDIT:  Didn't see the other question.  CUDA cores are CUDA cores.  You cannot mix Quadro and GTX, but you can mix Tesla with GTX in terms of drivers.  So the little Tesla just sits in there and COOKS renders... and is a super inefficient space heater.

    EDIT AGAIN:

    Here you can see my monitors are connected to the motherboard instead of the TITAN Xs, leaving them unmolested to do their work.  Also a second PSU sitting on the table to power the TITANs... ELEGANT SOLUTIONS.  Well, one of them anyway...

    In the system with the little poop Tesla M2090, the AMD Fire Pro pushes the monitors and the Tesla just renders.  

  • Greymom Posts: 1,139
    Greymom said:
    Oso3D said:

    Outside NVIDIA headquarters...

     

    Yes, soon it will be torches and pitchforks!

     

    Ok, been checking the NVIDIA site frequently since the wee hours, wasting much of the day.  Titan Xps in stock as before, but no 1080s, 1070s, or Bigfoot have been sighted in stock.  A cruel tease?

    Time for torches and pitchforks!  Watch the video with the sound off and add your own comments about cryptominers and NVIDIA marketing : )

    https://www.youtube.com/watch?v=qLvGnro4Cgw

     

  • adacey Posts: 186
    kyoto kid said:

    ...so how? Tesla cards are made for raw computing, not graphics production.  I know they can assist rendering with a Quadro (not a GTX series) adding the extra cores for speed as they use the same driver set.

    Basically, the GPU isn't literally drawing the render in the render window. Forget that it's even called a GPU for a moment and instead just think of it as "specialized hardware that's good at doing the calculations that Iray needs". DAZ sees that this specialized hardware is available, sends the scene to the card(s), and your CPU periodically gets the results back. There are a few separate threads on here talking about CPU load and how the scene gets assembled when there are multiple GPUs involved, but essentially you could just use the onboard graphics on your motherboard or another less powerful video card. When they say there's no video output, that doesn't mean the card isn't suitable for calculating graphics; it just means there are literally no monitor connectors on the card, so you can't plug your monitor into it. So just happily run your monitor from another card.
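
    For anyone who wants to double-check what Iray-capable hardware the driver actually exposes, regardless of which card (or onboard chip) is driving the monitors, here is a minimal sketch, assuming the NVIDIA driver and its nvidia-smi tool are installed and on the PATH. A headless Tesla with no monitor connectors still shows up in this list and can still be used as a pure render device:

        # Minimal sketch: list every NVIDIA GPU the driver exposes, whether or
        # not a monitor is plugged into it. Assumes the NVIDIA driver and its
        # nvidia-smi tool are installed and on the PATH.
        import subprocess

        def list_nvidia_gpus():
            # Ask the driver for index, name and total memory in CSV form.
            out = subprocess.run(
                ["nvidia-smi",
                 "--query-gpu=index,name,memory.total",
                 "--format=csv,noheader"],
                capture_output=True, text=True, check=True)
            gpus = []
            for line in out.stdout.strip().splitlines():
                idx, name, mem = [field.strip() for field in line.split(",")]
                gpus.append((int(idx), name, mem))
            return gpus

        if __name__ == "__main__":
            for idx, name, mem in list_nvidia_gpus():
                # A display-less Tesla is listed here just like a GeForce card.
                print(f"GPU {idx}: {name} ({mem})")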

  • kyoto kid Posts: 41,851
    Gazukull said:

    Ideally your render cards are connected to nothing, as they can concentrate on rendering.  No memory taken up / no resources spent on Windows.

    EDIT:  Didn't see the other question.  CUDA cores are CUDA cores.  You cannot mix Quadro and GTX, but you can mix Tesla with GTX in terms of drivers.  So the little Tesla just sits in there and COOKS renders... and is a super inefficient space heater.

    EDIT AGAIN:

    Here you can see my monitors are connected to the motherboard instead of the TITAN Xs, leaving them unmolested to do their work.  Also a second PSU sitting on the table to power the TITANs... ELEGANT SOLUTIONS.  Well, one of them anyway...

    In the system with the little poop Tesla M2090, the AMD Fire Pro pushes the monitors and the Tesla just renders.  

    ...as I read, it was the other way around: Teslas were compatible with Quadros (and often used in combination to help with computational power) but not GTX cards. Also, if the displays are patched into the MB, wouldn't you just be using the board's onboard integrated graphics?

  • kyoto kid Posts: 41,851
    edited March 2018
    adacey said:
    kyoto kid said:

    ...so how? Tesla cards are made for raw computing, not graphics production.  I know they can assist rendering with a Quadro (not a GTX series) adding the extra cores for speed as they use the same driver set.

    Basically, the GPU isn't literally drawing the render in the render window. Forget that it's even called a GPU for a moment and instead just think of it as "specialized hardware that's good at doing the calculations that Iray needs". DAZ sees that this specialized hardware is available, sends the scene to the card(s), and your CPU periodically gets the results back. There are a few separate threads on here talking about CPU load and how the scene gets assembled when there are multiple GPUs involved, but essentially you could just use the onboard graphics on your motherboard or another less powerful video card. When they say there's no video output, that doesn't mean the card isn't suitable for calculating graphics; it just means there are literally no monitor connectors on the card, so you can't plug your monitor into it. So just happily run your monitor from another card.

    ...interesting. 

    Well I'm currently considering running the displays off of the original GT 460 and the rendering off a 4 GB 750 Ti (hopefully Octane 4 will be out in the next couple months).  No way will I risk cooking either of my systems with an old fanless Tesla particularly not for just an additional 512 cores.  Now if one could build an external box for them with its own cooling and PSU, that would be different.

    So could, say, a single 8 GB K10 (3,072 cores) perform rendering without a GTX or Quadro GPU on the board?

     

  • adacey Posts: 186

    Take it one step further, if this helps clarify things: if you have Iray Server, you can have a machine with no GPUs whatsoever taking advantage of the GPUs on another machine on the network. I *think* those have to be Quadros or Teslas, not GTX, but that's beside the point. You'll still get updates back to your render window on your machine, even though none of its own GPUs are participating in the render (say, it's just using onboard Intel graphics, or your main machine has an AMD GPU). Your CPU is acting as a render manager; it's looking at what resources it has available. You need to be able to load the scene into the GPU's memory so that it's not constantly going back and forth to the CPU to ask for the data it needs. But forget for a minute that you're dealing with graphics: you're really doing a ton of math calculations, which the GPUs happen to be very good at. The results of those calculations are periodically sent back to the CPU (if you watch the history messages you'll see information about when the canvas is updated, and if memory serves there are even some render settings for how often the canvas will refresh). So basically, your CPU is getting a bunch of data back from the GPUs, e.g., the results of one or more render passes for the scene, and combines those results with the results it already has. Those combined results end up getting sent to whatever video card or onboard graphics you're using to drive your monitor.
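
    To make the "CPU combines the pass results" idea concrete, here is a toy sketch in plain Python/NumPy (emphatically not Iray's actual API): each "pass" stands in for one noisy sample of the image coming back from a worker, and the manager just keeps a running average that it hands to whatever is drawing the render window.

        # Toy illustration only, not Iray's API: each "pass" stands in for one
        # noisy sample of the image returned by a worker; the manager averages
        # the passes as they arrive, which is the essence of progressive
        # refinement in a path tracer.
        import numpy as np

        def render_pass(width, height, seed):
            # Stand-in for a GPU returning one noisy render pass.
            rng = np.random.default_rng(seed)
            return rng.random((height, width, 3), dtype=np.float32)

        def progressive_render(width, height, num_passes, on_update=None):
            accum = np.zeros((height, width, 3), dtype=np.float32)
            for i in range(1, num_passes + 1):
                accum += render_pass(width, height, seed=i)
                if on_update and i % 10 == 0:
                    # Running average = the current partially converged image;
                    # this is the moment the render window would get refreshed.
                    on_update(accum / i, i)
            return accum / num_passes

        image = progressive_render(320, 180, num_passes=100,
                                   on_update=lambda img, i: print(f"pass {i} done"))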

    I think where you're getting confused is with video games and realtime rendering. In that case, you really do need the graphics card to drive the monitor, since you're trying to push out the rendering results in real time, frequently at 60 FPS or higher, so you really can't have the GPU send the results back to the CPU for assembly.

  • Gazukull Posts: 96
    edited March 2018

    ...as I read, it was the other way around: Teslas were compatible with Quadros (and often used in combination to help with computational power) but not GTX cards. Also, if the displays are patched into the MB, wouldn't you just be using the board's onboard integrated graphics?

    Well yeah.  We are not playing Crysis here, we are working in DS.  Intel G3100 would be fine.  Iris Pro 6200 is overkill.  If you have the option to run DS off your integrated graphics and are not... I got nothing for you.  I sorta understand when people game with the same system.  I found it much easier to just build a different box for gaming.

    So could, say, a single 8 GB K10 (3,072 cores) perform rendering without a GTX or Quadro GPU on the board?

    Totes.

    It has been my experience that driving the monitors with something other than Nvidia is preferable.  If you are using an Nvidia card to drive your monitors and the drivers crash, you just killed whatever is rendering too (not that that kind of thing happens too often, but it can).  I like AMD FirePro personally; if you have a driver issue, your render tasks just keep happily plugging away.

    So this is an example of one setup.

    Or using your K10 example.  

  • bluejaunte Posts: 1,990

    In my own tests with dual 1080 Tis, where one has 3 monitors connected to it and the other none, I could not see a difference in rendering speed between the two. In fact, the one without monitors is even a bit slower, probably because it is clocked a little lower.

  • kyoto kid Posts: 41,851

    ...I have a P6T MB with no onboard graphics, so I need to use the 460 for the displays, which leaves 1 slot (as a K10 is a dual-width card).

  • Greymom Posts: 1,139
    kyoto kid said:
    adacey said:
    kyoto kid said:

    ...so how? Tesla cards are made for raw computing, not graphics production.  I know they can assist rendering with a Quadro (not a GTX series) adding the extra cores for speed as they use the same driver set.

    Basically, the GPU isn't literally drawing the render in the render window. Forget that it's even called a GPU for a moment and instead just think of it as "specialized hardware that's good at doing the calculations that Iray needs". DAZ sees that this specialized hardware is available, sends the scene to the card(s), and your CPU periodically gets the results back. There are a few separate threads on here talking about CPU load and how the scene gets assembled when there are multiple GPUs involved, but essentially you could just use the onboard graphics on your motherboard or another less powerful video card. When they say there's no video output, that doesn't mean the card isn't suitable for calculating graphics; it just means there are literally no monitor connectors on the card, so you can't plug your monitor into it. So just happily run your monitor from another card.

    ...interesting. 

    Well I'm currently considering running the displays off of the original GT 460 and the rendering off a 4 GB 750 Ti (hopefully Octane 4 will be out in the next couple months).  No way will I risk cooking either of my systems with an old fanless Tesla particularly not for just an additional 512 cores.  Now if one could build an external box for them with its own cooling and PSU, that would be different.

    So could, say, a single 8 GB K10 (3,072 cores) perform rendering without a GTX or Quadro GPU on the board?

     

    Hmmm... I have seen K10s on eBay for as little as ~$220.  The card has two GPUs, so each manages 4 GB; I thought at first it had one GPU with 8 GB.  You can also find a used GTX 970 4GB for a similar price, and it has a slightly better performance rating (for games anyway) than the K10, and should use less power.  Both have built-in cooling.  The features of Octane 4 open up some more possibilities...

  • kyoto kid Posts: 41,851
    edited March 2018

    ...didn't realise it was a dual GPU.  Looking for some way to get that 8 GB without paying $700-900.  Still, 3,072 cores added to the 640 I already have would be nothing to sneeze at; that's about as many as a Titan Xp.  Even with out-of-core rendering in Octane 4 it would rip.

    If my disability case had been resolved 6 months earlier, I'd probably have a 1070 Ti.

  • Greymom Posts: 1,139
    edited March 2018
    kyoto kid said:

    ...didn't realise it was a dual GPU.  Looking for some way to get that 8 GB without paying $700-900.  Still, 3,072 cores added to the 640 I already have would be nothing to sneeze at; that's about as many as a Titan Xp.  Even with out-of-core rendering in Octane 4 it would rip.

    Yeah, me too, darn it!  But as you say, could be great with Octane 4.

    The K40 (single GPU, 12 GB) is going for $1,100 and up, and the K80 (two GPUs, 24 GB) is going for $1,400 and up, used on eBay.  Not such a great deal for the K40 compared to the 11 GB 1080 Ti at its original price ($700); the K80 would be better if it were a single GPU with 24 GB.  If we see a crypto crash, these prices should drop too.

    There are ads for an NVIDIA GRID M40 with 16 GB for $650, but that's not the same card.  It is really 4 GPUs with 4 GB each, basically a double K10 for more money than two K10s.

  • nicstt Posts: 11,715

    Prices have come down a little, and there seem to be a few available.

    ... To me, that is a reason to hold off buying. New tech is rumoured to be available soon, so either the new cards will be tempting or the deals on the old stuff will be; either way I can wait. I like my cash where it is.

  • tj_1ca9500b Posts: 2,057

    I'm guessing this doesn't affect rendering much, but for a $3000 MSRP card this report is nonetheless noteworthy...

    WCCFTech: Nvidia Titan V Reportedly Producing Errors in Scientific Simulations

    Of course, if the reported error carries over to rendering (in the form of artifacts or some such)...  well it's worth keeping an eye on at least.

    This Register article indicates that the error was reproducible, causing errors around 10% of the time...

    The Register: 2 + 2 = 4, er, 4.1, no, 4.3... Nvidia's Titan V GPUs spit out 'wrong answers' in scientific simulations

    In a related note, there's a forum thread here about the Titan V not playing well in Daz...

    https://www.daz3d.com/forums/discussion/224441/nvidia-titan-v-iray-render-fails

     

  • kyoto kid Posts: 41,851

    ...for $3,000 I could build a dual 10-core Xeon system with 128 GB of quad-channel DDR3 memory.  Yeah, it may not be as blisteringly fast, but then I'm not stuck with only 12 GB to render very large, high-quality, high-resolution scenes in.

  • ghastlycomic Posts: 2,531

    On the bitcoin front, IBM is developing a new chipset specifically for cheaper blockchain applications, which means all these bitcoin miners will hopefully be dumping a hell of a lot of video cards on the market in the next year or so. Fingers crossed.

  • kyoto kid Posts: 41,851

    ...not sure I would want to buy one considering the punishment many have gone through.

  • outrider42 Posts: 3,679
    kyoto kid said:

    The return would be owning a decent machine for Iray. <.<

    All speculation indicates that Nvidia will ask a much higher MSRP on their next gen cards, whatever they are called. (Volta, Ampere, and Turing have all been brought up as code names.)

    I noticed some comments about Intel getting into the GPU business. That is true. And the whole thing is weird. First Intel hired AMD's former head of graphics development. This was extremely unexpected, as he had just taken part in launching AMD Vega. Then Intel showed plans to include AMD graphics on Intel CPU chip designs. This was such a strange marriage that numerous brains exploded. And there is no doubt a connection between the two. Right now, it is not known if Intel is going to make fully discrete GPUs or focus on these combo dies. But I really doubt Intel would stick with their arrangement with AMD for long after bringing Raja aboard. They have plans.

    https://www.theverge.com/circuitbreaker/2018/1/7/16861164/intel-amd-core-i5-i7-processor-radeon-rx-vega-m-gpu-cpu-h-series-ces-2018

    But either way, this creates pressure on Nvidia if these perform well and are priced reasonably. They will certainly be cheaper than buying a CPU and GPU separately. And being combo chips, they should not be very desirable to miners since you can't stack them. Too bad Iray doesn't work on them.

    A number of things may go against Nvidia if they are not careful. Microsoft recently added AMD Freesync to their Xbox console lineup. This means that you can hook an Xbox to a Freesync monitor and eliminate screen tear. This gives TV makers a reason to add Freesync to units, and doing so is fairly simple and cheap. Xbox uses AMD. So does the PS4, and the PS4 is the market leader. If Sony also adds Freesync to the PS4, that will be a huge boost to AMD and Freesync. For those not versed, Freesync is better than Vsync when it comes to frame times in games. What are these? Basically, when you play a game and the system renders frames at a different pace from what is being displayed, you can get screen tearing. Vsync was created to force the system to wait for each screen refresh, at either 30 or 60 frames per second. This works to stop the screen tear, but it can create a small amount of input lag. Gamers flip out over input lag and always look to make it as small as possible. Freesync solves that problem. Nvidia has their own version of this, because of course they do, called Gsync. Freesync is open, while Gsync is exclusive to Nvidia, because of course it is, LOL. And Gsync costs a lot more to build into your monitor because Nvidia charges a high fee to license it.
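
    To put rough numbers on the Vsync-versus-adaptive-sync point, here is a toy frame-pacing sketch (plain Python, not any real graphics API): with a fixed 60 Hz Vsync a finished frame has to sit and wait for the next refresh boundary, which is exactly the added latency gamers complain about, while an adaptive-sync display just scans the frame out as soon as it is ready.

        # Toy frame-pacing sketch, not a real graphics API. It compares how long
        # a finished frame waits before reaching the screen with fixed 60 Hz
        # vsync versus an adaptive-sync display that refreshes on demand.
        REFRESH_MS = 1000.0 / 60.0   # one 60 Hz refresh interval

        def present_latencies(render_times_ms, vsync=True):
            """render_times_ms: how long the GPU took to render each frame."""
            now, latencies = 0.0, []
            for t in render_times_ms:
                done = now + t                      # frame finished rendering here
                if vsync:
                    # Wait for the next fixed refresh boundary -> added latency.
                    scanout = (int(done // REFRESH_MS) + 1) * REFRESH_MS
                else:
                    scanout = done                  # adaptive sync: show it now
                latencies.append(round(scanout - done, 2))
                now = done
            return latencies

        frames = [12.0, 19.0, 15.0, 22.0]           # uneven render times in ms
        print("vsync wait per frame   :", present_latencies(frames, vsync=True))
        print("adaptive wait per frame:", present_latencies(frames, vsync=False))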

    Anyway...having major consoles use Freesync turns the tables on Nvidia a bit. So while Nvidia enjoys a huge lead in the GPU race, they still face some big hurdles going forward that could possibly keep them in check (and most importantly, keep them from charging whatever they feel like for GPUs.) Competition is important, even if you don't use the competitor, what they do still has a big impact on what the leader does.

    ...none of these moves will help with Iray unless Nvidia changes their tack and decides to embrace OpenCL, which is what AMD uses.

    These moves are more about competition, which affects... everything. Strong competition leads to more innovation and better prices in the market. The better AMD competes with Nvidia, the better and cheaper these GPUs become. The Titan V would not cost $3,000 if AMD had real competition against it.

    Obviously there are other factors driving prices right now, but if Nvidia has no threat from AMD, they can charge whatever they want for their cards and get away with it.

    Nvidia is starting to do some shady things to bolster their position in the market. Right now there is controversy around their GPP program, which by all accounts is putting pressure on 3rd-party board makers to sign up to its terms. What happens if they refuse? They lose the benefits they used to have with Nvidia: support and, most importantly, they might not get the stock they want first to build their cards. GPP members get access to chips first. But the GPP has special guidelines that basically ask for preferential treatment for Nvidia. It's already had effects, with 2 GPP members releasing AMD GPUs without their normal gaming branding attached. It's as dirty as it sounds. Even if Nvidia gets slammed by regulatory sanctions, the damage may already be done, which is the whole point of such shady practices. I'm sure Nvidia would happily pay any fine they receive if it meant the death of AMD's GPU division. It would be worth it for them.
  • Charlie Judge Posts: 13,248

    On the bitcoin front, IBM is developing a new chipset specifically for cheaper blockchain applications, which means all these bitcoin miners will hopefully be dumping a hell of a lot of video cards on the market in the next year or so. Fingers crossed.

     

    kyoto kid said:

    ...not sure I would want to buy one considering the punishment many have gone through.

    I don't think I would want the used ones either, but a proliferation of cheap used cards could translate into lower prices for new ones.

  • kyoto kid Posts: 41,851
    kyoto kid said:

    The return would be owning a decent machine for Iray. <.<

    All speculation indicates that Nvidia will ask a much higher MSRP on their next gen cards, whatever they are called. (Volta, Ampere, and Turing have all been brought up as code names.)

    I noticed some comments about Intel getting into the GPU business. That is true. And the whole thing is weird. First Intel hired AMD's former head of graphics development. This was extremely unexpected, as he had just taken part in launching AMD Vega. Then Intel showed plans to include AMD graphics on Intel CPU chip designs. This was such a strange marriage that numerous brains exploded. And there is no doubt a connection between the two. Right now, it is not known if Intel is going to make fully discrete GPUs or focus on these combo dies. But I really doubt Intel would stick with their arrangement with AMD for long after bringing Raja aboard. They have plans.

    https://www.theverge.com/circuitbreaker/2018/1/7/16861164/intel-amd-core-i5-i7-processor-radeon-rx-vega-m-gpu-cpu-h-series-ces-2018

    But either way, this creates pressure on Nvidia if these perform well and are priced reasonably. They will certainly be cheaper than buying a CPU and GPU separately. And being combo chips, they should not be very desirable to miners since you can't stack them. Too bad Iray doesn't work on them.

    A number of things may go against Nvidia if they are not careful. Microsoft recently added AMD Freesync to their Xbox console lineup. This means that you can hook an Xbox to a Freesync monitor and eliminate screen tear. This gives TV makers a reason to add Freesync to units, and doing so is fairly simple and cheap. Xbox uses AMD. So does the PS4, and the PS4 is the market leader. If Sony also adds Freesync to the PS4, that will be a huge boost to AMD and Freesync. For those not versed, Freesync is better than Vsync when it comes to frame times in games. What are these? Basically, when you play a game and the system renders frames at a different pace from what is being displayed, you can get screen tearing. Vsync was created to force the system to wait for each screen refresh, at either 30 or 60 frames per second. This works to stop the screen tear, but it can create a small amount of input lag. Gamers flip out over input lag and always look to make it as small as possible. Freesync solves that problem. Nvidia has their own version of this, because of course they do, called Gsync. Freesync is open, while Gsync is exclusive to Nvidia, because of course it is, LOL. And Gsync costs a lot more to build into your monitor because Nvidia charges a high fee to license it.

    Anyway...having major consoles use Freesync turns the tables on Nvidia a bit. So while Nvidia enjoys a huge lead in the GPU race, they still face some big hurdles going forward that could possibly keep them in check (and most importantly, keep them from charging whatever they feel like for GPUs.) Competition is important, even if you don't use the competitor, what they do still has a big impact on what the leader does.

    ...none of these moves will help with Iray unless Nvidia changes their tack and decides to embrace OpenCL, which is what AMD uses.

     

    These moves are more about competition, which affects... everything. Strong competition leads to more innovation and better prices in the market. The better AMD competes with Nvidia, the better and cheaper these GPUs become. The Titan V would not cost $3,000 if AMD had real competition against it.

     

    Obviously there are other factors driving prices right now, but if Nvidia has no threat from AMD, they can charge whatever they want for their cards and get away with it.

     

    Nvidia is starting to do some shady things to bolster their position in the market. Right now there is controversy around their GPP program, which by all accounts is putting pressure on 3rd-party board makers to sign up to its terms. What happens if they refuse? They lose the benefits they used to have with Nvidia: support and, most importantly, they might not get the stock they want first to build their cards. GPP members get access to chips first. But the GPP has special guidelines that basically ask for preferential treatment for Nvidia. It's already had effects, with 2 GPP members releasing AMD GPUs without their normal gaming branding attached. It's as dirty as it sounds. Even if Nvidia gets slammed by regulatory sanctions, the damage may already be done, which is the whole point of such shady practices. I'm sure Nvidia would happily pay any fine they receive if it meant the death of AMD's GPU division. It would be worth it for them.

    ...as you and others here know, I tend to value VRAM more highly than boatloads of stream processors, as I create highly complex and detailed scenes (remember Alpha Channel's epic sweeping scenes? Yeah, as a former painter, that is my inspiration).  The $999 Vega Frontier with 16 GB of HBM2 would be very attractive; however, Iray is CUDA-only, and as we are just a small segment of the GPU consumer base next to gaming enthusiasts, in that light the notion of "competition" from AMD has been, and always will be, moot.  Add to this Otoy's forthcoming Octane 4, which will be available for $20 a month and offers fast out-of-core rendering that makes the need for a super-high-VRAM GPU unnecessary (and will also support both OpenCL and the Vulkan API).

    While Daz incorporating Iray seemed a great advancement back in 2015, I find it has become somewhat limited for many given what has happened with GPU pricing and availability.  If they also had a similar out-of-core render mode, the situation might be different.  3DL and Octane both have standalone solutions (3DL's is free up to 4 cores/8 threads).  Iray's standalone is not only expensive (like the full unlimited-core version of 3DL) but has no "third party" processing interface for Daz (like Renderman RIB for 3DL or Reality for LuxRender), only the high-priced pro-grade software.  Were there one for Daz, it would at least reduce the resource demand of having to keep the scene open in Daz while rendering in CPU mode. In that light it would be a bit faster, as there would be less chance of falling into much slower swap mode.

  • Kitsumo Posts: 1,221
    kyoto kid said:

    ...none of these moves will help with Iray unless Nvidia changes their tack and decides to embrace OpenCL, which is what AMD uses.

     

    These moves are more about competition, which affects... everything. Strong competition leads to more innovation and better prices in the market. The better AMD competes with Nvidia, the better and cheaper these GPUs become. The Titan V would not cost $3,000 if AMD had real competition against it.

     

    Obviously there are other factors driving prices right now, but if Nvidia has no threat from AMD, they can charge whatever they want for their cards and get away with it.

     

    Nvidia is starting to do some shady things to bolster their position in the market. Right now there is controversy around their GPP program, which by all accounts is putting pressure on 3rd-party board makers to sign up to its terms. What happens if they refuse? They lose the benefits they used to have with Nvidia: support and, most importantly, they might not get the stock they want first to build their cards. GPP members get access to chips first. But the GPP has special guidelines that basically ask for preferential treatment for Nvidia. It's already had effects, with 2 GPP members releasing AMD GPUs without their normal gaming branding attached. It's as dirty as it sounds. Even if Nvidia gets slammed by regulatory sanctions, the damage may already be done, which is the whole point of such shady practices. I'm sure Nvidia would happily pay any fine they receive if it meant the death of AMD's GPU division. It would be worth it for them.

    Nvidia right now is the cat that catches a mouse and just plays with it instead of killing it. Nvidia could easily put AMD (at least the graphics division) out of business, but that would attract attention from the FTC and antitrust regulators. That would eventually lead to fines and a possible breakup. Since the AT&T breakup, companies know exactly what they can and can't get away with. Nvidia has had AMD (ATI) under its paw since they drove 3dfx out of business. Sure, AMD may have a faster card once in a while, or gain more market share, but rest assured, Nvidia has higher profit margins and is sitting on way more cash. They just make it look like a close race to keep the FTC off their back.

    I don't really care as long as I get a decent card at a decent price. My problem is when they try to lock users into an "Nvidia only" upgrade path, with applications that only support Iray or Cuda but not OpenCL. My favorite quote from the last 10 years:

    Nvidia claims they would be happy for ATI to adopt PhysX support on Radeons. To do so would require ATI to build a CUDA driver, with the benefit that of course other CUDA apps would run on Radeons as well. ATI would also be required to license PhysX in order to hardware accelerate it, of course, but Nvidia maintains that the licensing terms are extremely reasonable—it would work out to less than pennies per GPU shipped.

    So yeah, in theory anyone, AMD or even Intel, could run CUDA code hardware-accelerated, just as long as they render unto Nvidia what is Nvidia's (sorry for the bad pun).

  • kyoto kid Posts: 41,851
    edited March 2018

    On the bitcoin front, IBM is developing a new chipset specifically for cheaper blockchain applications, which means all these bitcoin miners will hopefully be dumping a hell of a lot of video cards on the market in the next year or so. Fingers crossed.

     

    kyoto kid said:

    ...not sure I would want to buy one considering the punishment many have gone through.

    I don't think I would want the used ones either, but a proliferation of cheap used cards could translate into lower prices for new ones.

    ...but that would likely be brief, as miners would most likely turn to the newer generation cards which would offer improved performance.

  • Kitsumo Posts: 1,221
    kyoto kid said:

    On the bitcoin front, IBM is developing a new chipset specifically for cheaper blockchain applications, which means all these bitcoin miners will hopefully be dumping a hell of a lot of video cards on the market in the next year or so. Fingers crossed.

     

    kyoto kid said:

    ...not sure I would want to buy one considering the punishment many have gone through.

    I don't think I would want the used ones either, but a proliferation of cheap used cards could translate into lower prices for new ones.

    ...but that would likely be brief, as miners would most likely turn to the newer generation cards which would offer improved performance.

    That would be great. If they follow past trends, card makers would lower the MSRP for the older generation of cards... for 6 months, then discontinue them. The struggle continues.

  • Richard Haseltine Posts: 108,019
    kyoto kid said:

    Iray's standalone is not only expensive (like the full unlimited-core version of 3DL) but has no "third party" processing interface for Daz (like Renderman RIB for 3DL or Reality for LuxRender), only the high-priced pro-grade software.  Were there one for Daz, it would at least reduce the resource demand of having to keep the scene open in Daz while rendering in CPU mode. In that light it would be a bit faster, as there would be less chance of falling into much slower swap mode.

    I'm pretty sure I've reminded you in the past that DS can interface with Iray Server.

  • Silver Dolphin Posts: 1,638
    Gazukull said:

    Yeah, it just sits in there and renders.  I mean, my TITAN Xs just sit and render too; I use the integrated graphics to push the monitors.

    As for discontinuing Fermi support, well, that would be a downer.  But I will cry about it when it happens.

    For people on a budget this is a great card for Iray because of the 6 GB of VRAM. You just need a regular video card to run the monitors (this card has no video out). Yes, Nvidia stopped supporting this card for Iray, but it still works. Just make sure you get the front video card bracket to mount it in the case, and add two user-controllable 4-pin fans so you can speed them up to keep your Tesla M2090 from burning up.
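
    If you do run a passively cooled Tesla like that, it is worth keeping an eye on its temperature while it renders. Assuming the NVIDIA driver and nvidia-smi are installed, a rough sketch like this can warn you before the card cooks itself (the 85 C threshold is just an example; check your own card's limits):

        # Rough sketch: poll GPU temperatures with nvidia-smi and warn when a
        # card runs hot. Assumes the NVIDIA driver and nvidia-smi are installed;
        # the 85 C threshold is only an example value.
        import subprocess
        import time

        WARN_AT_C = 85

        def gpu_temps():
            out = subprocess.run(
                ["nvidia-smi",
                 "--query-gpu=index,name,temperature.gpu",
                 "--format=csv,noheader"],
                capture_output=True, text=True, check=True)
            for line in out.stdout.strip().splitlines():
                idx, name, temp = [field.strip() for field in line.split(",")]
                yield int(idx), name, int(temp)

        while True:
            for idx, name, temp in gpu_temps():
                warning = "  <-- running hot, check airflow!" if temp >= WARN_AT_C else ""
                print(f"GPU {idx} {name}: {temp} C{warning}")
            time.sleep(30)   # poll every 30 seconds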

  • kyoto kid Posts: 41,851

    ...Iray Server, which is a one-year licence for $300 and is designed for network rendering, requiring a separate render system. I don't see any mention in any product description of a Daz plugin, just Rhino, Maya, and C4D.

    https://www.daz3d.com/forums/discussion/comment/1649336/#Comment_1649336

  • Richard Haseltine Posts: 108,019
    kyoto kid said:

    ...Iray Server, which is a one-year licence for $300 and is designed for network rendering, requiring a separate render system. I don't see any mention in any product description of a Daz plugin, just Rhino, Maya, and C4D.

    https://www.daz3d.com/forums/discussion/comment/1649336/#Comment_1649336

    See the Advanced tab of Render Settings.

  • Kitsumo Posts: 1,221
    Gazukull said:

    Yeah, it just sits in there and renders.  I mean, my TITAN Xs just sit and render too; I use the integrated graphics to push the monitors.

    As for discontinuing Fermi support, well, that would be a downer.  But I will cry about it when it happens.

    For people on a budget this is a great card for Iray because of the 6 GB of VRAM. You just need a regular video card to run the monitors (this card has no video out). Yes, Nvidia stopped supporting this card for Iray, but it still works. Just make sure you get the front video card bracket to mount it in the case, and add two user-controllable 4-pin fans so you can speed them up to keep your Tesla M2090 from burning up.

    I'm hoping they won't disable it completely. I mean, the whole point of the CUDA core thing is that your cards can always contribute, as long as they are compliant. I hope their "discontinued support" will be the same as Microsoft not supporting Windows 98, as in "You can still use it, just don't call us if anything goes wrong." I can live with that.
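
    If you want to check where a given card sits, one way (assuming the CUDA driver and the pycuda package are installed; any CUDA device-query tool works just as well) is to ask the driver for each device's compute capability. Fermi parts like the Tesla M2090 report 2.x, which is the generation people here are talking about losing Iray support:

        # Sketch using pycuda (assumed installed) to list each CUDA device and
        # its compute capability. Fermi cards such as the Tesla M2090 report
        # compute capability 2.x.
        import pycuda.driver as cuda

        cuda.init()
        for i in range(cuda.Device.count()):
            dev = cuda.Device(i)
            major, minor = dev.compute_capability()
            print(f"Device {i}: {dev.name()} -- compute capability {major}.{minor}")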

  • kyoto kid Posts: 41,851
    kyoto kid said:

    ...Iray Server, which is a one year licence for 300$ and is designed for network rendering requiring a separate render system. I don't see any mention in any product description of a Daz plugin, just Rhino, Maya, and C4D.

    https://www.daz3d.com/forums/discussion/comment/1649336/#Comment_1649336

    See the Advanced tab of Render Settings.

    ...I have, and it requires a remote system to link to.  That still means more expense.
