Semi-OT - nVidia Pascal cards debut finished, GTX 1080 and GTX 1070 announced!


Comments

  • MEC4D Posts: 5,249

    @hphoenix

    Thanks for your reply , it sounds more rational , however 

    I don't agree on one point about voltage, and I have no idea where you got that info from, or whether you just assume it works that way, but it doesn't, so it needs correcting.

    The limit on a standard Titan X is 250W whether it is superclocked or not; there is only 10% of extra headroom, so the card can't draw more than 275W at its 110% power setting. If your scenario were right you would need 1300W just to run three Titan Xs, and where is the power budget for the rest of the system? I ran 2 Titan X SCs for months on only an 800W PSU without any issues, and EVGA recommends 1200W for 3 Titan Xs plus the system, so they can't consume more power than that. I went with 1300W because my CPU uses 140W and I wanted a little headroom. There are custom BIOSes that allow 1.3V, but those are mostly used for benchmarks under very good cooling conditions like liquid nitrogen, not by the regular consumer; by design the air-cooled superclocked Titan X can't go over 1.2V. I have 2 original air-cooled Titan X SCs that run at the maximum allowed 1.2V and 1 original Hydro, and there is no difference between them in voltage, power usage or memory speed. When I upgraded the air-cooled cards with the Hydro kit I asked the EVGA technician whether I had to flash the BIOS to get the best out of it because of the voltage limitation, and he said no: as long as the GPU stays cool I can turn the boost higher without worrying about voltage adjustments. So you see, no voltage adjustment is involved; on boost they usually top out at 1.16V, and I have never even seen 1.2V. The real range is 0.87V at idle to 1.18V at maximum, not 3.0V, which would be crazy for such a big GPU anyway, though there are fans who push to the limit and flash the BIOS beyond the allowed limits.

    We are talking here about TDP-limited scenarios where there is not much headroom to play with. Thanks to the very low stock voltage at the base and boost clocks, only 1.162V, there is still a lot of room to reach the maximum without any adjustment, which pushes the clock curve higher at the same voltages.

    There are also two scenarios, gaming and rendering. In games the card will run at its peak power limit; in rendering there is still at least 40% of headroom, since the Titan X never used more than 70% of its power limit while rendering, and when you use the caustic and architectural samplers the power usage drops even lower, to only 50-60%, so in that case it doesn't even need the extra juice.

    But again, a lot of your info comes from the gaming scene and not from actual use in rendering, and those are two big differences. If your facts were right I could not have run my 2 Titan X SCs on 800W for months last year; it would have been over the moment I started playing games, as I would not have had enough power left even for my CPU, not to mention the rest of the rig, if the two cards were consuming 700W at higher voltage settings. Well, I did play The Witcher, since it came free with my cards, but the voltage never went over 1.19V, no matter whether on the original Hydro SC, the Air SC, or a standard card at 1457MHz.

    So reading about it and actually seeing it in action are two different scenarios.

     

     

     

  • hphoenix Posts: 1,335
    MEC4D said:

    The limit on a standard Titan X is 250W whether it is superclocked or not; there is only 10% of extra headroom, so the card can't draw more than 275W at its 110% power setting ... the real range is 0.87V at idle to 1.18V at maximum, not 3.0V.

     

    Ah, that explains a LOT about why you disagree with my assessments in relation to the Titan-X.  I wasn't aware of that particular peculiarity with that particular model's voltage/current handling, but what you've stated here clarifies a few things.  We're BOTH a bit off.  I'll explain:

    The Titan-X evidently is designed to run at a much lower voltage at load, which means it's using current-loading to provide the boosting that forces the clocks to change faster when clocking is turned up.  While all cards do this to some extent (simply increasing the voltage will cause it, since power is volts x amps), using current-loading allows the core to run more stably at full load while keeping the voltage more consistent (gate logic is pretty tied to voltage, so modifying it too much causes things to get out of whack when you are also drawing too much power....or too little, depending on which direction it gets modified).  That's pretty intelligent, and it also helps keep the thermal design within parameters.  I dare say it enables the card to throttle more gracefully as total power draw starts to hit limits, unlike big voltage offsets.  BUT....it would also tend to throttle more often, if in smaller amounts.  It would also make the supply rails more current-limit sensitive (which is why they require a fairly high current maximum on the power supply +12V rail.)
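
    Just to put rough numbers on that (my own back-of-envelope figures, using the voltages and power limits quoted above, and treating the core rail as if it carried the whole load): with P = V x I and the core voltage held near 1.16V, the 250W limit works out to roughly 250 / 1.16 ≈ 215A, and the 110% limit (275W) to roughly 237A.  So almost the entire swing between a light load and the power cap shows up as current, not voltage.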

    Various benchmarking and testing sites have shown the typical 100% load power consumption of the baseline Titan-X at around 250W, which is consistent with the specs.  That's not TDP, that's measured line wattage, and regardless of thermal dissipation, line wattage is true power consumption.  It goes up consistently with increasing clocks.  But if throttling occurs in small amounts, we'd expect power consumption and thermal dissipation to top out at some particular value (as opposed to seeing the card stutter or get too hot).  Without increasing the voltage, though, it's harder for overclocking to get the rise-times on the clock signal to hit the marks in the period needed for a given clock speed.  (I'll assume I don't need to go into the whole instantaneous power vs rise/fall times explanation here.)  It sounds like the Titan-X was built for consistency and stability, as opposed to just raw clocking.  So while it does give some great performance (thanks to the number of cores and such), it also keeps itself very stable through several clever on-board limiting systems.

    And I didn't know that (I've never owned a Titan series card, so I've never been able to experiment with one.)  That's some good engineering, though it does limit the card to below theoretical performance limits.  But for consistent high-throughput, it's the better solution.  It also explains why you never 'see' the card hitting more than 70% - 80% of max power rating.  Such drains would be so quickly throttled as to prevent spiking of current, as well as thermal build-up.

    Gaming cards are built more on the concept of "we'll only see full load for brief periods," so they allow spiking on the assumption that any heat build-up will be dissipated during the low-load periods.  The fan will ramp up a little at first, so if the spike doesn't last long, everything levels out and nothing gets too far from base specs.  They aren't really built for long periods at full load (like in rendering), which is why one has to be more careful when using such cards for that kind of loading (using custom fan curves, or just setting the fan at a rather high fixed speed).

    Personally, I think I like the Titan-X design better.....more reliable and stable, if not able to hit the raw spike boosts that a gaming card can, and using those extra cores to compensate.

    If they DO make a Pascal version of the Titan-X using that same kind of design philosophy, it'll be one killer card......

     

  • mjc1016 Posts: 15,001
    hphoenix said:
     

    And some of the OC forums are already doing LN2 mods to the 1080.....I'm almost afraid to see what kind of numbers they are going to post when they get it up and benchmark it.  Imagine a 1080 GTX running at 3000 MHz......

    That would be interesting...

    The thing with the fan...it always seems like the first couple driver versions are needed to settle things down.  I can't remember the last time that a major Nvidia release didn't have something that was driver related.  And it seems like an easy fix...just manually control the fan.  Wow, a real hardship there...wink

  • hphoenix Posts: 1,335
    edited June 2016
    mjc1016 said:
    And it seems like an easy fix...just manually control the fan.  Wow, a real hardship there...wink

    Yeah, they always tend to release before they've really thoroughly tested.  That's the software/hardware biz.

    But it's a little more complicated than just 'manually control the fan'.  This is more a firmware issue.  It's like the card is deliberately lagging behind in ramping up the fan with increased load, and is instead only ramping up the fan based on thermal sensing.  That's dangerous, as heat build-up at full load can happen pretty quickly, and suddenly the fan has to punch to max AND the GPU throttles its frequency down to compensate.  Temps drop, the fan slows down gently, then it happens again....and again....and that produces a lot of wear and tear on the fan and the thermal connections.  It should be slowly ramping up the fan as the GPU load increases, rather than waiting for temps, and only going beyond a certain point (say, 60% max fan tach) if the thermal reading goes above a limit (say, 80C).  That way, if your load is consistent, your fan is consistent, and your temp is consistent.  Less wear on the card that way.
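
    Something along these lines (just a sketch of the idea in C-style code, not the actual firmware; the load and temperature inputs stand in for whatever the VBIOS really reads):

        // Load-first fan curve with a thermal override, per the description above.
        // load_percent and temp_c are assumed to come from the card's own sensors.
        int target_fan_percent(int load_percent, int temp_c) {
            int fan = 30 + (load_percent * 30) / 100;   // ramp from 30% up to 60% with GPU load
            if (temp_c > 80)                            // only pass 60% if temps demand it
                fan += (temp_c - 80) * 4;               // +4% fan per degree above 80C
            return fan > 100 ? 100 : fan;
        }

    With that, a steady rendering load gives a steady fan speed, and the temperature term only kicks in as a safety net.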

     

    ETA:  Also, more consistent noise levels, which are less distracting than changing noise levels.  I'm sure they'll release a firmware update to fix the issue soon™.... wink

     

  • MEC4D Posts: 5,249

    Finally we reached the top, hahaha, I have nothing to add.

    But I never go beyond the 100% voltage setting, as I am not doing benchmarks and don't need super-fast FPS; I usually run all 3 at 1347MHz so they stay at the same speed level for Iray or other GPU-dependent software.  For the other stuff I will have to run SLI anyway for optimal speed.

    And by the way, I just finished setting up my new rig with my new Asus X99-Deluxe/USB 3.1 motherboard and i7-5960X, and it is running smoothly.  The CPU temps are much higher than on the i7-5820K, by around 7C, but maybe that's due to the 8 cores; I still need to fine-tune everything.  Asus provides the smoothest setup, and as much as I like Gigabyte hardware, the BIOS and software just suck big time... well, I am a happy camper now.

    hphoenix said:
    The Titan-X evidently is designed to run at a much lower voltage at load ... If they DO make a Pascal version of the Titan-X using that same kind of design philosophy, it'll be one killer card......

     

     

  • MEC4D Posts: 5,249

    Yes indeed, who knows what EVGA is cooking up with the gaming version and 250W.

    Also, I just got a message from the EVGA team: the GTX 1080 is on sale for $699 http://www.evga.com/products/Product.aspx?pn=08G-P4-6180-KR

    I always buy all my cards there; the support is fantastic and I have never had trouble with anything I purchased there.  Stay away from Amazon for PC hardware, the stuff rolls from one hand to another.  I returned 3 Wacom Cintiq monitors and 3 Nikon cameras, as none of them were new, with fat fingerprints and hair in the lenses... ewww.  Unless it is sold directly by Amazon and sealed I don't bother, and the bad apples will just rotate around again soon.

    mjc1016 said:
    The thing with the fan...it always seems like the first couple driver versions are needed to settle things down ... And it seems like an easy fix...just manually control the fan.

     

  • MEC4D Posts: 5,249

    Hi Jura, I got my i7-5960X to 4.9GHz; I almost got 5GHz, but right at the end of the stress test that turned out to be the limit.  I guess another silicon lottery winner, hehe: 48% extra speed, max temp 43C.  That will be a huge boost for rendering with the 16 threads in other programs compared to the standard 3GHz, but maybe I will drop a little lower, to 4.7GHz.

    And by the way, I should have purchased the 16GB memory sticks: the X99-Deluxe already supports them after the last BIOS update, and there was nothing about it in the manual, damn.  The memory runs nicely on Profile 2, over 3000MHz,

    so 600MHz faster.

    jura said:
    Regarding the memory, I can't go beyond 2400MHz on mine (a limitation of the Xeon); on yours it will depend.  I've been using Corsair RAM for many years, but the best RAM I've used to date has been G.Skill; with that RAM overclocking has been very easy.  With Corsair you can get cheaper 3200MHz RAM, but what I don't like about them are the latencies.  On memory you shouldn't make any compromise if the budget allows; if not, then it all depends on more factors, like whether it will be used more for gaming or for rendering.

    I don't use and have never used XMP profiles; they usually put too many volts into vCore or the RAM voltage, etc.  A quick and dirty OC usually works for me.  I don't like pumping too many volts into the CPU, RAM or motherboard; most of the time I spend a few hours tuning the settings to perfection if time allows, and if not I keep things at reasonable levels and voltages.  If I see there are no BSODs in rendering, then I will lower vCore first, then CPU input voltage, CPU cache voltage, etc.  Mainly, higher voltage usually results in higher CPU temps, and that I don't like at all.

    This build of mine is a test build, I would say; I still have at least 2 months to return it if I don't like it or I'm not satisfied.  I'm still deciding whether I want to go with the Z10PE-D16 or another dual-CPU workstation board for the next build, or whether I'll stick with what I have; that's the question for the next weeks.

    Cath, regarding the Titan X SC Hybrid: did you fit your own Hybrid AIO, or did you buy them already assembled?  In my case I bought the Hybrid kit and fitted it myself, as temps on the stock SC were very high when I rendered and mainly the fan noise was too loud; but in my case the pump on the Hybrid AIO makes noise.

    Hope this helps, and good luck.

    Thanks, Jura

     

  • jura Posts: 50
    MEC4D said:

    Hi Jura, I got my i7-5960X to 4.9GHz ... and by the way, I should have purchased the 16GB memory sticks: the X99-Deluxe already supports them after the last BIOS update ... so 600MHz faster.

     

    Hi Cath

    That's a very nice OC.  I know a few friends who have the i7-5960X and they struggled to go beyond 4.6GHz; most of them run at 4.4GHz.  I would check what vCore you are running when you render: you don't want to go beyond 1.35V on the 5960X (some people feed those chips 1.45V), as the chip will degrade faster than at a lower voltage...  If you have all the power-saving options enabled and you are still getting 4.9GHz, then I would stress it with OCCT; it's free software for stressing the CPU.  Don't use the older Prime95 on your CPU, just check whether the version you use exercises AVX.

    At idle, yes, I agree it will always be hotter with such a bump in speed.  Under load you don't want to go beyond 75C for prolonged periods; my previous builds usually sat in the low 60s, and the current Xeon sits at 55C with a slight BCLK OC (105.5MHz), which gains me 2954MHz when I render.

    If you can use 16GB RAM modules, then I would use them, but again it depends: if you are already running 64GB then I'm not sure it's worth it.  You need to decide, Cath.

    In my case I would be happy with 64GB, as I currently use 32GB and that's not enough for me.  When I use 3ds Max with very high-poly models (I'm usually adding modifiers like Subdivide, Smooth, TurboSmooth and a few others) I usually run into issues, so I will need to add more RAM soon.

    Thanks, Jura

  • kyoto kid Posts: 41,928
    prixat said:

    First benchmarks for the new Haswell replacements have started to appear...

    http://arstechnica.com/gadgets/2016/05/broadwell-e-arrives-testing-intels-10-core-1700-desktop-cpu/

    surprise

    ...so, as the article concludes, and since I am not into overclocking the CPU, sticking with an 8- or 10-core Xeon might still be a better deal for the cost.

  • kyoto kid Posts: 41,928
    edited June 2016
    hphoenix said:
     
    MEC4D said:

    But my point is that the standard 1080 they used for the public test was heavily overclocked, and the benchmark numbers will never reach that speed on a consumer machine.  It may be faster, but not for us.  If Nvidia wants, they can run the Titan X 70% faster in Iray than it already is, but for that you need to pay three times the price, so thinking the 1080 will be better in Iray makes no sense for people who spend so much money on multiple cards.  And maybe one M6000 is slower in Iray than one Titan X, but two M6000s are faster than two Titan Xs.. it is controlled by the Iray software and has nothing to do with the graphics card specifications or the speed of the GPU; it never was.  That is how business is made.  So what we can expect from superclocked multiple 1080s in Iray is up to Nvidia, not the product specifications, and since Nvidia sees the 12 GB Titan X more as a workstation card than a gaming card, there is a chance things can change for the better soon regarding rendering.

    It was overclocked in ONE demo, and they pointed it out.  That's why they showed the 2144MHz clock and the temp (67C)...but the audience couldn't hear the fans blowing at full speed, since the boxes were backstage.....And the graphs did show where power consumption was estimated at those overclocks.

    As for the 'business' side, I think nVidia wants to push 1080/1070 right now and Iray WILL be faster on Pascal than on Maxwell.  I'm still betting that the next quadro/titan generation will be Volta, not GP102.  I think GP102 will be the Titanium cards.  And the Volta, if it follows even close to what nVidia claims, will be another big jump.  So Quadro and Titan will get even MORE powerful.  But that's at least a year away, maybe more.  1080 won't be HUGELY more powerful than top-end Quadro or top-end Titan-X, but it will be slightly more.  Most businesses won't pay a premium just for a small (10-20%) boost with less support.  But when Volta debuts, they'll pay $$$$ for those kinds of boosts.

    Yes, I know they have hamstrung render performance in the drivers in some generations to prevent them from competing with their more professional-oriented cards.  Kepler was a perfect example of this, and they continued it with Maxwell.  It's nothing new in the computing industry.  It's easier typically to mass-produce one thing, then sell it at various price-points based on artificial limits.  Given the needs Pascal was created to solve, I would think they're planning more along what I described.  GP102 might have an 'interim' quadro/titan....but I think it'll be quite limited....And Volta is where they'll truly bring out the Professional-level cards.

    ...the next Generation Tesla compute card is Pascal architecture (GP100 with 16 GB of HBM 2), so it would make sense for the Quadro line to follow suit.  Most likely the new flagship Quadro GPU (replacing the M6000) will have 32 GB of HBM 2 using the GP102 processor (the current M6000 was upgraded last year to 24 GB of GDDR5)

    As I have read things, Volta development seems to be primarily oriented towards high speed compute applications for the next generation of supercomputers, not graphics processing.

    The silver lining in all of this is that it could result in a drop in price not only for Maxwell Titan GPUs but also for Maxwell Quadro GPUs.  For me, compared to what I currently have, that would be a major boost in performance.  As I have mentioned, 12 or 24 GB of video memory means a better chance that a scene will render completely on the GPU (pretty much a "given" with 24 GB) instead of dropping to the much slower CPU mode.  For me, that is a "speed bonus".

  • MEC4D Posts: 5,249

    Thanks Jura, I purchased the 64GB as 8x8GB because the manual stated it doesn't support 16GB modules; then I updated the BIOS from April to support my new processor and read in the notes that it now supports 16GB modules.  I don't like buying small modules, I have tons of them at home, but they are DDR3 and not usable anymore.

    Regarding the CPU, seriously, it was worth the upgrade, it is fantastic.  I went down to 4.6GHz to be safe, as I am going to use it a lot; the power-saving mode doesn't want to turn on anyway.  I prefer the lower speed for quiet mode anyway, and it drops the clock to 1.2GHz for that.

     

    jura said:
    That's a very nice OC ... If you can use 16GB RAM modules, then I would use them, but again it depends: if you are already running 64GB then I'm not sure it's worth it.  You need to decide, Cath.

     

  • MEC4D Posts: 5,249

    I don't think the Titan X will drop much in price soon, due to its higher memory within the GTX family.  I just tested Iray with my new rig; the processor performs fantastically.  The delay is three times shorter than before, and in most cases instant compared to 20 seconds before, so it looks like a huge improvement.  I am going to monitor it now to see how many cores are used for that process, 4 or more.  Also, in photoreal mode the Iray viewport spins so fast with just the CPU, I guess it uses all the power; I didn't expect that, it's faster than a GTX 760, lol.

    When using the 3 cards the viewport spin did not improve and is as before, but when you load a new asset it is instant, no need to wait as before with the i7-5820K.  The conclusion here is that a good card alone is not enough; you need a good balance between your GPU and CPU for Iray to work smoothly and fast.  Well, for now my mission is accomplished.

    kyoto kid said:
    The silver lining in all of this is that it could result in a drop in price not only for Maxwell Titan GPUs but also for Maxwell Quadro GPUs ... For me, that is a "speed bonus".

     

  • MEC4D Posts: 5,249

    Yeah! The deal is not good for the price, plus it is new, and you never buy a brand-new part, since there are a lot of errors and things that are not yet sorted out, plus the price is crazy; maybe January 2017 will be a good time to get the new Intel for less, too.  If you are going to render with the CPU threads then great, but Iray uses only one core to feed the Nvidia viewport, rendering the pixel samples between rotating and rendering; after you stop it switches to the GPU, and the slower that core is, the slower things work no matter how fast the video cards are, so my cheaper CPU will do it faster than this one.  But the moment you render on the CPU only, Iray will use all the extra threads.  Loading objects into the scene also uses only as many cores as you have cards, so it is not how many cores your CPU has but how fast each core is, and the i7-6950X runs at only 3GHz, so I am not sure what kind of deal you would get for the money; I see no deal at all, since you don't overclock.  But you see, I run my CPU now at only 1.6GHz because I switched the profile, neither at standard speed nor overclocked, but the moment I open DS, Photoshop or ZBrush it goes into extreme turbo mode for faster processing; that is how you save the life of a CPU, and running it at base clock all the time will make it last exactly as long as mine.  The math is simple.  But wait, I can also overclock just the 4 cores needed for Iray and leave the other 12 very low for an even better lifespan... everything is possible.

    A Xeon may be good for other kinds of rendering, but with Iray it is slower, so you have to decide which program you use most and buy the right processor for that; that will be the best deal.  But wait at least 6 months after release, and that goes for all electronics in general: they release first and fix things later based on consumer complaints, lol.

    I learn every time that getting stingy on the important stuff and looking for cheap alternatives will cost you double anyway, and in the end you lose.

    kyoto kid said:
    ...so, as the article concludes, and since I am not into overclocking the CPU, sticking with an 8- or 10-core Xeon might still be a better deal for the cost.

     

  • kyoto kid Posts: 41,928

    ...again, the purpose of a CPU with more threads in my design is rendering in Carrara and 3DL, which are both ray-trace render engines.  Crikey, I can get dual 10-core Xeons for less than the cost of a single Broadwell-E i7, have twice the CPU cores, and change left in my pocket.

  • MEC4D Posts: 5,249

    About support regarding GTX 1080 and rendering with Iray

    We will have the support around the Siggraph timeframe.
    It sounds like a long time to react to hardware we deliver ourselves, but we really have to invest a lot of work for any new card.
    And we can only do that on the software side with released hardware.
    We also have other software products that have to be there first (Drivers, Cuda, ..., ...) and then we can adapt the renderer.
    I am pretty sure that you agree that you would prefer a well tested version.

  • MEC4D Posts: 5,249

    Siggraph is around 24-28 July, so about 2 more months anyway before we can see any performance numbers in Iray.

  • mtl1 Posts: 1,508

    A lot of reviews have been dropping for a lot of 3rd-party cards and it seems like the 1080 is hitting some sort of limit at around 2050 MHz or so. That puts the card at around 10 TFLOPS.

    It sounds like a lot, but that 2050 is only around 18% higher than the boost clock. Perhaps it's a voltage wall of sorts, but who knows?
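
    (Rough math, if I'm doing it right: 2560 CUDA cores x 2 FLOPs per core per clock x ~2.05 GHz ≈ 10.5 TFLOPS single precision, versus about 8.9 TFLOPS at the stock 1733 MHz boost clock.)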

  • hphoenix Posts: 1,335
    edited June 2016
    mtl1 said:

    A lot of reviews have been dropping for a lot of 3rd-party cards and it seems like the 1080 is hitting some sort of limit at around 2050 MHz or so ... Perhaps it's a voltage wall of sorts, but who knows?

    I went looking to verify some of this.  That 2050 MHz 'limit' is an issue with both the fan control and the VBIOS limits.  Third-parties (at least MSI and eVGA) have already announced 'enhanced' VBIOS for their cards which removes the voltage limits, and enhanced cooling does bump that 2050 MHz limit a bit.  However, we knew that much over 2100MHz was going to be questionable, given the FE cards being shown at 2144MHz at the opening were probably running a custom VBIOS and had their fans running full 100% locked (which keeps the yoyo throttling issue from happening.....I'm sure nVidia, as well as third-parties, will release an updated BIOS with better fan/throttle settings to fix the yoyo issue we're seeing.)

    KingPin/Tin have already shown a screenshot of a 1080 GTX running on LN2 at 2.8GHz core clock.  No idea how long it stayed there, but it did make it long enough to get the screenshot.  Chip temp was reading -102C.

    But don't expect to get anywhere NEAR that without LN2.  Water cooling is definitely good though, and even air cooled the chip OC's pretty well.

     

  • mtl1 Posts: 1,508
    hphoenix said:
    That 2050 MHz 'limit' is an issue with both the fan control and the VBIOS limits ... But don't expect to get anywhere NEAR that without LN2.

     

    Well, getting higher clocks is not always a guarantee with higher voltages given process variability and all that. Anything above the BIOS voltage is just playing with the silicon lottery, so we'll see how far the third-party manufacturers can push their cards.

    As an aside, a part of me cringes whenever I see people playing with LN2, especially on consumer components...

  • hphoenix Posts: 1,335
    mtl1 said:
    Well, getting higher clocks is not always a guarantee with higher voltages given process variability and all that ... As an aside, a part of me cringes whenever I see people playing with LN2, especially on consumer components...

    The original VBIOS locks the voltage at a maximum of 1.25V, and convincing it to go anywhere near that is tricky.  Factory OC settings mainly run it at around 1.06V - 1.08V; stock is around 0.98V.  But pushing it higher doesn't help if the card can't draw enough current, which is why many of the third-party cards have two power connectors instead of just the one on the FE.  Of course, the crazy LN2 overclockers will actually modify the circuit board and completely bypass the voltage and current regulation circuits.  Good way to wreck a $700 graphics card.  Of course, Kingpin/Tin are, I think, 'subsidized' by eVGA, so they probably get several cards to 'push the envelope' with for eVGA bragging rights.  So if they happen to fry a couple along the way, no big deal.....but they've been doing this for a long time, so they have probably gotten pretty good at it.  Not something for someone to just 'play around with'.....

    I still think that once the driver/BIOS issues are resolved and Iray is updated to work with it, the 1080/1070 cards are going to be quite the 'go-to choice' for Iray.....

     

  • kyoto kid Posts: 41,928
    edited June 2016

    ..still waiting for all the dust to settle.

    Of course if I win the Megabucks lotto, all bets are off and I will go with dual 24 GB Quadro  M6000s. ;-)

  • The thing to keep in mind is that the 1080 and 1070s will be compatible with the new version of CUDA coming out in August. I believe if you have 2 or more cards it will use the combined VRAM of multiple cards. I read somewhere that CUDA 8 will not work with older Nvidia cards. I will wait for the dust to settle and upgrade my 980 Tis to 1080s. 3 1080s would give me 24 GB of VRAM and at that point I would never have to worry about the size of my scenes again.

  • ANGELREAPER1972 Posts: 4,559

    To have 3 or 4 1070s or 1080s in SLI you have to ask Nvidia for a code to use them, and even then more than 2 aren't used together; the others are used instead for other operations like audio or the monitor.  Weird, really; no one understands why they did this.

  • mtl1 Posts: 1,508

    ANGELREAPER1972 said:

    To have 3 or 4 1070s or 1080s in SLI you have to ask Nvidia for a code to use them, and even then more than 2 aren't used together...

    Probably because the percentage of consumers who game with more than 3 or 4 cards is lower than a single percentage point. For workstation purposes, I can see multiple cards being used but the 1070/1080 aren't marketed as workstation cards.

  • Silver Dolphin Posts: 1,638
    nicstt said:

    It isn't the only thing to consider; they may not be as good for rendering as the specs suggest, as NVidia considers them consumer cards (games and general stuff); rendering is more for their commercial variants.

    What they will be able to do will come down to drivers, so I am in no hurry to part with my cash.

    2nd gen I'll consider; 1st gen GPUs are for folks with disposable income who will beta test while I wait.

     

    I may splurge and get a 1070 for my monitors, because I'm using an Nvidia GT 640 2GB to run my monitors and 3x Nvidia 780 6GB editions to do my Iray renders, so a 1070 with 8GB of video RAM will be an improvement once CUDA is sorted out. This card will also make the other programs I use, like iClone, run better.

  • StratDragon Posts: 3,274

    I believe if you have 2 or more cards it will use the combined VRAM of multiple cards. I read somewhere that CUDA 8 will not work with older Nvidia cards...

    CUDA 6, which introduced unified memory, will NOT work on anything other than the 1080 and the 1070 at this time.

    https://en.wikipedia.org/wiki/CUDA

  • MEC4D Posts: 5,249

    3 x 1080 will still give you only 8 GB of VRAM, from which you also need memory for the rendering itself, so around 6-7 GB is left for the scene at a full-HD render size; plus it will take 24 GB away from your system RAM, so make sure you have enough.
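
    (To spell out the math as I understand it: the scene must fit on each card, so three 8 GB cards give you min(8, 8, 8) = 8 GB of usable VRAM, not 8 + 8 + 8 = 24 GB; take off roughly 1-2 GB for the display and render buffers and you are at the 6-7 GB I mentioned, and with each card's copy of the scene staged in system memory, 3 x ~8 GB is where the ~24 GB of system RAM goes.)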

    ... 3 1080s would give me 24 GB of VRAM and at that point I would never have to worry about the size of my scenes again.

     

  • StratDragon Posts: 3,274

    All this also assumes Nvidia gets it right by August, delivers a working set of tools, and that they actually do what they are supposed to without issues. Until that happens, all performance gains are a matter of speculation and conjecture.

  • MEC4D Posts: 5,249

    Plus it won't apply to rendering with Iray, as that process is different; that is the reason each scene needs to be loaded separately onto each GPU rather than into shared memory, and unless they rewrite the engine I don't see that happening just for the 1080 and 1070, since those are not workstation cards and are officially not recommended for rendering. For the same reason they are not even ready for Iray after 3 years of development, as that is the last thing they worry about.

    CUDA 6, which introduced unified memory, will NOT work on anything other than the 1080 and the 1070 at this time.

     

  • hphoenix Posts: 1,335

    I believe if you have 2 or more cards it will use the combined VRAM of multiple cards ... 3 1080s would give me 24 GB of VRAM and at that point I would never have to worry about the size of my scenes again.

    As others have noted, the Unified Memory Model in CUDA 6 does NOT mean you add up the memory.  The full scene still has to fit in the memory of any single card.

    The Unified Memory Model is simply an abstraction layer between the host program and the CUDA layer code.  Prior to CUDA 6, developers had to manually allocate memory blocks both in main memory and on the CUDA devices, then copy the data across.  "Unified Memory" is a simple (and very marketing-friendly) way of saying the library code now does this for you.
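
    A minimal sketch of the difference (toy code, not Iray's; the kernel and buffer names are made up):

        #include <cuda_runtime.h>
        #include <cstdio>
        #include <cstdlib>

        __global__ void scale(float *data, int n) {            // stand-in for real work
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] *= 2.0f;
        }

        int main() {
            const int n = 1 << 20;
            const size_t bytes = n * sizeof(float);

            // Pre-CUDA-6 style: separate host and device buffers, explicit copies.
            float *h = (float *)malloc(bytes);
            for (int i = 0; i < n; ++i) h[i] = 1.0f;
            float *d = nullptr;
            cudaMalloc((void **)&d, bytes);
            cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
            scale<<<(n + 255) / 256, 256>>>(d, n);
            cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
            cudaFree(d);
            free(h);

            // CUDA 6 "Unified Memory": one managed allocation, the runtime handles the copies.
            float *u = nullptr;
            cudaMallocManaged((void **)&u, bytes);
            for (int i = 0; i < n; ++i) u[i] = 1.0f;
            scale<<<(n + 255) / 256, 256>>>(u, n);
            cudaDeviceSynchronize();                           // wait for the GPU before the CPU reads u
            printf("%f\n", u[0]);
            cudaFree(u);
            return 0;
        }

    Either way, when the kernel runs the data has to be resident on the card doing the work, which is why the whole scene still has to fit on a single GPU.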

     
