New Pascal Titan announced. $1200.


Comments

  • Gator Posts: 1,320

    I'll have to wait for a different render engine to be created for a video card from a business that isn't price gouging.


    You missed the announcement that AMD is open-sourcing their FireRender (now ProRender) engine. And while they are still charging a lot for their video cards, the prices I'm seeing on Amazon are about on par with what they were charging in the early 2000s; not that those were cheap, but they aren't the ridiculous nVidia prices.

    And this winter they are supposed to have a video card that makes a huge step forward.

    I dunno, I've been in IT for like 20 years.  Gaming cards have always been expensive, and workstation cards even more expensive.  Competitors have priced their cards closely with each other based on their performance.

    Isn't gonna change.

    And I've been in IT for 25 years, lol, and AMD cards are going for 1/2 to 1/4 the price of nVidia cards, and Intel's 3D graphics HW capabilities aren't exactly standing still. If these businesses are going to earn money via the masses, as they claim, then the best tech has to become commoditized, and a $1500 video card is not a commodity. Intel and AMD HW is much closer to being a commodity than nVidia HW. Just in the last 5 years I've watched Unity Pro features go free and high-end HW graphics features be included in $50 Android tablets.

    Gamers and early-adopter Apple types willingly overpay, but most homes have indoor plumbing and electricity now. I am confident that HW & SW priced at entry level will become available in the next 5 years that will blow away the current nVidia high-end offerings, lol, at least if you use marketing speak as your primary lingo.

    OK, now we both outed ourselves as old geezers.


    AMD is cheaper because Nvidia is beating them at performance at the moment.  When the performance is near the same, the prices are the same. 

    Early adopters and those who want the latest and greatest will always pay more.

  • Kendall Sears Posts: 2,995

    CUDA and AMD Stream are both RISC instruction sets.  AMD is beating nVidia on price because the number of instructions the Stream processors can execute is much smaller than CUDA's, and therefore the number of transistors per core is smaller.  It costs less to make each Stream core, and AMD can pack a huge number of them onto a die.

    CUDA is approaching CISC, but not quite there yet.  With the more complex instructions come larger transistor counts per core, and thus the ability to fit fewer cores on a die.  Also, the controller for CUDA has to be more complex due to the higher number of states the cores require.

    So, in some cases a number of Stream cores running more instructions to do the same work as a single CUDA instruction can outrun the CUDA equivalent.  This cannot happen in every case, nor can it happen consistently.  For "general purpose computing" there are simply operations that AMD Stream cannot do at all, which is why nVidia can charge a premium.  On those operations where Stream can hyper-parallelize the work, CUDA loses badly.

    Two different paradigms used for similar jobs -- at least in this industry.
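    The core-count trade-off described above can be sketched with toy numbers (a minimal back-of-the-envelope illustration; all transistor budgets below are invented for illustration, not real die figures):

```python
# Toy illustration of the lean-cores-vs-rich-cores trade-off.
# All transistor budgets are invented; real figures differ.

DIE_BUDGET = 8_000_000_000   # transistors on a hypothetical die

SIMPLE_CORE = 1_200_000      # a lean core with a small instruction set
COMPLEX_CORE = 3_000_000     # a richer core with more instructions/states

simple_cores = DIE_BUDGET // SIMPLE_CORE     # 6666 lean cores fit
complex_cores = DIE_BUDGET // COMPLEX_CORE   # 2666 richer cores fit
print(simple_cores, complex_cores)

# If a task maps well onto the lean instruction set, the bigger core
# count wins.  If it needs the richer instructions, the lean cores must
# emulate them with several instructions each and lose the advantage.
ops_per_rich_instruction = 3   # assume ~3 lean ops per one rich op
effective_lean = simple_cores / ops_per_rich_instruction
print(effective_lean > complex_cores)  # False: the richer cores win here
```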

    Kendall

  • kyoto kid Posts: 42,052
    kyoto kid said:

    Looks like the latest Quadro will beat the new Titan X.  News update: 

    http://techgage.com/article/siggraph-2016-a-look-at-nvidias-upcoming-pascal-gp102-quadros/


    ..I would have thought at least the Quadros would get HBM memory.

    Clearly HBM production is just not there yet. Neither Nvidia nor AMD have shown a card with the latest HBM on board.

    ..well the Tesla P100 has 16 GB HBM memory.

  • kyoto kid Posts: 42,052
    edited July 2016
    kyoto kid said:

    I'll have to wait for a different render engine to be created for a video card from a business that isn't price gouging.

    ...+1. 

    I'm thinking of just giving up on Iray altogether myself.

    Did you see the AMD announcement regarding open-sourcing the FireRender engine and other graphics software libraries? Wow! nVidia's iRay will now have competition!

    ...yes, but will there be a plugin or RIB utility to send scenes from Daz, like Lux, Octane, and standalone 3DL have, and will it be usable with Radeon cards?

    Post edited by kyoto kid on
  • kyoto kid Posts: 42,052
    edited July 2016
    MEC4D said:

    This is for games, so 60% better performance in games than a stock Titan X. There are different versions of the Titan X with different BIOSes that run the clock at different speeds; my Titan X can easily run at 1500 MHz, the same as the Pascal Titan X, but because it has a few more cores it will be maybe 15-20% faster than my Titan X, which is faster than a stock 1080. The Pascal Titan X is only 24% faster in games than a 1080; the math is simple.

    Once there are real official Iray benchmarks using Pascal cards (maybe in less than 2 weeks) you will have the chance to see the true performance of all the stock cards and decide what is best for the money. Still, getting 2x 1080 is a better value than a single stock Pascal Titan X: same price, but 2x 1080 will give you 70% extra performance over just one Pascal Titan X.

    Hopefully this means Pascal support for Iray is coming soon.  Seems like a card like this doesn't have too many gamers buying it; it's more for rendering use.

    Nvidia's site still says 1.5 GHz and claims a 60% performance increase over a Maxwell Titan X.  Why are some of you saying it's not that much over a Maxwell Titan X?  Counting against overclocked Maxwells?

    ...this is my feeling about the Titan-P.  Not a lot more rendering horsepower to justify the extra cost, which is probably why Maxwell Titan-Xs won't be coming down in price even though they have been discontinued.  Still 12 GB, still GDDR5X instead of HBM2.

    In spite of the extra cost, the Quadro P5000 sounds like a better deal in comparison, as it has double the memory of its predecessor.

    As to dual 1080s, they may be faster, but there is still the memory ceiling issue for me.  Once a scene exceeds the card's memory, the render dumps to CPU and all those CUDA cores mean squat.
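    For what it's worth, the price/performance arithmetic quoted above works out roughly like this (a sketch using approximate prices and the relative-performance figures from the quoted post; it assumes near-ideal Iray scaling for the pair, and remember that two 8 GB cards do not pool into 16 GB):

```python
# Sketch of the 2x 1080 vs. one Pascal Titan X comparison.
# Prices are approximate; relative performance is from the quoted post.

titan_xp_price = 1200      # announced Pascal Titan X price
gtx1080_price = 600        # rough street price of one GTX 1080

titan_xp_perf = 1.24       # quoted: ~24% faster than a single 1080
dual_1080_perf = 2 * 1.0   # two 1080s, assuming near-ideal scaling

# performance per $1000, normalised to a single 1080
print(round(titan_xp_perf / titan_xp_price * 1000, 2))        # 1.03
print(round(dual_1080_perf / (2 * gtx1080_price) * 1000, 2))  # 1.67

# at the same total price, the pair comes out well ahead:
print(round(dual_1080_perf / titan_xp_perf, 2))  # 1.61, i.e. ~60-70% extra
```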

    Post edited by kyoto kid on
  • nonesuch00 Posts: 18,795
    kyoto kid said:
    kyoto kid said:

    I'll have to wait for a different render engine to be created for a video card from a business that isn't price gouging.

    ...+1. 

    I'm thinking of just giving up on Iray altogether myself.

    Did you see the AMD announcement regarding open-sourcing the FireRender engine and other graphics software libraries? Wow! nVidia's iRay will now have competition!

    ...yes, but will there be a plugin or RIB utility to send scenes from Daz?

    Well, since it's open source, I'm hoping the ProRender API can have a DAZ render COM API layer added over it to pipe the relevant info back and forth between DAZ Studio and ProRender.
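    A minimal sketch of what such a bridge layer might look like. To be clear, none of these class or method names come from the actual ProRender or DAZ Studio APIs; they are hypothetical, purely to illustrate the adapter idea:

```python
# Hypothetical adapter sketch: a host app exports scene nodes into a
# render engine through a thin bridge.  All names here are invented.

class ProRenderScene:
    """Stand-in for a scene handle in an open-source render engine."""
    def __init__(self):
        self.meshes = []

    def add_mesh(self, name, vertices):
        self.meshes.append((name, vertices))


class DazToProRenderBridge:
    """Adapter that pipes scene data from the host app into the engine."""
    def __init__(self):
        self.scene = ProRenderScene()

    def export_node(self, node):
        # translate the host app's node dict into engine geometry
        self.scene.add_mesh(node["name"], node["vertices"])
        return len(self.scene.meshes)


bridge = DazToProRenderBridge()
count = bridge.export_node({"name": "cube", "vertices": [(0, 0, 0)]})
print(count)  # 1
```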

  • kyoto kid Posts: 42,052
    edited July 2016

    ..doing some quick research, it seems that Nvidia still has the edge on GPU rendering, which means either $1,000s for high-end GPUs with a lot of memory or glacial CPU mode.

    Another option I discussed with a friend last night is online render farms.

    Post edited by kyoto kid on
  • nonesuch00 Posts: 18,795
    kyoto kid said:

    ..doing some quick research, it seems that Nvidia still has the edge on GPU rendering, which means either $1,000s for high-end GPUs with a lot of memory or glacial CPU mode.

    Another option I discussed with a friend last night is online render farms.

    That is supposed to change this winter, from what I heard, but you know how these claims go...

  • MEC4D Posts: 5,249

    Tesla P100 Mezzanine 16GB 1.4Gbps HBM2, Tesla P100 16GB 1.4Gbps HBM2, Tesla P100 12GB HBM2


    kyoto kid said:
    kyoto kid said:

    Looks like the latest Quadro will beat the new Titan X.  News update: 

    http://techgage.com/article/siggraph-2016-a-look-at-nvidias-upcoming-pascal-gp102-quadros/


    ..I would have thought at least the Quadros would get HBM memory.

    Clearly HBM production is just not there yet. Neither Nvidia nor AMD have shown a card with the latest HBM on board.

    ..well the Tesla P100 has 16 GB HBM memory.


  • kyoto kid Posts: 42,052
    edited July 2016

    ...looked at the Pascal Tesla line, and all have either 16 or 12 GB of HBM2.  So what is the 1.4Gb rating?

    Also, what is with the full-length form factor?  I thought one of the benefits of HBM2 was supposed to be a more compact card.


    Post edited by kyoto kid on
  • nicstt Posts: 11,715
    kyoto kid said:
    MEC4D said:

    This is for games, so 60% better performance in games than a stock Titan X. There are different versions of the Titan X with different BIOSes that run the clock at different speeds; my Titan X can easily run at 1500 MHz, the same as the Pascal Titan X, but because it has a few more cores it will be maybe 15-20% faster than my Titan X, which is faster than a stock 1080. The Pascal Titan X is only 24% faster in games than a 1080; the math is simple.

    Once there are real official Iray benchmarks using Pascal cards (maybe in less than 2 weeks) you will have the chance to see the true performance of all the stock cards and decide what is best for the money. Still, getting 2x 1080 is a better value than a single stock Pascal Titan X: same price, but 2x 1080 will give you 70% extra performance over just one Pascal Titan X.

    Hopefully this means Pascal support for Iray is coming soon.  Seems like a card like this doesn't have too many gamers buying it; it's more for rendering use.

    Nvidia's site still says 1.5 GHz and claims a 60% performance increase over a Maxwell Titan X.  Why are some of you saying it's not that much over a Maxwell Titan X?  Counting against overclocked Maxwells?

    ...this is my feeling about the Titan-P.  Not a lot more rendering horsepower to justify the extra cost, which is probably why Maxwell Titan-Xs won't be coming down in price even though they have been discontinued.  Still 12 GB, still GDDR5X instead of HBM2.

    In spite of the extra cost, the Quadro P5000 sounds like a better deal in comparison, as it has double the memory of its predecessor.

    As to dual 1080s, they may be faster, but there is still the memory ceiling issue for me.  Once a scene exceeds the card's memory, the render dumps to CPU and all those CUDA cores mean squat.

    This is why I am reluctantly (and seriously reluctantly) considering the new Titan. (Well, I would, but my credit card has hidden itself away.)

    kyoto kid said:

    ..doing some quick research, it seems that Nvidia still has the edge on GPU rendering, which means either $1,000s for high-end GPUs with a lot of memory or glacial CPU mode.

    Another option I discussed with a friend last night is online render farms.

    That is supposed to change this winter, from what I heard, but you know how these claims go...

    Indeed, I'll believe this when I see the independent stats; just as I'll believe Iray performance on the new cards once the stats support the claims.

    Let's face it, a company doesn't have to be corrupt, only make a mistake; presuming that is what happened here. The mistake over directly addressable memory is going to cost NVidia a pretty penny.

    http://uk.ign.com/articles/2016/07/29/nvidia-settles-gtx-970-lawsuit-owes-buyers-money

  • MEC4D Posts: 5,249

    That is the memory clock of HBM2, at 1.4 Gbps.  The Tesla P100 16GB has a max GPU clock speed of 1300 MHz, 3584 CUDA cores, and a 250 W TDP.
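    That per-pin rate is where the total bandwidth figure comes from. A quick sanity check of the arithmetic, using the 4096-bit bus from Nvidia's published P100 specs:

```python
# How the 1.4 Gbps per-pin figure relates to total P100 memory bandwidth.
# Bus width is from Nvidia's published P100 specs; the marketing
# bandwidth figure is rounded.

bus_width_bits = 4096   # four HBM2 stacks x 1024 bits each
gbps_per_pin = 1.4      # effective data rate per pin

bandwidth_gb_s = bus_width_bits * gbps_per_pin / 8  # bits -> bytes
print(bandwidth_gb_s)  # 716.8, close to the ~720 GB/s Nvidia quotes
```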

    kyoto kid said:

    ...looked at the Pascal Tesla line, and all have either 16 or 12 GB of HBM2.  So what is the 1.4Gb rating?

    Also, what is with the full-length form factor?  I thought one of the benefits of HBM2 was supposed to be a more compact card.


  • MEC4D Posts: 5,249

    Of course, if money is not an issue you don't need to settle for less VRAM.

    nicstt said:
    kyoto kid said:
    MEC4D said:
    This is why I am reluctantly (and seriously reluctantly) considering the new Titan. (Well, I would, but my credit card has hidden itself away.)
  • Gator Posts: 1,320
    MEC4D said:

    This is for games, so 60% better performance in games than a stock Titan X. There are different versions of the Titan X with different BIOSes that run the clock at different speeds; my Titan X can easily run at 1500 MHz, the same as the Pascal Titan X, but because it has a few more cores it will be maybe 15-20% faster than my Titan X, which is faster than a stock 1080. The Pascal Titan X is only 24% faster in games than a 1080; the math is simple.

    Once there are real official Iray benchmarks using Pascal cards (maybe in less than 2 weeks) you will have the chance to see the true performance of all the stock cards and decide what is best for the money. Still, getting 2x 1080 is a better value than a single stock Pascal Titan X: same price, but 2x 1080 will give you 70% extra performance over just one Pascal Titan X.

    Hopefully this means Pascal support for Iray is coming soon.  Seems like a card like this doesn't have too many gamers buying it; it's more for rendering use.

    Nvidia's site still says 1.5 GHz and claims a 60% performance increase over a Maxwell Titan X.  Why are some of you saying it's not that much over a Maxwell Titan X?  Counting against overclocked Maxwells?

    OK yeah, yours are overclocked.  I haven't overclocked for many years now; I'm only interested in stock performance.

    I WISH SLI was all that it's cracked up to be.  I play games and render with two Titan X's, and it's actually laggier with SLI enabled than using a single card.

    It's been mentioned before: the 1080s don't have as much memory, and the Iray performance increase may be marginal.  And if SLI performs like it currently does with my Titan X's, it's barely any better for games.


  • Kevin Sanderson Posts: 1,643

    You aren't supposed to use SLI when rendering in Iray.

  • MEC4D Posts: 5,249

    My stock Titan X runs at 1377 MHz when not overclocked; they have a different BIOS than the Founders Edition, and any boost is effectively an OC.  Normally I downclock all my cards to 1277 MHz, as there is not much difference in rendering when I push to 1500 MHz, and a 1400 MHz base clock is a better choice for rendering in Iray than a card relying on boost.  For games it really doesn't matter, base or boost, but there is a difference when rendering.  I don't know what you are doing, but you should have super speed with your 2 cards.  I hope you are not using them in SLI while rendering, as you shouldn't.  I use 2x Titan X on a daily basis and can play my animated timeline frames in real time in the Iray viewport, so there is definitely something wrong; you should get no less than a 50% boost from the second GPU.  My max was 76% with the second GPU on the 4.9.2.70 DS build.
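    The 50-76% figures above translate into render time roughly as follows (a toy sketch; the efficiency numbers are just the ones quoted in this post):

```python
# Toy model of multi-GPU Iray scaling: a second identical card cuts
# render time, but overhead keeps it short of a perfect 2x.

def render_time(base_minutes, n_gpus, boost_per_extra_gpu):
    """boost_per_extra_gpu: fraction of a full card's worth of extra
    throughput each additional GPU contributes (1.0 = perfect scaling)."""
    throughput = 1 + (n_gpus - 1) * boost_per_extra_gpu
    return base_minutes / throughput

base = 60.0  # one-GPU render time in minutes

print(round(render_time(base, 2, 0.50), 1))  # 40.0 -> the ~50% boost floor
print(round(render_time(base, 2, 0.76), 1))  # 34.1 -> the 76% best case
```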

    MEC4D said:

    This is for games, so 60% better performance in games than a stock Titan X. There are different versions of the Titan X with different BIOSes that run the clock at different speeds; my Titan X can easily run at 1500 MHz, the same as the Pascal Titan X, but because it has a few more cores it will be maybe 15-20% faster than my Titan X, which is faster than a stock 1080. The Pascal Titan X is only 24% faster in games than a 1080; the math is simple.

    Once there are real official Iray benchmarks using Pascal cards (maybe in less than 2 weeks) you will have the chance to see the true performance of all the stock cards and decide what is best for the money. Still, getting 2x 1080 is a better value than a single stock Pascal Titan X: same price, but 2x 1080 will give you 70% extra performance over just one Pascal Titan X.

    Hopefully this means Pascal support for Iray is coming soon.  Seems like a card like this doesn't have too many gamers buying it; it's more for rendering use.

    Nvidia's site still says 1.5 GHz and claims a 60% performance increase over a Maxwell Titan X.  Why are some of you saying it's not that much over a Maxwell Titan X?  Counting against overclocked Maxwells?

    OK yeah, yours are overclocked.  I haven't overclocked for many years now; I'm only interested in stock performance.

    I WISH SLI was all that it's cracked up to be.  I play games and render with two Titan X's, and it's actually laggier with SLI enabled than using a single card.

    It's been mentioned before: the 1080s don't have as much memory, and the Iray performance increase may be marginal.  And if SLI performs like it currently does with my Titan X's, it's barely any better for games.

  • Gator Posts: 1,320

    You aren't supposed to use SLI when rendering in Iray.

    Yeah, well, bugged SLI fixed that problem for me.

    I was always having to turn SLI on and off switching between gaming and rendering.  Well, with the Titans I don't have to bother anymore, because things run WORSE with SLI enabled vs. one card.

  • mtl1 Posts: 1,508
    MEC4D said:

    This is for games, so 60% better performance in games than a stock Titan X. There are different versions of the Titan X with different BIOSes that run the clock at different speeds; my Titan X can easily run at 1500 MHz, the same as the Pascal Titan X, but because it has a few more cores it will be maybe 15-20% faster than my Titan X, which is faster than a stock 1080. The Pascal Titan X is only 24% faster in games than a 1080; the math is simple.

    Once there are real official Iray benchmarks using Pascal cards (maybe in less than 2 weeks) you will have the chance to see the true performance of all the stock cards and decide what is best for the money. Still, getting 2x 1080 is a better value than a single stock Pascal Titan X: same price, but 2x 1080 will give you 70% extra performance over just one Pascal Titan X.

    Hopefully this means Pascal support for Iray is coming soon.  Seems like a card like this doesn't have too many gamers buying it; it's more for rendering use.

    Nvidia's site still says 1.5 GHz and claims a 60% performance increase over a Maxwell Titan X.  Why are some of you saying it's not that much over a Maxwell Titan X?  Counting against overclocked Maxwells?

    OK yeah, yours are overclocked.  I haven't overclocked for many years now; I'm only interested in stock performance.

    I WISH SLI was all that it's cracked up to be.  I play games and render with two Titan X's, and it's actually laggier with SLI enabled than using a single card.

    It's been mentioned before: the 1080s don't have as much memory, and the Iray performance increase may be marginal.  And if SLI performs like it currently does with my Titan X's, it's barely any better for games.

    The reason games sometimes lag with SLI is that not all games are fully optimized for alternate frame rendering. Ideally, AFR should boost frame rates drastically, but that almost never happens since there's a lot going on between each frame. DX11 is also a major limitation for multi-video-card setups, so we should *theoretically* see better performance once DX12 is more widely adopted.
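    A toy Amdahl-style model of why AFR falls short of doubling frame rates (not a literal AFR scheduler, just an illustration of the serial inter-frame work mentioned above):

```python
# Toy model: two GPUs take turns on frames, so throughput can approach
# 2x, but any per-frame dependency (physics, streaming, sync) adds
# serial time that the second GPU cannot hide.

def afr_fps(frame_ms, serial_ms, n_gpus):
    """Effective fps when serial_ms of each frame cannot be parallelised."""
    parallel_ms = frame_ms - serial_ms
    effective_frame_ms = serial_ms + parallel_ms / n_gpus
    return 1000 / effective_frame_ms

print(round(afr_fps(20, 0, 2), 1))   # 100.0 -> ideal AFR doubles 50 fps
print(round(afr_fps(20, 8, 2), 1))   # 71.4 -> inter-frame work eats the gain
```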

  • MEC4D Posts: 5,249

    Totally agree!

    mtl1 said:
    MEC4D said:

    This is for games, so 60% better performance in games than a stock Titan X. There are different versions of the Titan X with different BIOSes that run the clock at different speeds; my Titan X can easily run at 1500 MHz, the same as the Pascal Titan X, but because it has a few more cores it will be maybe 15-20% faster than my Titan X, which is faster than a stock 1080. The Pascal Titan X is only 24% faster in games than a 1080; the math is simple.

    Once there are real official Iray benchmarks using Pascal cards (maybe in less than 2 weeks) you will have the chance to see the true performance of all the stock cards and decide what is best for the money. Still, getting 2x 1080 is a better value than a single stock Pascal Titan X: same price, but 2x 1080 will give you 70% extra performance over just one Pascal Titan X.

    Hopefully this means Pascal support for Iray is coming soon.  Seems like a card like this doesn't have too many gamers buying it; it's more for rendering use.

    Nvidia's site still says 1.5 GHz and claims a 60% performance increase over a Maxwell Titan X.  Why are some of you saying it's not that much over a Maxwell Titan X?  Counting against overclocked Maxwells?

    OK yeah, yours are overclocked.  I haven't overclocked for many years now; I'm only interested in stock performance.

    I WISH SLI was all that it's cracked up to be.  I play games and render with two Titan X's, and it's actually laggier with SLI enabled than using a single card.

    It's been mentioned before: the 1080s don't have as much memory, and the Iray performance increase may be marginal.  And if SLI performs like it currently does with my Titan X's, it's barely any better for games.

    The reason games sometimes lag with SLI is that not all games are fully optimized for alternate frame rendering. Ideally, AFR should boost frame rates drastically, but that almost never happens since there's a lot going on between each frame. DX11 is also a major limitation for multi-video-card setups, so we should *theoretically* see better performance once DX12 is more widely adopted.

  • outrider42 Posts: 3,679
    kyoto kid said:
    kyoto kid said:

    Looks like the latest Quadro will beat the new Titan X.  News update: 

    http://techgage.com/article/siggraph-2016-a-look-at-nvidias-upcoming-pascal-gp102-quadros/

     

    ..I would have thought at least the Quadros would get HBM memory.

    Clearly HBM production is just not there yet. Neither Nvidia nor AMD have shown a card with the latest HBM on board.

    ..well the Tesla P100 has 16 GB HBM memory.

    And how much does that cost again? That and any other Tesla has a very limited production run, so with more limited production they can add HBM2. Gaming cards are sold in much, much larger volumes, and that supply has to be there to meet the demand, including for a Titan-level card. Nvidia's original plans for Pascal clearly stated HBM, but that did not happen. I very highly doubt Nvidia crammed G5X in there just to save costs... it is a Titan after all, and they are still asking $1200 for this thing. Besides that, AMD hasn't pulled anything out with HBM2 either; the $1500 Pro Duo is using the first-gen HBM that the Fury Nano it was based on uses. At this rate, we won't see HBM2 in a consumer card until 2017 at the very earliest. If Nvidia is really looking to release Volta in Q4 2017, it may show up there.

  • kyoto kid Posts: 42,052
    edited August 2016

    ...just like with the 8 GB Maxwell 970/980 hinted at over a year ago.

    Maybe just going with a render farm service, like my friend and I discussed the other night, would be better than sinking $1,000s into high-end GPUs for Iray rendering.

    Also, I thought Daz was planning to do that with the  "Cloud (Beta)" option which shows up in the Advanced Render Settings tab.

    Post edited by kyoto kid on
  • Gator Posts: 1,320
    edited August 2016
    MEC4D said:

    Totally agree ! 

    mtl1 said:
    MEC4D said:

    This is for games, so 60% better performance in games than a stock Titan X. There are different versions of the Titan X with different BIOSes that run the clock at different speeds; my Titan X can easily run at 1500 MHz, the same as the Pascal Titan X, but because it has a few more cores it will be maybe 15-20% faster than my Titan X, which is faster than a stock 1080. The Pascal Titan X is only 24% faster in games than a 1080; the math is simple.

    Once there are real official Iray benchmarks using Pascal cards (maybe in less than 2 weeks) you will have the chance to see the true performance of all the stock cards and decide what is best for the money. Still, getting 2x 1080 is a better value than a single stock Pascal Titan X: same price, but 2x 1080 will give you 70% extra performance over just one Pascal Titan X.

    Hopefully this means Pascal support for Iray is coming soon.  Seems like a card like this doesn't have too many gamers buying it; it's more for rendering use.

    Nvidia's site still says 1.5 GHz and claims a 60% performance increase over a Maxwell Titan X.  Why are some of you saying it's not that much over a Maxwell Titan X?  Counting against overclocked Maxwells?

    OK yeah, yours are overclocked.  I haven't overclocked for many years now; I'm only interested in stock performance.

    I WISH SLI was all that it's cracked up to be.  I play games and render with two Titan X's, and it's actually laggier with SLI enabled than using a single card.

    It's been mentioned before: the 1080s don't have as much memory, and the Iray performance increase may be marginal.  And if SLI performs like it currently does with my Titan X's, it's barely any better for games.

    The reason games sometimes lag with SLI is that not all games are fully optimized for alternate frame rendering. Ideally, AFR should boost frame rates drastically, but that almost never happens since there's a lot going on between each frame. DX11 is also a major limitation for multi-video-card setups, so we should *theoretically* see better performance once DX12 is more widely adopted.

    I don't know, I got the impression that it was only with Titans.

    Before those, I had two EVGA GTX 970s.  SLI ran great (and I kept turning SLI on and off switching between gaming and rendering).  I also had some ATI R9 290s; those ran great too.  I forget what I had before that, but I'm pretty sure I've been running SLI for gaming since the 3DFX days.  And I keep my drivers up to date, except for a while when Octane wouldn't work with 2 Titans from some driver revision forward for many months.  It could be fixed by now for all I know; I gave up on Octane.

    Maybe it's the games; I only play a few now.  IIRC, Killing Floor 2 actually runs well in SLI.  Battlefront and FO4 do not.

    Post edited by Gator on
  • For any Nvidia card, they say to turn SLI off, as it can significantly slow down Iray.

    http://irayrender.com/fileadmin/filemount/editor/PDF/iray_Performance_Tips_100511.pdf

  • MEC4D Posts: 5,249

    Not all games are optimized for SLI.  However, when I had 3 Titan X's I could use SLI, but with 4 I can't; I tried different kinds of SLI bridges and nothing, the Nvidia Panel does not recognize the bridges with 4 cards.

    Someone told me a couple of days ago that they can run SLI with Iray by connecting the bridge and selecting only 1 GPU under the Iray rendering settings, but I tried that before and it was not working and was very slow, as CUDA with SLI in Iray will double the raytraced faces in rendering; for that reason it is not recommended.

    -----------

    and on the side a little note

    Also, mixing architectures with Iray, like Kepler, Maxwell, or Pascal cards together, will not work optimally, since each architecture requires different display drivers for optimal performance with CUDA.  So unless the cards are from the same series and use the same display drivers, the other cards should not be used when rendering with Iray; Nvidia does not recommend doing it, for good reason, as it will mess up your GPU scaling and slow everything down.  I got this info from Nvidia's Iray programmers, since a lot of people don't believe it is true and think they can stack up CUDA cores no matter what they throw into the PC.

    And if you want to use an older card of a different architecture for the monitor, you should remove all cards from the slots, put in the display card only and start the PC, install the correct driver and shut down, then put the newer card in its slot, start the PC again, and install the proper driver for the second card.  That way each card has its own proper driver and performs optimally, and the older card can be used to drive the monitor alongside the newer card, which is used for rendering only.  This little trick is not approved by Nvidia, but it works if you know what you are doing; however, if you select both cards for the same task you may lose the optimal performance of the newer card.


    MEC4D said:
    mtl1 said:
    MEC4D said:

    This is for games, so 60% better performance in games than a stock Titan X. There are different versions of the Titan X with different BIOSes that run the clock at different speeds; my Titan X can easily run at 1500 MHz, the same as the Pascal Titan X, but because it has a few more cores it will be maybe 15-20% faster than my Titan X, which is faster than a stock 1080. The Pascal Titan X is only 24% faster in games than a 1080; the math is simple.

    Once there are real official Iray benchmarks using Pascal cards (maybe in less than 2 weeks) you will have the chance to see the true performance of all the stock cards and decide what is best for the money. Still, getting 2x 1080 is a better value than a single stock Pascal Titan X: same price, but 2x 1080 will give you 70% extra performance over just one Pascal Titan X.

    Hopefully this means Pascal support for Iray is coming soon.  Seems like a card like this doesn't have too many gamers buying it; it's more for rendering use.

    Nvidia's site still says 1.5 GHz and claims a 60% performance increase over a Maxwell Titan X.  Why are some of you saying it's not that much over a Maxwell Titan X?  Counting against overclocked Maxwells?

    OK yeah, yours are overclocked.  I haven't overclocked for many years now; I'm only interested in stock performance.

    I WISH SLI was all that it's cracked up to be.  I play games and render with two Titan X's, and it's actually laggier with SLI enabled than using a single card.

    It's been mentioned before: the 1080s don't have as much memory, and the Iray performance increase may be marginal.  And if SLI performs like it currently does with my Titan X's, it's barely any better for games.

    The reason games sometimes lag with SLI is that not all games are fully optimized for alternate frame rendering. Ideally, AFR should boost frame rates drastically, but that almost never happens since there's a lot going on between each frame. DX11 is also a major limitation for multi-video-card setups, so we should *theoretically* see better performance once DX12 is more widely adopted.

    I don't know, I got the impression that it was only with Titans.

    Before those, I had two EVGA GTX 970s.  SLI ran great (and I kept turning SLI on and off switching between gaming and rendering).  I also had some ATI R9 290s; those ran great too.  I forget what I had before that, but I'm pretty sure I've been running SLI for gaming since the 3DFX days.  And I keep my drivers up to date, except for a while when Octane wouldn't work with 2 Titans from some driver revision forward for many months.  It could be fixed by now for all I know; I gave up on Octane.

    Maybe it's the games; I only play a few now.  IIRC, Killing Floor 2 actually runs well in SLI.  Battlefront and FO4 do not.

  • MEC4D Posts: 5,249

    It does not turn SLI off automatically.  In simple words, what it does is render the same thing twice at the same time, instead of stacking up GPU performance for faster GPU scaling and rendering.

    As I said, someone claimed that selecting only 1 GPU from the SLI pair works OK, but I can't confirm that, as for me it does the same thing; so SLI definitely needs to be set off.

    For any Nvidia card, they say to turn SLI off, as it can significantly slow down Iray.

    http://irayrender.com/fileadmin/filemount/editor/PDF/iray_Performance_Tips_100511.pdf


  • Yep, it's not automatic. Scott must turn it off himself.

  • StratDragon Posts: 3,278


    ...Gamers and early-adopter Apple types willingly overpay, but most homes have indoor plumbing and electricity now.

    "Apple types" willing to overpay? I think in the interest of an informative thread you should stay on topic.

  • Gator Posts: 1,320

    Yep, it's not automatic. Scott must turn it off himself.

    Yes, I know.  A few times I said I had to switch SLI on and off going from gaming to rendering.  In other words, I enable SLI when gaming and disable SLI when rendering.

    I wish it were that simple.  frown


    But my problem is the other way around.  SLI sucks for gaming; I have no issues with two Titans (SLI off) rendering.  Basically.  There seems to be some bug where once in a while my rendering goes CPU-only (and I have CPU rendering unchecked).  When that happens, no matter how many times I render it won't use the GPUs; I need to restart Daz Studio and then it works again.

  • Gator Posts: 1,320

    Titan X Pascal available now!

    Gaming performance is impressive.  With Maxwell Titan X's being unavailable, I'm tempted to pick up the Pascal Titans.  It's been a few months now; has anyone heard when Iray rendering will be supported?

    http://www.pcworld.com/article/3102877/components-graphics/tested-nvidias-new-titan-x-is-absolutely-decadant-in-sli.html


    I could hold out for SLI to be fixed with Battlefront and keep my Maxwell Titans, but I'm not holding my breath.

  • hphoenix Posts: 1,335

    Titan X Pascal available now!

    Gaming performance is impressive.  With Maxwell Titan X's being unavailable, I'm tempted to pick up the Pascal Titans.  It's been a few months now; has anyone heard when Iray rendering will be supported?

     

    Current word from nVidia is late September for Pascal-compatible Iray.
