980 Ti or 1080?

I saw a few threads on the DAZ forum. All the posts were "980 Ti is much better because bla bla bla" and then "No, the 1080 is the best because bla bla bla".

So... 980 Ti or 1080? Be short, please!

Comments

  • ANGELREAPER1972 Posts: 4,555
    edited June 2016

    Everyone is still waiting on the update that will make the 1080 Iray-workable. At present it won't work, as someone else on here found out after buying and installing two 1080s and getting an error message; it may not be until August or September. There were fan issues too, but new drivers are coming out to fix those. Other than that, from all reports the 1080 is the better card: more speed, less power. Everyone is waiting for the Iray update to see that performance, though. Oh, and you can only use two 1080s in SLI; any more won't work except for benchmark testing, according to Nvidia.

  • nicstt Posts: 11,715
    edited June 2016

    980 Ti: you can use it for rendering now, and it is stable and mature for gaming.

    1080: NO rendering, and possible issues with gaming, as the drivers are still very much a work in progress.

    Many expect the 1080 to be better than the 980 Ti for rendering; there is no data yet to back that up.

    Personally, I would wait.

    I'm considering another 980 Ti if the price gets low enough. If not, I'll use my current one to drive the monitors when the 1080 Ti becomes available next year.

  • Ghosty12 Posts: 2,080

    Yeah, with any card as new as the 1080, always wait a few months for the bugs to be sorted out. The problem with going with a 980 is that, in a way, it will be obsolete soon. Also, and this is the main thing, whatever you have at the moment may just tide you over until the 1080s become more mainstream.

  • 3delinquent Posts: 355

    Yeah, join the queue of people waiting for Iray drivers, and hopefully a souped-up version with more VRAM. Better value for money than the Founders Edition from Nvidia as well.


  • joseft Posts: 310

    In theory, the 1080 will be faster, once the drivers are updated so it works in Iray.

    But even when the drivers are updated, I would still wait for some Iray benchmarks. Nvidia may deliberately limit the 1080's rendering capabilities, because they have far more expensive Quadro cards aimed specifically at the 3D development market. A gaming-focused desktop card that performs nearly as well in 3D applications as a $3,000 Quadro would make some big companies very unhappy, and would also lead them to buy the much cheaper solution (the 1080s) in future, killing the demand for, and hence the profits from, the Quadro cards.

  • linvanchene Posts: 1,386
    edited June 2016

    - - -

    Nvidia partnered with DAZ 3D to provide Iray for free to a mostly hobbyist or indie user group.

    Based on that, Nvidia should be very much aware that this user group cannot afford the Quadro line of cards.

    Nevertheless, Nvidia seems to be aware that some of the more enthusiastic users in that group have the budget for the Titan category. Titan cards offer a lot more VRAM, which makes them much better suited for rendering than the pure gaming cards.

    There is no indication that Nvidia has any plans to completely change their lineup.

    - - -

    So the question is: what is the difference between the 1080, Titan, Tesla, and Quadro?

    As the current rumors go:

    - the Pascal-generation Titan may not use HBM2 but GDDR5X, the same as the 1080.

    - the Pascal Titan may have more memory than the 1080 (either 14 or 16 GB of VRAM), but less than the Tesla and Quadro cards (24+ GB).

    - the Pascal Titan may use the GP102 chip, which may still be 50 percent faster than the GP104 used in the 1080, but slower than the GP100 used in Tesla and Quadro.

    Source:

    http://www.pcgameshardware.de/Nvidia-Pascal-Hardware-261713/News/GTX-Titan-2-Nachfolger-Pascal-50-Prozent-schneller-1080-1197623/

    - - -

    To put it differently, you get as much performance as you pay for.

    Based on the currently available data, the performance of the GP100, GP102, and GP104 seems to scale in a reasonable way relative to the estimated price tags. Keep in mind there may also be a GP106 on the lower end of the spectrum, which will probably no longer be that useful for rendering.

    In any case, if you are not in that much of a rush to get a 1080, WAIT a few months until at least the Iray and OctaneRender benchmarks are out.

    -> Personally, I would not be surprised if Nvidia announces the Pascal Titan at pretty much the same time Iray becomes able to support Pascal cards for rendering.

    - - -

  • ANGELREAPER1972 Posts: 4,555

    Not many could afford those really expensive cards; those are aimed more at high-end professionals/studios, not the rest of us. The Titans are our high end, for those who can afford them. These new cards would be a more affordable high-end card for the average consumer: a compromise between price and grunt (power/speed/extras).

  • DAZ_Spooky Posts: 3,100

    The 1080 Ti is expected by the end of the year. It is likely to be worth waiting for, given the announced spec difference.


  • StratDragon Posts: 3,273
    edited June 2016


    DAZ partnered with, or rather licensed from, Nvidia, not the other way around. We represent a fraction of a percent of their user base and potential purchasing niche. I don't think Nvidia is sweating getting us CUDA drivers on any timeline other than when they're ready. In the meantime they are selling these things, and you can buy one and wait for CUDA, or wait for CUDA and then buy the cards when they are cheaper. The GP102 cards will hopefully be out by then, and they are set to be the CUDA workhorses of the batch; the 1080 and 1070 are being marketed to gamers, and that's where we can expect Nvidia to concentrate its resources, as gamers represent a big portion of its user base. Us 3D types are the minority, and CUDA 3D users are a minority within that minority.

  • hphoenix Posts: 1,335
    edited June 2016

    Oh, and you can only use two 1080s in SLI; any more won't work except for benchmark testing, according to Nvidia.

    Not true.  But you have to get a special 'key' from Nvidia (they are supposedly putting up, or have put up, a webpage where you can request the key).  The need to do this is a bit silly.  Evidently the 'key' is just a driver extension that enables 3- or 4-way SLI for the cards.  The reasoning was something along the lines of "well, for gaming, more than two actually has a detrimental effect on speed, so we locked it out.  But we are going to give you a way to unlock it."

    But yes, you CAN do 3- or 4-way SLI with 1000-series cards.  It just takes some extra steps...


    EDIT:

    Nope, they've decided to disable it except for benchmarking apps (and DX12, which bypasses the restrictions internally).  News broke on this Wednesday; I hadn't seen it until now.  Nvidia are now officially morons.  *sigh*

  • namffuak Posts: 4,409
    hphoenix said:

    Oh, and you can only use two 1080s in SLI; any more won't work except for benchmark testing, according to Nvidia.

    Not true.  But you have to get a special 'key' from Nvidia (they are supposedly putting up, or have put up, a webpage where you can request the key).  The need to do this is a bit silly.  Evidently the 'key' is just a driver extension that enables 3- or 4-way SLI for the cards.  The reasoning was something along the lines of "well, for gaming, more than two actually has a detrimental effect on speed, so we locked it out.  But we are going to give you a way to unlock it."

    But yes, you CAN do 3- or 4-way SLI with 1000-series cards.  It just takes some extra steps...


    EDIT:

    Nope, they've decided to disable it except for benchmarking apps (and DX12, which bypasses the restrictions internally).  News broke on this Wednesday; I hadn't seen it until now.  Nvidia are now officially morons.  *sigh*


    And for rendering in Iray, SLI is a non-issue: Nvidia recommends NOT using SLI for Iray.

  • outrider42 Posts: 3,679


    DAZ partnered with, or rather licensed from, Nvidia, not the other way around. We represent a fraction of a percent of their user base and potential purchasing niche. I don't think Nvidia is sweating getting us CUDA drivers on any timeline other than when they're ready. In the meantime they are selling these things, and you can buy one and wait for CUDA, or wait for CUDA and then buy the cards when they are cheaper. The GP102 cards will hopefully be out by then, and they are set to be the CUDA workhorses of the batch; the 1080 and 1070 are being marketed to gamers, and that's where we can expect Nvidia to concentrate its resources, as gamers represent a big portion of its user base. Us 3D types are the minority, and CUDA 3D users are a minority within that minority.


    I don't know about that. You have to consider the new indie scene that is cropping up, as Nvidia does. Indie games, indie artists, and so on. There are a lot of things that use CUDA. These small developers and users may not have the budget for expensive Quadro cards in their machines. Ignoring the indies would be a costly mistake. There are other renderers besides Iray, and those can use AMD cards. And if AMD makes decent cards that go for cheap, they will take that group.

    There's a reason why the x70 and x80 lines of any series sell millions more than any Titan or Quadro. I think the hobbyist market is a lot bigger than you think.

  • eric susch Posts: 135

    The Quadro cards aren't as powerful or fast.  They're designed to be very stable, cool, and quiet for use in a business environment.  I have a Quadro K5000 and a Titan X.  The K5000 is fine, but the Titan X, at a third of the price, runs circles around it when rendering in Iray.  It's also really loud and you can cook bacon on it.  I have to keep the door to my system open with an external fan on it or it overheats and throttles down, but I don't care.  I'm tired of waiting.  ;-)

  • Quadro or GeForce? Which is best for rendering?

  • Sorry, another question: is it a good idea to run two or more renders at the same time in order to save time? I only have a 970.

  • linvanchene Posts: 1,386
    edited June 2016

    Quadro or GeForce? Which is best for rendering?


    This is the state of OctaneBench as of today, 14 June 2016:

    [Screenshot: OctaneBench single-GPU results, 14 June 2016]

    Source:

    https://render.otoy.com/octanebench/results.php?sort_by=avg&filter=&singleGPU=1

    The GTX Titan X scores 126 points. (12 GB VRAM, ~$1,000-1,200, 171 results)

    The 980 Ti scores 126 points. (6 GB VRAM, ~$550-700, 391 results)

    The Quadro M6000 scores 122 points. (12 GB VRAM, ~$4,800, 9 results)

    Prices are those currently paid in CHF, converted to US dollars; prices in other countries may differ, but the general relative price classes should stay the same.


    -> The Quadro scores less than a GTX Titan X or a 980 Ti, and costs more.

    -> The 980 Ti scores the same in speed but has only half the VRAM of the Titan X and Quadro M6000.

    -> The 980 Ti seems to be the most popular card, with 391 submitted benchmark results. (A rough price/performance calculation follows below.)
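
    To make the price-versus-speed relationship concrete, here is a rough points-per-dollar estimate using mid-range figures from the prices listed above (my own back-of-the-envelope arithmetic, not data from the benchmark page):

    980 Ti: 126 points / ~$625 ≈ 0.20 points per dollar

    GTX Titan X: 126 points / ~$1,100 ≈ 0.11 points per dollar

    Quadro M6000: 122 points / ~$4,800 ≈ 0.025 points per dollar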

    - - -

    The Nvidia Quadro page gives an indication of the intended applications:

    http://www.nvidia.com/object/quadro.html

    Let's leave it at that...

    - - -

    To summarize:

    - wait for the 1080 Ti to get a great price-to-render-speed ratio with a bit more VRAM

    - wait for the Pascal Titan if you are looking for more VRAM

    - wait for the Pascal Quadro if you are looking for a lot more VRAM

    - buy a 1080 now if you want to play your games at Ultra HD resolution for some time and are happy with 8 GB of VRAM

    - buy a 1070 now if you are happy with standard-resolution gaming and 8 GB of VRAM

    - wait for the 1060 if you are looking for a more affordable deal that is suitable for gaming and could also be used for rendering

  • Nyghtfall3D Posts: 813

    I saw a few threads on the DAZ forum. All the posts were "980 Ti is much better because bla bla bla" and then "No, the 1080 is the best because bla bla bla".

    So... 980 Ti or 1080? Be short, please!

    The answer depends on your current specs, how much more rendering power you want, what you can afford, and how patient you are.

    My Specs:

    3.5 GHz Core i7-4770K
    32 GB RAM
    Windows 10 Pro

    I upgraded to DS 4.8 and switched to Iray as my engine of choice last June.  Until yesterday, I had a single 6GB GTX 780 that served as both my display driver and renderer.  The problem is, Iray is such a resource hog that I always had to tell DS to use all but one of my cores when rendering, lest my PC slow down to a crawl and make it impossible for me to do anything else.  It was annoying.

    Pascal isn't ready for Iray, and current forecasts by Nvidia put Iray compatibility several months down the road.  Even then, there are still going to be questions concerning pricing and stability.

    I decided not to wait, but I can't afford a 12 GB Titan, so I took advantage of a $70 discount Amazon.com is offering on 6 GB 980 Tis and bought one for $600.  It's now my display driver, and my 780 provides additional CUDA cores.

    The performance gain is absolutely fantastic.  The boost in render speed was worth every penny, and my rig runs smooth as silk even while I surf the web.  I do experience minor lag while streaming HD video on Netflix and Hulu, but nothing intolerable.

  • StratDragon Posts: 3,273
    edited June 2016

    There's a reason why the x70 and x80 lines of any series sell millions more than any Titan or Quadro. I think the hobbyist market is a lot bigger than you think.

    The overwhelming majority of video card owners use them for games. PC game sales in 2015 were $32,000,000,000.00 (1, 2); that number does not include mobile or console, that's just home computers. If the market for 3D users is bigger than I think it is, where are those tools, and why are we waiting for them? My $0.02: the focus right now is on gamers, because that's where the money is, and that is what will drive the market as a primary concern.

    Sales of x70/x80 cards have very little to do with 3D hobbyists; these are inexpensive aftermarket cards that work well for gamers, and they commonly ship with desktops and laptops. That is hardly an argument for the hobbyist niche being a dominant force developers should market these cards to at this time. An estimated 51% of Americans play video games (3); the number of hobbyists may be more than I think, but I think it's also safe to say it's nowhere close to that number, and likely in the single digits. If we weren't a minority, we would already have been able to use these cards for our needs at release, and we are still waiting. And while there are developers outside of gaming who leverage GPU power for specific tasks like AI or encryption, not all of them need CUDA or any rendering software either; that group is another likely candidate to have a user base that exceeds us 3D graphic artists and indie users, and to get preferential treatment when it comes to development.

    (1) http://www.ign.com/articles/2016/01/27/pc-dominated-worldwide-game-revenue-in-2015

    (2) http://arstechnica.com/gaming/2016/01/dont-look-now-but-the-pc-is-the-worlds-biggest-gaming-platform/

    (3) http://www.gamasutra.com/blogs/UlyanaChernyak/20140527/218626/Video_Game_Market_Overview_Console_vs_PC_vs_Mobile.php

  • Thanks for all the answers!
