Nvidia 1080 question

How much of a jump will it be from my 980 Ti to that? Would it be hugely noticeable or anything?


Comments

  • Kevin Sanderson Posts: 1,643

    Nobody knows, as the 1080 doesn't have a driver yet that works with Iray. It's being worked on by Nvidia. Your 980 Ti is really good.

  • Really? I'm surprised that it doesn't have an Iray driver.

  • nicstt Posts: 11,715

    How much of a jump will it be from my 980 Ti to that? Would it be hugely noticeable or anything?

    It will be much worse, about the same as your CPU.

    In other words, until Nvidia upgrades the drivers, no one knows. Why are you surprised?

    Gaming is the primary market for these cards, and the drivers aren't optimised for rendering yet.

  • Kevin Sanderson Posts: 1,643

    It was released to take advantage of the VR craze and games. But as happened with the 900 series, the driver for CUDA rendering is being released later. Many Octane Render users waited and stuck with the 700 series until the 900 series driver was done, and some jumped to the Titan X instead. As nicstt said, gaming is the primary market for these consumer cards.

  • MEC4D Posts: 5,249
    edited June 2016

    According to GPU rendering benchmarks, if you use only one new 1080 for rendering it will be around 0.8 sec faster than a single standard 980 Ti. If you pair it with the 980 Ti for Iray rendering, you get only about a 50% jump over the speed you already have, which is still a very noticeable improvement in both rendering and the Iray viewport. But that comes only after the drivers for Iray are done, and even then there are two scenarios: the 1080 may end up faster or slower. It all depends on Nvidia's work on it; they haven't made it work yet because the performance was not good enough.

    The differences between the 1080, the 1070, a standard 980 Ti and a Titan X in GPU rendering are so small that the upgrade isn't even worth it unless you are looking for more memory. For that reason I got myself another Titan X SC, which is not only faster but has 4 GB more memory...
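    The pairing arithmetic above can be sketched with a simple model: when Iray renders on multiple devices, per-card iteration rates roughly add, scaled down by an efficiency factor for scheduling overhead. The relative rates and efficiency below are illustrative guesses, not benchmarks.

```python
# Idealized multi-GPU model for Iray: per-card iteration rates add,
# scaled by an efficiency factor for scheduling overhead. The relative
# rates below are hypothetical placeholders, not measured numbers.
RATES = {"980ti": 1.00, "1080": 1.10}  # relative iterations/sec (assumed)

def combined_speedup(cards, efficiency=1.0, baseline="980ti"):
    """Speedup of a multi-card rig over a single baseline card."""
    total = sum(RATES[c] for c in cards) * efficiency
    return total / RATES[baseline]

# A lone hypothetical 1080 vs a lone 980 Ti:
single = combined_speedup(["1080"])
# Both cards together, assuming ideal scaling:
paired = combined_speedup(["980ti", "1080"])
```

    Real pairings fall short of the ideal sum, which is consistent with the smaller jumps reported above; lowering `efficiency` models that shortfall.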

     

    How much of a jump will it be from my 980 Ti to that? Would it be hugely noticeable or anything?

     

    Post edited by MEC4D on
  • Jan19 Posts: 1,109

    That 980 card is powerful, according to my local computer tech.  It's the one she liked but she was slightly appalled at the price, along w/myself. :-)

    I read about those new...is it Pascal?...cards.  I wondered if they'd work w/IRay.

    I keep wavering between a new video card and more memory.  I wonder which would speed up monitor refresh rate/render times most efficiently.   

  • MEC4D Posts: 5,249
    edited June 2016

    Not one of them in rendering. So unless you are looking for more memory or playing games, then yes, but that is all. The 980 Ti and Titan X use less power in rendering, around 150-160 W, than the 1080, which due to its higher clock speed needs everything it has, which is 180 W.

    Below are the GPU rendering times for all the dual cards; as you can see, not much difference.

    Jan19 said:

    That 980 card is powerful, according to my local computer tech.  It's the one she liked but she was slightly appalled at the price, along w/myself. :-)

    I read about those new...is it Pascal?...cards.  I wondered if they'd work w/IRay.

    I keep wavering between a new video card and more memory.  I wonder which would speed up monitor refresh rate/render times most efficiently.   

     

    pic_disp.jpg
    650 x 383 - 46K
    Post edited by MEC4D on
  • Jan19 Posts: 1,109
    MEC4D said:

    Not one of them in rendering. So unless you are looking for more memory or playing games, then yes, but that is all. The 980 Ti and Titan X use less power in rendering, around 150-160 W, than the 1080, which due to its higher clock speed needs everything it has, which is 180 W.

    Below are the GPU rendering times for all the dual cards; as you can see, not much difference.

    Jan19 said:

    That 980 card is powerful, according to my local computer tech.  It's the one she liked but she was slightly appalled at the price, along w/myself. :-)

    I read about those new...is it Pascal?...cards.  I wondered if they'd work w/IRay.

    I keep wavering between a new video card and more memory.  I wonder which would speed up monitor refresh rate/render times most efficiently. 

     

    Thanks very much, Cath. :-)

     

  • hphoenix Posts: 1,335
    MEC4D said:

    Not one of them in rendering. So unless you are looking for more memory or playing games, then yes, but that is all. The 980 Ti and Titan X use less power in rendering, around 150-160 W, than the 1080, which due to its higher clock speed needs everything it has, which is 180 W.

    Below are the GPU rendering times for all the dual cards; as you can see, not much difference.

    Jan19 said:

    That 980 card is powerful, according to my local computer tech.  It's the one she liked but she was slightly appalled at the price, along w/myself. :-)

    I read about those new...is it Pascal?...cards.  I wondered if they'd work w/IRay.

    I keep wavering between a new video card and more memory.  I wonder which would speed up monitor refresh rate/render times most efficiently.   

     

    That graph is for Premiere Pro 4K video encoding, not 3D work or Iray. It uses the video acceleration portion of the GPU, not CUDA. This is why that graph shows the cards as virtually identical (the only real difference is bandwidth, with faster cards getting a slight edge, as do higher bus-width cards). Those graphs really didn't demonstrate CUDA/architecture/speed differences very much. They were a bit deceptive; they are really about handling 4K (2160p) monitors and 4K (2160p) video.

    We'll have to wait for more market penetration to start getting reliable and consistent averages on the actual CUDA/3D performance of the new 1080/1070 cards vs. the last generation.

    Now if they'll get their damn production issues fixed (or stop trying to keep them artificially small to boost demand) and let us get our hands on them....

    (hmm....that may be why they're keeping the production low.  So they have time to fix driver issues and such BEFORE too many people have them, so there isn't a huge kerfuffle about them not 'measuring up' to the hype....)

  • kyoto kid Posts: 42,013

    ...the main issue for me with Iray is memory, not just clock/core speed. The more memory the better, as unlike Octane, Iray dumps the entire process to the CPU (not just the remaining texture load) if the file exceeds GPU memory. Should the commercial HBM 2 cards be released within the next year, I would expect the 1000 series "Titan" brand to have 16 GB.
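    Because Iray drops the whole render to the CPU when a scene doesn't fit, a back-of-envelope VRAM estimate before rendering is worthwhile. The sketch below is a rough model under stated assumptions: uncompressed 8-bit textures, roughly 32 bytes per triangle for geometry, and a fixed overhead for the framebuffer and renderer state; none of these figures come from Iray itself.

```python
# Hedged VRAM estimate for a GPU renderer that falls back to CPU when
# the scene exceeds card memory. All per-item sizes are rough guesses.
def texture_mb(width, height, channels=3, bytes_per_channel=1):
    """Approximate size of one uncompressed texture in megabytes."""
    return width * height * channels * bytes_per_channel / 1024**2

def scene_fits(textures, triangle_count, vram_gb, overhead_gb=1.0):
    """textures: list of (width, height) tuples.

    ~32 bytes/triangle and 1 GB overhead are illustrative assumptions.
    """
    tex_gb = sum(texture_mb(w, h) for w, h in textures) / 1024
    geo_gb = triangle_count * 32 / 1024**3
    return tex_gb + geo_gb + overhead_gb <= vram_gb

# e.g. twenty 4K maps plus 2M triangles on a 6 GB card:
fits = scene_fits([(4096, 4096)] * 20, 2_000_000, vram_gb=6)
```

    Even a crude check like this flags the scenes that would silently fall back to a far slower CPU render.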

  • 3delinquent Posts: 355

    ...I was wondering if there might be a 'ti' for the 1000 series that might have as much as 12gb? Any up in memory would be good, 16gb would be awesome at a price similar to the current Titan...

  • MEC4D Posts: 5,249

    I am not sure Nvidia wants to go there. You don't need 16 GB to run a game, and since the GTX is a gaming card and not a workstation card, I don't see it coming soon. If that were the case, the 1080 would have 12 GB already.

    ...I was wondering if there might be a 'ti' for the 1000 series that might have as much as 12gb? Any up in memory would be good, 16gb would be awesome at a price similar to the current Titan...

     

  • MEC4D Posts: 5,249

    It is more than a driver issue; the cards work with rendering already. It is the Iray software that needs to adapt to the new architecture, plus a fix for OptiX, which doesn't run with CUDA 8, and code that needs to be adjusted. That's the reason OptiX delays my rendering in Iray, since I run CUDA 8 drivers on my system. I am a member of the Nvidia developer program; we were already talking about that, so don't expect wonders.

    And again, Adobe Premiere uses CUDA and OpenCL, as I use it, and there will be no magic driver that improves anything here.

    hphoenix said:
    MEC4D said:

    Not one of them in rendering. So unless you are looking for more memory or playing games, then yes, but that is all. The 980 Ti and Titan X use less power in rendering, around 150-160 W, than the 1080, which due to its higher clock speed needs everything it has, which is 180 W.

    Below are the GPU rendering times for all the dual cards; as you can see, not much difference.

    Jan19 said:

    That 980 card is powerful, according to my local computer tech.  It's the one she liked but she was slightly appalled at the price, along w/myself. :-)

    I read about those new...is it Pascal?...cards.  I wondered if they'd work w/IRay.

    I keep wavering between a new video card and more memory.  I wonder which would speed up monitor refresh rate/render times most efficiently.   

     

    That graph is for Premiere Pro 4K video encoding, not 3D work or Iray. It uses the video acceleration portion of the GPU, not CUDA. This is why that graph shows the cards as virtually identical (the only real difference is bandwidth, with faster cards getting a slight edge, as do higher bus-width cards). Those graphs really didn't demonstrate CUDA/architecture/speed differences very much. They were a bit deceptive; they are really about handling 4K (2160p) monitors and 4K (2160p) video.

    We'll have to wait for more market penetration to start getting reliable and consistent averages on the actual CUDA/3D performance of the new 1080/1070 cards vs. the last generation.

    Now if they'll get their damn production issues fixed (or stop trying to keep them artificially small to boost demand) and let us get our hands on them....

    (hmm....that may be why they're keeping the production low.  So they have time to fix driver issues and such BEFORE too many people have them, so there isn't a huge kerfuffle about them not 'measuring up' to the hype....)

     

  • MEC4D Posts: 5,249

    I want to add that people focus on the number of iterations in a short rendering time, thinking it will make their renders look better. Faster rendering in Iray doesn't mean you get better quality. With longer rendering time, fewer iterations and full optimization you will have better quality and less noise, and that is where you need the GPU to be fast, so you are using the program optimally. But of course for one person McDonald's food is already good enough; for others it isn't.
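    The point about iterations versus quality follows from how Monte Carlo path tracers converge: remaining noise falls roughly as one over the square root of the iteration count, so raw iteration speed is a poor proxy for image quality. A minimal sketch of that idealized model:

```python
import math

# Monte Carlo convergence sketch: path-tracer noise falls roughly as
# 1 / sqrt(iterations), so chasing raw iteration counts has sharply
# diminishing returns -- halving the noise costs 4x the iterations.
def relative_noise(iterations):
    """Noise level relative to a single iteration (idealized model)."""
    return 1.0 / math.sqrt(iterations)

def iterations_to_halve_noise(current_iterations):
    """Iterations needed for noise to drop to half its current level."""
    return current_iterations * 4
```

    This is why scene optimization (lights, materials, textures) often buys more visible quality than a faster card: it lowers the noise per iteration rather than fighting the square-root wall.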

  • HorusRa Posts: 1,665
    MEC4D said:

    I want to add that people focus on the number of iterations in a short rendering time, thinking it will make their renders look better. Faster rendering in Iray doesn't mean you get better quality. With longer rendering time, fewer iterations and full optimization you will have better quality and less noise and

    Word!

  • kyoto kid Posts: 42,013
    MEC4D said:

    I am not sure Nvidia wants to go there. You don't need 16 GB to run a game, and since the GTX is a gaming card and not a workstation card, I don't see it coming soon. If that were the case, the 1080 would have 12 GB already.

    ...I was wondering if there might be a 'ti' for the 1000 series that might have as much as 12gb? Any up in memory would be good, 16gb would be awesome at a price similar to the current Titan...

     

    ...if it becomes cost effective, I think they would, particularly if game texture resolution becomes more detailed. I have read several articles that question why you would need more than 4-6 GB for gaming, yet the 12 GB Titan X keeps selling. I'm of the "wait and see" school.

    One article I read mentioned that the 32 GB HBM 2 card would most likely be for professional work (Quadro), with 16 GB being the top end for the consumer market. With HBM 2, memory increases are in steps of 4, so 4, 8, 16, 32.

  • hphoenix Posts: 1,335
    kyoto kid said:
    MEC4D said:

    I am not sure Nvidia wants to go there. You don't need 16 GB to run a game, and since the GTX is a gaming card and not a workstation card, I don't see it coming soon. If that were the case, the 1080 would have 12 GB already.

    ...I was wondering if there might be a 'ti' for the 1000 series that might have as much as 12gb? Any up in memory would be good, 16gb would be awesome at a price similar to the current Titan...

     

    ...if it becomes cost effective, I think they would, particularly if game texture resolution becomes more detailed. I have read several articles that question why you would need more than 4-6 GB for gaming, yet the 12 GB Titan X keeps selling. I'm of the "wait and see" school.

    One article I read mentioned that the 32 GB HBM 2 card would most likely be for professional work (Quadro), with 16 GB being the top end for the consumer market. With HBM 2, memory increases are in steps of 4, so 4, 8, 16, 32.

    Also, there is at least one game out that recommends 5 GB of VRAM. If the memory is there, they'll start using it. Now that the 1000 series is out with 8 GB, I expect game engine writers and game developers will start designing and coding to use the additional memory to improve gaming realism. It won't be long before we see games recommending 7 GB of VRAM...

     

  • MEC4D Posts: 5,249

    We are going to laugh at all these discussions in the future. Of course games will use more memory eventually, when systems become cheap enough that everyone can afford it, but today not many people can do it, and adding more video memory means your system needs to be upgraded too. Not so long ago everyone was dreaming about multiple GTX 590s to use with Octane; today we have other dreams, and tomorrow things will change again.

    But the facts for today are different, and it is still too expensive for the masses.

    hphoenix said:
    kyoto kid said:
    MEC4D said:

    I am not sure Nvidia wants to go there. You don't need 16 GB to run a game, and since the GTX is a gaming card and not a workstation card, I don't see it coming soon. If that were the case, the 1080 would have 12 GB already.

    ...I was wondering if there might be a 'ti' for the 1000 series that might have as much as 12gb? Any up in memory would be good, 16gb would be awesome at a price similar to the current Titan...

     

    ...if it becomes cost effective, I think they would, particularly if game texture resolution becomes more detailed. I have read several articles that question why you would need more than 4-6 GB for gaming, yet the 12 GB Titan X keeps selling. I'm of the "wait and see" school.

    One article I read mentioned that the 32 GB HBM 2 card would most likely be for professional work (Quadro), with 16 GB being the top end for the consumer market. With HBM 2, memory increases are in steps of 4, so 4, 8, 16, 32.

    Also, there is at least one game out that recommends 5 GB of VRAM. If the memory is there, they'll start using it. Now that the 1000 series is out with 8 GB, I expect game engine writers and game developers will start designing and coding to use the additional memory to improve gaming realism. It won't be long before we see games recommending 7 GB of VRAM...

     

     

  • kyoto kid Posts: 42,013

    ...well, if they don't offer more than 8 GB in the 1000 series, then it looks like the older generation Titan X will have to be the solution. I'd rather have more GPU memory to avoid the process dumping to the CPU. Again, it will still be better than CPU rendering.

  • MEC4D Posts: 5,249

    For that reason alone I kept my 12 GB intact on all cards, as I prefer to have extra memory just in case. The other day I made a 17 million poly HD morph for Genesis where each small skin structure was built into the morph in place of the maps; that is 629 sub-d Genesis figures. Guess what, everything rendered fine. It just needed extra loading time before rendering, but once it was loaded to the card things went smoothly.

    I always wanted to do that. One day it will be the standard, when our systems and cards run 1 TB of RAM and we can throw in whatever we want, with per-vertex color info, and skip the maps forever.
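    The memory appetite of HD morphs like the one described above comes from subdivision growth: each Catmull-Clark level roughly quadruples the quad count, and a morph stores a 3-float offset per vertex. The base count and byte sizes below are illustrative assumptions, not Genesis specifics.

```python
# Subdivision growth sketch: Catmull-Clark quadruples quads per level,
# so HD morph memory climbs fast. Base count is a made-up figure.
def subdivided_quads(base_quads, level):
    """Quad count after `level` rounds of Catmull-Clark subdivision."""
    return base_quads * 4 ** level

def morph_megabytes(base_quads, level, bytes_per_vertex=12):
    """Rough morph size: one 3-float (12-byte) offset per vertex,
    approximating vertex count by quad count at high subdivision."""
    return subdivided_quads(base_quads, level) * bytes_per_vertex / 1024**2

# A hypothetical 17k-quad base mesh at sub-d level 2:
quads = subdivided_quads(17_000, 2)
size_mb = morph_megabytes(17_000, 2)
```

    Multiply that by hundreds of figures' worth of detail and it is clear why 12 GB cards earn their keep for this workflow.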

    kyoto kid said:

    ...well, if they don't offer more than 8 GB in the 1000 series, then it looks like the older generation Titan X will have to be the solution. I'd rather have more GPU memory to avoid the process dumping to the CPU. Again, it will still be better than CPU rendering.

     

  • linvanchene Posts: 1,386
    edited June 2016

    Update/ Edit:

    It makes more sense to post this in the windows 10 thread.

    Moved it myself...

    Post edited by linvanchene on
  • kyoto kid Posts: 42,013
    MEC4D said:

    For that reason alone I kept my 12 GB intact on all cards, as I prefer to have extra memory just in case. The other day I made a 17 million poly HD morph for Genesis where each small skin structure was built into the morph in place of the maps; that is 629 sub-d Genesis figures. Guess what, everything rendered fine. It just needed extra loading time before rendering, but once it was loaded to the card things went smoothly.

    I always wanted to do that. One day it will be the standard, when our systems and cards run 1 TB of RAM and we can throw in whatever we want, with per-vertex color info, and skip the maps forever.

    kyoto kid said:

    ...well, if they don't offer more than 8 GB in the 1000 series, then it looks like the older generation Titan X will have to be the solution. I'd rather have more GPU memory to avoid the process dumping to the CPU. Again, it will still be better than CPU rendering.

     

    ...though it would still be nice to see a Pascal GPU with 16 GB of HBM 2. I guess they could do a 12 GB one, as it just depends on the stacking (12 GB would be 4 stacks of 3 chips each, but then why not just go to stacks of 4, since the card was designed to handle it?)

  • boisselazon Posts: 458

    According to the new structure of the Pascal cards, the ideal is 16 GB of HBM2 and two cards that share this memory, making a single 32 GB memory block, if possible with a GP100 (or 102) with all CUDA cores activated. This hardware already exists, but the software/driver part doesn't. Add to this the marketing layer: if this goes live, the more-than-16 GB cards will be (almost) useless, so they'd rather not go too far and too soon in this direction.

  • MEC4D Posts: 5,249

    It is all possible, but sadly not for average customers yet, as besides your VRAM you need to spend money on your system to handle it smoothly. My system has 40 GB of VRAM, of which 12 GB is usable in Iray and 4 GB serves as system video memory; the rest is not really usable but still cost no less, not to mention the system costs to support it all. A waste of money, resources and energy, but there is no other way at the moment.

    Maybe in the next 4-5 years we will have the one and only card that supports our needs, cost and energy efficient, that everyone can afford. And the Titan X 12 GB will sell for like $55, lol, as nobody will want it anymore.

  • kyoto kid Posts: 42,013

    According to the new structure of the Pascal cards, the ideal is 16 GB of HBM2 and two cards that share this memory, making a single 32 GB memory block, if possible with a GP100 (or 102) with all CUDA cores activated. This hardware already exists, but the software/driver part doesn't. Add to this the marketing layer: if this goes live, the more-than-16 GB cards will be (almost) useless, so they'd rather not go too far and too soon in this direction.

    ...one report from a fairly reputable source that I read mentioned that the 32 GB version would most likely be reserved for the "Professional" (e.g. Quadro) GPUs. Considering they upgraded the M6000's memory to 24 GB but not the Titan X's, it would make sense. So we get a 16 GB Pascal "Titan" and a 32 GB Quadro "P6000", with Volta being for the next generation Tesla compute units.

  • kyoto kid Posts: 42,013
    edited June 2016
    MEC4D said:

    It is all possible, but sadly not for average customers yet, as besides your VRAM you need to spend money on your system to handle it smoothly. My system has 40 GB of VRAM, of which 12 GB is usable in Iray and 4 GB serves as system video memory; the rest is not really usable but still cost no less, not to mention the system costs to support it all. A waste of money, resources and energy, but there is no other way at the moment.

    Maybe in the next 4-5 years we will have the one and only card that supports our needs, cost and energy efficient, that everyone can afford. And the Titan X 12 GB will sell for like $55, lol, as nobody will want it anymore.

    ...yeah, the memory issue is a real stickler. Too bad Nvidia, or someone else, doesn't develop cards that just add CUDA cores (without the memory) for boosting render speed. Yeah, I know we are "small potatoes" compared to the gaming or professional 3D markets, but it would be nice not having to spend $1,000 per extra card, only a portion of whose resources we use, just to reduce render time.

    Post edited by kyoto kid on
  • ebf Posts: 0

    I know it is a Linux test, but it looks promising for the GTX 1070:

    www.phoronix.com/scan.php?page=article&item=nvidia-gtx-1070&num=4

  • kyoto kid Posts: 42,013

    ...hmm, I get a Magento warning that the link is forbidden.

  • ANGELREAPER1972 Posts: 4,560

    Got a forbidden too. They wanna keep it a secret, I guess.

  • kyoto kid Posts: 42,013

    ...you can copy the URL to your browser's search field and it will open.
