Semi-OT - nVidia Pascal card debut has finished, GTX 1080 and GTX 1070 announced!


Comments

  • MEC4D Posts: 5,249

    From an iray rendering benchmark I got from a friend who is a programmer.

    MEC4D said:

    Rendering iray scene with the same card type
    1 x GTX Titan X 12GB - 100.5 s
    2 x GTX Titan X 12GB - 62.25s
    3 x GTX Titan X 12GB - 44.25s
    4 x GTX Titan X 12GB - 26.15s

    4 cards look to be nearly 4 times faster than one. That's pretty linear. Where are those figures from?

  • MEC4D Posts: 5,249

    With my case I just plug the card in without any screws .. a 6-second job, about as much work as swapping an HD.

    I think the 1070 will see more business from CUDA users because of its price and the size of its video memory. I never upgraded to 980s because they were overpriced. People don't need bleeding edge, they just need function. I slowly bought three Nvidia 780 6GB editions. They only had 2304 cores instead of the 2880 cores of a 780 Ti, but I got 6GB of video RAM to load textures into. I wish Nvidia would just put upgradable slots on their cards, so if you need more memory you could add a chip to double it. But this is not going to happen! They could do it, but it would cut into card sales and that is just bad business. All those 780 Tis are great cards, they just need a boost in memory size. Wouldn't it be great if there was a company out there that recycled video cards by upgrading their memory capacity. Just my 2 cents.

    Luckily I can go for the 1080s or Titans, but yeah, it would be great if they had easy-to-use slots so you could plug in upgrades easily, like those hard drive cages where you can just plug in a drive without connecting cables or taking things apart, or like plugging game carts into the old consoles before they moved to discs.

     

  • MEC4D Posts: 5,249
    edited May 2016

    Nope .. when you have the iray viewport open, the calculations and the transfer are already done, so rendering is instant the moment you hit the render button.

    2 identical GPUs DON'T render 2x faster, as that is not possible with the iray software at this moment; for that reason iray is number 3 on the list of the fastest unbiased GPU renderers.

    Just a note about the Mec4D performance bench: the time seems to be the whole calculation time, including the initial calculation before the transfer to the nvidia cards.

    This initial period of time is roughly independent of the GPU. The iray render starts after that.

    You should be aware that:

    - if you only do small renders, you won't get big improvements in the whole calculation time, because this initialization is a big % of the whole render time.

    - if you render big stuff, then this initial setup time is a small % of the whole render time, and then the gain of adding cards/GPUs is pretty much linear (2 identical GPUs render 2x faster; 4 GPUs, 4 times faster...)
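
    To put rough numbers on that fixed-overhead point, here is a minimal Python sketch (the setup/render split is assumed purely for illustration, not measured from iray):

        # Minimal sketch: how a fixed scene-setup time eats into multi-GPU speedup.
        # The setup/render split below is an assumption for illustration only.

        def total_time(setup_s, render_s, n_gpus):
            """Setup runs once on the CPU; only the render part scales with GPUs."""
            return setup_s + render_s / n_gpus

        for n in (1, 2, 4):
            small = total_time(setup_s=60.0, render_s=240.0, n_gpus=n)   # "small" render
            big = total_time(setup_s=60.0, render_s=6000.0, n_gpus=n)    # "big" render
            print(f"{n} GPU(s): small scene {small:6.1f}s   big scene {big:7.1f}s")

        # Small scene: 4 GPUs only cut 300s to 120s (2.5x), because setup is 20% of the job.
        # Big scene: 4 GPUs cut 6060s to 1560s (3.9x) - nearly linear, as described above.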

     

     

    Post edited by MEC4D on
  • joseft Posts: 310

    Just a note about the Mec4D performance bench: the time seems to be the whole calculation time, including the initial calculation before the transfer to the nvidia cards.

    This initial period of time is roughly independent of the GPU. The iray render starts after that.

    You should be aware that:

    - if you only do small renders, you won't get big improvements in the whole calculation time, because this initialization is a big % of the whole render time.

    - if you render big stuff, then this initial setup time is a small % of the whole render time, and then the gain of adding cards/GPUs is pretty much linear (2 identical GPUs render 2x faster; 4 GPUs, 4 times faster...)

     

    Yes, I think many people don't factor in that initial loading time, which is the scene data transferring into your GPUs, and are led to believe that it causes a non-linear performance increase when adding a second identical card and so on.

    I did my own tests to check this: 1 Titan X alone, then 2 Titan X together. Then I also did 1 Titan X plus CPU, and both Titan X plus CPU. I ran these on a small scene (~5 minutes) and on a big scene (over 1 hour). I did not use the log file to get the times; I sat at my computer, watched it load the scene into the GPU, and started my own stopwatch as soon as the scene started to render. For the small scene, I sat and watched until it finished to get the time. For the big scene, I was multitasking (using a third video card to power my displays) while watching the render progress on another monitor.

    The results were just shy of twice as fast with both Titans working together. I don't remember the exact numbers, but the small scene was something like 4 minutes 50 seconds with both Titans and around 10 minutes 3 seconds with just the one Titan. The large scene was 1 hour 18 minutes with both Titans, and 2 hours 48 minutes with one Titan.

    So the performance increase from the extra card isn't quite 2x, but it's very close, something like 1.95 times faster. Not sure if that becomes less efficient going to a third one, though. I have a third card, but it's a different model, so including it would not give an accurate comparison.

    As for including the CPU: not worth it. They don't seem to play well together. From memory, all the tests where I included the CPU were slower than without it, except one (small scene, 1 Titan + CPU, which was faster by about 10 seconds over the same scene with one Titan alone).

  • Barubary Posts: 1,232
    edited May 2016

    so it'll probably be two weeks till we get any handy reviews. It's going to be a long two weeks. This is how I feel :)

    I know that feeling.

    It's like having Arnold Schwarzenegger inside of you! :D

    Post edited by Barubary on
  • hphoenix Posts: 1,335
    edited May 2016
    MEC4D said:

    Rendering iray scene with the same card type
    1 x GTX Titan X 12GB - 100.5 s
    2 x GTX Titan X 12GB - 62.25s
    3 x GTX Titan X 12GB - 44.25s
    4 x GTX Titan X 12GB - 26.15s

    2 Titan X cards don't render twice as fast , 3 cards will do
     

    As @PeterFulford noted, that looks like 4 Titans ARE rendering in 1/4 the time of a single.  The discrepancies between the dual/triple and single/quad results lead me to believe that the scheduler in the driver/Iray code isn't load-balancing the cards quite correctly.  It's as if some parts of the scene are more complex, and the rest of the cards have to idle while they wait for those parts to complete (an iteration) before continuing.  Not the best scheme, but easy to implement (it's just path-tracing different scene volumes per card.)  But since some parts of the volume may be more complex, the other cards end up waiting on completions from the slower sections.

    Without creating some carefully symmetrical scenes of various kinds to try to figure out how the Iray scheduler is splitting up the work between the cards, it's all conjecture.  But those numbers do seem to indicate something of that nature happening.
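
    One way to see why a naive static split would cause that idling is a toy simulation (the per-region costs below are invented; Iray's real scheduler is not public):

        # Toy comparison of static vs. dynamic division of scene regions across GPUs.
        # Region costs are made-up numbers, purely to illustrate the idle-time effect.

        regions = [3.0, 1.0, 1.0, 1.0, 4.0, 1.0, 1.5, 0.5]  # seconds of work each
        N_GPUS = 4

        # Static split: each card gets a fixed interleaved slice of the regions,
        # so every iteration waits for whichever card drew the heaviest slice.
        static_iter = max(sum(regions[i::N_GPUS]) for i in range(N_GPUS))

        # Dynamic split: cards pull the next unfinished region from a shared queue,
        # so the work ends up close to evenly balanced.
        dynamic_iter = sum(regions) / N_GPUS

        print(f"static per-iteration time : {static_iter:.2f}s")   # 7.00s; other cards wait
        print(f"dynamic per-iteration time: {dynamic_iter:.2f}s")  # 3.25s; ideal balance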

     

    Post edited by hphoenix on
  • hphoenix Posts: 1,335
    edited May 2016
    MEC4D said:

    That's the point, and the clock speed did not affect rendering in iray much, so 1200MHz or 1500MHz per GPU did nothing for rendering speed, it just improved the real-time viewport.

    So if the card is in reality slower than a 980Ti or Titan X at rendering with iray, you are going to waste the good card efficiency you already have, and speed up only the electric meter, for nothing.

    I need one more card for texturing, but I'll wait until EVGA comes out with a superclocked, water-cooled version .. and then I will see how it affects everything. If it is slower, I'll keep it only for texturing and monitors; if it is faster, I'll get a second one to match and finalize my iray rig. But at this moment it is all only [ IF ], as nobody knows the answer until May 27, or better say May 31 when the orders arrive - but not my order, as I am waiting for the SC and better version. If not, EVGA, you better give me my money ..

    I don't care about the game benchmarks, they mean nothing and have nothing to do with rendering in iray.

    If that was all true, you should see the Titan X drop in price to $800 next week, otherwise it's total BS. I wish they would; I'd snap up one more for the collection and forget about the 1080.

    There's a review of the 1080 with regard to gaming here: http://www.gamespot.com/articles/nvidia-geforce-gtx-1080-review/1100-6439863/

    It actually has fewer CUDA cores than the 980Ti or Titan. I think it's a great card if you're coming from a 980 or below, but for people like Cath (with two Titans, haha) or myself (with a 980Ti), it's not anything to rush out and buy.

     

    This is a bit disingenuous.  GPU core clock speed DOES affect GPU render speed.  But there are OTHER factors coming into play.  Multiple cards (as evidenced by your time benchmarks) may be incurring delays based on how the Iray scheduler divides the work between the cards.  And architectural differences (Maxwell vs. Pascal) may cause even more variation.

    Take a single Titan-X at a 1000MHz core clock.  Render the scene.  Reload DS/whatever, overclock the Titan-X to a 1250MHz core clock, and raise the memory clock by the same factor (25%).  Render the same scene again.  Monitor card temps to make sure no throttling occurs.  If you don't get around a 25% speed-up (discounting setup/load time), something is VERY wrong with basic digital electronics theory and physics.......
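
    A quick sanity check for that experiment (the clocks are the hypothetical ones above; the stock render time is an assumed figure, not a measurement):

        # Expected render-time change when core and memory clocks both rise 25%.
        stock_mhz, oc_mhz = 1000, 1250
        t_stock = 120.0  # assumed stock render time in seconds, setup/load excluded

        expected = t_stock * stock_mhz / oc_mhz
        print(f"expected overclocked time: ~{expected:.0f}s ({oc_mhz / stock_mhz:.2f}x)")

        # If the measured time stays far above ~96s and the card is not throttling,
        # something other than raw clock speed is limiting the render.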

    Post edited by hphoenix on
  • MEC4D Posts: 5,249

    I said there was no difference between my regular 1277MHz speed and 1500MHz with iray; there is a huge difference in speed between 1000MHz and 1277MHz. I have tested this stuff since last year with all possible scenarios, and I use a separate non-CUDA card for my monitors. I agree with you that clock speed does affect GPU rendering, as every idiot knows that, and not for nothing did I pay extra money for the superclocked version. But Autodesk stated there may be a software problem with GPU scaling that slows down the second card, and the same issue may be here, as I have no problem with other engines.

    But, but, but, good news: Autodesk has confirmed that there was a GPU scaling issue with the software and they got a patch to fix it. I was about to check what's new with the beta build of DS, and guess what? Everything runs faster. Not 195%, as that is a little BS - maybe for Quadro cards that have a special driver for iray, but not GTX - but it runs almost 20% faster than the 4.9.1 build. The viewport also moves faster: at 100msec I can already rotate the scene without it being choppy, where before I needed 300msec for smooth rotation (10 Genesis figures in the scene), and I just got around 70% extra speed with the second card. Also, the new Nvidia display driver no longer crashes when using Interactive mode, so I guess they did something good this time. Maybe it was the patch, or maybe by fixing the issue with Interactive mode and the display driver they improved the performance; the Uberiray shader changed as well .. but most important, it improved, and a lot. 4.9.1 was the worst build I've used anyway, so I am running on the Beta from now on.

    You see, I know people who have a faster CPU and a better rig than mine with the same cards, and still they did not render faster. Different shaders and light setups will affect the result too, so comparisons based on individual tests are questionable unless they come from a real benchmark measured in megapixels per second, not from counting the minutes and seconds in DS.

    My results are for my rig and my scene, and there is no way a second GPU will give you 100% extra performance, as that is not what Nvidia wanted you to have in the first place. They created Quadro cards with a special driver for iray, and they cost double. GTX cards are not built for rendering, even if a single card like the Titan X renders faster than a Quadro, and whether the GTX 1080 gets the key for that, we don't know. For Nvidia it would be very bad business, as nobody would ever buy a Quadro, Titan X or GTX 980Ti again unless they really needed the VRAM. iray is also not ready for the new card anyway.

     


  • hphoenix Posts: 1,335

    Just wanted to post these, since nVidia has posted the specs for the 1070 online now.

     

    nVidia 1070 GTX

    1920 CUDA cores

    1506MHz base core clock, 1683MHz boost core clock

    8GB GDDR5 VRAM, 256-bit wide memory bus (8Gbps memory speed, 256GB/sec memory bandwidth)

    Maximum resolution:  7680 x 4320 @60Hz

    Graphics Card Power:  150W

    Single 8-pin power connector.

     

    So, slightly less than was predicted/speculated (128 fewer CUDA cores, and a 100MHz lower clock speed.)  But still WAY better than the 970 GTX (about 15% more cores, and roughly 43% higher base core clock.)
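
    For anyone who wants to check that arithmetic (the 970 figures here - 1664 cores at a 1050MHz base clock - are from memory, so treat them as assumptions):

        # Rough 1070-vs-970 comparison from the specs above.
        cores_1070, clock_1070 = 1920, 1506
        cores_970, clock_970 = 1664, 1050   # assumed GTX 970 reference specs

        print(f"cores: +{cores_1070 / cores_970 - 1:.0%}")        # about +15%
        print(f"base clock: +{clock_1070 / clock_970 - 1:.0%}")   # about +43%

        # Naive cores-times-clock throughput ratio (ignores architecture changes):
        print(f"naive throughput: {cores_1070 * clock_1070 / (cores_970 * clock_970):.2f}x")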

     

  • MEC4D Posts: 5,249

    @hphoenix yeah, it falls exactly in place with one prediction diagram I saw a couple of days ago, plus it has more VRAM, so better for the money.

    But my pressure is already released, my budget spent, and no worries anymore.

    Just snapped up one more EVGA Titan X SC Hybrid, water cooled, for $1099.

    With a total of 9216 CUDA cores at 1277MHz, it is going to be a DS party on Friday lol

    I thought about it for a long time and came to the conclusion that it would be better for me to keep my VRAM at 12GB,

    so I can use it also for rendering bigger scenes together with my twins, and not just for GPU texturing and Photoshop work.

    I hope you guys find what is best for you. I am starting to save for next year's purchase, as I want 4 of them in total, but if the prices drop more, maybe early in December.

    It's getting harder to get a good deal on the Hybrids as the stock is so limited.

     

  • ANGELREAPER1972 Posts: 4,559

    wow MEC4D, you can have 10 Genesis figures in a scene, cool. I love doing big complex scenes, or at least I try to (well, most of the time they end up that way, I think), and you do highly detailed renders that look real. Of course the excellent shaders and other content you've made help too, but having a good rig like yours to get the most out of them also helps.

  • ANGELREAPER1972 Posts: 4,559

    Stats have been posted for the GTX 970, 980, Titan, 1070 and 1080 cards; I took screenshots. Still no news on how well they work for 3D editing, but those of you who understand these stats and what matters most for us could maybe help us noobs understand. The speed, cooling, power savings and price sound good, though the Titan still has the CUDA and VRAM advantage. Is that more important, or do the new cards have more than enough for very large complex scenes and 3D creation - ZBrush, Mudbox, Marvelous Designer, Substance Painter, etc.? Do these programs need or benefit from more powerful cards, as well as iray rendering at high settings? 1 or 2 1080s are a big saving over 1 or 2 Titans either way. Going for two cards - I was thinking 4 1080s, but now I know that won't work, or 3. Current prices haven't changed yet, so whether the current prices for 2x 1080s ($2762) and 2x Titans ($3799 Australian) will go up, down or stay the same, I don't know.

    [Attached: ten screenshots of the GPU spec comparisons, Image1.jpg - Image13.jpg]
  • joseft Posts: 310

    ANGELREAPER1972 said:

    [...] Current prices haven't changed yet, so whether the current prices for 2x 1080s ($2762) and 2x Titans ($3799 Australian) will go up, down or stay the same, I don't know.

    Just curious as to which Titans you are looking at? Those prices seem a little inflated... only 2 of the EVGA watercooled Titans would be around that price. I am in Australia as well, and my two Titans cost me 1600 Australian each, so 3200 total. They are the Gigabyte Xtreme version.

  • ANGELREAPER1972 Posts: 4,559
    joseft said:

     

    Just curious as to which Titans you are looking at? Those prices seem a little inflated... only 2 of the EVGA watercooled Titans would be around that price. I am in Australia as well, and my two Titans cost me 1600 Australian each, so 3200 total. They are the Gigabyte Xtreme version.

    On the Origin computer builds it only says dual 12GB GTX Titan X. Not many custom build sites I've seen offer Titans; mostly it's 980Tis at around $1099-$1149 (taspro) each. Other cards: Quadro M6000 $7499, M5000 $2969 (PLE); MWAVE's Titans are $1699 each and their 8GB M5000 is $2699; other sites' prices vary too, some even higher. PC Part Picker Titans are $1509-$1699. GR-Tek doesn't have Titans on their PCs built for photo/video editing; it's single cards only, with a 980Ti or, at best, a Quadro M5000 at $2850+.

    So overall, compared to the others, price-wise and for what they offer, it seems to come down to the two Origin builds with either 2 Titans or 2 1080s: the maxed-out 32GB Chronus Pro or the 64GB Genesis with 2 drives. If the 2 1080s were good enough I could probably afford extra drives, but if I went Chronus with either of the cards I could probably get other stuff, like ZBrush and more content here, obviously especially with the 1080s - again, if they are good enough compared to the Titans. I'm not even considering the 1070, even though before this I did consider the Alienware with 3 980Tis: expensive too, for hardly anything else, but still better than anything in our stores, Harvey Norman being our best main one.

     

  • MEC4D Posts: 5,249

    Well, the max I can work with is 50; then it starts to get choppy lol. At this moment I am upgrading my motherboard to 4-way SLI so my cards run optimally.

    My Titan Xs can run 50% faster than a standard Titan X, so the 1080 will be 30% slower than my cards unless superclocked - but still only in games that support it, and we don't even know if iray has been updated yet for this architecture to run optimally, since it was created to support games and realtime VR. Your viewport may spin faster, but whether the GPU scaling of the software does the same, nobody knows. Remember that the power limit of the 1080 is 250W, and that is how much it will use when superclocked. Conclusion here ... I stay with my cards and 12GB. I got the last superclocked water-cooled one for $1099 USD; the Titan X is so overpriced outside the US, no matter the version.

    ANGELREAPER1972 said:

    wow MEC4D, you can have 10 Genesis figures in a scene, cool [...]

     

  • kyoto kid Posts: 41,928
    kyoto kid said:

    ...the newest supercomputer being built is using the first HBM2 Pascal Tesla compute cards with 16 GB and a new interface developed by Nvidia called NVLink.  NVLink offers an extremely fat pipeline between GPU and CPU, as well as directly between multiple GPUs, compared to PCIe 3.0.

    Yes, but that's not a consumer card. You aren't going to see consumer cards carry HBM without a premium price attached. And looking at the specs for the 1080, I'm not sure that G5X is holding it back any at all. 1080 is a beast. 8 gb is still VERY Daz friendly. Nearly everyone who uses it has been working with less than that, considering the 980 ti only has 6. Maybe next year, but I can't afford to wait that long with an aging 670 with only 2 gb. I managed to deal with that limitation, I think I can deal with "only" 8 gb.

    Anyway, the reveal just wrapped.

    A single 1080 is faster than two Titan X in SLI. The 1070 is also faster than the Titan X. I didn't expect that.

    They also had their card running air cooled, clocked at over 2100MHz and running at 67C. Pascal is looking pretty sweet.

    GTX 1080, twice the power of a Titan X and 3 times more efficient, $599

    GTX 1070  $379

    More info here http://wccftech.com/nvidia-geforce-gtx-1080-launch/

    Now decision time. My 670 is starting to sweat bullets after this show. He's been a good friend for some time; we made it through the good times and the bad times. But it's about time to move on.

    ...agreed, the cost will most likely be higher (AMD's first HBM GPU was more expensive than a 980Ti and only had 4 GB of video memory).

    However, HBM2 will have a few major advantages to justify the price, like a smaller form factor (about half the length of a current-generation unit), lower power consumption, a better pipeline between the GPU processor and video memory, and the potential for more video memory (very important for 3D rendering).  The first commercial units to be released will no doubt be in the Quadro line, which could boast 32 GB of HBM2 (the Quadro M6000 was recently upgraded to 24 GB of GDDR5).  From what I have gathered, the 1070/1080's advantages will primarily benefit the gaming market rather than CG production.

    8 GB is "borderline" for my needs, as a fair number of my scenes exceed that during rendering. If a scene dumps to the CPU, it doesn't matter how fast the clock speed is or how many threads the GPU has.  This is why I am looking at the M6000 for now, as it will handle pretty much anything I throw at it.  It may not be as fast as the forthcoming Pascal/GDDR5X GPUs head to head, but not having to risk exceeding GPU memory is a major speed advantage as well.

    Also, as I understand it, SLI only benefits gaming, not CG rendering.

  • DustRider Posts: 2,888
    kyoto kid said:

    8 GB is "borderline" for my needs, as a fair number of my scenes exceed that during rendering. [...] This is why I am looking at the M6000 for now, as it will handle pretty much anything I throw at it. [...]

    Also, as I understand it, SLI only benefits gaming, not CG rendering.

    Keep in mind though that for the price of a Quadro M6000 with 24GB you could get 4 GTX 1080s, a new motherboard, case, power supply, RAM, CPU, and Octane Render with the DS plugin, and not have to worry about the amount of RAM on your GPU at all. Plus you would have all of the other features available in Octane Render 3.x that aren't available with Iray, like true volumetrics, motion blur, and hair (right now you would need the Carrara plugin for dynamic hair, but adding that would still keep you under the cost of the Quadro M6000).  You would also still be able to use Iray for smaller scenes.

    Just a thought. I'm sure most people would prefer the convenience of Iray, but there are other, more cost-effective options available for rendering large scenes on your GPU without breaking the bank on something like the M6000. In fact, you could render large scenes quite quickly using Octane on a system with even 4 older 2GB GTX 780 cards. Different strokes for different folks, but other, possibly more affordable and feature-rich, options are available rather than investing ~$4,500 in a single video card.

  • outrider42 Posts: 3,679
    I wonder if some of the tricks the 1000 series uses for VR could be applied to Iray. One key feature of their VR performance is that they are able to basically drop pixels in a scene that are not visible to the user, like stuff behind an object in front of you. If this kind of technique could be applied to rendering in Iray, it would increase performance by a huge factor. One of the problems with Iray is that it has to hold everything in the entire scene, which is very inefficient. We do know that Nvidia is making Iray for VR, so who knows. But if this did happen, the 1070 would murder the Titan X. Nvidia didn't disclose anything about Iray. And why would they? They were speaking to gamers at that reveal, and catered it to gamers. And since they are making VR Iray, I think we can rest assured that Iray is not being forgotten.
  • Kevin Sanderson Posts: 1,643

    Hey DustRider, is the DAZ Studio plugin for Octane working? It wasn't, fully, for the longest time. Seemed to be a problem for the guy working on it. Has everything been resolved? Does it convert Iray materials to Octane, or do we have to use those doggone nodes?

  • kyoto kid Posts: 41,928
    DustRider said:

    Keep in mind though that for the price of a Quadro M6000 with 24GB you could get 4 GTX 1080s, a new motherboard, case, power supply, RAM, CPU, and Octane Render with the DS plugin, and not have to worry about the amount of RAM on your GPU at all. [...]

    ...however, I would have only one third of the video memory for full GPU rendering.  I have frequently seen scene files top out at 10 GB and even higher (going into the virtual memory partition, which is even slower).

    As I have become quite accustomed to Iray (even more than 3DL, and far more than Lux now, since I ditched R4 because of all the bugginess), I am content to keep working in it. Personally, I like the workflow of being able to access the tools directly in Daz during setup, rather than sending everything to, and flipping back and forth between, the Daz viewport and another UI. As I do not work with dynamic hair (even though I have Carrara) that is not an issue, since Daz does not support (nor offer) dynamic hair content.  24 GB of video memory would also support importing and rendering my Garibaldi hair designs as .obj files.

    My original "workstation II" plans called for 4x liquid-cooled Titan Xs, which would have required a fairly hefty (and expensive) server-grade PSU. The M6000 eliminates that requirement, as well as the need for exotic cooling, since it consumes less power than a Titan X or 980Ti (actually close to what my old Fermi GPU does). Yes, I'd have only 3072 threads, but as Mec4D has pointed out, adding more GPUs does not increase render speed by 100% per unit; the best render speed advantage you can get with 4 GPUs is barely twice that of one.  Where multiple GPUs do have a greater effect is on screen response, which is why such setups are favoured for gaming rigs. As I do not do games, I don't need that level of performance.

    For my purposes, not having a scene drop from GPU memory while rendering is more important; 24 GB of VRAM would pretty much guarantee that.

  • kyoto kid Posts: 41,928
    outrider42 said:

    I wonder if some of the tricks the 1000 series uses for VR could be applied to Iray. [...] And since they are making VR Iray, I think we can rest assured that Iray is not being forgotten.

    ...this was partly why I wasn't so impressed, as everything (both in the reveal and the information on their site) primarily points to a boost for gaming performance rather than CG production. I'll wait until the dust settles and see what comes out in the wash, so to say. If they indeed release a Pascal-based Quadro line using HBM2 (particularly if the top-end unit ends up with 32 GB), then the price of that M6000 may no longer be as big an issue.

  • mjc1016 Posts: 15,001

    I'm waiting to see if they will release a Pascal version of a Tesla 'compute' card...

  • kyoto kid Posts: 41,928

    ...they already have; it's the P100 with 16 GB of HBM2 memory.  Prices for individual units have not been released yet, as for now they are only available as part of Nvidia's DGX-1 deep learning supercomputer for use in AI research.

  • MEC4D Posts: 5,249
    edited May 2016

    The M6000 and the STANDARD Titan X are exactly the same chipset, with exactly the same number of cores and exactly the same single-precision compute capability, plus the Quadros usually have extra features like better colour depth and a better driver for rendering with iray. Rendering with iray doesn't just mean how fast you finish; it is about how accurate the calculations are in your final render.

    Having the same card specification means nothing when rendering with iray; Nvidia decides here. If it didn't, there would soon be angry companies and people at Nvidia's doors asking for refunds and suing for being misled. Why do you think the M6000 12GB costs 4 times the money and is slower than a Titan X 12GB? They interact differently with the iray software, so don't expect the 1080 to come any closer. It is not a game, it is rendering with raytraced light paths and shader calculations not used in games, which are still far behind photoreal real-time rendering.

    Only an idiot would spend 5K on one slower card when he or she could have 500% of the speed with another card for the same money; it would be a scam .. but there is a secret key that unlocks the power of rendering with iray, and GTX cards don't have it. And I am not talking about speed in rendering, I am talking about the raytraced calculations and light paths that are not used in games.

    If your render takes less time with more iterations to finish, it does not mean it is a better or more accurate render; 4000 iterations at 50 sec will look less accurate than 1800 iterations at 11 min of rendering time with all the correct samplers.

    4000 iterations may give you zero noise in 50 sec, but the light paths will not be as accurate for realistic rendering, and it will be less photoreal.

    And that's how it works.

    But we are not architects or scientists who need proper light calculations down to the last photon, so we can get away with cheaper GTX cards that clean up the noise faster, as usually nobody will miss the extra precision that professionals who depend on it really need. And they don't use just 1 M6000 in a PC; they use at least $50,000 worth of rendering power, which you could not match with 8 Titan Xs in one PC case.

    comprende ?

    Post edited by MEC4D on
  • Peter Fulford Posts: 1,325
    kyoto kid said:

    Yes, I'd have only 3072 threads, but as Mec4D has pointed out, adding more GPUs does not increase render speed by 100% per unit; the best render speed advantage you can get with 4 GPUs is barely twice that of one.

    MEC4D did post some (third-party) Iray benchmark results that showed four cards scaling nearly linearly:

    1 x GTX Titan X 12GB - 100.5 s
    2 x GTX Titan X 12GB - 62.25s
    3 x GTX Titan X 12GB - 44.25s
    4 x GTX Titan X 12GB - 26.15s
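
    Working the scaling out from those figures (pure arithmetic on the numbers quoted above):

        # Speed-up and per-card efficiency from the quoted benchmark times.
        times = {1: 100.5, 2: 62.25, 3: 44.25, 4: 26.15}  # cards -> seconds

        base = times[1]
        for n, t in times.items():
            speedup = base / t
            print(f"{n} card(s): {speedup:.2f}x, {speedup / n:.0%} efficiency")

        # 2 cards: 1.61x (81%), 3 cards: 2.27x (76%), 4 cards: 3.84x (96%).
        # Four-way is near-linear; the odd 2- and 3-card points fit the
        # load-balancing conjecture raised earlier in the thread.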

    She has also mentioned that Iray is performing better in the new DS beta.

     

    kyoto kid said:

    For my purposes, not having a scene drop from GPU memory while rendering is more important; 24 GB of VRAM would pretty much guarantee that.

    When fantasizing, you might as well fantasize big.

     

  • fixmypcmike Posts: 19,693
    Peter Fulford said:

    When fantasizing, you might as well fantasize big.

    "It's not what you do with it, it's the size that counts." -- Arrogant Worms

     

  • artphobe Posts: 97

    Are there any charts showing the CUDA rendering performance improvement over Maxwell?

    For Reality/LuxRender users, the performance is on par with the Fury cards from AMD.

     

    My R9 Nano scores:

    I love having a GPU speed up the rendering process; things in Reality are so much faster with OpenCL compared to Iray. I hope the GTX 1070 has performance similar to the R9 Nano - OpenCL and Iray + 8GB VRAM = great!

  • kyoto kid Posts: 41,928
    edited May 2016

    ...again, it is the memory size that I see as the biggest advantage.  Currently, no other GPU on the market offers that amount of video memory, and probably none will for several years. Again, I don't do games, so features like hyper-fast frame rates & overclocked memory speeds are meaningless to me.

    To me, not having to worry about large, high-quality render jobs (as in gallery-sized/quality output for printing) dumping to CPU mode and taking countless hours, if not days, to complete is worth the price.  I used to paint in oils and create detailed pencil drawings before advancing arthritis took both away from me. I now look to 3D CG as their replacement, and I hold to the same exacting standards as I did when I worked with brushes and paint on canvas, or multiple weights of graphite on fine Bristol.

    Post edited by kyoto kid on
  • marble Posts: 7,500

    Hey DustRider, is the DAZ Studio plugin for Octane working? It wasn't, fully, for the longest time. Seemed to be a problem for the guy working on it. Has everything been resolved? Does it convert Iray materials to Octane, or do we have to use those doggone nodes?

    It seems to me that, for someone with the cash to spend, Octane might be a better buy than multiple cards. Octane 3 is due out (over a week ago, according to their press release), and it can offload textures and some geometry to system RAM. Surely that would cut the VRAM requirement significantly? If the reason for buying another GPU is memory rather than speed, then Octane might be the answer? The question of whether the plugin is working yet is obviously important to such a decision. I don't have Octane, and I am finding the 4GB on my GTX 970 quite limiting for Iray already (I've only had the card a few weeks).

  • kyoto kid Posts: 41,928
    edited May 2016

    ...well, in effect, video memory becomes "speed", provided the scene can be rendered entirely on the GPU from start to finish. I'm not so much interested in rendering a large complex scene in, say, five minutes. Even if it took a couple of hours, that is fine (instead of potentially a day or more in pure CPU mode).  A number of my scenes have topped out at over 11 GB, as I tend to use a lot of "in render" effects, since my postwork ability is pretty poor (especially when it comes to layering and dealing with depth and shadows).  I know I have to "fake" volumetrics in Iray, but I have become pretty good at making it work for my needs.

    As I mentioned, I prefer the workflow Daz/Iray offers compared to using a separate programme and UI. For example, with a more powerful GPU I will be able to easily check lighting, blocking, & such without having to stop and run test renders, just by switching to Iray View while still working on a scene.  Having all my shader and surface controls right there in the Surfaces tab is just more elegant. To me that makes it more intuitive.

    Oh, and to add: I just read the post after this one, and forgot to mention that I have a fairly extensive library of Iray shaders and utilities which would be of little use in Octane.

    Post edited by kyoto kid on