OT: The New Nvidia Titan V, Feel The Power

Comments

  • drzap said:

    It has 5,120 CUDA cores, compared to the 3,584 of its Pascal-based predecessor. I have never seen such a massive generational jump before. It should thrash everything else in GPU rendering.

    Probably could finally run ARK full throttle.

  • JD_Mortal Posts: 758

    The tensor cores will be used in the new Iray software to render up to 8x faster to a final, noise-free image for production. The memory bandwidth is for the internal movement of massive amounts of code and raw image data; hence the ability to render at 2x to 8x the speed of a similar CUDA-core count, at lower power levels. Video is the primary target, as well as 4K, 8K, and 16K final renders. The output per watt, and less operating hardware, make this ideal for professional render farms and also for supercomputing, which is part of the rendering process. Calculating photons/light in a 3D space is supercomputing; playing a game, which uses shortcuts, tricks, and illusions with crude pseudo-3D environment chunks, is what DirectX and OpenGL are for. I will have four of these soon, but I am sure it will be a short while before the tensor-core acceleration in OptiX 5 is released beyond the Iray SDK and finds its way into the Daz core.
  • kyoto kid Posts: 40,571

    ...yeah, but who among us can afford a $3,000 GPU card that only has 12 GB of VRAM? I'd be better off saving $500 and getting a P5000 with 4 GB more, as video memory still means better render speed if you do large scenes like the ones I create. Once the process dumps to the CPU, all those CUDA and tensor cores are worthless.

  • ebergerly Posts: 3,255
    edited December 2017
    JD_Mortal said:
    The tensor cores will be used in the new Iray software to render up to 8x faster to a final, noise-free image for production.

    Really?? Where did you hear that? I heard the tensor cores are designed for scientific-type calcs, which are much different from image-type calcs (I'm assuming the scientific stuff is more like floating point operations, and image stuff is integer-based?)

    I keep hearing the tech guys saying it's not designed for gaming and image-based stuff, but since their "enthusiast" audience is so excited about more power, they bite their tongues and produce videos showing it gives a 1.5% increase in gaming performance.

    BTW, you said you will "have four of those soon"? Are you serious? What type of work do you do that requires $12,000 in video cards? 

  • kyoto kid Posts: 40,571

    ...yes

  • JamesJAB Posts: 1,760

    I think he's talking about this: https://developer.nvidia.com/optix-denoiser

    It's the new 5.0 version of OptiX. It uses the tensor cores in the Volta chips as an AI-accelerated denoiser.

    Here's a quote from the above-linked website:
    "The AI-accelerated denoiser was trained using tens of thousands of images rendered from one thousand 3D scenes. The training data was given to an auto encoder similar to the one described in the paper and run on an NVIDIA® DGX-1™. The result is an AI-accelerated denoiser which is included in the OptiX 5.0 SDK that works on a wide number of scenes. To further expand quality and performance, developers can train their own denoiser neural network using images produced by their renderer."

  • bluejaunte Posts: 1,861

    I've read about AI denoising before, other renderers are experimenting with such stuff too. Pretty interesting. AI will do a lot of stuff for us eventually. No doubt it won't need a $3000 GPU either.

  • JamesJAB Posts: 1,760

    bluejaunte said:
    I've read about AI denoising before, other renderers are experimenting with such stuff too. Pretty interesting. AI will do a lot of stuff for us eventually. No doubt it won't need a $3000 GPU either.

    In the future... but for now, yes, a $3,000 GPU. Though I don't think it's enabled in any version of Iray yet.

    Guess the big question will be what the state of the Tensor cores will be in the consumer-level Volta GPUs (if they keep any enabled).

  • Looking at the claimed specs, this is all about double-precision calculation: 14% better at single precision than the Titan X, but more than 1,800% better (yes, more than 18 times better) at double precision. A total waste of money for single precision, but presumably massively worth it for double, especially if you factor in energy costs.
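
    If you want to see where the card you already own sits on that spectrum, the CUDA runtime reports the FP32:FP64 throughput ratio. A minimal sketch, assuming device 0 and a CUDA 8.0+ toolkit (where the singleToDoublePrecisionPerfRatio field exists):

    // Query the FP32:FP64 throughput ratio of the installed GPU.
    // Build with: nvcc fp64_ratio.cu -o fp64_ratio
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
            std::printf("No CUDA device found.\n");
            return 1;
        }
        // GV100-class parts report 2 (FP64 at half the FP32 rate);
        // consumer Pascal cards typically report 32.
        std::printf("%s: FP32 is %dx faster than FP64\n",
                    prop.name, prop.singleToDoublePrecisionPerfRatio);
        return 0;
    }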

  • JD_Mortal Posts: 758
    edited January 2018

    ebergerly said:
    BTW, you said you will "have four of those soon"? Are you serious? What type of work do you do that requires $12,000 in video cards?

    3D Porn games...

    It doesn't require it, but it is a nice thing to have. Besides, it is the "next generation". It will be a while before the "Titan-W" comes out, or the Titan V takes a price cut. (I made up the Titan-W: two Titan Vs. Since they are not using any logic in their naming, just like Windows. "Windows 4K" coming soon!)

    Yes, I was talking about the denoiser, which uses the AI part of the code to "think": "What is the correct shade value here?"

    What is the AI core?

    X = A * B + C

    https://devblogs.nvidia.com/parallelforall/programming-tensor-cores-cuda-9/

    That is it... but it is implemented as a physical cube-like array with outputs in every possible direction. It spits out every possible answer at once when force-fed an array of values to process.
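
    To make that concrete, here is roughly what the CUDA 9 interface from that link looks like: a warp hands the tensor cores a 16x16 tile and gets the whole X = A*B + C result back in one operation. This is just a minimal illustrative sketch of the wmma API; the tile sizes, layouts, and one-warp launch are my own choices for illustration, not anything Iray uses internally.

    // Minimal CUDA 9 tensor-core example: one warp computes X = A*B + C
    // on a single 16x16x16 tile via the wmma API (requires sm_70, e.g. Titan V).
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    __global__ void tile_mma(const half *A, const half *B, float *C_inout) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

        wmma::load_matrix_sync(a_frag, A, 16);                               // load the A tile
        wmma::load_matrix_sync(b_frag, B, 16);                               // load the B tile
        wmma::load_matrix_sync(acc_frag, C_inout, 16, wmma::mem_row_major);  // load C

        wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);                  // X = A*B + C in one op

        wmma::store_matrix_sync(C_inout, acc_frag, 16, wmma::mem_row_major); // write X back
    }

    // Launch with exactly one warp: tile_mma<<<1, 32>>>(dA, dB, dC);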

    The next core should be...

    X = (A + B) / 2

    But, as (A + B) * 0.5... because that is faster.

    Used for faster calculations of blending, which is the other shortfall in rendering.

    or X = (A + B) / C

    For open-ended calculations.

    How do I justify spending $12,000 on four cards? With the counter-balance of my electric bill and the bonus of more product output in that same time. It would take 2x to 8x as many 1080 Tis or Titan Xps to yield the same potential output, but at 2x to 8x the electric bill, plus additional hardware/software purchases. (Using OptiX 5.)

    1x Titan-V = $3,000 @ 250+ Watts

    8x Titan-Xp = ~$9,000 @ 2000+ Watts

    8x 1080-Ti = ~$6,400 @ 2000+ Watts

    Now multiply all of that by 4 (roughly $12,000 and 1,000+ watts for four Titan Vs, versus about $25,600 and 8,000+ watts for thirty-two 1080 Tis)... Which is the better deal? (I can fit 8x Titan V in one computer. I would need 4x additional computers to power each of the other setups.)

    P.S. My new workstation... a 65" 4K T.V. as a monitor. :P

  • TomDowd Posts: 197
    JD_Mortal said:
    P.S. My new workstation... a 65" 4K T.V. as a monitor. :P

    And I assume you are happy with the text crispness on that 4K TV?

  • kyoto kid Posts: 40,571
    edited January 2018
    JamesJAB said:

    I've read about AI denoising before, other renderers are experimenting with such stuff too. Pretty interesting. AI will do a lot of stuff for us eventually. No doubt it won't need a $3000 GPU either.

    In the future... but for now, yes, a $3,000 GPU. Though I don't think it's enabled in any version of Iray yet.

    Guess the big question will be what the state of the Tensor cores will be in the consumer-level Volta GPUs (if they keep any enabled).

    ...most likely there will be none in the Volta GTX series, as consumer cards are geared primarily towards graphics rather than deep learning. From what I have been reading, the memory will be GDDR6 rather than HBM2 (to keep them affordable).

  • JD_Mortal Posts: 758
    edited January 2018
    TomDowd said:
    JD_Mortal said:
    P.S. My new workstation... a 65" 4K T.V. as a monitor. :P

    And I assume you are happy with the text crispness on that 4K TV?

    Yes, it uses 4:4:4 chroma and forced 10-bit color in computer mode. I don't use any GUI or font scaling at all. The actual TV is a "Samsung MU6500".

    https://www.rtings.com/tv/reviews/samsung/mu6500

    The only "issue" I have with it, is a thing called pixel-gap. It has a slight microscopic brightness gap, colors leaning towards the bottom of the pixel. At my current angle, this gives the illusion of jagged top-angles and ultra-smooth bottom-angles. (Hard to describe. Looking at an angled line, you sort-of get a jagged "wide" looking shearing of individual pixels where contrast is great.) Completely unseen at a normal viewing distance, and a slightly higher elevation. {Moving my head up, so my head is aligned with the center of the screen, instead of being 1/3 from the bottom edge of the screen. I have to remove the stand and get a lower desk, with a recessed mount for the TV. It towers above me, at the moment.}

    I love rendering with Daz in 4K and NOT having to scroll to see the details of the render progress! Daz is 4K-friendly...

    My old monitor... I have two of them...

  • TomDowd Posts: 197

    Cool - thanks. I'm currently running 3 monitors, 2 HD and 1 UHD, and have been thinking about reconfiguring. I've had mixed results with DAZ and the 4K monitor (nothing bad, just flaky), but I suspect that has more to do with my mixed-resolution setup and the fact that I keep parts of Studio on different monitors than anything else.

    Thanks!

  • kyoto kid Posts: 40,571

    ...I do the same, to have the largest viewport I can get.

  • JamesJAB Posts: 1,760
    kyoto kid said:
    JamesJAB said:

    I've read about AI denoising before, other renderers are experimenting with such stuff too. Pretty interesting. AI will do a lot of stuff for us eventually. No doubt it won't need a $3000 GPU either.

    In the future... but for now, yes, a $3,000 GPU. Though I don't think it's enabled in any version of Iray yet.

    Guess the big question will be what the state of the Tensor cores will be in the consumer-level Volta GPUs (if they keep any enabled).

    ...most likely there will be none in the Volta GTX series, as consumer cards are geared primarily towards graphics rather than deep learning. From what I have been reading, the memory will be GDDR6 rather than HBM2 (to keep them affordable).

    Based on what was said and shown in Nvidia's CES keynote, I don't think the Tensor cores will be edited out of the GeForce Volta chips. Even the self-driving car SoC gets to keep its tensor cores.

    On a side note, the stuff Nvidia is doing with AI is pretty amazing... (They have AI writing its own software now.)

  • kyoto kid Posts: 40,571

    ...but what purpose would AI serve on a standard GPU card? I don't know that 3D hobbyists or gamers are suddenly going to get the urge to do scientific modelling or deep learning.

    It makes sense for an autonomous vehicle that needs to learn about different conditions so it can adapt to them more accurately. Also, those self-driving vehicles will tend to have a much higher price than your average desktop computer, so the cost will not be as significant as it would be for the average desktop 3D or gaming rig.

  • JD_Mortal Posts: 758

    The public beta release now has potential support for "Volta" cards. I will hopefully get to try it this weekend.

    Also, FYI, there is a new Titan V (CEO Edition), with more CUDA/Volta/speed/memory bandwidth and 32 GB of VRAM, coming soon. I may save up for two of those instead of two more Titan Vs, and sell mine while they are still at a high market price.

    https://www.anandtech.com/show/13004/nvidia-limited-edition-32gb-titan-v-ceo-edition

  • AllenArt Posts: 7,140

    Think I'll stick to the cheap $600-700 cards.

    Laurie

  • kyoto kid Posts: 40,571
    edited July 2018
    ...*yawn*.