VRAM available for rendering with Windows 10 - #273017 (closed)


Comments

  • kyoto kid Posts: 41,843

    ...networked Iray rendering or Linux support?

  • Kendall Sears Posts: 2,995
    edited April 2017

    Here's the thing.  I don't think that they are allocating a percentage.  The minimum framebuffer that one can reliably allocate for this type of operation is the largest monitor resolution supported by the card.  Therefore, if the video card is capable of supporting 4K monitor sizes, one has to assume that *someone* is going to hotplug a 4K monitor into the system (with DVI, HDMI, and DisplayPort the act of turning a monitor on or off qualifies as a hotplug event -- do this and watch Windows completely reconfigure your desktop(s) and icons each time).  So now you're looking at allocating as many 4K framebuffers as there are monitor ports.  If you do the math on 32-bit depth x 4K screen sizes you'll see that you're using a significant amount of memory.  This could end up looking like a great percentage of the VRAM.  Keep in mind that the primary use for VRAM is for screen framebuffers, not texture storage.  This is only going to get worse when 8K screen resolutions become supported by default.

     

    Doing the math, as Kendall suggests: 4K resolution is 8.3 Mp (megapixels, or million pixels) x 32-bit color depth, which equals 265.5 Mbit, or about 33 MB, per frame buffer; x 5 (one DVI, one HDMI, and 3 DisplayPort connectors) equals 1,328 Mbit, or roughly 166 MB. And I wouldn't be surprised at a number 10% higher, as that 8.3 Mp is the minimum size for the buffer; I don't know what else Nvidia allocates to go with it.
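
    A quick way to sanity-check that arithmetic is a minimal Python sketch; the per-port buffer count, the double-buffering, and the 10% overhead are assumptions taken from the posts above, not measured values:

    # Back-of-the-envelope framebuffer arithmetic for a 4K display.
    WIDTH, HEIGHT = 3840, 2160        # "4K" UHD resolution (~8.3 Mp)
    BYTES_PER_PIXEL = 4               # 32-bit color depth = 4 bytes per pixel

    buffer_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
    print(f"One 4K framebuffer: {buffer_bytes / 2**20:.1f} MiB")        # ~31.6 MiB

    PORTS = 5                         # assumed: 1x DVI, 1x HDMI, 3x DisplayPort
    total = buffer_bytes * PORTS
    print(f"{PORTS} ports, single-buffered: {total / 2**20:.1f} MiB")   # ~158 MiB

    # Even double-buffered with a 10% overhead (both assumptions), this stays
    # well under 0.5 GiB -- far below the 1-2.4 GB reported as blocked below.
    print(f"Double-buffered + 10%: {total * 2 * 1.10 / 2**20:.1f} MiB")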

    This whole talk about frame buffers is just adding more confusion.

    https://en.wikipedia.org/wiki/Framebuffer

    This whole speculation does not add up with what you can actually observe:

     

    Titan (2013) / 4 ports / ~1 GB blocked of 6 GB

    Titan X (2016) / 5 ports / ~2.4 GB blocked of 12 GB (reported by another user)

    GTX 1080 / 5 ports / 1.4 GB blocked of 8 GB

    GTX 1080 Ti / 4 ports / 2 GB blocked of 11 GB

    - - -

    -> So now please explain:

    Why has the 1080 Ti with only 4 ports blocked a larger amount of VRAM (2 GB) than the 1080 with 5 ports (1.4 GB)?

    Why do a Titan and a GTX 1080 Ti, both with 4 ports, block hugely different amounts of VRAM (1 GB vs 2 GB)?

    Why do a Titan X and a GTX 1080, both with 5 ports, block hugely different amounts of VRAM (~2.4 GB vs 1.4 GB)?

    - - -

    Firstly, all of the above cards are limited to 4 displays regardless of the number of ports -- use of the VGA port will greatly affect the numbers.  Another consideration that is left out is who the manufacturers of the individual cards are and what mods, if any, were made to the reference design.  Also, the individual firmware on the boards may be tweaked for specific changes.  Explaining specific memory use differences without that background is nigh-on impossible.  There are a plethora of hardware reasons why two cards of the same board type could allocate different amounts of memory.  It is entirely possible for two of the exact same boards to have different memory use models based simply on which motherboards they are in, which monitor types are used, and other factors.  For instance, is one motherboard forcing the card to allocate extra buffers to compensate for a transfer deficiency relative to another board?

    What one can do, and has been explained here already, is to lay out what use model is "reasonable" based on the known/documented behavior of the OS.  Individual variances introduced by manufacturers of any of the involved hardware can, and will, throw off the numbers.  Short of using the exact same hardware across many different software/hardware setups and documenting the differences, anything else leads to nothing more than supposition about what may be causing the variances.  We don't even know if the folks reporting the numbers have other background software running that may affect VRAM use.  Using anecdotally reported internet numbers as some sort of analysis is not going to lead to any sort of solid answers.  The errors in the data are just too high.

    Is Windows 10 using more VRAM than earlier versions of Windows?  The answer seems to be "yes", but the reported amounts seem to vary greatly.  What does seem consistent is that Windows 7, with its more "primitive" UI, seems to use significantly less VRAM than versions of Windows with more complex UIs.  One could point to this and say "AHA! There's the answer!", but to what degree are these UI changes at fault for the higher use?  For sure, a busier UI is going to require more VRAM, especially if it requires double-buffering for portions of the display.  Is this the culprit?  Impossible to tell based on the reports.  Are the nVidia 3D options in the drivers in play?  If so, then there is a great variance there.  OpenGL vs Vulkan vs Direct*?  Yup, a great difference.  What about 2D options set in the nVidia settings?  And on and on.

    Kendall

     

    Post edited by Kendall Sears on
  • Taoz Posts: 10,236
    kyoto kid said:

    ...networked Iray rendering or Linux support?

    Networked Iray rendering. I think I saw it mentioned somewhere that they were working on it.

  • linvanchene Posts: 1,386
    edited April 2017

    We don't even know if the folks reporting the numbers have other background software running that may affect VRAM use.

    Please, this is starting to become insulting on a personal level.

    People developing for and using OctaneRender have experience with GPU rendering since 2012.

    DAZ Studio users have experience with GPU rendering since 2015.

    By now people talking about VRAM usage know to close other applications when performing such tests...

     

    Using anecdotally reported internet numbers as some sort of analysis is not going to lead to any sort of solid answers

    Agreed, we will not get an answer for how to fix this issue based on the screenshots.

    It is not up to us to fix this or speculate about what exactly is causing this.

    I asked people to please participate and share some screenshots and system setups so Microsoft and Nvidia can see that it is not just a few individual users who encounter this issue but a whole community of hundreds, thousands, tens of thousands, or even hundreds of thousands of creative users.

    Please, if you want to help the situation, test this yourselves.

    Share your screenshots with the results and your system information.

    Update / Edit:

    System Information that could be useful:

    GPU (video cards used for display and rendering), Nvidia Driver version, processor, RAM information...

    Once again, the link to the demo:

    https://home.otoy.com/render/octane-render/demo/

    - - -

    If DAZ3D staff would like to help the situation, why not include some tools in DAZ Studio so all users can actually know how much VRAM they have available to build their 3D scenes to render with Nvidia Iray?
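
    Until something like that is built in, the numbers can be read straight from the driver. A minimal sketch with NVIDIA's NVML Python bindings (the pynvml module is an assumption here; any NVML wrapper exposes the same query):

    # Minimal sketch: print free vs. total VRAM per GPU via NVML.
    # Assumes the pynvml module (pip install nvidia-ml-py) and an NVIDIA driver.
    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):      # older bindings return bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # total/free/used, bytes
            print(f"GPU {i} ({name}): {mem.free / 2**30:.2f} GiB free "
                  f"of {mem.total / 2**30:.2f} GiB")
    finally:
        pynvml.nvmlShutdown()

    (What the driver reports as free may still differ from what a CUDA renderer can actually allocate under Windows 10, which is exactly what the tests in this thread probe.)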

    - - -

    Post edited by linvanchene on
  • Taoz Posts: 10,236

    How do you get the system information?

  • linvanchene Posts: 1,386
    Taozen said:

    How do you get the system information?

     

    http://www.geforce.com/geforce-experience for Nvidia Driver version, processor, RAM information

    https://www.techpowerup.com/gpuz/ has a built in screenshot utility

     

  • Richard Haseltine Posts: 107,908

    This whole speculation does not add up with what you can actually observe:

    Titan (2013) / 4 ports / ~1 GB blocked of 6 GB

    Titan X (2016) / 5 ports / ~2.4 GB blocked of 12 GB (reported by another user)

    GTX 1080 / 5 ports / 1.4 GB blocked of 8 GB

    GTX 1080 Ti / 4 ports / 2 GB blocked of 11 GB

    [...]

    Do they differ in maximum supported resolution?

  • linvanchene Posts: 1,386
    edited April 2017

     

    Update / Edit:

    - added Maximum Digital Resolution of the mentioned cards, updated links

    - added links to the official post about this known (!) issue since December 2015.

    @ Maximum Digital Resolution

    Richard Haseltine said:

    Do they differ in maximum supported resolution?

    Based on the official information for the cards I tested myself:

    https://www.asus.com/Graphics-Cards/GTXTITAN6GD5/

    https://www.asus.com/Graphics-Cards/ROG-STRIX-GTX1080-A8G-GAMING/

    https://www.asus.com/Graphics-Cards/GTX1080TI-FE/overview/

    And the one other people mentioned:

    https://www.nvidia.com/en-us/geforce/products/10series/titan-x-pascal/#forceLocation=US

     

    GTX Titan (2013): Max Digital Resolution 4096x2160 (3840x2160 at 30 Hz or 4096x2160 at 24 Hz supported over HDMI)

    GTX Titan X (Pascal, 2016): Max Digital Resolution 7680x4320 @ 60 Hz

    GTX 1080: Max Digital Resolution 7680x4320

    GTX 1080 Ti FE: Max Digital Resolution 7680x4320

     

    -> Max resolution seems to be the same, 8K @ 60 Hz, for the Pascal-generation cards (1080, 1080 Ti, and Titan X Pascal).

    -> But blocked VRAM is different.
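
    For scale, here is the earlier buffer arithmetic redone at that shared 8K maximum (a sketch; one 32-bit buffer per port is assumed, as in the posts above):

    # Sketch: the per-port framebuffer theory, taken at the shared 8K maximum.
    WIDTH, HEIGHT, BYTES_PER_PIXEL = 7680, 4320, 4    # 8K at 32-bit color
    buffer_mib = WIDTH * HEIGHT * BYTES_PER_PIXEL / 2**20
    print(f"One 8K framebuffer: {buffer_mib:.0f} MiB")             # ~127 MiB
    for ports in (4, 5):                  # port counts of the cards above
        print(f"{ports} ports: {ports * buffer_mib / 1024:.2f} GiB")  # ~0.49 / ~0.62

    Even at 8K, one buffer per port stays well under 1 GiB, and identical maximum resolutions would predict identical reservations -- which is not what the blocked amounts above show.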

     

    - - -

    @ official information by Nvidia and Otoy staff

    Official Otoy staff confirmed the issue with the following words in December 2015:

    "A few people have noticed that on Windows 10 a large chunk of device memory is unavailable. This occurs even on GPUs which are not connected to a screen.

    We can confirm this affects any CUDA application on any type of GPU. GPUs with more VRAM will have a larger amount of unusable memory. On 6GB cards, a bit over 1GB is unusable. CUDA applications effectively are not able to use this memory.

    At the moment we don't know any workaround yet. If you often render scenes which use most VRAM on your cards, you may need to delay upgrading to Windows 10."

    Source:

    https://render.otoy.com/forum/viewtopic.php?f=12&t=51992
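
    Anyone who wants to reproduce that observation from an arbitrary CUDA application can use the driver's own free/total query. A minimal sketch with the pycuda bindings (an assumption; the underlying cuMemGetInfo call is the same query any CUDA renderer sees):

    # Minimal sketch: how much memory a CUDA application can actually see.
    # Assumes the pycuda package and a CUDA-capable NVIDIA GPU.
    import pycuda.driver as cuda

    cuda.init()
    for i in range(cuda.Device.count()):
        dev = cuda.Device(i)
        ctx = dev.make_context()            # the query is made per context
        try:
            free, total = cuda.mem_get_info()   # bytes, as reported to CUDA
            # Note: total - free includes normal driver/context overhead,
            # not only the Windows 10 reservation discussed here.
            print(f"{dev.name()}: {free / 2**30:.2f} GiB free of "
                  f"{total / 2**30:.2f} GiB")
        finally:
            ctx.pop()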

    - - -

    Nvidia provided the following feedback to Otoy in June 2016:

     

    "It appears that in Win 10, with the Windows Display Driver Model v2, processes will be assigned budgets for how much memory they can keep resident. What we are noticing is that WDDMv2 started to impose a limit on total process allocation size. This is briefly mentioned here:

    https://msdn.microsoft.com/en-us/library/windows/hardware/dn932169(v=vs.85).aspx "

    Source: https://render.otoy.com/forum/viewtopic.php?f=12&t=51992&start=20#p279386

    - - -

    That is the "official" information we have from Nvidia and Otoy. Everything else is speculation until the moment Microsoft decides to comment on this case.

    - - -

    -> Post your screenshots and test results in the already existing threads if you want to help raise attention:

    https://social.technet.microsoft.com/Forums/windows/en-US/15b9654e-5da7-45b7-93de-e8b63faef064/windows-10-does-not-let-cuda-applications-to-use-all-vram-on-especially-secondary-graphics-cards?forum=win10itprohardware

    - - -

    Post edited by linvanchene on
  • Taoz Posts: 10,236
    edited April 2017
    Taozen said:

    How do you get the system information?

    http://www.geforce.com/geforce-experience for Nvidia Driver version, processor, RAM information

    https://www.techpowerup.com/gpuz/ has a built in screenshot utility

    Thanks, I already have both installed though and can't find anything about how much VRAM is available, only how much there is. Where do I see that?

     

    [Attachment: NVIDIA_INFO.jpg]
    Post edited by Taoz on
  • linvanchene Posts: 1,386
    edited April 2017
    Taozen said:

    Thanks, I already have both installed though and can't find anything about how much VRAM is available, only how much there is. Where do I see that?

     

    I added a quick guide to the "Technical Help" section that shows

    - how to download and install the OctaneRender Demo

    - how to export a scene as .obj from DAZ Studio

    - how to import the scene in the OR Demo

    - how to check how much VRAM is used by geometry and textures, and how much VRAM is blocked on your system

    - - -

    https://www.daz3d.com/forums/discussion/164281/quick-guide-export-scene-from-daz-studio-to-octanerender-demo-to-check-available-vram#latest

    If you get stuck please add any technical questions in that separate thread.

    - - -

    Post edited by linvanchene on
  • SimonJM Posts: 6,067
    Taozen said:

    Thanks, I already have both installed though and can't find anything about how much VRAM is available, only how much there is. Where do I see that?

     

    From what I can see, most things tell you what you have, not what is in use - that seems to say your system is reporting 12GB of memory available for graphics, of which 8GB is on your gfx card and 4GB is potentially being shared from your system RAM.  You'd have to do something like has been suggested and use a tool (or the reporting of a program) that shows VRAM consumption.

  • Taoz Posts: 10,236
    linvanchene said:

    I added a quick guide to the "Technical Help" section that shows how to check how much VRAM is used by geometry and textures, and how much VRAM is blocked on your system. [...]

    If you get stuck please add any technical questions in that separate thread.

    OK, thanks!

  • kyoto kid Posts: 41,843
    Taozen said:
    kyoto kid said:

    ...networked Iray rendering or Linux support?

    Networked Iray rendering. I think I saw it mentioned somewhere that they were working on it.

    ...OK, that makes a big difference as well as simplifies matters quite a bit.

    ...and if I can run my dual Xeon render system on Linux, all the better.

  • GaryH Posts: 66
    edited May 2017

    No need to use Octane Render to check available memory, the DS log file will show you how much VRAM is available on each card for Iray's use in Windows 10.

    Just check the log file after you open DS, no need to even load a scene.

    I have a GTX 970 for my 2K display and a dedicated Titan X Pascal for rendering.  This is what the relevant section of the log file says:

    ...

    2017-05-27 13:48:27.660 Iray INFO - module:category(IRAY:RENDER):   1.1   IRAY   rend info : NVIDIA display driver version: 376.53
    2017-05-27 13:48:27.660 Iray INFO - module:category(IRAY:RENDER):   1.1   IRAY   rend info : Your NVIDIA driver supports CUDA version up to 8.0; iray requires CUDA version 8.0; all is good.
    2017-05-27 13:48:27.676 Iray INFO - module:category(IRAY:RENDER):   1.1   IRAY   rend info : Using iray plugin version 4.5, build 278300.4305 n, 30 Nov 2016, nt-x86-64-vc11.
    2017-05-27 13:48:27.817 Iray INFO - module:category(IRAY:RENDER):   1.1   IRAY   rend info : CUDA device 1 (GeForce GTX 970): compute capability 5.2, 4 GiB total, 3.33423 GiB available, display attached
    2017-05-27 13:48:28.020 Iray INFO - module:category(IRAY:RENDER):   1.1   IRAY   rend info : CUDA device 0 (TITAN X (Pascal)): compute capability 6.1, 12 GiB total, 10.0604 GiB available

    Help -> Troubleshooting -> View Log File...

    Search for "GiB available" in your text editor

    Post edited by GaryH on
  • linvanchene Posts: 1,386
    edited May 2017
    garyh.pub said:

    No need to use Octane Render to check available memory, the DS log file will show you how much VRAM is available on each card for Iray's use in Windows 10. [...]

    wow!!! So that information was there all along in Nvidia Iray in DAZ Studio.

    garyh.pub, thank you so much for pointing this out!

    I was able to find that information in the log as well. It seems it is displayed at several stages when starting up DAZ Studio:

    Test setup:

    - 1x GTX 1080 set as display only

    - 2x GTX 1080 Ti set as rendering devices in the DAZ Studio Iray Render settings

    - Scene: 1 cube primitive and Nvidia Iray live viewport active.

    Result in the log file:

    "2017-05-28 06:01:31.567 Iray INFO - module:category(IRAY:RENDER):   1.1   IRAY   rend info : Your NVIDIA driver supports CUDA version up to 8.0; iray requires CUDA version 8.0; all is good.
    2017-05-28 06:01:31.576 Iray INFO - module:category(IRAY:RENDER):   1.1   IRAY   rend info : Using iray plugin version 4.5, build 278300.12584 n, 24 Mar 2017, nt-x86-64-vc11.
    2017-05-28 06:01:31.721 Iray INFO - module:category(IRAY:RENDER):   1.1   IRAY   rend info : CUDA device 1 (GeForce GTX 1080): compute capability 6.1, 8 GiB total, 6.66345 GiB available, display attached
    2017-05-28 06:01:31.886 Iray INFO - module:category(IRAY:RENDER):   1.1   IRAY   rend info : CUDA device 2 (GeForce GTX 1080 Ti): compute capability 6.1, 11 GiB total, 9.17244 GiB available
    2017-05-28 06:01:32.114 Iray INFO - module:category(IRAY:RENDER):   1.1   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): compute capability 6.1, 11 GiB total, 9.17244 GiB available"

    Just check the log file after you open DS, no need to even load a scene.

    Tested this as well. Deleted the log file and restarted DAZ Studio without rendering a scene.

    Confirmed. The same values are indicated.

    - - -

    -> This means the official log created by DAZ Studio Iray indicates similar amounts of unavailable VRAM as the OctaneRender VRAM monitoring tools do.

    On a 1080 Ti with 11 GB VRAM, only around 9 GB are usable. 2 GB of VRAM are blocked by Windows 10 and are not available even though they are not actually used for anything.

    - - -

    @ latest official statement by Otoy Mon May 22, 2017 10:56 pm

    "AFAIK it's an issue that is shared by every CUDA app on Win10, not just ours.

    NVIDIA is aware of this, and based on my recent understanding, is talking to MS about addressing the issue"

    https://render.otoy.com/forum/viewtopic.php?f=12&t=51992&start=70#p314100

    - - -

    Post edited by linvanchene on
  • Leonides02 Posts: 1,379

    I hope this gets addressed quickly. Buying a 1080 Ti and then NOT being able to use its full potential is beyond frustrating.

  • Greymom Posts: 1,139

    Thanks to everyone for this information!  

    I was wondering if the same memory problem is seen with AMD GPU cards.   I know they can't be used for IRAY or OCTANE, but I am setting up a machine for Luxrender to take advantage of the two R9-290X 8 GB cards I picked up cheap last year.  From the vague info I saw from Microsoft, it looks like a general issue, not CUDA-specific, but I have not been able to confirm this.  Of course the siren's lure of the "free" upgrade to Windows 10 seduced me, but I have a spare copy of Win 7 around somewhere, and a spare hard drive, so I can always set up a dual-OS system.  This new machine will be about 40 times faster than my old clunker for Lux.

    I am glad I held off buying a new CUDA card for IRAY and VUE Render.  I am also hoping for a card with 16GB instead of 11.

     

  • kyoto kid Posts: 41,843

    ...Vue doesn't natively support GPU rendering.

  • kyoto kid said:

    ...Vue doesn't natively support GPU rendering.

    The latest version does, though I believe it is still limited in the features it supports.

  • Greymom Posts: 1,139
    kyoto kid said:

    ...Vue doesn't natively support GPU rendering.

    The latest version does, though I believe it is still limited in the features it supports.

    VUE Infinite 2016 uses CUDA GPUs for previews, antialiasing and some limited path-tracing.   They are headed for GPU rendering, but I think that cards with >16 GB will be needed before they can fully implement it.   I am doing my bit to help, as I am certain that as soon as I get my rendering machines built for the VUE RenderHerd, a reasonably-priced 24 or 32 GB CUDA board will appear : )

  • kyoto kid Posts: 41,843
    kyoto kid said:

    ...Vue doesn't natively support GPU rendering.

    The latest version does, though I believe it is still limited in the features it supports.

    ...just saw that.  Very limiting though, as you cannot use many of Vue's effects, and it is a GPU-assisted rather than a fully GPU-based process.

  • nicstt Posts: 11,715

    I hope this gets addressed quickly. Buying a 1080 Ti and then NOT being able to use its full potential is beyond frustrating.

    I was going to buy a 1080 Ti, but decided not to until it's sorted.

    I'm upgrading the system as a whole instead - costing more but it will be more versatile.

  • kyoto kid Posts: 41,843
    edited September 2017
    Greymom said:
    kyoto kid said:

    ...Vue doesn't natively support GPU rendering.

    The latest version does, though I believe it is still limited in the features it supports.

    VUE Infinite 2016 uses CUDA GPUs for previews, antialiasing and some limited path-tracing.   They are headed for GPU rendering, but I think that cards with >16 GB will be needed before they can fully implement it.   I am doing my bit to help, as I am certain that as soon as I get my rendering machines built for the VUE RenderHerd, a reasonably-priced 24 or 32 GB CUDA board will appear : )

    ...I wouldn't count on that, as Nvidia would most likely first implement advanced performance features like increased VRAM (most likely HBM2) in its Quadro line before any prosumer cards. So I see the Titan series at 12 GB remaining as the upper VRAM limit for us enthusiasts for a while.  The only possible way I could see Nvidia bumping the memory of its prosumer cards beyond 12 GB is if the entire Quadro line moved completely to Volta technology, with 32 GB of HBM2 memory in the top-end card (V6000), 24 GB in the V5000, 16 in the V4000, NVLink for memory pooling, and higher core counts.

    A couple of years ago it was thought AMD had the jump with its Fury line, which had HBM1 memory but was limited to only 4 GB while the Maxwell 980 Ti had 6 GB.  Now people are looking at the Vega Frontier with 16 GB; however, power requirements and cooling (as the card would be running at peak output for an extended period when rendering) seem to be a serious issue. Also, as it does not support CUDA, it is in a way a non-factor unless you move to an OpenCL render engine like LuxRender or AMD's ProRender.

    Post edited by kyoto kid on
  • Gator Posts: 1,319
    nicstt said:

    I hope this gets addressed quickly. Buying a 1080 Ti and then NOT being able to use its full potential is beyond frustrating.

    I was going to buy a 1080 Ti, but decided not to until it's sorted.

    I'm upgrading the system as a whole instead - costing more but it will be more versatile.

    I'll tell you, I upgraded to a few water-cooled 1080 Tis and it was totally worth it.  So far the memory hasn't been an issue with sizeable scenes, HDRIs, and multiple Genesis 3 figures.

  • Greymom Posts: 1,139
    kyoto kid said:

    ...I wouldn't count on that, as Nvidia would most likely first implement advanced performance features like increased VRAM (most likely HBM2) in its Quadro line before any prosumer cards. [...]

    You are probably right, particularly since saying what I hoped would happen has probably jinxed it for sure : ) .   I agree the Vega will need liquid cooling (and maybe slight underclocking) to run for extended periods at high duty cycles.   I would much rather have a good CUDA card that will run everything (assuming NVIDIA also keeps up OpenCL support).

    I need to see what I can do with the hardware I have anyway.  That will take a while, and I will watch how things develop.

     

  • mjc1016 Posts: 15,001
    nicstt said:

    I hope this gets addressed quickly. Buying a 1080 Ti and then NOT being able to use its full potential is beyond frustrating.

    I was going to buy a 1080 Ti, but decided not to until it's sorted.

    I'm upgrading the system as a whole instead - costing more but it will be more versatile.

    As far as Microsoft is concerned, there is no problem and nothing to fix. 

  • kyoto kid Posts: 41,843
    edited September 2017
    ...that's what they claim. MS has been doing everything they can to force people to W10, from nagware to no longer supplying update details to the "all or nothing" update rollup bundles for W7 and 8.1. They already admitted last year that they were collecting more user data than their telemetry programme required. Apologies, but that does not help build trust in what they say.
    Post edited by kyoto kid on
  • ebergerly Posts: 3,255

    I notice that, using an app like GPU-Z, when I'm just running my GTX 1070 on Windows 10, with 3 monitors plugged into the GPU, the memory usage is pretty small. Like 250 MB or something. The only time it sucks up memory is the instant I start Studio, at which time it grabs over 2GB of my 8GB VRAM. 

    I thought it was a Windows thing, not a Studio thing....
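
    One way to watch that jump without GPU-Z is to poll the driver while starting Studio. A rough sketch (again assuming the pynvml bindings, and that the card of interest is GPU index 0):

    # Rough sketch: poll VRAM usage once a second, e.g. across a Studio launch.
    # Assumes the pynvml module and that the card of interest is GPU index 0.
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    try:
        while True:                       # stop with Ctrl+C
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"{time.strftime('%H:%M:%S')}  "
                  f"used {mem.used / 2**20:7.0f} MiB  "
                  f"free {mem.free / 2**20:7.0f} MiB")
            time.sleep(1.0)
    finally:
        pynvml.nvmlShutdown()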

  • kyoto kid Posts: 41,843
    ...what OS are you using?
  • namffuak Posts: 4,404
    ebergerly said:

    I notice that, using an app like GPU-Z, when I'm just running my GTX 1070 on Windows 10, with 3 monitors plugged into the GPU, the memory usage is pretty small. Like 250 MB or something. The only time it sucks up memory is the instant I start Studio, at which time it grabs over 2GB of my 8GB VRAM. 

    I thought it was a Windows thing, not a Studio thing....

    It is a Windows 10 thing - but you're already setting the size requirements for three of the ports on the card. Where it hurts is the memory allocation for all five ports on a card with no monitors attached, just on the off-chance you'll plug a new monitor into one of those ports with the system up and running. See Kendall's two posts, back on page 2 of this thread.
