VRAM available for rendering with Windows 10 - #273017 (closed)


Comments

  • Takeo.Kensei Posts: 1,303
    edited April 2017

    Lots of wrong beliefs here. My POV is that everything works correctly and as intended by both Nvidia and Microsoft. For the record, I don't use Windows 10. Still on Windows 7.

    First thing: WDDM = Windows Display Driver Model.

    Which means that when you plug a graphics card into your system, some memory will be used for the display, as mentioned by Kendall. And that will be done whether you plug in a monitor or not. That way, when you plug a monitor into any port, you will get something displayed immediately. You mustn't forget the main purpose of a graphics card: to display something on a monitor.

    That is not an unconscious design, nor is it a security design. It is how it should work. And just because you intend to use your graphics card for rendering doesn't eliminate its primary purpose. If you just want compute cards, Teslas are there for that, as mentioned by Richard Haseltine.

    The "issue" is also not just related to Windows 10. It is just more visible because of the implementation of WDDM 2.x, which seems to eat more memory. Windows 7 and 8 also reserved memory, but in smaller quantities, as they were using WDDM 1.x.

     

    Workarounds:

    1°/ Buy a Tesla card (I know that's not cheap).

    2°/ Buy a Quadro or Titan card and activate TCC mode (not cheap either, I know). That mode was normally only for some Quadros; Titan owners have that benefit over common graphics cards. How to do that? See http://www.royalrender.de/help/Infos/GPU_rendering/Windows/Windows.htm

    Side note: not all Quadros have that mode, and TCC on the latest Titan Xp has to be confirmed.

    3°/ Try disabling all Windows eye candy to minimize the memory footprint.

    * Unverified test * In multi-display mode, every graphics card should be able to have its own settings without affecting the others.

    => Configure each Nvidia adapter to use low display settings. Which settings minimize the memory footprint remains to be determined.

    4°/ Go back to Windows 7 or 8 to get WDDM 1.3.

    5°/ If on Windows 10, there is a possibility of combining software and hardware that would use WDDM 1.3:

    => Put a Fermi graphics card in the first PCIe slot to drive the display, with an Nvidia driver version prior to 361, and you should boot in WDDM 1.3 mode.

    That may imply DAZ Studio 4.8 and no Pascal GPU. No idea about Octane or other rendering engines. Not a long-term solution.
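    As a concrete sketch of workaround 2: TCC mode is switched with nvidia-smi from an elevated command prompt. The GPU index below is an assumption (index 1 standing in for the headless render card), a reboot is required, and only Tesla/Quadro/Titan-class cards accept the change:

    ```shell
    :: List installed GPUs and their indices (run as Administrator)
    nvidia-smi -L

    :: Switch GPU 1 to the TCC driver model (0 = WDDM, 1 = TCC)
    nvidia-smi -i 1 -dm 1

    :: Reboot, then check that the driver-model column shows TCC instead of WDDM
    nvidia-smi
    ```

    Once in TCC the card no longer counts as a display adapter, which is why WDDM stops reserving VRAM on it.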

     

    Conclusion: whining to Microsoft won't change anything, as their system works as intended. Proof is that Nvidia can provide drivers that don't implement the display feature for a graphics card, but you have to pay for it. And whining to Nvidia won't change anything, as that is a premium feature.

    Final words : you get what you pay for.

    Post edited by Takeo.Kensei on
  • Taoz Posts: 9,743

    Final words : you get what you pay for.

    I'd rather say you don't get what you pay for.

  • kyoto kid Posts: 40,617

    ...true, but it's all or nothing (and I believe only the Pro version or higher allows for it), unlike before Oct 1st of last year, when you could pick and choose which updates to install or avoid.  Home Edition is extremely limited (you can't even disable Cortana) and that is the only "free" version I'd get, as I have W7 Home Premium.

  • Taozen said:

    Final words : you get what you pay for.

    I'd rather say you don't get what you pay for.

    Agreed; people should not have to spend extra money for a card without certain hardware when a driver could be coded to detect whether a monitor is connected or not, set a timer for a reasonable period of time (I'd say 5 minutes tops) to plug in any additional cards, then free memory if no monitor is connected in that time period. Of course, I'm of the opinion that hot-plugging a video display SHOULD result in a crash, but that's me.
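    The policy being proposed here can be sketched in a few lines. This is purely an illustration of the suggested behavior (the names and the grace period are taken from the suggestion), not anything Windows or the Nvidia driver actually implements:

    ```python
    GRACE_PERIOD_S = 5 * 60  # "5 minutes tops", as suggested above

    def reservation_action(monitor_connected: bool, seconds_since_boot: float) -> str:
        """Decide what a hypothetical driver would do with a card's reserved framebuffer."""
        if monitor_connected:
            return "keep"      # a display is attached: the reservation is in use
        if seconds_since_boot < GRACE_PERIOD_S:
            return "keep"      # still inside the grace period for a possible hotplug
        return "release"       # no monitor appeared in time: free the VRAM
    ```

    Under this policy a headless render card would get its VRAM back five minutes after boot, while a card actually driving a display would be unaffected.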

  • hphoenix Posts: 1,335
    Taozen said:

    Final words : you get what you pay for.

    I'd rather say you don't get what you pay for.

    Agreed; people should not have to spend extra money for a card without certain hardware when a driver could be coded to detect whether a monitor is connected or not, set a timer for a reasonable period of time (I'd say 5 minutes tops) to plug in any additional cards, then free memory if no monitor is connected in that time period. Of course, I'm of the opinion that hot-plugging a video display SHOULD result in a crash, but that's me.

    Hot-plugging a video display shouldn't result in a crash.  It shouldn't result in ANYTHING.  The OS should require the user to enable the display for it to function.  The ONLY time the OS should automatically allocate is for the first card found, and the first active port on that card.

    People have gotten used to not having to really configure anything: just plug it in and it works.  But dumbing things down at the driver level to that extent is what is causing the issue. And is it so hard to right-click on the desktop, open the display properties, and tell it to 'rescan' for connected displays?

     

  • linvanchene Posts: 1,357
    edited April 2017

    Thank you all for your inputs.

    I also got the impression that it should be possible to add some customisation options to the Windows settings that let experienced users adjust how VRAM is used by Windows 10, especially when additional rendering GPUs are installed, and not just one display GPU for all purposes.

    Now that the basics seem more clear there remains one important aspect that I still do not understand:

    - - -

    If for some reason a part of the VRAM of GPUs that do not even have a display attached needs to be reserved / blocked...

    Why can the reserved / blocked amount of VRAM not be a fixed amount?

     

    Example: if just a fixed amount of 1 GB of VRAM were reserved or blocked, I could accept that.

    What worries me is that the amount of unavailable VRAM increases the more VRAM the card offers.

    This will become more obvious if and when 16 or 24 GB VRAM cards arrive on the market. We may be looking at 3-4+ GB of VRAM becoming unavailable.
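    The concern can be made concrete with a toy comparison. The 1 GB fixed figure comes from the example above; the proportional fraction is purely an assumed illustration, not a measured Windows value:

    ```python
    FIXED_RESERVE_GB = 1.0    # the "acceptable" fixed amount from the example above
    ASSUMED_FRACTION = 0.15   # illustrative guess only, not an official WDDM figure

    for total_gb in (8, 11, 16, 24):
        proportional_gb = total_gb * ASSUMED_FRACTION
        print(f"{total_gb:2d} GB card: fixed reserve {FIXED_RESERVE_GB:.1f} GB, "
              f"proportional reserve {proportional_gb:.1f} GB")
    ```

    Under a fixed reserve, bigger cards lose nothing extra; under a proportional one, a 24 GB card would give up around 3.6 GB, which is exactly the pattern being worried about here.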

     

    Post edited by linvanchene on
  • nicstt Posts: 11,714
    kyoto kid said:

    ...true, but it's all or nothing (and I believe only the Pro version or higher allows for it), unlike before Oct 1st of last year, when you could pick and choose which updates to install or avoid.  Home Edition is extremely limited (you can't even disable Cortana) and that is the only "free" version I'd get, as I have W7 Home Premium.

    You can disable all sorts of stuff; I let my security software remind me, then a day or three after I see if there are any issues.

    I install them; if not, I try them out; then I update my disk image for easy backup/re-installation for when the inevitable hits the rapidly spinning obstacle.

  • Mattymanx Posts: 6,879

    CPUID and GPU-Z both show about 90 MB of VRAM used on the main card and only 3 MB on the second card when idle.

    CPUID and GPU-Z do NOT show you the amount of unavailable VRAM.

    Those tools show you the total amount of VRAM actively being used.

     

    Sorry, I misunderstood your previous posts and see what you're saying now.

  • kyoto kid Posts: 40,617
    nicstt said:
    kyoto kid said:

    ...true, but it's all or nothing (and I believe only the Pro version or higher allows for it), unlike before Oct 1st of last year, when you could pick and choose which updates to install or avoid.  Home Edition is extremely limited (you can't even disable Cortana) and that is the only "free" version I'd get, as I have W7 Home Premium.

    You can disable all sorts of stuff; I let my security software remind me, then a day or three after I see if there are any issues.

    I install them; if not, I try them out; then I update my disk image for easy backup/re-installation for when the inevitable hits the rapidly spinning obstacle.

    ...the nice thing with W7, I don't have to do all that. "Features" I didn't need or want were easy to disable.

    I've been involved in discussions on several tech forums about W10 and disabling/removing what you don't want isn't always that easy to do. As I understand with the anniversary release if you have the Home Edition, Cortana cannot be disabled completely (it keeps running in background and with every update goes "live" again) and you cannot defer updating at all. Only the more expensive pro edition allows for that and keeping Cortana (which I refer to as the "spawn" of BOB) completely turned off.  Whoever at MS came up with the idea of giving an OS a human-like personality needs to stop watching Star Trek.

  • linvanchene Posts: 1,357
    edited April 2017

    There is one more thing I came across last weekend:

    Microsoft Insider Program

    https://insider.windows.com/

     

    I truly think that 3D designers, artists and software users are an important part of the Windows community.

    While more and more people use mobile phones or tablets for e-mail and casual work, a lot of creative people still rely on Windows for their work.

    I have the impression that Microsoft is interested in providing software that is of use to its customers.

    If just one person could help bring across the feedback about what we need to make GPU rendering on Windows work, that would already help.

    Maybe it could help if motivated customers, DAZ 3D staff and developers participated in the Insider program and provided feedback on current and upcoming versions of Windows.

     

     

    Post edited by linvanchene on
  • Taoz Posts: 9,743


    There is one more thing I came across last weekend:

    Microsoft Insider Program

    https://insider.windows.com/

     

    I truly think that 3D designers, artists and software users are an important part of the Windows community.

    While more and more people use mobile phones or tablets for e-mail and casual work, a lot of creative people still rely on Windows for their work.

    I have the impression that Microsoft is interested in providing software that is of use to its customers.

    If just one person could help bring across the feedback about what we need to make GPU rendering on Windows work, that would already help.

    Maybe it could help if motivated customers, DAZ 3D staff and developers participated in the Insider program and provided feedback on current and upcoming versions of Windows.

    I thought it was only for beta testing, but if it can help get some attention to problems like this as well, it might be worth joining, yes (I already have, actually).

  • outrider42 Posts: 3,679
    What about how Daz 4.9 itself uses more VRAM than 4.8? My observations show quite clearly that 4.9 uses 500 MB to 1 GB more than 4.8 does, directly resulting in smaller scenes capping my VRAM. I have both installed, so this isn't a nostalgic belief. I can start Daz 4.9 from a fresh instance, load a large scene, and drop to CPU mode. I close it, start 4.8, load that same scene, and it works. And this behavior is consistent. I have scenes that work 100% in 4.8 that fall to CPU mode in 4.9 100% of the time.

    So while you guys are freaking out over MS taking away some VRAM, you might want to take a peek at Daz itself, and ask them why this is happening.
  • SimonJM Posts: 5,947
    What about how Daz 4.9 itself uses more VRAM than 4.8? My observations show quite clearly that 4.9 uses 500 MB to 1 GB more than 4.8 does, directly resulting in smaller scenes capping my VRAM. I have both installed, so this isn't a nostalgic belief. I can start Daz 4.9 from a fresh instance, load a large scene, and drop to CPU mode. I close it, start 4.8, load that same scene, and it works. And this behavior is consistent. I have scenes that work 100% in 4.8 that fall to CPU mode in 4.9 100% of the time.

     

    So while you guys are freaking out over MS taking away some VRAM, you might want to take a peek at Daz itself, and ask them why this is happening.

    Is there anything in the log that gives a clue as to where that difference may lie, as in geometry or textures, etc., consuming more VRAM?  Plus it might not be Daz Studio 'at fault' but the later version of Iray included.

  • nicstt Posts: 11,714
    kyoto kid said:
    nicstt said:
    kyoto kid said:

    ...true, but it's all or nothing (and I believe only the Pro version or higher allows for it), unlike before Oct 1st of last year, when you could pick and choose which updates to install or avoid.  Home Edition is extremely limited (you can't even disable Cortana) and that is the only "free" version I'd get, as I have W7 Home Premium.

    You can disable all sorts of stuff; I let my security software remind me, then a day or three after I see if there are any issues.

    I install them; if not, I try them out; then I update my disk image for easy backup/re-installation for when the inevitable hits the rapidly spinning obstacle.

    ...the nice thing with W7, I don't have to do all that. "Features" I didn't need or want were easy to disable.

    I've been involved in discussions on several tech forums about W10 and disabling/removing what you don't want isn't always that easy to do. As I understand with the anniversary release if you have the Home Edition, Cortana cannot be disabled completely (it keeps running in background and with every update goes "live" again) and you cannot defer updating at all. Only the more expensive pro edition allows for that and keeping Cortana (which I refer to as the "spawn" of BOB) completely turned off.  Whoever at MS came up with the idea of giving an OS a human-like personality needs to stop watching Star Trek.

    It isn't easy, but not so bad; I'd never recommend the Home version of any Windows.

    But you're restricted on W7, seeing as MS have done the dirty and are refusing to fully support hardware that is not yet in extended support (or whatever it's called). The last update I installed seems good, so I guess I'll check out the Anniversary update; 30 minutes to return to the previous disk image if I don't like it or they've screwed up again. Ah, never mind, I remember seeing a couple of folks having issues; I can wait a day or two.

  • nicstt said:
    kyoto kid said:
    nicstt said:
    kyoto kid said:

    ...true, but it's all or nothing (and I believe only the Pro version or higher allows for it), unlike before Oct 1st of last year, when you could pick and choose which updates to install or avoid.  Home Edition is extremely limited (you can't even disable Cortana) and that is the only "free" version I'd get, as I have W7 Home Premium.

    You can disable all sorts of stuff; I let my security software remind me, then a day or three after I see if there are any issues.

    I install them; if not, I try them out; then I update my disk image for easy backup/re-installation for when the inevitable hits the rapidly spinning obstacle.

    ...the nice thing with W7, I don't have to do all that. "Features" I didn't need or want were easy to disable.

    I've been involved in discussions on several tech forums about W10 and disabling/removing what you don't want isn't always that easy to do. As I understand with the anniversary release if you have the Home Edition, Cortana cannot be disabled completely (it keeps running in background and with every update goes "live" again) and you cannot defer updating at all. Only the more expensive pro edition allows for that and keeping Cortana (which I refer to as the "spawn" of BOB) completely turned off.  Whoever at MS came up with the idea of giving an OS a human-like personality needs to stop watching Star Trek.

    It isn't easy, but not so bad; I'd never recommend the Home version of any Windows.

    But you're restricted on W7, seeing as MS have done the dirty and are refusing to fully support hardware that is not yet in extended support (or whatever it's called). The last update I installed seems good, so I guess I'll check out the Anniversary update; 30 minutes to return to the previous disk image if I don't like it or they've screwed up again. Ah, never mind, I remember seeing a couple of folks having issues; I can wait a day or two.

    I've posted in the feedback hub on this issue and one other.

  • marble Posts: 7,449

    I've noticed that there is often a spike in the VRAM usage figure. That spike is often over 1 GB and, if the scene is already memory-heavy, it will trigger the drop to CPU processing even though the real VRAM figure settles to a much lower level immediately. The spike occurs just when the little Iray progress window reports two "VERBOSE" messages. It can be seen in the screenshot of the GPU-Z log attached (the yellow marked row).

     

     

    Capture.JPG
    1078 x 364 - 74K
  • kyoto kid Posts: 40,617
    edited April 2017
    nicstt said:
    kyoto kid said:
    nicstt said:
    kyoto kid said:

    ...true, but it's all or nothing (and I believe only the Pro version or higher allows for it), unlike before Oct 1st of last year, when you could pick and choose which updates to install or avoid.  Home Edition is extremely limited (you can't even disable Cortana) and that is the only "free" version I'd get, as I have W7 Home Premium.

    You can disable all sorts of stuff; I let my security software remind me, then a day or three after I see if there are any issues.

    I install them; if not, I try them out; then I update my disk image for easy backup/re-installation for when the inevitable hits the rapidly spinning obstacle.

    ...the nice thing with W7, I don't have to do all that. "Features" I didn't need or want were easy to disable.

    I've been involved in discussions on several tech forums about W10 and disabling/removing what you don't want isn't always that easy to do. As I understand with the anniversary release if you have the Home Edition, Cortana cannot be disabled completely (it keeps running in background and with every update goes "live" again) and you cannot defer updating at all. Only the more expensive pro edition allows for that and keeping Cortana (which I refer to as the "spawn" of BOB) completely turned off.  Whoever at MS came up with the idea of giving an OS a human-like personality needs to stop watching Star Trek.

    It isn't easy, but not so bad; I'd never recommend the Home version of any Windows.

    But you're restricted on W7, seeing as MS have done the dirty and are refusing to fully support hardware that is not yet in extended support (or whatever it's called). The last update I installed seems good, so I guess I'll check out the Anniversary update; 30 minutes to return to the previous disk image if I don't like it or they've screwed up again. Ah, never mind, I remember seeing a couple of folks having issues; I can wait a day or two.

    ...to fully disable the unwanted rubbish means hacking the registry, which is fraught with peril.

    W7 takes an infinitesimal amount of VRAM in comparison. As I don't do gaming or stream videos on my workstation, I have no need for DirectX 12, and a 1080 Ti should work perfectly fine for rendering purposes.  Should Nvidia ever restrict their cards to W10 only (like AMD did with the Ryzen CPU and Intel with Kaby Lake), then I'm done, as that abomination of an OS will never be allowed on any of my systems.

    Post edited by kyoto kid on
  • Kendall Sears Posts: 2,995
    edited April 2017

    As Kendall pointed out, that is not a bug but an intended design meant to stop video cards from crashing Windows 10.

    I don't have an Nvidia card, but it would be interesting to know whether the new Windows 10 'Game Mode' decreases the amount of VRAM reserved. One might be able to run DAZ Studio in Windows 10 'Game Mode'. However, from what I read about Game Mode, nothing of the sort was mentioned; it sounds more like Windows 10 throttles terminate-and-stay-resident daemons while in that mode.

    Kendall gave great background information on what the reasoning behind this might have been.

    Nevertheless, that "intended design" is not something customers should accept.

    It is not a question of whether it is a "bug" or not.

    -> It is a matter of a decision, made consciously or unconsciously somewhere at Microsoft, to change the working VRAM behavior from Windows 7 to Windows 10, which now has a huge negative side effect not only on people using GPU rendering but on anyone who uses GPUs for computation (AI research) or other design applications that rely on VRAM.

     - - -

     

     

    No, Kendall was not guessing. Windows 10 is designed that way as a security requirement. 

    Whether it was a "security design" or not, the behavior of allocating VRAM on the off chance that a monitor will be plugged in while the OS is running is unacceptable when that VRAM is not released after a reasonable time.

    It is not a "security issue" per se, as much as a "let's not lock up/freeze the system" issue.  As far as I am aware, it isn't feasible to hack into the running OS using a video card's hardware change interrupts.  An attempt to run high level operations during this period would likely result in a frozen bus rather than any type of privilege enhancement.

    Microsoft was in a quandary when they moved from Windows NT 3 to Windows NT 4. Prior to NT 4, EVERY piece of hardware was handled in the HAL (Hardware Abstraction Layer), and reinitializing and reallocating resources could be handled fairly easily.  However, HALs introduce lag, and MS wanted to increase performance for the video subsystem.  Short of recoding the HAL to use a different scheduling algorithm, the only way to increase video performance was to allow video cards to run below the HAL.  One of the compromises of this approach is that dynamic changes to the hardware outside of a rigidly controlled environment can freeze the OS.

    As I stated before, older cards could be set to use a simple 80x25 character onboard text buffer to allow the OS to "catch up", query the changes, and make the necessary modifications.  In some cases, a system reboot was necessary to make the changes "stick".  With Windows 2000 and the move to allow hardware hot-addition, MS had to further "enhance" what the system could do.  Shortly afterwards, the creation of video hardware that contained ONLY bitmap framebuffers made it necessary for Windows to allocate framebuffers to keep from having to reboot in order to add additional monitors.

    There is a lot more to this, and the above is very simplified to keep from becoming TL;DR.  The problem could be resolved if MS were to move the video card drivers back into the HAL, but this would greatly decrease performance for gaming.  In addition, certain Windows subsystems rely on the ability to directly tweak the hardware (DirectX/3D, etc.), bypassing even the video drivers to increase performance.

    Kendall

    Post edited by Kendall Sears on
  • Taoz Posts: 9,743

    Well like someone else said, at least MS could allocate a smaller, fixed amount of VRAM instead of a percentage of the total amount. Or make an option for disabling it if there's no need for hot-plugging monitors.

  • Kendall Sears Posts: 2,995
    edited April 2017
    Taozen said:

    Well like someone else said, at least MS could allocate a smaller, fixed amount of VRAM instead of a percentage of the total amount. Or make an option for disabling it if there's no need for hot-plugging monitors.

    Here's the thing.  I don't think that they are allocating a percentage.  The minimum framebuffer that one can reliably allocate for this type of operation is the largest monitor resolution supported by the card.  Therefore, if the video card is capable of supporting 4K monitor sizes, one has to assume that *someone* is going to hotplug a 4K monitor into the system (with DVI, HDMI, and DisplayPort, the act of turning a monitor on or off qualifies as a hotplug event -- do this and watch Windows completely reconfigure your desktop(s) and icons each time).  So now you're looking at allocating as many 4K framebuffers as there are monitor ports.  If you do the math on 32-bit depth x 4K screen sizes, you'll see that you're using a significant amount of memory.  This could end up looking like a great percentage of the VRAM.  Keep in mind that the primary use for VRAM is for screen framebuffers, not texture storage.  This is only going to get worse when 8K screen resolutions become supported by default.

    As I said previously: If you're using MS Windows, it is part of the "price you pay".  If you want your VRAM back for other things, then Linux is your path.  However, since the vast majority of Linux setups use the HAL for video drivers, high framerate gaming is normally not an option.

    Kendall

    Post edited by Kendall Sears on
  • Taoz Posts: 9,743
    Taozen said:

    Well like someone else said, at least MS could allocate a smaller, fixed amount of VRAM instead of a percentage of the total amount. Or make an option for disabling it if there's no need for hot-plugging monitors.

    Here's the thing.  I don't think that they are allocating a percentage.  The minimum framebuffer that one can reliably allocate for this type of operation is the largest monitor resolution supported by the card.  Therefore, if the video card is capable of supporting 4K monitor sizes, one has to assume that *someone* is going to hotplug a 4K monitor into the system (with DVI, HDMI, and DisplayPort, the act of turning a monitor on or off qualifies as a hotplug event -- do this and watch Windows completely reconfigure your desktop(s) and icons each time).  So now you're looking at allocating as many 4K framebuffers as there are monitor ports.  If you do the math on 32-bit depth x 4K screen sizes, you'll see that you're using a significant amount of memory.  This could end up looking like a great percentage of the VRAM.  Keep in mind that the primary use for VRAM is for screen framebuffers, not texture storage.  This is only going to get worse when 8K screen resolutions become supported by default.

    As I said previously: If you're using MS Windows, it is part of the "price you pay".  If you want your VRAM back for other things, then Linux is your path.  However, since the vast majority of Linux setups use the HAL for video drivers, high framerate gaming is normally not an option.

    Kendall

    Well, I'm not into gaming anyway, so if Iray rendering works OK in Linux, that may be the way to go then.

     

  • linvanchene Posts: 1,357
    edited April 2017

     

    Here's the thing.  I don't think that they are allocating a percentage.

    I do agree that the answer may not be as simple as

    "block x % of maximum available VRAM if xyz = true"

    Still, based on the answers posted here some people still seem to have the impression that everything is working "as intended".

    But that does not seem to be the case.

    Total Available Graphics Memory Reported Incorrectly

    Source:

    http://us.download.nvidia.com/Windows/375.70/375.70-win10-win8-win7-desktop-release-notes.pdf

     

    On page 18, under "Known product limitations", you can read:

     

    "In the Windows Display Driver Model (WDDM), Total Available Graphics (TAG) memory is reported as the sum of

    •Dedicated Video Memory (video memory dedicated for graphics use)

    •Dedicated System Memory (system memory dedicated for graphics use), and

    •Shared System Memory (system memory shared between the graphics subsystem and the CPU).

    The values for each of these components are computed according to WDDM guidelines when the NVIDIA Display Driver is loaded.

    Issue

    Some TAG-reporting APIs represent video memory using 32-bits instead of 64-bits, and consequently do not properly report available graphics memory when the TAG would otherwise exceed 4 gigabytes (GB). This results in under reporting of available memory and potentially undesirable behavior of applications that rely on these APIs to report available memory.

    The under reporting can be extreme. For example, 6 GB might be reported as 454 MB, and 8 GB might be reported as 1259 MB."

     

    You can find additional information in the linked release notes.
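    The underreporting the release notes describe is ordinary 32-bit truncation: any byte count above 4 GB loses its high bits when stored in a 32-bit field. A minimal sketch (the 6 GB input is from the notes; the exact reported values like 454 MB also depend on the shared-system-memory components of TAG, which the notes don't break down):

    ```python
    MASK_32 = 2**32 - 1  # the largest value a 32-bit field can hold

    def reported_bytes(actual_bytes: int) -> int:
        """What a 32-bit TAG-reporting API returns for a given true byte total."""
        return actual_bytes & MASK_32  # bits above 4 GB are silently dropped

    six_gb = 6 * 2**30
    print(reported_bytes(six_gb) // 2**20, "MB")  # 2048 MB reported instead of 6144 MB
    ```

    Anything at 4 GB or below passes through unchanged, which is why the bug only bit once cards and shared memory pushed the TAG sum past 4 GB.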

     

     - - -

    The "issue" is also not just related to Windows 10. It is just more visible because of the implementation of WDDM 2.x which seems to eat more memory . Windows 7 and 8 also reserved memory but in a less quantity as they were using WDDM 1.x .

    Commentary:

    Based on the acknowledged issues with TAG reporting, I have the impression that such a huge (!) amount of VRAM being blocked under Windows 10 is also not intended.

    So far no one has provided an official answer as to why a system that worked under Windows 7 and, afaik, Windows 8 needed to be changed in such a drastic (!) way with Windows 10.

    If everything were working as intended, then surely Microsoft support would have said so in the meantime.

    - - -

    Post edited by linvanchene on
  • namffuak Posts: 4,071

    Doing the math, as Kendall suggests: 4K resolution is 8.3 Mp (mega-pixels, or million pixels) X 4 bytes per pixel (32-bit color depth), which equals a frame buffer of about 33 MB; X 5 (one DVI, one HDMI, and 3 DisplayPort connectors) equals roughly 166 MB. And I wouldn't be surprised at a considerably higher number, as that 8.3 Mp is the minimum size for the buffer (double- or triple-buffering would multiply it), and I don't know what else Nvidia allocates to go with it.
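    A quick sketch of that arithmetic (the UHD resolution, the five-port count, and 32-bit color are the assumptions; note that 32-bit depth is 4 bytes per pixel):

    ```python
    def framebuffer_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
        """Raw size of one framebuffer; 32-bit color depth = 4 bytes per pixel."""
        return width * height * bytes_per_pixel

    one_4k = framebuffer_bytes(3840, 2160)        # one UHD 4K buffer
    print(round(one_4k / 2**20, 1), "MB")         # about 31.6 MB per buffer
    print(round(5 * one_4k / 2**20, 1), "MB")     # five outputs, one buffer each
    ```

    Bare single buffers alone come to well under a gigabyte, which suggests WDDM is setting aside more than one buffer per output if the multi-GB reservations people report are accurate.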

    I'm in no position to check this, as my main system runs Win 7 - and will until such time as an update to Studio won't run on Win 7 (it's not internet-connected, so I don't care about security patches or upgrades).

  • Takeo.Kensei Posts: 1,303

     

     

    Taozen said:

    Well like someone else said, at least MS could allocate a smaller, fixed amount of VRAM instead of a percentage of the total amount. Or make an option for disabling it if there's no need for hot-plugging monitors.

    Here's the thing.  I don't think that they are allocating a percentage.  The minimum framebuffer that one can reliably allocate for this type of operation is the largest monitor resolution supported by the card.  Therefore, if the video card is capable of supporting 4K monitor sizes, one has to assume that *someone* is going to hotplug a 4K monitor into the system (with DVI, HDMI, and DisplayPort, the act of turning a monitor on or off qualifies as a hotplug event -- do this and watch Windows completely reconfigure your desktop(s) and icons each time).  So now you're looking at allocating as many 4K framebuffers as there are monitor ports.  If you do the math on 32-bit depth x 4K screen sizes, you'll see that you're using a significant amount of memory.  This could end up looking like a great percentage of the VRAM.  Keep in mind that the primary use for VRAM is for screen framebuffers, not texture storage.  This is only going to get worse when 8K screen resolutions become supported by default.

    As I said previously: If you're using MS Windows, it is part of the "price you pay".  If you want your VRAM back for other things, then Linux is your path.  However, since the vast majority of Linux setups use the HAL for video drivers, high framerate gaming is normally not an option.

    Kendall

    Staying on Win 7 is another, and better, option, since there is no native DS build for Linux, and using DS with Wine is a higher price to pay than using W7.

     

    Here's the thing.  I don't think that they are allocating a percentage.

    I do agree that the answer may not be as simple as

    "block x % of maximum available VRAM if xyz = true"

    Still, based on the answers posted here, some people seem to have the impression that everything is working "as intended".

    But that does not seem to be the case.

    Total Available Graphics Memory Reported Incorrectly

    Source:

    http://us.download.nvidia.com/Windows/375.70/375.70-win10-win8-win7-desktop-release-notes.pdf

     

     

    "In the Windows Display Driver Model (WDDM), Total Available Graphics (TAG) memory is reported as the sum of

    •Dedicated Video Memory (video memory dedicated for graphics use)

    •Dedicated System Memory (system memory dedicated for graphics use), and

    •Shared System Memory (system memory shared between the graphics subsystem and the CPU).

    The values for each of these components are computed according to WDDM guidelines when the NVIDIA Display Driver is loaded.

    Issue

    Some TAG-reporting APIs represent video memory using 32-bits instead of 64-bits, and consequently do not properly report available graphics memory when the TAG would otherwise exceed 4 gigabytes (GB). This results in under reporting of available memory and potentially undesirable behavior of applications that rely on these APIs to report available memory.

    The under reporting can be extreme. For example, 6 GB might be reported as 454 MB, and 8 GB might be reported as 1259 MB."
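    The truncation NVIDIA describes is a plain 32-bit overflow: an API field that stores byte counts in 32 bits silently keeps only the value modulo 4 GiB. A minimal sketch of the mechanism (the shared-memory figure here is purely illustrative, chosen so the result reproduces the 454 MB example from the release notes):

    ```python
    MiB = 1024 * 1024

    def truncate_to_32bit(tag_bytes):
        # A 32-bit API field keeps only the low 32 bits,
        # i.e. the value modulo 4 GiB.
        return tag_bytes & 0xFFFFFFFF

    # Illustrative only: 6 GiB of dedicated VRAM plus ~2502 MiB of shared
    # system memory gives a TAG just over 8 GiB, which wraps twice past 4 GiB.
    tag = (6 * 1024 + 2502) * MiB
    print(truncate_to_32bit(tag) // MiB)  # prints 454
    ```

    The card's VRAM itself is untouched; only the *reported* number is wrong, which is why applications that size their workloads from these APIs misbehave.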

     - - -

    Commentary:

    Based on the acknowledged issues with TAG reporting, I have the impression that the huge amount of VRAM being blocked under Windows 10 is also not intended.

    So far no one has explained why a system that worked under Windows 7 (and, as far as I can tell, also Windows 8) needed to be changed with Windows 10.

    If everything were working as intended, surely Microsoft support would have said so by now.

    - - -

    None of your screenshots shows an incorrectly reported Total Available Graphics value.

    If the problem were TAG reporting, you would also see it on Windows 7 and 8, which is not the case

     

  • Kendall SearsKendall Sears Posts: 2,995
    edited April 2017

    Staying on Win 7 is another and better option, since there is no native DS build for Linux and using DS with Wine is a higher price than staying on W7

    Indeed.  However, for the majority of people Win 7 is no longer available.  Unless you already have it (and didn't get suckered into converting it to Win 10), you have no way to get Windows 7.  This leaves Linux as the only non-Windows-10 way to use NVIDIA cards with DS/Iray (short of external PCIe enclosures and unsupported OSX hacks).  Be aware that Iray DOES run natively in Linux, and a DS scene can be exported to render at full speed in Linux Iray.  So, even though DS only runs via Wine on Linux, rendering natively in Linux at high speed is an option.

    Kendall

    Post edited by Kendall Sears on
  • Takeo.KenseiTakeo.Kensei Posts: 1,303

    Staying on Win 7 is another and better option, since there is no native DS build for Linux and using DS with Wine is a higher price than staying on W7

    Indeed.  However, for the majority of people Win 7 is no longer available.  Unless you already have it (and didn't get suckered into converting it to Win 10), you have no way to get Windows 7.  This leaves Linux as the only non-Windows-10 way to use NVIDIA cards with DS/Iray (short of external PCIe enclosures and unsupported OSX hacks).  Be aware that Iray DOES run natively in Linux, and a DS scene can be exported to render at full speed in Linux Iray.  So, even though DS only runs via Wine on Linux, rendering natively in Linux at high speed is an option.

    Kendall

    An Iray Server license is still $300 per year; I'm not sure people would go that route. I think you can still buy Windows 8.1 and even Windows 7, which are cheaper: https://www.amazon.com/Windows-Professional-System-Builder-Packaging/dp/B00H09BOXQ

     

  • kyoto kidkyoto kid Posts: 40,617
    edited April 2017

    Staying on Win 7 is another and better option, since there is no native DS build for Linux and using DS with Wine is a higher price than staying on W7

    Indeed.  However, for the majority of people Win 7 is no longer available.  Unless you already have it (and didn't get suckered into converting it to Win 10), you have no way to get Windows 7.  This leaves Linux as the only non-Windows-10 way to use NVIDIA cards with DS/Iray (short of external PCIe enclosures and unsupported OSX hacks).  Be aware that Iray DOES run natively in Linux, and a DS scene can be exported to render at full speed in Linux Iray.  So, even though DS only runs via Wine on Linux, rendering natively in Linux at high speed is an option.

    Kendall

    ...OEM copies of W7 Pro are still available. You have to do a full clean install of your main (C:) drive, but that is a small price to pay.

    Since DAZ Studio does not natively support networked rendering for Iray (only Carrara does), how would one go about that, short of writing your own network/bridge utility?

    Post edited by kyoto kid on
  • linvanchenelinvanchene Posts: 1,357
    edited April 2017

    Here's the thing.  I don't think that they are allocating a percentage.  The minimum framebuffer that one can reliably allocate for this type of operation is the largest monitor resolution supported by the card.  Therefore, if the video card is capable of supporting 4K monitor sizes, one has to assume that *someone* is going to hotplug a 4K monitor into the system (with DVI, HDMI, and DisplayPort, the act of turning a monitor on or off qualifies as a hotplug event -- do this and watch Windows completely reconfigure your desktop(s) and icons each time).  So now you're looking at allocating as many 4K framebuffers as there are monitor ports.  If you do the math on 32-bit depth x 4K screen sizes, you'll see that you're using a significant amount of memory.  This could end up looking like a large percentage of the VRAM.  Keep in mind that the primary use for VRAM is screen framebuffers, not texture storage.  This is only going to get worse when 8K screen resolutions become supported by default.

     

    Doing the math, as Kendall suggests: 4K resolution is 8.3 Mp (megapixels, or million pixels) x 4 bytes per pixel (32-bit color depth), which equals a framebuffer of about 33 MB; x 5 (one DVI, one HDMI, and 3 DisplayPort connectors) equals roughly 166 MB. And I wouldn't be surprised at a number 10% higher, as that 8.3 Mp is the minimum size for the buffer; I don't know what else Nvidia allocates to go with it.
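    That arithmetic can be sketched directly (a rough lower bound only; Windows may allocate additional buffers per output, e.g. for double buffering, that this does not account for):

    ```python
    MB = 10**6

    def framebuffer_bytes(width, height, bytes_per_pixel=4):
        # One framebuffer at 32-bit color depth (4 bytes per pixel).
        return width * height * bytes_per_pixel

    one_4k = framebuffer_bytes(3840, 2160)   # 4K UHD
    print(round(one_4k / MB, 1))             # 33.2 MB per buffer
    print(round(5 * one_4k / MB, 1))         # 165.9 MB for five connectors
    ```

    Even with generous multipliers for extra per-output buffers, this is well short of a gigabyte, which is why the framebuffer argument alone doesn't settle the question.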

    This whole talk about frame buffers is just adding more confusion.

    https://en.wikipedia.org/wiki/Framebuffer

    This whole speculation does not add up with what you can actually observe:

     

    Titan (2013) / 4 ports / ~1 GB blocked of 6 GB

    Titan X (2016) / 5 ports / ~ around 2.4 GB blocked of 12 GB (reported by other user)

    GTX 1080 / 5 ports / 1.4 GB blocked of 8 GB

    GTX 1080 Ti / 4 ports / 2 GB blocked of 11 GB
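    Using the figures reported above, the blocked share works out to roughly 17-20% of total VRAM on every card, while the per-port amount varies widely -- which is exactly why the port count alone does not explain the numbers:

    ```python
    # (card, blocked GB, total GB, ports) -- values as reported in this thread
    cards = [
        ("Titan (2013)",   1.0,  6, 4),
        ("Titan X (2016)", 2.4, 12, 5),
        ("GTX 1080",       1.4,  8, 5),
        ("GTX 1080 Ti",    2.0, 11, 4),
    ]
    for name, blocked, total, ports in cards:
        share = blocked / total                # fraction of VRAM blocked
        per_port = blocked * 1024 / ports      # MiB per physical output
        print(f"{name}: {share:.1%} blocked, {per_port:.0f} MiB per port")
    ```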

    - - -

    -> So now please explain:

    Why does the 1080 Ti, with only 4 ports, block a larger amount of VRAM (2 GB) than the 1080 with 5 ports (1.4 GB)?

    Why do a Titan and a GTX 1080 Ti, both with 4 ports, block hugely different amounts of VRAM (1 GB vs 2 GB)?

    Why do a Titan X and a GTX 1080, both with 5 ports, block hugely different amounts of VRAM (~2.4 GB vs 1.4 GB)?

    - - -

    Post edited by linvanchene on
  • TaozTaoz Posts: 9,743

    The question is how long Nvidia will support Windows 7. Have they said anything about that?

  • TaozTaoz Posts: 9,743
    kyoto kid said:

    Staying on Win 7 is another and better option, since there is no native DS build for Linux and using DS with Wine is a higher price than staying on W7

    Indeed.  However, for the majority of people Win 7 is no longer available.  Unless you already have it (and didn't get suckered into converting it to Win 10), you have no way to get Windows 7.  This leaves Linux as the only non-Windows-10 way to use NVIDIA cards with DS/Iray (short of external PCIe enclosures and unsupported OSX hacks).  Be aware that Iray DOES run natively in Linux, and a DS scene can be exported to render at full speed in Linux Iray.  So, even though DS only runs via Wine on Linux, rendering natively in Linux at high speed is an option.

    Kendall

    ...OEM copies of W7 Pro are still available. You have to do a full clean install of your main (C:) drive, but that is a small price to pay.

    Since DAZ Studio does not natively support networked rendering for Iray (only Carrara does), how would one go about that, short of writing your own network/bridge utility?

    Hasn't there been talk about that being included in DS 5?
