GeForce vs Quadro

Good day, I am making plans to purchase a computer specifically for rendering. I am tempted to get the professional grade Quadro card ... but I have never really heard anyone complain about the GeForce cards. If anyone has worked with both, is it worth the extra cost?


Comments

  • OZ-84OZ-84 Posts: 137

    If 12 GB of video memory is enough for you (Titan Xp), then the answer is no: Quadros aren't worth it at all. 

  • AllenArtAllenArt Posts: 7,175
    edited July 2018

    The Quadro is made especially for hard use, while the GeForce cards are geared more toward the consumer market. Quadros are used by studios that do graphics specifically. If you plan on rendering constantly, then maybe a Quadro card IS for you. I think you'll find, though, that while they might have more RAM than consumer cards, they don't necessarily perform any faster. You'll have to do your homework to decide which will work best for you ;). Also, I'd make sure DS Iray supports any card you're thinking of buying before you put money out for it ;).

    Laurie

    Post edited by AllenArt on
  • Iray supports any Nvidia Quadro from the Kepler series onward, but the older Fermi architecture will not work, as I understand it.

    However, most Quadros have fewer CUDA cores than GTX cards in the same price range, until you get up into the top-tier Quadros, which also have far more VRAM. For example, the $5,000 P6000 has 24 GB of VRAM and 3,840 cores, whereas a $1,200 Titan Xp has only 12 GB of VRAM but the same number of CUDA cores. If you had $5K to burn, you could get 4 Titan Xps and quadruple your core count. You'd still only have 12 GB of VRAM to work with, since the VRAM on these models doesn't stack, but most scenes come in well below that mark, unless you're doing pro-level work like Pixar, or putting multiple figures in a scene with 4K textures on every part of every figure, plus high-poly scenery that also uses 4K textures.

    But then, if you are, you need to go for a P6000.

    The "big to-do" about Quadros over GTX cards stems from GTX cards having a faster clock speed, because they're designed for games that require faster refresh rates and quick adjustments to the displayed images - shadows, lighting, reflections, tesselation, etc. They're also designed to handle overclocking to boost the clock speed. They also require larger power supplies to feed them.
    GTX cards (at least those in the Kepler family) also have inherent stability issues when it comes to rendering, because it's a long, slow burn they're not used to. To paraphrase Sammy Hagar - they can't drive 55.

    Quadro cards run much slower and generally have a lower power draw, which saves money on the electric bill, in theory. If a Quadro draws 300 W for a 16-hour render, does it really use less electricity than a GTX sucking down twice as much juice in only half the time? Would the GTX draw twice as much electricity? I really don't know.
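    That question is just arithmetic: energy used equals power multiplied by time. A quick sanity check in Python, using the hypothetical wattages above:

```python
# Energy = power (W) x time (hours), giving watt-hours.
quadro_wh = 300 * 16  # Quadro: 300 W for a 16-hour render
gtx_wh = 600 * 8      # GTX: twice the draw, but half the time
print(quadro_wh, gtx_wh)  # 4800 4800 -- identical energy use
```

    So if the GTX really finishes in half the time at twice the draw, the electric bill comes out the same; the GTX only costs more if it doesn't finish that fast.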

    However, both EVGA and MSI (and maybe others?) offer apps that let you control the clock speed of a GPU, including backing it down to the top speed of a Quadro, which improves stability during long renders (8 hours or more). There are also commands you can send directly to the GPU through Nvidia's command-line tool in a console window to control the GPU's clock and memory speeds.

    These stability issues have been addressed in the Pascal family, such as the 1080 Ti (and possibly the previous-generation Maxwell family, I'm not entirely sure). However, I haven't tried overclocking mine to see if it renders faster (I assume it won't).

    If you go with a GTX, I strongly suggest a 1080 Ti direct from Nvidia. While they're currently out of stock thanks to the crypto-currency boom, Nvidia has raised its limit per customer considerably (from 2 to 10!), and at $700 each, they're a bargain. Each has 11 GB of VRAM and 3,584 cores, and that same theoretical $5,000 burning a hole in your pocket would get you 7 of them, putting 25,088 CUDA cores at your disposal. More cores means faster Iray renders.

  • kyoto kidkyoto kid Posts: 41,857
    OZ-84 said:

    If 12 GB of video memory is enough for you (Titan Xp), then the answer is no: Quadros aren't worth it at all. 

    ...actually, yes they are, as the Quadro drivers let you bypass W10's WDDM, which robs GTX cards of about 18% of their VRAM. That Titan Xp will only have just over 9 GB of usable VRAM for its $1,200 price tag; a 1080 Ti about the same.
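    Taking the claimed 18% figure at face value, the numbers in this post are easy to reproduce (a quick Python sanity check; the 18% overhead is the claim being debated here, not an established fact):

```python
# Usable VRAM if WDDM really reserved ~18% of the card's memory.
claimed_overhead = 0.18
titan_xp_usable = 12 * (1 - claimed_overhead)    # 12 GB Titan Xp
gtx_1080ti_usable = 11 * (1 - claimed_overhead)  # 11 GB 1080 Ti
print(round(titan_xp_usable, 2))    # 9.84 -> "just over 9 GB"
print(round(gtx_1080ti_usable, 2))  # 9.02 -> about the same
```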

  • kyoto kidkyoto kid Posts: 41,857
    AllenArt said:

    The Quadro is made especially for hard use, while the GeForce cards are geared more toward the consumer market. Quadros are used by studios that do graphics specifically. If you plan on rendering constantly, then maybe a Quadro card IS for you. I think you'll find, though, that while they might have more RAM than consumer cards, they don't necessarily perform any faster. You'll have to do your homework to decide which will work best for you ;). Also, I'd make sure DS Iray supports any card you're thinking of buying before you put money out for it ;).

    Laurie

    ...+1

  • IllidanstormIllidanstorm Posts: 655

    As someone with a 1080 Ti with 11 GB of VRAM who runs out of it regularly, I'd think 32 GB of VRAM would be a game changer for my rendering tasks.
    I'd probably wait for the 1180 and 1180 Ti though; there are rumors of the 1180 having 16 GB of VRAM. GDDR6 over GDDR5 will also make a difference.
    So unless you need it right now, better to wait.

  • kyoto kidkyoto kid Posts: 41,857
    edited July 2018

    ...for myself it's "wait and see". I still have my doubts they'll offer anything over 12 GB in the GTX line, so it doesn't compete with their Titan and Quadro series. I still remember all the hype about the 980 Ti having 8 GB before it was released.

    Those 32 GB Titan Vs they gave out are most likely a one-shot deal. Production models will more likely have 16 GB, which is still an improvement and would further establish Titan as its own line separate from the GTX series.

    From what I read, the 11xx series will have GDDR6 memory.

    Post edited by kyoto kid on
  • ebergerlyebergerly Posts: 3,255
    edited July 2018
    kyoto kid said:
    OZ-84 said:

    If 12 GB of video memory is enough for you (Titan Xp), then the answer is no: Quadros aren't worth it at all. 

    ...actually, yes they are, as the Quadro drivers let you bypass W10's WDDM, which robs GTX cards of about 18% of their VRAM. 

    I'm still hoping someone, anyone, will finally provide some actual proof that that's the case. Because I've seen a ton of evidence that points to it being a myth. Example attached....

    All but 900MB of my 11GB being used. 

     

    [Attachment: PageVRAM.JPG]
    Post edited by ebergerly on
  • TaozTaoz Posts: 10,258

    Question is how Windows is calculating these numbers.

  • ebergerlyebergerly Posts: 3,255
    Taoz said:

    Question is how Windows is calculating these numbers.

    How is anything calculating any VRAM numbers? Where's the proof of which one is correct? Statements from Microsoft say Task Manager is the most accurate, since Windows 10's VidMm and VidSch are the ones running the show.

    It's easy to question things, but actual facts are needed to discern the truth. 

     

  • TaozTaoz Posts: 10,258
    ebergerly said:
    Taoz said:

    Question is how Windows is calculating these numbers.

    How is anything calculating any VRAM numbers? Where's the proof of which one is correct? Statements from Microsoft say Task Manager is the most accurate, since Windows 10's VidMm and VidSch are the ones running the show.

    It's easy to question things, but actual facts are needed to discern the truth. 

    I agree, but I'm still waiting for the unquestionable facts. So currently it's in the "I have no idea" category here. It seems to be a fact that Windows does eat some of the VRAM; the question is how much, and how you can measure it precisely.

     

  • ebergerlyebergerly Posts: 3,255
    edited July 2018

    BTW, here's a portion of the statement from the lead engineer at Microsoft responsible for the Windows 10 GPU scheduler and memory manager:

    "performance data for the GPU is available no matter what API is being used, whether it be Microsoft DirectX API, OpenGL, OpenCL, Vulkan or even proprietary API such as AMD's Mantle or Nvidia's CUDA.  Further, because VidMm and VidSch are the actual agents making decisions about using GPU resources, the data in the Task Manager will be more accurate than many other utilities, which often do their best to make intelligent guesses since they do not have access to the actual data. "

    And...

    "Dedicated memory represents memory that is exclusively reserved for use by the GPU and is managed by VidMm. On discrete GPUs this is your VRAM, the memory that sits on your graphics card."

    Post edited by ebergerly on
  • TaozTaoz Posts: 10,258

    Very well, but the numbers being displayed are confusing to interpret. I have an 8 GB 1070, but the numbers say I have 12 GB of GPU RAM, i.e. some of it is shared memory taken from system RAM.

    I'd like to see some clear data like:

    Total VRAM reserved by system: x MB
    Total VRAM used by display(s): x MB
    Total VRAM available for rendering: x MB
     

     

     

  • ebergerlyebergerly Posts: 3,255
    edited July 2018

    Maybe you should ask Mr. Google for some more info about the terminology. Mine reports correct VRAM values for my cards, as well as good info on individual GPU processes and compute engine usage. 

    And that's compared to, say, an Iray log file which reports a single word "available", which many apparently want to put their faith in for some reason. And that "available" number is reported before a scene even gets loaded. I guess I prefer realtime usage data reported by the software that controls everything. 

    Post edited by ebergerly on
  • linvanchenelinvanchene Posts: 1,386
    edited July 2018

    The support ticket Request #273017 from June 5, 2018, asking for clarification on questions related to VRAM usage in Iray with Windows 10, is still open.

     

    1) Based on what rules does Iray assign VRAM to the workspace?

    2a) What does the log entry at DAZ Studio startup, "11 GiB total, 9.14774 GiB available", actually indicate?

    2b) Why does the entry not indicate "11 GiB total, 11 GiB available" on GPU devices that have no display attached?

    3) Can the DAZ 3D or Iray developers comment on whether the issue with Windows 10 and VRAM allocation, made public by Otoy in 2015, still exists?

    compare:

    https://www.daz3d.com/forums/discussion/248401/vram-management

    The ticket was last updated on July 10 by DAZ3D support staff.

    Support staff has not yet heard back from the devs.

     

    Post edited by linvanchene on
  • kyoto kidkyoto kid Posts: 41,857
    ebergerly said:
    kyoto kid said:
    OZ-84 said:

    If 12 GB of video memory is enough for you (Titan Xp), then the answer is no: Quadros aren't worth it at all. 

    ...actually, yes they are, as the Quadro drivers let you bypass W10's WDDM, which robs GTX cards of about 18% of their VRAM. 

    I'm still hoping someone, anyone, will finally provide some actual proof that that's the case. Because I've seen a ton of evidence that points to it being a myth. Example attached....

    All but 900MB of my 11GB being used. 

     

    ..this has been a topic of discussion over on the Microsoft forums as well, so it's not just people here who are noticing it.

  • scorpioscorpio Posts: 8,533
    kyoto kid said:
    ebergerly said:
    kyoto kid said:
    OZ-84 said:

    If 12 GB of video memory is enough for you (Titan Xp), then the answer is no: Quadros aren't worth it at all. 

    ...actually, yes they are, as the Quadro drivers let you bypass W10's WDDM, which robs GTX cards of about 18% of their VRAM. 

    I'm still hoping someone, anyone, will finally provide some actual proof that that's the case. Because I've seen a ton of evidence that points to it being a myth. Example attached....

    All but 900MB of my 11GB being used. 

     

    ..this has been a topic of discussion over on the Microsoft forums as well, so it's not just people here who are noticing it.

    If you repeat it enough it might come true; honestly, it's getting boring.

  • fastbike1fastbike1 Posts: 4,078

    ebergerly's attachment seems pretty clear to me. It shows how much VRAM the GPU has in total, how much is available, and how much shared system RAM is available.

  • SickleYieldSickleYield Posts: 7,649

    Good day, I am making plans to purchase a computer specifically for rendering. I am tempted to get the professional grade Quadro card ... but I have never really heard anyone complain about the GeForce cards. If anyone has worked with both, is it worth the extra cost?

    Hi! A lot of us PAs are using GTX cards because they do work and they are so much cheaper. I knew nothing about the hardware vs. Quadro question when I bought my first GeForce. At the moment I use a 1080 for main rendering, a 980 for backup, and a 740 runs my monitors, and that works fine; most of my promo renders are done in under a half hour at 1000x1300 (there are notable exceptions). I'm not worried about stability on 8-hour renders, because if my renders regularly took 8 hours I'd go broke. The fact is, for the price of a Quadro I could kill a new GeForce every month for over a year and still come out ahead, and I am on a production schedule.
  • nicsttnicstt Posts: 11,715
    edited July 2018

    The only reason to go Quadro really, imo, is if you rarely produce scenes that would fit on anything with less RAM.

    If your renders take many hours on a regular basis, then it might be something to think about; personally, more GTX cards might be an answer, RAM permitting.

    And while Quadro cards are supposed to last longer, they break just like anything else.

    Say you have 1 Quadro vs. 2 GTX cards: if the Quadro breaks, you can't render at all; if one GTX breaks, you lose 50%. Only you can really decide if it's worth it for you.

    Edit:

    Oh, and I'd wait to see what the new cards are - when they appear. They are expected this year.

    Post edited by nicstt on
  • 31415926543141592654 Posts: 975

    Thanks everyone ... lots of good advice on this thread now. Keep it coming, but I am starting to lean more towards the Quadro. I do get into some very complex scenes that need the RAM, and I do get into many hours of rendering because of animation. Many times I have started the render process just before going to bed, then turned it off in the morning, letting it do as many frames as it could in between. This is not going to be an office machine; it is geared strictly for rendering. Thank you for all the help.

  • DustRiderDustRider Posts: 2,880

    IMHO, I would spend the money elsewhere and not on a Quadro, unless you need specific features offered by Quadro hardware and drivers, which many professional applications use to improve performance and interactive image quality (not available in DS), quad-buffered stereo for example (if you need to ask what that is, then you don't need it); or you really need the extra memory (though using Scene Optimizer by V3Digitimes would be much less expensive); or you're doing a lot of double-precision floating-point calculations (scientific applications; Iray doesn't use them); or possibly some of the advanced AI functions.

    The idea that Quadros are built to hold up under continuous load better than GTX cards is, IMHO, a bit overhyped. In fact, I've had more Quadros with issues from use than GTX cards. That's just my experience, but unless I need the additional features offered on a Quadro (which in many cases just means the Quadro drivers have the features enabled; the silicon is virtually identical), I will spend my money on a GTX. If you want increased speed, just get 2-4 GTX "Ti" cards or Titans and you'll blow the doors off any single Quadro for use with DS. If you are worried about the stability of GTX cards, simply drop the clock speed down to that of the equivalent Quadro card, and you should get similar performance. However, I have never had any problems running GTX cards for long renders that have often gone over 24 hours (and I almost always use a laptop), so again, based on my own experience, I think this is mostly hype and salesman speak (I have had heat issues with Quadros, though, due to failing fans).

    If you're really more interested in animation and large scenes, right now your most budget-friendly performance (speed) option would be to go with GTX card(s) and Octane Render with its plugin for DS. It is MUCH faster than Iray for animations, and with the ability to use system RAM for textures in the current version (this will be expanded to include both textures and geometry in an upcoming version; the beta is available to current customers now), you can use GTX cards without worrying about running out of GPU memory like you do with Iray. The downside to Octane is that you will often have to optimize, and sometimes create, your own shaders. Depending on how well you understand shaders and how much you are willing to do yourself, this may be a huge obstacle or just a minor speed bump.

    Keep in mind, there is a reason the crypto-currency miners aren't building systems with Quadro cards at the speed of heat like they are with GTX cards: Quadros don't provide any benefit over GTX cards (and miners run them hard 24x7x365), and the Quadro cards cost a lot more for equivalent core counts. As SickleYield kind of alluded to above, the Quadro would have to last a LONG time to be cost-competitive with a GTX, and if it lasts that long, you will probably be ready for the next upgrade before you get a full ROI.

    There are instances where a Quadro does make production and economic sense, for example in an animation studio running Maya, ZBrush, and/or other high-end applications, where the Quadro-specific drivers do enhance production, especially with regard to employee time. But IMHO, if your targeted use is DS and Iray (AFAIK DS does not use any Quadro-specific enhancements), Quadros don't make much sense, unless you absolutely need the increased RAM over the performance boost of multiple GTX cards (the Titans might make the most sense as an in-between choice).

    Hope some of this helps a bit.

  • kyoto kidkyoto kid Posts: 41,857
    scorpio said:
    kyoto kid said:
    ebergerly said:
    kyoto kid said:
    OZ-84 said:

    If 12 GB of video memory is enough for you (Titan Xp), then the answer is no: Quadros aren't worth it at all. 

    ...actually, yes they are, as the Quadro drivers let you bypass W10's WDDM, which robs GTX cards of about 18% of their VRAM. 

    I'm still hoping someone, anyone, will finally provide some actual proof that that's the case. Because I've seen a ton of evidence that points to it being a myth. Example attached....

    All but 900MB of my 11GB being used. 

     

    ..this has been a topic of discussion over on the Microsoft forums as well, so it's not just people here who are noticing it.

    If you repeat it enough it might come true; honestly, it's getting boring.

    ...just reporting what I am reading:

    https://answers.microsoft.com/en-us/windows/forum/windows_10-performance-winpc/windows-10-and-vram-on-nvidia-gtx-cards/21e94f46-fbb7-4cf9-997c-9998a1f52e01

    https://answers.microsoft.com/en-us/windows/forum/windows_10-hardware-winpc/windows-10-does-not-let-cuda-applications-to-use/cffb3fcd-5a21-46cf-8123-aa53bb8bafd6

     

  • CGHipsterCGHipster Posts: 241
    kyoto kid said:
    OZ-84 said:

    If 12 GB of video memory is enough for you (Titan Xp), then the answer is no: Quadros aren't worth it at all. 

    ...actually, yes they are, as the Quadro drivers let you bypass W10's WDDM, which robs GTX cards of about 18% of their VRAM. That Titan Xp will only have just over 9 GB of usable VRAM for its $1,200 price tag; a 1080 Ti about the same.

    Hi, question then... if the 1080 Ti is being screwed by Windows, which is something I have read before, does it make much sense for someone to upgrade from GTX 1070s to the 1080 Ti? I'm only asking because I have 2 1070s and use my onboard 660 UHD graphics for my monitors, so I have 8 GB of VRAM. It sounds like if I did upgrade to the 1080 Ti I'm really only gaining 1 GB of VRAM, and it would be a waste of cash for a casual renderer like me.

  • Richard HaseltineRichard Haseltine Posts: 108,065
    kyoto kid said:
    OZ-84 said:

    If 12 GB of video memory is enough for you (Titan Xp), then the answer is no: Quadros aren't worth it at all. 

    ...actually, yes they are, as the Quadro drivers let you bypass W10's WDDM, which robs GTX cards of about 18% of their VRAM. That Titan Xp will only have just over 9 GB of usable VRAM for its $1,200 price tag; a 1080 Ti about the same.

    Hi, question then... if the 1080 Ti is being screwed by Windows, which is something I have read before, does it make much sense for someone to upgrade from GTX 1070s to the 1080 Ti? I'm only asking because I have 2 1070s and use my onboard 660 UHD graphics for my monitors, so I have 8 GB of VRAM. It sounds like if I did upgrade to the 1080 Ti I'm really only gaining 1 GB of VRAM, and it would be a waste of cash for a casual renderer like me.

    The issue, if real, affects all cards. As I understand it, it depends on the number of display connections the card has, so it does tend to be more noticeable on higher-end cards, but you would still be suffering the effect on your existing cards.

  • CGHipsterCGHipster Posts: 241
    kyoto kid said:
    OZ-84 said:

    If 12 GB of video memory is enough for you (Titan Xp), then the answer is no: Quadros aren't worth it at all. 

    ...actually, yes they are, as the Quadro drivers let you bypass W10's WDDM, which robs GTX cards of about 18% of their VRAM. That Titan Xp will only have just over 9 GB of usable VRAM for its $1,200 price tag; a 1080 Ti about the same.

    Hi, question then... if the 1080 Ti is being screwed by Windows, which is something I have read before, does it make much sense for someone to upgrade from GTX 1070s to the 1080 Ti? I'm only asking because I have 2 1070s and use my onboard 660 UHD graphics for my monitors, so I have 8 GB of VRAM. It sounds like if I did upgrade to the 1080 Ti I'm really only gaining 1 GB of VRAM, and it would be a waste of cash for a casual renderer like me.

    The issue, if real, affects all cards. As I understand it, it depends on the number of display connections the card has, so it does tend to be more noticeable on higher-end cards, but you would still be suffering the effect on your existing cards.

    Ok, thanks.  Good to know.

  • ebergerlyebergerly Posts: 3,255
    edited July 2018
    kyoto kid said:

    ..this has been a topic of discussion over on the Microsoft forums as well, so it's not just people here who are noticing it.

    Yeah, and a lot of people have also noticed that the W10 Task Manager shows their GPU isn't rendering while GPU-Z shows it at 100%. So Task Manager is junk, right? No, it's just that they don't know to select the correct compute engine to be displayed. So basing beliefs on computer tech forum posts probably isn't the best source of facts. Personally, it's generally the last place I go for real information.

    I'm not saying this W10-grabbing-VRAM issue isn't real, just that I haven't seen one bit of official, factual data saying it's an issue. Nvidia hasn't said anything. Microsoft hasn't said anything, and in fact their data shows just the opposite. Not one trusted tech website has said anything based on real facts. Seems to me if it was affecting all GPUs under W10 there would be an uproar heard 'round the world. But other than some posts from 2-3 years ago by some users, it seems like a non-issue. And if zillions of Nvidia customers spent all that money on their GPUs but couldn't access the VRAM, don't you think they'd be furious?

    And the whole concept has never made much sense to me. Why would W10 grab VRAM in such huge quantities? People are claiming 3 GB on an 11 GB GPU? Why?

    I get that you need a buffer for the images the GPU is producing so that you can display and refresh any connected monitors. And maybe W10 is blocking off a buffer area in the VRAM. But a 1920x1080 image only takes something like 5-10 MEGABYTES. Not even close to 3 GB. Even if your app was producing video and wanted to buffer, say, one second of images at 60 images per second, that's only like 300-600 MB. Or maybe you also want to run videos on 3 monitors simultaneously or something? Fine, but I can't imagine W10 would penalize every GPU application with all that VRAM if it's not requested. Presumably the requesting application (aka game or whatever) would request a bunch of VRAM so its users don't see flickering video. But otherwise, why would W10 care? Yeah, if W10 had to reserve VRAM for other apps that requested it, then fine. But if you don't have that situation, then why would W10 gobble VRAM?
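    The framebuffer math above is easy to verify (a rough sketch in Python, assuming 4 bytes per pixel for 32-bit color):

```python
# Rough framebuffer sizes for a 1080p display, at 4 bytes per pixel.
width, height, bytes_per_pixel = 1920, 1080, 4
frame_mb = width * height * bytes_per_pixel / 1024**2
buffer_mb = frame_mb * 60  # a full second of frames at 60 fps
print(round(frame_mb, 1))  # 7.9  -> MB per frame
print(round(buffer_mb))    # 475  -> MB for one second of frames
```

    Neither figure comes anywhere near the claimed 3 GB.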

    And what about those with older GPUs with very little onboard VRAM? They don't even have a TOTAL of 3 GB of VRAM, but clearly the OS doesn't grab all of that. So why grab 3 GB on a 1080 Ti? My old GTX 750 has only 1 GB of onboard VRAM. How does it survive if W10 is grabbing all that VRAM? Why can I play videos and run multiple monitors and render okay with that? Makes no sense. Keep in mind, all the GPU is doing with all those gigabytes of VRAM is INTERNAL calculations that the OS doesn't care about whatsoever. The end result is just images going back to Studio and to the monitors. The OS doesn't have to worry about the VRAM usage; it's mostly between the application and CUDA, requesting and fulfilling cudaMalloc VRAM allocation requests as needed. The OS just needs to make sure that all the requesting apps are playing together nicely and getting their fair share. Why would it grab larger amounts of VRAM for GPUs with larger installed VRAM? Makes no sense.

    What kinda sorta makes sense, and seems to align a bit with my tests on my 1080 Ti, 1070, and 1060, is that approximately 900 MB of VRAM seems to be reserved on each GPU during rendering, independent of installed VRAM. A fixed amount like that makes more sense. It seems a bit high, but if you consider what I mentioned before about maintaining a video or other buffer, it might be reasonable, maybe if they're buffering for 4K monitors or something (even though I don't have one). But grabbing more based solely on how much installed VRAM the card has?? Makes no sense. And grabbing 3 GB on a 1080 Ti??

    Again, I could be missing something, and I'm happy to be proven wrong if the facts justify it. But so far, from what I've seen, there are no facts to support it, and based on what I described above, it makes absolutely zero sense.

     

    Post edited by ebergerly on
  • eric suscheric susch Posts: 135

    In my experience the Quadro cards just aren't worth the cost. You'd do much better getting two GTX cards. I originally started with Iray on a Quadro K5000 and then bought the original Titan X, and it blew the Quadro away: many, many times faster for a fraction of the cost. The consumer cards are designed to scream as fast as possible, and that's exactly what you want when waiting for a render. They heat the house in the winter too, so that's an added benefit.

  • kyoto kidkyoto kid Posts: 41,857
    ...most reports I have read mention a percentage (roughly 18%), not a flat amount of VRAM.
  • scorpioscorpio Posts: 8,533
    kyoto kid said:
    scorpio said:
    kyoto kid said:
    ebergerly said:
    kyoto kid said:
    OZ-84 said:

    If 12 GB of video memory is enough for you (Titan Xp), then the answer is no: Quadros aren't worth it at all. 

    ...actually, yes they are, as the Quadro drivers let you bypass W10's WDDM, which robs GTX cards of about 18% of their VRAM. 

    I'm still hoping someone, anyone, will finally provide some actual proof that that's the case. Because I've seen a ton of evidence that points to it being a myth. Example attached....

    All but 900MB of my 11GB being used. 

     

    ..this has been a topic of discussion over on the Microsoft forums as well, so it's not just people here who are noticing it.

    If you repeat it enough it might come true; honestly, it's getting boring.

    ...just reporting what I am reading:

    https://answers.microsoft.com/en-us/windows/forum/windows_10-performance-winpc/windows-10-and-vram-on-nvidia-gtx-cards/21e94f46-fbb7-4cf9-997c-9998a1f52e01

    https://answers.microsoft.com/en-us/windows/forum/windows_10-hardware-winpc/windows-10-does-not-let-cuda-applications-to-use/cffb3fcd-5a21-46cf-8123-aa53bb8bafd6

     

    But is it necessary to repeat, in every thread you can, what you simply read on the net, as if it were fact?
