GeForce vs Quadro

Comments

  • scorpio Posts: 8,533
    edited July 2018

    Oops, double post.

    Post edited by scorpio on
  • outrider42 Posts: 3,679

    This idea that Quadro is engineered to last longer is pure poppycock. Nvidia does select better-binned parts for Quadro, but stop and think: that's only in comparison to Nvidia's Founders Edition GTX cards. Unlike GTX, Quadro cards are manufactured only by Nvidia themselves and by PNY. Does anyone here even use a first-party Nvidia card? A Founders Edition? Anyone??? Nearly every 3rd party card in existence uses better cooling than the Founders Editions. MSI and EVGA use high-grade components, and you know what, I personally think their high-end cards use better components than Nvidia Quadros. It's not like there is some kind of magical super-component supplier that only supplies Nvidia and nobody else, LOL. They use the same parts. It kind of reminds me of how some people used to think the police had super parts in their cars not available to the public. They're just cars, people.

    Quadro cards are also typically downclocked compared to their GTX equivalents. This actually means they are SLOWER at performing the same tasks most of the time. The downclocking is done for maximum stability.

    So what does Quadro actually offer? Well, you have the obvious VRAM. Quadro cards also have numerous pro features for things like CAD, and offer TCC mode (which turns the card into a dedicated compute device). However, for Iray, none of those matter except the VRAM.

    So I'd only buy a Quadro if I was REALLY serious about creating massive Pixar-sized scenes and had massive amounts of cash burning holes in my wallet. If you buy a "cheap" Quadro, you might find it still takes a really long time to render, because you won't have enough CUDA cores and the card is downclocked on top of that.

    The Quadro P5000 is basically a GTX 1080. They are the same chip, same CUDA count, same parts, etc. A 3rd party 1080 will outperform the P5000 at pretty much every task except CAD-like tasks where GTX lacks support. However, the P5000 has 16 GB of VRAM, double the 1080's 8. But the P5000 costs a lot more than double a 1080.

    If you have that kind of money to spend, though, I'm not going to tell you what to do with it. Just know that Quadro is not a magic bullet. You still need to optimize your scenes so they render faster.

  • Daz Jack Tomalin Posts: 13,815
    outrider42 said:
    The Quadro P5000 is basically a GTX 1080. They are the same chip, same CUDA count, same parts, etc. A 3rd party 1080 will outperform the P5000 at pretty much every task except CAD-like tasks where GTX lacks support. However, the P5000 has 16 GB of VRAM, double the 1080's 8. But the P5000 costs a lot more than double a 1080.

    If you have that kind of money to spend, though, I'm not going to tell you what to do with it. Just know that Quadro is not a magic bullet. You still need to optimize your scenes so they render faster.

    I believe part of the Quadro's benefits are certified/optimised drivers for certain industry apps too... but I agree, for what we use it for, 1080s/Titans do the job fine (unless you need more RAM, as has been said).

  • kyoto kid said:
    ...most reports I have read mention a percentage (roughly about 18%), not a set flat amount of VRAM

    If I can believe the numbers Windows 10 is showing me (and I don't have a reason not to), the whole "Windows 10 is stealing my VRAM!" thing is an echo chamber where people are repeating rumors or misinterpreting the nature of memory allocation hints. I have a 1080Ti with 11 GB of VRAM. Without any problem I can get roughly 10.1-10.2 GB of VRAM allocated in my scenes. At times I have even reached 10.6, but that is rare, as other programs/windows were probably competing for the VRAM at that moment. 10.1 GB is reliable though. No mythical 18%, which would lower my max to 9 GB, and that is something I have never seen.

  • kyoto kid Posts: 41,859

    ...well, there is still some concern and debate about this on other forums.

  • ColinFrench Posts: 649
    edited July 2018
    OZ-84 said:

    If 12gb video memory is enough for you (TITAN XP) then the answer is: no, Quadros aren't worth it at all.

    kyoto kid said:

    ...actually yes they are, as the drivers allow you to ignore W10's WDDM, which robs GTX cards of about 18% of their VRAM.

    ebergerly said:

    I'm still hoping someone, anyone, will finally provide some actual proof that that's the case. Because I've seen a ton of evidence that points to it being a myth. Example attached....

    All but 900MB of my 11GB being used.

    kyoto kid said:

    ...this has been a topic of discussion over on the Microsoft forums as well, so it's not just people here who are noticing it.

    scorpio said:

    If you repeat it enough it might come true; honestly it's getting boring.

    kyoto kid said:

    ...just reporting what I am reading:

    https://answers.microsoft.com/en-us/windows/forum/windows_10-performance-winpc/windows-10-and-vram-on-nvidia-gtx-cards/21e94f46-fbb7-4cf9-997c-9998a1f52e01

    https://answers.microsoft.com/en-us/windows/forum/windows_10-hardware-winpc/windows-10-does-not-let-cuda-applications-to-use/cffb3fcd-5a21-46cf-8123-aa53bb8bafd6

    scorpio said:

    But is it necessary to repeat what you simply read on the net in every thread you can, as if it is fact?

    Well, newer folks may not have read all the previous threads on this or other forums, so it's valuable as a 'heads-up' FYI sort of thing.

    As for the accuracy of the reports, I hope that somebody somewhere has the tools and expertise to actually measure what's going on and confirm or disprove the info. Maybe it's the sort of thing that happens in some situations but not others. I dunno.

    Post edited by ColinFrench on
  • Rashad Carter Posts: 1,830

    One thing I know for certain is that Kyoto does his homework. I cannot state with certainty that I have observed what Kyoto is talking about, but it does strike me as not so dissimilar to what I am seeing now with my own GTX Titan Black 6gb cards.

    I recently upgraded from Windows XP to Win10. I use an application called Octane Render for Carrara, and at the bottom right of the screen it shows the VRAM usage.

    When I was using Windows XP Enterprise, with the computer freshly started and no files open except Octane (either standalone or the Carrara plug-in), I would see that only a small sliver of my 6 GB of VRAM was unavailable. But now, under Windows 10, even with nothing really going on except a blank scene in Octane or one of its plugins, there is a sizeable slab of VRAM unavailable. No card offers 100% availability, so let's assume that I really have about 5.5 GB to work with in the best case scenario. At 5.5 GB, an 18% drop would cost me roughly a full GB. This does seem to be consistent with what I am seeing. I've been puzzled and perplexed trying to figure out why I am missing this VRAM. I've looked at everything, assuming that something must be robbing me of these resources. But I cannot find it.

    Again, I cannot confirm that this relates directly to Kyoto's observations, but I would not write it off and assume it to be wrong either. You really cannot be sure unless you take the time to test the VRAM availability under different versions of Windows. I ended up conducting said experiment accidentally, and didn't control for variables the way I could have if I'd been aware of this potential issue. Had I known about it, I would have checked.

    Also understand, I use an app that reads the card's behavior and displays it in real time, even down to temperatures. Sighman has done a brilliant job, and his measurements of the cards go beyond even those of Octane standalone, which doesn't display temperatures as you work. GPU-Z is also a tool that I use. But again, I can only state at this time that I have had an experience that seems to support the ideas in some forum threads.

    One of the reasons the possible VRAM reservation in Win10 may not be a well-known factor is that reporting it doesn't do any good for anyone, especially as VRAM availability increases with each new gen of cards, and because most people just aren't looking carefully at these totals. You'd need to test the card under multiple versions of Windows, but all sharing the exact same drivers, to rule out the drivers as the issue. I don't see how this knowledge increases sales; it just spreads fear and worry, so even if true it is not likely to be talked about much.

    It should be stated that the reason I up/downgraded to Win10 from WinXP Enterprise was because my Titan Black cards were extremely unstable with all of the most recent Nvidia drivers above 361.0. I assumed I couldn't be the only Titan user finding all the new drivers useless, so I decided to check the internet and see what I found. I found no posts at all claiming that recent drivers were broken for Titan Black. This led me to believe there must have been something about my configuration, and since Enterprise is not a common version of Windows and not particularly useful for graphics, I knew that was probably the culprit. I used to get graphics card failures which would crash Carrara, Octane, UV Mapper Pro, and any other application utilizing GPU cycles. I had to keep current with drivers to run Octane as needed. Can't live without Octane! It should also be noted that since upgrading I have not had a single issue with graphics card failures, so it is obvious to me that there are differences in how the card's behavior is handled. Some of those changes could involve VRAM, or it could simply be the most recent spate of drivers over the past two years being optimized for Windows 10. It's possible it's the drivers, and it just happens to coincide with the adoption of Windows 10 into the mainstream... again, who knows?

    I know of one person who routinely operates multiple versions of Windows, but I do not know if he uses Nvidia video cards, as most of his rendering is done in Bryce. I will reach out to him and see if he has had any experiences that can either help confirm or disprove the idea.

  • ebergerly Posts: 3,255
    edited July 2018

    Unfortunately, in the computer world, mere observation and speculation are rarely sufficient to prove or disprove anything. Computers are complicated. Very complicated. A mix of very complicated hardware, BIOS, hardware drivers, chipset drivers, a hugely complex operating system, hugely complicated software apps, CUDA, Iray, and so on, all working together. And they're ALL different depending on your hardware. And unless you know the internal workings of all of this, you can do all the homework in the world and never really figure it out. Especially since, at any point in time, one or more of those things could be operating with errors which can alter what you observe. I was just reading about a BIOS version error for my motherboard which makes hardware temps read incorrectly.

    That's why I caution people not to believe things from those who merely observe stuff, but have no idea of the underlying internal workings. Stuff that might seem intuitively obvious could also be totally incorrect. 

    And that's why I strongly believe that until there is clear word from either Microsoft or NVIDIA, or someone equally knowledgeable, on how this stuff is REALLY designed and implemented, we won't know the full story. It's easy to decide on stuff when you have little depth of knowledge, since you don't know what you don't know. And as someone who does software, I can assure you, if you really start thinking about this stuff and basic programming concepts and what could be going on under the hood, you quickly realize that what might seem obvious could have 236 different causes, or might be totally incorrect. Don't assume that if you see a forum post, or see a meter reading, or observe some behavior, that you (or the poster) have any clue what's really going on. I've done some GPU/CUDA programming, and I have no clue how W10 and CUDA or Iray are really working together. This stuff is what people go to school for and spend lifetimes understanding. Don't think you can pick up anything close to a true understanding from a few forum posts. One thing I've learned is not to jump to conclusions, which seems to be the internet's favorite sport. Because I know just enough to realize I don't know nothin'.   

    We've seen Microsoft say that some apps like GPU-Z and others could be mis-reporting data, and that Task Manager is the only correct one. So can you really believe what you see? Maybe, but maybe not. Go with proven facts, not hunches.

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited July 2018

    By the way, another example...

    Recently I decided to jump into BIOS to look at something. So I shut down my computer, turned it back on, and hit the magic "boot into BIOS" key. Hmmm... didn't work. I searched the internet and found a few other options; none of them worked. Strange.

    So I went on a tech forum, and had about a dozen very nice and helpful people, who clearly had a ton of experience among them, provide me with what turned out to be maybe a dozen different ideas on how to get it to work. 

    Simple, right? Boot into BIOS. No big deal. Many of us have been doing it for decades, no problem. 

    None of them worked. And to this day I have no clue how to do it, after searching all over the internet, asking a ton of presumed experts, and trying probably 2 dozen different approaches. And what's interesting is that most of the experts were pretty much certain their fix would work. Because it's intuitively obvious, right?

    Well, no. It's complicated. Something's going on with my particular machine or BIOS or Fast Boot or hibernating or monitor connections or whatever. But the intuitively obvious stuff is all wrong. In the end it will probably come down to my particular MB and BIOS version and maybe a particular BIOS setting or something like that.   

    Post edited by ebergerly on
  • outrider42 Posts: 3,679
    Well, it's really quite easy to test the VRAM situation. It requires dual booting, and you can get Windows 7 anywhere (it doesn't need to be activated; that won't affect anything, this is just for testing). Start Daz on Windows 10 and create a scene that runs up the memory on your GPU. Push it until it is just enough to exceed your VRAM and the render falls back to CPU. Save this scene. Dual boot into Windows 7, run Daz again, load the scene, and see if it renders. It's all very easy, it just requires time from somebody to set it up. I still have a hard drive with 7 on it, so I might be able to test this myself, but since I only have 4 GB of VRAM, that's a very narrow margin to work with. If it is true that W10 takes 18% of VRAM, then larger cards obviously are more affected.

    And then we can stop arguing on whether it is true or not.
  • ebergerly Posts: 3,255
    edited July 2018

    And here's another example, a bit closer to home in the "Windows 10 grabbing VRAM" discussion.

    I start my PC, with my 3 monitors fed from my GTX-1070's HDMI, DisplayPort, and DVI adapters. No monitors are connected to my GTX-1080ti. And I haven't started any applications, just the standard startup services.

    GPU-Z and W10 Task Manager both show that my 1080ti's VRAM usage is 0. But my 1070, which is running the monitors, shows a Memory Used of 315MB in GPU-Z, and 200MB in Task Manager.

    Hmmm.....

    So I guess it's obvious that Windows 10 is hogging 200-300MB of VRAM in my 1070 to run the monitors right? And it's shown as "Memory Usage". And it's not "hogging" any VRAM in the 1080ti. Obvious. 

    Well, not so quick. I thought Windows 10 hogs a huge amount of ALL connected GPUs' VRAM. Why isn't it grabbing like 2-3GB of VRAM in the 1080ti?

    And wait a minute, when we load up our monster scenes and render with the GPU, we keep saying that Windows 10 will only show 8-9 GB utilized, and never gets above that because W10 is hogging the rest. 

    So make up your mind... when W10 grabs VRAM, does it show as utilized or not? If it was grabbing VRAM, wouldn't it still appear in the "Memory Used" or "Dedicated GPU Memory" monitors? Wouldn't those monitors show all 11GB utilized, with 2-3GB of it being hogged by W10? Why does utilization never get up to the full 11GB?

    See what I mean? This stuff is complicated.    

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    outrider42 said:
    Well, it's really quite easy to test the VRAM situation.

    My point exactly. We love to simplify everything because, well, "it's intuitively obvious".

    No, it isn't.  

  • scorpio Posts: 8,533

    Rashad Carter said:

    One thing I know for certain is that Kyoto does his homework. I cannot state with certainty that I have observed what Kyoto is talking about, but it does strike me as not so dissimilar to what I am seeing now with my own GTX Titan Black 6gb cards.

    ... 

    Anyone can do research on the net and find evidence to prove just about anything - that the holocaust didn't happen, for example - but without actual knowledge, or at least experience, that research is worthless. Kyoto Kid admits himself that he doesn't use Windows 10, so he has no experience of using it; from what I have gathered from his posts he doesn't have a very powerful GPU either, so again no real experience, and yet in any thread where it may be relevant, and some where it's not really, he posts his opinion as fact.

  • Rashad Carter Posts: 1,830
    scorpio said:

    Rashad Carter said:

    One thing I know for certain is that Kyoto does his homework. I cannot state with certainty that I have observed what Kyoto is talking about, but it does strike me as not so dissimilar to what I am seeing now with my own GTX Titan Black 6gb cards.

    ... 

    Anyone can do research on the net and find evidence to prove just about anything - that the holocaust didn't happen, for example - but without actual knowledge, or at least experience, that research is worthless. Kyoto Kid admits himself that he doesn't use Windows 10, so he has no experience of using it; from what I have gathered from his posts he doesn't have a very powerful GPU either, so again no real experience, and yet in any thread where it may be relevant, and some where it's not really, he posts his opinion as fact.

    ....so you're saying the Earth isn't flat?.... 

  • barbult Posts: 26,234
    ebergerly said:

    By the way, another example...

    Recently I decided to jump into BIOS to look at something. So I shut down my computer, turned it back on, and hit the magic "boot into BIOS" key. Hmmm... didn't work. I searched the internet and found a few other options; none of them worked. Strange.

    So I went on a tech forum, and had about a dozen very nice and helpful people, who clearly had a ton of experience among them, provide me with what turned out to be maybe a dozen different ideas on how to get it to work. 

    Simple, right? Boot into BIOS. No big deal. Many of us have been doing it for decades, no problem. 

    None of them worked. And to this day I have no clue how to do it, after searching all over the internet, asking a ton of presumed experts, and trying probably 2 dozen different approaches. And what's interesting is that most of the experts were pretty much certain their fix would work. Because it's intuitively obvious, right?

    Well, no. It's complicated. Something's going on with my particular machine or BIOS or Fast Boot or hibernating or monitor connections or whatever. But the intuitively obvious stuff is all wrong. In the end it will probably come down to my particular MB and BIOS version and maybe a particular BIOS setting or something like that.   

    I had a similar problem, which turned out to be that I couldn't boot to BIOS when a DVD was in the optical drive.
  • All I can say is I have both GeForce and Quadro on different computers, and even the lowest GeForce I have flogs the Quadro. But that is just my experience!

  • ebergerly said:

    So make up your mind... when W10 grabs VRAM, does it show as utilized or not? If it was grabbing VRAM, wouldn't it still appear in the "Memory Used" or "Dedicated GPU Memory" monitors? Wouldn't those monitors show all 11GB utilized, with 2-3GB of it being hogged by W10? Why does utilization never get up to the full 11GB?

    See what I mean? This stuff is complicated.    

    // skip if you don't like techno babble

    Not necessarily. Monitors depend on the API (programming interface) exposed by the underlying operating system. I am not sure if VRAM works in a similar fashion to regular memory, but with regular memory it is quite difficult to retrieve the 'real' free memory. Why? Because memory gets fragmented between the different processes consuming it. Just like on the old FAT disks, there are lots of memory blocks at 'random' places, reducing the largest contiguous block considerably (32-bit vs 64-bit programs make a significant difference). The OS is, in the background, continuously trying to 'defrag', but it is a serious issue. There is no simple API method that gives me a reliable answer as to how big the biggest chunk of memory I can get is. If the situation for VRAM is similar (I am not sure here) then yes, the question of how much VRAM is really available is tricky. Now, the usage pressure on VRAM is considerably less than on regular memory, so the contiguous blocks are much larger. There are various ways around this to get the info, but they rely on the murky depths of low-level APIs which are not that well documented (or not at all). For the moment I assume the developers at Microsoft know and understand their own (low-level) APIs best (they made them), and that the numbers Windows 10 gives me in the performance tab of the Task Manager are as correct as they can get them. In those numbers I see no hint of the fabled 18%.
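    For the curious, here is a minimal sketch of what such a probe could look like (an illustration only, assuming the CUDA toolkit's runtime API; this is not anything Windows or Iray actually runs). Since no API reports the biggest obtainable chunk directly, you can binary-search cudaMalloc itself:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Probe the largest single cudaMalloc that succeeds right now.
    // No API reports this directly, so binary-search the allocation size.
    int main() {
        size_t freeB = 0, totalB = 0;
        cudaMemGetInfo(&freeB, &totalB);          // the driver's view of free/total VRAM
        size_t lo = 0, hi = freeB;
        while (hi - lo > ((size_t)1 << 20)) {     // stop at 1 MB resolution
            size_t mid = lo + (hi - lo) / 2;
            void* p = nullptr;
            if (cudaMalloc(&p, mid) == cudaSuccess) {
                cudaFree(p);                      // it fit; try bigger
                lo = mid;
            } else {
                cudaGetLastError();               // clear the error; try smaller
                hi = mid;
            }
        }
        printf("Driver reports %zu MB free; largest single block ~%zu MB\n",
               freeB >> 20, lo >> 20);
        return 0;
    }

    If VRAM fragments anything like regular memory, the second number should come out noticeably below the first.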

  • ebergerly Posts: 3,255
    edited July 2018

    schouwenberg,

    Here's my vastly oversimplified techno-babble view on how this works, and why it's so complicated:

    There are a few major players: Studio, Iray, CUDA, and Windows 10. Studio does the highest-level user interaction to build the scene, and when it comes time to render, Studio packs everything up (i.e., scene description parameters) and hands it off to Iray, which figures out the lower-level details of how the light rays bounce around the scene. Once Iray has that figured out, it interacts with CUDA to do the actual GPU calculations, and CUDA handles the lower-level hardware interactions with the GPU cores and memory. And since the GPU is a system resource, CUDA also interacts with Windows 10 VidMm and VidSch to manage the GPU memory, which involves the Windows Display Driver Model (WDDM). Oh, and there are the other very low-level GPU/hardware drivers that do a bunch of other lower-level stuff in the chain between CPU, system memory, PCI bus, monitors, and GPU.

    Now, I can write C/C++ code today that works with CUDA to use the GPU to calculate stuff, and it's fairly simple. And Iray would have absolutely nothing to do with it.

    For example, let’s say I want to modify a 1920x1080 image, which has around 2 million pixels. Ideally, I could assign each of those pixels to a thread in the GPU and it could calculate each of the 2 million pixels simultaneously in 2 million parallel threads.

    In a normal C/C++ program, I'd write a "for" loop that loops through each pixel in succession and does the calculation, then moves on to the next one. With a GPU I write the same sequential "for" loop, but I use some CUDA code that magically takes that sequential code, allocates GPU memory, and converts it into parallel calculations with a gazillion threads operating simultaneously on the GPU. It also handles copying data between system RAM and GPU VRAM and a bunch of other stuff. So I'd write my "for" loop, ask CUDA to allocate memory on the GPU (using "cudaMalloc"), then transfer the image from system RAM to GPU VRAM (using "cudaMemcpy"), and CUDA handles the simultaneous calcs. And presumably, somewhere in there are CUDA interactions with Windows 10 VidMm and VidSch, in the form of requests back and forth, since Windows 10 is the boss of system resources.
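    As a rough sketch of that idea (illustrative only: brighten() here is a made-up stand-in for the real per-pixel work, not anything Iray or Studio contains), the sequential loop and its CUDA counterpart look like this:

    #include <cuda_runtime.h>

    // Made-up per-pixel operation: one GPU thread does the work that one
    // iteration of the sequential "for" loop would do.
    __global__ void brighten(unsigned char* px, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) px[i] = min(px[i] + 20, 255);
    }

    int main() {
        const int n = 1920 * 1080;                         // ~2 million pixels (one channel)
        unsigned char* host = new unsigned char[n]();
        // CPU version: for (int i = 0; i < n; ++i) { ...same body as the kernel... }

        unsigned char* dev = nullptr;
        cudaMalloc(&dev, n);                               // allocate VRAM
        cudaMemcpy(dev, host, n, cudaMemcpyHostToDevice);  // system RAM -> VRAM
        brighten<<<(n + 255) / 256, 256>>>(dev, n);        // ~2M threads, 256 per block
        cudaMemcpy(host, dev, n, cudaMemcpyDeviceToHost);  // VRAM -> system RAM
        cudaFree(dev);
        delete[] host;
        return 0;
    }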

    Now, presumably you could do a "cudaMalloc" request for the entire 11GB on a GTX-1080ti and see what happens, whether you get pushback from Windows 10 because it wants to hog everything. And I may try that this weekend. Though I fully expect there's a lot more to it that I'm not considering, and I'll probably have to study CUDA a lot more. If you've ever seen the CUDA documentation you'll understand why. Maybe cudaMalloc is just a "request" for memory, but not an indication of what is actually assigned/dedicated to your process by Windows 10. Hence the confusion...

    But as far as figuring out how much memory is used, it seems to me that VRAM is quite different from system RAM, in that there are tons of background system processes running in system RAM at any time, but I think a GPU is pretty much dead until Studio or a game needs it. And usually not many apps are using the GPU. So you'd think it would be relatively easy to figure out how much is being used. Just write some code to request a certain amount of memory and see what shows up in Task Manager as being used. And there's also a "cudaMemGetInfo" function that "gets free and total device memory", so maybe this issue can be resolved by writing a fairly simple C++ application that tries to grab GPU memory and checks the result. And by checking the memory usage that cudaMemGetInfo returns versus what Task Manager says, you might get some useful data. Though maybe Iray is also using "cudaMemGetInfo" when it gives the possibly incorrect "available" memory numbers in the log, so maybe it won't match Task Manager.
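    A minimal sketch of such a test program might look like the following (an assumption-laden illustration: it only shows what cudaMalloc will grant, and whether that maps cleanly onto what VidMm has reserved is exactly the open question):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Ask CUDA what it thinks is free, then grab VRAM in 256 MB chunks
    // until refused, and compare the two totals (and Task Manager's).
    int main() {
        size_t freeB = 0, totalB = 0;
        cudaMemGetInfo(&freeB, &totalB);
        printf("cudaMemGetInfo: %zu MB free of %zu MB total\n",
               freeB >> 20, totalB >> 20);

        const size_t chunk = (size_t)256 << 20;   // 256 MB per request
        size_t grabbed = 0;
        void* p = nullptr;
        while (cudaMalloc(&p, chunk) == cudaSuccess)
            grabbed += chunk;                     // leaked on purpose; process exit frees it
        cudaGetLastError();                       // clear the final allocation failure

        printf("Actually allocatable: ~%zu MB of %zu MB total\n",
               grabbed >> 20, totalB >> 20);
        return 0;
    }

    Watching Task Manager's "Dedicated GPU memory" while this runs, and comparing it against the printed totals, would be the actual experiment.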

    Anyway, this approach seems certainly far simpler and more direct than worrying about Iray and getting Studio and all that other stuff involved by doing render test comparisons between W10 and W7. Far too many unnecessary variables with something like that.

    Post edited by ebergerly on
  • algovincian Posts: 2,664
    ebergerly said:
    I think a GPU is pretty much dead until Studio or a game needs it. And usually not many apps are using the GPU.

    Are you sure about this? When I open up the nVidia Control Panel -> Manage 3D Settings and look under the "Program Settings" tab (on Windows 7 Pro), I see a list of a ton of programs, including things like Chrome, Acrobat, Skype, Photoshop, iTunes, etc.

    I can watch the amount of reported VRAM used in GPU-Z go up/down as I open/close tabs in Chrome. I can also see it go up/down as I open/close Explorer windows. This certainly could explain some of the differences people are reporting. It may not be enough to look at your video card, your OS, the drivers, etc. - it may be necessary to look at exactly what processes are running at any given time, and furthermore, how they are configured.
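    A minimal sketch of a watcher for that (assuming NVML, the monitoring library Nvidia ships with its drivers, with headers provided in the CUDA toolkit; GPU-Z reports similar numbers through its own means):

    #include <cstdio>
    #include <chrono>
    #include <thread>
    #include <nvml.h>

    // Poll VRAM usage once a second, like watching GPU-Z while opening
    // and closing Chrome tabs or Explorer windows.
    int main() {
        if (nvmlInit() != NVML_SUCCESS) return 1;
        nvmlDevice_t dev;
        nvmlDeviceGetHandleByIndex(0, &dev);       // first GPU in the system
        for (int i = 0; i < 30; ++i) {             // watch for 30 seconds
            nvmlMemory_t mem;
            nvmlDeviceGetMemoryInfo(dev, &mem);
            printf("VRAM used: %llu MB of %llu MB\n",
                   mem.used >> 20, mem.total >> 20);
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
        nvmlShutdown();
        return 0;
    }

    Run it, open and close a few Chrome tabs, and the used number should wander around just as described.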

    Just some thoughts.

    - Greg

  • ebergerly Posts: 3,255
    Yes, of course there are a lot of apps that CAN use the GPU. But the point was that, compared to the hundreds of processes/services running in system RAM, the GPU is relatively quiet and easy to reason about, given the fragmentation that schouwenberg mentioned.
  • 3141592654 Posts: 975

    SUMMARY OF THIS THREAD:

    A) for the average Daz Studio user, the GeForce series cards are cheaper, faster, and quite sufficient to do the job.

    B) for the extreme Daz Studio user (huge scenes, large amounts of animation) and/or someone with other graphics-intensive programs, the extra cost of the Quadro series cards can be worth it for the additional memory, stability, and longevity.

    For the record, I fall into category B. When I purchased a card, I went with the M6000 (basically a last-generation card) instead of the newer P6000 because it was 1/3 the cost and still carries 24 GB of VRAM and 3072 CUDA cores... still much faster than my current system.

  • kyoto kid Posts: 41,859
    edited August 2018

    ...Rashad, thank you.

    For the record, I did not jump on the free version of W10 for a number of reasons, and have no desire to use it for as long as I can hold out, at least until Pro Edition users get certain control over their system's maintenance back (like we had with W7/8.1) and the useless "fluff" features are all relegated to the app store instead of integrated into the OS at install. Had I gone for the deal at the time, I'd be stuck with the Home Edition (as I had W7 Home Premium), which offers the user little if any control except to turn the system on and off.

    That said, when I first heard of the reserved VRAM issue, I pored over numerous tech sites and blogs to find out what the actual story was. Indeed there was debate as to whether it was occurring or not, or to what degree. However, I found enough proof as to the most prevalent case reported/tested, and it came out in favour of the "reserving" camp when it comes to a conventional setup where the GPU is used both for rendering and driving the display(s). I agree that it could also depend on the system setup, age, drivers, and other hardware, and thus am still looking for some sort of pattern to this.

    Quadro/Tesla drivers do offer users the option to effectively turn off (ignore) Windows WDDM, which GTX drivers don't (at least for now).

    Now, as I mentioned, I also learned of possible ways to mitigate any VRAM loss, such as not connecting the primary card to the displays (effectively dedicating it just to rendering) and either using the MB's onboard graphics chipset or a lesser GPU card to drive them. Of course this will affect viewport performance, particularly when working in Iray View Mode, as you won't have as large a number of GPU cores available (or any, if it's the board's chipset), so refreshing will be more sluggish.

    I also recently came into possession of a Maxwell Titan X as well as a couple of other cards (2 and 4 GB), so the 1 GB card is no longer an issue. The 2 GB card is going to run the displays on the rendering workstation, with the Titan dedicated to rendering alone. The second "assembly" system has the 4 GB card, which is sufficient for small test renders and will also be used for Carrara work, as well as being networked with the render workstation for Carrara rendering (for a CPU-driven biased render engine, Carrara can produce some very close to realistic results if you know how to use its materials room).

    Rashad, I also hear you about Octane and am patiently waiting for the release of Octane 4, as it will offer a subscription track along with the normal perpetual licence (the former of which finally makes it affordable for me).

    @ 3141592654

    Excellent summation. 

    I'm somewhat in "camp B" as well, as I look to render detailed scenes in large format for art prints. There is "talk" that the next generation of GTX cards may get a 2-way link system similar to what the Quadro and Tesla series have, which would allow for memory pooling. If this is indeed true(?), it would be a major game changer (imagine having two 12 GB cards that cost a total of, say, $1,600 instead of $5,000, which together offer that same 24 GB of VRAM and double the core count; that would be "shredding"). Again, we will not know until later this month when Nvidia makes its official announcement, so for now, for big jobs Quadro and Tesla have the edge.

    Post edited by kyoto kid on
  • 3141592654 Posts: 975
    kyoto kid said:

    There is "talk" that the next generation of GTX cards may get a 2-way link system similar to what the Quadro and Tesla series have, which would allow for memory pooling. If this is indeed true(?), it would be a major game changer (imagine having two 12 GB cards that cost a total of, say, $1,600 instead of $5,000, which together offer that same 24 GB of VRAM and double the core count; that would be "shredding"). Again, we will not know until later this month when Nvidia makes its official announcement, so for now, for big jobs Quadro and Tesla have the edge.

    As I understand it, using SLI or equivalent linking of two graphics cards will double the core count, but you will still only have the same VRAM that a single card carries. In your example, 12 GB with double the cores.

  • kyoto kid Posts: 41,859
    edited August 2018

    ...yes, that is with the current SLI; however, one tech site mentioned a new 2-way link that uses two connector widgets between a pair of cards, similar to NVLink in the Volta Quadro/Tesla lines. Now personally I'm pretty skeptical of the idea, as we are still at the speculation stage until Nvidia finally issues an official release statement.

    Crikey, I'm still not convinced we will see a 16GB GTX card (maybe a Titan upgrade, but be prepared to dig deep in the pocket for that). I feel the 1080's successor may be bumped to 12GB but no higher. The Ti version may include Tensor cores instead of a memory boost. Again, this is speculation as well, but I just don't see Nvidia threatening sales of their pro line by offering nearly the same or better performance in their consumer cards.

    Post edited by kyoto kid on