Suggestion on a hardware upgrade?


Comments

  • What I want from Santa this year.

    https://www.nvidia.com/en-us/design-visualization/visual-computing-appliance/

    30,720 Cuda cores and 256 GB of shared video memory.

    It is rumored that you can share video memory on two consumer-level 2080 RTX video cards if you buy the Quadro Pro NVLink (about $700.00 vs. about $80.00 for a consumer-level NVLink). This is just a rumor. YMMV. I don't think anyone is willing to spend $700.00 on a rumor.

    Do not do this! The links appear to be completely identical except for the one for the RTX 5000, which is smaller and incompatible with all the other links.

    Huh? The link is to Nvidia's Visual Computing Appliance. This thing is a rack server with 8 (COUNT THEM, EIGHT) current-generation 24 GB Quadro or Tesla cards for making movies and very high-end professional rendering, deep learning, and other intensive uses. It costs close to $100 grand. I was being facetious.

     

    CUDA cores are not analogous to CPU cores. AFAIK they don't even really exist and are more of a conceptual device for developers; from what I understand they can't perform control flow instructions and don't have separate instruction sets for each of the "cores". The video card just sort of rips through parallel math operations and can emulate the existence of separate cores.
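
    A rough way to picture that, sketched here in plain Python/NumPy (my own illustration, not anything from Iray or the CUDA toolkit): a CUDA "core" acts more like one lane of a wide vector unit that runs a single instruction stream over many data elements, while a CPU core runs its own instruction stream with its own branching.

        import numpy as np

        # CPU-core style: each element can follow its own control flow.
        def cpu_style(values):
            out = []
            for v in values:                   # an independent decision per element
                out.append(v * 2 if v > 0 else -v)
            return out

        # GPU-lane style: one operation is issued across all elements at once;
        # "branching" is handled by computing both sides and selecting with a mask.
        def gpu_style(values):
            v = np.asarray(values, dtype=float)
            return np.where(v > 0, v * 2, -v)  # same instruction stream for every lane

        print(cpu_style([3.0, -1.0, 2.0]))     # [6.0, 1.0, 4.0]
        print(gpu_style([3.0, -1.0, 2.0]))     # [6. 1. 4.]

    Either way the answer is the same; the point is only that the "cores" on the GPU side are not making independent decisions.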

     

     

    It is rumored that you can share video memory on two consumer-level 2080 RTX video cards if you buy the Quadro Pro NVLink (about $700.00 vs. about $80.00 for a consumer-level NVLink). This is just a rumor. YMMV. I don't think anyone is willing to spend $700.00 on a rumor.

    Nah, there's no way. NVIDIA probably could have enabled memory pooling on the 2080 and 2080 Ti (the 2080 Ti is not significantly different from the RTX Titan, which does support memory pooling), but they chose not to. This forces companies that want it to pay a massive amount extra for a Quadro or Titan, even though it is likely to be of almost no benefit to the vast majority of consumers. AFAIK the RTX Titan uses the exact same bridge as the 2080/2080 Ti, but I could be wrong about that. That being said, people find hacks for this sort of thing all the time; I've heard there is some way to fake a bridge for two RTX 2060s (physically it exists across the PCIe slots) to enable SLI. I don't know much about it though.

    The NVLink connectors for the current 2080 RTX & Ti cards, the TITAN RTX, and the Quadro RTX 6000 & 8000 cards are all the same, but the internal electronics of the Quadro, Titan, and 2080 NVLink bridges themselves are very different (assumed, given the large difference in price between the 2080 & Quadro bridges).

    I found this article about NVLink compatibility:  https://www.pugetsystems.com/labs/articles/NVIDIA-NVLink-Bridge-Compatibility-Chart-1330/

  • To the OP, your best bet is to get the newest, most powerful Nvidia GPU you can afford. That will give you the most bang for the buck and will speed up your rendering. Be sure to keep your scene under 8 GB.

  • outrider42 Posts: 3,679

    Just remember this: you could have a million CUDA cores, but they are useless if you exceed VRAM and the scene gets dumped to CPU mode. So VRAM is a major factor if you really want to go with larger scenes. A 1080 is a pretty decent card; it was the fastest card available in 2016. There are only a few GPUs in the world that are actually faster than a 1080, and there are even fewer GPUs that go beyond 8 GB, which is the real bottleneck you have.

    Get your scenes under 8 GB, and all is well. You can certainly get another GPU; you can run as many GPUs as you want. As long as the scene fits into each individual GPU's VRAM they will combine their powers like Voltron and give you super powers. But any GPU without enough VRAM for the scene will not be used at all, leaving your Voltron missing a leg or arm, which is decidedly less cool.
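
    If you want to check how close you are to that ceiling before committing to a long render, you can ask the driver directly. A minimal sketch in Python, assuming the standard nvidia-smi tool that ships with the NVIDIA driver is on your PATH (the script itself is just my illustration):

        import subprocess

        # Query every NVIDIA GPU for its total and currently used memory, in MiB.
        result = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total,memory.used",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True)

        for line in result.stdout.strip().splitlines():
            name, total_mib, used_mib = [field.strip() for field in line.split(",")]
            free_gib = (int(total_mib) - int(used_mib)) / 1024
            print(f"{name}: roughly {free_gib:.1f} GiB currently free for a render")

    Keep in mind that Windows and other running applications reserve some VRAM, so the usable amount is always a bit less than the number on the box.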

  • kenshaw011267 Posts: 3,805

    What I want from Santa this year.

    https://www.nvidia.com/en-us/design-visualization/visual-computing-appliance/

    30,720 Cuda cores and 256 GB of shared video memory.

    It is rumored that you can share video memory on two consumer-level 2080 RTX video cards if you buy the Quadro Pro NVLink (about $700.00 vs. about $80.00 for a consumer-level NVLink). This is just a rumor. YMMV. I don't think anyone is willing to spend $700.00 on a rumor.

    Do not do this! The links appear to be completely identical except for the one for the RTX 5000, which is smaller and incompatible with all the other links.

    Huh? The link is to Nvidia's Visual Computing Appliance. This thing is a rack server with 8 (COUNT THEM, EIGHT) current-generation 24 GB Quadro or Tesla cards for making movies and very high-end professional rendering, deep learning, and other intensive uses. It costs close to $100 grand. I was being facetious.

    HUH? I was telling people not to buy the pro NVLink because actual testing has shown that there doesn't appear to be any difference, at least internally.

    https://www.pugetsystems.com/labs/articles/NVIDIA-NVLink-Bridge-Compatibility-Chart-1330/

  • What I want from Santa this year.

    https://www.nvidia.com/en-us/design-visualization/visual-computing-appliance/

    30,720 Cuda cores and 256 GB of shared video memory.

    It is rumored that you can share video memory on two consumer-level 2080 RTX video cards if you buy the Quadro Pro NVLink (about $700.00 vs. about $80.00 for a consumer-level NVLink). This is just a rumor. YMMV. I don't think anyone is willing to spend $700.00 on a rumor.

    Do not do this! The links appear to be completely identical except for the one for the RTX 5000, which is smaller and incompatible with all the other links.

    Huh? The link is to Nvidia's Visual Computing Appliance. This thing is a rack server with 8 (COUNT THEM, EIGHT) current-generation 24 GB Quadro or Tesla cards for making movies and very high-end professional rendering, deep learning, and other intensive uses. It costs close to $100 grand. I was being facetious.

    HUH? I was telling people not to buy the pro NVLink because actual testing has shown that there doesn't appear to be any difference, at least internally.

    https://www.pugetsystems.com/labs/articles/NVIDIA-NVLink-Bridge-Compatibility-Chart-1330/

    Oh, okay, I completely misunderstood.  DOH

  • GLE Posts: 52
    p0rt said:

    SLI helps Iray. SLI makes rendering on my PC literally 100% faster than having a single card. OptiX is based on SLI code but for servers with multiple racks linked together for render farms.

     

    If you upgrade anything, you want a CPU with DDR4 that can keep up with the GPU bus and won't bottleneck if something isn't stored in VRAM from the previous frame, like the Doom 2016 game engine, which keeps a virtual texture cache in physical RAM so the graphics card can pull in 1000 textures per second if needed; that is the main reason it can run at 200 FPS.

     

    p0rt said:
    I know, Nvidia uses Windows virtual memory as an extension to VRAM, which id Software did with Rage, and it was bottlenecked by SATA speeds and resulted in major texture pop.

    While all of this applies 100% to gaming, Iray does not leverage these capabilities.

    Nvidia recommends disabling SLI for Iray, and in my own testing, enabling SLI on my two 980 Tis slows them down noticeably when rendering.

    The closest proxy for what nVidia used to call "Turbo caching" (reserving system RAM to the GPU) would be memory pooling (treating all the VRAM as a single entity). This only works for the latest 2000 series cards, and only for the models equipped with NVlink (and with the bridge actually installed). This is also not yet implemented in Daz Studio, nor in the Windows driver (it was suggested on this forum that it works with a Linux box running Iray Server).

  • Nexy Posts: 21

    Get your scenes under 8 GB, and all is well. You can certainly get another GPU; you can run as many GPUs as you want. As long as the scene fits into each individual GPU's VRAM they will combine their powers like Voltron and give you super powers. But any GPU without enough VRAM for the scene will not be used at all, leaving your Voltron missing a leg or arm, which is decidedly less cool.

    So are you saying that if I buy an RTX 2080 with 8 GB of VRAM, that doesn't mean I now have 16 GB if I still have my 8 GB 1080 installed? So DAZ will only recognise a max limit of the biggest GPU's VRAM rather than combining them?

  • JamesJAB Posts: 1,766

    Get your scenes under 8 GB, and all is well. You can certainly get another GPU; you can run as many GPUs as you want. As long as the scene fits into each individual GPU's VRAM they will combine their powers like Voltron and give you super powers. But any GPU without enough VRAM for the scene will not be used at all, leaving your Voltron missing a leg or arm, which is decidedly less cool.

    So are you saying that if I buy an RTX 2080 with 8 GB of VRAM, that doesn't mean I now have 16 GB if I still have my 8 GB 1080 installed? So DAZ will only recognise a max limit of the biggest GPU's VRAM rather than combining them?

    It's more like...
    Iray will attempt to load the scene into each installed video card's onboard RAM.  If a card has enough RAM to fit the entire scene, it will participate in the render.  If a card does not have enough RAM it will be left out of the render job.
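
    In other words, the check is made per card; nothing is pooled. A toy sketch of that rule (just a model of the behaviour described above, not Iray's actual code; the cards are the ones from the question being quoted):

        # Each card is judged on its own: it either holds the whole scene or sits out.
        def cards_that_will_render(scene_gb, cards):
            return [name for name, vram_gb in cards if vram_gb >= scene_gb]

        cards = [("GTX 1080", 8), ("RTX 2080", 8)]

        print(cards_that_will_render(6.5, cards))  # ['GTX 1080', 'RTX 2080'] - both render
        print(cards_that_will_render(9.0, cards))  # [] - neither fits, so the render falls back to CPU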

     

  • Sevrin Posts: 6,313
    JamesJAB said:

    Get your scenes under 8 GB, and all is well. You can certainly get another GPU; you can run as many GPUs as you want. As long as the scene fits into each individual GPU's VRAM they will combine their powers like Voltron and give you super powers. But any GPU without enough VRAM for the scene will not be used at all, leaving your Voltron missing a leg or arm, which is decidedly less cool.

    So are you saying that if I buy an RTX 2080 with 8 GB of VRAM, that doesn't mean I now have 16 GB if I still have my 8 GB 1080 installed? So DAZ will only recognise a max limit of the biggest GPU's VRAM rather than combining them?

    It's more like...
    Iray will attempt to load the scene into each installed video card's onboard RAM.  If a card has enough RAM to fit the entire scene, it will participate in the render.  If a card does not have enough RAM it will be left out of the render job.

     

    Oh, so if I have an 11 GB card and an 8 GB card, and need, say, 10 GB, then the 11 GB card will still render on GPU rather than the whole thing going to CPU, while scenes with < 8 GB would render on both? I'm asking because I have a 1070 with 8 GB and was wondering what would happen if I added an 11 GB card.

  • Sevrin said:

    Oh, so if I have an 11 GB card and an 8 GB card, and need, say, 10 GB, then the 11 GB card will still render on GPU rather than the whole thing going to CPU, while scenes with < 8 GB would render on both? I'm asking because I have a 1070 with 8 GB and was wondering what would happen if I added an 11 GB card.

    That is exactly right. Keep in mind that it is the textures that kill you, not vertices. You don't need a 4K texture for things far away from the camera, but most people don't bother, because even an 8 GB scene is pretty complex. If you are exceeding that, I'd say you need to make better use of instances, or smaller textures, and I think there are add-ons for both of those problems. And then, there's always the 2080 Ti with 11 GB.

    But honestly, I am holding off on upgrading my video cards because AMD is releasing their new architecture at the high end next year. Even if you don't want an AMD card, they'll force NVidia to try harder...

  • RayDAnt Posts: 1,159
    edited August 2019

    Most of the salient points have already been made. But just to mention/go into further details on a couple things:

    1: Upgrade the CPU to a Ryzen 9 3900X, which is unparalleled in terms of graphics rendering. Up against the latest AMD it almost matches for performance in gaming, but completely outstrips it in rendering in programs like DAZ.

    We just recently had someone benchmark their $330 Ryzen 7 3700X over in the updated benchmarking thread, and it scored a total of 0.633 iterations per second in overall performance, which is only slightly faster than what my own 1050 non-Ti embedded laptop GPU managed to achieve (0.621 iterations per second). Even at launch a 1050 was no more than $110 (1/3 the price of a 3700X).

    All of this is to say that even modern high-end CPU performance is negligible when it comes to relative performance in CUDA GPU-accelerated applications.
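
    To put that in rough price/performance terms, here is the same comparison worked through in a few lines of Python, using only the figures quoted above (launch prices, so treat the result as a ballpark):

        # Iterations per second from the benchmarking thread, and approximate launch prices.
        devices = {
            "Ryzen 7 3700X (CPU)": (0.633, 330),
            "GTX 1050 laptop GPU": (0.621, 110),
        }

        for name, (iters_per_sec, price_usd) in devices.items():
            per_1000_usd = iters_per_sec / price_usd * 1000
            print(f"{name}: {per_1000_usd:.2f} iterations/s per $1,000")

        # Ryzen 7 3700X (CPU): 1.92 iterations/s per $1,000
        # GTX 1050 laptop GPU: 5.65 iterations/s per $1,000

    So even a low-end GPU delivers roughly three times the Iray throughput per dollar of a current high-end desktop CPU.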

     

    2: Buy another GPU - another GTX 1080 and run them together in SLI. Obviously having two of them would work, and running in SLI is going to improve it all that much more!

    SLI is redundant and somewhat in conflict with the way Iray already works internally whenever you are using it on a system with multiple GPUs or a single GPU + CPU active for rendering. SLI works by having the CPU divide up complex graphics workloads into separate chunks for multiple GPUs to work on separately and then recombining the results of those operations (framebuffer outputs) over a physically separate data line (SLI connectors/bridges) for instantaneous direct output to the framebuffer on whichever GPU is home to the system's primary display, which makes perfect sense for motion-based gaming.

    In contrast, Iray - completely independently from SLI hardware or software support on a system - already works under multiple active rendering device conditions by breaking up complex graphics workloads into separate chunks on the CPU, sending those chunks to each device, and then recombining those chunks for iterative direct output to disk for safekeeping, which makes perfect sense for stills-based graphics rendering. In a best-case scenario, mixing Iray with SLI results in no change to underlying rendering performance. In a worst-case scenario, having SLI enabled leads to worse rendering performance in Iray because the system now has the added step of needing to constantly copy data from the framebuffer of whichever GPU drives the main display back to disk. That is why SLI is not your friend when it comes to tasks like Iray rendering.
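
    For anyone who wants to see the shape of that "chunks per device, then recombine" idea, here is a loose sketch. It is emphatically not Iray's scheduler, just an illustration of the principle: each device independently works through its own batch of samples for the frame, and the partial results are merged into one image with no SLI bridge involved.

        import random

        # Pretend "rendering" means estimating each pixel by averaging noisy samples.
        def render_samples(width, height, samples, seed):
            rng = random.Random(seed)
            return [[sum(rng.random() for _ in range(samples)) / samples
                     for _ in range(width)] for _ in range(height)]

        # Merge the per-device accumulations into the final frame by averaging them.
        def combine(partials):
            height, width = len(partials[0]), len(partials[0][0])
            return [[sum(p[y][x] for p in partials) / len(partials) for x in range(width)]
                    for y in range(height)]

        # Two "devices" each work on the whole frame independently, then the results merge.
        device_results = [render_samples(4, 3, samples=64, seed=device) for device in range(2)]
        final_image = combine(device_results)
        print(final_image[0][:2])  # a couple of merged pixel values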

     

    3: Buy another GPU - an RTX 2080 WITHOUT SLI. SLI doesn't work if the cards are different, so they'd be operating as individuals and I'm not sure how that would compare to fully SLI'd 1080s, or even if DAZ can use two graphics cards at once that aren't SLI.

    Using virtually any two Nvidia graphics cards in a system for Iray rendering is already functionally identical to having SLI enabled by default in a game really well optimized for it. So this is where you can have a ball.
