Maximum GPU RAM for GPU rendering in Iray


Comments

  • RayDAnt Posts: 1,154
    edited July 2019

    No. There is absolutely nowhere in Nvidia's documentation that says VRAM pooling was ever available, even on Quadros, prior to NVLink.

    Nvidia's first foray into VRAM pooling on its professional cards (called GPUDirect Peer-To-Peer) debuted in 2011, and - as this handy illustration shows - functioned exclusively via PCI-E.

    In late 2013, Nvidia debuted what continues to be its premier implementation of VRAM pooling (known officially as "Unified Memory") as part of CUDA 6, with NVLink not even being unveiled until 2014 as a key part of Nvidia's newly announced Tesla Accelerated Computing Platform (technology originally developed for the Summit and Sierra supercomputers). Some pertinent details on Unified Memory from Nvidia's own official documentation (emphasis added):

     Unified Memory was introduced in 2014 with CUDA 6 and the Kepler architecture. This relatively new programming model allowed GPU applications to use a single pointer in both CPU functions and GPU kernels, which greatly simplified memory management. CUDA 8 and the Pascal architecture significantly improves Unified Memory functionality by adding 49-bit virtual addressing and on-demand page migration. The large 49-bit virtual addresses are sufficient to enable GPUs  to access the entire system memory plus the memory of all GPUs in the system. The Page Migration engine allows GPU threads to fault on non-resident memory accesses so the system can migrate pages from anywhere in the system to the GPUs memory on-demand for efficient processing.

    In other words, Unified Memory transparently enables out-of-core computations for any code that is using Unified Memory for allocations (e.g. `cudaMallocManaged()`). It “just works” without any modifications to the application. CUDA 8 also adds new ways to optimize data locality by providing hints to the runtime so it is still possible to take full control over data migrations.

    These days it’s hard to find a high-performance workstation with just one GPU. Two-, four- and eight-GPU systems are becoming common in workstations as well as large supercomputers. The NVIDIA DGX-1 is one example of a high-performance integrated system for deep learning with 8 Tesla P100 GPUs. If you thought it was difficult to manually manage data between one CPU and one GPU, now you have 8 GPU memory spaces to juggle between. Unified Memory is crucial for such systems and it enables more seamless code development on multi-GPU nodes. Whenever a particular GPU touches data managed by Unified Memory, this data may migrate to local memory of the processor or the driver can establish a direct access over the available interconnect (PCIe or NVLINK).

    Many applications can benefit from GPU memory oversubscription and the page migration capabilities of Pascal architecture. Data analytics and graph workloads usually run search queries on terabytes of sparse data. Predicting access patterns is extremely difficult in this scenario, especially if the queries are dynamic. Not only computational tasks but other domains like high-quality visualization can greatly benefit from Unified Memory. Imagine a ray tracing engine that shoots a ray which can bounce off in any direction depending on material surface. If the scene does not fit in GPU memory the ray may easily hit a surface that is not available and has to be fetched from CPU memory. In this case computing what pages should be migrated to GPU memory at what time is almost impossible without true GPU page fault capabilities.
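    To make the "it just works" point concrete, here is a minimal CUDA sketch of what the quoted oversubscription behaviour looks like from the programmer's side. This is nothing Iray-specific, and the allocation size is an arbitrary example chosen only to exceed a typical card's VRAM:

    ```cpp
    // Minimal sketch: a managed allocation larger than the GPU's physical VRAM.
    // On Pascal-or-newer cards the page-migration engine faults pages in on demand,
    // so the kernel still runs even though the data set does not fit on the device.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void scale(float *data, size_t n) {
        size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    int main() {
        // Example size only (~16 GB of floats) - deliberately bigger than most cards' VRAM.
        const size_t n = 4ULL * 1024 * 1024 * 1024;
        float *data = nullptr;

        // One pointer usable from both host and device code; no explicit cudaMemcpy anywhere.
        if (cudaMallocManaged(&data, n * sizeof(float)) != cudaSuccess) {
            printf("managed allocation failed\n");
            return 1;
        }
        for (size_t i = 0; i < n; ++i) data[i] = 1.0f;        // first touched on the CPU

        scale<<<(unsigned)((n + 255) / 256), 256>>>(data, n);  // pages migrate to the GPU on fault
        cudaDeviceSynchronize();

        printf("data[0] = %f\n", data[0]);                     // pages migrate back on CPU access
        cudaFree(data);
        return 0;
    }
    ```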

    If this sort of stuff is your thing, I highly recommend reading the rest of this article, as well as some/all of the following:

     

    TCC is a mode that Quadros can be put in. It's not a kind of server.

    TCC (Tesla Compute Cluster) mode is the driver-level implementation of Nvidia's Tesla Accelerated Computing Platform. And its use is required for access to underlying features of the platform like Unified Memory (aka VRAM pooling).

     

    And just to be clear, I've been working with Quadros and Teslas in one way or another for a decade at least.

    If there's one thing I've learned in life, it's that there's always more to learn...

     

    ETA: I guess we now know where Jensen got his "It just works!" line (the above quoted article was published all the way back in late 2016.)

  • kenshaw011267 Posts: 3,805

    P2P Memory Access, the feature you're trying to talk about, didn't work; that's why it was replaced. No one ever sold it as VRAM pooling but as direct access to VRAM from one card to another. That is a very different thing. Unified memory literally makes the 2 cards function as a single unit for CUDA applications coded for it.

    Teslas don't need TCC. They have no video outs. Putting a Quadro in TCC makes it effectively a Tesla.

    I'm still waiting for you to prove a single thing besides that you've misunderstood Nvidia webpages.

  • SlimerJSpud Posts: 1,456
    edited July 2019

    Why not attack the problem from the texture angle? The Resource Saver product is on sale now. Just apply that to all your background figures, and you'll save a lot of texture memory. Also, with Iray, things don't have to be visible to take up VRAM. Delete anything that doesn't need to be there to keep those textures out of VRAM.

  • RayDAnt Posts: 1,154
    edited July 2019

    Unified memory literally makes the 2 cards function as a single unit for CUDA applications coded for it.

    That's all I've ever been trying to say. And Iray just happens to be one of the handful of CUDA applications coded for it.

     

    Teslas don't need TCC.

    Teslas default to TCC mode and require special licensing to be switched away from it. To quote directly from the Tesla Compute Cluster documentation page (already linked to in my previous post):

    Setting TCC Mode for Tesla Products

    NVIDIA GPUs can exist in one of two modes: TCC or WDDM. TCC mode disables Windows graphics and is used in headless configurations, whereas WDDM mode is required for Windows graphics. NVIDIA GPUs also come in three classes:

    • GeForce — typically defaults to WDDM mode; used in gaming graphics.
    • Quadro — typically defaults to WDDM mode, but often used as TCC compute devices as well.
    • Tesla — typically defaults to TCC mode. Current drivers require a GRID license to enable WDDM on Tesla devices.
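    For anyone who wants to check this on their own machine: switching modes is done from an elevated prompt with nvidia-smi's driver-model switch (as I recall, something like `nvidia-smi -g <gpu_id> -dm 1` for TCC and `-dm 0` for WDDM; check `nvidia-smi -h` for the exact syntax on your driver version), and a minimal CUDA sketch like the following (plain runtime API, nothing Iray-specific) will report which model each card is currently running:

    ```cpp
    // Minimal sketch: report whether each visible GPU is running the TCC or the WDDM
    // driver model. cudaDeviceProp::tccDriver is 1 for TCC, 0 otherwise.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s - %s driver model\n",
                   i, prop.name, prop.tccDriver ? "TCC" : "WDDM");
        }
        return 0;
    }
    ```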
  • drzap Posts: 795
    edited July 2019
    ebergerly said:

    Again, I'm trying to understand why there's so much continuing discussion of nitty gritty details of P2P and TCC and NVLink when none of that matters unless your software is configured to pool memory. Which, AFAIK, Iray isn't, nor is just about any other software. Has anyone actually seen results of pooled memory tests on any software?

    https://www.chaosgroup.com/blog/v-ray-gpu-benchmarks-on-top-of-the-line-nvidia-gpus

    This is old news. Chaosgroup has now confirmed NVLink results with consumer-grade 2080 Tis as well as Titans. Vray is the only production renderer I have seen proof of making use of NVLink for VRAM pooling.

    Notice from this test that the use of NVLink does not necessarily mean memory pooling is enabled: https://www.servethehome.com/dual-nvidia-titan-rtx-review-with-nvlink/4/ The software they used for testing did not support memory pooling (though Octane 4 has a beta build that does), yet NVLink was used to group two GPU cards for rendering.

    Bottom line: if you want to pool the memory on both your cards in a renderer, Vray is the way to go... for the time being. Redshift, Octane, and perhaps others are looking to implement it.

  • kyoto kid Posts: 41,847
    edited July 2019

     

    kyoto kid said:

    Thanks for all the replies!!! I am doing rather complex scenes with up to four G8 figures in them on SubD 4 and 12 Genesis 1 on SubD 0 for background characters. Before I start tweaking my drivers, could anybody please let me know if this also works with DAZ Iray when I put my GPUs in TCC? Also, can I then use my motherboard's ports to hook up my displays?

    Thanks a lot,

    Me

    ...this scene has eight Genesis figures (one is the train driver) along with a lot of other details including volumetric mist, reflectivity, and a number of emissive sources at a resolution of 1,600 x 1,200. When opened in the Daz programme, it takes up about 9.8 GB of system memory. Given what Ebergerly mentions, that would translate to, at most, 4.9 GB of VRAM. This is not the most ambitious scene, as I have others that were left unfinished because they were approaching the system RAM limits of my workstation (fortunately they were saved on backup media), since I was still rendering Iray in CPU mode at the time. I now have a Titan X dedicated to rendering (a smaller VRAM GPU is running the displays).

     

    That scene takes up far more than 5 GB of VRAM. The amount of system RAM consumed by the PC while manipulating a scene is not directly comparable to how much VRAM will be consumed by the GPU during a render.

     

    ...what is its size in system memory when open in Daz?

  • kyoto kid Posts: 41,847
    edited July 2019
    ebergerly said:

    Again, I'm trying to understand why there's so much continuing discussion of nitty gritty details of P2P and TCC and NVLink when none of that matters unless your software is configured to pool memory. Which, AFAIK, Iray isn't, nor is just about any other software. Has anyone actually seen results of pooled memory tests on any software? Again, it seems like just another of those new technologies that people want to get excited about but just isn't ready for primetime. Kinda like was predicted last year (to a loud chorus of boo's...) about the still-not-yet-ready-for-primetime RTX fiasco. 

    The only real data I've seen on NVLINK and memory pooling is a few papers from Puget Systems from last year, but they pretty much conclude the same thing as I recall... that it's all up to the software to implement it, but no solid details on any that do.

    And oh, by the way, for this awesome new memory pooling you need a whole bunch more system RAM to support the gobs of linked GPU VRAM. So now we're talking 64-128GB of system RAM. And you need one or two expensive NVLINK connectors along with the crazy expensive RTX cards that support it. And maybe even a 3rd GPU just to run your monitors if you use TCC.

    Personally, I can't even imagine what kind of Iray scene would actually require 24GB+ of VRAM. I have to work hard and kludge together 3 scenes to get even 1/2 of that and overload my 1080ti. And is all of this cost and hassle so much more important than just doing some compositing? Personally, I just don't get it. But that's just me I suppose. 

    .....well, it still depends on the quality level and render resolution. If you're rendering at something like 32,000 x 24,000 for a large high-quality gallery print, that will definitely eat up more memory.

  • kenshaw011267 Posts: 3,805
    kyoto kid said:

     

    kyoto kid said:

    Thanks for all the replies!!! I am doing rather complex scenes with up to four G8 figures in them on SubD 4 and 12 Genesis 1 on SubD 0 for background characters. Before I start tweaking my drivers, could anybody please let me know if this also works with DAZ Iray when I put my GPUs in TCC? Also, can I then use my motherboard's ports to hook up my displays?

    Thanks a lot,

    Me

    ...this scene has eight Genesis figures (one is the train driver) along with a lot of other details including volumetric mist, reflectivity, and a number of emissive sources at a resolution of 1,600 x 1,200. When opened in the Daz programme, it takes up about 9.8 GB of system memory. Given what Ebergerly mentions, that would translate to, at most, 4.9 GB of VRAM. This is not the most ambitious scene, as I have others that were left unfinished because they were approaching the system RAM limits of my workstation (fortunately they were saved on backup media), since I was still rendering Iray in CPU mode at the time. I now have a Titan X dedicated to rendering (a smaller VRAM GPU is running the displays).

     

    That scene takes up far more than 5 GB of VRAM. The amount of system RAM consumed by the PC while manipulating a scene is not directly comparable to how much VRAM will be consumed by the GPU during a render.

     

    ...what is its size in system memory when open in Daz?

    Load it on a GPU and check. It's really the only way to be sure (with apologies to Ripley)
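    If you'd rather watch the numbers from code than from a monitoring tool, a minimal CUDA sketch along these lines (plain runtime API; run it before and during the render and compare - the figures are only approximate under WDDM) reports free versus total VRAM on every card:

    ```cpp
    // Minimal sketch: print free vs. total VRAM for every visible GPU.
    // Running it before and during a render gives a rough idea of how much the scene consumes.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaSetDevice(i);
            size_t freeBytes = 0, totalBytes = 0;
            cudaMemGetInfo(&freeBytes, &totalBytes);
            printf("GPU %d: %.2f GB free of %.2f GB total\n",
                   i, freeBytes / 1073741824.0, totalBytes / 1073741824.0);
        }
        return 0;
    }
    ```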

  • kenshaw011267 Posts: 3,805
    RayDAnt said:

    Unified memory literally makes the 2 cards function as a single unit for CUDA applications coded for it.

    That's all I've ever been trying to say. And Iray just happens to be one of the handful of CUDA applications coded for it.

    You keep claiming this but you have yet to demonstrate it. You have yourself proven that the metrics you previously provided, which you thought made your point, don't. If you can make it work, do so and show the numbers. If not, stop before you cost someone a lot of money.

  • RayDAntRayDAnt Posts: 1,154
    edited July 2019

    You keep claiming this but you have yet to demonstrate it.

    Demonstrate what, exactly? As already demonstrated by me in this post, current versions of Iray actively advertise the fact that they are optimized for TCC driver mode on compatible cards. There has also been at least one instance of someone successfully completing rendering tests (if buggily - see the actual post here) over in the Sickleyield benchmarks thread using TCC driver mode on Titan Xp GPUs (an NVLink-lacking card btw) back when it was first made available on the Titan line (pre-RTX). And Iray's own changelogs even explicitly state texture sharing (VRAM pooling) support via P2P mode on capable cards:

    Iray 2017.1 beta, build 296300.616

    Added and Changed Features

    • Iray Photoreal
      • Texture sharing on NVLINK capable systems: A new render context option "iray_nvlink_peer_group_size" has been added. Enabled CUDA devices are divided into peer groups of the specified size. The group size needs to be a factor of the total number of enabled CUDA devices. Textures are subsequently shared between CUDA devices in a peer group and each texture is only uploaded to one of the devices in the peer group.

     

    The only open question at this point regarding VRAM pooling capabilities in Iray (and so Daz Studio) on Quadro/Titan/Tesla cards is how well it works. Not whether it works, since the underlying functionality on both hardware and software levels is already there. And the way that question gets answered is by people with capable hardware (like the OP of this thread) playing around with it.
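    For anyone who wants to poke at the underlying capability themselves, here is a minimal sketch (plain CUDA runtime, not Iray's own code, and not proof of what Iray does internally) that checks which GPU pairs in a system support direct peer access over PCI-E/NVLink and enables it where available:

    ```cpp
    // Minimal sketch: check which GPU pairs support direct peer-to-peer memory access
    // (over PCI-E or NVLink) and enable it where available. This is the plain CUDA
    // capability that texture sharing between devices builds on.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int a = 0; a < count; ++a) {
            for (int b = 0; b < count; ++b) {
                if (a == b) continue;
                int canAccess = 0;
                cudaDeviceCanAccessPeer(&canAccess, a, b);
                printf("GPU %d -> GPU %d: peer access %s\n",
                       a, b, canAccess ? "supported" : "not supported");
                if (canAccess) {
                    cudaSetDevice(a);                  // enable from device a's context
                    cudaDeviceEnablePeerAccess(b, 0);  // flags argument is reserved, must be 0
                }
            }
        }
        return 0;
    }
    ```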

  • kenshaw011267 Posts: 3,805

    I'll try to explain this slowly to you. PROVE IT. Do not say someone wrote it but show that it works. I've been in IT long enough to know that documentation can be wrong and changelogs of features no one has tested are meaningless. Until you do please stop trying to get people to spend a couple of grand to find out for you.

  • Hi again, maybe a silly remark, but reading through this thread I feel that I do need an NVLink-supporting card or card set. Also, it would appear that even when I do have such hardware, Iray (in Daz Studio or in another application) does not support it... Is that correct?

  • nicstt Posts: 11,715
    edited July 2019

    Hi again, maybe a silly remark, but reading through this thread I feel that I do need an NVLink-supporting card or card set. Also, it would appear that even when I do have such hardware, Iray (in Daz Studio or in another application) does not support it... Is that correct?

    All reading this thread definitely proves is that there is no consensus on what is possible with regard to memory pooling of Nvidia cards; this relates to Nvidia cards that are stated by Nvidia to support it, and to software that also supports it.

    Daz Studio doesn't. It is not known if it will support memory pooling.

    Vray has demonstrated it working, I believe, but WAIT and SAVE your cash until it is clearly demonstrated.

    Edit:

    I'm tempted to say we all want memory pooling, but perhaps that is only some of us; either way, until rendering software starts shouting about it (and they will), I for one will be waiting.

    Personally, I'll get a Titan first.

  • drzap Posts: 795
    edited July 2019
    RayDAnt said:

    You keep claiming this but you have yet to demonstrate it.

    Demonstrate what, exactly? As already demonstrated by me in this post, current versions of Iray actively advertise the fact that they are optimized for TCC driver mode on compatible cards. There has also been at least one instance of someone successfully completing rendering tests (if buggily - see the actual post here) over in the Sickleyield benchmarks thread using TCC driver mode on Titan Xp GPUs (an NVLink-lacking card btw) back when it was first made available on the Titan line (pre-RTX). And Iray's own changelogs even explicitly state texture sharing (VRAM pooling) support via P2P mode on capable cards:

    Iray 2017.1 beta, build 296300.616

    Added and Changed Features

    • Iray Photoreal
      • Texture sharing on NVLINK capable systems: A new render context option "iray_nvlink_peer_group_size" has been added. Enabled CUDA devices are divided into peer groups of the specified size. The group size needs to be a factor of the total number of enabled CUDA devices. Textures are subsequently shared between CUDA devices in a peer group and each texture is only uploaded to one of the devices in the peer group.

     

    The only open question at this point regarding VRAM pooling capabilities in Iray (and so Daz Studio) on Quadro/Titan/Tesla cards is how well it works. Not whether it works, since the underlying functionality on both hardware and software levels is already there.

    I'm sorry, but I beg to differ on that opinion. TCC mode is quite a different thing from memory pooling. A demonstration that your card or software can recognize a TCC-enabled card is not a demonstration that the software can share memory between the cards. They are two entirely different functions. The two posts you cited only seemed to confirm that iRay recognizes that the GPU is in TCC mode. That has nothing to do with this conversation (well, only indirectly). To confirm that memory pooling is working in software, one needs two identical cards and an NVLink bridge installed between them. As far as I can see, there is no one who has come forth to confirm that this works, so any such claim is speculation.

    I noticed you have cited more than once an Iray beta changelog. Referring to changelogs for beta software is not a good way to confirm features for production-ready software. What's more telling is, given this unique and highly marketable feature, Nvidia doesn't highlight or reference it in any of their advertising materials, despite the fact that they advertise TCC driver mode (according to your claim above). When I am searching for software features, I am far more apt to notice what they omit from their shiny advertisements than what they include in some obscure beta-build changelog. Think about it: this cutting edge, Nvidia only GPU technology, supposedly developed in their "flagship" rendering software, yet they forgot to highlight it in their marketing materials on numerous occasions? Does that seem likely to you?

    When a person is about to let go of a significant amount of cash, the proof needs to be in the pudding.  Chaosgroup is the only company that has shown us the pudding (please prove me wrong, I would love to see this technology given more attention) and until someone with the hardware and enough interest in iRay (hah!) can demonstrate anything to the contrary, it may not be a smart thing to pull out your cash for a dream.

     

    P.S... Here are examples of what I'm talking about:

    • https://blog.irayrender.com/ - the Iray developer blog. Good space to toot your own horn about memory pooling, right? Nothing there.
    • https://www.migenius.com/articles/new-in-realityserver-5-3 - What about Migenius? They sell and support iRay. Silence.
    • https://www.nvidia.com/en-us/design-visualization/iray/ - Surely Nvidia itself would have something to say about such a breakthrough technology? Crickets. Not in their meager iRay documentation, not in their user groups... nowhere.
    • https://www.chaosgroup.com/vray-gpu - Vray? They advertise it right there in black and white.

    My guess is that if this feature works at all in iRay, it's not at a capacity where it's advantageous for them to mention it. Their omission speaks volumes.

  • nicstt Posts: 11,715
    drzap said:
    RayDAnt said:

    You keep claiming this but you have yet to demonstrate it.

    Demonstrate what, exactly? As already demonstrated by me in this post, current versions of Iray actively advertise the fact that they are optimized for TCC driver mode on compatible cards. There has also been at least one instance of someone successfully completing rendering tests (if buggily - see the actual post here) over in the Sickleyield benchmarks thread using TCC driver mode on Titan Xp GPUs (an NVLink-lacking card btw) back when it was first made available on the Titan line (pre-RTX). And Iray's own changelogs even explicitly state texture sharing (VRAM pooling) support via P2P mode on capable cards:

    Iray 2017.1 beta, build 296300.616

    Added and Changed Features

    • Iray Photoreal
      • Texture sharing on NVLINK capable systems: A new render context option "iray_nvlink_peer_group_size" has been added. Enabled CUDA devices are divided into peer groups of the specified size. The group size needs to be a factor of the total number of enabled CUDA devices. Textures are subsequently shared between CUDA devices in a peer group and each texture is only uploaded to one of the devices in the peer group.

     

    The only open question at this point regarding VRAM pooling capabilities in Iray (and so Daz Studio) on Quadro/Titan/Tesla cards is how well it works. Not whether it works, since the underlying functionality on both hardware and software levels is already there.

    I'm sorry, but I beg to differ on that opinion. TCC mode is quite a different thing from memory pooling. A demonstration that your card or software can recognize a TCC-enabled card is not a demonstration that the software can share memory between the cards. They are two entirely different functions. The two posts you cited only seemed to confirm that iRay recognizes that the GPU is in TCC mode. That has nothing to do with this conversation (well, only indirectly). To confirm that memory pooling is working in software, one needs two identical cards and an NVLink bridge installed between them. As far as I can see, there is no one who has come forth to confirm that this works, so any such claim is speculation.

    I noticed you have cited more than once an Iray beta changelog. Referring to changelogs for beta software is not a good way to confirm features for production-ready software. What's more telling is, given this unique and highly marketable feature, Nvidia doesn't highlight or reference it in any of their advertising materials, despite the fact that they advertise TCC driver mode (according to your claim above). When I am searching for software features, I am far more apt to notice what they omit from their shiny advertisements than what they include in some obscure beta-build changelog. Think about it: this cutting edge, Nvidia only GPU technology, supposedly developed in their "flagship" rendering software, yet they forgot to highlight it in their marketing materials on numerous occasions? Does that seem likely to you?

    When a person is about to let go of a significant amount of cash, the proof needs to be in the pudding.  Chaosgroup is the only company that has shown us the pudding (please prove me wrong, I would love to see this technology given more attention) and until someone with the hardware and enough interest in iRay (hah!) can demonstrate anything to the contrary, it may not be a smart thing to pull out your cash for a dream.

    Well said.

     

  • gibrril_fa945a6cee Posts: 550
    edited July 2019

    Thanks! Since it does work in Vray, is there any way to render a scene made in Daz Studio in Vray? Has anyone done that, and if so, how does an Iray-optimized scene translate to Vray rendering? Can all shaders just be used as-is, or is there a conversion process needed?

    Thanks a lot,

    Me

  • RayDAnt Posts: 1,154
    edited July 2019

    Until you do please stop trying to get people to spend a couple of grand to find out for you.

    drzap said:

    it may not be a smart thing to pull out your cash for a dream.

    When a person is about to let go of a significant amount of cash. 

    Um... the OP's original central question was, and I quote:

    I currently have a PC with two Nvidia Titan GPUs of 12 GB GPU RAM each. [...] is there any way at all,... no matter the cost, the hardware setup, whatever... to combine the GPU RAM of multiple GPUs together?

    To which my initial answer was:

    The short answer to your question is yes. The secret is an alternate driver functioning mode on Windows called "TCC" which is available on all Nvidia Quadro, Tesla, and Titan cards (starting with the 6GB Kepler generation). Meaning that not only is it possible, the cards you already have right now should be perfectly capable of doing it.

    Which is the direct OPPOSITE of telling someone they need to buy new, expensive hardware to get something to work. I understand that the software/hardware support situation around Titan cards is (and always has been) extremely murky - especially to people (such as yourselves, I presume) who do not personally own them. Please understand that what I am saying in this thread about how they (and the wider Nvidia hardware/software stack they fit into) work is based on my own hands-on experience as a Titan card owner/operator.

     

    drzap said:

    TCC mode is quite a different thing from memory pooling. 

    Correct. TCC is the device driver model used by all Nvidia compute-oriented devices on Windows to implement their enterprise-level computing platform, and Unified Memory (what you call memory pooling) is just a feature of that computing platform.

      

    drzap said:

    To confirm that memory pooling is working in software, one needs two identical cards and an NVLink bridge installed between them.

    Any combination of two Quadro/Tesla/Titan cards from the Kepler generation or newer will do. And the only situation in which having an NVLink bridge is required is if either of the cards is a Titan RTX (since Titan RTXs apparently drop support for P2P functionality over PCI-E - or at least did as of this January when Puget Systems last looked into it).

     

     

    Think about it: this cutting edge, Nvidia only GPU technology, supposedly developed in their "flagship" rendering software, yet they forgot to highlight it in their marketing materials on numerous occasions? Does that seem likely to you?

    Yes, because Iray's debut of support for Pascal-era cutting edge features (like Unified Memory) coincided with Nvidia's sales pitch for their high-powered GPU-based rendering appliance, the VCA (originally short for "Iray Visual Computing Appliance". Here's a press release from the initial announcement.) And openly marketing the fact that key advanced features of your latest and greatest pre-built hardware solution (intro price: $50,000USD) can be had on smaller economies of scale for much less money via some diy computer building wouldn't be good business.

     

    Chaosgroup is the only company that has shown us the pudding (please prove me wrong, 

    Check out this thread I only just stumbled upon here on the Daz forum from a little over a year ago. It's got some interesting stuff in it, like this post from a DS/Iray user who noticed that having an instance of MATLAB (another program optimized for TCC P2P functionality) running in the background on his GTX 960M-equipped laptop was tricking his graphics driver into allowing him to perform effectively out-of-core rendering and avoiding CPU fallback in Iray (via peer-to-peer functionality, as evidenced by the mention of "peering" in his 2nd log file excerpt). Interesting stuff.

    ETA: This stuff is a lot less cut-and-dried (and more interesting! imo) than a lot of people seem to think.

  • drzap Posts: 795

    Thanks! Since it does work in Vray, is there any way to render a scene made in Daz Studio in Vray? Has anyone done that, and if so, how does an Iray-optimized scene translate to Vray rendering? Can all shaders just be used as-is, or is there a conversion process needed?

    Thanks a lot,

    Me

    Yes, maybe, and it depends :)     It all depends on whether you are a casual user or a 3D artist who is willing to use other software.  There is a fellow in my Daz2Maya Skype group (not to be confused with the DaztoMaya plugin) who happens to be working on that problem.  He has previously developed Genesis 8 for Maya and Genesis 3 for Maya and he is currently in development of companion plugins.  One of them is a direct iRay-to-Arnold/Redshift/Vray conversion script that we are helping him beta test and release.  Of course, you would have to be familiar with Maya, deal with little inconveniences and let's not forget the very expensive license for V-ray.  If your fancy is just creating stills as a hobby, I doubt you would want to go that far to render a few more polygons.  It would be much better to follow nicstt's idea and buy a GPU with more VRAM.  On the other hand, if you already have the hardware, it might be worthwhile to download the free trials.   You might catch the fever as we did.  Now that I'm using real renderers, I wouldn't even give iRay a side-eye.  And then again, you could always wait and see what happens.  Maybe Nvidia will give iRay a little life.

  • RayDAnt Posts: 1,154
    edited July 2019

    Hi again, maybe a silly remark, but reading through this thread I feel that I do need an NVLink-supporting card or card set. Also, it would appear that even when I do have such hardware, Iray (in Daz Studio or in another application) does not support it... Is that correct?

    Presuming your two 12GB Titan cards are either Titan Xs or Titan Xps, the only thing you should need to do is enable TCC driver mode on both of them. Titans/Quadros of those generations do all of their enterprise-level resource sharing via standard PCI-E lanes.

    ETA: And, of course, you'll need an integrated GPU so you can see what you're doing and can switch TCC off again if you decide you want to use your GPUs as normal graphics cards again.

  • drzap Posts: 795

    "Yes, because Iray's debut of support for Pascal-era cutting edge features (like Unified Memory) coincided with Nvidia's sales pitch for their high-powered GPU-based rendering appliance, the VCA (originally short for "Iray Visual Computing Appliance". Here's a press release from the initial announcement.) And openly marketing the fact that key advanced features of your latest and greatest pre-built hardware solution (intro price: $50,000USD) can be had on smaller economies of scale for much less money via some diy computer building wouldn't be good business.

    So you're saying that Nvidia didn't fail to capitalize on their advanced technology back in 2014 (I remember Huang beating the drums back then - it was a big deal) with Unified Memory in the Iray VCA for $50K, which was a different animal altogether and is now extinct, but somehow they forgot to inform us about an even bigger deal - unified memory in plain ole' everyday iRay for only $299/year?? Either you're grasping at hopeful straws, or the Nvidia marketing department dropped the ball big time! At any rate, if it is just a matter of having two graphics cards together in the same box, that can be easily tested right here and now, but the examples you have given are far from proving anything, much less a reason to say "it should work".

  • gibrril_fa945a6cee Posts: 550
    edited July 2019
    drzap said:

    Thanks! Since it does work in Vray, is there any way to render a scene made in Daz Studio in Vray? Has anyone done that, and if so, how does an Iray-optimized scene translate to Vray rendering? Can all shaders just be used as-is, or is there a conversion process needed?

    Thanks a lot,

    Me

    Yes, maybe, and it depends :)     It all depends on whether you are a casual user or a 3D artist who is willing to use other software.  There is a fellow in my Daz2Maya Skype group (not to be confused with the DaztoMaya plugin) who happens to be working on that problem.  He has previously developed Genesis 8 for Maya and Genesis 3 for Maya and he is currently in development of companion plugins.  One of them is a direct iRay-to-Arnold/Redshift/Vray conversion script that we are helping him beta test and release.  Of course, you would have to be familiar with Maya, deal with little inconveniences and let's not forget the very expensive license for V-ray.  If your fancy is just creating stills as a hobby, I doubt you would want to go that far to render a few more polygons.  It would be much better to follow nicstt's idea and buy a GPU with more VRAM.  On the other hand, if you already have the hardware, it might be worthwhile to download the free trials.   You might catch the fever as we did.  Now that I'm using real renderers, I wouldn't even give iRay a side-eye.  And then again, you could always wait and see what happens.  Maybe Nvidia will give iRay a little life.

    Wow!!! Thanks! And where can we get those scripts? Also, I have seen V-Ray is available for Maya, 3DS Max, Cinema 4D, even Unreal Engine... Does it make a difference which application I use to render out with V-Ray as far as shader and texture conversion goes?

  • drzap Posts: 795
    drzap said:

    Thanks! Since it does work in Vray, is there any way to render a scene made in Daz Studio in Vray? Has anyone done that, and if so, how does an Iray-optimized scene translate to Vray rendering? Can all shaders just be used as-is, or is there a conversion process needed?

    Thanks a lot,

    Me

    Yes, maybe, and it depends :)     It all depends on whether you are a casual user or a 3D artist who is willing to use other software.  There is a fellow in my Daz2Maya Skype group (not to be confused with the DaztoMaya plugin) who happens to be working on that problem.  He has previously developed Genesis 8 for Maya and Genesis 3 for Maya and he is currently in development of companion plugins.  One of them is a direct iRay-to-Arnold/Redshift/Vray conversion script that we are helping him beta test and release.  Of course, you would have to be familiar with Maya, deal with little inconveniences and let's not forget the very expensive license for V-ray.  If your fancy is just creating stills as a hobby, I doubt you would want to go that far to render a few more polygons.  It would be much better to follow nicstt's idea and buy a GPU with more VRAM.  On the other hand, if you already have the hardware, it might be worthwhile to download the free trials.   You might catch the fever as we did.  Now that I'm using real renderers, I wouldn't even give iRay a side-eye.  And then again, you could always wait and see what happens.  Maybe Nvidia will give iRay a little life.

    Wow!!! Thanks! And where can we get those scripts? Also, I have seen V-Ray is available for Maya, 3DS Max, Cinema 4D, even Unreal Engine... Does it make a difference which application I use to render out with V-Ray as far as shader and texture conversion goes?

    Well, as I said, they will be Maya plugins, so you must use Maya and it isn't finished yet, so you will have to wait awhile.

  • Thanks! Is there a website where I can track the progress and keep myself updated?

  • drzap Posts: 795
    edited July 2019

    Thanks! Is there a website where I can track the progress and keep myself updated?

     

    I'm not sure if he updates his status on his website, but here is the link so you can contact him directly: https://www.laylo3d.com/

    BTW, you understand that you will need a pair of GPUs with NVLink connectors? Titan Xp cards don't have them. I don't know anything about the PCIe sharing that RayDAnt has been going on about, but I am sure it's not supported on any production renderer (outside of iRay) or it would definitely have been talked about everywhere. So to take advantage of this Vray feature, you would need to buy more cards in addition to the software.

  • Thanks! Other question: I saw there is a pipeline in existence using iClone to get Daz figures to other software. Could I be using that to get my figures to another application that supports V-Ray? And on a more general note... If I do export out of Daz, what about the SubD, does that transfer also, or do I lose the SubD levels? My main characters are always on SubD 4 or 3. I would like to preserve that detail when I export.

  • drzap Posts: 795
    edited July 2019

    Thanks! Other question: I saw there is a pipeline in existence using iClone to get Daz figures to other software. Could I be using that to get my figures to another application that supports V-Ray? And on a more general note... If I do export out of Daz, what about the SubD, does that transfer also, or do I lose the SubD levels? My main characters are always on SubD 4 or 3. I would like to preserve that detail when I export.

    You are about to look into a very long rabbit hole, my friend. As for iClone, I don't use it, but there is extensive support for Daz figures and there are others on the forum who can tell you more about it. No subD, though. There are also no iRay-to-Vray presets for Daz figures on any other platform as far as I know. That is why our group has begun solving all these problems ourselves. The best I've found to preserve the details is to export the subD figure from Daz as an .obj and bake normal or displacement maps to apply back to the base figure. It's more efficient that way anyway if you're animating. To be honest, for my purposes, I don't find Daz figure subD details to be that significant and I usually make my own. In fact, I only use Daz Studio for rigging my own custom figures so I don't delve into that problem much, but I'm sure there are others who have. Have fun in Wonderland.

     

    In case you missed the addendum to my previous comment, if you wish to take advantage of the VRAM pooling in Vray (or Octane), you will need to buy a pair of GPUs with NVLink connectors.

  • gibrril_fa945a6cee Posts: 550
    edited July 2019

    Thanks! But do NVLink connectors work on Titans? If I export as OBJ, can I preserve the SubD detail?

  • RayDAnt Posts: 1,154
    edited July 2019

    Thanks! But do NVLink connectors work on Titans?

    Unfortunately, the only Titans with NVLink support currently are the Titan V and Titan RTX. And NVLink is a physical connector standard - meaning that older-generation cards can't be updated to include/be compatible with it.

    ETA: Well, color me shocked. Just learned that the Titan V - despite having remnants of the physical connectors themselves - actually has no NVLink support (or SLI support for that matter.) So for Titans with NVLink you're currently looking at just the Titan RTX.

  • I've got two Titan X (Pascal) cards, so I guess no NVLink for me for now...

  • drzap Posts: 795
    edited July 2019

    ... If I export as OBJ, can I preserve the SubD detail?

    If you're just doing stills, exporting OBJ, DAE, or Alembic will preserve subD. However, it will be just an unrigged mesh, so it's only useful for stills, baking maps, or if you want to rig it yourself in other software.
