CUDA / Reality Useless - OpenCL AMD HIP solutions?

Hi all,

I have some sophisticated scenes to render (imported houses from Sweethome3d, multiple figures, difficult textures, difficult lighting) that I want to render via GPU. Reality is continuously crashing, and DAZ refuses to utilize my older nVidia GTX 750Ti that I revived. I tried everything: disabling OptiX acceleration, CPU+GPU, GPU only, latest drivers, but I was never able to render with GPU acceleration.

AMD has created HIP, which is able to convert >95% of CUDA code to OpenCL automatically. Are there any efforts to utilize OpenCL?

https://streamhpc.com/blog/2016-09-07/get-ready-for-conversions-of-large-scale-cuda-software-to-amd-hardware/

Furthermore, there's AMD's ProRender, which is also much faster & more efficient than CUDA for similar renders. There are plugins for 3ds Max, Maya, Blender, SolidWorks, PTC, Unreal, and USD, but not a single one for Daz. Does anyone know if there have been any agreements between AMD & Daz?

Or did Daz make exclusive contracts with nVidia that bind them to nVidia hardware exclusively?

Would be great if someone has more info here.

Greetz

Comments

  • LenioTG Posts: 2,118
    edited June 2019

    In order to use the GPU, your scene needs to fit in your VRAM, and I don't think the 750Ti has much.

    Iray only uses Nvidia GPUs.

    They say they're going to implement AMD GPU support in Octane, which has a plugin that works in Daz Studio too. Octane should also let you use the GPU even when you go over the VRAM limit (so I've been told; I don't use it).

    But the Octane plugin costs a lot; it might be better to buy a new GPU with that money.

    In order to know how much VRAM your scene needs, here's a useful tool: https://www.daz3d.com/iray-memory-assistant

    I have 6GB of VRAM and I'm fine, but my scenes usually have 3 characters plus an environment, and I'm good at optimizing because I've done 600+ renders with 3GB of VRAM.
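    Just to illustrate why scenes blow past a small card so quickly, here's a rough back-of-the-envelope sketch (my own illustrative numbers, not how the Memory Assistant actually works; Iray also compresses some textures and needs room for geometry):

        // Rough, hypothetical arithmetic for uncompressed texture memory.
        #include <cstdio>

        int main()
        {
            const double MiB = 1024.0 * 1024.0;

            // One 4096x4096 RGBA texture at 8 bits per channel:
            double one_map = 4096.0 * 4096.0 * 4.0 / MiB;   // ~64 MiB

            // A figure with, say, 12 such maps (diffuse, normal, spec, ...):
            double one_figure = 12.0 * one_map;             // ~768 MiB

            printf("one 4K map    : %6.0f MiB\n", one_map);
            printf("one figure    : %6.0f MiB\n", one_figure);
            printf("three figures : %6.0f MiB\n", 3.0 * one_figure);  // ~2.3 GiB
            return 0;
        }

    That's before the environment, geometry, or the frame buffer itself, which is why a 2GB card struggles.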

    A GTX 1660 should be an acceptable deal, just like a used 1060/1070.

  • kenshaw011267 Posts: 3,805

    The 750Ti only has 2GB of VRAM. It is going to be quite difficult to fit any modern characters into a scene that small.

    Would it be nice if there was a ProRender plugin for Daz? Maybe, and maybe not. ProRender uses different shaders from Iray, much like 3Delight, and it is pretty clear that many vendors much prefer providing only one set. It would be quite tedious in any reasonably sized scene to go in and provide shaders for everything. I tried that in Reality and it was excruciating.

  • My 750Ti has 4GB memory.

  • fastbike1 Posts: 4,075
    edited June 2019

    @v1217108 "I got some sphisticated scenes to render (imported houses from Sweethome3d, multiple figures, difficult textures, difficult light)"

    You have minimal hardware yet want to render, by your own admission, complex, difficult scenes. What should one expect?

    CUDA isn't useless, as you can see from the Gallery; just some hardware is.

  • v1217108 Posts: 0

    Thanks all. My 750Ti has 4GB too, but the Memory Assistant was a good hint: it shows "Estimated VRAM usage: 5.6GB" and "Estimated Sysram usage: 12.3GB" (at least for Sysram I know that's pretty accurate).

    I've seen Octane and their plans for Octane 2019.1, which seems to be slated for release mid next year; it would be interesting to see if they manage better than Reality.

    I was not aware that a ProRender plugin / OpenCL migration would require additional shaders. So there's no option for general GPU acceleration utilizing the current ecosystem?

    (Even on my 8-core Ryzen 7 2600 / 4GB RX560, rendering a scene takes >30h, compared to the ~2.5h it should take via GPU: an absolute pain :( Nevertheless, I'm aware that I don't have a high-performance system, since I cool everything passively.)

  • SpottedKitty Posts: 7,232
    v1217108 said:
    I was not aware that a ProRender plugin / OpenCL migration would require additional shaders.

    Materials and shaders are almost always locked in to their particular render engines. This is why many items in the DAZ store install with both 3Delight and Iray materials. Incidentally, this is also why the Viewport doesn't always look like the finished render; the Viewport uses a third render engine, OpenGL (with an option to preview in Iray).

  • inquire Posts: 2,099

    The 750Ti only has 2GB of VRAM. It is going to be quite difficult to fit any modern characters into a scene that small.

    Would it be nice if there was a ProRender plugin for Daz? Maybe, and maybe not. ProRender uses different shaders from Iray, much like 3Delight, and it is pretty clear that many vendors much prefer providing only one set. It would be quite tedious in any reasonably sized scene to go in and provide shaders for everything. I tried that in Reality and it was excruciating.

    Couldn't agree more. The time you "save" by supposedly getting a faster render in Reality is just not there because you have to spend time converting the shaders from DAZ to Reality for the Lux render.

  • inquire Posts: 2,099
    v1217108 said:

    Hi all,

    I have some sophisticated scenes to render (imported houses from Sweethome3d, multiple figures, difficult textures, difficult lighting) that I want to render via GPU. Reality is continuously crashing, and DAZ refuses to utilize my older nVidia GTX 750Ti that I revived. I tried everything: disabling OptiX acceleration, CPU+GPU, GPU only, latest drivers, but I was never able to render with GPU acceleration.

    AMD has created HIP, which is able to convert >95% of CUDA code to OpenCL automatically. Are there any efforts to utilize OpenCL?

    https://streamhpc.com/blog/2016-09-07/get-ready-for-conversions-of-large-scale-cuda-software-to-amd-hardware/

    Furthermore, there's AMD's ProRender, which is also much faster & more efficient than CUDA for similar renders. There are plugins for 3ds Max, Maya, Blender, SolidWorks, PTC, Unreal, and USD, but not a single one for Daz. Does anyone know if there have been any agreements between AMD & Daz?

    Or did Daz make exclusive contracts with nVidia that bind them to nVidia hardware exclusively?

    Would be great if someone has more info here.

    Greetz

    This is very interesting, except the article about HIP is dated September 7, 2016. Is there anything newer? Has this idea just faded away? (I hope not.) Would you still have to adjust shaders?

     

     

  • kenshaw011267 Posts: 3,805
    inquire said:
    v1217108 said:

    Hi all,

    I have some sophisticated scenes to render (imported houses from Sweethome3d, multiple figures, difficult textures, difficult lighting) that I want to render via GPU. Reality is continuously crashing, and DAZ refuses to utilize my older nVidia GTX 750Ti that I revived. I tried everything: disabling OptiX acceleration, CPU+GPU, GPU only, latest drivers, but I was never able to render with GPU acceleration.

    AMD has created HIP, which is able to convert >95% of CUDA code to OpenCL automatically. Are there any efforts to utilize OpenCL?

    https://streamhpc.com/blog/2016-09-07/get-ready-for-conversions-of-large-scale-cuda-software-to-amd-hardware/

    Furthermore, there's AMD's ProRender, which is also much faster & more efficient than CUDA for similar renders. There are plugins for 3ds Max, Maya, Blender, SolidWorks, PTC, Unreal, and USD, but not a single one for Daz. Does anyone know if there have been any agreements between AMD & Daz?

    Or did Daz make exclusive contracts with nVidia that bind them to nVidia hardware exclusively?

    Would be great if someone has more info here.

    Greetz

    This is very interesting, except the article about HIP is dated September 7, 2016. Is there anything newer? Has this idea just faded away? (I hope not.) Would you still have to adjust shaders?

     

     

    Last I heard the people involved left and AMD has basically put the project on hold indefinitely.

  • prixat Posts: 1,585
    v1217108 said:

    Furthermore, there's AMD's ProRender, which is also much faster & more efficient than CUDA for similar renders. There are plugins for 3ds Max, Maya, Blender, SolidWorks, PTC, Unreal, and USD, but not a single one for Daz. Does anyone know if there have been any agreements between AMD & Daz?

    Or did Daz make exclusive contracts with nVidia that bind them to nVidia hardware exclusively?

    Would be great if someone has more info here.

    Greetz

    Maxon/Cinema4D adopted ProRender and nVidia then discontinued Iray for Cinema4D; not saying the two things are related...

    (While 3Delight seems to be bypassing the whole GPU acceleration idea and going with cloud-based CPU servers.)

  • v1217108 Posts: 0

    >Last I heard the people involved left and AMD has basically put the project on hold indefinitely.

    According to the sources, this still seems to be under heavy maintenance; the last commits are from last week:

    https://github.com/ROCm-Developer-Tools/HIP

    https://gpuopen.com/compute-product/hip-convert-cuda-to-portable-c-code/

    It's a layer on top that will execute CUDA on nVidia hardware utilizing nVidia's NVCC compiler, and the related OpenCL code on other hardware, so it should be possible to use all current materials & shaders ( https://32ipi028l5q82yhj72224m8j-wpengine.netdna-ssl.com/wp-content/uploads/2016/01/7637_HIP_Datasheet_V1_7_PrintReady_US_WE.pdf )

  • inquire Posts: 2,099

    Well, maybe it still is in the works. The second site is dated 2018, I think, and the third, 2016. The first looks like it's dated 12 days ago. 

    Your comment, I think, is correct if this Heterogeneous-compute Interface for Portability (HIP) gets going: it's a layer on top that will execute CUDA on nVidia hardware utilizing nVidia's NVCC compiler, and the related OpenCL code on other hardware, so it should be possible to use all current materials & shaders.

    Last night I thought that, since it's not a copy of iRay or a substitution for iRay but a copy of the code, running the code should not require new materials and shaders. Adjusting materials and shaders is what may actually waste or use up the time you'd supposedly save by transferring a DAZ file into Reality for a Lux render, or into Blender, Maya, etc.

    But sometimes things start off with a great deal of excitement, and end up going nowhere. I hope this isn't going to be one of those cases. Look at the attempt to revive Y-Gallery, or the attempt by Poser users to develop an improved weight-mapped M4.

     

  • kenshaw011267 Posts: 3,805
    prixat said:
    v1217108 said:

    Furthermore, there's AMD's ProRender, which is also much faster & more efficient than CUDA for similar renders. There are plugins for 3ds Max, Maya, Blender, SolidWorks, PTC, Unreal, and USD, but not a single one for Daz. Does anyone know if there have been any agreements between AMD & Daz?

    Or did Daz make exclusive contracts with nVidia that bind them to nVidia hardware exclusively?

    Would be great if someone has more info here.

    Greetz

    Maxon/Cinema4D adopted ProRender and nVidia then discontinued Iray for Cinema4D; not saying the two things are related...

    (While 3Delight seems to be bypassing the whole GPU acceleration idea and going with cloud-based CPU servers.)

    Cinema 4D still has Iray. This seems to be more a misunderstanding of Nvidia devolving plug-in support for Iray to the specific developers in question. Nvidia announced last fall that they were ending development of all the Iray plug-ins, not just the Cinema 4D one. This does not mean you can't use Iray in those products, or won't be able to any time soon. Daz already develops their own Iray plug-in for Daz Studio, and now Maxon and the rest just have to do the same.

  • kenshaw011267 Posts: 3,805
    edited June 2019
    inquire said:

    Well, maybe it still is in the works. The second site is dated 2018, I think, and the third, 2016. The first looks like it's dated 12 days ago. 

    Your comment, I think, is correct if this Heterogeneous-compute Interface for Portability (HIP) gets going: it's a layer on top that will execute CUDA on nVidia hardware utilizing nVidia's NVCC compiler, and the related OpenCL code on other hardware, so it should be possible to use all current materials & shaders.

    Last night I thought that, since it's not a copy of iRay or a substitution for iRay but a copy of the code, running the code should not require new materials and shaders. Adjusting materials and shaders is what may actually waste or use up the time you'd supposedly save by transferring a DAZ file into Reality for a Lux render, or into Blender, Maya, etc.

    But sometimes things start off with a great deal of excitement, and end up going nowhere. I hope this isn't going to be one of those cases. Look at the attempt to revive Y-Gallery, or the attempt by Poser users to develop an improved weight-mapped M4.

     

    I just read more in depth about HIP. It's not a CUDA abstraction layer; it's a way to program for Radeon- and Nvidia-specific compute features using one code base. Someone would have to get the Iray source code, rewrite it for HIP, and then compile it before it would work on Radeon. Since Nvidia owns that code, the chances of it ever being released as open source are next to zero.

    The project I was referencing above was a way to directly run CUDA code on non-Nvidia hardware, not a way to write portable code.
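    For what it's worth, here's roughly what "one code base" means in practice: a minimal, hypothetical HIP sketch (a toy vector-scale kernel, nothing to do with Iray). The same source builds with hipcc for Radeon/ROCm, and on an Nvidia box the hip* calls simply map onto the CUDA runtime via NVCC.

        // Hypothetical single-source HIP example (not Iray code).
        #include <hip/hip_runtime.h>
        #include <cstdio>

        __global__ void scale(float* data, float factor, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] *= factor;
        }

        int main()
        {
            const int n = 1024;
            float host[1024];
            for (int i = 0; i < n; ++i) host[i] = 1.0f;

            float* dev = nullptr;
            hipMalloc(&dev, n * sizeof(float));                 // cudaMalloc on Nvidia
            hipMemcpy(dev, host, n * sizeof(float), hipMemcpyHostToDevice);

            hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                               dev, 2.0f, n);                   // <<<...>>> launch on Nvidia

            hipMemcpy(host, dev, n * sizeof(float), hipMemcpyDeviceToHost);
            hipFree(dev);

            printf("first element after scaling: %f\n", host[0]);  // expect 2.0
            return 0;
        }

    So HIP only helps if you have the source and port it; it doesn't let an existing CUDA binary like Iray run on Radeon.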

  • inquire Posts: 2,099

    I'm not sure I understand. I'm not a tech-savvy person. If CUDA code on non-Nvidia hardware can be written, can it be written for the Macintosh OS? Will it then be able to run on the new Metal that Apple is developing? Can it run on the new AMD cards (which Apple will tweak) so that iRay rendering would be able to work on OS 10.15? Or is this all hopeful wondering on my part?

  • inquire said:

    I'm not sure I understand. I'm not a tech-savvy person. If CUDA code on non-Nvidia hardware can be written, can it be written for the Macintosh OS? Will it then be able to run on the new Metal that Apple is developing? Can it run on the new AMD cards (which Apple will tweak) so that iRay rendering would be able to work on OS 10.15? Or is this all hopeful wondering on my part?

    CUDA is a specific set of features that is available only in nVidia GPUs.
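    To make that concrete, here's a trivial, hypothetical CUDA snippet: the __global__ qualifier, the <<<...>>> launch syntax, and the cuda* runtime calls below are all part of Nvidia's proprietary toolkit and only run on Nvidia GPUs.

        // Hypothetical CUDA-only example: requires nvcc and an Nvidia GPU/driver.
        #include <cuda_runtime.h>
        #include <cstdio>

        __global__ void add_one(float* data, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] += 1.0f;
        }

        int main()
        {
            const int n = 256;
            float host[256] = {0};

            float* dev = nullptr;
            cudaMalloc(&dev, n * sizeof(float));
            cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

            add_one<<<1, n>>>(dev, n);   // CUDA launch syntax, Nvidia only

            cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
            cudaFree(dev);

            printf("host[0] = %f\n", host[0]);   // expect 1.0
            return 0;
        }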

  • kenshaw011267 Posts: 3,805
    inquire said:

    I'm not sure I understand. I'm not a tech-savvy person. If CUDA code on non-Nvidia hardware can be written, can it be written for the Macintosh OS? Will it then be able to run on the new Metal that Apple is developing? Can it run on the new AMD cards (which Apple will tweak) so that iRay rendering would be able to work on OS 10.15? Or is this all hopeful wondering on my part?

    Not presently, or any time soon. What is referenced upthread is a method of writing compute code for both CUDA and non-CUDA devices. In theory Iray could be rewritten for that, but Nvidia owns Iray and there is basically no chance they are going to help AMD.

  • Padone Posts: 3,481

    Not presently, or any time soon. What is referenced upthread is a method of writing compute code for both CUDA and non-CUDA devices. In theory Iray could be rewritten for that, but Nvidia owns Iray and there is basically no chance they are going to help AMD.

    Unless AMD writes new drivers using HIP that are CUDA-capable; since Iray can use any CUDA device, this would free Daz from Nvidia. But this solution would also require a JIT NVCC compiler, so performance issues could be expected.

  • inquire Posts: 2,099
    Padone said:

     

    Unless AMD writes new drivers using HIP that are CUDA-capable; since Iray can use any CUDA device, this would free Daz from Nvidia. But this solution would also require a JIT NVCC compiler, so performance issues could be expected.

    And what is a JIT NVCC compiler?

     

  • nemesis10 Posts: 3,277
    inquire said:
    Padone said:

     

    Unless AMD writes new drivers using HIP that are CUDA-capable; since Iray can use any CUDA device, this would free Daz from Nvidia. But this solution would also require a JIT NVCC compiler, so performance issues could be expected.

    And what is a JIT NVCC compiler?

     

    A Just-In-Time CUDA compiler driver.
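    In case it helps, here's a minimal sketch of what "just-in-time" compilation of CUDA looks like using Nvidia's NVRTC library (the kernel string is a hypothetical toy; link with -lnvrtc from the CUDA toolkit). This is the kind of runtime compile step Padone is talking about, and why it could cost performance:

        // Minimal NVRTC sketch: JIT-compile a CUDA kernel from a string to PTX.
        #include <nvrtc.h>
        #include <cstdio>
        #include <vector>

        int main()
        {
            const char* src =
                "extern \"C\" __global__ void scale(float* d, float f, int n) {\n"
                "    int i = blockIdx.x * blockDim.x + threadIdx.x;\n"
                "    if (i < n) d[i] *= f;\n"
                "}\n";

            nvrtcProgram prog;
            nvrtcCreateProgram(&prog, src, "scale.cu", 0, nullptr, nullptr);

            const char* opts[] = { "--gpu-architecture=compute_50" };  // e.g. Maxwell (750 Ti)
            nvrtcResult rc = nvrtcCompileProgram(prog, 1, opts);

            size_t logSize = 0;                       // the log explains any failure
            nvrtcGetProgramLogSize(prog, &logSize);
            std::vector<char> log(logSize + 1, '\0');
            nvrtcGetProgramLog(prog, log.data());
            if (rc != NVRTC_SUCCESS) {
                printf("JIT compile failed:\n%s\n", log.data());
                return 1;
            }

            size_t ptxSize = 0;                       // the PTX would then be loaded
            nvrtcGetPTXSize(prog, &ptxSize);          // with cuModuleLoadData and
            std::vector<char> ptx(ptxSize);           // launched via cuLaunchKernel
            nvrtcGetPTX(prog, ptx.data());
            printf("JIT-compiled %zu bytes of PTX\n", ptxSize);

            nvrtcDestroyProgram(&prog);
            return 0;
        }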

  • inquire Posts: 2,099
    inquire said:

    I'm not sure I understand. I'm not a tech-savvy person. If CUDA code on non-Nvidia hardware can be written, can it be written for the Macintosh OS? Will it then be able to run on the new Metal that Apple is developing? Can it run on the new AMD cards (which Apple will tweak) so that iRay rendering would be able to work on OS 10.15? Or is this all hopeful wondering on my part?

    Not presently, or any time soon. What is referenced upthread is a method of writing compute code for both CUDA and non-CUDA devices. In theory Iray could be rewritten for that, but Nvidia owns Iray and there is basically no chance they are going to help AMD.

    Well, then, what is the point of CUDA? If "CUDA is a specific set of features that is available only in nVidia GPUs," and if Nvidia won't share CUDA, what is the good of CUDA?

     

  • kenshaw011267 Posts: 3,805
    inquire said:
    inquire said:

    I'm not sure I understand. I'm not a tech-savvy person. If CUDA code on non-Nvidia hardware can be written, can it be written for the Macintosh OS? Will it then be able to run on the new Metal that Apple is developing? Can it run on the new AMD cards (which Apple will tweak) so that iRay rendering would be able to work on OS 10.15? Or is this all hopeful wondering on my part?

    Not presently, or any time soon. What is referenced upthread is a method of writing compute code for both CUDA and non-CUDA devices. In theory Iray could be rewritten for that, but Nvidia owns Iray and there is basically no chance they are going to help AMD.

    Well, then, what is the point of CUDA? If "CUDA is a specific set of features that is available only in nVidia GPUs," and if Nvidia won't share CUDA, what is the good of CUDA?

    It is very good at compute workloads, including AI processing. CUDA devices are at the core of pretty much every self-driving vehicle out there. I manage IT at a datacenter, and roughly half our boxes, and more all the time, are running CUDA applications.

  • inquire Posts: 2,099

    OK, thank you. I did not know that. I'm always willing to learn.

  • inquire Posts: 2,099

    I have posted links in this thread to articles that explain Metal, at least for the non-high-tech person: https://www.daz3d.com/forums/discussion/334446/a-daz-studio-version-to-render-in-metal#latest
