ATI/AMD release their own PBR Renderer for RADEON


Comments

  • samurle Posts: 94
    edited December 2016

    Let's see.

    Iray license: I think Daz3D must pay several thousand dollars per year to include it in their Daz Studio program.

    ProRender license: free

    Yep, a lot of people will be using ProRender.

     

    Post edited by samurle on
  • samurle said:

    Iray license: I think Daz3D must pay several thousand dollars per year to include it in their Daz Studio program. ProRender license: free. Yep, a lot of people will be using ProRender.

    And unless I missed something critical, anyone who chooses to use ProRender has no obligation to support the further development of said rendering engine, which is where at least part of the money from licensing Iray goes.

  • kyoto kid said:
    prixat said:

    We don't have much to judge performance on except benchmarks.

    This was interesting within the Compubench results. In almost every test the Titan X and the 1080 were faster in OpenCL than they were in CUDA!

    https://compubench.com/result.jsp

    Well, that is to be expected, and the margin will get bigger, not smaller, as future SW & HW development cycles come and go.

    Except that nVidia is still using OpenCL 1.2! I would have expected OpenCL to suffer translation losses and CUDA to always be faster.

    Well, they don't give the Windows version, but I think the results point to them using Windows 10, with its improved Windows 10 / DirectX 12 parallelization algorithms. OpenCL is not bad - it's really very good - but in the past it was hampered by how it parallelized dispatch processing; that is much improved now.

    ...provided you can keep W10 from eating a big chunk of your VRAM. For that you need two cards, one for rendering and one to run the displays. The non-render card still needs enough memory to handle the Daz OpenGL viewport with a scene loaded without incredible lag.

    Totally irrelevant.

    ...and why? As I understand it, W10 reserves a significant amount of VRAM compared to older versions of the OS. This has been addressed not only here but on several tech sites I frequent as well. The solution usually given is to use one GPU just to run the display while reserving the other for rendering.

    As I understand it, all recent versions of Windows (NT4+, I think) reserve a certain amount of RAM on any GPU that could have a display connected, whether it does or not. Windows 10 uses more for this than previous versions. Having two GPUs will not help - indeed it's the treatment of the second and subsequent GPUs that is the issue. The way to avoid it, if it is unacceptable, is to use a compute-only card - that is, a Quadro for display and the compute-only cards whose name I have forgotten as the subsidiary cards for render calculations. I believe that turning off display features will reduce the amount of RAM assigned to each card if using multiple GPUs.
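
    On the OpenCL 1.2 point above: it is easy to check which OpenCL version each installed driver actually exposes. A minimal sketch, assuming the pyopencl bindings are installed:

        import pyopencl as cl

        # List every OpenCL platform/device and the version string its
        # driver reports, e.g. "OpenCL 1.2 CUDA" on NVIDIA.
        for platform in cl.get_platforms():
            print(platform.name, "|", platform.version)
            for device in platform.get_devices():
                print("   ", device.name, "|", device.version)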

  • nicstt Posts: 11,715
    Richard Haseltine said:

    As I understand it, all recent versions of Windows (NT4+, I think) reserve a certain amount of RAM on any GPU that could have a display connected, whether it does or not. Windows 10 uses more for this than previous versions. [...]

    Hmm, that doesn't make sense, as GPU-Z reports all RAM as available on my 980 Ti; I'm on Windows 8.0.

  • Havos Posts: 5,573
    Richard Haseltine said:

    As I understand it, all recent versions of Windows (NT4+, I think) reserve a certain amount of RAM on any GPU that could have a display connected, whether it does or not. Windows 10 uses more for this than previous versions. [...]

    You can also use the onboard graphics processor. These have improved a lot over the years, and on modern motherboards the onboard GPU is more than capable of running a large screen, and can often handle multiple monitors as well. They normally do not have 3D processing cores, however.

  • namffuak Posts: 4,403
    kyoto kid said:

    ...and why? As I understand it, W10 reserves a significant amount of VRAM compared to older versions of the OS. [...] The solution usually given is to use one GPU just to run the display while reserving the other for rendering.

    No - the problem, as I understand it from Kendall's write-up, is that W10 allocates this memory on all video cards, because W10 will crash if you hot-plug a monitor into a card without the memory allocated. This has been going on since Windows NT 4.1, but wasn't really noticeable because the allocation used to be for a VESA monitor running at 640x480, and now it is for a 4K monitor running all the W10 eye candy.
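
    To see how much of each card's VRAM is actually spoken for at any moment, you can ask the driver directly rather than guess. A minimal sketch, assuming NVIDIA hardware and the nvidia-ml-py (pynvml) bindings:

        import pynvml

        pynvml.nvmlInit()
        try:
            for i in range(pynvml.nvmlDeviceGetCount()):
                handle = pynvml.nvmlDeviceGetHandleByIndex(i)
                name = pynvml.nvmlDeviceGetName(handle)
                if isinstance(name, bytes):   # older bindings return bytes
                    name = name.decode()
                mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
                mib = 1024 ** 2
                print(f"GPU {i} ({name}): {mem.total // mib} MiB total, "
                      f"{mem.used // mib} MiB used, {mem.free // mib} MiB free")
        finally:
            pynvml.nvmlShutdown()

    Run it on an idle desktop and the "used" figure is roughly what the OS and drivers have reserved before any render starts.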

  • Rashad Carter Posts: 1,830
    edited December 2016
    hphoenix said:

    AMD/ATI Radeon now has ProRender...

    That name made me chortle out loud. I wonder if a lot of pointless arguing was involved.

    (long time Brycers will get it)

    I think it's just you and I who remember that one!!! I've still got a folder named ProRender. Bet you do too. FYI I tried to reach out to him a little while back... nada. The old days of Bryce!!!

     

    Ah, I see Chohole is also getting a chuckle!

    Post edited by Rashad Carter on
  • DustRider Posts: 2,878
    samurle said:

    Iray license: I think Daz3D must pay several thousand dollars per year to include it in their Daz Studio program. ProRender license: free. Yep, a lot of people will be using ProRender.

    Well ...... we don't know how much DAZ 3D pays for licensing Iray, or any of the other terms of the contract. It may even be free for DAZ 3D to use, if Nvidia sees it as a good marketing strategy. But, IMHO, I seriously doubt we'll see DAZ 3D integrating ProRender into DS unless their business relationship with Nvidia goes bad. Even then it might not happen, since they could decide to broker some sort of deal with Otoy to integrate Octane Render (since there is already a plugin for it).

    While ProRender being free is nice, if it doesn't have the performance, features, and support of other "pay for" render engines on the market, then it won't be very popular. Time will tell just how popular it actually gets. Keep in mind that Cycles has been free for commercial use for several years, and other than Blender, Poser is the only commercial implementation of it that I know of (Cycles has both CUDA and OpenCL support, but as others have noted, OpenCL is well behind CUDA in development and stability in Cycles). Also, Pixar's RenderMan has been free for personal use for a while now (and the commercial licensing has been dramatically reduced in price), and there have been no DS plugins developed for it. So, just because a render engine is "free" doesn't mean it is a viable candidate for DS plugin development.

    Within the DAZ Studio community, IMHO, ProRender will have an insignificant impact unless someone steps up to the plate and makes a plugin for it. The biggest question for any potential plugin developer would be "is this a commercially viable product" (or "will I get my investment back .... and make a profit"). In general, the majority of DS users seem to be very reluctant to learn how to tweak shaders, so any potential developers would have to be able to convert Iray/3Delight shaders at a near 1:1 quality level. To gain wide adoption within the DS user community, it would also have to have an equivalent or better feature set compared to Iray to get the interest of those with Nvidia cards (and perform as well as Iray on Nvidia). There may be enough AMD users of DS to make it financially viable, but this is a risk the developer would need to consider (and given the length of time Iray has been in DS, I would guess the majority of the user base now has an Nvidia GPU). Of course there may be someone who is interested in developing a DS plugin more for their personal use or altruistic goals, where any sales of the plugin would be icing on the cake (or maybe even release it for free). So I'm not saying there is no chance a plugin will be developed for DS, I just wouldn't hold my breath waiting for it (or invest in an ATI card in anticipation).

    It will be interesting to see what happens, and competition is a good thing, but I'm just a bit skeptical about a plugin being developed for DS. Once the Blender plugin is complete, it may finally make me learn Blender.

  • wolf359 Posts: 3,929

    "
    It will be interesting to see what happens, and competition is a good thing, but I'm just a bit skeptical about a plugin being developed for DS. Once the Blender plugin is complete, it may finally make me learn Blender "

    Out of curiosity, what does this new AMD PBR do that would make one suddenly become interested in using Blender that Cycles cannot already do, as far as actual rendering quality?

    http://www.blenderguru.com/articles/24-photorealistic-blender-renders/


    I mean, the Nerdy McNerd CUDA-versus-OpenCL debate aside, and looking at it strictly from the perspective of someone wanting to render DAZ content, or any content, in a FREE PBR that will not punish us severely for not investing in Nvidia hardware.

    I only ask because I am getting very high quality photorealistic still renders of Daz content from Blender in 2 hours on a low-spec machine that would take 18+ hours to get "resolved" enough to be passable with Iray.

    NOT Daz or Nvidia's fault, of course; my own, for not being able to upgrade hardware until next year.

    My point is, why wait for this AMD PBR and a plugin from "someone, somewhere" to convert DAZ content & shaders via a bridge to Blender?

    What is stopping one from learning Blender right now and using the free mcjTeleBlender plugin to convert any DAZ content?

  • mjc1016 Posts: 15,001
    wolf359 said:
     

    What is stopping one from learning Blender right now and using the free mcjTeleBlender plugin to convert any DAZ content?

    Besides...Cycles is fun.

  • Scavenger Posts: 2,674

    Maybe a new player in the OpenCL game will get Apple to fix its support for it.

  • I upgraded my GPU to an AMD board just months before Poser 11 and Iray. Was NOT a happy camper. And now that I'd given up on doing GPU renders on it, AMD is finally getting their s#!t together? They thought making a competing programming interface would be enough to generate interest in making a competing renderer. With Iray already on the market, that interest never really materialized because of the huge time investment required to get a product up to snuff. And without such a large-scale project to give feedback and direction to OpenCL development, that languished too. It's a symbiotic relationship - just like app and driver developers need feedback from each other. Hopefully, now that AMD has come to their senses and built their own renderer, they will finally see some real progress with OpenCL too.

  • wolf359 Posts: 3,929
    mjc1016 said:

    Besides...Cycles is fun.

    And it also has "branched path tracing" mode, opening up many parameters for individually adjusting the number of bounces for volume, caustics, SSS, reflection, diffuse, etc.

    This gives many more options for efficient scene optimizations that can save render time (see the sketch below).

    I am not sure how Iray is implemented in other apps like Maya or Max, but the Daz Studio version seems a bit "brute force", unless I am missing some settings somewhere.
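
    For reference, Cycles exposes all of those per-type controls through Blender's Python API. A minimal sketch (2.7x-era Cycles property names are assumed):

        import bpy

        scene = bpy.context.scene
        scene.render.engine = 'CYCLES'
        cycles = scene.cycles

        cycles.progressive = 'BRANCHED_PATH'  # branched path tracing mode
        cycles.aa_samples = 8                 # camera (AA) samples per pixel

        # Per-component samples taken at each AA sample
        cycles.diffuse_samples = 2
        cycles.glossy_samples = 4
        cycles.transmission_samples = 4
        cycles.subsurface_samples = 2
        cycles.volume_samples = 2

        # Per-type bounce limits, so cheap effects don't pay for expensive ones
        cycles.max_bounces = 8
        cycles.diffuse_bounces = 2
        cycles.glossy_bounces = 4
        cycles.transmission_bounces = 8
        cycles.volume_bounces = 0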

  • mjc1016 Posts: 15,001
    wolf359 said:

    I am not sure how Iray is implemented in other apps like Maya or Max, but the Daz Studio version seems a bit "brute force", unless I am missing some settings somewhere.

    Doesn't seem to be much different from what I've seen...there's a 'master' bounce control and that's about it.  Brute force pretty well sums it up.

  • wiz said:

    I'm surprised that this is even an issue. In numerical computing, OpenCL is an also-ran. If you're heroic with it, you can get it to perform at about 50-60% of the level of CUDA. This is true whether you're pitting CUDA on Nvidia against OpenCL on equivalent Radeons, or using both OpenCL and CUDA on the same Nvidia.

    I'm a programmer, and those are interesting ways to state the performance difference. Yes, on nVidia, OpenCL is slower than CUDA. There's a reason for that. And I haven't seen many reliable CUDA vs OpenCL benchmarks comparing nVidia to AMD. The reliable ones (and my personal experience) say that AMD is faster. WAY faster, especially once you get to the R9 series. This is established fact in the programming world. And using OpenCL on both AMD and nVidia (a benchmark type you didn't mention), AMD is pretty much always faster. For example, my R9 290 is always faster than my GTX Titan X on LuxRender... that is, when the Titan X doesn't crash. AMD runs fine, though. So it's somewhat ironic that many have used the bad quality of the drivers on a certain brand to argue against using OpenCL.

    Overall though, I like more options. I'm glad that both companies have products coming out (or already available) that we can enjoy.
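
    For anyone who would rather measure than argue: a toy OpenCL timing sketch, assuming pyopencl and numpy are installed. A trivial vector add stresses very different paths than a path tracer does, which is exactly why micro-benchmarks like this should be read with suspicion:

        import time
        import numpy as np
        import pyopencl as cl

        n = 16_000_000
        a = np.random.rand(n).astype(np.float32)
        b = np.random.rand(n).astype(np.float32)
        out = np.empty_like(a)

        ctx = cl.create_some_context()   # choose a platform/device
        queue = cl.CommandQueue(ctx)

        prg = cl.Program(ctx, """
        __kernel void vadd(__global const float *a,
                           __global const float *b,
                           __global float *out) {
            int i = get_global_id(0);
            out[i] = a[i] + b[i];
        }
        """).build()

        mf = cl.mem_flags
        a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
        b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
        out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

        t0 = time.perf_counter()
        prg.vadd(queue, (n,), None, a_buf, b_buf, out_buf)
        queue.finish()                   # wait for the kernel to complete
        print(f"kernel time: {time.perf_counter() - t0:.4f} s")

        cl.enqueue_copy(queue, out, out_buf)
        assert np.allclose(out, a + b)   # sanity-check the result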

     

  • Rashad Carter said:

    I think it's just you and I who remember that one!!! I've still got a folder named ProRender. Bet you do too.

     

    I remember receiving the files and being quite surprised that there was nothing in ProRender that I hadn't already independently discovered and talked about on Renderosity (the pages were still there last time I looked). Len knew how to light scenes to look good (and choose models that minimised the faceting issue), so I initially thought he'd found something extra in Bryce True Ambience. Well over a decade ago now. Blimey.

    I look forward to diving back into Bryce and 3D once my new (old) house is completed. I think Genesis 4 will be out by then, and these guys will still be arguing about (the AMD) ProRender. History repeating...

  • samurle Posts: 94
    edited December 2016

    It costs $295 for a personal license for Iray. That's one person. I can only imagine how much it costs a company to include Iray in a program that everyone can use. It must be at least $15,000/year. I wouldn't pass on ProRender if I had to pay that every year.

    And who knows, they could have locked Daz into a 5-year deal for 50k to stop them from using other renderers during that period.

     

    Post edited by samurle on
  • samurle said:

    It must be at least $15,000/year. [...] every year.

    It could be, but it needn't be. For all we know, DAZ might have negotiated free or super cheap use for Studio.

     

  • kyoto kid Posts: 41,838
    namffuak said:
    No - the problem, as I understand it from Kendall's write-up, is that W10 allocates this memory on all video cards, because W10 will crash if you hot-plug a monitor into a card without the memory allocated. [...]

    ...all the more reason then to avoid W10. W7 takes an almost negligible amount, and I'm running two displays (without the Aero interface or other useless on-screen "gadgets"). Why bother laying out a lot of money for a card if W10 hogs a good portion of it and makes it perform like a lesser one for rendering purposes? Better off just going with dual multi-core CPU rendering and a lot of memory, then.

  • kyoto kid Posts: 41,838
    Havos said:
    You can also use the onboard graphics processor. These have improved a lot over the years, and on modern motherboards the onboard GPU is more than capable of running a large screen, and can often handle multiple monitors as well. They normally do not have 3D processing cores, however.

    ...unfortunately I have an old P6T MB, which has a much older, less capable onboard graphics chipset.

    ...oh, and Richard, I imagine you are referring to Tesla compute cards. Well, if I win tonight's Megabucks Lotto I can consider that option.

  • kyoto kid Posts: 41,838
    edited December 2016
    wolf359 said:

    "
    It will be interesting to see what happens, and competition is a good thing, but I'm just a bit skeptical about a plugin being developed for DS. Once the Blender plugin is complete, it may finally make me learn Blender "

    Out of curiosity what does this new AMD PBR do  that would make one suddenly become interested in using blender 
    That Cycles cannot  already do as far as actual rendering quality??

    http://www.blenderguru.com/articles/24-photorealistic-blender-renders/


    I mean the Nerdy McNerd ,CUDA verses openCL  debate aside , and looking at it strictly from the perspective of someone wanting to render DAZ CONTENT or any content in a FREE PBR that will Not punish us severely  for not investing Nvidia Hardware.

    I only ask because I am getting very high quality photorealistic still renders of Daz content from Blender  in 2 hours  on a low specced machine
    that would takes 18+ hours to get "resolved" enough  to be passable with IRay

    NOT Daz or Nvidia'a Fault of course My own for not being able to upgrade HW until next year .

    My point is why wait for this AMD PBR and a plugin from "Someone ,somewhere" to Convert DAZ content & Shaders via a bridge toBlender?

    What is stopping one from learning Blender right now and using the free MCJ teleblender plugin for Convert any  DAZ content.

    ...what is stopping me is Blender's clunky, difficult-to-work-with (at least from my perspective) UI and setup.

    If there ever is a third-party plugin for ProRender I would still probably give it a try. AMD cards are less expensive than Nvidia's (Sapphire has had an 8 GB card out for a couple of years now that is priced less than the 1080).

    Post edited by kyoto kid on
  • j cade Posts: 2,310
    mjc1016 said:

    Besides...Cycles is fun.

    Every time there's a discussion on render engines I feel like a really pushy salesman, or one of those groups that comes to your house to evangelize even though it's in the middle of the woods... "I'm sure whatever engine you're talking about is great, but... have you tried Cycles?" "Oh, you miss those old render profiles LuxRender has and wish Iray had something similar? Well, you know what program has those too? Blender!"

    Or maybe Kanye: "I'm gonna let you finish, but Cycles has the best material node setup of all time. Of all time."
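
    The node setup being praised here is also fully scriptable. A minimal sketch (standard bpy calls; node and socket names as of the 2.7x series) that builds a subsurface-scattering material without touching the node editor:

        import bpy

        # Create a material and switch it to node-based shading.
        mat = bpy.data.materials.new("SkinSketch")
        mat.use_nodes = True
        nodes = mat.node_tree.nodes
        links = mat.node_tree.links

        # Add an SSS shader and wire it into the material output.
        sss = nodes.new("ShaderNodeSubsurfaceScattering")
        sss.inputs["Scale"].default_value = 0.01  # shrink the scatter radius

        out = nodes["Material Output"]            # created by use_nodes = True
        links.new(sss.outputs["BSSRDF"], out.inputs["Surface"])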

  • wolf359 Posts: 3,929
    edited December 2016

    "Every time there's a discussion on render engines I feel like a really pushy salesman......"

    Except the very subject of this thread is a free PBR render engine that is being released for Blender, certainly long before it will ever see the light of day as a Daz Studio option (if ever). So discussing it in the context of Blender's current free PBR render engine (Cycles) is not what any reasonable person would consider a non sequitur.


    "...what is stopping myself  is Blender's clunky difficult to work with ...."

    LOL!!!! Reconfirming the sheer genius of Pavlov.

    Post edited by wolf359 on
  • wiz said:

    I'm surprised that this is even an issue. In numerical computing, OpenCL is an also-ran. [...]

    So it's somewhat ironic that many have used the bad quality of the drivers on a certain brand to argue against using OpenCL.

    It wasn't driver quality that folks mentioned with respect to OpenCL; it was the capabilities of the library itself that were at issue. OTOY and Pixar obviously went with support for CUDA because it was ready for prime time when they were working on their software.

    Overall though, I like more options. I'm glad that both companies have products coming out (or already available) that we can enjoy.

     

    True.

  • I'm working on a Radeon ProRender plugin for Daz Studio; getting the SDKs from AMD today.
