ATI/AMD release their own PBR Renderer for RADEON


Comments

  • nicstt Posts: 11,715

    This is fantastic news. Maybe this can get Nvidia to start caring more about Iray and start to optimize it better. Competition is always a good thing.

    I won't argue with this, except to say that we don't really know that nVidia hasn't been working to optimize it all along.

    This absolutely can have a major impact on Daz's market. If it performs better than Iray, or even just as well, it will make rapid inroads, no matter how entrenched Iray may be (and to be frank, I don't believe it is entrenched at all; Daz is the only place where Iray is "king"). Ease of use will be a big factor, too.

    Most folks that do renders in some pretty big name software have no idea what rendering engine is involved; they just use the tools provided to do their job. If those tools just so happen to be Iray and MDL based, so be it.

    There are a lot of users who install Daz Studio for one reason, and one reason only: Iray. Maybe some of these people end up buying something from Daz at some point. Everybody who installs Daz Studio is a potential Daz customer. That is the whole point of offering it for free, after all. So if people start moving towards an open source rendering platform that is also free, and can be used by any hardware with OpenGL, then this presents a serious threat to Daz.

    I don't agree that folks install DAZ Studio specifically for Iray; it just happens to come along for the ride now.

    GPU and CPU workload BALANCING. If this isn't just PR talk, ProRender will start strong out of the gate for this reason alone.

    Maybe, but they still have to convince folks who have heard of the issues with OpenCL that those no longer plague the software.

    The proof is in the results. Iray is still slow and heavily dependent on having high-end hardware... which just happens to be exclusive to Nvidia GPUs. I feel very strongly that Iray is poorly optimized, not necessarily on purpose, but the software is still rather new. It's only been in public use for a couple of years. It should be pretty obvious to anyone that, as fairly recent software, it can be significantly improved. Nvidia has proven that when they are inspired, they can deliver. They used software, and software alone, to make their recent Pascal GPU line deliver 3 times the performance of the previous generation in VR, when it only delivers a modest gain in performance in normal games. They did that because VR is supposed to be the next big thing. VR, like Iray, is largely new in its recent form. It should be clear that Nvidia is dedicating most of their time to VR and other things.

    What big-name companies are specifically using Iray for their rendering needs? How many movies have been made with Iray? This also ties into point #1. If more companies were looking at Iray, Nvidia would be more inspired to improve it faster. It's a niche market to them. Nvidia was quick to announce that Daz was using Iray. They also happily announced that PGO was using Iray for renderings of their fancy little cars. But beyond that, there are very few announcements like this. So that raises the question: who is using it? It's certainly not big in animation. I for one do not believe the market is saturated in any one direction or the other, because so many companies use so many different software packages, and the big ones create their own.

    I didn't say that everyone chose Daz for Iray. But a lot do. Whether it be because of Iray outright, or because they see the pretty renders made by other users and look to see what program is making them (because it's free). How many other programs offer free PBR rendering? Not many. Nvidia uses Daz to promote Iray, which in turn promotes Nvidia GPUs (because you want to render faster, right?)

    Anyone who discriminates against software because they harbor outdated beliefs isn't really keeping up with the times. If you work in the business then you are very well aware that things change, and can change very rapidly. What might be laughed at one day becomes the next big thing, and they are left in the cold because they didn't jump on board fast enough. Pity.

    The thing to realize is that the speed issues lie in OpenCL, not Iray. The stability issues are there too. And Iray is in the very places that this AMD renderer wants to be: Maya, 3DS Max, the new renderer in the Allegorithmic products, etc. As long as the renderer is tied to OpenCL, and as long as that standard isn't improved, neither will the output be. But depending on the complexity of your scene, CPU rendering in Iray isn't going to be much different from 3DL (take transmapped hair: I know I can use, with Iray, some of the hairs that used to bring 3DL renders to a crawl, and Cycles has issues with transmaps as well). I'm sure performance and features will improve, especially since we are using Iray far differently from other GPU/unbiased renderers: for rendering people and skin instead of architectural buildings and cars, which is where Iray/Mental Ray is normally used. DAZ Studio is most likely leading the way for rendering people, and with that knowledge Nvidia can improve how the software works. For example, in the 4.9 update there was a change in how SSS is done. I doubt the other software has such a user base, and I doubt anyone will toss that knowledge for something they would have to work on from scratch to address this section of the industry's needs.

    It would be interesting if you said where you get your information from. Certainly it seems CUDA used to be more efficient; now it is not so clear cut. It still has some advantages, but they are fewer and depend upon usage.

    https://wiki.tiker.net/CudaVsOpenCL

    https://streamcomputing.eu/blog/2010-04-22/difference-between-cuda-and-opencl/

    http://create.pro/blog/open-cl-vs-cuda-amd-vs-nvidia-better-application-support-gpgpugpu-acceleration-real-world-face/

    https://www.researchgate.net/post/Which_one_do_you_prefer_CUDA_or_OpenCL

    https://pdfs.semanticscholar.org/d4e5/8e7c95d66f810252af630e74adbdbaf38da7.pdf

    https://arxiv.org/pdf/1005.2581

    ... It's specifically when you look at research, as opposed to opinion, that one realises the differences are more grey; and OpenCL does appear to be getting the improvement you suggest isn't happening.

  • Just a thought: what if this turns out to be a formidable rival to Iray and Smith-Micro pounces on supporting it for the next iteration of Poser? 

    Speculation aside, my iMac has a Radeon card (I don't think I can upgrade/change it) so I'll be keeping an eye on this. 

  • Male-M3dia Posts: 3,584
    edited December 2016
    nicstt said:

    ... It would be interesting if you said where you get your information from. Certainly it seems CUDA used to be more efficient; now it is not so clear cut. ... It's specifically when you look at research, as opposed to opinion, that one realises the differences are more grey; and OpenCL does appear to be getting the improvement you suggest isn't happening.

    My information comes from Google, tech guides, and Apple forum threads about OpenCL issues, and it isn't opinion-based. My main occupation revolves around hardware and computer technology, so I keep myself up to date on this stuff. I use this information for personal and business purchases. It isn't that hard. I don't need to footnote my posts when this information is readily available. CUDA vs OpenCL articles are quite abundant with the pros and cons. If you don't know this information, then it's on you to research it, not on me to tell you how to do it.

    Besides, most of the articles I found on CUDA vs OpenCL basically say to use CUDA when given a choice, not the other way around. And that advantage is why companies are adding Nvidia GPUs to their render farms, not AMD Radeons. They're playing catch-up, especially with the Pascals on the market. Companies don't buy into catch-up products; that's simply how it is. They invest.

    Also keep in mind that Octane 3.0 promised OpenCL support, but their FAQ still says it's not supported. This is the exact post:

    "No, OpenCL is currently not as mature as CUDA. As OpenCL matures, it is planned to be supported which will allow GPUs from AMD and Intel to be used with OctaneRender. Currently, OctaneRender requires a CUDA enabled NVIDIA video card to function."

  • Male-M3dia Posts: 3,584
    edited December 2016

    Just a thought: what if this turns out to be a formidable rival to Iray and Smith-Micro pounces on supporting it for the next iteration of Poser? 

    Speculation aside, my iMac has a Radeon card (I don't think I can upgrade/change it) so I'll be keeping an eye on this. 

    Poser already has SuperFly, which is based on Blender's Cycles but is not an exact port. The OpenCL portion of the rendering engine was not implemented in SuperFly because the implementation wasn't finished, was in a beta stage, or was unstable in the base Cycles software. Although both the base and Pro copies of Poser 11 support SuperFly, only the Pro version can use Nvidia GPUs. As far as what's in the next version of Poser, that's an interesting topic to look at for next year.

  • Male-M3dia said:

    ... Besides, most of the articles I found on CUDA vs OpenCL basically say to use CUDA when given a choice, not the other way around. And that advantage is why companies are adding Nvidia GPUs to their render farms, not AMD Radeons. ... Also keep in mind that Octane 3.0 promised OpenCL support, but their FAQ still says it's not supported.

    Indeed; this is why Disney/Pixar has adopted CUDA-based technology in its internally developed content-creation pipeline and is adding CUDA support to RenderMan for certain aspects of the rendering process. Even outside of the CGI industry, software developers are seeing the benefits of using CUDA, OptiX and Iray in their products.

  • wolf359 Posts: 3,929

    "Just a thought: what if this turns out to be a formidable rival to Iray and Smith-Micro pounces on supporting it for the next iteration of Poser? "
     

     

    Posers Decline is less related to its internal to render engine and more to its substandard native figures/content and and a vestigial animation system .

    Daz studio surpassed poser in these areas with Genesis and the nonlinear system of animate& puppeteer long before Iray became a part of Daz studio.

    Even Iclone 6+ Character creator app can produce better quality
    Default human figures than  the native ones that shipped with the last three versions of poser.


    SM should focus on Updating the Core of poser itself  to support modern Figure techology
    and drag the Character animation tools out of the 1990's before tacking on yet another render engine
    requiring the user base& poser content makers to adopt another material system.

    While I Dont know the particualrs of Daz's Agreement with Nvidia,
    to me it seems highly doubtful that Daz  would be officially Supporting this new option from AMD at least in the short term.

    This of course does not preclude a talented third party from producing a bridge to the engine as Palo & others have Done with LUX  or Mcasual has done with the  Blender cycles scene exporter.

  • wolf359 said:

    "Just a thought: what if this turns out to be a formidable rival to Iray and Smith-Micro pounces on supporting it for the next iteration of Poser? "
     

     

    Posers Decline is less related to its internal to render engine and more to its substandard native figures/content and and a vestigial animation system .

    Daz studio surpassed poser in these areas with Genesis and the nonlinear system of animate& puppeteer long before Iray became a part of Daz studio.

    Even Iclone 6+ Character creator app can produce better quality
    Default human figures than  the native ones that shipped with the last three versions of poser.


    SM should focus on Updating the Core of poser itself  to support modern Figure techology
    and drag the Character animation tools out of the 1990's before tacking on yet another render engine
    requiring the user base& poser content makers to adopt another material system.

    Not to mention updating that "hideous" interface.

    wolf359 said:

    While I don't know the particulars of Daz's agreement with Nvidia, it seems highly doubtful to me that Daz would officially support this new option from AMD, at least in the short term.

    This of course does not preclude a talented third party from producing a bridge to the engine, as Palo & others have done with Lux, or as Mcasual has done with the Blender Cycles scene exporter.

    Agreed.

  • This is a discussion of the new/forthcoming AMD render engine, not a platform for App-wars.

  • Havos Posts: 5,573

    Even if someone did make a DS plug-in for this new render engine, I doubt many DS users would switch over. Reality and Luxus gave DS users access to a PBR renderer years before Iray appeared on the scene, and yet throughout this time 3DL remained the renderer of choice for the majority of DS users. It is just far more convenient to use a renderer built into the app than one accessed via a plug-in, particularly given that shaders etc. would need to be amended.

  • hphoenix said:

    AMD/ATI Radeon now has ProRender...

    That name made me chortle out loud. I wonder if a lot of pointless arguing was involved.

    (long time Brycers will get it)

  • This is going to be VERY popular. It's targeting all the places that Iray isn't. It's free. Right there, I'm gonna recommend my company add it to their product. We're not in animation or rendering, but part of the app does have 3D visualization. We could not add an unbiased renderer before, since any cost would not be viable. That meant Iray and Octane were out. LuxRender hasn't fully converted to a non-GPL license yet, so that was out. ProRender looks like the perfect solution. Many other companies and products will be in the same situation. This opens up a new avenue that wasn't there before. Also, it runs on any video card. A main concern at work was that we could not add Iray because we need to support most configurations out there. Not supporting AMD is a deal breaker. Also, ProRender has Linux support. And it's in Max, Maya and Rhino. Soon to be in Blender. I hope LightWave gets it too, and I don't see why not; they already support some other renderers (Octane, I think). It also claims to do load balancing, so adding a lesser video card will only make the render take less time.

    Thanks to the OP for posting this info. I'm looking forward to the SDK release.

  • Male-M3dia Posts: 3,584

    ... That meant Iray and Octane were out. ... Also, it runs on any video card. A main concern at work was that we could not add Iray because we need to support most configurations out there. Not supporting AMD is a deal breaker. ...

    Make sure you're doing your evaluation properly. Again, the issue isn't with the video cards; it's with the OpenCL standard. You're off to a bad start if you think the card is the issue. Octane could not support OpenCL because they feel it is not ready for prime time, not because they don't like the cards. If the standard can't deliver the same set of features as CUDA with the same performance, then it is not worth the investment.

  • Competition would be a good thing, but it looks unlikely here. Nvidia has been doing better than AMD/Radeon at the high end over the last few years, but I was dismayed to buy a Pascal card without Iray drivers, so I won't be an early adopter in future. A more positive note is that AMD is supposed to bring out a Zen/Ryzen 8-core/16-thread processor in January that might compete with Intel on 3Delight renders. An Intel 8-core is around £900, so I am hoping AMD provides some competition. 3Delight provides a more artistic/less realistic light, and before V6 you had little choice.

  • zaz777 Posts: 115
    kyoto kid said:
    I also understand that if you use a secondary GPU to run the displays, then your rendering card doesn't take a VRAM hit from the OS (which in W7 is admittedly pretty minimal compared to W10). I'm not as concerned about the number of CUDA cores as I am about having enough memory on the card to hold most of my scenes, particularly since a 1080 is $200 more expensive on average (and I'm sure the 1080 Ti will probably be another $200 more than that).

    With Win7 Pro, my GTX 980 is using 107 MB of video memory supporting two monitors, one 1920x1200 and the other 1680x1050, when I'm not running an application that specifically uses video memory, like DS, Blender, games, etc. That's about 1.5 uncompressed 4096x4096 texture maps (64 MB each).

    Desktop use is pretty light on memory and shouldn't be a major concern in most situations. Compute performance is supposedly impacted when rendering on video cards with monitors attached, but my rendering performance appears on par with others mentioned in the benchmark thread, some of which are GTX 980s without monitors attached. I think the performance impact is small.

    As far as big scenes and memory go, the texture compression used by Iray can be very useful there, at least for non-normal textures (normal maps, like bump maps, aren't compressed). Automated mip-mapping of some sort, as 3DL uses, would be better, but Iray's texture compression can be very useful. Some examples of the results of Iray's texture compression can be found in their dev blog.

    An even better way of managing VRAM in Iray is to just apply some common sense and a few tips/tricks. Having 16 (I exaggerate a little) 4096x4096 texture maps per skin material group is a bit of overkill. At minimum, some of those maps can be resized to something smaller, say 1024x1024, as they don't really have detail, or the detail they have has very little impact on the final results.

    Other tips, mentioned in the skin thread and implemented in at least one of the recently released skin material settings, are to use tiled normal and/or bump maps for some settings. A well-made, tiled normal (or other) map of 128x128 (64 KB) to 512x512 (1 MB) can go a long way toward replacing several 4096x4096 (64 MB) normal maps.

    Also, one needs to be careful with some of the included material options provided by some PAs. In many cases, changing the eye color results in multiple 2048x2048 to 4096x4096 maps being used by the eye surfaces: the original eye color on the sclera and a new map on the iris surface.

    Makeup options might cause you to have one set of maps for the face, another for the lips, and yet another for the nostrils or lacrimals. Tattoos can also do this. Many times it isn't necessary for it to be this way, and you can put the texture(s) into the appropriate channel(s) on multiple surfaces.

    Toss in things like LIEs being separately built for multiple surfaces in the groups, i.e. separate LIEs for shoulders, arms, hands, etc. that are the same, but are separate LIEs, and one can waste hundreds of megabytes of video memory unnecessarily.

    Efficient rendering in Iray requires one to manage memory a lot more than in some other renderers. It isn't hard to do, but it does require some consideration in your workflow.
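
    To put rough numbers on the texture-memory arithmetic above, here is a minimal, illustrative Python sketch. It is not tied to DAZ Studio or Iray in any way; it simply assumes uncompressed 8-bit RGBA textures (4 bytes per pixel), which is where the "64 MB per 4096x4096 map" figure comes from.

```python
# Rough VRAM estimate for uncompressed 8-bit RGBA textures (4 bytes per pixel).
# Illustrative only: a real renderer adds geometry, mip levels, compression, etc.

BYTES_PER_PIXEL = 4  # assumed RGBA8

def texture_mb(width, height, bytes_per_pixel=BYTES_PER_PIXEL):
    """Uncompressed size of one texture map in megabytes."""
    return width * height * bytes_per_pixel / (1024 * 1024)

def total_mb(maps):
    """maps is a list of (width, height, count) tuples."""
    return sum(texture_mb(w, h) * n for w, h, n in maps)

print(f"One 4096x4096 map: {texture_mb(4096, 4096):.0f} MB")   # ~64 MB
print(f"One 1024x1024 map: {texture_mb(1024, 1024):.0f} MB")   # ~4 MB

# 16 x 4K maps on a single character vs. the same set with half of them
# downsized to 1024x1024, as suggested above.
print(f"16 x 4K maps:      {total_mb([(4096, 4096, 16)]):.0f} MB")
print(f"8 x 4K + 8 x 1K:   {total_mb([(4096, 4096, 8), (1024, 1024, 8)]):.0f} MB")
```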

  • zaz777 Posts: 115
    hphoenix said:

    AMD/ATI Radeon now has ProRender...

    That name made me chortle out loud. I wonder if a lot of pointless arguing was involved.

    (long time Brycers will get it)

    I'm wondering if ProRender comes with the anatomical elements.

  • Chohole Posts: 33,604
    edited December 2016
    hphoenix said:

    AMD/ATI Radeon now has ProRender...

    That name made me chortle out loud. I wonder if a lot of pointless arguing was involved.

    (long time Brycers will get it)

    That was the first thing I thought of when I saw the name.

  • j cade Posts: 2,310
    zaz777 said:

    ... Having 16 (I exaggerate a little) 4096x4096 texture maps per skin material group is a bit of overkill. At minimum, some of those maps can be resized to something smaller, say 1024x1024. ... Efficient rendering in Iray requires one to manage memory a lot more than in some other renderers.

    So much this. My biggest pet peeve is makeup presets that leave the ears with the default textures. ARGH! WHY?

    And 16 4K textures isn't exaggeration at all; you may even be underselling it. With Gen 3 you have head, torso, arm, and leg textures, each of which might have diffuse, normal, translucency, bump, and specular maps, so that can easily be 20 4K maps right there. Now imagine that, on top of that, you haven't bothered consolidating everything: your ears use the default face textures, the face uses makeup option 1, and the lips use makeup option 3. That can add another 2-6+ maps (some makeup presets include different diffuse, translucency and specular maps).

     

    Easiest optimization though is definitely, "is my character wearing pants or such? No? Goodbye leg textures"

  • ... Also, ProRender has Linux support. And it's in Max, Maya and Rhino. Soon to be in Blender. ...

    Iray, 3Delight and Renderman all have Linux versions too... And those already have plugins for many 3D applications. Also, ProRender needs to prove that it can perform equally well with ATI, nVidia and Intel video subsystems, and not favor one over another.

  • zaz777 Posts: 115
    j cade said:
    And 16 4K textures isn't exaggeration at all; you may even be underselling it ...

    You are correct. What I meant by "exaggerating" was 16 4K textures per surface zone/group, i.e. the face or torso set of surfaces. Normally you'll only see 3 to 6, maybe a couple more in extreme cases, per surface set.

    16 total 4K maps per character would be pretty light for most characters as delivered from the store. That's 1 GB (16 x 64 MB) of video card memory used on a character's textures if Iray's compression isn't used.

    j cade said:

    Easiest optimization though is definitely, "is my character wearing pants or such? No? Goodbye leg textures"

    Absolutely. Heck, turn off display on the geometry if you can't see it, and save a bit more memory on geometry as well.

    A form of mip mapping would be even better. Iray only lets you control texture compression based on texture size, not on how visible the textures are.

    If you're rendering a full-length image of a model in a standing pose at 1920x1080, very few of the textures on the character would need to be more than 1Kx1K (roughly 10x the resolution that many body parts occupy in the final image) to get good results. That's a huge saving in video card memory, but it's a bit of work to resize all the images manually.
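
    Since the manual resizing is the tedious part, here is a small, hypothetical Python helper for it. It assumes the Pillow imaging library is installed, and the folder names are placeholders; it writes downscaled copies to a separate folder so the original textures are never touched.

```python
# Batch-downscale texture maps whose longest edge exceeds MAX_SIZE.
# Hypothetical sketch: assumes the Pillow imaging library (pip install Pillow).
# Originals are left untouched; resized copies go to a separate folder.
import shutil
from pathlib import Path

from PIL import Image

SRC = Path("textures_original")  # placeholder input folder
DST = Path("textures_1k")        # placeholder output folder
MAX_SIZE = 1024                  # longest edge after downscaling
EXTENSIONS = {".jpg", ".jpeg", ".png", ".tif", ".tiff"}

def downscale_all(src: Path, dst: Path, max_size: int = MAX_SIZE) -> None:
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(src.iterdir()):
        if path.suffix.lower() not in EXTENSIONS:
            continue
        with Image.open(path) as img:
            if max(img.size) <= max_size:
                shutil.copy2(path, dst / path.name)  # already small enough, copy as-is
                continue
            img.thumbnail((max_size, max_size), Image.LANCZOS)  # downscale, keep aspect ratio
            img.save(dst / path.name)

if __name__ == "__main__":
    downscale_all(SRC, DST)
```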

  • prixat Posts: 1,616

    We don't have much to judge performance on except benchmarks.

    This was interesting within the Compubench results. In almost every test the Titan X and the 1080 were faster in OpenCL than they were in CUDA!

    https://compubench.com/result.jsp

  • nonesuch00 Posts: 18,714
    prixat said:

    We don't have much to judge performance on except benchmarks.

    This was interesting within the Compubench results. In almost every test the Titan X and the 1080 were faster in OpenCL than they were in CUDA!

    https://compubench.com/result.jsp

    Well, that is to be expected, and the margin will get bigger, not smaller, as future SW & HW development cycles come and go.

  • prixat Posts: 1,616
    nonesuch00 said:

    Well, that is to be expected, and the margin will get bigger, not smaller, as future SW & HW development cycles come and go.

    Except that nVidia is still using OpenCL 1.2! I would have expected OpenCL to suffer translation losses and CUDA to always be faster.

  • prixat said:

    ... Except that nVidia is still using OpenCL 1.2! I would have expected OpenCL to suffer translation losses and CUDA to always be faster.

    By the same token, wouldn't the performance difference depend on which version of CUDA the tests used? I would expect tests using OpenCL 1.2 and CUDA 7.5 and earlier to be different than similar tests with OpenCL 1.2 and CUDA 8.0.4.

  • nonesuch00 Posts: 18,714
    edited December 2016
    prixat said:

    ... Except that nVidia is still using OpenCL 1.2! I would have expected OpenCL to suffer translation losses and CUDA to always be faster.

    Well, they don't give the Windows version, but I think the results point to them using Windows 10, with the improved Windows 10 & DirectX 12 parallelization algorithms. OpenCL is not bad; it's really very good, but it was hampered in the past by parallelizing dispatch processing, and that is much improved now.

  • SixDs Posts: 2,384

    All knowledge is about the past; all development is about the future. It is fruitless to speculate about where OpenCL and CUDA are heading, or which was/is/will be better or faster. As has been said previously, hardware and software development cycles can progress at a furious pace. Insofar as the state of OpenCL today is concerned, I believe it is important to recognize that OpenCL is not, as the name might suggest to some, a product of the traditional open-source community, produced by a group of independent programmers in their spare time. OpenCL is developed under the auspices of the Khronos Group, an industry consortium whose membership reads like a who's who of the hardware and software industry, and that includes Nvidia. With such a broad and diverse group, each with their own interests, the development may be subject to some constraints, but I believe it is fair to say it won't be limited by a lack of intellectual horsepower. CUDA technology, on the other hand, is owned and controlled by one company and designed to work on their specific hardware, so the company is pretty much free to do as they see fit without needing to seek agreement from anyone else.

    My prediction, for what it is worth, is that OpenCL will continue to be developed and improved, but the pace of that is impossible to predict. We will simply have to wait and see. I can understand the anxiety on the part of those who may have married themselves to a particular technology, but from a broad consumer perspective, standards developed, agreed to and adopted by the industry as a whole are a good thing.

    For anyone interested in seeing who the members of the Khronos Group are, have a look here:

    https://en.wikipedia.org/wiki/Khronos_Group

  • hphoenix Posts: 1,335

    Everyone also needs to remember that OpenCL is an open standard. Open standards, by their very nature, evolve more slowly than closed-ecosystem software.

    CUDA is developed by nVidia: they control it, they do not share its source, and they provide the API.

    OpenCL is developed by a consortium that includes people from multiple big players, including AMD, Intel, Apple, and nVidia too.

    This means that changes and evolution of the OpenCL standard occur much more slowly, as there will potentially be input (and disagreement) from multiple sides, and compromises have to be worked out without losing sight of the original goals and requirements.

    Even with that, OpenCL has moved rapidly, with a great deal of support from both corporate major players and educational institutions, on both the implementation and the tools side. nVidia's OpenCL implementation basically just converts OpenCL calls to equivalent CUDA calls (where there is a mismatch, it may produce a small set of CUDA calls/code). This is true of a lot of such implementations, going back to old MPI-type code... it's all about implementing an API for parallel processing.

     

  • wiz Posts: 1,100

    I'm surprised that this is even an issue. In numerical computing, OpenCL is an also-ran. If you're heroic with it, you can get it to perform at about 50-60% of the level of CUDA. This is true whether you're pitting CUDA on Nvidia against OpenCL on equivalent Radeons, or running both OpenCL and CUDA on the same Nvidia card.

  • kyoto kid Posts: 41,838
    edited December 2016
    nonesuch00 said:

    Well, they don't give the Windows version, but I think the results point to them using Windows 10, with the improved Windows 10 & DirectX 12 parallelization algorithms. OpenCL is not bad; it's really very good, but it was hampered in the past by parallelizing dispatch processing, and that is much improved now.

    ...provided you can keep W10 from eating a big chunk of your VRAM. For that you need two cards, one for rendering and one to run the displays. The non-render card should still have enough memory to handle the Daz OpenGL viewport with a scene loaded without incredible lag.

  • nonesuch00 Posts: 18,714
    kyoto kid said:

    ...provided you can keep W10 from eating a big chunk of your VRAM. For that you need two cards, one for rendering and one to run the displays. ...

    Totally irrelevant.

  • kyoto kid Posts: 41,838
    nonesuch00 said:

    Totally irrelevant.

    ...and why? As I understand it, W10 reserves a significant amount of VRAM compared to older versions of the OS. This has been addressed not only here but on several tech sites I frequent as well. The solution usually given is to use one GPU just to run the displays while reserving the other for rendering.
