Comments
It would be interesting if you said where you get your information from. Certainly CUDA used to be more efficient; now it is not so clear cut. It still has some advantages, but they are fewer and depend upon usage.
https://wiki.tiker.net/CudaVsOpenCL
https://streamcomputing.eu/blog/2010-04-22/difference-between-cuda-and-opencl/
http://create.pro/blog/open-cl-vs-cuda-amd-vs-nvidia-better-application-support-gpgpugpu-acceleration-real-world-face/
https://www.researchgate.net/post/Which_one_do_you_prefer_CUDA_or_OpenCL
https://pdfs.semanticscholar.org/d4e5/8e7c95d66f810252af630e74adbdbaf38da7.pdf
https://arxiv.org/pdf/1005.2581
... It's specifically when you look at research, as opposed to opinion, that one realises the differences are more grey; and OpenCL does appear to be getting the improvement you suggest isn't happening.
Just a thought: what if this turns out to be a formidable rival to Iray and Smith-Micro pounces on supporting it for the next iteration of Poser?
Speculation aside, my iMac has a Radeon card (I don't think I can upgrade/change it) so I'll be keeping an eye on this.
My information is taken from Google, tech guides, and Apple forum issues with OpenCL, and it isn't opinion based. My main occupation revolves around hardware and computer technology, so I keep myself up to date on this stuff. I use this information for personal and business purchases. It isn't that hard. I don't need to footnote my posts when this information is readily available; CUDA vs OpenCL articles are quite abundant with the pros and cons. If you don't know this information, then it's on you to research it, not on me to tell you how to do it.
Besides, most of the articles I found on CUDA vs OpenCL basically say to use CUDA when given a choice, not the other way around. And that advantage is why companies are adding Nvidia GPUs to their render farms, not AMD Radeons. AMD is playing catch-up, especially with the Pascals on the market. Companies don't buy into catch-up products; that's simply how it is. They invest.
Also keep in mind that Octane 3.0 was promised OpenCL support, but their FAQ still says it's not supported. This is the exact post:
"No, OpenCL is currently not as mature as CUDA. As OpenCL matures, it is planned to be supported which will allow GPUs from AMD and Intel to be used with OctaneRender. Currently, OctaneRender requires a CUDA enabled NVIDIA video card to function."
Poser already has Superfly, which is based on Blender's Cycles but is not an exact port. The OpenCL portion of the rendering engine was not implemented in Superfly because the implementation wasn't finished, was in beta, or was unstable in the base Cycles software. Although both the base and Pro copies of Poser 11 support Superfly, only the Pro version can use Nvidia GPUs. As for what's in the next version of Poser, that's an interesting topic to look at for next year.
Indeed; this is why Disney/Pixar has adopted CUDA-based technology in its internally developed content creation pipeline and is adding CUDA support to Renderman for certain aspects of the rendering process. Even outside of the CGI industry, software developers are seeing the benefits of using CUDA, OptiX and Iray in their products.
Poser's decline is less related to its internal render engine and more to its substandard native figures/content and a vestigial animation system.
Daz Studio surpassed Poser in these areas with Genesis and the nonlinear system of aniMate & Puppeteer long before Iray became a part of Daz Studio.
Even the iClone 6+ Character Creator app can produce better-quality default human figures than the native ones that shipped with the last three versions of Poser.
SM should focus on updating the core of Poser itself to support modern figure technology, and on dragging the character animation tools out of the 1990s, before tacking on yet another render engine that requires the user base and Poser content makers to adopt another material system.
While I don't know the particulars of Daz's agreement with Nvidia, it seems highly doubtful to me that Daz would officially support this new option from AMD, at least in the short term.
This of course does not preclude a talented third party from producing a bridge to the engine, as Palo and others have done with Lux, or as Mcasual has done with the Blender Cycles scene exporter.
Not to mention updating that "hideous" interface.
Agreed.
This is a discussion of the new/forthcoming AMD render engine, not a platform for App-wars.
Even if someone did make a plug-in for DS to this new render engine, I doubt many DS users would switch over. Reality and Luxus gave DS users access to a PBR renderer years before Iray appeared on the scene, and yet throughout this time 3DL remained the renderer of choice for the majority of DS users. It is just far more convenient to use a renderer built into the app than one via a plug-in, particularly given that shaders etc. would need to be amended.
That name made me chortle out loud. I wonder if a lot of pointless arguing was involved.
(long time Brycers will get it)
This is going to be VERY popular. It's targeting all the places that iRay isn't. It's free. Right there, I'm gonna recommend my company add it to their product. We're not in animation or rendering, but part of the app does have 3D visualization. We could not add an unbiased renderer before, since any cost would not be viable. That meant iRay and Octane were out. Luxrender hasn't fully converted to a non-GPL license yet, so that was out. ProRender looks like the perfect solution. Many other companies and products will be in the same situation. This opens up a new avenue that wasn't there before.
Also, it runs on any video card. A main concern at work was that we could not add iRay because we need to support most configurations out there. Not supporting AMD is a deal breaker. Also, ProRender has Linux support. And it's in Max, Maya and Rhino, soon to be in Blender. I hope Lightwave gets it too, and I don't see why not; they have support for some other renderers already (Octane, I think). It also claims to do load balancing, so a lesser video card will only make the render take less time.
Thanks to the OP for posting this info. I'm looking forward to the SDK release.
Make sure you're doing your evaluation properly. Again, the issue isn't with the video cards; it's with the OpenCL standard. You're off to a bad start if you think the card is the issue. Octane could not support OpenCL because they feel it is not ready for prime time, not because they don't like the cards. If the standard can't do the same set of features as CUDA with the same performance, then it is not worth the investment.
Competition would be a good thing, but it looks unlikely here. Nvidia has been doing better than AMD/Radeon at the high end over the last few years, but I was dismayed to buy a Pascal card without Iray drivers, so I won't be an early adopter in future. On a more positive note, AMD is supposed to bring out a Zen/Ryzen 8-core/16-thread processor in January that might compete with Intel on 3Delight renders. An Intel 8-core is around £900, so I'm hoping AMD provides some competition. 3Delight provides a more artistic/less realistic light, and before V6 you had little choice.
With Win7 Pro, my GTX 980 is using 107M of video memory supporting two monitors, one 1920x1200 and the other 1680x1050, when I'm not running an application that specifically uses video memory, like DS, Blender, games, etc. That's about 1.5 uncompressed 4096x4096 texture maps (64M each).
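For anyone who wants to sanity-check that 64M figure, here's a quick back-of-the-envelope sketch in Python; it assumes maps sit in VRAM uncompressed as 8-bit RGBA (4 bytes per pixel), which is where 64M for a 4096x4096 map comes from.

```python
# Rough uncompressed texture footprint, assuming 8-bit RGBA (4 bytes per pixel).
def texture_bytes(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel

for size in (512, 1024, 2048, 4096):
    mib = texture_bytes(size, size) / (1024 * 1024)
    print(f"{size}x{size}: {mib:.0f} MB")  # 4096x4096 -> 64 MB
```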
Desktop use is pretty light on memory and shouldn't be a major concern in most situations. Compute performance is supposedly impacted when rendering on video cards with monitors attached, but my rendering performance appears on par with others mentioned in the benchmark thread, some of which are GTX 980s without monitors attached. I think the performance impact is small.
As far as big scenes and memory go, the texture compression used by iray can be very useful there, at least for non-normal (as in, similar to bump) textures, as normal maps aren't compressed. Automated mip-mapping of some sort, as 3DL uses, would be better, but iray's texture compression can still be very useful. Some examples of the results of iray's texture compression can be found in their dev blog.
An even better way of managing VRAM in iray is to just apply some common sense and a few tips/tricks. Having 16 (I exaggerate a little) 4096x4096 texture maps per skin material group is a bit overkill. At minimum, some of those maps can be resized to something smaller, say 1024x1024, as they don't really have detail, or the detail they have has very little impact on the final results.
Other tips, mentioned in the skin thread and implemented in at least one of the recently released skin material settings, are to use tiled normal and/or bump maps for some settings. A well-made, tiled normal (or other) map of 128x128 (64K) to 512x512 (1M) can go a long way toward replacing several 4096x4096 (64M) normal maps.
Also, one needs to be careful with some included material options provided by some PAs. In many cases changing the eye color results in multiple 2048x2048 to 4096x4096 maps being used by the eye surfaces, the original eye color on the sclera and a new map used on the iris surface.
Makeup options might cause you to have one set of maps for the face, another for the lips, and yet another for the nostrils or lacrimals. Tattoos can also do this. Many times it isn't necessary for it to be this way, and you can put the texture(s) into the appropriate channel(s) on multiple surfaces.
Toss in things like LIEs being built separately for multiple surfaces in a group (separate LIEs for shoulders, arms, hands, etc. that are identical, but are still separate LIEs), and one can waste hundreds of megabytes of video memory unnecessarily.
Efficient rendering in iray requires one to manage the memory a lot more than in some other renderers. It isn't hard to do, but it does require some consideration in your work flow.
I'm wondering if ProRender comes with the anatomical elements.
So much this. My biggest pet peeve is makeup presets that leave the ears with the default textures. ARGH! WHY?
And 16 4K textures isn't exaggeration at all; you may even be underselling it. With Gen3 you have head, torso, arm, and leg textures, each of which might have diffuse, normal, translucency, bump, and specular maps, so that can easily be 20 4K maps right there. Now imagine that, on top of that, you haven't bothered consolidating everything: your ears use the default face textures, the face uses makeup option 1, and the lips makeup option 3. That can add another 2-6+ maps (some makeup presets can include different diffuse, translucency and specular maps).
Easiest optimization though is definitely, "is my character wearing pants or such? No? Goodbye leg textures"
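If you're curious how quickly those maps add up for a particular character, a little script along these lines will total them. It's just a sketch: the folder name is made up, and it assumes every map sits in VRAM as uncompressed 8-bit RGBA.

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

# Hypothetical folder; point it at a character's texture directory.
texture_dir = Path("textures/MyCharacter")

total = 0
for path in sorted(texture_dir.glob("*.jpg")) + sorted(texture_dir.glob("*.png")):
    with Image.open(path) as img:
        w, h = img.size
    size = w * h * 4  # assume uncompressed 8-bit RGBA in VRAM
    total += size
    print(f"{path.name}: {w}x{h} -> {size / 2**20:.0f} MB")

print(f"Total, uncompressed: {total / 2**20:.0f} MB")
```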
Iray, 3Delight and Renderman all have Linux versions too... And those already have plugins for many 3D applications. Also, ProRender needs to prove that it can perform equally well with ATI, nVidia and Intel video subsystems, and not favor one over another.
You are correct. What I meant by "exaggerating" was 16 4K textures per surface zone/group, i.e. the face or torso set of surfaces. Normally you'll only see 3 to 6, maybe a couple more in extreme cases per surface set.
16 total 4K maps per character would be pretty light for most characters as delivered from the store. That's 1G (16 x 64M) of video card memory used on a character's textures if iray's compression isn't used.
Absolutely. Heck, turn off display on the geometry if you can't see it and save the bit more memory for geometry as well.
A form of mip mapping would be even better. Iray only lets you control the texture compression based on texture size, not how visible the textures are.
If you're rendering a full-length image of a model in a standing pose at 1920x1080, very few of the textures on the character would need to be more than 1Kx1K, which is about 10x the resolution at which many body parts appear in the final image, to get good results. That's a huge savings in video card memory, but it's a bit of work to resize all the images manually.
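The resizing itself can be batched; here's a rough sketch with Pillow (the folder names are hypothetical, and it writes resized copies rather than touching the originals), assuming a 1024x1024 cap:

```python
from pathlib import Path
from PIL import Image  # pip install Pillow

src_dir = Path("textures/MyCharacter")     # hypothetical source folder
dst_dir = Path("textures/MyCharacter_1k")  # resized copies go here
dst_dir.mkdir(parents=True, exist_ok=True)

TARGET = 1024
for path in list(src_dir.glob("*.jpg")) + list(src_dir.glob("*.png")):
    with Image.open(path) as img:
        # thumbnail() keeps the aspect ratio and never upscales smaller maps
        img.thumbnail((TARGET, TARGET), Image.LANCZOS)
        img.save(dst_dir / path.name)
```

You'd then point the surfaces at the resized copies instead of the originals, so the full-resolution maps are still there if you need a close-up later.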
We don't have much to judge performance on except benchmarks.
This was interesting within the Compubench results. In almost every test the Titan X and the 1080 were faster in OpenCL than they were in CUDA!
https://compubench.com/result.jsp
Well, that is to be expected, and the margin will get bigger, not smaller, as future SW & HW development cycles come and go.
Except that nVidia is still using OpenCL 1.2! I would have expected OpenCL to suffer translation losses and CUDA to always be faster.
By the same token, wouldn't the performance difference depend on which version of CUDA the tests used? I would expect tests using OpenCL 1.2 and CUDA 7.5 and earlier to be different than similar tests with OpenCL 1.2 and CUDA 8.0.4.
Well, they don't give the Windows version, but I think the results point to them using Windows 10, with its improved Windows 10 & DirectX 12 parallelization algorithms. OpenCL is not bad; it's really very good, but in the past it was hampered in parallelizing dispatch processing, and that is much improved now.
All knowledge is about the past; all development is about the future. It is fruitless to speculate about where OpenCL and CUDA are heading, or which was/is/will be better or faster. As has been said previously, hardware and software development cycles can progress at a furious pace. Insofar as the state of OpenCL today is concerned, I believe it is important to recognize that OpenCL is not, as the name might suggest to some, a product of the traditional open-source community produced by a group of independent programmers in their spare time. OpenCL is developed under the auspices of the Khronos Group, an industry consortium whose membership reads like a who's who of the hardware and software industry, and that includes Nvidia. With such a broad and diverse group, each with their own interests, the development may be subject to some constraints, but I believe it is fair to say it won't be limited by a lack of intellectual horsepower. CUDA technology, on the other hand, is owned and controlled by one company and designed to work on their specific hardware, so the company is pretty much free to do as they see fit without needing to seek agreement from anyone else.
My prediction, for what it is worth, is that OpenCL will continue to be developed and improved, but the pace of that is impossible to predict. We will simply have to wait and see. I can understand the anxiety on the part of those who may have married themselves to a particular technology, but from a broad consumer perspective, standards developed, agreed to and adopted by the industry as a whole are a good thing.
For anyone interested in seeing who the members of the Khronos Group are, have a look here:
https://en.wikipedia.org/wiki/Khronos_Group
Everyone also needs to remember that OpenCL is an open standard. Open standards, by their very nature, evolve more slowly than closed-ecosystem software.
CUDA is developed by nVidia: they control it, they do not share its source, and they provide the API.
OpenCL is developed by a consortium, including people from multiple big players: AMD, Intel, Apple, and nVidia too.
This means that changes and evolution of the OpenCL standards occur much more slowly, as there will be input (and disagreement) potentially from multiple sides. And compromises have to be worked out without losing sight of the original goals and requirements.
Even with that, OpenCL has moved rapidly, with a great deal of support from both corporate major players and educational institutions, on both the implementation and tools sides. nVidia's OpenCL implementation basically just converts OpenCL calls to equivalent CUDA calls (where there is a disconnect, it may produce a small set of CUDA calls/code). This is true of a lot of such implementations, as when we go back to old MPI-type code... it's all about implementing an API for parallel processing.
I'm surprised that this is even an issue. In numerical computing, OpenCL is an also-ran. If you're heroic with it, you can get it to perform at about 50-60% of the level of CUDA. This is true whether you're pitting CUDA on Nvidia against OpenCL on equivalent Radeons, or using both OpenCL and CUDA on the same Nvidia.
...provided you can keep W10 from eating a big chunk of your VRAM. For that you need two cards: one for rendering and one to run the displays. The non-render card still needs enough memory to handle the Daz OpenGL viewport with a scene loaded, without incredible lag.
Totally irrelevant.
...and why? As I understand it, W10 reserves a significant amount of VRAM compared to older versions of the OS. This has been addressed not only here but on several tech sites I frequent as well. The solution usually given is to use one GPU just to run the display while reserving the other for rendering.