Comments
OpenGL is still the fastest, and it could be developed to use newer features; just shadows and transmapping need improvement. Not all art needs to be photoreal, especially if postwork is intended.
The very fact that it does run faster on a GPU than a CPU is evidence that it's optimized for GPU over CPU.
I mean, dude....
I can see the why of encrypting scripts, re: not wanting someone to steal it - but when the author of a script passes on, or just bails, and that script needs to be updated to work in the latest version, and no one steps up to the plate, then it's a problem for end-users. At one time, Poser had an effective metaball script plugin, but the author bailed and it only works with version 5. We need some sort of contingency plan, like a source-code release if the author bails or passes away. Then again, we also need some sort of policy regarding abandonware becoming public domain, so all these ancient and neglected pieces can get new life if the author doesn't see enough $ in doing it themselves. If they don't want to lose it, they have to keep it updated. If you want to run a business, run it like a business.
DrNewcenstein: It's evidence that GPUs process rendering calculations a lot faster than CPUs do.
Still incorrect. Iray runs the same per processor core on a GPU as on a CPU; the difference is that a CPU has 4-12 cores while a GPU has hundreds to thousands, which is why it runs faster. So no, it's not optimized for GPU, it just runs faster on a GPU.
When an author passes, it doesn't mean he loses ownership of his products. If someone can buy the code from him, then the new owner can update it; if not, then the code dies with him. This isn't new business-wise. But death doesn't mean the code gets to be passed around. That's simply not how business or the law works.
I mean it's always going to be faster rendering with the GPU, because "GPU" is usually actually GPU+CPU. If that were somehow slower...
If I were to render truly GPU-only, it might actually be slower than my CPU only. (I actually tested this once upon a time, and my GPU+CPU was 1.5x faster than CPU only.)
Really? I render GPU only all the time. It's usually 10x faster than CPU only.
TBF, my computer has a much better CPU than GPU. I just tested a very simple scene of a sphere and a point light in a cube; to get to 50 samples took the GPU 46 seconds, the CPU 63 seconds, and GPU+CPU 36 seconds. But, of course, for complex scenes that take longer, my GPU will start throttling about 5 minutes in... so it will slow down. (Laptops aren't the best at dissipating heat.)
I've been a programmer for longer than I care to admit. I do 3D and GPGPU programming, and what you say is factually incorrect. Not every algorithm can be easily parallelised. In fact, multi-core programming is usually avoided if there is no pressing need for it, as it is very easy to make a mistake and introduce bugs.

On video cards, GPGPU processing is even more difficult, because often the time it takes to upload all the resources can outweigh the time it would take to just run the algorithm on the CPU. Once you have the data uploaded, you then have the challenge of splitting the tasks into kernels that will run on the GPU. Each compute unit (OpenCL) or streaming multiprocessor (CUDA) will execute the exact same code and have the same instruction pointer for all the cores in that compute unit or streaming multiprocessor. So the code must be tailored just for the GPU. Then you have the various types of memory access that are unique to GPUs, which I don't have time to go into here.

While you can run OpenCL code on a CPU, the performance will be severely degraded. CUDA (which Iray uses) cannot be run on a CPU. So entirely different code runs when you use Iray on the CPU.
In short, to say that iRay is not optimized for GPU is just silly.
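To make that concrete, here's a minimal sketch contrasting the same element-wise operation written as an ordinary CPU loop and as a CUDA kernel. The names and sizes are purely illustrative, nothing from the Iray code base:

#include <cstdio>
#include <cuda_runtime.h>

// CPU version: one core walks the whole array in sequence.
void scale_cpu(float* data, int n, float k) {
    for (int i = 0; i < n; ++i) data[i] *= k;
}

// GPU version: every thread in a streaming multiprocessor executes this
// exact same code, each on a different element (the SIMT model above).
__global__ void scale_gpu(float* data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main() {
    const int n = 1 << 20;
    float* h = new float[n];
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    // The upload cost mentioned above: for a small n, this copy alone can
    // outweigh simply running scale_cpu on the host.
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    scale_gpu<<<(n + 255) / 256, 256>>>(d, n, 2.0f);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);

    std::printf("h[0] = %f\n", h[0]); // 2.0
    delete[] h;
}

For a small array, the two cudaMemcpy transfers alone can take longer than just running scale_cpu on the host - that's the upload overhead I mentioned.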
Please don't pull development rank on me. I've been in IT for over 3 decades including a stint at Intel.
What I said is correct. The architecture works the same per processor; it is faster because GPUs have more processors (i.e. CUDA cores) than a PC has CPU cores. It is also the reason why GPUs are being snatched up for cryptocurrency mining.
Now THAT is wonderful news! A speed increase with newer versions!
Promise?
By the way, thank you for all of the information that you are providing here.
And thanks to everyone else here, I was kind of in a foul mood when I started this thread, but I'm a lot more optimistic now.
..running Iray to get the optimal quality takes far too long on the CPU unless you have a dual-Xeon system (which most of us don't, and it still takes longer than on the GPU). I have been rendering complex scenes in 3DL using IBL Master that take less than 15 min.
...test scenes of characters with simple backdrops take upwards of 45 min to an hour for my system to render in Iray. In 3DL I can get a high-quality full scene with multiple characters to render in under 15 min.
...
..I have proved that wrong, unless you want really low-quality results. In .jpg or .png the image is flattened by default.
I don't have that chip, yet that scene ran in about an hour. My computer is the exact same machine I've been using for 3DL since before Iray was released. Again, if you're running things in 15 minutes, they absolutely don't have the things I mentioned (and certainly not the lighting quality) that would grind 3DL to a halt in those scenes, so it's not a valid comparison. Before Iray, using any SAV hairs was basically a no-no in 3DL... but Iray just breezes through those hairs.
In addition to what @AlienRenders mentioned about parallel processing, GPUs have completely different instruction sets than CPUs. They don't execute the same code.
I wonder what NVIDIA would say about whether or not Iray was optimized for GPU . . . (I'm just kidding - I really don't wonder).
- Greg
I suppose the argument is that the parallelism of GPUs comes into play with rendering, and with other tasks that benefit from that parallelism.
... But there is the counter-argument that an equal number of CPU cores would outperform those CUDA cores; the cost, though (shudder).
I have a 16-core CPU and a 980ti; taken purely on CPU core count vs CUDA core count, the CPU should either be much slower, or the CUDA cores much faster... but they are not.
What they are is different beasts that share some characteristics but are designed to be better at certain primary tasks.
I rendered the same scene, with the same settings other than switching Renderer from GPU to CPU. It should be noted that the 980ti is a render only card, whereas the CPU had other tasks to perform, although I left the machine idle during render.
The scene had SSS on Skin and Hair, with fibre mesh hair also included.
GPU 980ti - Cores: 2816
2018-03-18 10:53:32.257 Finished Rendering
2018-03-18 10:53:32.386 Total Rendering Time: 10 minutes 54.62 seconds
CPU 1950X - Cores: 16 (32 with AMD's version of Hyperthreading)
2018-03-18 11:24:06.658 Finished Rendering
2018-03-18 11:24:06.732 Total Rendering Time: 28 minutes 11.64 seconds
Not all cores are created equal, which is true of both CPUs and GPUs. 10-series cores perform the same task faster (core for core), whereas pre-Ryzen CPUs will perform less well - as a couple of examples (again, core for core).
Is the software optimized for GPU usage? I would presume so; it would make sense, as that is what Nvidia primarily expects it to be run on. Of course, this presumes that optimising is even possible: as in, it would yield better results on one (at the expense of the other) if certain actions were performed; obviously I have no idea whether that is the case.
EDIT
GPU finished on:
2018-03-18 10:53:24.718 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Received update to 01164 iterations after 641.672s.
CPU finished on:
2018-03-18 11:23:48.688 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Received update to 01176 iterations after 1670.180s.
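As a rough back-of-the-envelope from those two log lines: the GPU managed 1164 iterations in 641.7 s (about 1.81 iterations/s) and the CPU 1176 iterations in 1670.2 s (about 0.70 iterations/s), a ~2.6x gap overall. Per core, that works out to roughly 1.81/2816 ≈ 0.0006 iterations/s per CUDA core versus 0.70/32 ≈ 0.022 per CPU thread, i.e. each CPU thread is doing on the order of 30x the work of a single CUDA core here - which is exactly the "not all cores are created equal" point.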
I would have expected them to reach the same number of iterations, although such a small variance on a single comparison can be taken as statistically invalid given the sample size.
I think the decision to stop the render is taken by DS, and that Iray reports progress to DS at time intervals rather than sample intervals. If that is correct, then the number of samples used in the end would depend on speed (and interruptions) affecting just how many samples were carried out after the target convergence was hit and before DS was informed.
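If that is right, the effect is easy to simulate. This is not the actual DS/Iray interface, just a toy sketch of the shape of the argument, with made-up rates and intervals:

#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    std::atomic<int> iterations{0};
    std::atomic<bool> stop{false};

    // "Renderer": keeps adding iterations until told to stop
    // (the rate here is invented purely for the demonstration).
    std::thread renderer([&] {
        while (!stop) {
            ++iterations;
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        }
    });

    const int converged_at = 100; // iteration at which convergence is hit
    int observed = 0;
    // Host: polls progress at fixed *time* intervals, like DS receiving
    // timed updates, and only then notices that convergence was reached.
    while ((observed = iterations) < converged_at)
        std::this_thread::sleep_for(std::chrono::milliseconds(500));

    stop = true;
    renderer.join();
    std::printf("converged at %d, stopped at %d\n", converged_at, observed);
}

The final count overshoots the convergence point by however many iterations fit into the last polling window, which is one reason two runs need not land on the same number of samples.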
That makes sense, Richard.
Am I one of the seeming few who don't really care? Iray is good for some stuff, 3Delight is good for some other stuff. There's some cross-over between them, but enough differences to, hopefully, ensure that both will continue to be provided (hence the not-really-caring bit). I use both. I would say "dropping back to 3Delight", but that'd be unfair; let's say instead "making use of a render engine that can do what I want it to do", as and when mood, whim, and the requirements of the final image dictate.
I have a pretty good computer with a decent nVidia card, so I can happily set off a render using CPU or GPU and then get on with doing other stuff should I so wish - sometimes having to use 'Set Affinity' on CPU threads to give the rest of the computer a bit of a look-in! Even when I had a lesser beast I was relatively sanguine about letting a CPU-hogging render spin on for days, and yes, I am also in the happy position of this being a hobby, so no deadlines, etc., so I could do that.
Everyone can do what they have always done, sometimes on a bigger, better computer as time has passed. All we have now is another, pretty impressive, tool in the box which we can use when we need, or want, to.
I didn't say they execute the same code; they work in the same way: the more processors you have, the faster the render will run. As you have basically hundreds or thousands of CUDA cores on a GPU rather than 4-24 cores on a PC, the GPU is going to run a render many times faster. It doesn't mean the code is optimized for a GPU; the shaders that are used are the same, as are the materials. This still doesn't take away from the fact that I can run things like reflection and transmapped hair much faster in Iray on CPU than in 3DL. 3DL chokes on things that Iray doesn't.
@Petercat If speed is the reason for not using 4.10, then I would strongly suggest you move to 4.10 and use the Scene Optimizer. This will give you roughly a 100%-300% speedup on the average scene. It also helps a lot with scene loading times.
https://www.daz3d.com/scene-optimizer
@Oso3D I believe each engine has its strong and weak points; it all depends what you need and how you use them. 3Delight can be much faster than Iray if you use it without ray-tracing, while you can't turn off ray-tracing in Iray. The interactive mode does have some optimizations, but it's ugly with transparency, so it's not of any real use. GLSL may also be a good choice for toon renderings.
Below is a simple scene rendered with 3Delight in no time without raytracing (really, it's a breeze), and the same scene with Iray.
It actually is optimized for GPUs. Optimization for software involves both speed and space (storage requirements). Feature-wise, there is no difference and that is a correct statement. But software optimized to take advantage of GPUs can offer a significant boost in performance.
The best software I own makes use of _both_ CPU and GPU, effectively treating both as a collection of "compute units", then dispatching as much work as possible so as to saturate those compute units.
Similar statements were made over the past decades about CPUs themselves, some providing higher floating-point performance than others - for example, the original PowerPC 604 vs the 604e. Software developers could configure their compilers to target specific processors or families (and that is how things still work) in order to take advantage of the better performance. Such software would then be 'optimized for PowerPC 604e'. Again, the output of the process would be the same, but if run on a 604e, it would be much faster.
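As a rough sketch of that CPU+GPU split, assuming a CUDA device (the kernel, the 75/25 split, and the scale operation are arbitrary stand-ins for real render work):

#include <cstdio>
#include <cuda_runtime.h>
#include <vector>

// Toy kernel: each GPU thread scales one element of its share.
__global__ void scale_gpu(float* data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

int main() {
    const int n = 1 << 20;
    const int n_gpu = n * 3 / 4; // arbitrary split: 75% of the work to the GPU
    std::vector<float> h(n, 1.0f);

    float* d = nullptr;
    cudaMalloc(&d, n_gpu * sizeof(float));
    cudaMemcpyAsync(d, h.data(), n_gpu * sizeof(float), cudaMemcpyHostToDevice);
    scale_gpu<<<(n_gpu + 255) / 256, 256>>>(d, n_gpu, 2.0f); // GPU share, queued async

    // While the GPU chews through its share, the CPU handles the rest,
    // so both kinds of "compute unit" stay busy at once.
    for (int i = n_gpu; i < n; ++i) h[i] *= 2.0f;

    cudaMemcpy(h.data(), d, n_gpu * sizeof(float), cudaMemcpyDeviceToHost); // also syncs
    cudaFree(d);
    std::printf("h[0]=%.1f h[n-1]=%.1f\n", h[0], h[n - 1]); // both 2.0
}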
Correct. At least in the US, copyright (which covers software) lasts for 70 years after the author's death. Note that I don't know if this requires a formal copyright (i.e. registration with the US Copyright Office) or if it can rely simply upon posted copyright notices in the source material.
In my opinion, content is both the best and worst thing about Daz. And it's far more complicated than anything else I've seen. For example, look at music, video, and images. While there have been advancements over the years with codecs, these types of content are much more stable and don't require conversions very often (if at all).
As with any other for-profit industry, Daz needs to make money. And they choose to do so from content for the most part. So with this model, there will always be a push to have the software become more capable and content providers will then create content to take advantage of that. It is a shame that there is very limited forwards and backwards compatibility. Seems they are getting better with that though. Still, I do wonder if I'd be better off moving to a different package that would support content that could "live longer" without having to constantly reconfigure or convert. Hope that makes sense.
When people say optimized for GPU in these threads they usually mean to suggest Iray is worse for CPU than other options, which is untrue.
Iray CPU does as well as 3dl. 3dl offers a few simplifications to rendering that Iray doesn’t, which is good if you are ok with very simplified lighting.
But if you want bounce light and anything raytraced, Iray is not slower.
So you turned this thread into a "which one is better" even without my help. Don't get me started; doing my best to stay out =)
One question @Padone though: no raytracing? You mean you had Ray Trace Depth at 0? Clearly that is not an option if you want eyes with refraction, or if you want shadows/occlusion/reflections in your scene. And as a side note, I didn't know IBLM still casts shadows with Ray Trace Depth at 0. Interesting.
I assume you're joking here...
Agree. If you're an indie developer, make sure that someone else has access to your source code, key generators, etc., so your stuff doesn't die with you if you're hit by a truck.
DAZ could require that PAs give them the encryption keys to their scripts so they can update them if necessary. I would be fine with that myself if I was a PA.
OK, we're talking Iray/GPU optimization - I thought it was about content being optimized for GPU rather than CPU in terms of rendering speed (I think that was what Will was referring to?).