Is this the future of Studio? Not good.

Comments

  • WendyLuvsCatz Posts: 40,100

    OpenGL is still the fastest (cheeky)

    and it could be developed to use newer features; just shadows and transmapping need improvement

    not all art needs to be photoreal, especially if postwork is intended

  • The very fact that it does run faster on a GPU than a CPU is evidence that it's optimized for GPU over CPU.

    I mean, dude....

     

    I can see the why of encrypting scripts, i.e., you don't want someone stealing it - but when the author of a script passes on, or just bails, and that script needs to be updated to work in the latest version, and no one steps up to the plate, then it's a problem for end-users. At one time, Poser had an effective metaball script plugin, but the author bailed and it only works with version 5. We need some sort of contingency plan, like a source code release if said author bails or passes away. Then again, we also need some sort of policy regarding abandonware becoming public domain, so all these ancient and neglected pieces can get new life if the author doesn't see enough $ in doing it themselves. If they don't want to lose it, they have to keep it updated. If you want to run a business, run it like a business.

  • Oso3D Posts: 15,085

    DrNewcenstein: It's evidence that GPUs process rendering calculations a lot faster than CPUs do.

     

  • Male-M3dia Posts: 3,584

    The very fact that it does run faster on a GPU than a CPU is evidence that it's optimized for GPU over CPU.

    I mean, dude....

    Still incorrect. Iray runs the same per processor, whether GPU or CPU. The difference is that a CPU has 4-12 processors and a GPU has hundreds to thousands, which is why it runs faster. So no, it's not optimized for GPU; it just runs faster on a GPU.

     

    I can see the why of encrypting scripts, i.e., you don't want someone stealing it - but when the author of a script passes on, or just bails, and that script needs to be updated to work in the latest version, and no one steps up to the plate, then it's a problem for end-users. At one time, Poser had an effective metaball script plugin, but the author bailed and it only works with version 5. We need some sort of contingency plan, like a source code release if said author bails or passes away. Then again, we also need some sort of policy regarding abandonware becoming public domain, so all these ancient and neglected pieces can get new life if the author doesn't see enough $ in doing it themselves. If they don't want to lose it, they have to keep it updated. If you want to run a business, run it like a business.

     

    When an author passes, it doesn't mean he loses ownership of his products. If someone can buy the code from him, then the new owner can update it; if not, then the code dies with him. This isn't new business-wise. But death doesn't mean the code gets to be passed around. That's simply not how business or the law works.

  • j cade Posts: 2,310
    edited March 2018

    The very fact that it does run faster on a GPU than a CPU is evidence that it's optimized for GPU over CPU.

    I mean, dude....

     

    I can see the why of encrypting scripts, i.e., you don't want someone stealing it - but when the author of a script passes on, or just bails, and that script needs to be updated to work in the latest version, and no one steps up to the plate, then it's a problem for end-users. At one time, Poser had an effective metaball script plugin, but the author bailed and it only works with version 5. We need some sort of contingency plan, like a source code release if said author bails or passes away. Then again, we also need some sort of policy regarding abandonware becoming public domain, so all these ancient and neglected pieces can get new life if the author doesn't see enough $ in doing it themselves. If they don't want to lose it, they have to keep it updated. If you want to run a business, run it like a business.

    I mean, it's always going to be faster rendering with the GPU, because "GPU" is usually actually GPU+CPU. If that were somehow slower...

     

    If I were to render truly GPU-only, it might actually be slower than my CPU only. (I actually tested this once upon a time, and my GPU+CPU was 1.5x faster than CPU only.)

    Post edited by j cade on
  • Oso3D Posts: 15,085

    Really? I render GPU only all the time. It's usually 10x faster than CPU only.

     

  • j cade Posts: 2,310
    Oso3D said:

    Really? I render GPU only all the time. It's usually 10x faster than CPU only.

     

    TBF, my computer has a much better CPU than GPU. I just tested a very simple scene of a sphere and a point light in a cube; getting to 50 samples took the GPU 46 seconds, the CPU 63 seconds, and GPU+CPU 36 seconds. But, of course, for complex scenes that take longer, my GPU will start throttling about 5 minutes in, so it will slow down. (Laptops aren't the best at dissipating heat.)
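    A quick sanity check on those numbers (rough arithmetic only, nothing measured from Iray itself): if the two devices simply added their throughputs, the ideal combined time would be 1/(1/46 + 1/63) ≈ 26.6 s, so the measured 36 s suggests some coordination overhead. In code:

    // Sanity check of combined GPU+CPU throughput (illustrative arithmetic only).
    #include <cstdio>

    int main() {
        double tGpu = 46.0, tCpu = 63.0, tBoth = 36.0;   // seconds to 50 samples
        double tIdeal = 1.0 / (1.0 / tGpu + 1.0 / tCpu); // perfect throughput sum
        printf("ideal combined: %.1f s, measured: %.1f s (%.0f%% efficiency)\n",
               tIdeal, tBoth, 100.0 * tIdeal / tBoth);   // ~26.6 s, 36.0 s, ~74%
        return 0;
    }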


  • AlienRenders Posts: 794
    edited March 2018
    Still incorrect. Iray runs the same per processor, whether GPU or CPU. The difference is that a CPU has 4-12 processors and a GPU has hundreds to thousands, which is why it runs faster. So no, it's not optimized for GPU; it just runs faster on a GPU.

    I've been a programmer for longer than I care to admit. I do 3D and GPGPU programming, and what you say is factually incorrect. Not every algorithm can be easily parallelised. In fact, multi-core programming is usually avoided if there is no pressing need for it, as it is very easy to make a mistake and introduce bugs. On video cards, GPGPU processing is even more difficult, because the time it takes to upload all the resources can often outweigh the time it would take to just run the algorithm on the CPU. Once you have the data uploaded, you have the challenge of splitting the work into kernels that will run on the GPU. Each compute unit (OpenCL) or streaming multiprocessor (CUDA) executes the exact same code, with the same instruction pointer for all the cores in that unit, so the code must be tailored just for the GPU. Then there are the various types of memory access that are unique to GPUs, which I don't have time to go into here. While you can run OpenCL code on a CPU, the performance will be severely degraded. CUDA (which Iray uses) cannot be run on a CPU, so entirely different code runs when you use Iray on the CPU.

    In short, to say that Iray is not optimized for GPU is just silly.
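    To make that concrete, here is a minimal CUDA sketch (an illustration only, assuming nothing about Iray's actual internals): the GPU path needs explicit memory transfers, a kernel written for thousands of threads, and a launch configuration, while the CPU path is an ordinary loop - two entirely different code paths for the same result.

    // Minimal CUDA sketch (illustrative only; nothing to do with Iray's internals).
    // Build with: nvcc sketch.cu   (hypothetical file name)
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scaleKernel(float* data, float k, int n) {
        // Every thread in a streaming multiprocessor runs this same code;
        // only the computed index differs per thread.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= k;
    }

    void scaleOnCpu(float* data, float k, int n) {
        for (int i = 0; i < n; ++i) data[i] *= k;  // the CPU path: a plain loop
    }

    int main() {
        const int n = 1 << 20;
        float* host = new float[n];
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        // GPU path: upload, launch, download. This transfer overhead is
        // exactly why small jobs can be faster on the CPU.
        float* dev = nullptr;
        cudaMalloc(&dev, n * sizeof(float));
        cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
        scaleKernel<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);
        cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev);
        printf("after GPU: %f\n", host[0]);  // 2.0

        // CPU path: same result, entirely different code.
        scaleOnCpu(host, 2.0f, n);
        printf("after CPU: %f\n", host[0]);  // 4.0

        delete[] host;
        return 0;
    }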


    Post edited by AlienRenders on
  • Male-M3dia Posts: 3,584
    edited March 2018
    Still incorrect. Iray runs the same per processor, whether GPU or CPU. The difference is that a CPU has 4-12 processors and a GPU has hundreds to thousands, which is why it runs faster. So no, it's not optimized for GPU; it just runs faster on a GPU.

    I've been a programmer for longer than I care to admit. I do 3D and GPGPU programming, and what you say is factually incorrect. Not every algorithm can be easily parallelised. In fact, multi-core programming is usually avoided if there is no pressing need for it, as it is very easy to make a mistake and introduce bugs. On video cards, GPGPU processing is even more difficult, because the time it takes to upload all the resources can often outweigh the time it would take to just run the algorithm on the CPU. Once you have the data uploaded, you have the challenge of splitting the work into kernels that will run on the GPU. Each compute unit (OpenCL) or streaming multiprocessor (CUDA) executes the exact same code, with the same instruction pointer for all the cores in that unit, so the code must be tailored just for the GPU. Then there are the various types of memory access that are unique to GPUs, which I don't have time to go into here. While you can run OpenCL code on a CPU, the performance will be severely degraded. CUDA (which Iray uses) cannot be run on a CPU, so entirely different code runs when you use Iray on the CPU.

    In short, to say that Iray is not optimized for GPU is just silly.


    Please don't pull development rank on me. I've been in IT for over three decades, including a stint at Intel.

    What I said is correct. The architecture is the same per processor; it is faster because GPUs have more processors (i.e., CUDA cores) than a PC has CPU cores. That is also the reason GPUs are being snatched up for cryptocurrency mining.

    Post edited by Male-M3dia on
  • Petercat Posts: 2,321
    edited March 2018
    jag11 said:
    Petercat said:

    I'm sticking with 4.9.2.70 because the newer versions render about 10% slower...

    Render times are linked to the current NVIDIA Iray version, so that is something DAZ developers can't control; they just adopt (and must adopt) newer versions of the Iray renderer to incorporate new features and error corrections.

    In the upcoming versions you'll get a speed upgrade (and noise reduction) that'll make you very happy.

    Now THAT is wonderful news! A speed increase with newer versions!

    Promise?

    Post edited by Petercat on
  • Petercat Posts: 2,321
    j cade said:
    kyoto kid said:

    ...not enough for those stuck with CPU rendering, though. Iray is optimised for GPU-based rendering, 3DL for CPU rendering, which is why I have moved back to the latter.

    I have to disagree with the blanket assertion. I ran a fairly complex scene with things that would grind 3DL to a halt, and CPU Iray ran it faster. Now if you're running fairly simple scenes, there may be a case, but rendering the same complex scenes, even CPU Iray renders them faster.

    Or if you're like me, and can't do without proper bounce light. AFAICT there still aren't really any options for it beyond the glacially slow UberEnvironment... and then you have to take extra care with material settings for things like hair if you want the render to finish within a week (and hey, no pausing your render either).

     

    I always find all the comments about 3Delight being so much faster pretty funny, considering that before Iray I had switched over to rendering in Blender, setting up pretty much every material manually, because it was still faster for me than rendering in 3Delight.

    And I was surprised when I ran a CPU scene with a bunch of trees and grass and hair and it finished in less than an hour. I remember the days when I had to do about 15-20 promos in 3DL and could basically only do two per day, because it would grind to a halt on those things, so I would render while I worked all day at the office. Running the same type of renders, saying 3DL is faster is just not true.

    By the way, thank you for all of the information that you are providing here.

    And thanks to everyone else here, I was kind of in a foul mood when I started this thread, but I'm a lot more optimistic now.

  • kyoto kid Posts: 41,860
    kyoto kid said:

    ...not enough for those stuck with CPU rendering, though. Iray is optimised for GPU-based rendering, 3DL for CPU rendering, which is why I have moved back to the latter.

    I have to disagree with the blanket assertion. I ran a fairly complex scene with things that would grind 3DL to a halt, and CPU Iray ran it faster. Now if you're running fairly simple scenes, there may be a case, but rendering the same complex scenes, even CPU Iray renders them faster.

    ...running Iray to get optimal quality takes far too long on the CPU unless you have a dual Xeon system (which most of us don't, and even then it still takes longer than on the GPU). I have been rendering complex scenes in 3DL using IBL Master that take less than 15 minutes.

  • kyoto kid Posts: 41,860
    j cade said:
    kyoto kid said:

    ...not enough for those stuck with CPU rendering, though. Iray is optimised for GPU-based rendering, 3DL for CPU rendering, which is why I have moved back to the latter.

    I have to disagree with the blanket assertion. I ran a fairly complex scene with things that would grind 3DL to a halt, and CPU Iray ran it faster. Now if you're running fairly simple scenes, there may be a case, but rendering the same complex scenes, even CPU Iray renders them faster.

    Or if you're like me, and can't do without proper bounce light. AFAICT there still aren't really any options for it beyond the glacially slow UberEnvironment... and then you have to take extra care with material settings for things like hair if you want the render to finish within a week (and hey, no pausing your render either).

     

    I always find all the comments about 3Delight being so much faster pretty funny, considering that before Iray I had switched over to rendering in Blender, setting up pretty much every material manually, because it was still faster for me than rendering in 3Delight.

    And I was surprised when I ran a CPU scene with a bunch of trees and grass and hair and it finished in less than an hour. I remember the days when I had to do about 15-20 promos in 3DL and could basically only do two per day, because it would grind to a halt on those things, so I would render while I worked all day at the office. Running the same type of renders, saying 3DL is faster is just not true.

    ...test scenes of characters with simple backdrops take upwards of 45 minutes to an hour for my system to render in Iray. In 3DL I can get a high-quality full scene with multiple characters to render in under 15 minutes.

  • kyoto kid Posts: 41,860

    It all depends. 3Delight is faster than it used to be, especially if you use progressive rendering (the raytracer got a bunch of optimizations a few years ago). You can get some speed gains by reducing textures, and some default settings are higher than they need to be. Some folks using Iray use the wrong settings or just go about things wrong, which makes it slow. Sometimes it's the default settings, or the mistaken belief that you have to run a render for a long time to get the image to converge, which generally is not needed. Some folks say it's slow because they can only use Iray in CPU mode, since they overload their scene and it gets dumped to the CPU, or because they don't have an Nvidia card.

    ...yes

  • kyoto kid Posts: 41,860
    edited March 2018
    Oso3D said:

    Iray in CPU mode is no slower than 3DL.

    And saying it's optimized for higher quality images doesn't mean much: the default light path in 3DL is 1, unlimited in Iray. You can trivially change those numbers.

    There are certain shortcuts you can't do in Iray, but that only matters for the very most limited and flat-looking images.

     

    ...I have proved that wrong, unless you want really low-quality results. In .jpg or .png the image is flattened by default.

    Post edited by kyoto kid on
  • Male-M3dia Posts: 3,584
    edited March 2018
    kyoto kid said:
    kyoto kid said:

    ...not enough for those stuck with CPU rendering, though. Iray is optimised for GPU-based rendering, 3DL for CPU rendering, which is why I have moved back to the latter.

    I have to disagree with the blanket assertion. I ran a fairly complex scene with things that would grind 3DL to a halt, and CPU Iray ran it faster. Now if you're running fairly simple scenes, there may be a case, but rendering the same complex scenes, even CPU Iray renders them faster.

    ...running Iray to get optimal quality takes far too long on the CPU unless you have a dual Xeon system (which most of us don't, and even then it still takes longer than on the GPU). I have been rendering complex scenes in 3DL using IBL Master that take less than 15 minutes.

    I don't have that chip, yet that scene ran in about an hour. My computer is the exact same machine I was using for 3DL before Iray was released. Again, if you're running things in 15 minutes, those scenes absolutely don't have the things I mentioned (and certainly not the lighting quality) that would grind 3DL to a halt, so it's not a valid comparison. Before Iray, using any SAV hairs was basically a no-no in 3DL... but Iray just breezes through those hairs.

    Post edited by Male-M3dia on
  • algovincian Posts: 2,664
    Still incorrect. Iray runs the same per processor, whether GPU or CPU. The difference is that a CPU has 4-12 processors and a GPU has hundreds to thousands, which is why it runs faster. So no, it's not optimized for GPU; it just runs faster on a GPU.

    I've been a programmer for longer than I care to admit. I do 3D and GPGPU programming, and what you say is factually incorrect. Not every algorithm can be easily parallelised. In fact, multi-core programming is usually avoided if there is no pressing need for it, as it is very easy to make a mistake and introduce bugs. On video cards, GPGPU processing is even more difficult, because the time it takes to upload all the resources can often outweigh the time it would take to just run the algorithm on the CPU. Once you have the data uploaded, you have the challenge of splitting the work into kernels that will run on the GPU. Each compute unit (OpenCL) or streaming multiprocessor (CUDA) executes the exact same code, with the same instruction pointer for all the cores in that unit, so the code must be tailored just for the GPU. Then there are the various types of memory access that are unique to GPUs, which I don't have time to go into here. While you can run OpenCL code on a CPU, the performance will be severely degraded. CUDA (which Iray uses) cannot be run on a CPU, so entirely different code runs when you use Iray on the CPU.

    In short, to say that Iray is not optimized for GPU is just silly.


    Please don't pull development rank on me. I've been in IT for over three decades, including a stint at Intel.

    What I said is correct. The architecture is the same per processor; it is faster because GPUs have more processors (i.e., CUDA cores) than a PC has CPU cores. That is also the reason GPUs are being snatched up for cryptocurrency mining.

    In addition to what @AlienRenders mentioned about parallel processing, GPUs have completely different instruction sets than CPUs. They don't execute the same code.

    I wonder what NVIDIA would say about whether or not Iray was optimized for GPU . . . (I'm just kidding - I really don't wonder).

    - Greg

  • nicstt Posts: 11,715
    edited March 2018
    Oso3D said:

    Also, ‘optimized for gpu’ is factually untrue, unless I’m missing something; there are no sacrifices made to how Iray renders that preferentially improves GPU over CPU.

    That it CAN use GPU and run much faster has absolutely no negative impact on how it runs in CPU.

     

    Ergo, it is in no way optimized for GPU.

    I suppose there's the argument that the parallelism of GPUs comes into play with rendering, and with some other tasks that benefit from said parallelism.

    ...But there is the counter-argument that an equal number of CPU cores would outperform those CUDA cores; the cost, though (shudder).

    I have a 16-core CPU and a 980 Ti; taken purely on CPU core count vs. CUDA core count, the CPU should either be much slower, or the CUDA cores much faster... but they are not.

    They are different beasts that share some characteristics but are designed to be better at certain primary tasks.

    I rendered the same scene with the same settings, other than switching the renderer from GPU to CPU. It should be noted that the 980 Ti is a render-only card, whereas the CPU had other tasks to perform, although I left the machine idle during the render.

    The scene had SSS on skin and hair, with fibre mesh hair also included.

    GPU 980ti - Cores: 2816
    2018-03-18 10:53:32.257 Finished Rendering
    2018-03-18 10:53:32.386 Total Rendering Time: 10 minutes 54.62 seconds

    CPU 1950X - Cores: 16 (32 with AMD's version of Hyperthreading)
    2018-03-18 11:24:06.658 Finished Rendering
    2018-03-18 11:24:06.732 Total Rendering Time: 28 minutes 11.64 seconds

    Not all cores are created equal, which is true of both CPUs and GPUs. 10-series cores perform the same task faster (core for core), whereas pre-Ryzen CPUs perform less well (again core for core) - a couple of examples.

    Is the software optimized for GPU usage? I would presume so; it would make sense, as that is what Nvidia primarily expects it to be run on. Of course, this presumes that optimizing is possible - that certain actions would yield better results on one at the expense of the other; obviously I have no idea if this is the case.

    EDIT

    GPU finished on:

    2018-03-18 10:53:24.718 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 01164 iterations after 641.672s.

    CPU finished on:

    2018-03-18 11:23:48.688 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 01176 iterations after 1670.180s.

    I would have expected them to reach the same number of iterations, although such a small variance on one comparison can be taken as statistically invalid due to the sample size.
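    For what it's worth, the per-core arithmetic implied by those logs looks like this (a rough sketch; a CUDA core and a CPU thread are not directly comparable units):

    // Per-"core" throughput from the render logs above (illustration only).
    #include <cstdio>

    int main() {
        double gpuIters = 1164, gpuSecs = 641.672, gpuCores = 2816;   // 980 Ti
        double cpuIters = 1176, cpuSecs = 1670.180, cpuThreads = 32;  // 1950X

        double gpuRate = gpuIters / gpuSecs;  // ~1.81 iterations/s total
        double cpuRate = cpuIters / cpuSecs;  // ~0.70 iterations/s total

        printf("per CUDA core:  %.5f iters/s\n", gpuRate / gpuCores);   // ~0.00064
        printf("per CPU thread: %.5f iters/s\n", cpuRate / cpuThreads); // ~0.02201
        // Each CPU thread does ~34x the work of a single CUDA core here;
        // the GPU still wins overall because it has ~88x as many of them.
        return 0;
    }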

     

    Post edited by nicstt on
  • Richard Haseltine Posts: 108,079
    nicstt said:
    GPU finished on:

    2018-03-18 10:53:24.718 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 01164 iterations after 641.672s.

    CPU finished on:

    2018-03-18 11:23:48.688 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 01176 iterations after 1670.180s.

    I would have expected them to reach the same number of iterations, although such a small variance on one comparison can be taken as statistically invalid due to the sample size.

    I think the decision to stop the render is taken by DS, and that Iray reports progress to DS at time intervals rather than sample intervals. If that is correct, then the final number of samples would depend on speed (and interruptions) affecting how many samples were carried out after the target convergence was hit and before DS was informed.
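    If that hypothesis is right, a toy simulation shows why the final counts drift apart (the 2-second poll interval and the iteration rates below are made-up numbers, purely for illustration):

    // Toy model of time-interval progress polling (hypothetical numbers).
    #include <cstdio>

    // Iterations reported at the first poll tick at or after the target is hit.
    int finalIterations(double itersPerSec, int targetIters, double pollSecs) {
        double t = 0.0;
        int done = 0;
        while (done < targetIters) {
            t += pollSecs;                             // supervisor wakes up
            done = static_cast<int>(t * itersPerSec);  // work finished so far
        }
        return done;  // may overshoot the target by up to one poll's worth
    }

    int main() {
        // Same convergence target, different speeds -> different final counts.
        printf("fast device: %d iterations\n", finalIterations(1.81, 1160, 2.0));
        printf("slow device: %d iterations\n", finalIterations(0.70, 1160, 2.0));
        return 0;
    }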

  • nicstt Posts: 11,715

    That makes sense, Richard.

  • SimonJM Posts: 6,067

    Am I one of the seeming few who don't really care? Iray is good for some stuff, 3Delight is good for other stuff. There's some cross-over between them, but enough differences to, hopefully, ensure that both will continue to be provided (hence the not really caring bit). I use both. I would say I drop back to 3Delight, but that'd be unfair; let's say rather that I make use of whichever render engine can do what I want it to do, as and when mood, whim and the requirements of the final image dictate.

    I have a pretty good computer with a decent nVidia card, so I can happily set off a render using CPU or GPU and then get on with doing other stuff should I so wish - sometimes having to use 'Set Affinity' on CPU threads to give the rest of the computer a bit of a look-in! Even when I had a lesser beast I was relatively sanguine about letting a CPU-hogging render spin on for days, and yes, I am also in the happy position of this being a hobby, with no deadlines, etc., so I could do that.

    Everyone can do what they have always done, sometimes on a bigger, better computer as time has passed. All we have now is another, pretty impressive, tool in the box which we can use when we need, or want, to.

  • Male-M3dia Posts: 3,584
    edited March 2018
    Still incorrect. Iray runs the same per processor, whether GPU or CPU. The difference is that a CPU has 4-12 processors and a GPU has hundreds to thousands, which is why it runs faster. So no, it's not optimized for GPU; it just runs faster on a GPU.

    I've been a programmer for longer than I care to admit. I do 3D and GPGPU programming, and what you say is factually incorrect. Not every algorithm can be easily parallelised. In fact, multi-core programming is usually avoided if there is no pressing need for it, as it is very easy to make a mistake and introduce bugs. On video cards, GPGPU processing is even more difficult, because the time it takes to upload all the resources can often outweigh the time it would take to just run the algorithm on the CPU. Once you have the data uploaded, you have the challenge of splitting the work into kernels that will run on the GPU. Each compute unit (OpenCL) or streaming multiprocessor (CUDA) executes the exact same code, with the same instruction pointer for all the cores in that unit, so the code must be tailored just for the GPU. Then there are the various types of memory access that are unique to GPUs, which I don't have time to go into here. While you can run OpenCL code on a CPU, the performance will be severely degraded. CUDA (which Iray uses) cannot be run on a CPU, so entirely different code runs when you use Iray on the CPU.

    In short, to say that Iray is not optimized for GPU is just silly.


    Please don't pull development rank on me. I've been in IT for over three decades, including a stint at Intel.

    What I said is correct. The architecture is the same per processor; it is faster because GPUs have more processors (i.e., CUDA cores) than a PC has CPU cores. That is also the reason GPUs are being snatched up for cryptocurrency mining.

    In addition to what @AlienRenders mentioned about parallel processing, GPUs have completely different instruction sets than CPUs. They don't execute the same code.

    I wonder what NVIDIA would say about whether or not Iray was optimized for GPU . . . (I'm just kidding - I really don't wonder).

    - Greg

    I didn't say they execute the same code; they work in the same way: the more processors you have, the faster the render will run. As you have hundreds or thousands of CUDA cores on a GPU rather than 4-24 cores on a PC, the GPU is going to run a render many times faster. That doesn't mean the code is optimized for a GPU; the shaders that are used are the same, as are the materials. This still doesn't take away from the fact that I can run things like reflections and transmapped hair much faster in Iray on CPU than in 3DL. 3DL chokes on things that Iray doesn't.

    Post edited by Male-M3dia on
  • Padone Posts: 4,015
    edited March 2018

    @Petercat If speed is the reason for not using 4.10, then I would strongly suggest you move to 4.10 and use the Scene Optimizer. This will give you roughly a 100%-300% speedup on the average scene. It also helps a lot with scene loading times.

    https://www.daz3d.com/scene-optimizer

    @Oso3D I believe each engine has its strong and weak points; it all depends what you need and how you use them. 3Delight can be much faster than Iray if you use it without ray-tracing, while you can't turn ray-tracing off in Iray. The interactive mode has some optimizations, but it's ugly with transparency, so it's not of any real use. GLSL may also be a good choice for toon renderings.

    Below is a simple scene rendered with 3Delight in no time without raytracing (really, it's a breeze), and the same scene with Iray.

     

    [Attachments: pergola-3dl.jpg (640 x 360, 137K), pergola-iray.jpg (640 x 360, 189K), pergola-3dl.duf (74K), pergola-iray.duf (76K)]
    Post edited by Padone on
  • Oso3D said:

    Also, ‘optimized for gpu’ is factually untrue, unless I’m missing something; there are no sacrifices made to how Iray renders that preferentially improves GPU over CPU.

    That it CAN use GPU and run much faster has absolutely no negative impact on how it runs in CPU.

     

    Ergo, it is in no way optimized for GPU.

    It actually is optimized for GPUs. Optimization of software involves both speed and space (storage requirements). Feature-wise there is no difference, and that is a correct statement. But software optimized to take advantage of GPUs can offer a significant boost in performance.

    The best software I own makes use of _both_ CPU and GPU, effectively treating both as a collection of "compute units", then dispatching as much work as possible so as to saturate those compute units.

    Similar statements were made over the past decades about CPUs themselves, some providing higher floating-point performance than others - for example, the original PowerPC 604 vs. the 604e. Software developers could configure their compilers to target specific processors or families (and that is how things still work) in order to take advantage of the better performance. Such software would then be 'optimized for PowerPC 604e'. Again, the output of the process would be the same, but if run on a 604e, it would be much faster.
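    As a minimal sketch of that "compute units" idea (an illustration with an arbitrary 75/25 split, not code from any actual product): give the GPU the bulk of the work and let the CPU chew on its share while the kernel is in flight.

    // Split one job across GPU and CPU; the kernel launch is asynchronous,
    // so the host loop below overlaps with GPU execution.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void addOne(float* d, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] += 1.0f;
    }

    int main() {
        const int n = 1 << 20;
        const int gpuShare = n * 3 / 4;  // arbitrary 75/25 split for illustration
        float* data = new float[n]();    // zero-initialized

        float* dev = nullptr;
        cudaMalloc(&dev, gpuShare * sizeof(float));
        cudaMemcpy(dev, data, gpuShare * sizeof(float), cudaMemcpyHostToDevice);
        addOne<<<(gpuShare + 255) / 256, 256>>>(dev, gpuShare);  // returns immediately

        // CPU handles its slice while the GPU kernel runs.
        for (int i = gpuShare; i < n; ++i) data[i] += 1.0f;

        // This copy implicitly waits for the kernel to finish.
        cudaMemcpy(data, dev, gpuShare * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev);

        printf("%.1f %.1f\n", data[0], data[n - 1]);  // 1.0 1.0
        delete[] data;
        return 0;
    }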

  •  

    When an author passes, it doesn't mean he loses ownership of his products. If someone can buy the code from him, then the new owner can update it; if not, then the code dies with him. This isn't new business-wise. But death doesn't mean the code gets to be passed around. That's simply not how business or the law works.

    Correct. At least in the US, copyright (which covers software) lasts for 70 years after the author's death. Note that I don't know if this requires formal copyright (i.e., registration with the US Copyright Office) or if it can rely simply upon posted copyright notices in the source material.

  • Petercat said:

    It just feels like this whole thing is about trying to push people into using the newest Studio, but pursuant to the above link and others, many people still prefer the older versions for various reasons and are at risk of being frozen out. Ah, well, there are other places to spend money.

    In my opinion, content is both the best and worst thing about Daz. And it's far more complicated than anything else I've seen. For example, look at music, video and images: while there have been advancements in codecs over the years, those types of content are much more stable and don't require conversion very often (if at all).

    As with any other for-profit industry, Daz needs to make money, and they choose to do so mostly through content. So with this model there will always be a push to make the software more capable, and content providers will then create content to take advantage of that. It is a shame that there is very limited forwards and backwards compatibility, though they seem to be getting better at that. Still, I do wonder if I'd be better off moving to a different package whose content could "live longer" without having to constantly reconfigure or convert. Hope that makes sense.

  • Oso3D Posts: 15,085

    When people say "optimized for GPU" in these threads, they usually mean to suggest that Iray is worse on CPU than other options, which is untrue.

    Iray on CPU does as well as 3DL. 3DL offers a few simplifications to rendering that Iray doesn't, which is good if you are OK with very simplified lighting.

    But if you want bounce light and anything raytraced, Iray is not slower.

  • Sven Dullah Posts: 7,621

    So you turned this thread into a "which one is better" even without my help (surprise). Don't get me started; doing my best to keep out =)

    One question, @Padone, though: no raytracing? You mean you had ray trace depth at 0? Clearly that is not an option if you want eyes with refraction, or if you want shadows/occlusion/reflections in your scene? And as a side note, I didn't know IBLM still casts shadows with ray trace depth at 0. Interesting (smiley)

  • Taoz Posts: 10,259

    The very fact that it does run faster on a GPU than a CPU is evidence that it's optimized for GPU over CPU.

    I mean, dude....

    I assume you're joking here...


    I can see the why of encrypting scripts, i.e., you don't want someone stealing it - but when the author of a script passes on, or just bails, and that script needs to be updated to work in the latest version, and no one steps up to the plate, then it's a problem for end-users. At one time, Poser had an effective metaball script plugin, but the author bailed and it only works with version 5. We need some sort of contingency plan, like a source code release if said author bails or passes away. Then again, we also need some sort of policy regarding abandonware becoming public domain, so all these ancient and neglected pieces can get new life if the author doesn't see enough $ in doing it themselves. If they don't want to lose it, they have to keep it updated. If you want to run a business, run it like a business.

    Agree. If you're an indie developer, make sure that someone else has access to your source code, key generators, etc., so your stuff doesn't die with you if you're hit by a truck.

    DAZ could require that PAs give them the encryption keys to their scripts so they can update them if necessary. I would be fine with that myself if I were a PA.


  • Taoz Posts: 10,259
    Still incorrect. Iray runs the same per processor, whether GPU or CPU. The difference is that a CPU has 4-12 processors and a GPU has hundreds to thousands, which is why it runs faster. So no, it's not optimized for GPU; it just runs faster on a GPU.

    In short, to say that Iray is not optimized for GPU is just silly.

    OK, we're talking Iray/GPU optimization - I thought this was about content being optimized for GPU rather than CPU in terms of rendering speed (I think that was what Will was referring to?).
