Show Us Your Iray Renders


Comments

  • Oso3D Posts: 15,085

    Ordered 16 GB RAM, GTX 970 SC. 12 hours later, had them in my hands. Kudos, Amazon!

    And holy poot, renders are not only way faster, they LOOK better, too... I suspect the calculations are better or SOMETHING.

  • maclean Posts: 2,438

    Oso3D said:
    Ordered 16 GB RAM, GTX 970 SC. 12 hours later, had them in my hands. Kudos, Amazon!

    And holy poot, renders are not only way faster, they LOOK better, too... I suspect the calculations are better or SOMETHING.

    That's good to hear. I ordered exactly the same - Asus GTX 970 + 16 GB RAM. Should get them Monday.

    mac

  • ZarconDeeGrissom Posts: 5,414
    edited March 2015

    Oso3D said:
    Ordered 16 GB RAM, GTX 970 SC. 12 hours later, had them in my hands. Kudos, Amazon!

    And holy poot, renders are not only way faster, they LOOK better, too... I suspect the calculations are better or SOMETHING.

    had to read that a few times :coolhmm:
    Is that 16GB RAM on a video card!? na, lol.

    I suspect that, because it doesn't take so long to get spot-renders to show something, more time is spent productively fine-tuning stuff. That, or the gamut on the output of one of the cards is off? I don't believe I just typed that, gamut calibration off with a DVI/HDMI connector, lol.

    GTX970 with 16GB of RAM, where can I get one, lol.

  • Oso3D Posts: 15,085

    Ha! Sorry, I meant I bought two 8 GB RAM sticks AND ALSO got an EVGA GTX 970 SC (which has 4 GB VRAM).

    I had 8 GB RAM and a 550 before.

    Huuuuge improvement. (I had been contemplating buying a completely new machine, but people pointed out I could get a rather substantial improvement without spending quite so much.)

  • CypherFOX Posts: 3,401

    Greetings,
    So... a few notes. The 970 is awesome, and probably entirely worth it, but it has *one* glitch: of the 4 GB, only 3.5 GB runs at full speed. The other 512 MB is... well, I don't really understand it, but it's sort of a side-buffer, not as straightforwardly addressable. It's still great, and damn powerful, but it's worth keeping in mind.

    The discount in price compared to a 980 is probably mostly due to that.

    I wish I had the spare scratch for that, myself. I picked up a 4 GB GT 740 though (and a new power supply to power it), and after a bunch of hours of setup and fighting with my Windows box, it's up and running happily, and cranking at a really nice speed. Now I can put a scene together on my Mac, save it, let Dropbox sync it to the Windows box right next to it, start a render using the GPU, and keep working on my Mac. :)

    -- Morgan

  • ZarconDeeGrissom Posts: 5,414

    timmins.william, it's all good, just poking fun at my old tired eye, lol.

    Yes, that will give you a major boost; more stuff will not be sent to swap (if you use that), and you'll have more options with scenes and figures. I did hit the limits with 16 GB of RAM, then recently again with 32 GB. So it will never be enough, and we will always want more.

    I'm on a "watt budget", so my options for an Iray GPU are incredibly limited, to put it lightly. Some day I'll string a few more 20A circuits out here for better computers and radios, lol.

  • 8eos8 Posts: 170

    Hello, I'm new here :) Here's what I've done so far after a couple of days of playing around with Iray. Both images are lit with an HDRI and a couple of photometric spotlights, and I set the tone mapping ISO to 200. I used the Iray skin shader, and the velvet shader on the bikini. This post was really helpful for getting a nice skin sheen: http://www.daz3d.com/forums/discussion/53690/P225/#779482 They took about 5 minutes each using two GTX 970s.

    iray-test3-hd-sheen.png
    1000 x 1000 - 901K
    iray-test2a-hd-sheen.png
    1000 x 1000 - 870K
  • pearbear Posts: 227
    edited March 2015

    Glad you found that helpful, 8eos8!

    I've been doing more experiments with customizing skin and cloth shaders for Iray. Surprisingly, I'm having better luck playing with some older shaders than with the Iray Optimized Genesis 2 Female MAT. Clothes and skin in these new renders are based on Age of Armour's Subsurface Gummy shaders. They were a lot easier and more intuitive for me to get good results with. Going for a less shiny, more everyday look to the skin here, and trying to get some natural looking translucency to the cloth. Light is all from free HDRIs found online.

    test07.jpg
    800 x 1000 - 225K
    test06.jpg
    800 x 1000 - 288K
    test05.jpg
    800 x 1000 - 277K
    test04.jpg
    800 x 1000 - 265K
    test03.jpg
    800 x 1000 - 346K
  • Oso3D Posts: 15,085

    Wow, pearbear, those are amazing. I particularly like the second one.

  • Soto Posts: 1,450

    I agree, the second one is simply fantastic.
    I might need to get HD morphs after all...

  • SnowSultan Posts: 3,773
    edited March 2015

    I think the last question I asked was buried in other posts, so hopefully you won't mind my asking again. Does Iray prefer spherical HDRIs or can you use any sort of environmental map? If someone who has had success with HDR lighting could recommend or give me an example of the type of map that worked for you, I'd appreciate it.


    maclean said:
    I ordered exactly the same - Asus GTX 970

    Hm, I hope DAZ can give us better prices when they start selling them. :) That one has 1664 CUDA cores, but I think Sickleyield mentioned 2000 cores being especially efficient. At least I can try to justify buying a new card because I also play a lot of PC games. ;)

  • Dumor3D Posts: 1,316

    Ditto! Great work!

  • pearbear Posts: 227

    Thanks! That second one is the most like natural everyday indoor lighting, while the others are photo studio-like.

    I just can't stress enough how psyched I am about Iray in DAZ. The last couple of years I've been exporting my DAZ projects to other unbiased renderers, and now having a great realistic renderer built right into DAZ... it's just amazing.

    I'm doing some work on the skin textures in Photoshop before rendering, too. It seems logical that bump maps, spec maps, etc. made for 3Delight might not be optimal for an unbiased renderer. Also, the gamma slider on image maps in the shader settings doesn't seem to be functional, so I'm adjusting the gamma of my skin texture maps in Photoshop to work with the Age of Armour shaders.
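
    For anyone who'd rather batch that gamma tweak than adjust each map by hand in Photoshop, here's a minimal Python/Pillow sketch of the same idea. The filenames and the 0.8 gamma value are placeholders for illustration, not settings from the post:

        from PIL import Image

        def adjust_gamma(path_in, path_out, gamma=0.8):
            """Apply a gamma curve to an 8-bit RGB texture map."""
            img = Image.open(path_in).convert("RGB")
            # 256-entry lookup table: out = 255 * (in / 255) ** (1 / gamma)
            lut = [round(255 * (i / 255) ** (1.0 / gamma)) for i in range(256)]
            img.point(lut * 3).save(path_out)  # same curve on R, G, and B

        adjust_gamma("skin_diffuse.png", "skin_diffuse_adjusted.png")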

    It'll be interesting, once 4.8 gets out of beta, to see if vendors start offering their projects with shaders optimized for both rendering engines. Also, I'd like to see some higher-res skin textures available for the figures. 10,000 x 10,000 pixel texture maps aren't unusual in high quality models, and that resolution makes skin bump maps look more real with fewer artifacts. I'd really like to see a new wave of higher resolution V6 and M6 textures.

  • Mattymanx Posts: 6,996

    martClut said:
    ...messing with emission


    That looks really good. You did a really good job.

  • pearbear Posts: 227

    SnowSultan said:
    I think the last question I asked was buried in other posts, so hopefully you won't mind my asking again. Does Iray prefer spherical HDRIs or can you use any sort of environmental map? If someone who has had success with HDR lighting could recommend or give me an example of the type of map that worked for you, I'd appreciate it.

    I haven't checked whether Iray supports different types of HDRI map, but the typical rectangular ones are working for me. I downloaded a bunch of great ones from this generous person on DeviantArt:

    http://zbyg.deviantart.com/gallery/6278123/Resources

    I hope DAZ gets the built-in gamma slider on materials working soon, since I currently have to adjust the gamma of a lot of my HDRIs in Photoshop to get them to work properly. (Or maybe I'm missing something.)

  • Dumor3D Posts: 1,316

    As has been said, CPU, GPU, and GPU+CPU render modes all produce exactly the same quality. That said, there are three criteria that will cause a render to 'finish'. The defaults are 7200 seconds (2 hours), 95% convergence, or 5000 iterations. If you hit 7200 seconds, then CPU vs. GPU, yes, almost certainly the quality will be less, as GPU should be faster than CPU in almost all cases. I suppose maybe a very old card with very few CUDA cores might not be as fast as a screaming-fast processor or two? As best as I can tell, if you hit 95% convergence or 5000 iterations, the render is identical on CPU or GPU, and the same goes for whatever convergence level or number of iterations you choose.
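
    Those three cut-offs work on a first-one-wins basis. Here's a toy sketch using the default values quoted above; the function and argument names are made up for illustration, not DAZ API:

        def render_finished(elapsed_s, convergence, iterations,
                            max_time_s=7200,          # 2 hours
                            target_convergence=0.95,  # 95%
                            max_iterations=5000):
            """The render stops as soon as ANY one of the three limits is hit."""
            return (elapsed_s >= max_time_s
                    or convergence >= target_convergence
                    or iterations >= max_iterations)

        print(render_finished(3600, 0.96, 2100))  # True: convergence hit first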

    To me, Nvidia has done a fabulous job on CPU, given that their thinking had to be mostly from the GPU side. I have so far been disappointed with LuxRender in GPU mode... enough so to keep using CPU mode; the quality difference was too much to satisfy me. I do feel wonderful that I happened to choose some really nice Nvidia cards, and now my mainstream application can finally take full advantage of them. :) A couple of 980s and a couple of 660s with a 6-core i7 really rocks! :) Now I want two more 980s. LOL! I find I can operate with the viewport in Nvidia mode at least 50% of the time and actually see what I'm going to get immediately, or within a few seconds. So testing before rendering is very easy and quick, much like being able to adjust lights while Lux renders. Iray rocks! Heck, Iray is so fast, I don't have time to adjust lights while it's rendering!

  • SnowSultan Posts: 3,773

    Thank you Pearbear, those look like good HDRs and if they work for you, they're good enough for us. :)

    I'm actually not sure if Studio can handle maps larger than 4096 x 4096 without problems.

  • ZarconDeeGrissom Posts: 5,414
    edited March 2015

    pearbear and 8eos8, if only my Iray lighting knowledge were that good. I'm struggling on CPU only, with an 8600GT that counts for diddly squat, lol. Your renders are incredible; they make my sorry attempts look like chicken scratches.

    As for GPUs, the 970 did have a flaw with that last 512 MB being slow; I just read about that. Also, it's 4 GB only. Don't I need way more than that, like 32 GB+ at this point?

    The best I found was a 6GB card, out of stock, and going for what I get a month :ohh: before bills.

    Dumor3D, you lucky... Why can't this CPU get with the program, lol.

    Pending_980itrs_43minutes.png
    663 x 761 - 596K
  • L'Adair Posts: 9,479

    Razor 42 said:
    First Iray render, straight CPU.
    Using the Skydome texture in the Environment settings and some heavily overblown distant lights, which seem to do okay if you turn off limits and blow them right out. I notice they leave some dots in the scene from their origin, though.

    There's a setting in the photometric lights called "Render Emitter." I'm rendering now or I'd go check the other lights, so I'm only assuming it's there for all of them. You can set that to "Off"... I suspect that will get rid of the white dots at the lights' origins. Certainly worth a try, anyway.
  • Dumor3D Posts: 1,316

    SnowSultan said:
    I think the last question I asked was buried in other posts, so hopefully you won't mind my asking again. Does Iray prefer spherical HDRIs or can you use any sort of environmental map? If someone who has had success with HDR lighting could recommend or give me an example of the type of map that worked for you, I'd appreciate it.

    If you have the Yosemite packs, number 11 from the second pack works really well. Plug the YosemiteHDR11.tif file into the environment and it gives great results. What I'm finding is that the HDRs in full sun don't work great as the backdrop image; they look a bit washed out. If you mistakenly load the environment version, which is normally a much smaller file, note that it is meant for HDR lighting and will look totally fuzzy; it was never meant to be seen. If you want to use a sunny HDR, you can turn off everything in the scene and adjust the environment settings until you have a good backdrop, render that, and then load that render into the backdrop channel under the other Environment tab... then reset the settings on your HDR Environment Map and you'll have a proper backdrop and proper HDR lighting at the same time. I hope that Nvidia will add the ability to use sIBLs at some point. The plus side of HDR image use is that renders are blazingly fast! Seconds instead of minutes!

  • TJohn Posts: 11,339

    Playing with tone mapping settings; sorry, I don't remember the names of the values I changed. Two mesh lights: one a large sphere above the well, one a plane behind the camera; both are set to 100 watts. Environment set to Scene Only. Set to render for 4 hours; I don't know how long it actually ran. CPU only.

    MinSeo_2.jpg
    1400 x 1500 - 431K
  • pearbear Posts: 227

    SnowSultan said:
    Thank you Pearbear, those look like good HDRs and if they work for you, they're good enough for us. :)

    I'm actually not sure if Studio can handle maps larger than 4096 x 4096 without problems.

    I'll have to test it out. I was using 10,000 px textures with the Octane for DAZ plugin, hope they work in Iray too.

  • Dumor3D Posts: 1,316

    Iray is a CUDA application. Yeah, OK, CPU works too. But what I'm seeing in my tests is that render speed is just about exactly linear in the number of CUDA cores your card(s) have. The time to generate the scene and transfer it to the card(s) seems to be the only factor keeping it from being 100% linear. My 660s render just as fast as my 980s on a per-CUDA-core basis. Or at least they are so close it really doesn't come into play.
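
    As a back-of-envelope sketch of that per-core scaling claim: the core counts below are the published specs for those cards, but the 30-minute baseline is invented purely for illustration.

        cards = {"GTX 660": 960, "GTX 970": 1664, "GTX 980": 2048}  # CUDA cores
        baseline_cores, baseline_min = 960, 30.0  # hypothetical: 30 min on a 660

        for name, cores in cards.items():
            est = baseline_min * baseline_cores / cores
            print(f"{name}: ~{est:.0f} min (ignoring scene build/transfer time)")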

    I have found the 660s at 2 GB will run out of VRAM fairly quickly. I've found I need about 0.5 GB free for other system use; before I begin a render, the cards seem to use around 300 to 500 MB of VRAM just running the basic system. So that only leaves about a gig for a scene. I've also found that only the cards with monitors plugged into them, and in particular the one with the main monitor, use a lot more VRAM. I suppose OpenGL is assigned to that card? Because of this, I have disabled the cards with monitors on them and left the other cards as the work engines. I find that 4 GB has so far been totally ample, though I haven't sent a super huge mega scene to them yet.

    So, 2 GB cards running the system and a render don't have much room left for a scene. I think this is why DAZ chose not to enable those by default. If you happen to have a 3 GB card, if Nvidia made any of those, I'm thinking you actually have twice as much room for a scene as with a 2 GB card, and a 4 GB card gives three times the space. If you have more than one card and can get the monitors off one of them, you might have decent success using that card.

    You obviously need enough room in VRAM to load the scene before you can make use of the CUDA cores... more cores equals faster. If you only do small scenes, you could string together a bunch of 2 GB $100 cards and wind up with more CUDA cores per dollar... but you could only load scenes with limited items in them.

    If you need a utility to look at what is happening on your card(s), TechPowerUp GPU-Z is a nice free one. Just note that, in my experience, when that utility showed 1.5 GB in use on a 2 GB card running the system, I was at the limit. I couldn't see the spikes in use, but I can only assume OpenGL or something was hitting the card hard, causing a system lockup, or an actual slowdown with notices that OpenGL had restarted. If I waited long enough, the system recovered, but my render had stopped and every program using OpenGL showed just a white window. That's why I suggest no more than a 1 GB scene on a 2 GB card running the system. That's not much of a scene.
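
    That rule of thumb boils down to a tiny budget calculation. The reserve figures below are rough guesses taken from the post, not measured constants:

        def usable_vram_gb(card_gb, drives_monitor=False):
            # ~0.5 GB for the system, roughly another 0.5 GB if a monitor
            # (and hence OpenGL) is on this card -- both figures are estimates
            reserved = 0.5 + (0.5 if drives_monitor else 0.0)
            return max(card_gb - reserved, 0.0)

        for gb in (2, 3, 4):
            print(f"{gb} GB card: ~{usable_vram_gb(gb, True):.1f} GB left for the scene")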

    Hope that helps!

  • L'Adair Posts: 9,479

    wwes said:
    scatha said:
    Vaskania said:

    No, he means Nvidia. Iray is made by Nvidia.

    Oh great, so we're back to the dark ages until Nvidia decides to play nice... just lovely.

    I think a lot of people are looking at this the wrong way. It is really a win/win. DS still has the 3Delight render engine, which has been improved, if I understand correctly. IN ADDITION, there is the option for Iray renders, which can take advantage of GPU rendering with an NVIDIA graphics card. But even without GPU rendering, Iray will work just fine from your CPU. Even on my anemic computer, I've already done some great unbiased renders, and the speed seems about the same as 3Delight. No one is losing anything with this update, and most people are gaining a great deal.
    I couldn't agree with WWES more.

    I have Intel HD Graphics integrated with my motherboard and I'm getting beautiful renders. Would it be nice to have an Nvidia graphics card with 4 GB RAM? Sure it would. But the only thing affected is speed. Before I went to bed last night, I set Max Time to the equivalent of 8 hours and started a render. When I got up, the render was done. It took "5 hours 45 minutes 42.16 seconds" (from the log file).

    I think it's unfair to complain DAZ has included an Nvidia product that only works with Nvidia GPUs. Should DAZ deny all of us access to this technology because some of their customers are using cards with AMD GPUs? It's not like we have to actually have an Nvidia GPU to use Iray. Or that our results are any different when rendering using the CPU. In fact, if the image needs more memory than the card has installed, it reverts to using the CPU anyway.
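
    That fallback amounts to a simple rule. Here's a toy sketch with made-up names, just to spell out the behavior described above:

        def render_device(scene_gb, gpu_vram_gb):
            """Iray falls back to the CPU when the scene won't fit in VRAM."""
            return "GPU" if gpu_vram_gb and scene_gb <= gpu_vram_gb else "CPU"

        print(render_device(3.2, 4.0))   # GPU
        print(render_device(5.0, 4.0))   # CPU fallback: scene too big
        print(render_device(2.0, None))  # no Nvidia card at all: CPU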

    And 3Delight is better than ever.

    Thank you, DAZ. :)

    P.S. Here's one of my earliest Iray renders

    Beta_iray_render_Dome_mode.png
    1000 x 667 - 829K
  • ZarconDeeGrissom Posts: 5,414
    edited March 2015

    pearbear said:
    I'll have to test it out. I was using 10,000 px textures with the Octane for DAZ plugin, hope they work in Iray too.

    The textures I used months ago on a wall were straight from Hubble (HST), and incredibly large.
    http://www.daz3d.com/forums/discussion/40627/
    It's the OpenGL in the viewport that has limits, to the best of my knowledge.

  • Toyen Posts: 2,029

    pearbear, that skin looks gorgeous!

    So, you didn't use the Iray Uber shader?

  • Dumor3D Posts: 1,316

    SnowSultan said:
    Thank you Pearbear, those look like good HDRs and if they work for you, they're good enough for us. :)

    I'm actually not sure if Studio can handle maps larger than 4096 x 4096 without problems.

    pearbear said:
    I'll have to test it out. I was using 10,000 px textures with the Octane for DAZ plugin, hope they work in Iray too.

    I have used 14932 x 7466. Of course that is a big image and gets loaded into VRAM. I'm finding the common 8000 x 4000 gives good results without too big of a hit.
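
    To put those sizes in perspective, the uncompressed footprint of an 8-bit RGBA map is just width x height x 4 bytes. Actual Iray usage depends on its internal formats, so treat this as a ballpark only:

        def texture_mib(width, height, bytes_per_px=4):  # 8-bit RGBA assumed
            return width * height * bytes_per_px / 2**20

        for w, h in [(14932, 7466), (8000, 4000), (4096, 4096)]:
            print(f"{w} x {h}: ~{texture_mib(w, h):.0f} MiB uncompressed")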

  • Dumor3D Posts: 1,316

    ACross said:
    I couldn't agree with WWES more. [...]

    And 3Delight is better than ever.

    Thank you, DAZ. :)


    Hear, hear! All well said. Nvidia developed CUDA tech. Nvidia developed CUDA programming. Nvidia developed Iray. I doubt Nvidia will give away any of this tech, any more than Apple gives away tech to Samsung. LOL!!! Nvidia gets Iray to DAZ and DAZ gives Iray to us. Now that is different! I guess I did get a 'free' phone from my provider, but I'm sure the monthly rate on the 2-year contract more than covers it. Now I get to choose how fast I want to render and choose hardware accordingly. I get to choose an iPhone or a Samsung phone or any of the other barrage of phones, and each one is different... pros and cons. I have full choice with the new beta 'and' no 2-year contract. LOL

  • Dumor3D Posts: 1,316

    I don't remember where I saw them, but there are some 'GPU appliances' out there which can hold one or more very nice cards. These can be plugged into some laptops or workstations, so basically you can share your investment between machines.

  • CypherFOX Posts: 3,401
    edited March 2015

    Greetings,
    Wow... What a difference a GPU makes. I rendered a scene on my Mac with the following timings:

    CPU (8 threads): 2054 iterations, 77.832s init, 21523.859s render.
    Total Rendering Time: 6 hours 1 minutes 44.26 seconds.

    I also rendered the same scene (attached below) on my Windows PC using CPU+GPU:

    CUDA device 1 (GeForce GT 740): 1892 iterations, 215.695s init, 11389.565s render.
    CPU (3 threads): 325 iterations, 192.695s init, 11316.635s render.
    Total Rendering Time: 3 hours 16 minutes 43.89 seconds.

    So despite my Mac having a significantly better CPU, my new GPU blew it out of the water. There is one difference: the Mac had 'Architectural Sampler' enabled and the PC didn't, but there's almost no difference between the two images. The sole noticeable difference is in the texture of the robe.

    For some helpful reference, the G2F in the image is set to render-time subD of 4 for HD morphs, the robe to 3 for displacement mapping. Max samples was set to 10,000 and max time was set to 21600 (6 hours, which the CPU-only renderer hit as a limit), and the default 95% convergence (which the CPU+GPU renderer hit). Vignetting of 0.2, ISO 200, 181.02 shutter speed, 14.5 exposure. (There's a relationship between the exposure, shutter speed and ISO.) The scene is Dome+Scene with the environment set to BWC Sky 16.tif, from BWC's Skies. I don't think it's efficient to use a Dome+Scene and then put another dome inside the Iray environment, so I didn't.
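
    Those numbers are consistent with the standard photographic exposure formula, EV = log2(fstop^2 x shutter speed) - log2(ISO / 100), if the f/stop was 16. The f/stop isn't stated in the post, so that part is inferred:

        import math

        # EV = log2(fstop^2 * shutter_speed) - log2(ISO / 100)
        ev = math.log2(16**2 * 181.02) - math.log2(200 / 100)
        print(round(ev, 2))  # 14.5 -- matches the quoted exposure value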

    In short, a significantly lesser computer (my Windows box) with a new GPU hit 95% convergence twice as fast as my CPU-bound Mac. I would love to know where the convergence was at the end of the CPU render. (Hear that DAZ? Please put convergence as a statistic in the log files!)
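
    A quick arithmetic check on those wall-clock totals:

        mac_cpu = 6 * 3600 + 1 * 60 + 44.26   # 21704.26 s, CPU-only Mac
        pc_gpu  = 3 * 3600 + 16 * 60 + 43.89  # 11803.89 s, CPU+GPU Windows box
        print(f"speedup: {mac_cpu / pc_gpu:.2f}x")  # ~1.84x
        # and the Mac hit its 6-hour cap before converging, so the real gap
        # to 95% convergence is larger than that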

    -- Morgan

    [Edit: Forgot to note! This scene took 2659MB of my VRAM according to GPU-Z; my monitor was being driven by my onboard video card, so none of that was interfering.]

    Nook_Iray_Tweaked_-_Windows_CPU+GPU.png
    2000 x 1500 - 4M
This discussion has been closed.