Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2026 Daz Productions Inc. All Rights Reserved.
Comments
I also see more reflected light from the ground in the picture than there is in the render.
Set the SS Ground Colour in the Environment settings to mid grey or lighter to get more reflectance without resorting to Glossiness which wouldn't look right.
I may have found a solution which works for me according to the first test.
The magic keyword is tonemapping!
First of all I rendered the image as usual.
Then I rendered the same image as an EXR in beauty mode (no nodes).
I opened that EXR in Affinity Photo and tonemapped it, and got an image which looks how I would expect it to.
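For anyone curious what "tonemapping" actually does to the linear EXR data, here is a minimal sketch in Python using the Reinhard operator, one of the simplest global tone mapping curves. This is only an illustration of the idea; no claim that Affinity Photo uses this exact curve internally.

```python
import numpy as np

def reinhard_tonemap(hdr, exposure=1.0):
    """Map linear HDR values in [0, inf) smoothly into [0, 1)."""
    scaled = hdr * exposure
    return scaled / (1.0 + scaled)

# Synthetic linear radiance values: deep shadow, midtone, bright window.
hdr = np.array([0.05, 0.5, 20.0])
ldr = reinhard_tonemap(hdr)
```

Note how the bright window value (20.0) compresses to just under 1.0 instead of clipping, which is why the result looks closer to what you expect than a straight 8-bit save.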
One thing I've noticed is that the backgrounds for HDRI seem too light compared to how much light they are throwing into the scene.
- Greg
As it should be. You can't see two environments with such different illumination as evenly lit. Not in a photo, and not with your own eyes either.
The big thing render engines are missing IMO is an eyeball mode. A mode that can mimic more of what our eyes see instead of what a camera can see. Not everyone is looking to fake a photograph; I want it to seem more like you are looking at a scene with your eyeballs, rather than looking through a camera lens. Probably easier said than done lol. For now we gotta do tricks to get what we need.
Not going to happen anytime soon
https://medium.com/photography-secrets/whats-the-difference-between-a-camera-and-a-human-eye-a006a795b09f
https://www.cambridgeincolour.com/tutorials/cameras-vs-human-eye.htm
In other words, it's not a matter of the iRay dev team not understanding the issue, it's that you're not taking the time to learn how to use iRay.
Just to be clear, path-tracing in Iray results in exactly the same thing being rendered to screen regardless of which hardware platform (RTX GPU, non-RTX GPU, Intel/AMD CPU) is being used. RTX hardware has no advantage over other platforms other than simply being faster (under most conditions).
What that photo also emphasizes is that simple concrete reflects way more light than what people generally find aesthetically pleasing, especially in their renders. Without that photo for reference, if we were asked to render such a scene with that kind of light, most of us would end up with a render where we can clearly make out every single tile of the street, if possible with a few sprigs of grass between them. But, looking at the photo, we can clearly see a reflection from the sun instead. So, either the tiles we'd render in Daz are not reflective or glossy enough, or the light intensity we use is too low. Most probably: a fair bit of both.
And from that, it's fairly logical to assume that the same applies to our indoor shots: the light entering our indoor scene is probably not intense enough, while at the same time many shaders we use are too matte.
It's logical that we set up things that way, we add details to be seen, not to turn invisible from overlighting.
But, maybe we should start to look at it from a different angle. Why do we put all those details in our scene? Do we place them there because they mean something, or do we put them there because otherwise our scene looks so empty? That is basically returning to setting up our scene in the first place, and deciding which parts are our subjects, and which are our secondary props, or fillers. We don't want our subjects to wash out in overlighting. The scene fillers however? Let them wash out a bit, they're just fillers anyway, right? The result might not be as bad as you think. But, you will at least end up with more light for the important parts you placed in more shaded areas.
I don't agree.
maybe they could implement an automatic tonemapping like affinity?
Very interesting. Yes, I have had problems with lighting in this context, though it's usually solved by fiddling with tone mapping settings and dialing lights to the "correct" intensity.
This is also a big issue for "correct" lighting. Your eyes adjust. The camera doesn't.
yeah, I figured that tone mapping trick out a few hours after my post. Never read anything about that in the thousands of tutorials I've read
Fair enough!
Quote from the opening post:
I was just pointing out adaptive sampling as one way of dealing with lowlight/indirect light scenarios, thinking pathtracing in general. So, with current IRay, what can be done, without adding ghostlights, removing ceilings, walls or using section planes?
1. Adjust exposure / tone mapping / render settings
2. Adjust shader/material settings
3. Adjust levels/gamma in postwork
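To illustrate option 3, here is a quick sketch of a levels/gamma lift in Python, assuming pixel values are already normalized to [0, 1]. The gamma value 2.2 is just an illustrative choice, not a recommendation for any particular render.

```python
import numpy as np

def gamma_lift(img, gamma=2.2):
    """Brighten midtones and shadows; values at 0.0 and 1.0 stay fixed."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

pixels = np.array([0.0, 0.1, 0.5, 1.0])
lifted = gamma_lift(pixels)
```

The dark end gets pulled up hardest, which is exactly why this is a common rescue for underlit interiors — and also why it amplifies noise in the shadows.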
Forgive me, I was more meaning the functionality unique to RTX cards; something that would happen 'naturally' as opposed to being a cheat. I refer to their presentations, which I always take with a pinch of salt, but they are still claims.
That depends on which Tone Mapping you are using.
There is the DAZ Studio Tone Mapping (which is actually referring to film and camera settings).
http://docs.daz3d.com/doku.php/public/software/dazstudio/4/referenceguide/interface/panes/render_settings/engine/nvidia_iray/tone_mapping/start
Or Photographic Tone Mapping (which changes the Colour Space of the image and remaps it to get a higher Dynamic Range).
https://en.wikipedia.org/wiki/Tone_mapping
These are two entirely different processes.
You can adjust Tone Mapping in DS, too - though the controls are simpler (gamma and white/black compression).
I'm talking about post-editing tonemapping with Affinity Photo. It's just two clicks. I don't do tonemapping in DAZ.
Besides: in real life, you have an atmosphere with dust particles in it bouncing off light as well. That is different from having a sky image background.
You'd also need a volume shader with dust particles to get closer to realism.
Nvidia has never made any claims that having RTX hardware would affect already purely physically based (unbiased) rendering applications like Iray in any way other than simply speeding up the rendering process. They've said plenty about it making a visual difference for biased rendering applications (i.e. games) by making the inclusion of some physically based rendering passes into the mix possible. But Iray was already 100% physically based even back before it was called Iray.
Excuse me???
Earlier, you showed how important tone mapping is to getting the lighting right - but you don't use it in DS!!! If you're only using half a tool you shouldn't be complaining when you get half a result.
Since I can only use iRay on CPU, I don't do a lot of work with it because it is so excruciatingly slow, so I didn't let these run very long. Just enough to show how important tone mapping is in iRay.
All the internal lights were turned off and the scene is lit only by an HDRI "shining" in through the windows.
The first has the default tonemapping of ISO 100 shutter speed 128. The second has ISO 500 shutter speed 10.
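Those two settings differ by more than the numbers might suggest. Image brightness scales roughly with ISO × exposure time, where "shutter speed" here means 1/t. A quick sanity check of the change, assuming Iray's tone mapper follows the standard photographic model:

```python
import math

def exposure_ratio(iso1, shutter1, iso2, shutter2):
    """Relative brightness of setting 2 vs setting 1.
    Brightness ~ ISO * exposure_time, with exposure_time = 1 / shutter_speed."""
    return (iso2 / shutter2) / (iso1 / shutter1)

# ISO 100 @ 1/128 s  ->  ISO 500 @ 1/10 s
ratio = exposure_ratio(100, 128, 500, 10)
stops = math.log2(ratio)
```

That works out to 64× the exposure, i.e. a full 6 stops brighter — no wonder the second render looks like a different scene.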
I think anyone can see that tone mapping makes a difference.
As long as it's the raw EXR file straight out of DS's temp folder that's getting tonemapped (rather than the JPG/PNG/Tif/BMP DS offers as output options) as a post-process, there's nothing wrong with that. Personally I'd even encourage that workflow for really important renders (that EXR is akin to the Camera Raw in digital still photography.) Yeah, you can do the whole thing in DS itself. But working from the EXR means that you can go back later and re-work the exposure entirely without having to re-render anything. It's the sort of post-processing latitude that conventional photographers can only dream of (I know because I used to have those dreams.)
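The reason the EXR gives that latitude is that it stores linear radiance, so an exposure change is just a multiplication applied before the display transform; there is no quantized 8-bit data to stretch. A minimal sketch of the idea:

```python
def re_expose(linear_value, stops):
    """On linear (EXR) data, +1 stop is exactly a doubling.
    Nothing clips until a display transform is applied afterwards."""
    return linear_value * (2.0 ** stops)

v = re_expose(0.25, 2)  # push a dark pixel up two stops
```

Doing the same multiply on an already-tonemapped JPG instead would crush highlights and posterize shadows, which is the whole argument for working from the EXR.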
And that is a perfectly fine work flow if that is what you want - but this thread started as a complaint about how iRay does internal lighting poorly.
As many have already pointed out, you won't get the results you want out of DS if you don't use the renderer the way it was intended.
I think Iray might still have the same problems that Cycles used to have before dynamic range (Filmic) was improved. Changing ISO and other tone mapping values has a tendency to wash away details, so it's not always a solution. For those who are interested, Andrew's post about it explains it much better than I can: https://www.blenderguru.com/tutorials/secret-ingredient-photorealism
Also, if they are comparing to "photographs" many of such are post processed to level out the dynamic range, or combinations of several photos with different exposures.
Reality has become very hard to judge when viewed through the work of others.
You can't rely on outdoor light to properly light an indoor scene. You need to setup square plane ghost lights at the windows to shoot some light inside the room.
More reason for someone to develop a good Daz to Blender bridge.
In DAZ I only use the Exposure Value, but I'm very careful about this because of the clipping.
Thank you, that video was super helpful for understanding.
So basically the issue is not iRay's light bouncing, it's the color system!
If we had a better color system we would have no clipping and would need no ghost lights.
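To make the clipping point concrete: with a hard clip, two highlights of very different intensity become identical, while a filmic-style curve keeps them distinct. The sketch below uses Narkowicz's ACES filmic approximation purely as an example curve — no claim that this is what any particular renderer uses internally.

```python
def hard_clip(x):
    """Naive display transform: anything over 1.0 is simply lost."""
    return min(max(x, 0.0), 1.0)

def aces_filmic(x):
    """Narkowicz's ACES filmic approximation (scalar form)."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return min(max((x * (a * x + b)) / (x * (c * x + d) + e), 0.0), 1.0)

window, sun = 2.0, 4.0  # two over-bright linear values
```

Under `hard_clip` both map to exactly 1.0 (the detail is gone), while `aces_filmic` rolls them off to two different values just under 1.0, which is why filmic curves reduce the need for ghost-light workarounds.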