Interesting article: The importance of true HDR in HDR images
Strixowl
in The Commons
The importance of true high dynamic range in HDR images
http://www.aversis.be/tutorials/vray/vray-high-dynamic-range-hdri.htm
Comments
That is all very legit. The difference between a good map and a bad one is very noticeable.
Awesome. Thanks for the info.
thx for the Link
I've been seeing this term flying around a lot recently. What are HDR images and how do they apply to Iray? I know it stands for "High Dynamic Range" but I don't really know what that means.
HDR is a term used differently in different industries. For example, in photography HDR means something a bit different (though there is some overlap in meaning).
Anyway, images you view on the computer are low dynamic range. They don't actually capture the real light that was in the scene. HDR captures light at different exposure levels and with higher precision. So instead of using an 8-bit image, it's 16 or more bits of data. This is very useful in 3D rendering. A good HDR image will make lighting look much better than an LDR image.
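The bit-depth point can be shown directly; a small numpy sketch with made-up radiance values (the numbers here are illustrative, not from any particular image format spec):

```python
import numpy as np

# Scene radiance for three points: one far brighter than display "white"
radiance = np.array([0.05, 0.5, 7.5])

# 8-bit LDR storage clips everything above 1.0 to the maximum value, 255:
# the brightest point is recorded as merely "white"
ldr = np.clip(np.round(radiance * 255), 0, 255).astype(np.uint8)

# 32-bit float HDR storage keeps the true value, 7.5x brighter than white,
# so a renderer can later use it as a genuinely bright light source
hdr = radiance.astype(np.float32)
```

The clipped 8-bit pixel is why an LDR environment map lights a scene so flatly: the sun and a white wall end up with the same stored value.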
In 3D rendering, HDR (and even LDR) images are often used to light the scene, usually in conjunction with other light sources. An HDR will give metals something to reflect and good background details. But it will also give the scene more depth, because every object will be influenced by the light. So an HDR of a forest will inject lots of greens into the subject, even if it is subtle. This helps the CGI image look more grounded in a real-world environment.
I had my hand in the HDR pie for a while so I'll attempt an answer.
The following was taken from my now defunct website and is a lot of text but covers most of the important aspects of the technology. I've left out details of how HDR is used in DAZ products and other 3D applications, so perhaps someone else can illuminate that area.
High Dynamic Range Imaging (HDRI or simply HDR) is a photographic technique for capturing and displaying the wide range of light intensity of the physical world, while attempting to preserve both the information content and the esthetic value of the scene, and while taking into consideration the capabilities and limitations of the display medium.
Whew, what a mouthful. More simply, HDR imaging is a more adaptable camera technique paired with more adaptable image display techniques.
Why does HDR Imaging exist?
To make “impossible” pictures possible. To capture a scene that is simultaneously too dark and too bright.
To the human eye light & dark are relative terms. How dark is black, how bright is white? How black is ink? How white is paper? How dark is your TV or computer monitor when turned off? How bright is your slide projector bulb? How bright is the sun? In the everyday world we can see with our eyes the darkness of a cave and the brightness of the sun, but we cannot capture with our eyes, or with most devices, both extremes of intensity at the same time.
All means of recording light are limited to a certain range of brightness or light intensity. This is quite clearly true of typical photography. For example, you've probably taken a photograph of people in the shade of a tree but later found that the beautiful sunlit scenery beyond the tree is greatly overexposed.
The goal of photography is to capture what we see. But what our eyes and mind see is not what the camera sees. The camera sees an instantaneous sampling of the physics in play in the scene; our eyes & mind, however, see many small areas of the scene at different "exposures" and calculate a translation and interpretation of what exists using the best information they saw. Similarly, HDR imaging is a technique to record nearly the full dynamic range of light in a scene, pick out the best sampling of information, and display it in a manner that attempts to recreate the feeling, ambience, and human interpretation of the original scene.
Think of the intensity of light in terms of musical octaves, each higher light octave being twice as bright as the previous. A typical outdoor scene (without looking directly at the sun) covers about 13 octaves or zones of brightness from the dark shadows deep in the bushes, to the glare from the water or sun reflections from windows. We can see this wide dynamic range of light because our eyes are dynamic instruments, the pupil is continually changing to create the correct “exposure” for where we are looking. The brain is a marvel of computation that uses continual comparison and localization and interpretation of many small areas of the scene to build up a mental picture of the entire environment without seeing it all at once. From the black snake in the grass under the bush to the white clouds in the bright sky. No normal photograph can capture this range of light dynamics. Normal film negative and typical digital cameras can record about 9 zones of brightness. Typical computer monitors can display about 8 zones, and ink on paper can only display about 7 zones.
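The zone counts above translate directly into contrast ratios, since each zone doubles the brightness; a minimal sketch using the figures from the paragraph:

```python
# Each zone/octave doubles the brightness, so N zones span a 2**N contrast ratio
def contrast_ratio(zones):
    return 2 ** zones

scene = contrast_ratio(13)    # typical outdoor scene: 8192:1
camera = contrast_ratio(9)    # film negative / typical digital camera: 512:1
monitor = contrast_ratio(8)   # computer monitor: 256:1
paper = contrast_ratio(7)     # ink on paper: 128:1
```

The gap is stark: the outdoor scene spans a contrast ratio 64 times wider than what ink on paper can reproduce, which is exactly the compression problem tone mapping has to solve.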
Another major use for full range HDR images is to archive a scene in full detail and to provide accurate light intensity and light direction information to 3D graphics programs. This technique is used to provide near-real lighting information for artificial scenes created by 3D graphics.
How is HDR Produced?
Given cameras with limited brightness range, we can capture a wide range of brightness by taking several photos of the same area with different exposure settings. For example, put the camera on a tripod so that it doesn’t move and take one picture at what the camera light meter says should be the proper exposure, then force the camera to take another photo that is deliberately overexposed, and then take one that is deliberately underexposed. By cutting out the overexposed areas and replacing them with the same area from one of the images with a better picture of that area, we can create a patchwork image that looks closer to what our eye/mind sees.
Computer programs to create an HDR image use a similar process but execute it in much finer detail and with more sophistication. Often 3 or 5 or 7 or more images are combined, depending on the accuracy and range of light dynamics needed. By artfully condensing the 13 zones of an outdoor scene into the 7 zones displayable on paper, we can come close to representing the mental impression of the original scene.
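A minimal sketch of that merging step, in the spirit of the classic weighted-average approach (synthetic data; real tools also recover the camera's response curve, which is assumed linear here):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed LDR exposures into one HDR radiance map.

    images: arrays of pixel values in [0, 1], assumed linear (no gamma).
    exposure_times: shutter time for each image, in seconds.
    A "hat" weight trusts mid-tones and ignores clipped pixels.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 0 at black/white clip, 1 at mid-gray
        num += w * (img / t)               # each shot's estimate of true radiance
        den += w
    return num / np.maximum(den, 1e-8)

# Synthetic scene radiance spanning a wide range (1000:1)
radiance = np.array([0.01, 0.1, 1.0, 10.0])
times = [1/4, 1/32, 1/256]                 # bracketed shutter speeds, 3 stops apart
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]  # camera clips at 1.0
hdr = merge_exposures(shots, times)        # recovers the full radiance range
```

Note how the brightest pixel clips in the longest exposure, so only the shorter exposures contribute to its estimate; that per-pixel reweighting is what "finer detail and more sophistication" means in practice.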
Full HDR Imaging and Tone-mapped HDR Imaging
HDR Imaging is a very versatile technology. The vast amount of information about the range of light in a scene makes it possible to simulate true lighting of a reproduction of the scene, if one had a device that could fully reproduce the captured light intensity. This capability is sometimes referred to as Image Based Lighting (IBL). While such devices are theoretically possible, their availability and use are still in their infancy. However, full or true HDR images of a physical scene are being used in the 3D graphics industry to provide realistic complex lighting for computer generated scenes, the movie Troy being a good example.
A more common use of HDR images is to use the full light range captured in an HDR image to produce an image that is not over- or under-exposed when displayed on paper or on a computer screen. This technique is what is displayed in this website in our gallery and in these discussions of HDR Imaging. It essentially compresses (or maps) the extreme light and dark values of the HDR image into the 8-bit range of color channel values that can be represented on the computer screen or on paper. This process is often called "tone-mapping" and the result is more properly called a "tone-mapped HDR image". This distinction, though, is often omitted when discussing HDR because it is usually obvious how the image is being used.
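A common global tone-mapping operator is Reinhard's photographic operator; a minimal sketch (the `key=0.18` mid-gray is the conventional photographic choice, not something from this thread):

```python
import numpy as np

def tonemap_reinhard(hdr, key=0.18):
    """Globally compress linear HDR radiance into displayable [0, 1) values."""
    eps = 1e-6
    # Log-average luminance characterizes overall scene brightness
    log_avg = np.exp(np.mean(np.log(hdr + eps)))
    scaled = key * hdr / log_avg       # map the log-average to the key value
    return scaled / (1.0 + scaled)     # compress: highlights roll off toward 1

# Radiance values spanning several zones, including one far above "white"
hdr = np.array([0.01, 0.18, 1.0, 50.0])
ldr = tonemap_reinhard(hdr)            # every value now fits an 8-bit display
```

The `x / (1 + x)` curve is the whole trick: it is nearly linear for dark values but asymptotically approaches 1, so no highlight ever clips, only compresses.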
What are Limitations of HDR?
Without special, very expensive cameras, it is difficult (if not impossible) to obtain a good HDR image of moving subjects. The need for multiple photographs of the same scene requires all objects to be unmoving during the entire time the exposures are being made. This also requires the light sources to be stable. For example, if the sun goes behind a cloud in one of a series of exposures, the final HDR image will be inaccurate. Also for this reason, the exposures must all be made within a few minutes to avoid changes in the angle of the sun, which would produce different shadow positions. Outdoors, wind is a major problem when photographing foliage. People are difficult to photograph because the slightest movement, such as breathing or eye movement during a 3- or 5-exposure session, will blur the final output.
Thanks for the replies! How do we use HDR images in IRay / DAZ?
I forget how to do it for Uber Environment (or am too lazy to tell ya)
but for Iray go to the environment settings under Render Settings. There is an environment map section. Insert map there. Someone can tell me if I'm wrong but that should be what ya need.
There's a lot of good information posted here. Absolutely, the term HDR or HDRI is, in my opinion, too broadly used. Add to that, IBL is also divided between pure IBL HDRs and sIBLs. sIBLs are available at HDRLabs. These carry an extra text file which can contain additional lights. Iray cannot use this extra information in sIBLs, which is the reason many of those HDRI files don't provide a really good solution for Iray renders. I have found a few nice ones over there. However, due to the extra lighting info that might be in a sIBL, the HDR itself does not have to have a really high dynamic range.
OK, the tech gibberish. Dynamic range. When you want to create an HDRI for 3D lighting, it needs to contain the full exposure range of the scene that has been shot. Basically, a proper exposure for the darkest shadows and a proper exposure for the brightest light, which might be the sun on a clear day, and then a range of images taken in between. The shots should be taken at about a 2-stop increment. I find that on a brightly lit day, I'm taking 9 exposures in 2-stop increments, which gives me an exposure value range from minus 8 stops to plus 8 stops, or a range of 17 exposure values. This is a huge range. The HDRI assembly allows the sun to be as bright as it is in the scene that was photographed and allows the shadows to be as dark as they are... with regards to how the image 'creates the lighting' for your scene. If you think about the 'chrome ball' or 'probe' in a scene, a HDRI will light that ball and you will see a reflection of the scene on that ball. It's really obvious how the light is playing on a chrome ball. It does the same on figures, props and such, although it's not as obvious, but extremely realistic. Gosh, I hope I said that in a way that makes sense?
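That bracketing plan can be sketched in a few lines (the 9-shot, 2-stop figures are from the post above; the 17 exposure values come from counting both endpoints inclusively):

```python
# Bracketing plan: 9 shots, 2 stops apart, centered on the metered exposure
offsets = [2 * i for i in range(-4, 5)]   # EV offsets: -8, -6, ..., +6, +8
span = max(offsets) - min(offsets)        # 16 stops between darkest and brightest
values = span + 1                         # 17 exposure values, endpoints inclusive
ratio = 2 ** span                         # brightest frame gathers 65536x the light
```

Each stop doubles the light, so a 16-stop span means the longest exposure collects over 65,000 times more light than the shortest, which is why one frame can hold deep shadow detail while another holds an unclipped sun.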
If we are talking about using a HDRI image as a light source and a backdrop, I find the image needs to be at least 8000w x 4000t pixels, to have a resolution high enough to provide a nice background. Smaller HDRIs can provide great lighting, but not a good background.
To use a HDRI in Studio, (if it is not a product from the DAZ store) go to the 'Render Settings' tab, then to environment. Under 'Environment Map' browse to your HDRI image to load it. If you want to use it as a backdrop, you'll need to turn 'Draw Dome' On. You still won't see it until you do one of the following. Set Aux Viewport to NVIDIA mode and it will show there. Set the main viewport to NVIDIA mode and it will show there, or run an Iray render.
One of the fantastic things about using HDRIs is they render extremely fast! Also, they contain real world lighting, so you get realistic results without a lot of fuss.
I am attaching to this post my "Quick Start Guide to the Iray Dome and the Use of HDRIs" which is included with my "Apocalyptic Plant Outdoors Set 1 - Iray HDRIs" http://www.daz3d.com/apocalyptic-plant-outdoors-set-1-iray-hdris
And some HDRIs are made simply as a light source and not for use as a background image. What I've talked about here is the use of a HDRI as a light source and a background image.
Hope this helps!
HDRIs suitable for a light source are very hard to create.
I struggled for a while trying to get Carrara sky renders to produce noticeable lighting before just giving up. I can arbitrarily plop light and dark on a map and get that to provide lighting, but anything actually image-like? Newp.
...the really good high definition HDRIs with true physically correct sunlight and shadows cost, and cost a lot. Who here has $99 to $2,000 (yes, I've seen them that expensive) for a single HDRI set?
Basically, like Dreamlight did with LDP, we have to "fake it" using a photometric distant light and a more affordable HDRI sphere (like those in Skies of Economy) rather than using the more accurate sun light provided in Iray.
Maybe that explains why HDR images rendered in Vue turn out to be crap when rendered in Iray !
No...that's not it.
You have to have the Vue settings just right (and no, I don't know exactly how to do the settings, but I think you need the 'pro' levels of Vue to actually expose the needed settings).
Two key things...NO gamma correction. None. When saving it MUST be 1.0. And the second, the highest bit depth possible (minimum 16 bit, but 32 would be better).
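A tiny numpy sketch of why the gamma setting matters (illustrative values, not actual Vue settings): the renderer reads the pixel values as linear radiance, so any baked-in display gamma distorts the relative brightness of the lights.

```python
import numpy as np

# Linear radiance for two points, one 4x brighter than the other
linear = np.array([0.1, 0.4])
ratio_linear = linear[1] / linear[0]   # ~4.0: the true 4:1 brightness ratio

# Saved with a 2.2 display gamma instead of 1.0, the same two pixels
# encode a much smaller ratio, so a renderer treating them as radiance
# sees badly compressed, flat lighting
encoded = linear ** (1 / 2.2)
ratio_encoded = encoded[1] / encoded[0]   # 4 ** (1/2.2), under 2:1
```

The higher bit depth matters for the same reason: 16- or 32-bit floats can store those linear values across many stops without the quantization an 8-bit file would impose.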
Steve@home did a bunch of them, and they work nicely...but they are not just straight Vue renders, they have been postworked.
That's OK, I already knew you needed the 32-bit depth to get a good render, but it makes the size way too big. I wanted to make a Mars set but just one HDR image was over 100 MB.
Ummm...that is about right for a 'full range', decent resolution image. If you want a 'lighting only' image you can cut down the resolution to something more manageable (like 1024 x 512). Then do a high res jpg/png image for 'backdrop' usage.
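The file-size trade-off is easy to estimate; a minimal sketch assuming uncompressed 32-bit float RGB (real .hdr/.exr files compress somewhat, so treat these as upper bounds):

```python
def hdr_size_mb(width, height, channels=3, bytes_per_sample=4):
    """Uncompressed size of a float32 RGB HDR image, in megabytes."""
    return width * height * channels * bytes_per_sample / (1024 ** 2)

full = hdr_size_mb(8000, 4000)   # backdrop-quality pano: ~366 MB uncompressed
lite = hdr_size_mb(1024, 512)    # lighting-only pano: 6 MB uncompressed
```

This is why the "100 MB per image" complaint above is entirely plausible for a backdrop-resolution HDR, and why dropping to 1024 x 512 for lighting-only use is such a large saving.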
Terrific article, thanks for posting!
I use HDR for light and I often reduce the size of the image for VRAM considerations. It doesn't have a noticeable impact on the lighting quality, but it's a huge savings on VRAM, especially because those images aren't simple 8-bit jpgs; they take up a ton of space. Though VRAM isn't a real restriction most of the time for me... I still prefer to be efficient out of habit.
Some 3D programs let you make your own HDR images, which can be useful for your own renders if you want a particular environment and can't find it online. I've had some success doing this with Octane, but typically I depend on others' HDRs, or my own LDR when appropriate.
Glad you all like the article at the link and have added so many wonderful comments. I think my brain may explode
The way I look at is...
Is the combination of 'set', backdrop image, and HDR going to take more space than just a higher-res HDR that can be used for both lighting and backdrop? For 'simple' scenes, like a car, I find it 'cheaper' to use the single, high-res, backdrop-worthy HDR.
What it comes down to is a bit of planning to make the best use of your resources.
Does anyone know of a free (or very cheap) application that can make IBL backdrops properly?
I'm not the type that typically uses HDR as a backdrop, especially not the primary backdrop. But I completely understand where you are coming from. For a car render I could see how the backdrop is unimportant, so an HDR backdrop may be tolerable.
And in the rare scenarios where I use a backdrop image, it's typically something I made custom for the scene, so not really interchangeable with a stock HDR image. Two issues I have with HDR as a primary backdrop are:
A. It's still "stock" art that people may have seen a billion times. And if they haven't, you might...
B. They don't fit the overall goal of the render. They may have the right lighting, but maybe you would rather it not be seen.
I am cheap, so I tend to use the same HDRs I've got over and over. If I don't use them as a backdrop, then people don't know that. So finances are one of the resources to monitor as well.
@timmins.william You can technically make IBL from any 2D program... but most would argue they aren't proper! (I do this myself sometimes; no replacement for HDR, but if you want to light the scene primarily with other lights and use the IBL for flavor, it can be fine.)
But creating HDRs in particular is a different story. Most people get their HDRs from expensive cameras and very particular workflows. But you CAN make HDR images other ways. I just don't think the options are cheap, and probably not plentiful. I know you can export .exr HDRs from Octane Render, but obviously you wouldn't buy Octane Render for that purpose...
I'm confused how the HDR images are used as lights. Does iray somehow know how to interpret the picture to generate the proper light for the scene? Are there any settings, or do you just slap a valid HDR image into the environment map, change the dome orientation to your desired angle and press render?
Yes, proper HDR images for use in 3D applications have a huge range of brightness and darkness in them, beyond what can be viewed on your computer screen. When using a proper HDR, it is that additional information that is used by Iray to light the scene.
Actually, when you start Studio, there is a default HDR loaded. It is an HDR that is designed for lighting purposes and not a clear background, but if, under Render Settings->Environment, you set Draw Dome to 'On', then run a render or an NVIDIA view in your viewport, you will see the fuzzy image that is used. (This can work positively, as an out-of-focus background for some types of scenes.)
So yes, HDRIs are interpreted by Iray. Many physically based render engines have HDRI lighting abilities, and even 3DL has some of this ability, but we won't go there. Iray in Studio, Bryce, Carrara, Luxrender, Octane... and on and on the list goes. Good HDRs will provide you with instant lighting... as easy as it gets... and Iray understands this very quickly and will render using HDRI lights faster than any others I have tested.
They are also extremely useful for providing an outdoor scene and outdoor lighting for an interior scene. If a 'room' has windows or doors, something light can pass through, or something you can see through, using an HDR will provide a visual exterior environment and also natural lighting coming in through the doors, windows, or other openings.
So, what you're saying is that they are a simple solution to one of the most annoying things in my workflow?
You have any plans on making any non-apocalyptic HDR sets? :p
HAH!!! Yes! Full lighting solutions and fast render speeds! What more can you ask for?
Yes, I actually am working on a number of other shoots as I write this. One being processed is a small campsite in a small clearing in the woods. I have some others going in field areas with varying lighting, such as full sun, light clouds, and full clouds. The Apocalyptic set was shot intentionally under full overcast, which to me is a good 'feeling' saying the environment is ruined. BTW, that set works great as lighting, sort of like an Uber 2 light in 3DL. The light of course won't pass through a roof or whatever, but it can provide a nice ambiance to a scene where the Dome itself doesn't show at all. I did this scene using one of those HDRs in combo with one plane set to emissive. The plane is the main light, while the HDR filled in what would have been extremely dark shadows.
http://www.daz3d.com/gallery/#images/76341/
I have a few other examples which also use these HDRs in different ways. For instance this http://www.daz3d.com/gallery/#images/68467/
uses only the HDR to light the scene. There are enough windows, doors and openings to let enough light inside.
There are applications that can take images at various exposures and bring them together to make a proper HDR image. I've experimented. It never works. :/
Yeah... It's hard to make good HDRIs! When you combine in all the variables, from camera and lens and pano heads to working with images where you can't actually see completely. Yes! It's hard.
Blender can also, and is free. I have heard of people playing with creating HDR images for lighting in Blender but I haven't played with this myself so can't say what it is capable of, or what limitations one might have. I can say that Blender has quite extensive capabilities in the compositing module, which can extend what comes out of the render engine (cycles) quite a bit, but this requires an understanding of node based compositing.
A side note worth mentioning: the .exr format is not just for HDR images. It can also save multiple light path information for dynamic recomposition/relighting of a scene after rendering. The one Dreamlight product (forget name) was a way of bringing some of the functionality of the .exr format for relighting a scene (only one part of .exr, along with z-depth information, etc...) to 3Delight. AFAIK, there isn't a product for Iray in DAZ Studio that does that yet.
I'm sure many know this but for those that are newer to this, or familiar with HDR only in either Photography or 3D lighting this information might be useful.
HDR is used differently in photography and in lighting for a 3D scene, but the base is still the same. In simple terms, the 'dynamic range' is the range from brightest white to darkest black. Modern dSLRs can capture a wider range than displays can display, printers can print, or, for that matter, than an eye can process in a single focus of the eye. They do this with something called RAW format. Photographers will extend even this to a much wider dynamic range (whitest white to blackest black with all of the hues in between) by taking multiple images, darker and lighter, and combining them to create the widest possible range. This is the same whether creating an HDR photograph or an image for 3D lighting. The process varies some based on its end use, but the basic concept is the same: to create an image with the widest dynamic range possible.
In 3D lighting, the render engine software uses all of this extra information in creating lighting for the scene. The wider the dynamic range, the more the software has to work with. The more skill in creating the image, the better the end results. These two are not always related.
In photography (creating an image directly from the HDR), it provides the photographer a wider palette to create his/her image. Things that fell into indiscernible shadow or blown-out white can be expanded to become visible, and hues and colors that appear bland or washed out can be made very colorful if desired. To envision this, think of the light/color information as an accordion, where what we see are the tips of the folds and where all of this other color/light information is hidden in the grooves. With skill and vision, these visible tips can be moved and manipulated to show a different image than what we first see exposed from the camera. Think of how various animals can see a wider range from black to white than humans; we can basically recreate this effect so we can see it. This information is there in the image; it just needs to be emphasized differently than in the original image. This is different, for instance, than creating a layer above some part of an image, putting a tone to that layer, and using layer mix modes to 'add/enrich' an area, which is adding new information.
Part of the point of explaining this is that in different environments (photography/3D lighting) people will use the term differently, but the base concept is actually the same. The implementation and use is very different and so can be very confusing.
With Blender you have to:
It is very simple.