Interesting article: The importance of true HDR in HDR images


Comments

  • Joe Cotter Posts: 3,259
    edited July 2015

    That's the basics, but to get a 'good HDR' to be used as an image probe*, there is a bit more to it than that. Those steps alone won't give you the dynamic range you'd want for good IBL; it's the capture and expansion of a much wider dynamic range that is in question. (Thus the comment about compositing.)

    This actually is an area of interest, because one could develop a smaller HDR image map for lighting and an LDR backdrop image/dome that is specifically part of the scene, rather than some random HDR image of a parking lot, etc... It's just that this is still in its infancy. For 3D graphics, I believe this would be a much better workflow; it's basically the same as what people are trying to do with VUE now. However, while VUE has some distinct advantages for creating outdoor scenes and would seem optimal, having it all in one package for the final render has advantages too, and VUE is behind the curve on some aspects of its render engine imo. Also, VUE doesn't have a node-based compositor from what I remember (could be wrong, it's been a while since I used it), which would facilitate expanding the dynamic range of the image toward true HDR.


    *Techy way of saying an image used for lighting.

  • Dumor3D Posts: 1,316

    Gedd... Yes!

    Here's another way to think of the 'exposure range'. The iris in a human eye adjusts quite quickly. If you stand outside your house on a bright day with the front door open and look at the sky, perhaps above the door, your iris will close down so you can see the sky. If you keep your eyes on the sky but pay attention to the room behind the door, it's very dark. When you move your eyes to focus on the room behind the door, your iris opens up... at least some, and you start to see more detail inside the room. It's the same for a camera... the lens aperture will open or close (or the camera will use one of its other ways of adjusting exposure) in order to capture a proper exposure of the main subject. So in one shot, a camera can only properly expose the sky or the room behind the door, not both at once. For HDRI, we need exposures for both the bright sun and the shadowed areas in that room behind the door: almost total darkness to what we know as total light.
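
    A rough way to put numbers on that (the stop counts here are my own illustration, not from the post): each EV, or 'stop', is a doubling of light, so the gap between the sunlit sky and the dark room multiplies up very quickly.

    ```python
    # Illustrative arithmetic only; both stop counts below are assumptions.
    scene_stops = 17              # assumed span from dark interior to sunlit sky
    single_exposure_stops = 10    # very roughly what one camera exposure can hold

    scene_contrast = 2 ** scene_stops               # ~131,000 : 1
    exposure_contrast = 2 ** single_exposure_stops  # ~1,000 : 1

    print(f"scene contrast     ~ {scene_contrast:,}:1")
    print(f"one exposure holds ~ {exposure_contrast:,}:1")
    # Anything outside that window clips to black or white, which is why one
    # shot can expose the sky or the room behind the door, but not both.
    ```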

  • Joe Cotter Posts: 3,259

    I forgot to mention earlier: thanks, Strixowl, for the excellent link. It covers some important points about HDR in 3D Image Based Lighting (IBL) in an easy-to-understand manner, with very good examples. It is one of the best I've seen so far. :)

  • mjc1016 Posts: 15,001

    Yes, but ease of scene element generation is not one of Blender's strong points. Vue and Terragen will both generate quite usable 'worlds' easily... in Blender it's doable, but a lot more work. 'Studio'-type lighting is pretty easy in Blender.
     

    Set up your lights and render as above...

  • larsmidnatt Posts: 4,511
    latego said:

    With Blender you have to:

    1. select the Cycles render engine;
    2. in the camera properties, set the lens type to Panoramic with the Equirectangular panorama type;
    3. set the render dimensions to any 2:1 aspect ratio (e.g. 4096x2048);
    4. when saving, use an HDR format, i.e. .HDR or .EXR (see the scripted sketch below).

     It is very simple.
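
    For anyone who would rather script those four steps than click through the UI, here is a minimal sketch run from Blender's Python console. It assumes a 2.7x-era Cycles API (property names such as cam.cycles.panorama_type may differ in newer Blender versions), and the output path is only an example.

    ```python
    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'                 # step 1: use Cycles

    cam = scene.camera.data                        # assumes the scene has a camera
    cam.type = 'PANO'                              # step 2: panoramic camera...
    cam.cycles.panorama_type = 'EQUIRECTANGULAR'   # ...with equirectangular projection

    scene.render.resolution_x = 4096               # step 3: any 2:1 aspect ratio
    scene.render.resolution_y = 2048
    scene.render.resolution_percentage = 100

    img = scene.render.image_settings              # step 4: float HDR output
    img.file_format = 'OPEN_EXR'
    img.color_depth = '32'

    scene.render.filepath = '//equirect_probe.exr' # example path, relative to the .blend
    bpy.ops.render.render(write_still=True)
    ```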

    @Gedd and @latego, thanks for that info! 

  • Joe Cotter Posts: 3,259

    Yes, thanks latego for putting up the basic steps, an often-skipped starting point that's a big help for anyone looking to play with a technique. :)

  • LeatherGryphon Posts: 11,164
    Dumor3D said:

    There are applications that can take images at various exposures and bring them together to make a proper HDR image. I've experimented. It never works. :/

    Yeah... it's hard to make good HDRIs! When you combine all the variables, from the camera and lens and pano head to working with images you can't actually see completely, yes, it's hard.

    Yes, it's hard.  I've made a few good ones, and spent $$$$ on equipment and software and lots of practice, and it still takes me an hour to take all the photos required for a spherical HDRI, and an hour or two to stitch them.  I automated a lot of the process, but it sometimes still needs tweaking and manual photoshopping to remove artifacts, and yes, sometimes the automation just doesn't work.  Even manual stitching sometimes isn't possible.  You have to be very, very careful with your camera stability and consistent settings.  Yeah, an automatic multi-exposure HDR camera on a motorized heavy-duty tripod would make it a lot easier, but now you're talking $$$$$, and you might still have trouble with automatic stitching, because the stitcher depends on detail in the image to "lock" onto.  If there are areas of blue sky, still water, or monotone building walls, it may not be able to stitch correctly.

    I still have my camera and lenses and special tripod head and a heavy duty tripod and all my software but I've pretty much abandoned HDR work.  I couldn't get it to pay.

  • Novica Posts: 23,859

    Thank you to everyone who is taking the time to explain this to us :) I know it takes time to type all that out as an explanation. It's very much appreciated.

  • CypherFOX Posts: 3,401

    Greetings,

    So... I'm going to be naive and silly for a moment.  I have an old DSLR (a Canon EOS 10D, a crappy 3Kx2K), an adequate lens, and it has the ability to take multiple shots at different exposures that can be composited together.  I can set a 'spread' of 3 exposure levels, and I typically set it at two down, middle, and two up, and shoot in RAW mode, which also captures a bit more color depth per pixel anyhow.  This is fine for creating a photographic HDR image (where you can see into the dark crevices and yet still see the clouds in the sky).

    In order to take a surround HDRI that I could use to render with, I'd have to then take multiple pictures in a panorama, right?  Stitching them together's probably not too hard, it just needs software that doesn't bring the images down into LDR format first.

    But are three exposure levels enough for anything remotely useful?  I'm not looking to sell them, it's more...if I wanted to take a picture of my driveway, and then throw a rendered shiny car on it, or something, and want the shadows to be right, etc... :)

    --  Morgan

     

  • Dumor3D Posts: 1,316
    CypherFOX said:

    In order to take a surround HDRI that I could use to render with, I'd have to then take multiple pictures in a panorama, right? ... But are three exposure levels enough for anything remotely useful?

    --  Morgan

    Hi Morgan. I found that was not enough of an exposure range. At the moment I'm taking 9 shots at 2 EV intervals, for a range of 17 EV. For a sunny day, this is what's needed. On days with less light contrast, fewer shots are needed (but I do them anyway and just dump the ones that are of no use).

    What you want is a range that goes from just shy of total whiteout to just shy of total blackout. During a pan, it might be the sun, or the direction of the sun, that needs the fastest exposure in order to get down to almost total blackout. And it might be another direction, holding the darkest shadows, where you go for everything but total whiteout.

    I hope that helps! You could use your bracketing to get 0 EV, -2 and +2, then manually set your camera down... hmmm... let's see, to -6 EV and get -4, -6 and -8... then up to +6 EV to get +4, +6 and +8. I did that for a bit and mostly wound up losing my place, either during the shoot or when sorting the images. :) I bought a Promote Control, which can do exposure bracketing. A spendy remote, but an awesome product.
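
    As a quick sanity check on that bookkeeping (the base offsets are from the post above; the snippet is just my sketch): a 3-shot, ±2 EV auto-bracket fired at base settings of 0, -6 and +6 EV covers the whole set in 2 EV steps.

    ```python
    # Sketch of the bracket plan described above.
    base_offsets = [0, -6, +6]   # dialed in manually between bracket bursts
    bracket = [-2, 0, +2]        # what the camera's 3-shot AEB adds to each base

    shots = sorted({base + step for base in base_offsets for step in bracket})
    print(len(shots), "exposures:", shots)
    # -> 9 exposures: [-8, -6, -4, -2, 0, 2, 4, 6, 8]
    ```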

    There is a fair amount of good reading on this, if you can sort past the HDR-for-photos portion. Another thing with doing the panos is that you need a special head so you can set the camera up at the lens's nodal point. Otherwise the scene will shift a bit as you rotate the camera. This is the same effect as looking out of your right eye and then out of your left eye: vertical lines in particular shift, so the images in your two eyes aren't the same... and thus the images in the pano aren't the same. This leads to bad stitching and fuzzy images.

    There are a number of free software packages available that can stack the exposures and stitch the images. So give it a go and see what you come up with. It's not like you're burning up 'film'. With the 5 EV range you most likely won't get a good HDR for lighting, but you could very well get a great background. You may need to add a light in the scene. It's a lot of fun to bring the field of photography into Studio! :) And if you're like me, the Iray lights are totally logical when thought about in terms of photography. Yes! This is fun.
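
    For the stacking step specifically, one free option is OpenCV's HDR module. Here's a minimal sketch, assuming the bracketed frames for one view are already aligned (shot from a tripod) and you know each frame's shutter time; the file names and times are made up for the example.

    ```python
    import cv2
    import numpy as np

    # Hypothetical bracketed frames of one view, darkest to brightest, already aligned.
    files = ["view00_ev-8.jpg", "view00_ev-4.jpg", "view00_ev+0.jpg",
             "view00_ev+4.jpg", "view00_ev+8.jpg"]
    images = [cv2.imread(f) for f in files]

    # Shutter times in seconds, matching the files above (example values).
    times = np.array([1/4000, 1/250, 1/15, 1.0, 15.0], dtype=np.float32)

    # Recover the camera response curve, then merge to a floating-point radiance map.
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    # Radiance .hdr keeps the float values; feed this to the stitcher, or use a
    # single merged view directly as a (partial) light probe.
    cv2.imwrite("view00.hdr", hdr)
    ```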
