This topic is semi-related to the previous one, “To Post or not to Post.”
In this one, the hope is to explore the different ways people approach the wonderful, dynamic puzzle of 3D. First, a bit of explanation. Working with premade content is not what most people expect when they first come to it. They see a picture, buy the item, load it (finding items is a separate topic, so I won’t go into it in this post), eventually put the various parts into their scene… and hit the render button. And… it doesn’t look like the promo. What happened (or didn’t)?
Well, the short answer is that 3D art is made up of many different parts that all have to come together to make a final image or animation. It ends up being up to us as artists to fit these various parts together to create the final image. How any given artist approaches this will vary with their experience and goals. There is so much valuable information in our community on how people do this that it seems a good idea to take a moment to explore the topic.
Here is an image I rendered straight out of the box. It came out okay, but it wasn’t what I was going for in this case. The model is Requiescat, which requires Reparation. So this is the first part of the puzzle: we need a base product to go with the product we want. Simple enough for most of us here, since we understand this, but it is confusing to new people.
The second part of the puzzle: we can render to PNG for a transparent background, or to JPG for a smaller file size. If our goal were to use this as a piece of a composition, then PNG with an alpha channel would be the better choice. But notice the white fringing… that is because I rendered it with a transparent background and then simply did a fill in my photo-editing software. If we were doing a composition, we would need to deal with this.
Again, this image has fringing; if I just wanted to keep the dark background as it stands, I would be better off saving without alpha. But the more important point is that this doesn’t look anything like the promo, and it is definitely not what I want. There’s no detail; it’s all just shadow with a single small point light.
For this next step, I added UberEnvironment for the ambient light and a directional light for the moonlight, with raytraced shadows on the moonlight. Render time went from around a minute to about fifteen. An interesting side note: it originally took much longer, and I knew from experience that it shouldn’t for the setup I had, so I canceled the render. The first place I looked was for shadows on the other lights, since enabling shadows on every light in a scene will dramatically increase render time. That wasn’t it. Then I noticed that what I thought was applying a texture to the original model had actually loaded a second copy of the model. This is usually noticeable in the preview window, but it wasn’t in this case because the two copies overlaid each other so closely that it wasn’t obvious.
*Click on this last image to see it full size.
So in summary: it may seem obvious to many of us that these are components in a puzzle, but for new people especially, it is easy to think they are more fleshed out than they actually are. Some products, especially newer ones, do render out of the box like the promos, but even then, integrating them into our projects requires retrofits.
The goal isn’t to go into too much detail, or even to use pictures in the discussion. I put more detail into this initial example simply to clarify the concept. The real idea is to explore the ways different artists approach using these components to create a finished product. In most cases that can be done with a simple text post of an example workflow.
In this example, the goal was to create a piece that would serve as a basis for future Halloween-type renders. As such, the final piece was reframed to contain the entire model, and I would typically render one version with and one without the background, saving the render settings as custom presets if I were planning to reuse them. It was just an example piece in this case, and definitely not a ‘work of art,’ but hopefully it served its purpose in showing how we can take a product out of the box and, often with some simple modifications, come up with something.
- Setup: around 10 minutes
- Render: 14m 11s
- Hardware: i5 760, 12 GB RAM, 9800 GTX w/ 512 MB memory
If DS can save renders as TIFFs, you might try that (we don’t render in it, so I don’t know whether it can). The alpha channel can be used as a mask in any image editor that supports layers, masks, and alphas.
On the render with fringing, have you tried the defringe filter? Or tried rendering to a “green screen” and compositing in the image editor?
Well, technically DS doesn’t render anything; the render engine it uses by default is 3Delight. Some people’s workflows will be very different because they use different render engines for their final output. I am just learning Lux/Luxus myself at the moment, and I would be using Octane if I could afford it. 3Delight does have some features not available in unbiased render engines (i.e., Lux/Octane), but that is a separate discussion.
On the specifics of removing the fringing, there are actually a couple of good threads on that. If anyone wants to put up the links (I don’t have them handy at the moment), they might be useful. It happens to be an area I need to take the time to get down in my own pipeline. Yes, 3Delight does output TIFFs, though I believe they have the same fringing problem, as does rendering against a ‘green screen’ (or whatever unused color you prefer). As for a defringe filter… I’m not familiar with it.
[Edit] The fringe issue was extremely simple to fix, in this case at least. I was trying to ‘fill’ the transparent area, which was what created the halo effect. That is not actually my normal workflow, but for some reason I was taking this shortcut without thinking. Once I created a layer below the image and filled that instead, the fringing went away. The added advantage, of course, is that in many cases this is a non-issue, since one is usually overlaying the object in question on top of another image, and one can use layer blend modes (the whatchamacallits… my brain blanks on the simplest things sometimes) to affect exactly how the two layers integrate in the fringe areas.
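For anyone curious why the fill caused the halo while a layer below did not, here is a minimal sketch of the arithmetic in Python. It is plain “over” alpha compositing; the pixel values are made up for illustration:

```python
def composite_over(src, src_alpha, dst):
    """Blend one channel of a semi-transparent source pixel over an
    opaque background: out = src * a + dst * (1 - a)."""
    return round(src * src_alpha + dst * (1.0 - src_alpha))

# A hypothetical anti-aliased edge pixel: 50%-opaque red (255) that
# should blend with a dark fill (red component 20).
edge_red, alpha = 255, 0.5
fill_red = 20

# Filling the transparent area on the same layer ignores the partial
# alpha, so the edge pixel keeps its full value: a bright fringe.
fringe = edge_red

# A filled layer *below* the image, blended through the alpha channel,
# pulls the edge pixel toward the background instead.
blended = composite_over(edge_red, alpha, fill_red)

print(fringe, blended)  # 255 vs. 138: the halo versus the proper blend
```

The same math is what the image editor does automatically when a layer with an alpha channel sits above a filled layer, which is why the simple layer-below fix works.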
On a side note: in the previous thread that inspired this one, people were posting great examples of this, better than the one I presented here, but I thought a separate thread dedicated to the topic would be helpful. I wish I could copy some of that thread over to this one, but hopefully it will fill up on its own with inspiring stuff.
A note on TIFFs for anyone not familiar with them: TIFFs are lossless (typically uncompressed, though the format also supports lossless compression). That means one can edit and save them repeatedly without degradation from multiple recompressions; it’s like the difference between copying a DVD/CD with an exact-copy method versus dubbing a VHS or cassette tape (for those who remember those). Hence many artists save in TIFF rather than JPG or PNG right up until the final image is prepared for output. So thank you, icprncss. The winning answer here would typically be to save as TIFF at this stage, with an alpha, understanding that a layer under the image combined with the alpha channel should sort out any fringing.
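The “no generation loss” property is easy to demonstrate. A minimal Python sketch using the standard library’s zlib (the same DEFLATE compression PNG uses internally; this illustrates lossless round-tripping in general, not the TIFF format itself):

```python
import zlib

# Stand-in for raw pixel data.
pixels = bytes(range(256)) * 64

# Ten simulated "edit, save, reopen" cycles through a lossless codec.
data = pixels
for _ in range(10):
    data = zlib.decompress(zlib.compress(data))

# Lossless means byte-identical after any number of cycles; a lossy
# format like JPEG re-approximates the image on every save instead.
print(data == pixels)  # True
```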
This is all open for discussion btw so anyone can jump in and offer counters to anything presented.
This brings up another good point: what if the stock pieces we are using are already JPG or PNG? If we are going to do multiple edits and saves on them, it pays to re-save them as TIFF, or in the native format of the photo-editing tool (these are also usually lossless, but verify if unsure). Then save to the preferred compressed format only when putting out the final image.
Definitely don’t use JPG for working files, and certainly store them in the native format of your image editor.
TIFF is just one option, and one of the best, but it depends on what you are doing and what you are looking for. When I did animation we used TGA because the files are smaller while still retaining data very well (among other reasons). For years I have mostly used PNG, as the quality is the same as TIFF or TGA but the file size is smaller. All three are lossless, but PNG and TGA support compression. TGA does support transparency in its 32-bit form, as does PNG. So TIFF is the strongest, but if you don’t need layers you may have other options.
The disadvantage of PNG is that it uses more CPU to save a file than the other formats because of its compression.
If you need thousands of files for an animation, file size may become a factor; I know it was for me!
Using one of my own images, here are the file-size differences:
- PSD, 2 layers: 20.8 MB
- TIFF, 1 layer: 21.7 MB
- JPG, max quality: 1.2 MB (remember, max quality is still lossy!)
- JPG, medium quality (set at 30% in PS): 167 KB
Not a big deal when working with a single image, but it adds up; not just in hard-drive space, but also in the speed of file reads if you are creating a video from an image sequence.
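If you want to measure this for your own images, here is a minimal sketch using the Pillow library (my substitution; the posters above are using Photoshop/PSP). It saves one synthetic gradient image in three formats and prints the byte counts; exact sizes depend entirely on image content and settings:

```python
import io
from PIL import Image

# A synthetic 256x256 gradient as a stand-in for a real render.
img = Image.new("RGB", (256, 256))
img.putdata([(x, y, (x + y) % 256) for y in range(256) for x in range(256)])

sizes = {}
for fmt in ("TIFF", "PNG", "JPEG"):
    buf = io.BytesIO()
    img.save(buf, fmt)  # in-memory save, just to count bytes
    sizes[fmt] = len(buf.getvalue())

for fmt, size in sizes.items():
    print(f"{fmt}: {size / 1024:.1f} KB")
```

On a smooth gradient like this, the default (uncompressed) TIFF comes out largest by a wide margin; for photographic content the gap between PNG and JPEG widens in JPEG’s favor.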
When you render to .png, render with a black background, not a white one. Fringing problem solved.
Yes, it did take me a long, annoying time to figure that out. No, I was never told it by anyone else.
Yes, there are a lot of little things like that… But many fewer than before I started using DS4.5 as my main renderer. I will leave it at that so the mods don’t smack me for app-baiting again. :D
Another couple of things I’ve learned that I was never told, or was initially told the opposite of by forum denizens:
-You do not need 15 lights in a scene, unless it’s a street with 15 street lamps or the like. You can get a nice-looking scene with an uber, a specular distant, and a diffuse spotlight. If you can’t, adding more usually won’t help.
-You do not need a blue velvet setting at 20% in any body shader, ever. It won’t even show up at that value, and pink or brown is much more lifelike if your character isn’t blue. Velvet doesn’t mimic veins under the skin, but rather the peach fuzz aura at the edges of the body.
-You don’t want SSS above 50% without a specific map for it either, or the SSS color will blow out the detail in your body texture.
-Letting your corneas reflect actual scene objects instead of using a transmap is great - IF you can afford enough raytracing samples in the render settings to actually do that.
-Seamlessly tiling textures are not automatically bad. BAD seamlessly tiling textures are.
First let me say that I use PSP 9 for all my post work.
I always composite my pictures (my computer can’t render any kind of large scene).
I render any figure or scene piece against a black background and save it out to a TIFF file with an alpha channel. In PSP I create a selection from the TIFF’s alpha channel, then modify the selection by adding a two-pixel feather. When I copy and paste the selection into the image I’m working on, a little of the black background comes along and softens the edge.
The first picture shows the straight alpha channel selection and the second picture shows the feathered selection. Notice the edge of the ear.
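For those not on PSP, the same feathered-selection trick can be sketched in Python with the Pillow library (my substitution; the small Gaussian blur standing in for PSP’s two-pixel feather is an approximation). A synthetic square stands in for the figure rendered against black:

```python
from PIL import Image, ImageFilter

# Synthetic "render": an opaque square on a transparent background.
render = Image.new("RGBA", (32, 32), (0, 0, 0, 0))
for x in range(8, 24):
    for y in range(8, 24):
        render.putpixel((x, y), (200, 150, 120, 255))

# The scene we are compositing onto.
scene = Image.new("RGBA", (32, 32), (40, 60, 90, 255))

# Use the render's alpha channel as the paste mask, softened with a
# small Gaussian blur to approximate the two-pixel feather.
mask = render.getchannel("A").filter(ImageFilter.GaussianBlur(2))
scene.paste(render, (0, 0), mask)

# Pixels along the square's edge are now a blend of figure and scene
# rather than a hard cutout.
print(scene.getpixel((8, 16)), scene.getpixel((16, 16)))
```

The blurred mask is what softens the edge: intermediate mask values blend the pasted pixels with the scene instead of copying them outright.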
The problem one runs into with growing an alpha-channel selection by a pixel is that it doesn’t work where there are semi-transparent areas, such as around the ivy leaves in the example above. I didn’t post the example image, but the fringing everywhere except the leaves could be solved by growing the selection. It worked fine with a background layer rather than a ‘fill’ (or paste-into) of the alpha area on the base layer, since the background layer simply merged with it. One could use multiply, darken, etc. as blend modes, and even masks if necessary, when using the alpha combined with a background layer.
It’s a great tip because it looks much better, and sometimes a simple fix is all that’s needed. We often use more complex methods than necessary to solve a problem because it has become habit; not thinking of the simple fix when it’s appropriate can be the source of big time sinks.