Interested in Compositing Video with 3D Footage? Free Compositor.
Joe.Cotter
Posts: 3,361
in The Commons
Natron, a free, open-source compositor with Nuke-like functionality, just shipped its v2.0 RC1. CG Channel just posted an article with more information and a demo trailer. The Natron YouTube channel has more demos and some training videos.

Comments
I should also mention that Blackmagic has a free version of their Fusion 8 compositing tool that's been out for a while. Comparing the free version with their Studio version (around $1k), the main missing features are team-based collaboration and render-farm functionality. Fusion is also Windows-only at the moment (although they say Mac is due), whereas Natron supports Windows, Mac, and Linux.
Other than the above, I don't have a lot of information on how they compare, but if there are any compositors out there who have given them a spin (or can) and report back, I'm guessing there are people here who would like to know how they stack up.
There is a review of Blackmagic Fusion 8 at No Film School that includes an interesting video interview worth watching.
Thanks for the heads up. I'll take a look at Natron. I've been interested in checking out the Mac version of Fusion; the beta version is actually available now... I use DaVinci Resolve, which Blackmagic also maintains, and have been really impressed with what they have done with it.
Both look extremely powerful, but I gave up on Fusion because I can't stand node-based workflows (which Natron appears to use as well). While After Effects is sometimes slow and clunky (and not free), I found it much more intuitive and there are many more tutorials available.
Good to know though, thanks for the heads up too.
A lot of products are going node-based. It's more powerful and flexible than traditional interfaces and not as complicated as writing one's own code; basically a step in between. The thing is, many products in computer graphics have gotten complex enough that it's very hard to create a traditional interface that covers the full gamut of functionality and flexibility without being out of date within a single version of the software. Also, nodes expose parts of tools that previously could only be reached through scripting or programming against the API.
Blender can do it too, but the more the merrier.
I didn't know what compositing was until a couple of days ago. I watched a Blender video and it does look impressive but I don't see how we could use DAZ Studio with an external compositor unless we could export the scene as a whole (as we can do to Blender). Perhaps I still don't understand what these programs do.
I tried Natron on Windows 7 and it crashed repeatedly, so I uninstalled it.
Back to good old After Effects CS over on my Mac machine.
If you use the Canvasses feature on the Advanced tab of Render Settings with Iray active, you can render .exr files that show only certain aspects of the scene; those can then be used in a compositor.
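For anyone curious what those canvas files hold once rendered, here is a minimal sketch of opening one in Python and making a quick preview. It assumes imageio with an EXR-capable backend installed (e.g. the freeimage plugin), and the file name beauty.exr is just a placeholder for one of your canvases.

```python
import imageio.v2 as imageio
import numpy as np

# EXR canvases are linear float data, not 0-255 integers
canvas = imageio.imread("beauty.exr")      # placeholder canvas name
print(canvas.shape, canvas.dtype)          # e.g. (1080, 1920, 4) float32

# Rough tone-map to 8-bit for a quick preview (clip, then gamma 2.2)
preview = np.clip(canvas[..., :3], 0.0, 1.0) ** (1 / 2.2)
imageio.imwrite("preview.png", (preview * 255).astype(np.uint8))
```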
Hi, compositors are really useful to those of us who do lengthy animation work with complex scenes and backgrounds.
I have attached three screenshots from my current animation film project using DAZ figures and assets.
In all three, the animated figures are rendered separately with an alpha (transparent) background
and then composited on top of a still-image background, which saves me a lot of render time, as my CPU does not have to calculate the geometry for the background scenery for each frame of the render.
This layered comp approach also gives us unlimited options for adding post effects to a shot.
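As a rough illustration of that layering for a single frame, here is a minimal sketch using Python's Pillow library. It assumes both renders share the same resolution, and the file names are placeholders.

```python
from PIL import Image

# Pre-rendered still backdrop and a figure frame rendered with alpha
background = Image.open("backdrop_still.png").convert("RGBA")
figure = Image.open("figure_frame_0001.png").convert("RGBA")

# Alpha-over: the figure's transparency lets the backdrop show through
comp = Image.alpha_composite(background, figure)
comp.convert("RGB").save("comp_frame_0001.png")
```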
Hope this helps
marble, you can also render out parts of your scene and composite them with layers in Photoshop and other photo editors. As Wolf has done above with animation, you can also composite in Sony Vegas by importing a series of frames and compositing them with a background or other parts of your scene rendered separately. It saves render time and can help those needing more memory than they have on their card for Iray.
Blender can do some of what these tools can do but is nowhere near as full-featured as dedicated compositors.
A basic definition of compositing, still image or video, is to take disparate pieces of imagery and stitch them together, usually through layering, then (and this is the important part) processing them so they make a seamless whole, blending into a single image in which the parts look like they've always been together. Besides layering the images, this usually entails color balancing, so all parts share a base color family and blend; texture balancing, so we don't have some parts gritty and others pristine in a way we wouldn't find in a single photograph (often an issue when mixing photography/video with 3D content); and other such considerations.
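To make that color-balancing step concrete, here is a crude sketch that nudges a foreground layer's per-channel averages toward the background's so the two read as one color family. A real compositor gives far finer control; the file names here are placeholders.

```python
import numpy as np
from PIL import Image

fg = np.asarray(Image.open("figure.png").convert("RGB"), dtype=np.float32)
bg = np.asarray(Image.open("backdrop.png").convert("RGB"), dtype=np.float32)

# Per-channel gain that maps the foreground's mean color to the background's
gain = bg.mean(axis=(0, 1)) / (fg.mean(axis=(0, 1)) + 1e-6)
balanced = np.clip(fg * gain, 0, 255).astype(np.uint8)
Image.fromarray(balanced).save("figure_balanced.png")
```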
There are actually three types of compositors: those focused on still-image post production, those focused on video, and some hybrids. Blender can do both still-image and video compositing, but its video compositing toolset is not as refined as Blackmagic's Fusion or Natron's. It's a bit of a jack of all trades, master of none in Blender's case. That's not to say Blender's tools aren't powerful; they are. But the tools in a dedicated video compositor are optimized for that task. As for using DAZ Studio with compositors, there are various techniques, but all require some amount of forethought and understanding of the process. It is an advanced area of computer graphics with its own learning curve.
And yes, Photoshop is in itself a powerful compositor, but for still images. It is also limited in its video compositing, although it is more powerful than many realize, even for video, in CS6 and beyond.
A side note about compositors... they can also be used to shortcut the render process by doing in post what would take much longer in the render engine. A perfect example of this is fog. If we output a z-depth map with our image, we can use it to introduce fog into our still or video in a fraction of the time that would be required to render the fog. Some of these tricks exist specifically to work within the hardware at hand, like using bump or normal maps instead of actual polygons to add definition to an image. Eventually, as systems get more powerful, some of these complex tricks will fall by the wayside and others will be automated to the point we don't even have to think about them, like automatic LOD.
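Here is a minimal sketch of that z-depth fog trick, assuming an 8-bit grayscale depth map where white means far. The fog color, density, and file names are all placeholder choices.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("render.png").convert("RGB"), dtype=np.float32) / 255
depth = np.asarray(Image.open("zdepth.png").convert("L"), dtype=np.float32) / 255

fog_color = np.array([0.75, 0.78, 0.80])   # pale bluish-gray fog
density = 0.8                              # overall fog strength

# Blend toward the fog color in proportion to depth: far pixels get more fog
t = np.clip(depth * density, 0.0, 1.0)[..., None]
fogged = img * (1 - t) + fog_color * t
Image.fromarray((fogged * 255).astype(np.uint8)).save("render_fog.png")
```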
New software? Bring it on!
Has everyone seen this? (Yes, Blender, I know.) The definite advantage Blender has is that you build your model/scene there, so the data from your scene is already available. Natron looks like it's designed in part to work with Blender for the post color correction and blending stuff.
Edit: That was totally the wrong link (though also totally cool). This was what I meant.
Wow.
I don't know what else to say.
The GOT video was interesting but I really liked the skinning one, very interesting.
There are so many facets to a discussion like this that it's like trying to explain the color passing through a diamond from all perspectives at once. In an oversimplified explanation, I would say that it's not a limitation of the Blender tools, as they are very capable, but more a matter of efficiency for a certain subset of tasks. Often it's not so much 'can someone do something' but 'can they do it profitably', which by definition means efficiently. The trade-off, of course, is learning a different tool, understanding all the costs of moving back and forth between tools, and so on. Each person has to weigh those and decide for themselves what makes the most sense.
My card is certainly short of memory, which is why, despite the problems, I'm still using Reality and Luxrender. I'm trying to imagine how this all works with DAZ Studio, though. I assume I create a scene, then hide the background objects and render the foreground (perhaps a couple of people). Then hide the people, un-hide the background, and render that (over-simplifying, of course). But how does it handle shadows, etc.?
Just out of curiosity, wolf359, what file format do you prefer when compositing? Image files, or video? I often work directly with AVI files myself, since the Huffyuv and Lagarith video codecs support Poser's alpha channel, but I stick with PNG when I must deal with image sequences.
Iray is just a non-starter for me, unfortunately. My card, although it is an NVIDIA, is not up to the job, and since my computer is an iMac, I have no upgrade option. There is also a masking product for 3Delight that allows selecting which items in a scene to render, but that doesn't help if I'm using Reality/Luxrender.
The remaining option for me would be to export a whole scene to Blender using mcjTeleBlender and play with compositing there. I've tinkered with that option a little over the past week or so but getting the materials to look right requires experienced hands, I think. Also, Cycles is good but my results were far short of the natural look of Luxrender.
Having said all that, I can see the attraction of compositing as explained here and in the videos I've watched. In my amateurish way, I've tried to add a character I've rendered into a photograph using Photoshop and the results were pretty pathetic.
There are ways of handling shadows, depending on how many you have... with some lighting, shadows are barely noticed (more diffused). Some scenes will not cause you any problems. There is a tute I saved somewhere at home that shows how to do it with Studio. Here's another I just turned up that can get you thinking:
http://www.graphics.com/article-old/image-compositing-light-and-shadow-techniques-part-1
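In the spirit of that article, here is a rough sketch of one common approach: render a separate shadow pass (white where unshadowed) and multiply it over the backdrop before pasting the figure on top. The pass and file names are placeholders.

```python
import numpy as np
from PIL import Image

bg = np.asarray(Image.open("backdrop.png").convert("RGB"), dtype=np.float32) / 255
shadow = np.asarray(Image.open("shadow_pass.png").convert("L"), dtype=np.float32) / 255

# Multiplying darkens only the shadowed areas; white (1.0) leaves bg untouched
shadowed = bg * shadow[..., None]
out = Image.fromarray((shadowed * 255).astype(np.uint8))
out.save("backdrop_shadowed.png")   # then alpha-composite the figure over this
```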
Hey there LD!!
Long time no post/write, etc.
In answer to your query, I always render finals from Maxon Cinema4D to uncompressed Targa with alpha channels if there is to be compositing (98% of cases).
This gives me the greatest amount of post-render flexibility in After Effects CS.
I was doing some more searching for ideas yesterday, because I have a feeling that compositing might well be worth putting some time and effort into. The thing is, I can't figure out specifically how to do it independently of the render engine. I know there are multi-layer and masking tools for 3Delight, and Richard mentioned the canvas feature of Iray, but how would I do it for Luxrender, for example?
I know that Reality has an option to export an alpha channel (transparent background) and support for .exr (though I'm not sure of the significance of that). But I can't quite work out the procedure I might use to separate foreground and background in DAZ Studio. To explain my thinking: the idea is to render a background (room, furniture, props) in 3Delight and then render the people (figures, clothing, anything they may be holding) in Luxrender. Of course, 3Delight and Reality use different types of lighting, so all I can think of is two identical scenes, one lit for 3Delight, the other for Luxrender.
Any thoughts?
I did a small video for the grandchildren which was going to take 41 days to render as still .png images.
After watching a Dreamlight video on speeding up render times, I did the lot in three days.
I only have the CPU, as I don't have an NVIDIA card.
First I rendered the backdrop and used it as a Background image.
Then I rendered the main animated character against it.
I then added another animated character and hid the main character and backdrop and rendered that.
I used Sony Vegas to composite all that together, added an end caption, music and a falling snow image I made in Photoshop.
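For anyone who'd rather script that kind of layer stack than assemble it in an editor, here is a rough sketch that composites two character passes over a static backdrop frame by frame. Folder and file patterns are placeholders, and it assumes all the renders share one resolution and frame count.

```python
from pathlib import Path
from PIL import Image

Path("output").mkdir(exist_ok=True)
backdrop = Image.open("backdrop.png").convert("RGBA")   # rendered once, reused

for i, frame_path in enumerate(sorted(Path("char1_frames").glob("*.png"))):
    char1 = Image.open(frame_path).convert("RGBA")
    char2 = Image.open(f"char2_frames/frame_{i:04d}.png").convert("RGBA")
    # Stack the layers back to front: backdrop, then each character pass
    comp = Image.alpha_composite(backdrop, char1)
    comp = Image.alpha_composite(comp, char2)
    comp.convert("RGB").save(f"output/frame_{i:04d}.png")
```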
Nothing spectacular but they liked it
Thanks, very impressive :) I can see why rendering a static background would be a big time saver in an animation. I'm talking about stills, however, where the perspective changes for each image in the sequence (a story in stills, as in comics, but not toonish).
I have seen that Dreamlight video but couldn't find it on YouTube. Maybe it is one that I bought - I'll check.
EDIT: I didn't look hard enough.
Ok - I watched it again and it goes back to what I said previously. That method is fine for 3Delight - render layers for each light separately - but not for sending to Luxrender. And that's the conundrum I'm trying to solve.
Edit: here's a similar video tutorial for rendering lights to separate layers in Luxrender. However, that doesn't help me either, because I want to composite a mixture of rendering options (3Delight and Luxrender).
You can render from different engines as long as they output image sequences. Once output to an image-sequence format like .exr, the sequences can be brought into a compositor like Natron and composited together.
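To illustrate what the compositor's merge step does with those sequences, here is a sketch of the classic premultiplied "over" operation on one pair of frames. It assumes RGBA .exr files with premultiplied alpha and an EXR-capable imageio backend; the file names are placeholders for, say, a Luxrender figure pass over a 3Delight backdrop.

```python
import imageio.v2 as imageio
import numpy as np

fg = imageio.imread("lux_figures_0001.exr").astype(np.float32)   # RGBA, premultiplied
bg = imageio.imread("3dl_backdrop_0001.exr").astype(np.float32)  # RGBA

# Premultiplied "over": result = fg + bg * (1 - fg_alpha),
# applied to all four channels so the output alpha is correct too
a = fg[..., 3:4]
over = fg + bg * (1.0 - a)
imageio.imwrite("comp_0001.exr", over)
```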
All that was rendered in Iray using only dome lighting. The Elf was rendered on a blank background on the right, then duplicated in Sony Vegas and moved to the left. I also set the convergence to 15% in the render settings to speed things up.
Gedd, could you elaborate a little, please? When I think of an image sequence, I think of: set up the scene then render, move things around then render, and these renders would be image_01, image_02, etc. Is this what you mean?