Smart rendering

Is there an option for smart rendering of unchanged pixels?

I'll explain:

If I have a scene with a table and a vase on the table, and I'm creating an animation of the vase moving one inch over 5 seconds, is there a smart way to avoid rendering the whole scene in each frame and render only the changed pixels?

Comments

  • How would it know what was unchanged? The background would change if the vase was reflective or refractive, or if the background was reflective.

  • abunezek Posts: 32

    How would it know? It's an algorithm, like in After Effects and H.264 encoding. It's possible.

    H.264 files are so small because they duplicate pixels that haven't changed from the previous frame.
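For illustration only, here is a toy sketch of the delta-encoding idea being described. The function names are mine, and it assumes both frames already exist as pixel arrays (the point the replies below turn on):

```python
# Toy sketch of inter-frame delta encoding. Note that BOTH frames
# must already exist before the delta can be computed.

def encode_delta(prev_frame, next_frame):
    """Record only the (index, value) pairs of pixels that changed."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev_frame, next_frame)) if p != v]

def apply_delta(prev_frame, delta):
    """Rebuild the next frame from the previous frame plus the delta."""
    frame = list(prev_frame)
    for i, v in delta:
        frame[i] = v
    return frame

frame1 = [10, 10, 10, 10, 10]                # grayscale pixel values
frame2 = [10, 10, 99, 10, 10]                # only one pixel changed

delta = encode_delta(frame1, frame2)
print(delta)                                 # [(2, 99)]
print(apply_delta(frame1, delta) == frame2)  # True
```

The stored delta is far smaller than a full frame, which is why video files shrink so well when little moves.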

  • abunezek said:

    How would it know? It's an algorithm, like in After Effects and H.264 encoding. It's possible.

    H.264 files are so small because they duplicate pixels that haven't changed from the previous frame.

    But the pixels have to be evaluated to determine that they are unchanging, and to do that in DS you would have to render them.

  • abunezek Posts: 32

    No,

    the pixels are analyzed first, which is much faster than rendering.

  • Richard Haseltine Posts: 97,165
    edited June 2018

    I'm sorry, but I don't see how that would be possible - to know if a pixel is unchanged from the previous frame, its "final" value is needed (scare quotes because the refinement never stops), which is what rendering does. If there were a quicker way to get the "final" value, it would be used for rendering in the first place.

    If you know, as the artist, that the background is not going to change (or you don't care about the changes), then render an HDR from it and use that in the scene in place of the actual models.

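As a toy illustration of the workflow Richard suggests (my own sketch, not a DS feature): render the static background once, then each frame re-render only the moving element and composite it over the background using its alpha:

```python
# Sketch: reuse a pre-rendered background and composite the moving
# element over it per frame. Pixels here are single float values.

def composite(background, foreground, alpha):
    """Per-pixel 'over' blend: foreground where alpha is 1, background where 0."""
    return [f * a + b * (1.0 - a) for b, f, a in zip(background, foreground, alpha)]

background = [0.2, 0.2, 0.2, 0.2, 0.2]   # rendered once, reused every frame
for frame in range(3):
    foreground = [0.9] * 5               # only the vase is re-rendered
    alpha = [1.0 if i == frame else 0.0 for i in range(5)]  # vase slides right
    print(composite(background, foreground, alpha))
```

As the thread goes on to point out, this ignores reflections and bounced light from the moving object, so it only holds when those interactions don't matter to you.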
  • abunezek Posts: 32

    https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC

    Read the "Features" section.

    Take After Effects, for example: try to render an unmoving animation. It will render the first frame, and all the following frames will be copied at much higher speed. It's a known method.

  • That is not, I think, the same thing - video encoding already has the final images; it's trying to shrink the sequence of them down to a manageable size for playback. Rendering is generating the images in the first place; it is not a comparable situation.

  • abunezek Posts: 32

    H.264 is also used for real time, so it doesn't always have the final images.

    We can argue forever, but I really think this feature should be considered.

  • Leana Posts: 11,057

    Even if that were possible (and I don't see how either), it would most probably be a feature of the render engine, so not something Daz can implement.

  • murgatroyd314 Posts: 1,440
    abunezek said:

    H.264 is also used for real time, so it doesn't always have the final images.

    We can argue forever, but I really think this feature should be considered.

    H.264 is for real time - as in video cameras, which get the images directly from the camera sensor. That's different from 3D graphics, where the images do not exist until the rendering is done.

  • hphoenix Posts: 1,335
    abunezek said:

    H.264 is also used for real time, so it doesn't always have the final images.

    We can argue forever, but I really think this feature should be considered.

    No.  You aren't understanding the differences in terminology and technique.

    H.264 is an MPEG encoding standard.  It takes VIDEO frames (i.e., from the camera - the image ALREADY exists) and determines differences (deltas) to produce a sequence of changes between two 'key' frames.  "Real time" simply means the camera supplies the frames and the encoder sets a 'key' frame every 10 frames or so.  It can encode the video stream live, though the encoding is often not the best.

    With rendering, you don't HAVE that 'next frame' to determine the deltas until AFTER you render it.  There are ways to render only the sections of a scene that have motion, but they rely on the animator being able to specify those areas.  One full render is done, then smaller sub-renders are done of the motion areas at the appropriate relative size and aspect ratios, then composited together over the base frame.  The problem is that there may be reflections and light bounces, altered by the motion in the sub-areas, that DO affect the global illumination in the base image.  None of this can simply be 'calculated' from the scene... it requires the animator to do it themselves.

    tl;dr - No, it doesn't work the same as H.264 or any other video encoding; and no, you can't do that in rendering.
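A toy sketch of the key-frame scheme hphoenix describes (names and interval are mine): store a full frame every N frames and only per-pixel deltas in between. Note the encoder is handed finished frames - it compresses them, it does not generate them.

```python
# Sketch: H.264-style key/delta stream over already-rendered frames.
KEY_INTERVAL = 10  # one full 'key' frame every 10 frames (illustrative)

def encode_stream(frames):
    encoded = []
    for i, frame in enumerate(frames):
        if i % KEY_INTERVAL == 0:
            encoded.append(("key", list(frame)))         # store full frame
        else:
            prev = frames[i - 1]
            changed = [(j, v) for j, (p, v) in enumerate(zip(prev, frame)) if p != v]
            encoded.append(("delta", changed))           # store only changed pixels
    return encoded

frames = [[0, 0, 0]] * 9 + [[0, 7, 0]]   # ten frames, one pixel changes at the end
stream = encode_stream(frames)
print(stream[0])   # ('key', [0, 0, 0])
print(stream[9])   # ('delta', [(1, 7)])
```

Every step here consumes frames that already exist, which is exactly why the technique doesn't transfer to rendering.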

     
