Could GANs (deepfake technology) be used to create morphs?

Hi all,

I'm curious to hear what people familiar with GANs (generative adversarial networks, the technology behind deepfakes) think of potential applications for 3D. Imagine being able to load in a photograph (or series of photographs) of yourself or a friend/family member and generate a morph for whatever base 3D character you wanted, as long as the mesh is in the database. Am I way off base here, or is this something you could see on the DAZ Studio roadmap?

Thanks for your input.

Comments

  • Matt_Castle Posts: 3,055

    Absolutely, but it would be handled very differently to deepfakes, in that the user shouldn't be training the networks themselves.

    AI training is incredibly computationally intensive (we're talking days, weeks, even months to train a system well), and we're looking at a much simpler problem. The reason deepfakes generally require the user to carry out the training themselves is that the problem of "turn an image of any person into any other person, regardless of expression, camera angle, perspective, lighting, etc." is far too complicated for one network to handle any combination of people, so an individual model has to be trained for each specific pair of people.

    In comparison, the problem of "create a morph that resembles these photos that have fairly neutral expression and lighting (and then let the render engine worry about specific expression, angle and lighting)" is a far simpler process, so one network could theoretically be trained with a wide enough array of training data to work for anyone. (Essentially, swapping the wildcard in the problem from "one person under any circumstance" to "any person under one circumstance").

    By the time it was on your computer, such a system should be a very complex, but ultimately dumb, algorithm - the user wouldn't actually be carrying out the machine learning process themselves.
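    To make the "any person under one circumstance" idea concrete, here is a minimal, purely illustrative sketch (not a DAZ feature): a single regression network, trained once offline by whoever ships it, that maps an aligned, roughly neutral face photo to a set of morph dial values. PyTorch, the dial count, and the tiny CNN are all assumptions for illustration; a real system would use a pretrained backbone and far more training data.

    ```python
    # Hypothetical sketch: one pre-trained network regresses morph-dial values
    # from a neutral, front-facing photo. NUM_MORPH_DIALS is a placeholder.
    import torch
    import torch.nn as nn

    NUM_MORPH_DIALS = 80  # e.g. one output per shaping morph on the base figure

    class PhotoToMorph(nn.Module):
        def __init__(self, num_dials=NUM_MORPH_DIALS):
            super().__init__()
            # Small CNN encoder; a production system would use a pretrained backbone.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(128, num_dials)

        def forward(self, image):
            # image: (batch, 3, H, W) aligned face crop
            features = self.encoder(image).flatten(1)
            return torch.tanh(self.head(features))  # dial values in [-1, 1]

    # Training happens once, offline, on many (photo, dial-values) pairs.
    # The end user would only ever run inference with shipped weights:
    model = PhotoToMorph()
    model.eval()
    with torch.no_grad():
        dials = model(torch.rand(1, 3, 224, 224))  # stand-in for a real aligned photo
    print(dials.shape)  # (1, NUM_MORPH_DIALS)
    ```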

  • RobotHeadArt

    Using various AI techniques to generate a 3D face from a photograph is an area of active research. https://arxiv.org/pdf/1903.08527v1.pdf and https://arxiv.org/pdf/1806.06098v1.pdf are two recent papers.

  • SadRobot Posts: 116

    Check out this project: https://github.com/YadiraF/PRNet

    I had a thread trying to use it to do just what you're talking about, but creating the morph from the generated file proved difficult and I ended up losing interest after a while. But I lose interest quickly.
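    For context on why the morph-creation step is the fiddly part: DAZ Studio's Morph Loader Pro expects an OBJ with exactly the base figure's vertex count and order, while PRNet generates a mesh with its own topology. Below is a rough sketch of just the export side, assuming you have already produced target positions for the base figure's vertices by some means; the file names and arrays are placeholders.

    ```python
    # Sketch: rewrite the base figure's OBJ with new vertex positions so the
    # result can be loaded as a morph target (same vertex count and order).
    import numpy as np

    def read_obj_vertices(path):
        """Return the 'v' lines of an OBJ as an (N, 3) array."""
        vertices = []
        with open(path) as f:
            for line in f:
                if line.startswith("v "):
                    vertices.append([float(x) for x in line.split()[1:4]])
        return np.array(vertices)

    def write_morph_obj(base_path, morphed_positions, out_path):
        """Copy the base OBJ, swapping in new vertex positions and keeping faces/order intact."""
        base_vertices = read_obj_vertices(base_path)
        assert morphed_positions.shape == base_vertices.shape, "must match the base vertex order/count"
        with open(base_path) as f_in, open(out_path, "w") as f_out:
            i = 0
            for line in f_in:
                if line.startswith("v "):
                    x, y, z = morphed_positions[i]
                    f_out.write(f"v {x:.6f} {y:.6f} {z:.6f}\n")
                    i += 1
                else:
                    f_out.write(line)

    # morphed_positions would come from fitting the base mesh to the generated
    # face, e.g. by projection (see the later comment in this thread).
    ```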

  • Alley Rat Posts: 405

    Thank you all for your input. Thank you RobotHeadArt and SadRobot for those links.

    I remember seeing some facial capture work from Japan (back in the '80s/'90s) that worked pretty well, and given the issues Matt_Castle brought up, it seems like this might still be the way to go: 3D capture, then manually morphing a base character to match the pose and expression, then projecting the captured data onto the base mesh to make a morph, similar to how normal maps used to be made by projecting high-res meshes onto low-res mesh UVs.
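    A minimal sketch of that projection step, assuming scipy and meshes already loaded as vertex arrays: each base-figure head vertex is snapped to its nearest captured vertex (a crude nearest-vertex substitute for the along-normal ray projection used in normal-map baking), and the result could then be written out with the earlier OBJ-export sketch. Everything here is illustrative, not an existing tool.

    ```python
    # Sketch: snap base-mesh vertices to the closest point of a captured mesh
    # to produce morph target positions.
    import numpy as np
    from scipy.spatial import cKDTree

    def project_onto_capture(base_vertices, captured_vertices, max_distance=1.0):
        """Move each base vertex to its nearest captured vertex, leaving distant vertices alone."""
        tree = cKDTree(captured_vertices)
        distances, indices = tree.query(base_vertices)
        projected = base_vertices.copy()
        close_enough = distances < max_distance  # avoid dragging ears/neck onto the wrong surface
        projected[close_enough] = captured_vertices[indices[close_enough]]
        return projected

    # Example usage with the helpers from the earlier sketch:
    # base = read_obj_vertices("base_head.obj")        # placeholder file
    # cap = read_obj_vertices("captured_face.obj")      # placeholder file
    # write_morph_obj("base_head.obj", project_onto_capture(base, cap), "morph_target.obj")
    ```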
