New Tech to generate 2D illustrations from 3D renders

Inspired by Algovincian's NPR thread: https://www.daz3d.com/forums/discussion/68493/algovincian-non-photorealistic-rendering-npr/p1,

Would you be interested in buying a tool that can generate an illustration from 3D renders that is indistinguishable from hand-drawn art?

The tool could allow artistic and style adjustments for non-artists, much like how Daz3d was designed for non-technical people.

The main selling point of this tech is that there is NO human intervention during the generation process. It is a simple click-and-finish process, instead of you having to be an illustration expert fixing imperfections here and there throughout the process.

Some examples of illustration capabilities provided by Algovincian:

https://www.daz3d.com/forums/discussion/comment/986517/#Comment_986935

This could have uses in comics, 2D games, and other illustration work such as book covers.

If this interests you, I would like to discuss the exact features that would make it useful for you.


Comments

  • Seeing the wide variety of tools available for Daz Studio to generate an approximation of hand-drawn art (or cel shading, or print) I will hazard a guess that yes, there is a lot of interest in this. What kind of styles are we talking about? Pencil, crayon, paint? Could it reproduce specific styles like manga or engraving? And how about consistency for a series of images meant for the same work, like a sprite sheet for a 2D game? Also, what does this tool offer compared to programs like DAP or some Photoshop filters?

    Sorry if I sound over-eager, it's just this is one of my goals with 3D art and so far I haven't seen anything that quite clicks with what I envision.

  • I would love something like this!

  • Conclave said:

    Would you be interested in buying a tool that can generate an illustration from 3D renders that is indistinguishable from hand-drawn art?

    Yes.

    What would make it most useful to me would be the option to output different illustration passes as their own images, so I could tweak the compositing and/or use them with other tools.
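
    For example (just a rough sketch of the kind of workflow I mean; the pass file names and the multiply compositing below are made up for illustration, not anything Conclave has committed to), separate passes could be recombined in post like this:

    ```python
    # Minimal sketch: recombine hypothetical illustration passes that were
    # exported as separate images. File names are made up, and the passes
    # are assumed to be the same size.
    from PIL import Image, ImageChops

    color = Image.open("color_pass.png").convert("RGB")      # flat colour fill
    shading = Image.open("shading_pass.png").convert("RGB")  # soft shadow pass
    lines = Image.open("line_pass.png").convert("RGB")       # ink/outline pass (dark lines on white)

    # Multiply the shading over the flat colours, then multiply the line art
    # on top, so each pass stays available for individual tweaking first.
    combined = ImageChops.multiply(color, shading)
    combined = ImageChops.multiply(combined, lines)
    combined.save("final_illustration.png")
    ```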

  • Wow, those are the most realistic hand-drawn renders I've seen. Amazing.

  • Serene Night Posts: 17,567

    I'd have to see how it handles figures. There are already sketch-quality tools that render mechanical things pretty well, but doing the figure is different, as many tools don't do skin too well.

  • digitell Posts: 558

    VERY interested in this!

  • Worlds_Edge Posts: 2,146

    It would go straight into my cart, particularly if figures/characters come out well.

  • AllenArt Posts: 7,145

    I would be immensely interested in this.

  • JOdel Posts: 6,255

    Oh yes... Particularly impressed by the black and white noir example. But they are all splendid.

  • SnowSultan Posts: 3,510

    This would bring me back to doing 3D, but what connection do you have to Algovincian and his project? I still have doubts that we can ever achieve truly hand-drawn looks right out of a 3D renderer - if Disney can't do it, none of us are likely to - but I'm certainly for anything that gets us a little bit closer.

  • Wonderland Posts: 6,745

    I would prefer not to have it fully automated, but to be able to tweak it as much as possible and hand-fix imperfections... If that were possible, I'd definitely get it. But the truth is, there are already a lot of apps, even iPad/iPhone apps, that do something similar and allow tweaking...

  • kenmo Posts: 895

    Contingent on cost, YES, I would be very interested...

  • Odaa Posts: 1,548
    edited December 2017

    Something that could get consistent results across a variety of renders with little tweaking would be helpful to me, but the specific styles that algovincian shows off so beautifully in those comments are not really what I would be looking for. I aspire (unsuccessfully) to have my stuff look more like Caravaggio- and Rembrandt-type paintings (or Monet on the impressionistic side), rather than sketch or toon type art.

    Post edited by Odaa on
  • algovincian Posts: 2,578
    edited December 2017

    This would bring me back to doing 3D, but what connection do you have to Algovincian and his project? I still have doubts that we can ever achieve truly hand-drawn looks right out of a 3D renderer - if Disney can't do it, none of us are likely to - but I'm certainly for anything that gets us a little bit closer.

    In short, the answer is none. I recently received a couple of messages outside of these forums that made no sense to me at the time, but now that I've seen this thread they do lol (the part about me providing examples). To clear up any confusion, I will say that the images I've posted in either of the threads of mine mentioned by Conclave were created by systems/algorithms that I've developed over the last 20+ years, and not examples of any work that Conclave has done.

    That being said, I'm always interested in seeing anybody's NPR work - including any examples of Conclave's own work.

    On another note related to @SnowSultan's comments (and I don't want to speak for anyone else), I will say that distilling my process down to something that would run as a stand-alone or plugin to DS never made sense to me from a practical standpoint. The number of wheels that would have had to be re-created was prohibitive.

    There are a few people that have actually seen the process execute from start to finish, and they were astonished at the scale of the system as a whole. Going in, they were familiar with the details of the system that I've posted publicly. But when they saw the scale of the actual hardware (distributed network), the number/size of the intermediate files generated for each scene processed (approaching 1GB - the bulk of it compressed image data), and the complexity of all the code being executed, they were shocked.

    I've talked in the past about how the development of my systems/algorithms has been an evolutionary process (small incremental changes made over long periods of time), and it really is true. I went down many forks in the road that ended up not being the right path, but there was no way for me to know ahead of time. There was no book that I read that walked me through the process. There was no class that I could take to teach me everything I needed to know to make it happen. I was making it all up as I went - creating components to solve problems as they popped up.

    The journey was (and continues to be) a lot of fun, and I wish anybody endeavoring in something similar the best of luck!

    - Greg

    Post edited by algovincian on
  • SnowSultan Posts: 3,510

    Thank you for that explanation Algovincian. I figured your examples took far more work than a plugin could achieve, but you did do amazing work coming up with solutions on your own and as they arose. I hope you'll continue your work and I'm sure I can speak for everyone when I say we all look forward to seeing what you create next.   :)

     

    SnowS

  • Crescent Posts: 319

    I'm definitely interested but I'd need to see the results of your tool before I'd commit to buying. 

    Features wanted:

    1)  Good outlines.  Figures, especially facial features, are the hardest to get decent results for.  Things composed of straight/hard lines, like vehicles and buildings, are simple to get a 2D look for.

    2)  Smooth, flowing shadows.  I have a few different ways to fake a 2D look currently, but the shadows jut all over the place instead of being simple, smooth arcs.

    3)  Variable line width would be nice.  Most hand-inked images have variable thickness depending on how the brush/pen is held and the pressure used.

    4)  Good color translation.  Dark skin tends to get washed out when run through different plugins.

  • algovincian Posts: 2,578

    Thank you for that explanation Algovincian. I figured your examples took far more work than a plugin could achieve, but you did do amazing work coming up with solutions on your own and as they arose. I hope you'll continue your work and I'm sure I can speak for everyone when I say we all look forward to seeing what you create next.   :)

     

    SnowS

    Well, to be clear, all I was saying is that *I* couldn't do it. People want it to be a "shader", or a "filter", but it's so much more than that. Maybe somebody else can make it happen in a simpler fashion - hopefully, somebody else can. Maybe that person is Conclave?

    - Greg

  • nonesuch00 Posts: 17,944

    I've seen Algovincian's NPR renders and many are a step above, so I'd have to see a 1-to-1 comparison between his and any new postprocessing filters out there, and there are a lot, and they are everywhere. From Unity 3D to GIMP to Photoshop to Filter Forge and on and on...

    I'd be interested in such filters though. There are already various products in the DAZ Store that really aren't too shabby with a little prep work.

  • SnowSultan Posts: 3,510

    Well, to be clear, all I was saying is that *I* couldn't do it

    Well you've come closer to a genuine hand-drawn look than anything that's come before, and I've been keeping track of this sort of thing for over 15 years.  :)  No shaders or presets truly give hand-drawn results because they're not able to visualize shadows and line weight in a way that is pleasing to the human eye; they obviously just use mathematical calculations to determine where shadows go and what lines are considered important. This is why we get jagged shadows and outlines that we would never place if we drew them by hand.

    I don't even pretend to know how you got the results you did, but I think the methods behind it should be explored further, regardless of how impractical they might be. My own ancient method of flattening everything in the scene along the Z axis and rendering (the Z-Toon technique, which might be older than DAZ itself) wasn't practical either, but it gave pretty good results for the time. Your technique is like the discovery of an Earth-like world in another galaxy; we don't know exactly what we're dealing with and how to make it benefit us at the moment, but the potential is great.   :)
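
    (For anyone who never saw it, the Z-Toon idea was basically just this kind of squash toward the camera plane before rendering; the sketch below is only illustrative, with a made-up vertex array and scale factor, not the original script:)

    ```python
    # Illustrative sketch of the old "Z-Toon" idea: squash the scene geometry
    # toward the camera plane before rendering, so perspective and shading
    # flatten out more like a 2D drawing. The vertex data and scale factor
    # below are made up for the example.
    import numpy as np

    def z_toon_flatten(vertices, camera_z=0.0, scale=0.1):
        """Scale each vertex's distance from the camera plane by `scale` (0..1).

        vertices: (N, 3) array of x, y, z positions in camera-aligned space.
        """
        flattened = np.array(vertices, dtype=float)
        flattened[:, 2] = camera_z + (flattened[:, 2] - camera_z) * scale
        return flattened

    # Example: three vertices squashed to 10% of their original depth spread.
    verts = [[0.0, 0.0, 50.0], [1.0, 2.0, 80.0], [-1.0, 0.5, 120.0]]
    print(z_toon_flatten(verts))
    ```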

     

    SnowS

  • kyoto kid Posts: 40,621
    edited December 2017

    ...being able to render directly into various illustrative styles without having to do so in post would be wonderful.

    Post edited by kyoto kid on
  • 1) I need to see the before-and-afters. The literal before and after, to judge. Some images translate better than others. People and creatures tend to have curved shading, and it's always difficult for any process to judge what's important: black or white? Thick or thin? Outline or detail?

    2) Maybe all of this (probably with more time as an issue) could be offered as a service. People submit/upload and the process happens, even if it's a paid service. Might be best suited for book covers or important illustrations. Maybe not quick enough for day-to-day work or graphic novels, etc.

    3) Personally, I'd rather have it NOT be tied to Daz, but getting it to work with a script that better figures out what's a surface/outer shell, and letting that drive the heavy line work, might be key. The results from that other thread were spectacular, but yeah, they might still be far out of reach.

    4) For Conclave: I'd rather hear how close or how possible this all is. And what you intend to do about it. You didn't show up and do the usual

    "I'm working on a Daz shader that...."

    "Me and a few buddies have this app...."

    "I'm thinking on one day looking into this and I'm just making a feeler post to get ideas and no, you won't be hearing about this again...ever....thanks for getting excited about this fantasy product, I'm not currently developing." [add an lol if you feel sensitive]

    -----------------

    So yeah, obviously the results from @algovincian got everyone excited.

    But I feel like I just saw a teaser trailer for a Star Wars movie coming out in Christmas 2018.

    And the trailer was just heavy breathing and the sound of a light saber igniting. Yuck.

  • algovincian Posts: 2,578
    edited December 2017

    SnowSultan said:

    Well, to be clear, all I was saying is that *I* couldn't do it

    Well you've come closer to a genuine hand-drawn look than anything that's come before, and I've been keeping track of this sort of thing for over 15 years.  :)  No shaders or presets truly give hand-drawn results because they're not able to visualize shadows and line weight in a way that is pleasing to the human eye; they obviously just use mathematical calculations to determine where shadows go and what lines are considered important. This is why we get jagged shadows and outlines that we would never place if we drew them by hand.

    I don't even pretend to know how you got the results you did, but I think the methods behind it should be explored further, regardless of how impractical they might be. My own ancient method of flattening everything in the scene along the Z axis and rendering (the Z-Toon technique, which might be older than DAZ itself) wasn't practical either, but it gave pretty good results for the time. Your technique is like the discovery of an Earth-like world in another galaxy; we don't know exactly what we're dealing with and how to make it benefit us at the moment, but the potential is great.   :)

     

    SnowS

    Thanks for the encouragement, SnowS - it's much appreciated. Truth is, I have no choice but to continue. I'm no spring chicken, but I still find myself pulling all-nighters when inspiration strikes and provides a spark to investigate a new angle further. My mind refuses to shut down. It's both a blessing and a curse at the same time.

    On another note, I also wanted to mention that I couldn't help but think of Smacky when I picked up all the Monique Pro Bundles last month. She came with one outfit that, to me, was screaming to be paired with a car.

    I totally pictured Smacky, only maybe a different car and more of a 70s vibe.

    - Greg

    Post edited by algovincian on
  • JClave Posts: 64
    Uthgard said:

    What kind of styles are we talking about? Pencil, crayon, paint? Could it reproduce specific styles like manga or engraving? And how about consistency for a series of images meant for the same work, like a sprite sheet for a 2D game? Also, what does this tool offer compared to programs like DAP or some Photoshop filters?

    Sorry if I sound over-eager, it's just this is one of my goals with 3D art and so far I haven't seen anything that quite clicks with what I envision.

    Semi-realistic art styles appropriate for games and such, like the following https://i.imgur.com/lXyu4JI.jpg, would be my focus.

    So the first release would aim to be capable of those art styles, one of them being what people call '4-color comic' art.

    I don't yet have enough experience to answer on frame-by-frame consistency. I imagine that would be a nice-to-have feature in the tech, but not the core requirement.

    As stated before, this tool will aim to cut out all the elbow grease of fixing artifacts and adjusting things here and there that you have to do with alternative methods, ideally being a simple click-and-finish process.

    I still have doubts that we can ever achieve truly hand-drawn looks right out of a 3D renderer - if Disney can't do it, none of us are likely to - but I'm certainly for anything that gets us a little bit closer.

    Disney couldn't do it because it's not a research firm with multi-million-dollar investments in emerging computer vision technologies. Deep learning is a rapidly evolving field. For example, the 'GAN' algorithm that is now widely used for this kind of image processing was only introduced by researchers in 2014, and there are new research publications improving upon that work every year.
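
    To give a rough idea of what that kind of approach involves (this is only a generic image-to-image GAN training sketch with toy networks and random stand-in data; it is not the actual tech I'm building, and a real system would be far larger):

    ```python
    # Toy sketch of image-to-image GAN training (3D render in, illustration out).
    # The networks, image sizes and random stand-in data are deliberately tiny;
    # real systems (pix2pix-style and newer) are far larger and train on pairs
    # of renders and matching illustrations.
    import torch
    import torch.nn as nn

    G = nn.Sequential(  # "generator": maps a render to an illustration
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
    )
    D = nn.Sequential(  # "discriminator": real illustration vs. generated one
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 3, stride=2, padding=1),
    )
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    render = torch.rand(1, 3, 64, 64)        # stand-in for a 3D render
    illustration = torch.rand(1, 3, 64, 64)  # stand-in for the matching hand-drawn target

    for step in range(100):
        # Discriminator step: real targets labelled 1, generated images labelled 0.
        d_real = D(illustration)
        d_fake = D(G(render).detach())
        d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step: fool the discriminator while staying close to the target.
        fake = G(render)
        d_out = D(fake)
        g_loss = bce(d_out, torch.ones_like(d_out)) + (fake - illustration).abs().mean()
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
    ```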

    Features wanted:

    1)  Good outlines.  Figures, especially facial features, are the hardest to get decent results for.  Things composed of straight/hard lines, like vehicles and buildings, are simple to get a 2D look for.

    2)  Smooth, flowing shadows.  I have a few different ways to fake a 2D look currently, but the shadows jut all over the place instead of being simple, smooth arcs.

    3)  Variable line width would be nice.  Most hand-inked images have variable thickness depending on how the brush/pen is held and the pressure used.

    4)  Good color translation.  Dark skin tends to get washed out when run through different plugins.

    Thanks for the input. I will definitely look into addressing these issues!
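
    For 1) and 3), one generic starting point (just a rough sketch of the general idea, not my actual approach; the depth pass file name and the thresholds are made up) is to pull candidate lines from a depth pass and thicken them where the depth discontinuity is strong:

    ```python
    # Rough sketch (not a finished design): derive ink lines from a depth pass,
    # with line thickness driven by how strong the depth discontinuity is.
    # "depth_pass.png" and the thresholds below are made-up examples.
    import numpy as np
    from PIL import Image, ImageFilter

    depth = np.asarray(Image.open("depth_pass.png").convert("F"))  # float grayscale

    # Depth discontinuities become candidate outlines (simple gradient magnitude).
    gy, gx = np.gradient(depth)
    edge_strength = np.hypot(gx, gy)
    edge_strength = edge_strength / (edge_strength.max() + 1e-8)

    # Stronger discontinuities get thicker lines: dilate the strong edges once,
    # a crude stand-in for pen pressure / variable line width.
    weak = (edge_strength > 0.05).astype(np.uint8) * 255
    strong = (edge_strength > 0.20).astype(np.uint8) * 255
    thick = np.asarray(Image.fromarray(strong).filter(ImageFilter.MaxFilter(3)))
    ink = np.maximum(weak, thick)

    # White background, black ink, ready to multiply over a colour pass.
    Image.fromarray(255 - ink).save("line_pass.png")
    ```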

    1) I need to see the before-and-afters. The literal before and after, to judge. Some images translate better than others. People and creatures tend to have curved shading, and it's always difficult for any process to judge what's important: black or white? Thick or thin? Outline or detail?

    2) Maybe all of this (probably with more time as an issue) could be offered as a service. People submit/upload and the process happens, even if it's a paid service. Might be best suited for book covers or important illustrations. Maybe not quick enough for day-to-day work or graphic novels, etc.

    3) Personally, I'd rather have it NOT be tied to Daz, but getting it to work with a script that better figures out what's a surface/outer shell, and letting that drive the heavy line work, might be key. The results from that other thread were spectacular, but yeah, they might still be far out of reach.

    4) For Conclave: I'd rather hear how close or how possible this all is. And what you intend to do about it. You didn't show up and do the usual

    "I'm working on a Daz shader that...."

    "Me and a few buddies have this app...."

    "I'm thinking on one day looking into this and I'm just making a feeler post to get ideas and no, you won't be hearing about this again...ever....thanks for getting excited about this fantasy product, I'm not currently developing." [add an lol if you feel sensitive]

    1) You can check Algovincian's thread that I linked at the top; there might be some before-and-afters in it.

    2) I was thinking that too: an accessible cloud service that can be integrated with Daz. Maybe not pay-per-image, though, since if you want to experiment a lot with customisable settings (Daz's or its own), you probably don't want to pay each time you experiment.

    3) This is kind of a touchy topic. Due to the forum policy on mentioning commercial products, I only found out that this thread was approved two days after I posted it. I'd rather put the specifics on hold for now. I'm sure something can be worked out, but it will require confirmation with Daz.

    4) I'm at the very beginning stage, as you might have suspected, so not close at all. I found it hard to stay motivated working on this lately because pretty much everyone I knew in person was uninterested in the idea. That's probably the main reason I posted this. Now my mind is made up to spend that $5k on a deep learning PC. So yeah, I'm committed, and I aim to release a usable version in a few years.

     

  • FirstBastion Posts: 7,448
    edited December 2017

    There are some off-the-shelf products out there right now that can already deliver this kind of postwork.

     

    Post edited by FirstBastion on
  • algovincian Posts: 2,578
    edited December 2017

    I still have doubts that we can ever achieve truly hand-drawn looks right out of a 3D renderer - if Disney can't do it, none of us are likely to - but I'm certainly for anything that gets us a little bit closer.

    Disney couldn't do it because it's not a research firm with multi-million-dollar investments in emerging computer vision technologies. Deep learning is a rapidly evolving field. For example, the 'GAN' algorithm that is now widely used for this kind of image processing was only introduced by researchers in 2014, and there are new research publications improving upon that work every year.

    Actually, you may be surprised. I was doing some research on photogrammetry/structured light years ago, and came across this white paper published by Disney Research:

    https://www.disneyresearch.com/publication/nonlinear-disparity-mapping-for-stereoscopic-3d/

    Back up a directory - their database is quite extensive (includes 500+ papers).

    - Greg

     

    Post edited by algovincian on
  • bluejaunte Posts: 1,861

    If it looked like algovincian's results I'd buy it in a heartbeat. As we know, that's probably not going to happen. But obviously it could still look good.

  • bluejaunte Posts: 1,861

    This would bring me back to doing 3D, but what connection do you have to Algovincian and his project? I still have doubts that we can ever achieve truly hand-drawn looks right out of a 3D renderer - if Disney can't do it, none of us are likely to - but I'm certainly for anything that gets us a little bit closer.

    In short, the answer is none. I recently received a couple of messages outside of these forums that made no sense to me at the time, but now that I've seen this thread they do lol (the part about me providing examples). To clear up any confusion, I will say that the images I've posted in either of the threads of mine mentioned by Conclave were created by systems/algorithms that I've developed over the last 20+ years, and not examples of any work that Conclave has done.

    That being said, I'm always interested in seeing anybody's NPR work - including any examples of Conclave's own work.

    On another note related to @SnowSultan's comments (and I don't want to speak for anyone else), I will say that distilling my process down to something that would run as a stand-alone or plugin to DS never made sense to me from a practical standpoint. The number of wheels that would have had to be re-created was prohibitive.

    There are a few people that have actually seen the process execute from start to finish, and they were astonished at the scale of the system as a whole. Going in, they were familiar with the details of the system that I've posted publicly. But when they saw the scale of the actual hardware (distributed network), the number/size of the intermediate files generated for each scene processed (approaching 1GB - the bulk of it compressed image data), and the complexity of all the code being executed, they were shocked.

    I've talked in the past about how the development of my systems/algorithms has been an evolutionary process (small incremental changes made over long periods of time), and it really is true. I went down many forks in the road that ended up not being the right path, but there was no way for me to know ahead of time. There was no book that I read that walked me through the process. There was no class that I could take to teach me everything I needed to know to make it happen. I was making it all up as I went - creating components to solve problems as they popped up.

    The journey was (and continues to be) a lot of fun, and I wish anybody endeavoring in something similar the best of luck!

    - Greg

    Have you ever thought of offering a simple service where people can upload their renders for a fee and you send back the processed image? Or is the initial rendering part already too involved?

  • If it looked like algovincian's results I'd buy it in a heartbeat. As we know, that's probably not going to happen. But obviously it could still look good.

    "Yes" - The internet(s)

  • jaxprog Posts: 312

    I would be interested in looking at what you offer.

    How different would it be in comparison to DarkWorld's Arthouse Postwork product found on Renderosity.com?

    https://www.renderosity.com/mod/bcs/arthouse-postwork/119091/

  • I am very interested in both; a rendering cloud sounds interesting too.
