New Tech to generate 2d illustration from 3d renders


Comments

  • SnowSultan Posts: 3,507

    I totally pictured Smacky, only maybe a different car and more of a 70s vibe.

    Haha, thank you for thinking of Smacky.  :)   I miss making images with her, but hopefully as our technology advances, I'll get back into the swing of things. My main reason for taking an extended break from 3D really has to do with the mismatch in quality between our photorealistic figures and textures and our unfortunately not-so-photorealistic hair and clothing models, but I still get inspired to experiment once in a while.

     

    I haven't looked over those Disney papers, but I just thought it was odd that they wrote new software from scratch specifically to handle Rapunzel's hair (and I believe Merida's too) but haven't really been able to do anything revolutionary when it comes to non-photorealistic renders. Some of the more interesting recent examples are the video games Guilty Gear and the new Dragon Ball Fighter; they do a really good job of giving a cel-shaded look to 3D models, but I have a feeling the shading may be hand-painted.
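
    For context, the core of that cel-shaded look is typically just quantizing the Lambertian lighting term into a few flat bands (the hand-painted feel mostly comes from the textures and carefully authored normals). A minimal sketch of the banding step, assuming simple N.L shading and made-up band thresholds:

        import numpy as np

        def toon_shade(normals, light_dir, thresholds=(0.45, 0.75), levels=(0.35, 0.7, 1.0)):
            """Quantize Lambertian (N.L) shading into flat bands for a cel-shaded look.

            normals:    (H, W, 3) array of unit surface normals from the render
            light_dir:  (3,) unit vector pointing toward the light
            thresholds/levels: hypothetical band edges and output intensities
            """
            ndotl = np.clip(np.einsum("hwc,c->hw", normals, light_dir), 0.0, 1.0)
            shade = np.full_like(ndotl, levels[0])          # darkest band everywhere
            for t, level in zip(thresholds, levels[1:]):    # brighten where N.L crosses each edge
                shade[ndotl >= t] = level
            return shade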

    Just please don't give up Algovincian, you've really made some interesting progress where pretty much all others have been unable to.  ;)

  • Artini Posts: 8,861
    Conclave said:

    Inspired by Algovincian's NPR thread: https://www.daz3d.com/forums/discussion/68493/algovincian-non-photorealistic-rendering-npr/p1,

    Would you be interested in buying a tool that can generate an illustration from 3d renders that is indistinguishable from hand-drawn art?

    The tool could allow artistic & style adjustments for non-artists, much as Daz3d was designed for non-technical people.

    The main selling point of this tech is that there is NO human intervention during the generation process. It is a simple click-and-finish workflow, rather than having to be an illustration expert and fix imperfections here and there throughout the process.

    Some examples of illustration capabilities provided by Algovincian:

    https://www.daz3d.com/forums/discussion/comment/986517/#Comment_986935

    This could have uses in comics, 2d games and other artistic illustrations like book covers.

    If this interests you, I would like to discuss the exact features that would make it useful for you.

    Yes, I would also be interested in such a possibility.

    Keep posting updates on the progress of bringing it to Daz 3D.

     

  • nonesuch00 Posts: 17,929

    I like good crosshatching as a drawing style - you see it in lots of old pre-20th-century illustrations.
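
    A rough sketch of the usual automated take on that style, in case it is useful: threshold the render's luminance into a few tone levels and overlay progressively denser hatch layers at different angles, so shadows end up cross-hatched while highlights stay white (the spacing, thresholds and angles below are made-up tuning values):

        import numpy as np
        from PIL import Image

        def crosshatch(path, spacing=6, thresholds=(0.75, 0.5, 0.3, 0.15)):
            """Tone-based crosshatching: darker tones receive more hatch layers."""
            gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
            h, w = gray.shape
            ys, xs = np.mgrid[0:h, 0:w]
            angles = (0, 90, 45, 135)                  # one hatch direction per tone level
            out = np.ones_like(gray)                   # start with white paper
            for t, angle in zip(thresholds, angles):
                theta = np.deg2rad(angle)
                coord = xs * np.cos(theta) + ys * np.sin(theta)
                lines = (coord % spacing) < 1.0        # a 1-pixel line every `spacing` pixels
                out[np.logical_and(lines, gray < t)] = 0.0
            return Image.fromarray((out * 255).astype(np.uint8))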

  • algovincian Posts: 2,576

    This would bring me back to doing 3D, but what connection do you have to Algovincian and his project? I still have doubts that we can ever achieve truly hand-drawn looks right out of a 3D renderer - if Disney can't do it, none of us are likely to - but I'm certainly for anything that gets us a little bit closer.

    In short, the answer is none. I recently received a couple of messages outside of these forums that made no sense to me at the time, but now that I've seen this thread they do lol (the part about me providing examples). To clear up any confusion, I will say that the images I've posted in either of the threads of mine mentioned by Conclave were created by systems/algorithms that I've developed over the last 20+ years, and not examples of any work that Conclave has done.

    That being said, I'm always interested in seeing anybody's NPR work - including any examples of Conclave's own work.

    On another note related to @SnowSultan's comments (and I don't want to speak for anyone else), I will say that distilling my process down to something that would run as a stand-alone or plugin to DS never made sense to me from a practical standpoint. The number of wheels that would have had to be re-created was prohibitive.

    There are a few people that have actually seen the process execute from start to finish, and they were astonished at the scale of the system as a whole. Going in, they were familiar with the details of the system that I've posted publicly. But when they saw the scale of the actual hardware (distributed network), the number/size of the intermediate files generated for each scene processed (approaching 1GB - the bulk of it compressed image data), and the complexity of all the code being executed, they were shocked.

    I've talked in the past about how the development of my systems/algorithms has been an evolutionary process (small incremental changes made over long periods of time), and it really is true. I went down many forks in the road that ended up not being the right path, but there was no way for me to know ahead of time. There was no book that I read that walked me through the process. There was no class that I could take to teach me everything I needed to know to make it happen. I was making it all up as I went - creating components to solve problems as they popped up.

    The journey was (and continues to be) a lot of fun, and I wish anybody endeavoring in something similar the best of luck!

    - Greg

    Have you ever thought of offering a simple service where people can upload their renders for a fee and you send back the processed image? Or is the initial rendering part already too involved?

    Yes, I have, but I believe it would be best if DAZ were involved, as they already have the captive market, infrastructure setup, etc. to do it efficiently. Also, the analysis passes do have a tendency to get busted with new releases, and reading the shader info from an infinite set of possible initial shaders applied to objects in the scene is always going to be challenging. It is doable, but requires some knowledge/understanding on the user's part (perhaps more than is typically required of end users of DAZ products).

    To get back to Conclave's work . . . have you actually set up any GANs that were stable enough to train successfully and converge? I make use of multiple back-propagation networks to make very narrow/focused decisions. I've played with the VGG network, which is the base of DeepArt's work, but ultimately found it to be too hit or miss to really be more than a novelty (not to say that somebody else couldn't get better results).

    - Greg
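
    For anyone wondering what the VGG network has to do with DeepArt: Gatys-style style transfer optimizes an output image so that its VGG feature statistics (Gram matrices) match a style image while its deeper features stay close to the content image. A bare-bones sketch of that idea, assuming PyTorch/torchvision; the layer indices, weights and step count are arbitrary, and input normalization is omitted for brevity:

        import torch
        import torch.nn.functional as F
        import torchvision.models as models

        vgg = models.vgg19(pretrained=True).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)

        STYLE_LAYERS = {1, 6, 11, 20, 29}              # arbitrary relu layer indices in vgg19.features

        def features(x):
            feats = []
            for i, layer in enumerate(vgg):
                x = layer(x)
                if i in STYLE_LAYERS:
                    feats.append(x)
            return feats

        def gram(f):                                   # channel-by-channel correlation of a feature map
            _, c, h, w = f.shape
            f = f.view(c, h * w)
            return (f @ f.t()) / (c * h * w)

        def stylize(content, style, steps=300, style_weight=1e5):
            """content/style: (1, 3, H, W) tensors; the output starts from the content image."""
            target_grams = [gram(f) for f in features(style)]
            content_feat = features(content)[-1].detach()
            img = content.clone().requires_grad_(True)
            opt = torch.optim.Adam([img], lr=0.02)
            for _ in range(steps):
                opt.zero_grad()
                feats = features(img)
                loss = F.mse_loss(feats[-1], content_feat)                   # keep the render's structure
                for f, g in zip(feats, target_grams):
                    loss = loss + style_weight * F.mse_loss(gram(f), g)      # match the style statistics
                loss.backward()
                opt.step()
            return img.detach()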

  • JClave Posts: 64
    To get back to Conclave's work . . . have you actually set up any GANs that were stable enough to train successfully and converge? I make use of multiple back-propagation networks to make very narrow/focused decisions. I've played with the VGG network, which is the base of DeepArt's work, but ultimately found it to be too hit or miss to really be more than a novelty (not to say that somebody else couldn't get better results).

    - Greg

    I have nothing to report since I haven't actually tried training GANs yet. From what I've read so far, I can only speculate that, as you said, it is difficult to reach an optimal solution relying on them. I guess I will keep this factor in mind, as I believe this is how you are able to generate those beautiful sharp outlines in your renders, as opposed to the overly painterly styles seen with other NPR techniques.
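
    For reference, the balancing act that makes GAN training so finicky shows up even in a toy training loop: the discriminator and generator are optimized against each other, and if either one gets too far ahead the other's gradients collapse. A minimal PyTorch sketch (the tiny fully-connected architectures and hyperparameters below are made up):

        import torch
        import torch.nn as nn

        latent_dim, img_dim = 64, 28 * 28   # toy sizes, e.g. flattened 28x28 images

        G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, img_dim), nn.Tanh())
        D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                          nn.Linear(256, 1), nn.Sigmoid())

        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCELoss()

        def train_step(real):                             # real: (batch, img_dim) scaled to [-1, 1]
            batch = real.size(0)
            ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

            # 1) discriminator step: push real samples toward 1 and generated samples toward 0
            fake = G(torch.randn(batch, latent_dim)).detach()
            loss_d = bce(D(real), ones) + bce(D(fake), zeros)
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            # 2) generator step: try to make the discriminator output 1 for fresh samples
            fake = G(torch.randn(batch, latent_dim))
            loss_g = bce(D(fake), ones)
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
            return loss_d.item(), loss_g.item()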

  • algovincian Posts: 2,576

    Given all of this talk about the arrival on the scene of the RTX2080 cards, Turing, Tensor cores, etc., I thought this article might interest some people:

    https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx

    GANs were used in this case - made me think of this thread. Hard to believe it's been 9 months already . . . how goes your work, @Conclave?

    - Greg

  • JClave Posts: 64
    edited August 2018

    Hah sorry to disappoint you. Haven't made much progress due to sleep issues. But I'm still going at it.

    Kind of embarrassed to admit this is what I have to show for the past 9 months' work.

    I also wasted many weeks working on OpenGL-based rendering, but I'm now rewriting the code in OptiX because I realised ray tracing is better for non-realtime quality renders.

    Regarding the design, I realised that neural-network-driven algorithms like GANs are not ideal when it comes to good artistic control.

    From all the research papers and demos I've read and seen, a stroke-based rendering approach seems to be the most promising when it comes to artistic control, intuitiveness and temporal coherence.

    Hopefully in another 9 months, I will be able to show cool demos of stroke-based rendering, like Disney's OverCoat, except automated.
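
    To make the stroke-based idea concrete: the classic automated version (Litwinowicz / Hertzmann style painterly rendering) samples the source render on a grid, orients a short stroke along the local edge direction at each sample, and composites the strokes onto a canvas. A single-layer sketch of that, with made-up stroke length, width and spacing:

        import numpy as np
        from PIL import Image, ImageDraw

        def painterly(path, spacing=8, length=14, width=4):
            """Single-layer stroke-based rendering: one oriented stroke per grid cell."""
            src = Image.open(path).convert("RGB")
            px = np.asarray(src, dtype=np.float32)
            gy, gx = np.gradient(px.mean(axis=2))      # luminance gradients give the edge direction
            canvas = Image.new("RGB", src.size, (255, 255, 255))
            draw = ImageDraw.Draw(canvas)
            h, w = gy.shape
            for y in range(0, h, spacing):
                for x in range(0, w, spacing):
                    # orient the stroke perpendicular to the gradient, i.e. along the local edge
                    angle = np.arctan2(gy[y, x], gx[y, x]) + np.pi / 2
                    dx, dy = np.cos(angle) * length / 2, np.sin(angle) * length / 2
                    colour = tuple(int(c) for c in px[y, x])
                    draw.line([(x - dx, y - dy), (x + dx, y + dy)], fill=colour, width=width)
            return canvas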

  • tkdrobert Posts: 3,533

    I hope you get it done.  I would buy it in a heartbeat.
