Comments
Haha, thank you for thinking of Smacky. :) I miss making images with her, but hopefully as our technology advances, I'll get back into the swing of things. My main reason for taking an extended break from 3D really has to do with the mismatch in quality between our photorealistic figures and textures and our unfortunately not-so-photorealistic hair and clothing models, but I still get inspired to experiment once in a while.
I haven't looked over those Disney papers, but I just thought it was odd that they wrote new software from scratch specifically to handle Rapunzel's hair (and I believe Merida's too) but haven't really been able to do anything revolutionary when it comes to non-photorealistic renders. Some of the more interesting recent examples are the video games Guilty Gear and the new Dragon Ball Fighter; they do a really good job of giving a cel-shaded look to 3D models, but I have a feeling the shading may be hand-painted.
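As an aside, the core of that cel-shaded look is less mysterious than it sounds: the continuous diffuse term gets snapped to a few flat bands. Here is a minimal sketch, assuming NumPy only (the cel_shade helper and the synthetic hemisphere are purely illustrative, not taken from either game):

```python
import numpy as np

def cel_shade(normals, light_dir, bands=3):
    """Quantize a Lambert term into flat bands for a cel/toon look.

    normals:   (H, W, 3) array of surface normals
    light_dir: (3,) unit vector pointing toward the light
    bands:     number of discrete shading levels
    """
    # Continuous Lambertian shading, clamped to [0, 1]
    lambert = np.clip(np.einsum("hwc,c->hw", normals, light_dir), 0.0, 1.0)
    # Snap to a handful of flat levels -- the hard banding is what reads
    # as "inked" cel shading instead of a smooth photoreal falloff
    q = np.minimum(np.floor(lambert * bands), bands - 1)
    return q / (bands - 1)

# Tiny usage example on a synthetic hemisphere's normals
h = w = 64
y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
z = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))
shaded = cel_shade(np.dstack([x, y, z]), np.array([0.0, 0.0, 1.0]))
```

The hard step between bands is what reads as ink rather than a photoreal falloff; the hand-painted feel mentioned above would come from hand-authored textures and normals layered on top of banding like this.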
Just please don't give up, Algovincian; you've really made some interesting progress where pretty much everyone else has come up empty. ;)
Yes, I would also be interested in such a possibility.
Keep us posted with any information on the progress of bringing it to Daz 3D.
I like good crosshatching as a drawing style - you see it on lots of old pre-20th century illustrations.
Yes, I have, but I believe it would be best if DAZ were involved, as they already have the captive market, infrastructure setup, etc. to do it efficiently. Also, the analysis passes do have a tendency to get busted with new releases, and reading the shader info from an infinite set of possible initial shaders applied to objects in the scene is always going to be challenging. It is doable, but requires some knowledge/understanding on the user's part (perhaps more than is typically required of end users of DAZ products).
To get back to Conclave's work . . . have you actually set up any GANs that were stable enough to train successfully and converge? I make use of multiple backpropagation networks to make very narrow/focused decisions. I've played with the VGG network, which is the basis of DeepArt's work, but ultimately found it to be too hit-or-miss to really be more than a novelty (not to say that somebody else couldn't get better results).
- Greg
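For anyone following along, here is a minimal sketch of the kind of alternating GAN update being discussed, assuming PyTorch (the tiny fully-connected G and D and the train_step helper are placeholders, not anything from Greg's or Conclave's actual pipelines). The two optimizers pull against each other, which is exactly why stability and convergence are not guaranteed:

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, img_dim))
D = nn.Sequential(nn.Linear(img_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    b = real_batch.size(0)
    # --- Discriminator step: push real toward 1, fakes toward 0 ---
    z = torch.randn(b, latent_dim)
    fake = G(z).detach()
    loss_d = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # --- Generator step: try to make D label fresh fakes as real ---
    z = torch.randn(b, latent_dim)
    loss_g = bce(D(G(z)), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# e.g. train_step(torch.randn(32, img_dim)) on stand-in "real" data;
# nothing in this tug-of-war forces the pair toward a fixed point,
# which is the usual source of the instability mentioned above.
```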
I have nothing to report since I haven't actually tried training GANs yet. From what I've read so far, I can only speculate that, as you said, it is difficult to reach an optimal solution relying on them. I will keep this factor in mind, as I believe this is how you are able to generate those beautiful sharp outlines in your renders, as opposed to the overly painterly styles seen with other NPR techniques.
Given all of this talk about the arrival on the scene of the RTX2080 cards, Turing, Tensor cores, etc., I thought this article might interest some people:
https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx
GANs were used in this case - made me think of this thread. Hard to believe it's been 9 months already . . . how goes your work, @Conclave?
- Greg
Hah sorry to disappoint you. Haven't made much progress due to sleep issues. But I'm still going at it.
Kind of embarrassed to admit this is what I have to show for the past 9 months' work.
Also wasted many weeks working on OpenGL-based rendering, but I'm now rewriting the code in OptiX because I realised ray tracing is better for non-realtime quality renderings.
Regarding design, I realised neural-network-driven algorithms like GANs are not ideal when it comes to good artistic control.
From all the research papers and demos I've read and seen, a stroke-based rendering approach seems to be the most promising when it comes to artistic control, intuitiveness, and temporal coherence.
Hopefully in another 9 months I will have cool demos of stroke-based renderings to show, like Disney's OverCoat, except automated.
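To make the stroke-based idea above concrete, here is a minimal sketch in the spirit of Hertzmann-style painterly rendering, assuming NumPy only (stroke_render and all of its parameters are illustrative; this is not Disney's OverCoat or Conclave's system). Strokes sample the local tone and are dragged along image contours, i.e. perpendicular to the gradient, which is where much of the artistic control comes from:

```python
import numpy as np

def stroke_render(img, spacing=4, length=9, rng=None):
    """img: (H, W) grayscale array in [0, 1]; returns a stroked canvas."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape
    gy, gx = np.gradient(img)                      # local gradient field
    canvas = np.ones_like(img)                     # white "paper"
    ys, xs = np.mgrid[0:h:spacing, 0:w:spacing]    # stroke centers on a grid
    for y, x in zip(ys.ravel(), xs.ravel()):
        # Stroke direction follows the contour: perpendicular to the gradient
        dx, dy = -gy[y, x], gx[y, x]
        n = np.hypot(dx, dy)
        if n < 1e-6:                               # flat region: random direction
            theta = rng.uniform(0, np.pi)
            dx, dy = np.cos(theta), np.sin(theta)
        else:
            dx, dy = dx / n, dy / n
        tone = img[y, x]                           # stroke carries the local tone
        for t in np.linspace(-length / 2, length / 2, length):
            px, py = int(round(x + t * dx)), int(round(y + t * dy))
            if 0 <= px < w and 0 <= py < h:
                canvas[py, px] = tone
    return canvas
```

Because each stroke is an explicit object with a position, direction, length, and tone, parameters like these can be exposed to the artist or keyed over time, which is what makes the approach attractive for artistic control and temporal coherence.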
I hope you get it done. I would buy it in a heartbeat.