Is AI killing the 3D star?


Comments

  • NylonGirl Posts: 2,202

    Masterstroke said:

    BTW: Does anybody else feel nauseous when watching AI movies? There is always a nightmare feeling to it. Don't know why.

    I don't have this problem, but if it is a problem, I suspect it has something to do with the framerate. The AI movies seem to exhibit something like the "soap opera effect," in which a movie meant for a low framerate (such as 24 frames per second or less) tends to look quite unusual when viewed at 60 frames per second. This happens even if extra frames are added to make it play at the right speed. Maybe that's what they're doing: rendering 15-frames-per-second movies and interpolating them to 60 frames per second.
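
    For what it's worth, that kind of interpolation is easy to sketch: real interpolators use motion estimation, but even naive linear blending of neighbouring frames produces that overly smooth look. A minimal sketch in Python, assuming OpenCV is installed; the file names are just placeholders:

        # Naive frame interpolation: stretch a low-frame-rate clip to 60 fps by
        # inserting linearly blended in-between frames (3 per source frame, so
        # roughly 15 fps -> 60 fps). File names are placeholders.
        import cv2

        cap = cv2.VideoCapture("input_15fps.mp4")
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        out = cv2.VideoWriter("output_60fps.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"), 60, (w, h))

        ok, prev = cap.read()
        while ok:
            ok, nxt = cap.read()
            if not ok:
                out.write(prev)          # last source frame, nothing to blend with
                break
            out.write(prev)
            for t in (0.25, 0.5, 0.75):  # three synthetic frames between each pair
                out.write(cv2.addWeighted(prev, 1.0 - t, nxt, t, 0))
            prev = nxt

        cap.release()
        out.release()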

  • FSMCDesigns Posts: 12,843

    Wonderland said:

    I just used that image as a reference with a prompt describing the scene (without Cupid and adding a city backdrop) and in 5 seconds got this in Midjourney. It literally takes no creativity. EXCEPT it's like being an art director rather than being an artist. You are working with a very talented artist (the AI) and it's up to you to convey what you are looking for, then pick the best image to suit your needs. It took me longer to write this post than to generate these images. And it was so easy to "steal" your general style. There are a few errors that need to be fixed, but that can be done easily in Midjourney or Photoshop.

    I'm still on the fence about using it. Great for commercial art/marketing/ads but it's not real "art".

    I don't see using premade 3D assets and rendering as any different than using AI to create an image; you are still an "art director" either way, IMO. You either guide the AI to do the image or you guide the assets in DS to create the render. The only time I will give 3D the boost is for those that don't rely on premade assets or use a ton of postwork, but then again, with AI I usually end up doing a lot of inpainting to get an image exactly how I want it, which is essentially postwork for AI.

    I took your first image, used it as a prompt, and guided it to be more realistic or anime.

    o.png
    848 x 1264 - 1M
    ro2.png
    848 x 1264 - 2M
  • Ryder Posts: 24

    As someone who is using Stable Diffusion, AI can't kill 3D artists and artworks. I keep reverting from SD back to Daz; I have been using Daz for 4-5 years. You can't have the same freedom in AI that you have in 3D software. You can create whatever you want in Daz Studio, but you can't do it in AI. Even if you manage to do it, it is still artificial intelligence, and one way or another you will get uncanny-valley dogsh*t results. Accuracy in AI is bad. Can it be better? Sure, maybe in 3-5 years. But as a Stable Diffusion user, I want to say that AI is definitely not going to beat 3D art and artists. There is no REAL freedom in AI. It serves me well in some cases though, like enhancing my renders, etc.

  • Wonderland Posts: 7,133

    FSMCDesigns said:

    Wonderland said:

    I just used that image as a reference with a prompt describing the scene (without Cupid and adding a city backdrop) and in 5 seconds got this in Midjourney. It literally takes no creativity. EXCEPT it's like being an art director rather than being an artist. You are working with a very talented artist (the AI) and it's up to you to convey what you are looking for, then pick the best image to suit your needs. It took me longer to write this post than to generate these images. And it was so easy to "steal" your general style. There are a few errors that need to be fixed, but that can be done easily in Midjourney or Photoshop.

    I'm still on the fence about using it. Great for commercial art/marketing/ads but it's not real "art".

    I don't see using premade 3D assets and rendering as any different than using AI to create an image; you are still an "art director" either way, IMO. You either guide the AI to do the image or you guide the assets in DS to create the render. The only time I will give 3D the boost is for those that don't rely on premade assets or use a ton of postwork, but then again, with AI I usually end up doing a lot of inpainting to get an image exactly how I want it, which is essentially postwork for AI.

    I took your first image, used it as a prompt, and guided it to be more realistic or anime.

    I described the image in Midjourney and used your image as a style reference and came up with this. It's just too easy. It's fun. I was up till 5:30 AM the other night playing around in Midjourney, but it's just too easy to create something amazing, and you know your art will never live up to it. I do a lot of work fixing up bad AI images and do postwork, but it's still difficult for me to call it my art. It's a little more interesting for me to run AI on my own rendered images, and AI will always come up with something new and cool.

    alicia_wonderland_An_anime_boy_and_girl_in_love_the_boy_gives_t_7b189a59-be94-4d6b-ad38-89167c2b4973.png
    1856 x 2464 - 7M
  • FSMCDesigns Posts: 12,843

    Ryder said:

    As someone who is using Stable Diffusion, AI can't kill 3D artists and artworks. I keep reverting from SD back to Daz; I have been using Daz for 4-5 years. You can't have the same freedom in AI that you have in 3D software. You can create whatever you want in Daz Studio, but you can't do it in AI. Even if you manage to do it, it is still artificial intelligence, and one way or another you will get uncanny-valley dogsh*t results. Accuracy in AI is bad. Can it be better? Sure, maybe in 3-5 years. But as a Stable Diffusion user, I want to say that AI is definitely not going to beat 3D art and artists. There is no REAL freedom in AI. It serves me well in some cases though, like enhancing my renders, etc.

    Try Flux instead of SD, it's much more accurate to prompts and more realistic looking IMO

    I have been using DS for a long time, so this is how I look at it. There are many times I create a scene, light it, and render, only to feel like the image has no impact. So I try different poses, different lights, even different skins, and usually that doesn't help. So no matter what you control in DS, sometimes it just doesn't come out the way you would like. Same for AI: I can describe what I want and the chance it comes out exactly as prompted is slim, but there have been many times I got more than what I wanted and ran with that one. If you can get a great base image, you can really take it up a notch with AI inpainting.
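
    To make that concrete, here is a minimal inpainting sketch using the Hugging Face diffusers library: only the masked region of the base render gets repainted, which is why it works so well as "postwork for AI." The checkpoint ID and file names are placeholders for whatever model and render you actually use:

        # Minimal diffusers inpainting sketch: repaint only the masked region of a
        # base render. The model ID and file names below are placeholders.
        import torch
        from diffusers import AutoPipelineForInpainting
        from PIL import Image

        pipe = AutoPipelineForInpainting.from_pretrained(
            "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",  # any inpaint checkpoint
            torch_dtype=torch.float16,
        ).to("cuda")

        base = Image.open("my_render.png").convert("RGB")
        mask = Image.open("hands_mask.png").convert("RGB")       # white = area to repaint

        result = pipe(
            prompt="detailed realistic hands, natural skin",
            image=base,
            mask_image=mask,
            strength=0.85,            # how far the masked area may drift from the render
            num_inference_steps=30,
        ).images[0]
        result.save("my_render_inpainted.png")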

  • Ryder Posts: 24
    edited March 18

    FSMCDesigns said:

    Ryder said:

    As someone who is using Stable Diffusion, AI can't kill 3D artists and artworks. I keep reverting from SD back to Daz; I have been using Daz for 4-5 years. You can't have the same freedom in AI that you have in 3D software. You can create whatever you want in Daz Studio, but you can't do it in AI. Even if you manage to do it, it is still artificial intelligence, and one way or another you will get uncanny-valley dogsh*t results. Accuracy in AI is bad. Can it be better? Sure, maybe in 3-5 years. But as a Stable Diffusion user, I want to say that AI is definitely not going to beat 3D art and artists. There is no REAL freedom in AI. It serves me well in some cases though, like enhancing my renders, etc.

    Try Flux instead of SD, it's much more accurate to prompts and more realistic looking IMO

    I have been using DS for a long time, so this is how I look at it. There are many times I create a scene, light it, and render, only to feel like the image has no impact. So I try different poses, different lights, even different skins, and usually that doesn't help. So no matter what you control in DS, sometimes it just doesn't come out the way you would like. Same for AI: I can describe what I want and the chance it comes out exactly as prompted is slim, but there have been many times I got more than what I wanted and ran with that one. If you can get a great base image, you can really take it up a notch with AI inpainting.

    I mostly do NSFW so I skip Flux models. It can't do NSFW, nor is there a good model for it. I run AI locally via Stable Diffusion. It's Python based, so I download checkpoints and LoRAs from sites like tensor art etc. and then run them. SD and Flux work the same way: you get a Flux, SDXL or SD 1.5 model checkpoint from one of those sites and run it locally via SD. No [tiresomeness] like credits, payments, etc. as on those paywall sites.
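
    In code terms that local workflow is roughly the following; a sketch with the diffusers library, where the .safetensors file names are placeholders for whatever checkpoint and LoRA you downloaded (the same pattern applies to SD 1.5 or Flux pipelines):

        # Sketch of the "download a checkpoint and a LoRA, run it locally" workflow
        # using diffusers. File names are placeholders for whatever you grabbed.
        import torch
        from diffusers import StableDiffusionXLPipeline

        pipe = StableDiffusionXLPipeline.from_single_file(
            "downloaded_checkpoint.safetensors",       # the model checkpoint
            torch_dtype=torch.float16,
        ).to("cuda")
        pipe.load_lora_weights("downloaded_style_lora.safetensors")

        image = pipe(
            prompt="portrait of a woman in a rain-soaked neon alley",
            negative_prompt="blurry, extra fingers",
            num_inference_steps=30,
            guidance_scale=6.0,
        ).images[0]
        image.save("local_generation.png")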

    Post edited by Richard Haseltine on
  • FSMCDesigns Posts: 12,843

    Ryder said:

    FSMCDesigns said:

    Ryder said:

    As someone who is using Stable Diffusion, AI can't kill 3D artists and artworks. I keep reverting from SD back to Daz; I have been using Daz for 4-5 years. You can't have the same freedom in AI that you have in 3D software. You can create whatever you want in Daz Studio, but you can't do it in AI. Even if you manage to do it, it is still artificial intelligence, and one way or another you will get uncanny-valley dogsh*t results. Accuracy in AI is bad. Can it be better? Sure, maybe in 3-5 years. But as a Stable Diffusion user, I want to say that AI is definitely not going to beat 3D art and artists. There is no REAL freedom in AI. It serves me well in some cases though, like enhancing my renders, etc.

    Try Flux instead of SD, it's much more accurate to prompts and more realistic looking IMO

    I have been using DS for a long time, so this is how I look at it. There are many times I create a scene, light it, and render, only to feel like the image has no impact. So I try different poses, different lights, even different skins, and usually that doesn't help. So no matter what you control in DS, sometimes it just doesn't come out the way you would like. Same for AI: I can describe what I want and the chance it comes out exactly as prompted is slim, but there have been many times I got more than what I wanted and ran with that one. If you can get a great base image, you can really take it up a notch with AI inpainting.

    I mostly do NSFW so I skip Flux models. It can't do NSFW, nor is there a good model for it. I run AI locally via Stable Diffusion. It's Python based, so I download checkpoints and LoRAs from sites like tensor art etc. and then run them. SD and Flux work the same way: you get a Flux, SDXL or SD 1.5 model checkpoint from one of those sites and run it locally via SD. No [tiresomeness] like credits, payments, etc. as on those paywall sites.

    Flux does NSFW images just fine; it's mainly what I use it for. You need good LoRAs for specific NSFW, but there are plenty at CivitAI. I use Fooocus and Forge locally for both SDXL and Flux.

  • suffo85 Posts: 211

    I like Flux and SDXL/Pony. I think they each have their strengths and weaknesses, although that could be because I use a different UI for each, so they come with different sets of tools; the standard img2img and ControlNet are still in both. For art I still use 3D applications, mostly Daz and Blender, but I use Embergen and a handful of other things as well. Once in a while I might sit down and try to guide an AI to do something artistic that I am imagining in my head, but I can almost create a render in 3D space in the same amount of time it takes to prompt an AI, generate enough to refine, re-generate, postwork, etc. It all still takes time, but time during which you can often do something else while it's working away on whatever instructions you gave it. I have only on rare occasions gotten AI to give me something similar to what I was imagining from my first prompt or first refinement of a prompt; I'll be using a lot of img2img and reprompting.

    I haven't tried MidJourney but I've tinkered with almost all of the SD/Pony/Illustrious models and Flux.

    Also, I can confirm @FSMCDesigns' comment: Flux does do NSFW, even when you're not asking for it, if you've got some uncensored LoRAs going. But SD/XL will also make NSFW stuff when I don't ask for it; I usually have to specify sfw in the prompt or nsfw in the negative. There are times I've forgotten, come back to several hundred images that I let generate while off doing something, and half end up being NSFW, ugh, although it really depends on what models I'm using too. The straight-up SD models don't usually generate too much content like that, but once you start getting into CivitAI stuff you never know what you'll end up with, heh.

  • Ryder Posts: 24

    suffo85 said:

    I like Flux and SDXL/Pony. I think they each have their strengths and weaknesses, although that could be because I use a different UI for each, so they come with different sets of tools; the standard img2img and ControlNet are still in both. For art I still use 3D applications, mostly Daz and Blender, but I use Embergen and a handful of other things as well. Once in a while I might sit down and try to guide an AI to do something artistic that I am imagining in my head, but I can almost create a render in 3D space in the same amount of time it takes to prompt an AI, generate enough to refine, re-generate, postwork, etc. It all still takes time, but time during which you can often do something else while it's working away on whatever instructions you gave it. I have only on rare occasions gotten AI to give me something similar to what I was imagining from my first prompt or first refinement of a prompt; I'll be using a lot of img2img and reprompting.

    I haven't tried MidJourney but I've tinkered with almost all of the SD/Pony/Illustrious models and Flux.

    Also, I can confirm @FSMCDesigns' comment: Flux does do NSFW, even when you're not asking for it, if you've got some uncensored LoRAs going. But SD/XL will also make NSFW stuff when I don't ask for it; I usually have to specify sfw in the prompt or nsfw in the negative. There are times I've forgotten, come back to several hundred images that I let generate while off doing something, and half end up being NSFW, ugh, although it really depends on what models I'm using too. The straight-up SD models don't usually generate too much content like that, but once you start getting into CivitAI stuff you never know what you'll end up with, heh.

    Which Flux checkpoint model do you guys recommend?

  • juvesatriani Posts: 561

    Is AI killing the 3D stars?

    Yes and no, depending on which kind of 3D art we're talking about.

    This view is based on my own experience as a graphic designer/music creator who mostly gets jobs from agencies or online freelance sites, and also as the guy who handles postwork at a small 3D archviz firm.

    So it's possible that the experiences and views I'm talking about here differ from yours.

    Any type of 3D work that needs super-precise output matching a client's brief will keep going. Meanwhile (ironically), any type of 3D work based on imagination and creativity is dropping off immensely. And this isn't happening only in 3D but also in 2D. We could argue the same about music production.

    Still hard for AI to kill:

    • 3D rendering for architecture or site project visualization
    • 3D animation for product promotions, especially from established brands
    • 3D animation for product explainers or onboarding processes
    • 3D rigging and texture artists. Not for everyone, but if a job ever comes to them it means the project needs specific treatment.

    Slowly going downhill:

    • 3D character designers or concept artists, who ironically can create awesome output from nothing
    • 3D background artists
    • 3D shader and material coders, who in real life are math and physics experts (ChatGPT and other LLMs can easily iterate on your sample code and spew out hundreds of variations, even in CLI mode)
    • ... (you can add as many as you wish)


    But personally I think there is still hope!

    If we look carefully at how the business behind AI has developed, someday it will come down to two options: start charging users money, or stop because they don't see the profit.

    And as with the "open source" solutions already out there, those still need to be backed by serious money to build machines in your bedroom. Someday the day will come when you need to justify buying a high-performance GPU (because the last one just fried), or keep paying for tokens to generate images.

    Unless those people in real life are designers, motion artists, or DIY dreamers and crazy fans like us!

    So my two cents is: keep doing whatever you're doing. Every bit of knowledge matters, whether it comes from long talks with our inner soul or from any type of machine.

  • JoeQuick Posts: 1,729
    edited March 30

    GPT is finally rolling out some of the 4o (Omni) features they showcased nearly a year ago. Maybe you've seen the wave of Ghibli-style generations (and the backlash)?

    I'd always wanted the sand castle in my teddy bear / shark morph combo promo to be crumbling... but at the time, it wasn’t worth the effort. Building a more elaborate sandcastle just for a promo didn’t make sense. I had tried using Stable Diffusion and Adobe to achieve the effect, but the results felt too out of place.

    I don’t think I ever revisited it with MidJourney after they introduced editing for existing images. Maybe it could still handle it better?

    But I did run it through 4o today and asked it to simply make the castle crumble. Instead of modifying the original, it had to regenerate the entire image, and it struggled to apply the change to just one section. Still, I like what it came up with.

    Original on the left, GPT on the right.

    It reminds me of my Transformers figures. I’ve got all these modern remakes of the ones I had in the ’80s. They look exactly how I remember them—but side by side, it’s clear they’re different, better in almost every way.

    Except for the sandcastle. If you'd shown me the GPT version alone, I probably would have mistaken it for mine, not realizing just how much better it actually is until I saw them side by side.

    It looks the way I thought my original did in my imagination.

    Don't know what to make of that.

    Still not sure about that belly button and those nipples though.

    4omni_example.jpg
    2000 x 1536 - 2M
    Post edited by JoeQuick on
  • Alerted by Curious Refuge, yesterday for the first time I successfully uploaded a crude image to ChatGPT 4o, requested a realistic image, and actually got something that was true to the original image, just better. And then it kept consistency while I iteratively corrected everything that was wrong.

    I think the days of materials and brute-force raytracing might soon be over; AI is finally actually listening to us :)

     

  • JoeQuick Posts: 1,729
    edited March 30


    This one wasn’t built at all in Daz or using 3D.

    It was shaped iteratively entirely through conversation with a machine—an approach far smoother than the usual AI approach of stringing together keywords and endlessly rolling, rerolling, and inpainting in the desperate hope of landing on something usable.

    Spare my immortal soul, but I did ask for a Ghibli version too.

    The first set of images tracks the evolution of the idea: from a full chef’s uniform to a grumpy, grizzled Roman in a shirt, then to a happier yet still battle-worn version. The final iteration—Titus in armor, holding his infamous pie—felt like the strongest realization of the concept.

    There are a few other variations. The middle image leans into the Julie Taymor film idea that our leaders are just children, treating soldiers like toys—an unsettling, layered commentary. The left is more pared down, more graphic, something bold and theatrical.

    Then there's the Ghibli one on the right, Miyazaki forgive me.

    tituses.jpg
    4096 x 1536 - 7M
    moretituses.jpg
    3072 x 1536 - 4M
    Post edited by JoeQuick on
  • The examples above demonstrate what I have been thinking for a while now. AI is going to be a great tool. If you can use DAZ Studio to get reasonably close to what you wanted, a conversation with the AI can get you from that point to perfection.

  • SnowSultan Posts: 3,773

    JoeQuick, can you explain a little about what is required to do what you did there? I have never installed Chat GPT, so I don't know if you need a paid version for image generation or how it works. Any information or links would be appreciated. Thanks very much.

  • JoeQuick Posts: 1,729
    ChatGPT is an online thing. I think you need to be a subscriber for image generation access. I believe they were letting everyone use it when this new feature debuted a couple of days ago, but they then reined access in. Altman complained that his servers were melting.
  • nonesuch00 Posts: 18,714
    edited March 30

    I have finally installed ComfyUI, as another DAZ forum user (I think artini) recommended. I have 128GB RAM, an AMD 5700G (which is not even fast enough to consider using for a locally installed LLM with a high number of parameters), and a PNY GeForce RTX 4070 12GB, which is fast enough to run a local LLM, SD3.5, SDXL, Pony, and Flux; you choose which at runtime. Unless your system OS storage SSD is 4TB or more, I recommend sticking to one of the SD3.5, SDXL, Pony, or Flux diffusion models. I'm still learning, but I realized that if I wanted to learn to do edits and get better control of what the diffusion process was creating, I needed to install and do this locally, as running up charges on online servers can get quite expensive once you get into learning and using more complex series of edits and checkpoints. LOL, the bill was making me uneasy.

    Here are the tutorials: ComfyUI Examples | ComfyUI_examples

    Here is the github: comfyanonymous/ComfyUI: The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.

    Here is a Windows installer Desktop: https://www.comfy.org/download

    A Windows installer portable:  https://github.com/comfyanonymous/ComfyUI#installing

    Manual install for Windows/Linux (it's NVIDIA CUDA/Tensor libraries and Python based for the most part; you can configure it to use the CPU though): https://github.com/comfyanonymous/ComfyUI#manual-install-windows-linux


    Post edited by nonesuch00 on
  • Matt_Castle Posts: 3,003
    edited April 21

    I don't think I've put any of my experiments with local AI generation in here yet. I think it's interesting to see what the technology is capable of (and what its limitations are), but I'm generally not impressed enough with outputs to add to the torrent of stuff already on the internet.

    This one was a bit more interesting than most, though. I originally did this spoof of Art Frahm's (1906-1981) "Damsels in Distress" series solely as a 3D render, but I came back to it later, figuring it would be interesting to see if it was possible to run it through Stable Diffusion to turn it into something closer to Frahm's original style. (Actually, I originally did it to spoof a specific painting and it had a plain background to match that, but I revisited it with a more developed scene to be a better experiment).

    And... well, despite using some LORA models supposedly trained for the job of emulating Frahm, I don't think I can honestly say it's got his specific style, which favoured bold and almost blocky contrasts between light and shadow, with plenty of saturation and often visible brushstrokes - no element of which is really here. It's clearly somewhere in the general vein of many 1950s paintings, but it's not quite Frahm. Maybe some of Norman Rockwell's stuff.

    In any case, the experiment offers some insight into the limitations that exist within AI image generation; aside from not quite getting the desired style, I had to lean on it very hard with canvas inputs to get it to preserve some elements of the composition that were outside its normal dataset.



    Gallery Link

    EDIT: Fixing broken gallery image links with attachments

    RinaFrahmB30f_MC.jpg
    2400 x 2400 - 2M
    RinaFrahmB30d_SD_00019-3289176247_MC.jpg
    2400 x 2400 - 1M
    Post edited by Matt_Castle on
  • SnowSultan Posts: 3,773

    Matt_Castle, I actually think that did an amazing job. It did change her glasses, but I personally think almost everything else is superior to the 3D version. Great job.

  • Matt_Castle Posts: 3,003
    edited March 30

    I didn't put an overwhelming amount of effort into the 3D version; it was originally done for a weekly challenge thread on the Slackers Discord (although I forget what the theme was), so it was a fairly quick render in the first place and I only revisited it relatively briefly to put in more of a scene around it.

    The thing though is that the output, while it manages to be along the lines of a 1950s pin-up, misses what I actually asked for. If you look at the specific image I was originally spoofing, Frahm's style is more painterly, and the contrasts in lighting are much more pronounced than what Stable Diffusion actually offered in return.

    (I will note that no, there are no glasses in the original painting, but they ended up in there because that's the character of mine I used for the render).

    And as I had two different "Art Frahm" LORAs plugged in, it should have had at least some clue.

    Now, part of the lighting will be because the AI was fed a slightly different version of the render, but that was itself down to its own failings; the version of the scene I fed into the AI had to have the strong lighting from the left removed, because it could NOT get it into its head to paint a clear shadow for the centaur that was actually the right shape and in the right place. If I dropped the denoise strength to try to force it to keep it, it still couldn't get it to work and it started to look too much like a render or photograph. So I had to settle for more wishy-washy lighting that at least had about the right drop shadow.

    And even with the aid of a normal map and depth map out of DS, I couldn't push the denoise any higher without it starting to try to completely reinvent areas. The idea of doing the scene purely with AI without a starting point to wrestle it into place would at present be unthinkable. While it's not hard for a human to imagine that scene (I mean, I did), the idea of a horse having its underwear around its ankles is not in the data set and just too alien for the AI to actually know how to do by itself.
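
    For anyone following along, the kind of pass being described here is roughly an img2img run guided by a depth ControlNet, with the denoise strength deciding how far the style may drift from the render. A sketch with diffusers; the depth ControlNet ID is the common public one, while the base checkpoint and file names are placeholders:

        # Sketch of a depth-guided img2img restyling pass. The base checkpoint ID
        # and file names are placeholders; the ControlNet is the public depth model.
        import torch
        from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
        from PIL import Image

        controlnet = ControlNetModel.from_pretrained(
            "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
        )
        pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
            "your-favourite-sd15-checkpoint",      # placeholder base model
            controlnet=controlnet,
            torch_dtype=torch.float16,
        ).to("cuda")

        render = Image.open("daz_render.png").convert("RGB")
        depth = Image.open("daz_depth_canvas.png").convert("RGB")  # depth canvas from DS

        styled = pipe(
            prompt="1950s pin-up illustration, painterly, bold light and shadow",
            image=render,              # the render being restyled
            control_image=depth,       # keeps the composition pinned in place
            strength=0.55,             # higher = more style, more reinvention
            num_inference_steps=30,
        ).images[0]
        styled.save("styled_pass.png")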

    art frahm. 019.jpg
    957 x 1320 - 88K
    Post edited by Matt_Castle on
  • COMIXIANT Posts: 260
    edited March 30

    I just wanted to thank Wendy for making me aware of the term "Gunne Sax Dress".  Never heard the term before, but I just searched it and saw the image results.  I love to see women in those frilly, feminine dresses and I really wish they were in fashion!

    And that stuff you said in another post about wanting to do a LoRA of your mother, well, I couldn't help but think of this new tech, which, scary as it is, seems like it would be capable of bringing her back to life. In fact, the first thing that crossed my mind when I saw how good it was, was to do that with my father, nana, and dog.

    It's scary good, but absolutely amazing tech, especially for the sort of thing you're talking about.  And since it can do all of that from a single photo, just imagine the accuracy if it were fed a whole bunch of photos that were shot from different angles.  I reckon it would be indistinguishable since it kind of already is!

    Post edited by COMIXIANT on
  • SnowSultan Posts: 3,773
    edited March 30

    The thing though is that the output, while it manages to be along the lines of a 1950s pin-up, misses what I actually asked for. If you look at the specific image I was originally spoofing, Frahm's style is more painterly, and the contrasts in lighting are much more pronounced than what Stable Diffusion actually offered in return.

     

    That's what postwork is for. It also might be worth running the result through another painterly Lora at a low value with Controlnets set to preserve as much of the original as possible.

    When AI is used with artistic creativity and some manual work, it's a lot of fun.   :)
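
    That low-value LoRA pass might look something like the following; a sketch with diffusers, where the checkpoint, LoRA, and image names are placeholders, the denoise strength is kept low to preserve the original, and the LoRA scale is dialed well below full strength:

        # Sketch of a low-strength painterly LoRA pass over an existing result.
        # Checkpoint, LoRA, and image file names are placeholders.
        import torch
        from diffusers import StableDiffusionXLImg2ImgPipeline
        from PIL import Image

        pipe = StableDiffusionXLImg2ImgPipeline.from_single_file(
            "sdxl_checkpoint.safetensors", torch_dtype=torch.float16
        ).to("cuda")
        pipe.load_lora_weights("painterly_style_lora.safetensors")

        result = pipe(
            prompt="painterly 1950s pin-up illustration, bold light and shadow",
            image=Image.open("previous_result.png").convert("RGB"),
            strength=0.3,                           # low denoise: keep most of the image
            cross_attention_kwargs={"scale": 0.4},  # LoRA well below full strength
        ).images[0]
        result.save("painterly_pass.png")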

    Post edited by SnowSultan on
  • nonesuch00 Posts: 18,714

    Matt_Castle said:

    I don't think I've put any of my experiments with local AI generation in here yet. I think it's interesting to see what the technology is capable of (and what its limitations are), but I'm generally not impressed enough with outputs to add to the torrent of stuff already on the internet.

    This one was a bit more interesting than most, though. I originally did this spoof of Art Frahm's (1906-1981) "Damsels in Distress" series solely as a 3D render, but I came back to it later, figuring it would be interesting to see if it was possible to run it through Stable Diffusion to turn it into something closer to Frahm's original style. (Actually, I originally did it to spoof a specific painting and it had a plain background to match that, but I revisited it with a more developed scene to be a better experiment).

    And... well, despite using some LORA models supposedly trained for the job of emulating Frahm, I don't think I can honestly say it's got his specific style, which favoured bold and almost blocky contrasts between light and shadow, with plenty of saturation and often visible brushstrokes - no element of which is really here. It's clearly somewhere in the general vein of many 1950s paintings, but it's not quite Frahm. Maybe some of Norman Rockwell's stuff.

    In any case, the experiment offers some insight into the limitations that exist within AI image generation; aside from not quite getting the desired style, I had to lean on it very hard with canvas inputs to get it to preserve some elements of the composition that were outside its normal dataset.



    Gallery Link

    I think the AI took the technically "cold" accuracy of Iray and the physical world, and then warmed it up with a human touch, which is very noticeable.

  • Matt_Castle Posts: 3,003

    SnowSultan said:

    That's what postwork is for. It also might be worth running the result through another painterly Lora at a low value with Controlnets set to preserve as much of the original as possible.

    There's nothing to postwork; I can throw on contrast layers all I like, but it's not going to make a painterly style appear that isn't there, nor is it going to put back shadows the AI actually removed.

    I took many attempts at this, including starting from different versions of the original render with different lighting. This is already a composite of the best parts of several attempts, and indeed a few parts I painted in personally.

    At this point, if I have to do any more manual work on it, I might as well be painting most of the thing myself!

    nonesuch00 said:

    I think the AI took the technically "cold" accuracy of Iray and the physical world, and then warmed it up with a human touch, which is very noticeable.

    I don't hate the output (which is why I shared it at all), but it is not what I asked for.

    Despite attempts with many different models - both checkpoints and LORAs - the actual technical style is not Frahm. The subject matter might be, but that's entirely something it's got from what I fed in.

  • nonesuch00 Posts: 18,714

    Matt_Castle said:

    SnowSultan said:

    That's what postwork is for. It also might be worth running the result through another painterly Lora at a low value with Controlnets set to preserve as much of the original as possible.

    There's nothing to postwork; I can throw on contrast layers all I like, but it's not going to make a painterly style appear that isn't there, nor is it going to put back shadows the AI actually removed.

    I took many attempts at this, including starting from different versions of the original render with different lighting. This is already a composite of the best parts of several attempts, and indeed a few parts I painted in personally.

    At this point, if I have to do any more manual work on it, I might as well be painting most of the thing myself!

    nonesuch00 said:

    I think the AI took the technically "cold" accuracy of Iray and the physical world, and then warmed it up with a human touch, which is very noticeable.

    I don't hate the output (which is why I shared it at all), but it is not what I asked for.

    Despite attempts with many different models - both checkpoints and LORAs - the actual technical style is not Frahm. The subject matter might be, but that's entirely something it's got from what I fed in.

    The "artistic style" on the AI revamping seems like a generic statiistically averaged amalgam of 50s magazine advertisement illustrators, but of no one artist iin particular. The subject matter is common old running joke of loosing underwear that's too big for the wearer - which was done by more than Art Frahm. I remember when I was a busboy the hostess at Holiday Inn where I worked lost her slip taking the customers to be seated in from of thhe whole diningroom full of customers. It was pretty funny.

  • NylonGirl Posts: 2,202

    SnowSultan said:

    Sorry, I should have said I was talking about Stable Diffusion and the public models. Midjourney's models have their own style by default, and I used to use it (and liked it, until it became too expensive and censored to get anything I wanted). Thank you for the examples - it still has trouble making a nice round afro though? Haha, so does every other model I've used except some wacky toony Japanese LoRA.   ;)

    There does seem to be an "AI afro". It looks nice but different from most afros in reality. I recently heard on a podcast that people were showing their desired hairstyles to their hair stylists, and the stylists were unable to do their hair like the pictures, because it turned out the pictures were generated by AI and had hairstyles that couldn't exist in reality. I guess it would be like showing a clothing designer certain DAZ outfits and asking them to make a real version of the outfit.

  • WendyLuvsCatz Posts: 40,039

    I read an interesting article that honestly left me scratching my head

    https://www.msn.com/en-au/news/techandscience/ai-dolls-are-taking-over-but-real-artists-are-sick-of-them/ar-AA1Dcxxa?ocid=socialshare&cvid=74627c3e8b3c4686e9b26fb15b2139c2&ei=7

    I prompted one of those Starter kits myself in ChatGPT4 a few days ago

    then

    I loaded Carrara, used a Blister pack made by our own Stezza and rendered a 3D version, exported the blister pack as obj and rendered one in DAZ studio too

    I am still puzzled about that article

  • Squishy Posts: 707

    Matt_Castle said:

    I don't hate the output (which is why I shared it at all), but it is not what I asked for.

    Despite attempts with many different models - both checkpoints and LORAs - the actual technical style is not Frahm. The subject matter might be, but that's entirely something it's got from what I fed in.

    This is the thing that sucks and will forever suck about generative AI: it won't ever be what you ask for. You will run it 50 times and pick the least bad compromise.

  • NylonGirl Posts: 2,202

    WendyLuvsCatz said:

    I read an interesting article that honestly left me scratching my head

    https://www.msn.com/en-au/news/techandscience/ai-dolls-are-taking-over-but-real-artists-are-sick-of-them/ar-AA1Dcxxa?ocid=socialshare&cvid=74627c3e8b3c4686e9b26fb15b2139c2&ei=7

    I prompted one of those Starter kits myself in ChatGPT4 a few days ago

    then

    I loaded Carrara, used a Blister pack made by our own Stezza and rendered a 3D version, exported the blister pack as obj and rendered one in DAZ studio too

    I am still puzzled about that article

    My brother was doing some of that earlier this week. When he posted the pictures of himself, I told everybody it was a rare GI Joe action figure, third most collectible behind Baroness and Destro. Or something like that.

  • FSMCDesigns Posts: 12,843

    Squishy said:

    Matt_Castle said:

    I don't hate the output (which is why I shared it at all), but it is not what I asked for.

    Despite attempts with many different models - both checkpoints and LORAs - the actual technical style is not Frahm. The subject matter might be, but that's entirely something it's got from what I fed in.

    This is the thing that sucks and will forever suck about generative AI: it won't ever be what you ask for. You will run it 50 times and pick the least bad compromise.

    That is where AI inpainting comes in. There is more to AI than just generating images.

This discussion has been closed.