AI as render engine

Why can't we use diffusion as a render engine/ControlNet? It would open up so many possibilities, even for old content.

Comments

  • WendyLuvsCatz Posts: 40,239

    nobody is stopping you

    I often run Filament or OpenGL renders through Stable Diffusion Image2Image

  • Richard Haseltine Posts: 108,493

    What do you mean? Rendering and generative AI are doing very different things, even if both produce an image. Are you wanting a way to use the current state of the Viewport for an image-to-image generation without having to save and import the image into the AI tool?

  • colcurve Posts: 171

    @richard: no, I do not think so. What you design in a scene is input to a render engine and/or to a ControlNet. Basically, a 3D scene is the ControlNet for a "render" engine, be it Iray or "Stable Diffusion, Flux", or any other magic.

    When I browse the store nowadays, I take a specific look at vintage items, because I think they have a good idea but might not have the latest shaders or whatever. Maybe the textures are not modern, but as a depth map/outline render they could be excellent... as ControlNet input.

    I think if Daz offered tools to use old content as ControlNet input... it would raise the value of large parts of the store by 100x.

  • Cybersox Posts: 9,307

    A thing to keep in mind is that, if things continue the way they seem to be going, art created by AI programs will be far less protected by copyright, if at all. Beyond that, I've tried a number of the various programs, and most of them still have a lot of issues with basic anatomy. That said, I recently bought Gigapixel to enlarge some old images, and I've been experimenting with using its AI function on some of my newer images with the enhancement/extrapolation levels set very low, then using the resulting image to augment parts of the original render. Where it really excels is in cleaning up digital hair and creating realistic... if often way over the top... wrinkles on cloth or on the skin at joints like the underarms. Unfortunately, where it's still incredibly weak is on things like hands, feet, and smaller skin details like nipples and navels; small details like text tend to become gibberish; and while the AI will refine some elements of a facial structure, it also tends to make the characters look significantly different from the original model. In other words, it's still a long way away from being something that I'd be ready to trust as the central point of control for creating anything I was going to put my name on.

  • windli3356 Posts: 290

    They are nothing alike; text-to-image (or image-to-image style change) and 3D-to-Iray rendering are fundamentally different fields. While AI-generated images offer some charm, the lack of fine control, the unpredictable generation process, and moral limitations severely limit what creators can and cannot do.

  • Richard Haseltine Posts: 108,493

    colcurve said:

    @richard: no, I do not think so. What you design in a scene is input to a render engine and/or to a ControlNet. Basically, a 3D scene is the ControlNet for a "render" engine, be it Iray or "Stable Diffusion, Flux", or any other magic.

    No, the processes are completely different - when rendering you place elements in the scene with properties, and those completely determine what the render engine produces. A generative system produces a cloud of noise and refines it using statistical associations from the prompts, influenced by any ControlNet image etc. - the process is still deterministic, but one of the determining factors is a pseudo-random seed value used to generate the initial noise cloud.
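
    (To illustrate the seeded-noise process described above: a minimal sketch using the Hugging Face diffusers library. The model id, prompt and settings are placeholders, not anything Daz-specific.)

    ```python
    # Minimal sketch: the "cloud of noise" is a seeded random latent that the sampler
    # gradually refines toward the prompt. Model id and settings are placeholders.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The pseudo-random seed fully determines the initial noise cloud...
    generator = torch.Generator(device="cuda").manual_seed(1234)

    # ...which is then denoised step by step, guided by the prompt (and any ControlNet).
    image = pipe(
        "portrait in a rain-soaked alley, cinematic lighting",
        generator=generator,
        num_inference_steps=30,
    ).images[0]
    image.save("seeded_generation.png")
    ```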

    When I browse the store nowadays, I take a specific look at vintage items, because I think they have a good idea but might not have the latest shaders or whatever. Maybe the textures are not modern, but as a depth map/outline render they could be excellent... as ControlNet input.

    I think if Daz offered tools to use old content as ControlNet input... it would raise the value of large parts of the store by 100x.

  • NetherFalcon Posts: 885
    edited March 2025

    Unsure what OP really wants with Daz.  The act of inputting your prompt becomes the rendering.  Daz Studio isn't for those who really like to dabble in AI.  Yes, there's an AI version, but it doesn't hold a candle to the base product and what can be done with human artistic skill.  And yes, AI can also be used to enhance images, but that's a different beast from doing entire image generation.  At that point, you've already spent time on your rendering in Daz Studio.  And we have Photoshop for that kind of thing, also with Daz support.

    I much prefer Daz Studio, as the ability to fully control what I'm doing is paramount to the finished product.  That control, human eye, experience and drive also save an immense amount of time and frustration when I'm tweaking things to exactly how I envision them.  But these things come from working on improving one's abilities at something, and sadly, many don't like that and look for quick fixes instead of spending time to develop said skill (not saying you, OP, just in general; it's way too common today).

    Post edited by NetherFalcon on
  • Nyghtfall3D Posts: 813
    edited March 2025

    NetherFalcon said:

    Unsure what OP really wants with Daz.

    They want an AI render engine to piggyback off of Iray and 3DL.  The workflow would be something like: Set up scene > Render with Iray or 3DL > Enhance with AI.  All from within Daz.  No 3rd-party apps or AI models needed.

    Post edited by Nyghtfall3D on
  • Gordig Posts: 10,614

    Nyghtfall3D said:

    NetherFalcon said:

    Unsure what OP really wants with Daz.

    They want an AI render engine to piggyback off of Iray and 3DL.  The workflow would be something like: Set up scene > Render with Iray or 3DL > Enhance with AI.  All from within Daz.  No 3rd-party apps or AI models needed.

    That's not how I read the OP. I got the impression that they wanted to set up their scene inside DS, then output it directly to an AI (which, as Richard pointed out, isn't really how this works).

  • WendyLuvsCatz Posts: 40,239
    edited March 2025

    yeah, Iray is path tracing, bouncing rays off 3D coordinates

    a mathematical, geometry-based process

    Diffusers and AI are something quite different, a model trained on a dataset of images

    you can use depth maps and images rendered in DAZ in ControlNet, which I do, and that is what the thread I linked is all about

    someone could make a plugin, but the Photoshop bridge already exists if you want to go down that path

    no need for a plugin though, as I often run Fooocus and DAZ together myself, just rendering the image then loading it into Fooocus

    using OpenGL to do my prompt image in DAZ keeps the VRAM available and is sufficient for canny guidance
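
    (A quick sketch of the kind of canny pre-processing this workflow relies on, using OpenCV; the file names and thresholds are just typical starting values, not specific settings from this post.)

    ```python
    # Sketch: turn a fast OpenGL/Filament render from Daz into a canny edge map
    # suitable as a ControlNet guidance image. File names and thresholds are illustrative.
    import cv2

    render = cv2.imread("daz_opengl_render.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(render, 100, 200)   # low/high hysteresis thresholds
    cv2.imwrite("canny_control_image.png", edges)
    ```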

    Post edited by WendyLuvsCatz on
  • Nyghtfall3D Posts: 813

    Gordig said:

    That's not how I read the OP. I got the impression that they wanted to set up their scene inside DS, then output it directly to an AI (which, as Richard pointed out, isn't really how this works).

    Ah, well, in that case, I stand corrected.

  • colcurve Posts: 171

    Richard Haseltine said:

    colcurve said:

    @richard: no, I do not think so. What you design in a scene is input to a render engine and/or to a ControlNet. Basically, a 3D scene is the ControlNet for a "render" engine, be it Iray or "Stable Diffusion, Flux", or any other magic.

    No, the processes are completely different - when rendering you place elements in the scene with properties, and those completely determine what the render engine produces. A generative system produces a cloud of noise and refines it using statistical associations from the prompts, influenced by any ControlNet image etc. - the process is still deterministic, but one of the determining factors is a pseudo-random seed value used to generate the initial noise cloud.

    This is not a contradiction. A render produces an image, and that image can be input to any step in the overall pipeline; Daz already offers an AI denoise post-filter for Iray as well.

    How much render and how much AI post-processing should be left up to the user.

    AI has the problem that you do not get good control through the prompt; that's a spin of the wheel, not precision of any sort.

    DAZ has infinite control but as such can take forever to tweak things or set them up.

     

    Using a Daz scene as a depth map can bring interesting outcomes. However, clicking through to configure canvases and converting the EXR to an image was a lot of work, so I bought the depth map maker from the store; but this tool often crashes for me, and it does not support limiting the scene to a certain depth, so you can have invisible/far-away things in your scene that ruin the depth map because the value range gets overstretched.
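
    (One workaround for that overstretched range, sketched below with OpenCV/NumPy: clamp the depth pass to a chosen far distance before normalising it. The file names and the far-plane value are only examples, not part of the store product.)

    ```python
    # Sketch: clamp a Daz depth pass (EXR) to a maximum distance so distant or invisible
    # geometry no longer compresses the useful range. Assumes OpenCV was built with
    # OpenEXR support; file names, units and the far-plane value are placeholders.
    import cv2
    import numpy as np

    depth = cv2.imread("depth_canvas.exr", cv2.IMREAD_UNCHANGED)
    if depth.ndim == 3:
        depth = depth[:, :, 0]                        # keep a single channel

    far_plane = 10.0                                  # ignore anything beyond ~10 scene units
    depth = np.clip(depth, 0.0, far_plane)

    # Normalise and invert so near = bright, which is what most depth ControlNets expect.
    depth_8bit = (255 * (1.0 - depth / far_plane)).astype(np.uint8)
    cv2.imwrite("depth_control_image.png", depth_8bit)
    ```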

    But you can also use e.g. a partially finished render as ControlNet input; some ControlNets have an "anything" approach and accept just about any image. Or you can denoise (as you said) with low/medium intensity.

    Doing so, it does not matter (so much) whether the things in your scene are Iray- or 3Delight-shaded.

    I understand diffusion as a post-render step. 

    Ideally, we would have a DAZ node for ComfyUI. It would read image data from the DAZ application (render, depth pass, a specific material as a mask) and make it accessible to Comfy, where you could consume it further. E.g., if DAZ could provide 1) a render and 2) a mask image from the same camera as the render, one could use Comfy to modify the masked part of the image with something the render engine has a difficult time with (e.g. "add a background crowd", "three creepy zombies looking at xyz", or anything else).
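
    (Purely for illustration, a minimal sketch of what such a node could look like under ComfyUI's usual custom-node conventions; the node name, the file paths and the idea that Daz exports its passes to disk are all hypothetical, not an existing Daz API.)

    ```python
    # Hypothetical ComfyUI custom node that picks up images a Daz export step has written
    # to disk (beauty render, depth pass, material mask) and exposes them as IMAGE outputs.
    import numpy as np
    import torch
    from PIL import Image


    def _load_as_comfy_image(path):
        """Load a file as the [batch, height, width, channel] float tensor ComfyUI expects."""
        img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
        return torch.from_numpy(img)[None, ...]


    class DazPassLoader:
        @classmethod
        def INPUT_TYPES(cls):
            return {"required": {
                "render_path": ("STRING", {"default": "C:/DazExports/render.png"}),
                "depth_path": ("STRING", {"default": "C:/DazExports/depth.png"}),
                "mask_path": ("STRING", {"default": "C:/DazExports/mask.png"}),
            }}

        RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE")
        RETURN_NAMES = ("render", "depth", "mask")
        FUNCTION = "load"
        CATEGORY = "daz"

        def load(self, render_path, depth_path, mask_path):
            return (_load_as_comfy_image(render_path),
                    _load_as_comfy_image(depth_path),
                    _load_as_comfy_image(mask_path))


    NODE_CLASS_MAPPINGS = {"DazPassLoader": DazPassLoader}
    ```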

    I think using a 3D scene with AI-made specific elements/post-processing is just the best way to use these things.

    The good thing about old DAZ products is that they are pretty lightweight: fewer faces, a highly responsive viewport, ...

    DAZ has a unique position as a simplified enabler of 3D, in contrast to Blender, where you need a PhD to use it. I think ignoring the mixed render/AI use cases is a bad decision, as it misses a rather large business opportunity that does not take vast resources to pursue; the Blender/... community will develop the pipeline anyway, and it will end up competing with the DAZ ecosystem, which would be the greatest waste.

    If some PA/DAZ developer here reads this: feel free to make contact...

     

  • Masterstroke Posts: 2,328

    ...like an AI-powered post-render realism filter?

  • Richard Haseltine Posts: 108,493

    Use a script to render and save (or save the last draw) and then launch the stand-alone AI with the saved image as input and whatever other parameters were required.

  • Richard Haseltine Posts: 108,493
    edited March 2025

    Richard Haseltine said:

    Use a script to render and save (or save the last draw) and then launch the stand-alone AI with the saved image as input and whatever other parameters were required.

    See http://docs.daz3d.com/doku.php/public/software/dazstudio/4/referenceguide/scripting/api_reference/samples/rendering/render_post_process/start for a starting point using a render
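
    (A rough sketch of the "launch the stand-alone AI with the saved image" half of that idea, assuming a locally running AUTOMATIC1111-style Stable Diffusion web UI with its HTTP API enabled; the paths, prompt and settings are placeholders, and the render/save step itself would come from the scripting sample linked above.)

    ```python
    # Sketch: hand a render that Daz Studio has already saved to a locally running
    # Stable Diffusion web UI (AUTOMATIC1111-style) via its img2img endpoint.
    # Paths, prompt and strength are placeholders.
    import base64
    import requests

    with open("C:/DazExports/last_render.png", "rb") as f:
        init_image = base64.b64encode(f.read()).decode("utf-8")

    payload = {
        "init_images": [init_image],
        "prompt": "photorealistic, natural skin, soft studio lighting",
        "denoising_strength": 0.3,   # keep low so the Daz composition survives
        "steps": 25,
    }

    response = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
    result_b64 = response.json()["images"][0]

    with open("C:/DazExports/last_render_ai.png", "wb") as f:
        f.write(base64.b64decode(result_b64))
    ```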

    Post edited by Richard Haseltine on
  • You can just download the C++ SDK and create one yourself.

  • Cam Fox Posts: 306
    edited March 2025

    AI image generation tech such as Stable Diffusion does not simulate light bounces like Iray, or have a 3D reference of the scene. So it's more like post-processing on an already-rendered image.

    You can give Stable Diffusion extra information about 3D space using ControlNets. That can help reduce the divergence from your original input image as the AI reimagines it.

    For example, you can use a normal map to indicate surface directions, a depth map to show each object's distance from the camera, canny or lineart to preserve detailed areas, and tile to maintain the overall composition. Other tools can do object or body-part recognition, like automatically detecting and detailing eyes, or style transfer to completely change the look and feel.

    Daz Studio can already render some of the maps useful as input for ControlNet (depth and normal in particular). So somebody could definitely make a plugin to help smooth the workflow. Some future iteration of Daz Studio could also push a scene into the cloud, render it in Iray or Filament, and help apply some AI post-processing without the user needing a powerful PC or fiddling with a complicated Stable Diffusion setup, i.e. there's some opportunity to make tools like Stable Diffusion more accessible to 3D artists.
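
    (A condensed sketch of that kind of depth-guided image-to-image pass, using the Hugging Face diffusers library; the model ids, file names and strengths are only typical examples, not a recommendation from this post.)

    ```python
    # Sketch: depth-guided img2img over a finished Daz render with Hugging Face diffusers.
    # Any SD 1.5 checkpoint plus a depth ControlNet would do; everything named here is illustrative.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    render = load_image("daz_render.png")       # the finished Iray/Filament render
    depth = load_image("daz_depth_pass.png")    # depth map from the same camera

    result = pipe(
        prompt="photorealistic portrait, natural light",
        image=render,              # what gets re-imagined
        control_image=depth,       # keeps the 3D structure in place
        strength=0.35,             # low strength = stay close to the original render
        num_inference_steps=30,
    ).images[0]
    result.save("daz_render_stylized.png")
    ```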

    Post edited by Cam Fox on
  • eeyun Posts: 45

    I got DepthMap Maker for the same reasons - but it's worth noting that depth ControlNets are not trained on actual depth images (there just aren't the massive quantities required), but on synthesized ones. So in many cases you may be better off having the ControlNet preprocessor work from your regular viewport image/render and synthesize depth data similar to what it was trained on, rather than giving it actual depth data that may contain artifacts it was not trained on.
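
    (In practice that means estimating depth from the render itself; here is a tiny sketch using the transformers depth-estimation pipeline, where the model id is just one common choice and the file names are placeholders.)

    ```python
    # Sketch: instead of exporting true depth from Daz, estimate MiDaS/DPT-style depth from
    # the ordinary render so it resembles the data the depth ControlNet was trained on.
    from transformers import pipeline

    depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
    result = depth_estimator("daz_viewport_render.png")
    result["depth"].save("estimated_depth_control_image.png")  # PIL image, near = bright
    ```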

    It does seem way past time, though, for a utility that would output an OpenPose model, like posemy-dot-art.

  • Paradigm Posts: 425
    AI just keeps getting better and better. AI Studio flopped. I think a world where AI and Daz co-exist can be a thing, but Daz needs to innovate. Using text-to-pose would decrease the time it takes to do poses and would finally address the bloat that is buying pose pack after pose pack for every figure. Character consistency and multiple-character scenes remain very difficult to do with AI, and Daz would work very well as an intermediary. I just set up ComfyUI and the barrier to entry was much lower than learning Daz for the first time, and I think that will need to be addressed sooner rather than later. It's been the same faces here for over a decade, and while that's cool, a company isn't going to survive this new reality without accepting the direction the space has moved. I say this because I enjoy Daz and want to see it stay modern.
  • NylonGirl Posts: 2,243

    I believe AI has caused me to use DAZ Studio more often. I think the people who despise AI are mainly thinking about the kind where you start with nothing, type a description of what you want, and then the AI cobbles something together from everything it’s ripped off of the internet. But the kind I use takes a picture I already have, and modifies it to my specifications.

    My biggest problem with DAZ Studio is the lack of realism, and the rate of improvement seems to have hit a plateau. Now, I can create my DAZ Studio scenes and then let the AI do post-processing to add the realism I want.

  • AsuCafe Posts: 247
    edited 3:17AM

    I don't have a deep understanding of AI technology. My understanding of AI is based on image synthesis. In the past few years, AI was obviously doing 2D synthesis. However, based on my recent experience, I "feel" that many AIs use 3D models as a base for synthesis, in order to put the correct parts of the material into the correct positions, and even produce the correct shadows and sense of distance, because the finished images look very much like 3D images.

    I guess what you mean is similar to what I mentioned in another section before. When 3D rendering reaches a certain stage, the general colors, materials, and lighting are already determined. Only the "details" remain to be completed. At this point, it is handed over to AI. Based on the 3D setting data (model, materials, HDRI settings, etc.), the AI knows what the "details" should look like, and then selects the most suitable material to composite it. The result is almost indistinguishable from 3D rendering.

    But is this really faster? In my experience, generating images using AI on a personal computer is quite slow...

    Incidentally, the only thing I dislike about AI is that its content is based on stolen material. However, DAZ is openly seeking AI partners. If there were an AI based on legally licensed material libraries, allowing creators the freedom to create without violating the law, and whose products enjoyed legal copyright, I would be happy to use it.

     

    Post edited by AsuCafe at
  • AsuCafe Posts: 247

    Paradigm said:

    AI just keeps getting better and better. AI Studio flopped. I think a world where AI and Daz co-exist can be a thing, but Daz needs to innovate. Using text-to-pose would decrease the time it takes to do poses and would finally address the bloat that is buying pose pack after pose pack for every figure. Character consistency and multiple-character scenes remain very difficult to do with AI, and Daz would work very well as an intermediary. I just set up ComfyUI and the barrier to entry was much lower than learning Daz for the first time, and I think that will need to be addressed sooner rather than later. It's been the same faces here for over a decade, and while that's cool, a company isn't going to survive this new reality without accepting the direction the space has moved. I say this because I enjoy Daz and want to see it stay modern.

    DAZ's collaboration with AI doesn't even need to be in image generation. I'd be happy if AI could help me quickly select the pose I need from thousands of poses. 

  • Masterstroke Posts: 2,328
    edited 10:59AM

    Here is what AI SHOULD do, and ONLY this:
    - Improving realism on finished renders.
    - Adding render styles to finished renders, like a toon style, vintage photo style, or film stock, or using an image style as a reference for post-processing.
    - Working locally on users' computers.

    Here is what AI must NOT do:
    - Changing image content.
    - Changing character appearances.
    - Hallucinating items into or out of the rendered scene.
    - Connecting to a database and using YOUR work to train AI models!!!!!!!!
    - Working as a browser app.

    Post edited by Masterstroke at
  • Artini Posts: 10,356

    I just asked an AI what it thinks about the first suggestion. Below is the answer.

    There are quite a few ways I can help you improve realism in your finished renders
    from **Daz Studio**—both inside Daz itself and in your post‑processing workflow.
    Here’s how I can assist and what approaches usually give the biggest realism gains:

    ## How I Can Help You Improve Realism

    I can help in these areas:

    ### **1. Lighting Optimization**

    Realism = lighting + materials, above all else.  
    I can help you:

    *   Choose proper HDRIs (studio, outdoor, nighttime, etc.)
    *   Build multi‑light setups (3‑point, cinematic, rim-lighting, bounce light)
    *   Set correct **exposure**, **tone mapping**, and **white balance**
    *   Add believable global illumination & soft shadows

    If you want, you can describe your typical scene, and I’ll design a custom lighting setup for it.

    ### **2. Material & Shader Enhancements**

    I can guide you on:

    *   Improving skin realism (SSS settings, bump/normal/displacement tuning)
    *   Enhancing metallic, glass, and fabric shaders
    *   Fixing plastic-looking surfaces
    *   Tuned Iray Uber shader parameters
    *   Handling translucency & thin-walled materials correctly

    If you provide a screenshot of your surface settings, I can analyze them.

    ### **3. Camera & Composition Realism**

    I can help you:

    *   Pick **real-world F‑stop / focal length** combos
    *   Use depth‑of‑field properly (not too strong)
    *   Add chromatic aberration & lens imperfections subtly
    *   Choose filmic composition (rule of thirds, leading lines, etc.)

    Realism increases dramatically when camera behavior feels physically accurate.

    ### **4. Rendering Settings for Maximum Realism**

    I can help configure:

    *   Iray render quality settings
    *   Convergence vs max samples
    *   Caustics and spectral rendering
    *   Denoiser usage (strength, when to disable, when to blend)

    I can also help find the sweet spot for your hardware.

    ### **5. Post‑Processing for Photoreal Results**

    This is where finished renders often go from “CG” to “real”.

    I can guide you through:

    *   Color grading (cinematic LUTs, S‑curves)
    *   Exposure & white‑balance correction
    *   Adding micro‑noise / grain for film realism
    *   Sharpening vs clarity
    *   Bloom, glare, and lens artifacts
    *   Subtle atmosphere / haze / depth grading

    If you upload a render (or describe it), I can tell you exactly what would improve realism.

    ### **6. Scene Optimization**

    Realism comes from details:

    *   Correct scale of objects (critical)
    *   Natural character posing (no stiffness)
    *   Better cloth draping & wind simulation
    *   Avoiding empty backgrounds
    *   Using props with proper PBR maps

    I can also help you build more realistic scenes from scratch.

    ## If you want, I can also:

    *   Review a render and give a step‑by‑step realism improvement plan
    *   Create a **post‑processing recipe** tailored to your style
    *   Suggest specific Daz/Photoshop/GIMP tools to enhance realism
    *   Provide realistic lighting presets based on your scenario

     

  • WendyLuvsCatz Posts: 40,239

    I avoid using just prompts at least

    I try to at least start with an image I had some input in, even if it's a photo

    people are actually pretty good at stealing and editing stuff without touching AI, surprise

  • Masterstroke Posts: 2,328

    Artini said:

    I just asked an AI what it thinks about the first suggestion. Below is the answer.

    There are quite a few ways I can help you improve realism in your finished renders
    from **Daz Studio**—both inside Daz itself and in your post‑processing workflow.
    Here’s how I can assist and what approaches usually give the biggest realism gains:

    ## How I Can Help You Improve Realism

    I can help in these areas:

    ### **1. Lighting Optimization**

    Realism = lighting + materials, above all else.  
    I can help you:

    *   Choose proper HDRIs (studio, outdoor, nighttime, etc.)
    *   Build multi‑light setups (3‑point, cinematic, rim-lighting, bounce light)
    *   Set correct **exposure**, **tone mapping**, and **white balance**
    *   Add believable global illumination & soft shadows

    If you want, you can describe your typical scene, and I’ll design a custom lighting setup for it.

    ### **2. Material & Shader Enhancements**

    I can guide you on:

    *   Improving skin realism (SSS settings, bump/normal/displacement tuning)
    *   Enhancing metallic, glass, and fabric shaders
    *   Fixing plastic-looking surfaces
    *   Tuned Iray Uber shader parameters
    *   Handling translucency & thin-walled materials correctly

    If you provide a screenshot of your surface settings, I can analyze them.

    ### **3. Camera & Composition Realism**

    I can help you:

    *   Pick **real-world F‑stop / focal length** combos
    *   Use depth‑of‑field properly (not too strong)
    *   Add chromatic aberration & lens imperfections subtly
    *   Choose filmic composition (rule of thirds, leading lines, etc.)

    Realism increases dramatically when camera behavior feels physically accurate.

    ### **4. Rendering Settings for Maximum Realism**

    I can help configure:

    *   Iray render quality settings
    *   Convergence vs max samples
    *   Caustics and spectral rendering
    *   Denoiser usage (strength, when to disable, when to blend)

    I can also help find the sweet spot for your hardware.

    ### **5. Post‑Processing for Photoreal Results**

    This is where finished renders often go from “CG” to “real”.

    I can guide you through:

    *   Color grading (cinematic LUTs, S‑curves)
    *   Exposure & white‑balance correction
    *   Adding micro‑noise / grain for film realism
    *   Sharpening vs clarity
    *   Bloom, glare, and lens artifacts
    *   Subtle atmosphere / haze / depth grading

    If you upload a render (or describe it), I can tell you exactly what would improve realism.

    ### **6. Scene Optimization**

    Realism comes from details:

    *   Correct scale of objects (critical)
    *   Natural character posing (no stiffness)
    *   Better cloth draping & wind simulation
    *   Avoiding empty backgrounds
    *   Using props with proper PBR maps

    I can also help you build more realistic scenes from scratch.

    ## If you want, I can also:

    *   Review a render and give a step‑by‑step realism improvement plan
    *   Create a **post‑processing recipe** tailored to your style
    *   Suggest specific Daz/Photoshop/GIMP tools to enhance realism
    *   Provide realistic lighting presets based on your scenario

     

    Thank you for this excellent example of why asking AI is completely useless.
    The usual yada-yada answer from the AI.
    Not wrong, but mostly things already heard in tutorials for multiple render packages, without taking into account the little shortfalls of specific render software.
    E.g., in our case of DAZ Studio and Iray, having more light sources is counterproductive.
    Iray lights have inaccuracies that add up with the number of light sources in a scene.
    In my experience, you should not use more than one HDRI and one light source. Every extra light source makes it more difficult to render a photorealistic scene.

  • Artini Posts: 10,356

    Good points. I was just interested in whether AI could really help.

     

  • Artini Posts: 10,356

    Below is an example answer from running AI on the local computer.

    Here are **four complete, repeatable ComfyUI / Stable Diffusion workflows**
    you can use to style your **finished Daz Studio renders**
    into **toon**, **painterly**, **film‑grain**, and **photographic** looks.
    Each recipe assumes you start with a **finished PNG from Daz Studio**
    and prefer **image‑to‑image** (so pose/geometry are preserved).
    I’ll give you **node graphs (by name), key settings, prompts, and tuning tips**.

    > **Models referenced:** SDXL base+refiner (or SD 1.5 if you prefer),
     relevant LoRAs, ControlNet(s), and IP‑Adapter for reference styling.  
    > **General guidance:** Start with **denoise=0.20–0.35** for gentle stylization;
     increase only if you want stronger changes.

    ## 0) Common Setup & Best Practices

    **Nodes you’ll reuse across recipes**

    *   **Load Checkpoint**: `SDXL 1.0 Base` (or your preferred toon/painterly/photo model)
    *   **Load VAE**: matching the model (SDXL VAE)
    *   **CLIP Text Encode (positive/negative)**
    *   **Load Image**: your Daz render (preferably **16‑bit PNG/TIFF**, linear or neutral tone)
    *   **VAE Encode Image → Latent**
    *   **KSampler** (Euler a / DPM++ 2M SDE / UniPC)
    *   **VAE Decode Latent → Image**

    **Shared settings (good defaults)**

    *   Resolution: match your render (e.g., **1536×864**, **2048×1152**, **1024×1536** for portrait)
    *   CFG: **4.0–6.5** (SDXL likes lower CFG)
    *   Steps: **20–28** (SDXL base), **14–20** (refiner)
    *   Denoise: **0.20–0.35** (light stylization), **0.45–0.65** (strong style change)
    *   Seed: fix seeds for reproducibility; change **only one** thing at a time

    **Structure for SDXL base+refiner**

    1.  **Pass 1** (Base): prompts, ControlNet/IP‑Adapter, LoRA(s) → **latent**
    2.  **Pass 2** (Refiner): feed the latent to **SDXL Refiner** at denoise **0.05–0.20** for texture/finish

    ## 1) **Toon / Anime / Cel‑Shaded**

    ### A) Minimal, controllable toon (no line art)

    **Node graph**

    *   Load Checkpoint: **“toony-mix SDXL”** or **“anything-v4.5 (SD1.5)”** (if SDXL toon is unavailable)
    *   Load Image → VAE Encode
    *   CLIP Text Encode (positive):
            highly stylized toon, cel shading, clean color blocks, minimal shading, vibrant palette, studio lighting, character centered, sharp silhouette, line accuracy
    *   CLIP Text Encode (negative):
            photo realism, skin pores, film grain, noise, artifacts, overexposure, washed out colors, extra fingers, deformed anatomy
    *   Optional **LoRA Stack**: Toon LoRA at **0.6–0.9** weight
    *   **KSampler**: steps 24, CFG 5.5, **denoise 0.28**
    *   VAE Decode → Image

    **Tips**

    *   For stronger “cel” surfaces, increase denoise to **0.38–0.48**
    *   Keep CFG < **7** to avoid washed colors
    *   Scale your latent to **1.5x** and run a second gentle pass for cleaner flat regions

    ### B) Toon with **clear outlines**

    Add:

    *   **ControlNet: LineArt (Anime) or Scribble**
        *   Preprocessor: **Lineart/AnimeLineart**
        *   Strength: **0.6–0.8** (higher = stronger line adherence)
        *   Guidance: **0.9–1.2**

    And/or:

    *   **IP‑Adapter (Style)** with a toon reference image
        *   Weight: **0.5–0.8**
        *   Use **Style Only** mode (not content) if available in your node pack

    ## 2) **Painterly (Oil / Watercolor / Impressionist)**

    ### A) Reference‑style painterly (IP‑Adapter)

    **Node graph**

    *   Load Checkpoint: **SDXL Base** (or a painterly SDXL finetune like “**dynavisionXL-art**” if you have one)
    *   Load Image (Daz) → VAE Encode
    *   Load Image (Style reference, e.g., Monet painting scan)
    *   **IP‑Adapter Apply (Style)** → connect to KSampler
        *   Weight: **0.55–0.75**
        *   Keep “content preservation” enabled if provided (helps pose/shape)
    *   CLIP Positive prompt:
            impressionist oil painting, expressive brush strokes, canvas texture, rich impasto, natural color harmony, soft edges, volumetric light
    *   Negative prompt:
            photoreal skin pores, harsh line art, CGI plastic, oversmoothing, excessive blur, neon palette
    *   KSampler: steps 26, CFG 5.5, **denoise 0.32**
    *   VAE Decode → Image

    **Optional**: **ControlNet Depth**

    *   Extract depth from the Daz image to keep structure
        *   Strength **0.4–0.6**

    **Refiner pass**

    *   Switch **Load Checkpoint → SDXL Refiner**
    *   Denoise **0.10–0.18**
    *   This adds luscious micro‑texture that reads like paint

    ### B) Painterly via LoRA (no reference)

    *   Add a **Painterly LoRA** (oil/watercolor) at **0.6–0.9**
    *   Increase **negative** “photorealism” terms to push away CG feel
    *   Slightly **downscale** (e.g., to 1400px height) to encourage broader brush strokes;
    upscale later with an **Art‑friendly upscaler** (R‑ESRGAN 4x anime6B can work well even for painterly)

    ## 3) **Film‑Grain / Analog Film Stock Emulation**

    You’ll combine **color science + halation + grain**.
    Two approaches: **LUT‑driven** or **prompt‑driven**.

    ### A) LUT + grain (post within ComfyUI)

    **Node graph**

    *   Base generation: SDXL Base/Refiner at **denoise 0.20–0.28** (minimal stylistic change)
    *   **Color Transform / Apply LUT** node (or **Filmic Tonemapper** bundle if you have it)
        *   Load a film LUT: **Kodak 2383/2393, Fuji Eterna, Portra 400, Vision3 500T**
        *   Intensity: **0.3–0.6**
    *   **Halation/Bloom Node** (or custom glare pipeline)
        *   Threshold: **0.7–0.85**
        *   Radius: **0.6–1.2** (scene dependent)
        *   Intensity: **0.15–0.35**
    *   **Film Grain Node**
        *   Amount: **0.12–0.25**
        *   Size: **0.8–1.5** (relative to resolution)
        *   Blend mode: **overlay/soft light**

    **Prompts (light touch)**

    *   Positive:
            cinematic color science, natural contrast, soft highlight rolloff, authentic film grain, mild halation, subtle chromatic aberration
    *   Negative:
            plastic skin, aggressive sharpening, digital artifacts, posterization

    ### B) Model/LoRA‑driven film look (with IP‑Adapter color match)

    *   Use a **cinematic SDXL** checkpoint (e.g., photoreal‑cinema tuned)
    *   IP‑Adapter **Style** with a **single film still** from the target era/stock
        *   Weight: **0.35–0.6** (keep gentle to avoid faces changing)
    *   Keep **denoise 0.20–0.30**
    *   Post nodes same as A (LUT + grain + halation)
    *   Add **Chromatic Aberration node** at **0.2–0.4 px** (subtle!)

    **Pro tip:** Grain should sit **on top** of the image at 1:1 viewing.
    If it disappears at 100%, raise the amount slightly or use a **luma‑weighted grain** node.

    ## 4) **Photographic (High‑end commercial look)**

    Goal: realistic, crisp, with filmic contrast. Preserve your Daz lighting, only enhance.

    **Node graph**

    *   Load Checkpoint: **SDXL Base** (neutral, photoreal) → Refiner
    *   Load Daz Image → VAE Encode
    *   **ControlNet: Depth** at **0.35–0.55** (keeps geometry crisp)
    *   **(Optional) IP‑Adapter Style** with a **reference photo** that matches your desired grade
    (e.g., fashion editorial). Keep weight **0.25–0.45** for subtle color grade, not content change.
    *   Positive prompt:
            ultra realistic photograph, natural skin tones, precise detail, soft filmic contrast, subtle bokeh, clean color separation, accurate highlights, professional studio lighting
    *   Negative prompt:
            oversaturation, HDR look, plastic skin, waxy, over-sharpened edges, banding, color fringing
    *   KSampler: **steps 24**, CFG **5.0–6.0**, **denoise 0.22–0.30**
    *   Refiner: **denoise 0.08–0.15**
    *   Post:
        *   **Local Contrast/Clarity** node: **0.1–0.2**
        *   **Vignette**: strength **0.12–0.2**
        *   **Sharpen (mid‑frequency)**: radius **0.4–0.8**, amount **0.2–0.35**
        *   **Very light grain**: amount **0.05–0.12**

    **White balance tip:** If your Daz render has a tinted HDRI, correct WB first (Kelvin ≈ **5500–6500K**)
     before stylizing.

    ***

    ## 5) **Full ComfyUI Wiring (Template)**

    Here’s a generic **image‑to‑image SDXL** skeleton you can re‑use and expand with
    IP‑Adapter, ControlNet, LoRA, and LUTs:

    ```markdown
    ### Nodes (connect in order)
    1) Load Checkpoint → SDXL Base
    2) Load VAE → SDXL VAE
    3) CLIP Text Encode (Positive)
    4) CLIP Text Encode (Negative)
    5) Load Image (your Daz render)
    6) VAE Encode (Image → Latent)
    7) [Optional] ControlNet (Depth/Lineart):
         - Preprocess from the same input image
         - Connect to KSampler as conditioning
    8) [Optional] IP-Adapter (Style):
         - Load reference style image
         - Connect to KSampler as additional conditioning
    9) [Optional] LoRA Stack:
         - Attach to model (weights 0.4–0.9 depending on strength)
    10) KSampler (Base):
         - Steps 24, CFG 5.5, Denoise 0.28
         - Connect positive/negative encodings + conditionings
         - Output Latent
    11) VAE Decode (Latent → Image) → "Base Output"
    12) Load Checkpoint → SDXL Refiner
    13) VAE Encode (Base Output → Latent)
    14) KSampler (Refiner):
         - Steps 18, CFG 5.0, Denoise 0.12
    15) VAE Decode → "Refined Output"
    16) [Optional Post] Apply LUT → Halation → Grain → Vignette → Sharpen
    17) Save Image
    ```

    You can clone this graph and swap the **optional** blocks depending on the style:

    *   **Toon**: LineArt ControlNet + Toon LoRA, raise denoise to 0.35–0.45
    *   **Painterly**: IP‑Adapter (Style) + Painterly LoRA, Depth ControlNet at 0.5
    *   **Film**: Minimal denoise, then LUT + halation + grain stack
    *   **Photographic**: Depth ControlNet + Refiner + subtle post

    ## 6) Prompts You Can Paste

    **Toon (positive)**

        clean cel shading, flat color regions, bold silhouette, stylized character proportions, minimal shading, vibrant color palette, studio rim light

    **Toon (negative)**

        photorealism, skin pores, noise, film grain, texture artifacts, smeared details

    **Painterly (positive)**

        painterly oil on canvas, visible brushwork, thick impasto, soft edges, natural palette, gentle diffusion of light, canvas texture

    **Painterly (negative)**

        hyperreal pores, repetitive patterns, harsh outlines, neon saturation

    **Film (positive)**

        cinematic color science, natural highlight rolloff, authentic fine film grain, delicate halation, balanced contrast, accurate skin tones

    **Film (negative)**

        digital harshness, oversharpening, banding, chroma noise, plastic skin

    **Photographic (positive)**

        high-end editorial photography, controlled studio lighting, realistic skin texture, filmic contrast, subtle bokeh, color accuracy

    **Photographic (negative)**

        overprocessed, HDR effect, waxy skin, halos, blown highlights

    ***

    ## 7) Troubleshooting & Tuning

    *   **Faces change too much** → Lower **denoise**, use **Identity/Face ControlNet** or **IP‑Adapter Face**.
    *   **Colors drift** → Lower CFG, add **IP‑Adapter style** at 0.3–0.5, or apply **LUT** after.
    *   **Edges get jaggy** → Increase steps slightly or run a **refiner** pass;
        consider **latent upscaling** then decode.
    *   **Too much stylization** → Reduce LoRA weight (0.4–0.6), reduce IP‑Adapter weight,
        drop denoise to **0.22–0.28**.
    *   **Flat results** → Slightly increase CFG (by 0.5), add mild **local contrast** post,
        or add a **rim‑light cue** in the prompt.

    ## 8) Performance Notes (useful on local GPUs)

    *   SDXL loves **fp16** and **xformers** (if your build supports it).
    *   Prefer **latent size close to final**; avoid huge upscales inside the sampler.
        Upscale after with **ESRGAN/4x-UltraSharp**.
    *   Batch size **1**, accumulate seeds for variants; batch >1 increases VRAM footprint quickly.

     

  • NylonGirl Posts: 2,243
    If there were some way to get realistic images from DAZ Studio alone, I would probably use DAZ Studio alone without the AI. The AI that I'm using can't possibly be stealing art from DAZ Studio creators because none of them make anything like that.