
Daz 3D Forums > Search
  • Will 3D Characters Ever Replace Live Models?

    Of course they will eventually. Any prediction to the contrary isn't looking very closely at how radically the technology of 3D graphics has changed. When I started in 3D graphics (35 years ago in 1984) we had to program all of our own tools. The idea of a "hobbyist" rendering tool like Poser or Daz Studio was almost laughable given how long even the simplest renders would take on our "blazing" 8 MHz Intel 286 CPUs with a whopping 4 MB of memory!

    13 years ago, in 2006 when I started with Daz Studio, we were still mostly limited to biased rendering engines that required a great deal of dedication to get realistic approximations of real world lighting. Physically Based Rendering engines were mostly limited to some academics and industry leaders with access to high-end computing power. Nobody was ever going to be fooled by a 3Delight render of Victoria 3!

    We're still on a pace where computing power is doubling every 2.5 - 3 years. To predict that technology will "never" be powerful enough to replace humans with virtual actors is likely to be proven false. There will certainly be a time when it is cheaper and faster to render an animation than to take an entire human film crew on location.
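
    Back-of-the-envelope, that doubling rate compounds dramatically. A quick illustrative sketch (my own function name, not anyone's forecast):

```python
# If compute doubles every `doubling_period` years, how many times
# faster are machines after `years` years? Growth is 2^(years/period).
def growth_factor(years, doubling_period):
    return 2 ** (years / doubling_period)

for period in (2.5, 3.0):
    factor = growth_factor(25, period)
    print(f"doubling every {period} yrs -> about {factor:,.0f}x in 25 years")
```

    At a 2.5-year doubling that works out to roughly a thousandfold in 25 years, which is why "never" is a risky word in these predictions.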

    As for people not accepting CG replacements for "real" actors, I think that's inaccurate as well. Do you think kids (or adults for that matter) today care that Woody isn't "real"? Didn't we still tear up when Andy said goodbye to him? Did we really care that Gollum was a 3D generated character? If we can get past the "uncanny valley" stage of 3D animation and expression, I can easily see people becoming fans of specific CGI characters. I can foresee a time when the box office draw of "Victoria the 9th" is just as big as any human actress. Or, perhaps even more likely, when people don't care who the actor is anymore, they're going to movies to see the characters and the story, not the headliner.

    I have re-watched "Alita Battle Angel" several times, mostly because the blending of CGI and live action is so good (to my eyes) that I'm amazed at the technology and skill of the movie making. I don't even know how many of the characters in that movie didn't have a real actor, just someone doing the voice. And, tbh, I don't really care. The characters and story are compelling enough for me to watch (and re-watch) the movie.

    Characters and story.  #1 and #1.  And I liked Alita very much too.

    The time will come, if not completely within my lifetime, certainly within my daughters'. And I can see on the horizon a time when my grandkids don't even pay attention to "who" is playing the character anymore. The focus will shift completely to directors, writers, animation houses, and voice actors (and once we can generate realistic, nuanced human speech, they'll be out of a job too).

    But we're in an amazing time to be alive.  Just like how you can write, play, produce, publish, and distribute your own music album with just a laptop and a good audio/MIDI interface, we have the technology to make our own stories come to life.

    There once was a day when EVERYBODY could tell stories around the campfire, or in the kitchen while peeling potatoes.  I for one am very happy to have the media industry cracked wide open for all.  Fresh minds, new ideas!

    I have re-watched "Alita Battle Angel" several times, mostly because the blending of CGI and live action is so good (to my eyes) that I'm amazed at the technology and skill of the movie making


    Every time I rewatch "Alita: Battle Angel" I get a little frustrated that this technology and style was not used for the "Ghost in the Shell" remake.

    I certainly would have preferred a stylized, CG Motoko Kusanagi over a real Scarlett Johansson any day.

    For somebody like me who has zero knowledge of Motoko in the comics/anime/wherever, the Scarlett movie was fabulous.  It had interesting characters, a plot with twists, and overall told a GREAT story.  Putting Johansson in it was a stroke of genius.  And I say that not being a huge fan of hers.

    As for people not accepting CG replacements for "real" actors, I think that's inaccurate as well. Do you think kids (or adults for that matter) today care that Woody isn't "real"? Didn't we still tear up when Andy said goodbye to him? Did we really care that Gollum was a 3D generated character?

    And although it's a bit of a trope, we all still cheered when Helen found a job and left Bob at home to take care of the kids and help with homework.  It was great moviemaking.  Bob, Helen, and the kids are ALL 3D based!

    If we can get past the "uncanny valley" stage of 3D animation and expression, I can easily see people becoming fans of specific CGI characters. I can foresee a time when the box office draw of "Victoria the 9th" is just as big as any human actress. Or, perhaps even more likely, when people don't care who the actor is anymore, they're going to movies to see the characters and the story, not the headliner.

    There still are real actors behind all these characters. If there was a regular human looking CG character with a real actor behind it doing the voice acting and motion capture, why not just use the real actor?

    Because they introduce a huge amount of risk to an expensive project.  They get injured, they get sick.  They can die during production.  They can also quit on a whim which may require the reshooting of miles and miles of film (see what I did there?), they have schedule conflicts, they retire at 9 or at 39, suddenly and without warning. 

    Also, human actors can and do let you down.  They get arrested for drug, sex, or violent crimes, they become alcoholic, get hooked on drugs.  Or they can make careless comments in social media, annoy a foreign government while on vacation, or just say something totally boneheaded, scaring off half of your audience even before opening day.  And then everybody has to apologize. 

    Voice actors can be replaced, as evidenced by the voice of Po: Jack Black in the movies, somebody else on TV.  In the original Avatar: The Last Airbender TV series, Mako, the voice of "Uncle Iroh", died.  Well, damn.  But were they going to stop production because of the death of one of the central characters?  No, of course not.  They found a substitute that was more than passable.

    Risk management.  That's why you might not want to hire "the real actors".

    CG is great whenever there's a fantastical component or the actions of the character are impossible to do for a real actor (switching Keanu Reeves to CG when he's fighting 100 agents simultaneously), but using a CG character for your average character drama makes no sense and certainly won't save much money either.

    It's not always about "saving money".  As I said above, managing risk may be even more important than being economical.

    Unless that CG character is a completely autonomous AI that is essentially an actor in its own right. I could see that maybe being a novelty at some point in the future, where such an AI could advance to stardom on its own. Even then, I doubt this will lead to a complete replacement of real actors. Acting is an art, and like other art forms, I think these will be the last strongholds of humans even when most other jobs have been replaced by machines.

    It is an art, yes indeed.  But actors are not God's gift to humanity and they certainly are not the only possessors of these art skills.

    SIGGRAPH was in Los Angeles this year, so I went. I was quite surprised to note that what was cutting edge Virtual Reality type stuff is already considered soooo last year... "You say your company has a VR product? So what, buddy, who doesn't?"

    The absolutely most mind-blowing thing I saw at SIGGRAPH was a presentation I attended on FACS. I thought it was going to be about how to construct better facial expressions using the Action Units defined by FACS. I was totally wrong.

    Google "Mark Sagar Baby X", and I think you'll find a cut-down version of the video he presented. Basically, researchers have modeled the physiological/chemical changes in the brain in certain scenarios, and discovered that they correspond to facial expressions. So they have a neural network that simulates this: when you talk to the baby and, say, it recognizes its own name, it makes a natural expression. If you say "spider" the baby acknowledges what you said but perhaps doesn't understand, and the facial expression is what you would expect, a kind of "watchu talkin bout Willis?" But if you say "scary spider" the baby kind of freaks out, with appropriate expressions. It was not explicitly "programmed" to do any of that. Technological advances are not linear, they're exponential, and as such we can never act fast enough to predict the truly revolutionary ones.

    I say it again, if you think this is not coming, and sooner rather than later, you are not paying attention.

    Thank you for the reference.  I'll watch.

    I have re-watched "Alita Battle Angel" several times, mostly because the blending of CGI and live action is so good (to my eyes) that I'm amazed at the technology and skill of the movie making


    Every time I rewatch "Alita: Battle Angel" I get a little frustrated that this technology and style was not used for the "Ghost in the Shell" remake.

    I certainly would have preferred a stylized, CG Motoko Kusanagi over a real Scarlett Johansson any day.

    Considering "Alita" was a 16-year "passion project" for Cameron, who has enough clout to "call the shots" as he wishes, it resulted in the film we got (which I ADORE!!!  I've craved this movie since Cameron first announced it in 2003).  I suspect there was not the same kind of "drive" behind "Ghost in the Shell".

    Sincerely,

    Bill

    Cameron also waited decades to do the first Avatar film.  He waited for the technology to get to a point where it would meet with his expectations for telling the story without pulling the viewer out of the story.

    It's rather funny, but much of what's being debated in this thread was actually addressed in an old 1981 film called "Looker", starring James Coburn and Albert Finney.

    It's a really good thriller, but there were some real logic goofs that made it rather laughable for a film with such a forward-looking premise.  While they correctly predicted the eventual technology that can replace a live actor with CGI, they utterly failed at conceiving that the same thing could be done to replace film stages with CGI, too... and they even neglected a certain bit of TV magic and were relying on LIVE BROADCASTING like they had to do in the early days of Dark Shadows and Doctor Who--check out the fantastic chase scene where Finney is trying to avoid being killed in the last 30 minutes of the film!

    (or they may have intentionally tossed in those logical goofs, who knows?)

    By

    Ryuu@AMcCF September 2019 in The Commons
  • Cleaning Up My Hard-Drive of Old Content

    Hmmm... now that's a thought.  Technically, it's all backed up, but I'd have to trust the back-up system, which has a sketchy success rate.  Although it did restore that ill-fated year-and-a-half-old copy of my content directory.  And I used to burn all the install packs to data disks until I got bored, so I guess I'm covered. And I'm sure I've got the original downloads for Poser Pro 2014.  I found all the earlier ones [shaking head... along with every working memory chip I ever removed from a computer and my last two video cards... yeah, it's pack-rat central here.]  Onwards to free up space on my content drive... after all, there's new stuff to buy.

    By

    sandmanmax September 2019 in The Commons
  • Will 3D Characters Ever Replace Live Models?

    Granted, 3D rigging has improved since the Gen3 days, but there is still a ways to go.

    Basically you want to pose a flesh-and-blood, flexible human, but what you get to work with is a slightly improved Barbie doll that only bends at the joints.

    I don't understand this. Couldn't you create morphs or blendshapes or whatever it's called in the software and bend or shape the figure however you want for an animation?

    You get what you get. What I got were rigs on the central server, and we used references of those. Modelers would keep changing the rig when needed; whenever you reloaded the scene, you would always work with the latest update. We had several animators working on the same character. It was not up to the animators to create new blendshapes. We could ask for changes to be made to the rig; sometimes we would get them, sometimes not. It's been a while since I animated in Maya, so I hope the rigs got a bit better. But especially the torso and shoulder sections would be way too stiff to do strong lines of action. You'd need a freely bendable spline for that. I was already happy to have a rotating hip that left the upper body where it was. (Otherwise doing walks would have been hell on earth.)

    I think it was also a question of hardware limits. We usually did the broad animation with low-res rigs, and switched to the hi-res versions only when going into detail, which made scrolling through the timeline an ordeal. They tried to keep the rigs as light as possible. Doing a single pose is not that much of a burden for the hardware. Animating with loads of keys: that's another story. Especially in multi-character setups.

    I saw somebody do a nice elephant rig a few years later. That one had muscles under a loose skin and rigid bones. Must have been a lot of work to animate that. The GOT dragon rigs must have been perfection. The Tarkin and Leia rigs from Rogue One... well, not so much...
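
    For anyone wondering what a morph/blendshape actually does under the hood: it's just a weighted sum of per-vertex offsets added to the base mesh. A minimal, purely illustrative sketch (not Maya's or Daz Studio's actual API; every name here is made up):

```python
# Minimal linear blendshape: each target stores per-vertex deltas from
# the base mesh; the deformed mesh is the base plus a weighted sum of
# those deltas.
def apply_blendshapes(base, targets, weights):
    """base: list of (x, y, z) vertices; targets: {name: list of (x, y, z)
    deltas, same length as base}; weights: {name: float, usually 0..1}."""
    result = [list(v) for v in base]
    for name, deltas in targets.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue  # inactive target, skip the per-vertex loop
        for i, (dx, dy, dz) in enumerate(deltas):
            result[i][0] += w * dx
            result[i][1] += w * dy
            result[i][2] += w * dz
    return [tuple(v) for v in result]

# A one-vertex "smile" target dialed to half strength:
base = [(0.0, 0.0, 0.0)]
targets = {"smile": [(0.0, 1.0, 0.0)]}
print(apply_blendshapes(base, targets, {"smile": 0.5}))  # [(0.0, 0.5, 0.0)]
```

    That per-vertex freedom is exactly why morphs can fix shapes a joint-only rig can't reach; the catch, as described above, is that authoring them was the modelers' job, not the animators'.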

    By

    Sempie September 2019 in The Commons
  • Novica & Forum Members Tips & Product Reviews Pt 12

    Thanks all. 

    Who is in Dorian's path now? It seems Pensacola/Panhandle are safe, but my sister in Jacksonville... not so much now. So, which thread folks are in Florida, Georgia, or the Carolinas?

    Possibly me. With the slowing of forward speed and the consequent change in predicted path, it could hug the coast and hit the North Carolina Outer Banks where I am. Also, I know that both Miss Bad Wolf and Carrie are in central to eastern North Carolina, and I think Petercat is in the coastal South Carolina or Georgia area.

    By

    Charlie Judge August 2019 in Art Studio
  • My phone will not charge complaint thread

    Good time not to be in Florida.

    I used to live with relatives on the central east coast of Florida, right where this storm is heading (at the moment). I was there in 2004 when we had four (count 'em, 4: Charley, Frances, Ivan, & Jeanne) hurricanes pass through or near that area.  I stayed in the house while everybody else fled.  I remember waking up to an inch of water on the floor of my bedroom in the basement and the thud of a small tree falling on the living room upstairs.  There was (and still is) an absolutely HUGE live oak tree with a branch the size of a mature maple tree hanging horizontally over the full length of the large ranch-style house (see example of a big live oak tree below).  Every time there's a hurricane I check with the relatives to see if that branch has fallen yet.  It was a large property, and that year we lost 9 palm trees as well as the small live oak that fell on the living room, due to the storms.

    By

    LeatherGryphon August 2019 in The Commons
  • August 2019 – DAZ 3D New User Challenge – Free Month (With a Twist)

    Okay,

    First, for all the CG artists bold enough to put themselves "out there" for this month's challenge--congrats.

    My favorite is Leana.  Now, first and foremost, I like the render and butterflies and the bokeh, but in keeping with the challenge rules, the constructive criticism I would offer is as follows.

    I think the color palette is awesome. I think the pose is both open yet closed, and this is a detail of subtle complexity that I like very much and think works very well in this render. What I mean by this is that in a recent TED Talk, the speaker described spatial footprints and how uncertain, subservient people often display smaller footprints.  They sit tightly. They stand tightly. They fold their arms, etc.  Conversely, dominant, free-feeling people spread their bodies out into large body footprints. The example in the TED Talk was something along the lines of someone going on a vacation and lying on a king-size bed and just sprawling out.  Legs and arms splayed as if physically screaming I'm FREE!

    This woman's emotions are mixed.  Her narrowly poised shoulders are hooked between her outwardly spread yet crossed legs.  In keeping with the TED Talk interpretation of body language, she feels both carefree yet cautious.  The butterflies--whimsy, youth, innocence.  Perhaps they represent freedom.  They definitely represent beauty.

    The thing is, the butterflies are all around her--fluttering.  It's chaos.  This is almost like a still shot that also somehow successfully captures a series of frames, because one's eye does not simply capture it all at once.  Instead, my eye captures the scene in still frames: the expression and pastel blues, yellows, and grays.  The next still shot is the butterflies at large, around her--then that central necklace.  Finally, for me, anyway, the final still shot that stands out as my eyes take in this render is the butterfly on her right knee.  This single butterfly completes the story and connects her to the universe at large.  This person is on the verge of opening up to someone--yet she's cautious.  Her legs are crossed.  Her arms are narrowly positioned.  For her, it's a moment of hopeful vulnerability.

    Excellent Job.

    Okay--now for the difficult job of saying what I would do differently.  Since I am not going to attempt to compete with the complexity and simplicity of that butterfly cloud, what I have done differently in my render is turn one render into a triptych.  I have attempted to capture the arc of still shots that my eye saw in Leana's and freeze them in three distinct moments of my triptych. These frames move left to right.

    Frame 1: Outright, uncontrollable joy that would ultimately lead to such "hopefulness" as described above in the original challenge image. Instead of the butterflies connecting with the viewer or universe, I have chosen eye contact and expression.

    Frame 2: With the fading of initial exuberance, she becomes protective. There is a drawing in of the body and of the body footprint.  The joy is still there in both the body and expression, but everything is becoming more protective.  The eye contact remains.

    Frame 3: Hopeful vulnerability.  She is giving into the emotion, yet she remains protective. She exhibits a small footprint similar to that of Leana, and in the same way the butterflies connect Leana to the outer world, Ella's eye contact connects her to the viewer, signifying openness, something welcome.  Hope.

    Note: in each part of the triptych, I am attempting to maintain the image of the butterflies in the abstract by keeping the bokeh.  

    In terms of lighting and texture, this challenge render reflects a lot of pastels: lavender, blue, yellow, green.  Basically, the Leana challenge render is Easter--hope eternal.  Even the sharp blues that contrast with the pastels signify innocence.  Very nicely done.  Very artistic and metaphorical.

    So.... to achieve some sort of contrast, I guess I'm going to attempt something more photorealistic and trust in eye contact to convey eternal hope. 

    At any rate, this was a great challenge.  As this late entry shows, I had some difficulty with this and could not submit my render until the last few days of the challenge.  

    Nice renders.  Nice challenge.  I hope people like my interpretation of Leana.


    By

    Sci-Tasia August 2019 in New User Contests and Events
  • Will 3D Characters Ever Replace Live Models?

    Ignoring casting choices and character design, a core discontinuity with the live action "Ghost in the Shell" (GitS for brevity from this point) is the "world" depicted and the Major's situation.  The live action movie presents her as the first of her kind: she's turned into a cybernetic soldier against her will, her memory wiped, and she's told lies to make her more compliant.  That's almost Murphy's situation in "RoboCop".  But Shirow in his manga, and later the anime adaptations (both the theatrical movies and the TV series), depict a society where cyber upgrading is the norm.  It's known by the masses and widely accepted.  In fact, it's common enough that one "danger" involves the loss of individuality.  There's a scene in the 1995 theatrical anime when Motoko sees another person wearing the same "shell" as she does.  (Yes, hers is "tricked out" like a police car or a military humvee, but externally, it's a popular, widely available "model".)  A more humorous spin on this idea unfolds in the original manga when the Major is mistaken for a "pleasure droid" that vanished from a factory.  This is about as polar opposite from the society depicted in the live action movie as you can get.

    I mentioned a "RoboCop" parallel in the live action movie: being converted into a cyborg without her consent, her memories wiped, and tricked into believing she was someone VERY different from her pre-surgery personality.  In all the earlier material, the various animes and mangas, she knew darn well who she was, her conversion was not the result of duplicity, and she joined Section 9 of her own volition.  Again, almost the direct opposite of all earlier versions.  Plus, after learning the truth in the live action flick, that she'd been turned into a killing machine against her will, at the end she decides, "Meh, I guess I'll just stick with it."  Giving in like that?  Had that happened to the Major of Shirow's manga or the various animes, she would have said, "F**k that!"

    And while admittedly a "petty" complaint, shoot, they couldn't even get her distinctive hairstyle right!!  Just a few spritzes of hairspray and a brush to fluff out the nape and her bangs were all that was needed!  Instead, it looked like a bucket of water was dumped upon her head, her hair hanging limp!

    And need we even discuss the studio opting for a bog standard "corporate duplicity" story instead of the theme of spontaneously evolving artificial intelligence that was the central theme of both Shirow's manga and the 1995 theatrical anime?  Don't tell me audiences wouldn't "get it".  The concept has been presented in various ways, sometimes even competently, for decades in the movies!

    Yes, the movie did recreate some iconic sequences we associate with the property.  The choir from the '95 anime returned (not the same singers, probably, but that selection of music).  They even had a truly bada$$ Aramaki speaking Japanese, but without a narrative that reflects the original material, it's just a disjointed sequence of homages.

    Sincerely,

    Bill

    By

    Redfern August 2019 in The Commons
  • My phone will not charge complaint thread

    Happy Birthday to me.

    ...congratulations on your most recent trip around our central G-type main sequence stellar primary, which in turn rotates around the core of our galaxy, which in turn is travelling through the expanding universe.  That's a lot of mileage logged in one year.

    ...in less astrophysical terms:

    Happy, Happy Birthday.


    By

    kyoto kid August 2019 in The Commons
  • Will 3D Characters Ever Replace Live Models?

    Of course they will eventually. Any prediction to the contrary isn't looking very closely at how technology of 3D graphics has changed radically. When I started in 3D graphics (35 years ago in 1984) we had to program all of our own tools. The idea of a "hobbiest" rendering tool like Poser or Daz Studio was almost laughable given how long even the simplest renders would take on our "blazing" 8Mhz Intel 286 CPUs with a whopping 4 MB of memory!

    13 years ago, in 2006 when I started with Daz Studio, we were still mostly limited to biased rendering engines that required a great deal of dedication to get realistic approximations of real world lighting. Physically Based Rendering engines were mostly limited to some academics and industry leaders with access to high-end computing power. Nobody was ever going to be fooled by a 3Delight render of Victoria 3!

    We're still on a pace where computing power is doubling every 2.5 - 3 years. To predict that technology will "never" be powerful enough to replace humans with virtual actors is likely to be proven false. There will certainly be a time when it is cheaper and faster to render an animation than to take an entire human film crew on location.

    As for people not accepting CG replacements for "real" actors, I think that's innaccurate as well. Do you think kids (or adults for that matter) today care that Woody isn't "real"? Didn't we still tear up when Andy said goodbye to him? Did we really care that Gollum was a 3D generated character? If we can get past the "uncanny valley" stage of 3D animation and expression, I can easily see people becoming fans of specific CGI characters. I can forsee a time when the box office draw of "Victoria the 9th" is just as big as any human actress. Or, perhaps even more likely, when people don't care who the actor is anymore, they're going to movies to see the characters and the story, not the headliner.

    I have re-watched "Alita Battle Angel" several times, mostly because the blending of CGI and live action is so good (to my eyes) that I'm amazed at the technology and skill of the movie making. I don't even know how many of the characters in that movie didn't have a real actor, just someone doing the voice. And, tbh, I don't really care. The characters and story are compelling enough for me to watch (and re-watch) the movie.

    Characters and story.  #1 and #1.  And I liked Alita very much too.

    The time will come, if not completely within my lifetime, certainly within my daughters. And I can see on the horizon a time when my grandkids don't even pay attention to "who" is playing the character anymore. The focus will shift completely to directors, writers, animation houses, and voice actors (until we can generate realistic, nuanced human speech then they'll be out of a job too).

    But we're in an amazing time to be alive.  Just like how you can write, play, produce, publish, and distribute your own music album with just a laptop and a good audio/MIDI interface, we have the technology to make our own stories come to life.

    There once was a day when EVERYBODY could tell stories around the campfire, or in the kitchen while peeling potatoes.  I for one am very happy to have the media industry cracked wide open for all.  Fresh minds, new ideas!

    I have re-watched "Alita Battle Angel" several times, mostly because the blending of CGI and live action is so good (to my eyes) that I'm amazed at the technology and skill of the movie making

     

     

    Every time I rewatch "Alita battleAngel" I get a little frustrated that this technology and style was not used for the "Ghost in the shell" remake.
     

    I certainly would have preferred a stylized, CG Motoko Kusinaki over a real Scarlet Johansen any day.

    For somebody like me who has zero knowledge of Motoko in the comics/anime/wherever, the Scarlet movie was fabulous.  It had interesting characters, a plot with twists, and overall told a GREAT story.  Putting Johansen in it was a stroke of genius.  And I say that not being a huge fan of her.

    As for people not accepting CG replacements for "real" actors, I think that's innaccurate as well. Do you think kids (or adults for that matter) today care that Woody isn't "real"? Didn't we still tear up when Andy said goodbye to him? Did we really care that Gollum was a 3D generated character?

    And although it's a bit of a trope, we all still cheered when Helen found a job and left Bob at home to take care of the kids and help with homework.  It was great moviemaking.  Bob, Helen, and the kids are ALL 3D based!

    If we can get past the "uncanny valley" stage of 3D animation and expression, I can easily see people becoming fans of specific CGI characters. I can forsee a time when the box office draw of "Victoria the 9th" is just as big as any human actress. Or, perhaps even more likely, when people don't care who the actor is anymore, they're going to movies to see the characters and the story, not the headliner.

    There still are real actors behind all these characters. If there was a regular human looking CG character with a real actor behind it doing the voice acting and motion capture, why not just use the real actor?

    Because they introduce a huge amount of risk to an expensive project.  They get injured, they get sick.  They can die during production.  They can also quit on a whim which may require the reshooting of miles and miles of film (see what I did there?), they have schedule conflicts, they retire at 9 or at 39, suddenly and without warning. 

    Also, human actors can and do let you down.  They get arrested for drug, sex, or violent crimes, they become alcoholic, get hooked on drugs.  Or they can make careless comments in social media, annoy a foreign government while on vacation, or just say something totally boneheaded, scaring off half of your audience even before opening day.  And then everybody has to apologize. 

    Voice actors can be replaced, as evidenced by the voice of Po.  Jack Black in the movies, somebody else on TV.  In the original Avatar: The Last Airbender TV series, Mako, the voice of "Uncle Iroh" died.  Well damn.  But were they going to stop production because of the death of one of the central characters?  No, of course not.  They found a substitute that was more than passable.

    Risk management.  That's why you might not want to hire "the real actors".

    CG is great whenever there's a fantastical component or the actions of the character are impossible to do for a real actor (switching Keanu Reeves to CG when he's fighting 100 agents simultaneously), but using a CG character for your average character drama makes no sense and certainly won't save much money either.

    It's not always about "saving money".  As I said above, managing risk may be even more important than being economical.

    Unless that CG character is a completely autonomous AI that is essentially an actor in its own right. I could see that maybe be a novelty at some point in the future where such an AI could advance to stardom on its own. Even then I doubt this will lead to a complete replacement of real actors. Acting is an art and like other art forms I think these will be the last strongholds of humans even when most other jobs have been replaced by machines.

    It is an art, yes indeed.  But actors are not God's gift to humanity, and they certainly are not the only possessors of these art skills.

    SIGGRAPH was in Los Angeles this year, so I went. I was quite surprised to note that what was cutting edge Virtual Reality type stuff is already considered soooo last year... "You say your company has a VR product? So what, buddy, who doesn't?"

    The absolutely most mind-blowing thing I saw at SIGGRAPH was a presentation I attended on FACS. I thought it was going to be about how to construct better facial expressions using the Action Units defined by FACS. I was totally wrong.

    Google "Mark Sagar Baby X", and I think you'll find a cut-down version of the video he presented. Basically, researchers have modeled the physiological/chemical changes in the brain in certain scenarios and discovered that they correspond to facial expressions. So they have a neural network that simulates this: when you talk to the baby and, say, it recognizes its own name, it makes a natural expression. If you say "spider", the baby acknowledges what you said but perhaps doesn't understand, and the facial expression is what you would expect, a kind of "watchu talkin' bout, Willis?" But if you say "scary spider", the baby kind of freaks out, with appropriate expressions. It was not explicitly "programmed" to do any of that. Technological advances are not linear, they're exponential, and as such we can never act fast enough to predict the truly revolutionary ones.

    I say it again, if you think this is not coming, and sooner rather than later, you are not paying attention.

    Thank you for the reference.  I'll watch.

    I have re-watched "Alita: Battle Angel" several times, mostly because the blending of CGI and live action is so good (to my eyes) that I'm amazed at the technology and skill of the movie making.

    Every time I rewatch "Alita: Battle Angel" I get a little frustrated that this technology and style was not used for the "Ghost in the Shell" remake.

    I certainly would have preferred a stylized CG Motoko Kusanagi over a real Scarlett Johansson any day.

    Considering "Alita" was a 16-year "passion project" for Cameron, who has enough clout to "call the shots" as he wished, it's no surprise it resulted in the film we got (which I ADORE!!!  I've craved this movie since Cameron first announced it in 2003).  I suspect there was not the same kind of "drive" behind "Ghost in the Shell".

    Sincerely,

    Bill

    Cameron also waited decades to do the first Avatar film.  He waited for the technology to get to a point where it would meet with his expectations for telling the story without pulling the viewer out of the story.

    By Subtropic Pixel, August 2019 in The Commons
  • Diego 8
    Wow! That sure was a short sale... and I am still waiting for my paycheck. This is why the guy characters don't sell as well. Now I will have to wait until it hits some other sale. That is one thing I like about some of the other sites: everything has a countdown clock. I knew the coupons would be gone, but didn't know the Diego 8 pro sale would be gone too... very sad.

    I'm so sad Diego only had a 4-day sale instead of the full 14 days. I was afraid he was only going to get 1 week, since Daz seems bent on only doing a really great sale for male characters if they are limited to 1 week, but they didn't even give him that much time. Sorry you missed out; I just barely got him yesterday.


    He looks a bit more Native American Latino if you add a little bit of epicanthic eye folds to his eyes (not 100%), pull the outer corners down (and I don't mean angle the eyes), and make the upper lids larger again (because the epicanthic eye fold will cover the upper lid too much). Dogz/Zev0's 200 Plus Head and Face Morphs for G8M will have all the morphs to do this, I believe. You'll have to darken the skin too, of course. It also helps to make the face a bit rounder as well. I've seen few Native Americans that have a long, thin face... mostly more heart-shaped and/or round.

    Laurie

    Some nice advice, but I find your comment about most Native Americans having round faces amusing. While I've seen round and heart-shaped faces, my first thought of Native Americans is more oblong and square face shapes. If you look up native actors, IMDB has a nice list, and the pictures show a variety of face shapes. https://www.imdb.com/list/ls036256818/?ref_=nm_bio_rls_4

    So I made a little gallery folder of my experiments with Diego so far, but I'm only going to post my favorites on here. Diego's out-of-the-box shape with his own skin textures, Cruz textures, Emil textures, eyebrows, and mustache, Jasper, PRX Ohanzi (a PA character for G8M that is inspired by a Native American actor), and Vladimir with scars. I think I did a total of 10 of these pictures and I really liked how most of them turned out.


    I was specifically referring to the native peoples of Central America, whose faces tend to be rounder and heart-shaped. Those more north do tend to be more square. :)

    Also, I don't look to actors for that sort of thing. Actors tend to have looks that are idealized and never what I'm looking for ;). Their looks aren't usually the norm, like any other actor's looks aren't the norm... lol. There are always exceptions to everything, of course ;).

    Laurie

    By AllenArt, August 2019 in The Commons
  • Medieval Lands to 3DL?

    I think tileability will be very noticeable even at 8k resolution if we don't use texture blending.

    Unfortunately, yes! And aweSurface doesn't support diffuse overlays (yet), which is why I had to put this project on hold until I find a way.

    I made a shader ground material for 3Delight - Medieval Lands - Ground 3Delight Material.zip

    The detail is focused in the central part; the farther out, the blurrier the tiling. Therefore, five ground materials were created, each with different tiling and bump strength parameters.
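    The distance-banded approach described above can be sketched in a few lines. This is only an illustration of the idea; the band edges, tile scales, and bump strengths below are made-up values, not the actual numbers used in the Medieval Lands materials.

    ```python
    # (distance threshold, tiles per unit, bump strength) per band:
    # finer tiling and stronger bump near the center, progressively
    # coarser and flatter toward the horizon to hide visible tiling.
    BANDS = [
        (10.0, 8.0, 1.00),          # center: fine tiling, full bump
        (25.0, 4.0, 0.75),
        (50.0, 2.0, 0.50),
        (100.0, 1.0, 0.25),
        (float("inf"), 0.5, 0.10),  # horizon: coarse, nearly flat
    ]

    def ground_params(distance):
        """Return (tile_scale, bump_strength) for a point at the
        given distance from the scene center."""
        for threshold, tiles, bump in BANDS:
            if distance < threshold:
                return tiles, bump
        return BANDS[-1][1], BANDS[-1][2]
    ```

    For example, `ground_params(5.0)` picks the fine central material, while a far-off point falls into the coarse, low-bump band.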

    Thank you very much Andrey, you rock

    By Ivy, August 2019 in The Commons
  • Medieval Lands to 3DL?


    Oh woow! Thank You So Much! Can't wait to have a go at it!

    By Sven Dullah, August 2019 in The Commons
  • Medieval Lands to 3DL?

    I think tileability will be very noticeable even at 8k resolution if we don't use texture blending.

    Unfortunately, yes! And aweSurface doesn't support diffuse overlays (yet), which is why I had to put this project on hold until I find a way.

    I made a shader ground material for 3Delight - Medieval Lands - Ground 3Delight Material.zip

    The detail is focused in the central part; the farther out, the blurrier the tiling. Therefore, five ground materials were created, each with different tiling and bump strength parameters.

    By Andrey Pestryakov, August 2019 in The Commons
  • Diego 8


    I like a couple of the outfits, none of the extra characters in the bundle, and the hair in the bundle is... meh. The one is... well, I don't like it. The other looks a bit like the hair that came in Sanjay's bundle. The stuff I really like isn't part of the bundle, but I'm gonna have to buy the bundle to keep the prices in my cart from getting ridiculous. I'm really getting tired of buying stuff I don't want to get stuff that I do. Daz, please consider build-your-own bundles in the future. It would really help a lot, and I'd probably buy just as much if not more if I could do that.

    Anyway, I do like the base figure and most of the morphs in P3D's morph pack. Two of those look more Central American Latino than any of the extra characters on offer today (including Diego himself), which is what I was hoping for.

    On a side note...I don't think I've EVER gotten so many coupons at one time here...right now my brain is just too tired to process them all. Have to sleep on it ;)

    Laurie

    For the first time in my own Daz history, I have decided not to buy a male pro bundle. I usually try to support the 'male' character and clothing lines, but nothing about this character appeals to me in the slightest, with the exception of a few of the add-on characters. I'm really not getting the whole Latino/Spanish vibe with him. It's almost as if he's just a skin tone in a matador outfit. Daz nailed it with Sanjay, but Diego I find kind of lacking overall. But hey, you can't please all of the people all of the time. I think the Pro Bundle is very poor, so I just picked up the hair and the cowboy outfit. I did love all the coupons, though. Totally agree with Laurie that there should be a build-your-own pro bundle feature, if not for the masses, then as a perk for those who usually receive loyalty coupons.

    By Fungible User, August 2019 in The Commons
  • Diego 8

    Glad that I didn't get any purple coupons, 'cause I wouldn't buy this one with or without them anyway. Again not really much in the pro pack that tickles my fancy...

    Like AllenArt, I don't see much Central/South American in Diego without the two morph packs, which would probably make even Michael 8 look South American ^^

    And although the "$100 worth of items for free" sounds nice, for me it's only about one or two items from each category, as more would go over the $20 limit and I'd have to drop some more money into the deal to get more. And considering that the items are at their base price, I'd rather wait for some bargain sale to get them for about as much as this pro pack special would cost me.

    By maikdecker, August 2019 in The Commons
  • Diego 8


    I also ran into not wanting the entire pro bundle, and even with the Diego coupon, the total of pro bundle and four addons was way above my budget for this month, so I've wishlisted the items I want and will pick them up eventually.

    Also, is that free items offer $20 of items free in total, or per category? It seems to be in total, since I added a second item from another category to test and they were no longer free once their combined price went over $20 (not that I have items wishlisted in more than one category anyway).

    So, the freebie coupons were nice, but I'll pass on Diego for now. There's always another sale...

    By Nath, August 2019 in The Commons
  • Diego 8

    I like a couple of the outfits, none of the extra characters in the bundle, and the hair in the bundle is... meh. The one is... well, I don't like it. The other looks a bit like the hair that came in Sanjay's bundle. The stuff I really like isn't part of the bundle, but I'm gonna have to buy the bundle to keep the prices in my cart from getting ridiculous. I'm really getting tired of buying stuff I don't want to get stuff that I do. Daz, please consider build-your-own bundles in the future. It would really help a lot, and I'd probably buy just as much if not more if I could do that.

    Anyway, I do like the base figure and most of the morphs in P3D's morph pack. Two of those look more Central American Latino than any of the extra characters on offer today (including Diego himself), which is what I was hoping for.

    On a side note...I don't think I've EVER gotten so many coupons at one time here...right now my brain is just too tired to process them all. Have to sleep on it ;)

    Laurie

    By AllenArt, August 2019 in The Commons
  • What features would you like to see appear in dazstudio 5?

    Did Daz really say that? I can't think of what programs those might be, can you? It sure was not WebAnimate, Axis Neuron Pro, ikinema Orion, nor iPi Studio.

    According to a Daz forum mod, those twist bones are the way that "everyone else" handles limb twists. I personally was never able to discover who that "everyone else" actually was.

    True, but the overwhelming advantage of the G8 that sells it is the massive list of assets that deform according to it. It'd be nice to not have to care about that... NOTHING supports G3/8s out of the box.

    A lot of that "massive list of assets that deform" is sexy female clothing with ridiculous 4K textures and premade characters embellished with Daz JCM/HD morphs that cannot effectively "leave the holodeck" of Daz Studio to be leveraged in other program environments.

    We both know that triple-A game companies like Epic or Activision Blizzard will never use canned figures and content for any of their titles.

    And an indie game developer is likely a male wanting tough-guy armors, weapons, etc. (see Overwatch, Fortnite), not endless pretty, young white girls in sparkly underwear.


    Remember "Morph3D/Morph ID"? Now rebranded (yet again) as "Tafi3D" or something. I see very little evidence that G8 has made any more headway with the indie game dev community than its predecessors.

    I think your answer to this was MDD, and mine was Alembic :)

    Indeed, Daz content is quite useful for single operators like you and myself with linear, unidirectional pipelines, who can likely find specific library content as a matter of rote memorization.

    However, the Daz Studio content management system makes using Daz Studio in large-team animated film productions a non-starter for multiple operators needing to access the same assets residing on a central server, as is done in every major CG studio. Just look at the Squiffy/Mason handshake workarounds required to run two different full-release versions of Daz Studio on one workstation and have the content/plugins/categories functioning properly.

    By wolf359, August 2019 in The Commons
  • RTX Benchmark thread...show me the power
    No, take notice they did not do 600 iterations. The CPU added 174 iterations during this run, which significantly helped.

    I'm not going to lie.  This thread is so confusing.  Just render the scene and tell us how long it took and what the setup was.  Sheesh.

    Perhaps they did not know it was on? I don't know. It got my attention because that time is like my 2x 1080 Tis, so I knew something had to be fishy, as there is no way 2x 1070 Ti should match that. Then I saw the line with the CPU. This is actually pretty interesting, because most of the time the CPU doesn't add much to Iray. But in this case, whatever CPU it is did quite a bit of work and made a big impact on the final time.

    Most people would even go so far as to say buying a top-end CPU is a waste if you are doing Iray with a GPU. This might suggest otherwise.

    So what kind of CPU are you using there, EBF2003? Is this a new AMD Ryzen?

    Can't quote exact gospel/verse on it at the moment, but if you go and read Iray's official documentation on how it handles load balancing, it has a mechanism where, if a single CUDA device out of multiple active CUDA devices during a Photoreal render takes significantly longer than the others to transmit its assigned portion of converged pixels back for inclusion in the central framebuffer, Iray's scheduler assumes that something is wrong with that device and automatically reassigns its current workload to the other CUDA devices in the system. What this effectively means is that once you get beyond a certain rendering performance difference between your CPU and GPU(s), rendering WITH your CPU results in WORSE overall rendering performance than without, since (unbeknownst to you, as there is never any indication of any of this in the log file) your fast GPUs are constantly being tasked with double-processing data that your CPU is already processing. Hence why EBF2003 gets BETTER performance with his Xeon + 1070s, whereas I, with my 8700K + Titan RTX, get WORSE performance with the CPU also enabled.

    Not trying to get too far off here, but the results of the Xeon are pretty interesting.

    Having a slower device dropped makes sense, but then again it doesn't make sense with the different possible GPU combinations out there. There are people who have used the bottom of the barrel GPUs combined with high end GPUs that offered even greater gaps in performance. SY had a GTX 740 with a 980ti and they all played nice together. The 740 is garbage by any standard and there are CPUs that are faster than it, especially today. That's just one example.

    Here's the pull quote (this comes from the most recent version of Nvidia's high-level Iray design document The Iray Light Transport Simulation and Rendering System):

    To eliminate these issues in batch scheduling, all devices render asynchronously (see Fig. 13). Each device works on a local copy of the full framebuffer. The low discrepancy sequences are partitioned in iterations of one sample per pixel. Sets of iterations are assigned dynamically to the devices. This way a high per device workload is maintained, scaling to many devices. To prevent congestion, the device framebuffer content is asynchronously merged into the host framebuffer in a best effort fashion. If a device finishes an iteration set and another device is busy merging, the device will postpone merging and continue rendering its next assigned iteration set. This mechanism could lead to starvation, possibly blocking some devices from ever merging, and increase the potential of loss of progress in case of device failure. Iray resolves this issue by also skipping merging if a device has less than half as many unmerged iterations as any other device on that host, allowing the other device to merge first. (p. 23-24)
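    The merge-skip rule in that quote can be sketched as a small function. This is only an illustration of the heuristic as described (a device defers merging its local framebuffer when it has less than half as many unmerged iterations as some other device); the dict of per-device counts is a made-up stand-in for Iray's internal scheduler state, not its actual API.

    ```python
    def should_merge(device, unmerged):
        """unmerged: dict mapping device name -> count of unmerged
        iterations. Returns False if `device` should skip merging this
        round, letting a more backed-up device merge first."""
        mine = unmerged[device]
        for other, count in unmerged.items():
            # Defer if some other device has more than twice our backlog.
            if other != device and mine < count / 2:
                return False
        return True
    ```

    With counts `{"gpu0": 2, "gpu1": 8}`, gpu0 defers (2 is less than half of 8) while gpu1 merges, which is how the scheduler avoids starving the device with the largest backlog of unmerged work.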

    Assuming you mean Sickleyield's OG benchmarking scores, those were done with two 740s and two 980 Tis in the same system, meaning that the above-described anomalous behavior wouldn't be in effect, since this is something that only happens when a single CUDA device in a system is severely outgunned by all other active render devices in that system. As was indeed found to be the case even for Sickleyield back then when doing those same tests with the CPU enabled in that same system (see here). My guess is that any other card pairings which have cropped up over the years to seemingly belie this behavior also have similar subtleties to them.

    I had a 1080ti paired with a 970, and one time a 670. I could test that 670 out for kicks to see if it still works with a 1080ti.

    Yeah, if it isn't too much trouble, I would love to hear what results you get. Although some quick Googling tells me that a 1080 Ti is generally around 3-4x more powerful than a GTX 670, which may not be a big enough difference for this effect to manifest. To put things into perspective, the rendering performance difference between the single slowest CUDA-capable device and any others in my system that does get affected by this (i7-8700K paired with a single Titan RTX) is around 16x, i.e. about the same as what you should see with those two cards SQUARED.
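    The comparison being made here (iterations completed over elapsed time, then flagging a device that is outgunned past some ratio) can be sketched quickly. The 16x threshold is just the anecdotal figure from this thread, not anything Iray documents as an exact cutoff, and the device names and result format are made up for illustration.

    ```python
    def flag_outgunned(results, ratio=16.0):
        """results: dict of device name -> (iterations, seconds).
        Returns the devices whose iteration rate is more than `ratio`
        times slower than the fastest device in the system."""
        rates = {dev: it / sec for dev, (it, sec) in results.items()}
        fastest = max(rates.values())
        return [dev for dev, r in rates.items() if fastest / r > ratio]
    ```

    For example, a Titan RTX at 17 iterations/s next to a CPU at 1 iteration/s would flag the CPU, while a 1080 Ti next to a GTX 670 at roughly a 3-4x gap would not.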

    By RayDAnt, August 2019 in The Commons
  • RTX Benchmark thread...show me the power

    I am surprised how well the 22-core Xeon hangs with the 1070 Tis. It's not really that much slower than a single 1070 Ti, looking at the iteration counts. 1070 Tis are pretty solid cards for Iray; they are right there with 1080s in performance. It makes me VERY curious how well the new Ryzens handle Iray, since they are clocked so much higher than any Xeon while still packing a good number of cores, cores that have much better IPC than before.

    I would love to see a Ryzen 12-core 3900X paired with a 2080 Ti, or any GPU for that matter, and see how they do.

    By outrider42, August 2019 in The Commons