Is Animation Better in Poser or Daz Studio?

edited November 2017 in Daz Studio Discussion

I use Daz Studio for animation, but I have a question: is it better than Poser for making an original series for YouTube? Is that possible?


Comments

  • frank0314 Posts: 13,380

    Title edited to make it clearer what you wanted to ask.

  • wolf359 Posts: 3,764

    I use Daz Studio for animation, but I have a question: is it better than Poser for making an original series for YouTube? Is that possible?

    Hi, if you have invested in the full aniMate2 and the GraphMate & KeyMate plugins for Daz Studio, you are head and shoulders above Poser's character animation tools, as those have not been effectively updated since the end of the last century... no joke.

  • Ken OBanion Posts: 1,447

    I have a number of animation projects in the pipeline, and I have been using both Poser and DAZ Studio to realize them.  (I've been threatening to add iClone to my quiver, but I'm still on the asymptotic end of that learning curve!)

    I guess the answer to your question would be... it depends on what you're trying to do.  I have found some things easy to accomplish in Poser that I have yet to figure out how to do in DAZ Studio; specifically, I can animate texture transitions in Poser (think Bruce Banner turning into the Incredible Hulk) that I cannot for the life of me figure out how to duplicate in Studio.

    By the same token, motion animation is easier for me to do in Studio, because I use AniMate, with its huge library of AniBlocks.  Poser's Walk and Talk Designers are helpful, but are limited in their application.  (The Walk Designer, for example, is strictly two-dimensional; so if you want your figure to ascend stairs, or walk up an incline, then you're SOL!)  I also use AniMate's GraphMate and KeyMate add-ons.  Poser has similar tools built-in, but they tend to be a bit "clunky" for me.

    So, on balance, I generally lean more toward DAZ Studio for action-oriented animation sequences, and Poser for those "human-to-demon" type of transitions, simply because I know how to animate the material changes.  And I haven't really found any glaring differences in image quality between 3Delight and FireFly, so that hasn't been all that much of an issue for me.  (Others, I'm sure, will disagree with me on that point!)

  • takezo_3001 Posts: 1,924

    Daz Studio has Puppeteer, which is an amazing tool for quick free-form animation; it's great for quick looping animation "blocks" which can be used in conjunction with aniMate. Without GraphMate/KeyMate, though, it's pretty rough, as Daz Studio's base-level timeline is weak in comparison... But then again, dial-posing is incredibly fluid in Daz Studio, as most morphs and translations/rotations don't have their limits set at 100000.00%!

  • The part-time IK system is pretty much the same thing in Poser and DAZ Studio, as it was back when they invented the wheel, but as already said, DAZ Studio has KeyMate, GraphMate, and aniBlocks to improve it a bit. Poser can animate some parameters that can't be animated in D|S, but if I remember correctly there are plugins to handle that as well, so I would think you would be better off with D|S.

     

  • Auroratrek Posts: 201
    edited November 2017

    I've been animating in Poser (using Daz figures) for years and have worked out a decent workflow, tho it includes the use of motion capture, and exporting and rendering in Cinema 4D--I find both Daz and Poser's renderers very slow. I've generally found animating easier in Poser for the kind of work I do, partly because I find that my mocap seems to map more cleanly using Poser than Daz. Sample: https://www.youtube.com/watch?v=bkEm0btAgBI 

    That said, I've been experimenting with animation in Daz by using another app called "IKinema" for retargeting bvh to the daz skeleton first, then editing the animation using Graphmate and Keymate, which make it pretty similar to Poser in terms of editing mocap, tho there are a few things that annoy me right off the bat, like you can't seem to select a parameter in Graphmate like "Forearm/Bend" without selecting it in Keymate first, and also that when you move keyframes in Graphmate, the selection wanders all over the place and doesn't "lock" to its time, so you have to be very careful moving them. That aside, I was able to get decent results with one of my motion capture files and using Daz with Gen1: https://www.youtube.com/watch?v=VLQTJZFWTbY

    I would like access to the Genesis characters for animation, but I guess the next test would be lip sync. I have been using Mimic Pro, but it's an ancient program, and I'm not sure if it works well with Genesis characters. I've read that it might work with Gen1 and Gen2, but not sure if it works after that, or what the options are.

  • pdr0 Posts: 204

    I've been  experimenting with animation in Daz by using another app called "IKinema" for retargeting bvh to the daz skeleton first, then editing the animation using Graphmate and Keymate, which make it pretty similar to Poser in terms of editing mocap

    Can I ask why you don't edit/clean the mocap in IKinema?

     

     like you can't seem to select a parameter in Graphmate like "Forearm/Bend" without selecting it in Keymate first,

     

    The other place is the Parameters tab, if you have "listen to parameters from parameters tab" enabled.

    and also that when you move keyframes in Graphmate, the selection wanders all over the place and doesn't "lock" to its time, so you have to be very careful moving them.

    Can you clarify what you mean by this ?

     

    I would like access to the Genesis characters for animation, but I guess the next test would be lip sync. I have been using Mimic Pro, but it's an ancient program, and I'm not sure if it works well with Genesis characters. I've read that it might work with Gen1 and Gen2, but not sure if it works after that, or what the options are.

    Did you mean "live" or offline? DS still has the built-in lip sync (where you feed it an audio file, +/- a text file), but only in the 32-bit version. But it works with G1/2/3 for sure.

  • I do initial cleanup in IKinema, but the fine tuning, the hand gestures, expressions and lip syncing all have to be done in Poser/Daz.

    LOL--okay, got it--it will work if you click on the parameter in the Parameter tab, but not the same parameter in the Posing tab. Interesting.

    In Poser, the keyframes in the Graph editor "snap" to the timeline, where in Daz they are "free", but I just found that holding down "Command" (on a Mac) makes them snap, so, never mind.

    Daz Lip Sync, in my experience, is really not that accurate (just tried it on one of my files, and it wasn't even close), and trying to fix the phonemes even in the key and graph editor is a lot of work. I also need to use recorded audio, not live, and need to be able to edit and adjust it after the initial ingest of the sound file. Truthfully, even Mimic Pro does a pretty lousy job, initially, but its editing tools--tho still a fair amount of work--allow for a reasonable workflow, since the phonemes in Mimic Pro are "modular", not unlike AniBlocks, so it simplifies the editing.

  • pdr0 Posts: 204

     

     

    Daz Lip Sync, in my experience, is really not that accurate (just tried it on one of my files, and it wasn't even close), and trying to fix the phonemes even in the key and graph editor is a lot of work. I also need to use recorded audio, not live, and need to be able to edit and adjust it after the initial ingest of the sound file. Truthfully, even Mimic Pro does a pretty lousy job, initially, but its editing tools--tho still a fair amount of work--allow for a reasonable workflow, since the phonemes in Mimic Pro are "modular", not unlike AniBlocks, so it simplifies the editing.

     

    Yes, it's not very accurate. Lots of manual adjustments are required.

    I also use MotionBuilder, and its lip sync isn't very good either.

    There is another freebie, Papagayo, plus a script for DS, but that takes lots of manual work too.

    https://www.daz3d.com/forums/discussion/179731/script-for-load-files-moho-for-lip-syncing-in-daz-studio
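
    For reference, here is a rough, purely illustrative sketch (in Python, not part of that script) of the kind of Papagayo/Moho "switch" export such lip-sync scripts typically read, assuming the common layout of a "MohoSwitch1" header followed by one "frame phoneme" pair per line; the function name and file name are made up for the example:

```python
# Illustrative only: read a Papagayo/Moho switch export into (frame, phoneme) pairs.
# The file layout assumed here is a header line, then "frame phoneme" per line.
def read_moho_switch(path):
    keys = []
    with open(path) as f:
        f.readline()                 # header line, usually "MohoSwitch1"
        for line in f:
            if not line.strip():
                continue             # skip blank lines
            frame, phoneme = line.split()
            keys.append((int(frame), phoneme))
    return keys

# e.g. read_moho_switch("dialogue.dat") -> [(1, "rest"), (10, "AI"), (15, "MBP"), ...]
```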

    I haven't seen any lip sync software that is fairly accurate (or if they are, it's only on "cherry-picked" examples, not for general use). They all require work if you want more than just very rough mouth opening and closing animations.

  • Okay, thanks! Lip sync is critical for professional or even semiprofessional animation, so I think the fact that Daz doesn't offer or support decent lip sync is one reason there are so few Daz animators--it's just too damn hard to do decent work!

  • pdr0 Posts: 204
    edited November 2017

    Okay, thanks! Lip sync is critical for professional or even semiprofessional animation, so I think the fact that Daz doesn't offer or support decent lip sync is one reason there are so few Daz animators--it's just too damn hard to do decent work!

    It's not just DS. Even in "professional" applications, realistic "automatic" lip sync just isn't a reality yet on any platform for < $2K. Or at least I haven't seen any that work well under a variety of general situations. If you find one, let me know.

    Even expensive markered mocap setups with many, many markers require lots of jitter cleanup, but the morph targets tend to be more accurate. But convincing, realistic voice animation is more than just mouth and lips moving - there are tongue, throat, and chest movements during vocalization. (But right now I'd even be happy with semi-accurate lip sync.)

  • wolf359 Posts: 3,764
    pdr0 said:

    Okay, thanks! Lip sync is critical for professional or even semiprofessional animation, so I think the fact that Daz doesn't offer or support decent lip sync is one reason there are so few Daz animators--it's just too damn hard to do decent work!

    It's not just DS. Even in "professional" applications, realistic "automatic" lip sync just isn't a reality yet on any platform for < $2K. Or at least I haven't seen any that work well under a variety of general situations. If you find one, let me know.

    Even expensive markered mocap setups with many, many markers require lots of jitter cleanup, but the morph targets tend to be more accurate. But convincing, realistic voice animation is more than just mouth and lips moving - there are tongue, throat, and chest movements during vocalization. (But right now I'd even be happy with semi-accurate lip sync.)

    Quite true, sir. Unless you are taking years to handcraft your lip sync, like Pixar does, you will not get 100 percent accuracy. If you can get the Blu-ray or stream "Kingsglaive: Final Fantasy XV", note how "off" most of the mouth shapes are during speech. They look like they tried very hard to smooth out the quick open-shut snapping that often comes with auto solutions, however they seem to have over-smoothed it, and it's distracting, particularly on the blonde princess character.

    I still use Mimic Pro on Genesis 2 figures exported as Poser CR2s; not a perfect result, but good enough for my personal works.


  • Auroratrek Posts: 201
    edited November 2017
    wolf359 said:
    pdr0 said:
     

    I still use Mimic Pro on Genesis 2 figures exported as Poser CR2s; not a perfect result, but good enough for my personal works.

     

    I've been struggling with this all day--I made a Gen2 CR2, and I'm using the DMC file from Daz, but when I try these in Mimic Pro, it's like the jaw is stuck--the lips move, but the mouth won't open, so "AA", for example, barely registers. Any suggestions?

  • @Pdr0

     

    pdr0 said:

    Even expensive markered mocap setups with many many markers requires lots of jitter cleanup, but the morph targets tend to be more accurate. But convincing, realistic voice animation is more than just mouth and lips moving - there are tongue, throat, chest movements during vocalization. (But right now I'd even be happy with semi accurate lip sync)

    Which is also why some of the professional solutions, such as Dynamixyz and Faceware, go the route of retargeting using morphs/blendshapes instead of markers (although if your pipeline uses them, Dynamixyz can use them too), but then you are limited to what those blendshapes can do.  I read somewhere that the facial rigs for Star Citizen have 300+ morphs plus other controllers to achieve the realism they have on their characters.

    I've found in my tests with Performer, even with the amount of morphs that genesis, gen2, gen3 and gen8 characters have, I still don't get the nuance that I like, so I have to use the bones too to achieve subtle animation such as lip thinning, stuff like that.

    I remember when I was using Mimic pro, that you can add a lot of secondary animation such as throat and chest movements in your dmc file.. I even added color and texture changes (only in Poser though) for characters such as robots

  • pdr0 Posts: 204
     

    Which is also why some of the professional solutions, such as Dynamixyz and Faceware, go the route of retargeting using morphs/blendshapes instead of markers (although if your pipeline uses them, Dynamixyz can use them too), but then you are limited to what those blendshapes can do. 


    Yes, but it's the same basic accuracy problem whether you go markered or markerless.  Markered setups can be used to drive target blendshapes too (not just the standard or custom bone face rigs).

     

    I read somewhere that the facial rigs for Star Citizen have 300+ morphs plus other controllers to achieve the realism they have on their characters.


    Yes, you can add dozens of extra custom blendshapes for more targets (for example, if the shape targets are too generic), with more controllers and complex rigs.  But this can lead to more problems. Too many targets and you get "cross-talk", and you still have to do cleanup.

     

    I've found in my tests with Performer, even with the amount of morphs that genesis, gen2, gen3 and gen8 characters have, I still don't get the nuance that I like, so I have to use the bones too to achieve subtle animation such as lip thinning, stuff like that.

    Yes, that's what I'm referring to. You need to manually tweak subtle things to make it look like actual speech, not just lips flapping about randomly.

     

    I remember when I was using Mimic pro, that you can add a lot of secondary animation such as throat and chest movements in your dmc file.. I even added color and texture changes (only in Poser though) for characters such as robots

     

    It has to be timed properly to "sell" the illusion. Mocap solutions can capture some of that to some extent, but the pure lip-sync ones do not.

  • pdr0 Posts: 204

    Do you guys have any tips/tricks to increase the accuracy of the various offline lip sync solutions that you find help?

    I've tried a bunch of things, different voices male/female, synthetic text to speech vs. real voices, editing in A/V editor to enhance segments, cadence variations, phonetic spellings and variations for the text, etc... Some of them slightly increase the accuracy but it's still roughly timed mouth flapping. The accuracy still leaves a lot to be desired. You might luck out on a sentence here or there, but nothing that works ok everywhere. There is always a bunch of manual fixing. Sometimes it's even better to do everything manually

  • @pdr0

    Not exactly; it is only manual "tweaking" in the sense that you are training your retargeter (at least in the case of Performer) when your original result is not realistic enough (I've had to stare for hours at people's faces to see how they actually move). I'm not going back and manually changing animation curves or adding animation after the fact, as I had to with other methods.

    With Performer you are only using the virtual "markers" to define the outline of the features of the face; the actual retargeting is done by example. The process is that you match a specific series of "expressions" to the same expressions on your character in your 3d app (the idea is to have enough to cover the range of movement of the actor), using whatever is there to achieve that expression. Generally, the more expressions you use, the better the retargeting will be, regardless of what you are using or the number of targets.

    When done right, the process afterwards is automatic for other captures of the same actor, and there is none of the crosstalk you get with audio-based animation.  It allows for realistic real-time facial animation such as this (although here she is not talking in realtime) https://www.telyuka.com/  and my latest here https://vimeo.com/243938754  But no system is perfect; there will always be some amount of cleaning, adding animation, or fixing.. you just try to get as close as you can, with the least amount of effort, then use your skills to add to that (that is why we exist as animators, no?)

    Going back to audio lip sync, though... The best I've seen is done with programs such as Mobu, using constraints to dampen the animation curves and blend the phonemes... but still, a lot of fixing.

     

  • pdr0 Posts: 204

    Thanks for sharing your experiences, Bryan. When you get some free time, can I ask you to do a mini review of Dynamixyz Performer? Including pros/cons, pricing, and what you would change or suggest to improve?

    Your test clip looks good. The expressions, facial movement, eyes, etc... everything helps to sell it. Was that 100% automatic, one take, or was there some manual fixing, and if so, what sort/what amount?

    Without a doubt, that would reduce the amount of manual work.

  • pdr0 said:

    Do you guys have any tips /tricks to increase the accuracy of the various offline  lip sync solutions that you find helps ?

    I've tried a bunch of things, different voices male/female, synthetic text to speech vs. real voices, editing in A/V editor to enhance segments, cadence variations, phonetic spellings and variations for the text, etc... Some of them slightly increase the accuracy but it's still roughly timed mouth flapping. The accuracy still leaves a lot to be desired. You might luck out on a sentence here or there, but nothing that works ok everywhere. There is always a bunch of manual fixing. Sometimes it's even better to do everything manually

    I can really only speak for Mimic Pro (so to speak), but after using it for about a decade, I've discovered three main things, applied here to this example: https://www.youtube.com/watch?v=kHLAXD6Tprk

    1. Don't bother adding the text file in Mimic--this generally makes things worse, since Mimic tries to shoehorn the phonemes into places where there are no sounds, or jams them up into places where they don't belong, and also makes more phonemes than you need. Cleanup takes way longer than just using the audio.

    2. When cleaning up, remove as many phonemes as you can get away with--this reduces the "flappy" look from too many phonemes/mouth movements. You don't need a phoneme for every sound, particularly a lot of consonants like c, k, g, j etc. when they are in the middle of a word--a lot of times the transition between two phonemes will allow you to drop the phoneme in the middle, since it looks like it happens "on the way".

    3. The "correct" phoneme is not always the best one. I tend to replace  IY and IH with AA and EH instead, since IY and IH have a "smiley" EE sound that looks weird if what the character is saying isn't happy. You can test this just by talking out loud and recognizing how your own mouth moves.

    Just overall, once you accept the fact that Mimic is not going to do a great job with "automatic" lip sync, going in and editing by hand actually isn't so bad, and you get better results. There are a lot of other little things I do particular to Mimic, but these three things would probably hold true for any phoneme editing. My 2 cents!
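
    As a purely illustrative aside (not a Mimic Pro script; Mimic's editing happens in its own GUI), tips 2 and 3 can be sketched as a simple pass over a list of (frame, phoneme) keys. The helper name and phoneme sets below are assumptions made up for the example:

```python
# Hypothetical sketch: thin out droppable mid-word consonants (tip 2) and swap the
# "smiley" IY/IH phonemes for AA/EH (tip 3) in a (frame, phoneme) list.
DROPPABLE_MIDWORD = {"K", "G", "J"}          # consonants that often read fine when omitted
SUBSTITUTIONS = {"IY": "AA", "IH": "EH"}     # avoid the unwanted "EE" smile

def clean_phonemes(track):
    """track: list of (frame, phoneme) tuples in frame order."""
    cleaned = []
    for i, (frame, phoneme) in enumerate(track):
        phoneme = SUBSTITUTIONS.get(phoneme, phoneme)
        # drop a droppable consonant if it sits between two other phonemes
        if phoneme in DROPPABLE_MIDWORD and 0 < i < len(track) - 1:
            continue
        cleaned.append((frame, phoneme))
    return cleaned

print(clean_phonemes([(1, "AA"), (4, "K"), (7, "IY"), (12, "MBP")]))
# -> [(1, 'AA'), (7, 'AA'), (12, 'MBP')]
```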

  • wolf359 Posts: 3,764
    wolf359 said:
    pdr0 said:
     

    I still use Mimic Pro on Genesis 2 figures exported as Poser CR2s; not a perfect result, but good enough for my personal works.

     

    I've been struggling with this all day--I made a Gen2 CR2, and I'm using the DMC file from Daz, but when I try these in Mimic Pro, it's like the jaw is stuck--the lips move, but the mouth won't open, so "AA", for example, barely registers. Any suggestions?

    I get this from time to time. Are you sure you are opening the correct .obj file that gets requested by the session manager?

  • Thank you Pdr0.. Sure thing, I'll try to keep it short  :)

    First... I have a biased opinion: I am a distributor for Dynamixyz. But I have tested and used many facial solutions over the years (Maskarad from Di-O-Matic, Mimic, Mobu, etc.), and I've settled on it as my facial mocap system of choice due to the flexibility and control it gives you over the final product.

    It is designed as a production tool, targeted at Autodesk products and game engines, and used in large-scale animation projects, so its strength is when there are large amounts of data to be tracked.  It is expensive, around 16k for the premium version of the product (capture, track, retarget, and realtime into Autodesk products, Unity, and Unreal); however, there is a very large discount for indie customers (there are some requirements to meet for this special pricing), which is around 70% (still expensive relative to other solutions, around 3.5 - 4.8k depending on version). It has a batch mode, so you can load dozens of videos to track and retarget and leave it overnight to do its job.

    It can be used with an HMC or any camera (the software is agnostic to the hardware you are using, whether it be anyone else's HMC, a GoPro, a webcam, etc.), but it does work better with an HMC, especially in conjunction with a body mocap system, and you can capture at the same time so that the data is synchronized.

    As it is what is referred to as a "trained" retargeter (meaning you have to tell it how to interpret what it has tracked in order to solve to animation), one of the few cons I would say it has (and it is open to interpretation) is that it does take time to set this up (it can take me several hours to get to a point where I like the resulting animation, depending on the type of character used). However, once that is done, all the subsequent tracking and retargeting of captured video is automatic and takes a few minutes, as long as the video was captured in the same session and with the same actor. (You see now where it comes in handy in productions.)

    The other con is that it is targeted to Autodesk products (at this time).  I use motionbuilder to retarget to, then bring this into Daz Studio or Poser if needed for final render when I want a higher resolution mesh (I'm now using Marmoset toolbag though, as most of what I do doesn't require that), so that is an added cost, but if you are already using Maya or 3ds max, it is a good solution.

    There are two basic ways to approach projects in Performer, depending on whether you are doing a one-off or you are doing a retargeting setup for multiple videos.  The last example that I have on my Vimeo page is the first type.  In this case, you pick the frames with facial expressions that are unique to that particular video, arrange the virtual markers to delineate the facial features on those frames, then track the video to set it up for retargeting to your character.  Depending on the contents of that video, you can get away with anywhere from 12-24 expressions.  This tracking setup can take a while (normally a few hours to move all the points around to get everything to track right).

    Then you take those same frames and while connected to your 3d app, you mimic the same expressions on your 3d character, using whatever controller you are using to keyframe animation, frame by frame, save, then solve to animation on your rig.  90% of the time, if you did this correctly, that is all you need to do.  Now, nobody is perfect, and since it depends on your interpretation of that expression on your 3d character, many times you have to go back to the frames, change the strength of the blendshape or move the bones you are using, resolve and so on.  Normally I work by sections, concentrating first on the overall animation, then things like the lips, then adding and adjusting for other areas.  The retargeting setup can take a long time, depending on the character, how familiar you are with it, etc.

    That video took me several weeks (not full time, of course) because I really wanted to animate the fine nuances in the performance (especially in the lips).  What made this one more difficult is that the video was provided at 24 frames (the capture software with Performer can go from 23.976 to 120 fps; the more the better), and that it was captured on a static camera.

    There is a mechanism in Performer to force your 3d app to hit a certain keyframe (called an anchor point) to correct blendshapes that don't quite hit the strength or shape you want. Because the software connects in such a way that scrubbing your timeline in Performer scrubs the timeline in your 3d app, you can then go to that spot on your timeline, adjust your expression, and key it so that it forces it to that shape; then, when you resolve, it uses that keyframe to reinterpolate all the animation curves.  This would be the only "manual" correction, but it is done in Performer, not in your 3d app directly, and you can also use this to add animation that is not tracked by the expressions, such as the tongue.

    This video here of some bloopers is an example of what you can get on a first basic pass: https://vimeo.com/196451753

    The second approach is when you know there will be multiple videos to track and solve.  Then you capture a range of motion of your actor (a base of 52 expressions, but you can have more or less, whatever you want; the idea being that you encompass a full range of expressions particular to that actor) as the first of the videos in your session, go through the process above, and then, when you get it like you want it, you use that to track and solve all the remaining videos (the bloopers video was done like this: each scene is a different video, which used the range of motion). You can always go back and add expression frames in each video if needed (for example, if the actor did a funky face that none of the expressions from the range of motion could track correctly, like puffing cheeks or doing a crosseye or something like that).

    Other tracking solutions use a more generic approach, which is quicker but doesn't give you the capability to go back and refine, adjust, and change when needed, which results in lower-quality animation that you then have to adjust and change in your 3d app.   Here's Dynamixyz's Vimeo page if you want to see more examples, reels, and training: https://vimeo.com/user4206082 and I can arrange for a 30-day trial if you want to play with it.

     

  • wolf359 Posts: 3,764

    Hi Tim,

    I just did a quick test of the G2 male. I loaded the Genesis 2 DMC in the session manager and a CR2 I named "mimic3 voice actor" that resides in my old Poser Pro 2014 runtime for Mimic use only.

    When I hit OK, Mimic tries very hard for a minute to locate the ridiculously named "genesis 2 _male3850-7-274576-5896_389664.obj" file referenced in the exported Poser CR2 before giving up and asking me to locate it.

    I have no idea where this is located... however, it is of no concern, because I have already exported an .obj file, with the Poser scale preset, of the G2 male and female to a location familiar to me: my Poser Pro 2014 character library.

    Mimic Pro does not care about the name, only the vertex count apparently, so make sure you use a .obj that has not somehow had a higher Daz SubD level baked in on .obj export.

    When I select my easily named "Genesis 2 male voice actor.obj", I get requests for missing textures that I simply ignore.

    The default G2 male appears in the session manager with a base lip sync ready for phoneme editing, as pictured here.

    This is how it has been working for me, but as usual, YMMV.

    [Attached image: mimicpro the best.jpg]
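
    (A purely illustrative aside, not part of Mimic Pro or Daz Studio: since Mimic apparently matches on vertex count, one quick way to sanity-check that a renamed .obj export matches the base-resolution geometry is to count the vertex lines in each OBJ. The function and file names below are hypothetical.)

```python
# Illustrative sketch: count vertex ("v ") lines in a Wavefront OBJ file, so two
# exports can be compared before pointing Mimic at the renamed one.
def obj_vertex_count(path):
    with open(path) as f:
        return sum(1 for line in f if line.startswith("v "))

# Hypothetical comparison of a base-resolution export vs. the renamed "voice actor" OBJ:
# obj_vertex_count("G2M_base_resolution.obj") == obj_vertex_count("Genesis 2 male voice actor.obj")
```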
  • pdr0 Posts: 204

    @Auroratrek - thanks for the input. Looking forward to more episodes. What is the plan, given the CBS restrictions? Or do you have something entirely different planned?

    @Bryan - thanks for taking the time to write the review. I looked at the website before, but I wanted input from some actual user experiences. It's quite expensive unless it's used for dedicated studio/production work. There are lower-cost solutions, but you can see issues even with the handpicked demos. I guess you get what you pay for. I'm looking for something less expensive for a personal project.

     

     

  • @pdr0

    I understand completely!  Have you looked at Brekel Pro Face?  It is something similar, using the Kinect 2 sensor, and it does have export to Daz Studio: http://brekel.com/brekel-pro-face-2/  If importing into Mobu, there are some filters that can be used to correct the slight jittering it has (plus the built-in ones).. and at 139...

  • pdr0 Posts: 204

    Yes, I've used Brekel before... not that good

     

  • wolf359 said:

    Hi Tim,

    I just did a quick test of the G2 male. I loaded the Genesis 2 DMC in the session manager and a CR2 I named "mimic3 voice actor" that resides in my old Poser Pro 2014 runtime for Mimic use only.

    When I hit OK, Mimic tries very hard for a minute to locate the ridiculously named "genesis 2 _male3850-7-274576-5896_389664.obj" file referenced in the exported Poser CR2 before giving up and asking me to locate it.

    I have no idea where this is located... however, it is of no concern, because I have already exported an .obj file, with the Poser scale preset, of the G2 male and female to a location familiar to me: my Poser Pro 2014 character library.

    Mimic Pro does not care about the name, only the vertex count apparently, so make sure you use a .obj that has not somehow had a higher Daz SubD level baked in on .obj export.

    When I select my easily named "Genesis 2 male voice actor.obj", I get requests for missing textures that I simply ignore.

    The default G2 male appears in the session manager with a base lip sync ready for phoneme editing, as pictured here.

    This is how it has been working for me, but as usual, YMMV.

    Hey Wolf, thanks for the reply, but I can't seem to get this to work. I tried making new cr2 files, for both Male and Female gen2, to no avail. I'm stumped. I did the same thing that I did for Gen1, and that works fine. What version of DS are you using? I'm using 4.10. Could that matter?

  • Or, should/can I export a new obj file, like you are using? 

  • wolf359wolf359 Posts: 3,764

    "Hey Wolf, thanks for the reply, but I can't seem to get this to 
    I tried making new cr2 files, for both Male and Female gen2, 
    to no avail. I'm stumped. I did the same thing that I did for 
    Gen1, and that works fine. What version of DS are you using? 
    I'm using 4.10. Could that matter?"

    I am afraid I am out of ideas then, mate. :(

    I remain on DS 4.8 / Windows 7, as I have no use for DS Iray, so I can't offer any speculation about possible 4.10.x incompatibilities. If Mimic Pro is not requesting the location of the .obj file associated with your exported CR2s, that should mean it is finding them itself, so yeah, I'm stumped.

  • wolf359 said:

    "Hey Wolf, thanks for the reply, but I can't seem to get this to 
    I tried making new cr2 files, for both Male and Female gen2, 
    to no avail. I'm stumped. I did the same thing that I did for 
    Gen1, and that works fine. What version of DS are you using? 
    I'm using 4.10. Could that matter?"

    I am afraid I am out of Ideas then Mate.frown

    I remain on DS 4.8 windows 7  as I have no use  DS Iray
    so I cant offer any speculation about possible 4.10.x  incompatabilities
    if mimic pro is not requesting the location of the  .obj file associated with your
    exported cr2s that should mean it is finding them itself   so yeah , im stumped.

    Okay, thanks! I'll see what I can do. As usual I'll fight, scratch, claw my way to a solution. I guess I have to understand that they're really busy upgrading the next extremely important ni@@le morph and can't spare a few minutes to improve the basic functionality of their software and give us animators a break for a change.

  • Padone Posts: 3,481
    edited November 2017

    Animation is better in Blender... ;)

    No, seriously. DS and Poser are not animation tools. They don't even have a decent IK system to work with, and that's the very basics. I believe part-time IK for FK posing dates back to 1998 or something.

    That said, if you only plan to use pre-made animations and mix them together and/or slightly edit them, then DS can be a good tool to work with. Puppeteer is excellent too, very effective for quick animation "sketching".
