Mocap Question
in The Commons
mCasual has a Kinect script that requires the Xbox Kinect sensor and the PC adapter. That sensor has been discontinued, though you can still buy it in certain places. My question is, if I wanted to get into mocap in Daz, is this the only route, or is there another sensor you can use? How would you go about it?

Comments
Microsoft Kinect for Azure
There are other sensors as good as Kinect I (the Intel-branded Creative Labs stuff comes immediately to mind) but Microsoft was way ahead of the game when it came to head tracking and skeletal estimation.
Kinect II is in an entirely different league. It's a professional TOF (time-of-flight) depth camera with 4x the resolution of the best Infineon/PMD has to offer. The only thing out there that touches Kinect II is Kinect III. If it hadn't been for Microsoft deciding to use K2 as a vehicle to force Win Seven users to upgrade to Win hate eight, it would have really gone somewhere. Sadly, it appears they're trying to do the same thing with K3 and the Eye-sore Clown (Azure Cloud), and thus dies the entire 3D imaging field.
I don't know what your requirements are, but I'm not sure there's a depth-sensor solution out there that will do what you want. None of them are suitable for a hero character. I have iPiSoft Enterprise, but I end up only using it for background characters and for when I need that actor to do something that might damage the capture hardware.
My option of first resort is the Perception Neuron Pro. I had a PN v2, but it was a ridiculous hassle to put on and the sensors were incredibly finicky. The PN Pro is much, much easier and quicker to put on, is pretty much immune to magnetic fields, and the data is beautiful. Any errors are easily fixed.
The Rokoko Smart Suit Pro is cheaper, but I've never actually used one. I did see a live demo at SIGGRAPH, and I have to admit that it did look as good as the PN Pro data.
The HTC Vive Pro with Orion gave absolutely the best results, but Apple, by purchasing iKinema, messed that one up for the rest of us. ManusVR is supposedly going to release an Orion replacement in May. If you know how, you could actually get Maya or Motionbuilder to drive a skeleton via bone constraints, but that requires some expertise.
And that brings me to the last point: getting a good capture is just the beginning of the challenges. The rig that makes Daz G8s animate so beautifully also causes headaches when it comes time to retarget your motion data to them. 3DXchange can do it, Maya can do it, and Motionbuilder can do it, but all cost money. This is my #1 wish for features in Daz Studio... it should be able to retarget motion data to its own doggone rigs.
You should really consider taking up the challenge of getting your figures into Blender, and invest the time to become a good keyframer. With a legitimate IK system, good reference video, and the classic animation books under your belt, it's really not that hard to get passable, if not technically brilliant, results. But appreciate that until you're good at animation, a compelling character and story will trump your terrible animation every time, anyway. :)
I've found that I end up changing my mocapped motion to add subtle things that I told the actor to do but that she forgot or misinterpreted, that I didn't explain well, or that I simply thought of later. Mocap is just faster, not better. I wish I had been counseled to just invest the time to get better at keyframing before I spent so much money. I would have spent it on better video cameras instead, to capture reference video from more angles.
But the good thing is that whatever you do, you're going to have a ball doing it, you're going to learn a lot, and it should be criminal how well and cheaply Daz lets us express our artistic vision, even for someone like me who actually has NEGATIVE artistic talent.
I concur. I'd even go further and say that v3 is far beyond v2. Not because of that super-clean point cloud (the v2 point cloud was already good enough), but because the v3s are daisy-chained, so they don't interfere with each other like three v2s do. I got 4 of them to work with iPiSoft with no fussing about where I set them up, like I had to with 2 v2s. I never did get 3, let alone 4, v2s to work. But 4 v3s just work, and I can actually just think about the scene, and not about how to keep literally everything else in the scene from occluding the actor. Of course, all this is beside the fact that depth-sensor solutions are only good for background characters, or times when the actor needs to do something that would damage an inertial sensor, like fighting and being thrown to the ground. The head tracking is just not there yet, and it can't discern bone twists without help that's a hassle to set up and kind of Mickey Mouse in the first place.
If you already own a VR headset, I can imagine it would be super affordable, but I'm guessing the mocap wouldn't be that great.
Ah, I didn't see that comment about the HTC Vive Pro.
With 6 trackers, the framerate and accuracy were unbelievable. If you touch the virtual controllers together, the real-world ones are touching within what seems like a millimeter. iKinema's IK is better than Autodesk's HIK.
Now that that option is gone, there is a product called Brekel OpenVR Recorder. It records the headset, controller, and tracker transforms from a VR session. You can bring those in as empties and set up bone constraints in Maya or Motionbuilder to drive a skeleton, then clean up/edit the result using animation layers. It takes a bit of learning, but apparently works reasonably well. The good thing is that the number of trackers you can use is limited only by your USB bandwidth; a guy on the Brekel product forums has gotten it to work with 10, I believe, by adding a dedicated USB card to his computer.
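For what it's worth, the constraint setup itself is only a few lines once the recording is imported. Here's a minimal sketch in Maya Python, assuming the trackers came in as plain transforms; every tracker and joint name below is made up, so substitute whatever your import and your rig actually use:

```python
# Sketch only: drive skeleton joints from imported tracker transforms.
# Tracker and joint names are hypothetical, not Brekel's actual output.
import maya.cmds as cmds

tracker_to_joint = {
    "tracker_head":  "head_jnt",
    "tracker_waist": "hip_jnt",
    "tracker_lHand": "lHand_jnt",
    "tracker_rHand": "rHand_jnt",
    "tracker_lFoot": "lFoot_jnt",
    "tracker_rFoot": "rFoot_jnt",
}

for tracker, joint in tracker_to_joint.items():
    if cmds.objExists(tracker) and cmds.objExists(joint):
        # maintainOffset keeps the joint's current pose relative to the tracker
        cmds.parentConstraint(tracker, joint, maintainOffset=True)
```

From there you'd bake the constrained joints down to keyframes and do the cleanup on animation layers, which is the part that takes the learning.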
There is also the Vive Mocap Kit for Unreal.
Yes, available USB ports is one thing my computer lacks. :)
What kind of computer hardware requirements do you need for hooking up 6 to 10 trackers?
I can expand my USB ports by getting an internal USB card, but I'm not sure about other specs.
Not sure what the minimum/recommended specs are. It's worked on every desktop I've tried it on, but with 8 trackers I only tried it on a pretty high-end laptop. The guy I was talking to on the Brekel forums didn't say what he was using, just that he had to purchase a separate discrete USB card so that not all his trackers were on the same USB controller. Jasper Brekel at brekel.com can probably guide you through the minutiae. I suspect that it really doesn't take much other than lots of USB bandwidth.
But be aware that OpenVR recorder just captures the location/orientation of the empties; it doesn't solve a skeleton for you. You need Maya/Motionbuilder for that.
Is that how most motion capture systems work? You can't apply a skeleton until after the motion is recorded?
Because I think things would be simplified if you could apply a skeleton during the motion capture. The capture application could then just dump it to .bvh format or some streaming pose format.
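For what it's worth, .bvh itself is just a plain-text format: a joint hierarchy followed by one row of channel values per frame, so "dumping" solved data into it is conceptually simple. Here's a toy sketch of writing one in Python; the two-joint hierarchy and the frame values are invented purely for illustration:

```python
# Toy sketch of a BVH file: a text hierarchy plus one line of channel values
# per frame. Joint names, offsets, and values here are made up.

hierarchy = """HIERARCHY
ROOT hip
{
  OFFSET 0.0 0.0 0.0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT abdomen
  {
    OFFSET 0.0 10.0 0.0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0.0 10.0 0.0
    }
  }
}
MOTION
"""

frames = [
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # hip pos/rot + abdomen rot
    [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 0.0],
]

with open("capture.bvh", "w") as f:
    f.write(hierarchy)
    f.write("Frames: %d\n" % len(frames))
    f.write("Frame Time: 0.0333333\n")  # ~30 fps
    for frame in frames:
        f.write(" ".join("%.4f" % v for v in frame) + "\n")
```

The hard part isn't the file format; it's solving a sensible skeleton to put in it in the first place, and then retargeting that skeleton to your figure, which is what the next comment gets into.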
It's more complicated than that. Whatever motion capture system you use, it is not going to know what a Genesis 8 is, and the bone hierarchy that the capture system generates will not match your Genesis figures so the motion can't be applied directly. The process of making them match is called "retargeting".
The real problem is that the same skeleton that makes your Genesis 8 pose so beautifully is also what makes it incompatible with most retargeting software. I can only think of three packages powerful enough to handle G8s in stride: 3DXChange, Maya, and Motionbuilder. The ones that do this in real time are the high-end systems that come with expensive optical setups like Vicon, OptiTrack, and others. Note that Live Action in iClone isn't retargeting; it's applying the motion to iClone's own skeleton.
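Just to make the mismatch concrete, here's the kind of bone-name map a retargeter has to build before it can even start. The left-hand names are typical BVH-style capture names; the Genesis 8 names on the right are from memory and approximate, so check them against your own figure:

```python
# Illustration of why retargeting is needed: the capture rig and Genesis 8
# don't even agree on bone names, let alone proportions or bone counts.
# Capture-side names are typical BVH-style names; Genesis 8 names are
# approximate and should be verified against your figure.
capture_to_genesis8 = {
    "Hips":         "hip",
    "Spine":        "abdomenLower",
    "Spine1":       "abdomenUpper",
    "Spine2":       "chestLower",
    "Neck":         "neckLower",
    "Head":         "head",
    "LeftShoulder": "lCollar",
    "LeftArm":      "lShldrBend",
    "LeftForeArm":  "lForearmBend",
    "LeftHand":     "lHand",
    "LeftUpLeg":    "lThighBend",
    "LeftLeg":      "lShin",
    "LeftFoot":     "lFoot",
    # ...right side mirrors the left; the twist bones have no direct source
}
```

And the names are the easy part: Genesis 8 splits its limbs into bend/twist bones (lShldrBend/lShldrTwist and so on), so the captured rotations also have to be decomposed sensibly, which is exactly the work 3DXChange, Maya, and Motionbuilder are doing for you.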
It's a hassle, yes, but Genesis 8 characters do look amazing in the end.
This is just a brief note to say thanks for writing that sentence. It's something I have long felt was true, but I've never read or heard anyone put it into words. It tells me I have a small chance at a small success.
I think one can obtain competency (at the very least) in anything for which we can make repeatable attempts, so that learning can take place.
Something that I do is re-watch my favorite films, not to enjoy them, but to analyze the heck out of them. If I see something that I thought was emotional, exciting, simply cool, or anything at all, I pause the video and try to write down, in as much detail as possible, EXACTLY what I thought was so good. After a short while, I had an encyclopedia of coolness in cinema that I kept studying over and over until I internalized it, and now I think about those things automatically while writing.