Semi-OT: Ethan Carter.

Comments

  • kyoto kid Posts: 42,040

    ...fascinating. If they can do this for games, why not for 3D sets and props?

  • pearbear Posts: 227
    edited February 2016
    kyoto kid said:

    ...fascinating. If they can do this for games, why not for 3D sets and props?

    There are some scanned 3D prop and environment assets available at other 3D asset stores, and scanned human figures too. I'm not sure why these kinds of scanned props haven't shown up at DAZ yet; possibly the PAs just have a different workflow and haven't gotten into using this technology. There is a lot of cleanup involved in reducing a hi-res scan to a manageable low-poly asset with a clean UV map, and a large building with non-tiled textures means the texture files are massive. I've been doing some 3D scanning similar to the technique in that article, capturing locations around my city (buildings, streets, etc.) just for my own use. It's super fun, but it takes hours of processing time to get a good-quality result. I try not to waste a good overcast, cloudy day, though; it's perfect for going out and scanning street scenes without getting baked-in shadows.

    Here's a recent scan I did of a local street, extracted from around 50 photos, I think. It hasn't been cleaned up at all yet; there are lots of stray excess bits floating around. But I could probably make a neat little vignette for backgrounds with this, or use parts of it for modular sidewalk props, etc. That's the plan anyway.
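
    For anyone curious what the poly-count side of that cleanup looks like, this is a minimal sketch of a quadric-decimation pass (assuming the open-source Open3D library; the file names and triangle budget are just placeholders, not my actual workflow):

    import open3d as o3d

    # Load the raw photogrammetry export (placeholder file name).
    mesh = o3d.io.read_triangle_mesh("street_scan_raw.obj")
    mesh.remove_duplicated_vertices()
    mesh.remove_degenerate_triangles()
    print("before:", len(mesh.triangles), "triangles")

    # Quadric edge-collapse decimation down to a target triangle budget.
    low_poly = mesh.simplify_quadric_decimation(target_number_of_triangles=50_000)
    low_poly.compute_vertex_normals()
    print("after:", len(low_poly.triangles), "triangles")

    o3d.io.write_triangle_mesh("street_scan_lowpoly.obj", low_poly)

    The UV map generally doesn't survive a pass like this cleanly, so the usual workflow is to re-unwrap the low-poly mesh and bake the scan's texture back onto it, which is where a lot of the cleanup time goes.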

    [Attachments: CaptureS.JPG (2288 x 1748), CaptureS2.JPG (1500 x 1717)]
  • kyoto kid Posts: 42,040

    ...so how are highly detailed scanned textures handled in games, which rely on refreshing the imagery every second?

  • pearbear Posts: 227
    kyoto kid said:

    ...so how are highly detailed scanned textures handled in games, which rely on refreshing the imagery every second?

    That I couldn't say, but I'm sure a lot of man-hours and clever coding go into optimizing how textures are used in a game. For DAZ props, we expect every prop and building to look good in close-ups with no visible pixels. That isn't as necessary for games, where most of the props won't be seen that closely.

    That example scene I posted has a 16,000 x 16,000 px texture map, and it's just part of one street corner. A smaller map stretched over that much terrain would look noticeably less detailed, though. The real world is freaky big and detailed.
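
    Just to put numbers on it, here's a quick back-of-the-envelope sketch in Python (generic figures, not tied to any particular engine or to DAZ Studio) of what a map that size costs in memory:

    def texture_mb(width, height, bytes_per_pixel=4.0, mipmaps=True):
        """Rough memory footprint of a texture, in megabytes."""
        size = width * height * bytes_per_pixel
        if mipmaps:
            size *= 4.0 / 3.0  # a full mip chain adds roughly one third
        return size / (1024 * 1024)

    print(f"16000 x 16000, uncompressed RGBA:    {texture_mb(16000, 16000):.0f} MB")
    print(f"16000 x 16000, BC1/DXT1 (0.5 B/px):  {texture_mb(16000, 16000, 0.5):.0f} MB")
    print(f"4096 x 4096, BC1/DXT1 (0.5 B/px):    {texture_mb(4096, 4096, 0.5):.0f} MB")

    That works out to roughly 1.3 GB uncompressed versus about 11 MB for a typical compressed 4K game texture, which is one reason games tile and reuse textures so aggressively instead of stretching one huge map over a scene.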

  • Stonemason Posts: 1,230
    edited February 2016

    There are some food products sold at DAZ that are based on photo-scanned data, so I know of at least one other DAZ PA using this method. The statue in 'Streets of China 2' is based on a scanned object, as are the statues in 'Village Courtyard', the rocks in 'Contemporary Living' and 'Path to Cloud Temple', and some other bits and pieces in various sets. There is quite a lot of cleanup, baking and rebuilding required though, so it's not the fastest of workflows.

    I use this app, which is free and easy to use: http://www.123dapp.com/catch. They also have this one: https://memento.autodesk.com/about

    Photogrammetry has been around for years, but sometimes it's actually easier to just work from flat images and model it manually.

  • pearbear Posts: 227
    edited February 2016

    Stonemason, that is super interesting. In thinking about DAZ PAs using photogrammetry, my first thought was actually "If anyone, I bet Stonemason has messed around with this a bit, or will be the first..."

    I've been using 3DF Zephyr Lite, which isn't free, but I found it to be a lot more functional than 123D. I was able to take screengrabs from the Blu-ray of Alien's opening shots (those slow pans across the Nostromo interior) and get Zephyr to stitch them into some 3D models. Not great models, but I was still pretty excited that it was possible to grab frames from a Ridley Scott movie and get rudimentary 3D props out of them!

    I attached some screencaps of what I was able to get from Alien. I'm just using these to study from, learning the shapes that go into such good science-fiction design, and to see whether it was even possible to extract a 3D scan from a scene on a Blu-ray of a Hollywood film. They're too crude to be of any real practical use as a 3D asset in a rendered scene, but it was a fun experiment.
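
    In case anyone wants to try the same thing, the frame-grabbing part is easy to automate. Here's a minimal sketch using OpenCV (the clip name and frame step are placeholders; this isn't exactly what I did, I just took screengrabs by hand, but it's the same idea):

    import os
    import cv2

    VIDEO = "opening_pan.mp4"  # placeholder clip name
    OUT_DIR = "frames"
    STEP = 12                  # keep every 12th frame (about 2 fps for 24 fps footage)

    os.makedirs(OUT_DIR, exist_ok=True)
    cap = cv2.VideoCapture(VIDEO)

    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % STEP == 0:
            cv2.imwrite(os.path.join(OUT_DIR, f"frame_{saved:04d}.png"), frame)
            saved += 1
        index += 1

    cap.release()
    print("saved", saved, "frames to", OUT_DIR)

    The trick is keeping only frames with enough parallax between them; too many near-identical frames just slows the solve down without adding detail.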

    [Attachments: alien01obj.JPG (2319 x 821), alien02obj.JPG (1514 x 843), alien03obj.JPG (2915 x 801)]
  • Stonemason Posts: 1,230

    Cool. I've not yet tried doing it from video. If anything, it's just a good opportunity to get out of the house for the day and go photograph trees and rocks :)

  • Stonemason Posts: 1,230

    Last year, when doing the Streets of Old London, I wanted some mannequins in the windows, so I clothed a DAZ figure and rendered a bunch of different views in DS, ran that through 123D Catch and got a really good-looking 3D model of a 3D model. But then I figured it was a bit iffy on the legal side, so I never used it. It's cool tech, and it would work well with drone-based photography too.

  • pearbear Posts: 227

    Ha! Reminds me of making HDRI domes from DAZ renders... I've done that before when my background had way too many assets and it was slowing down my scene.

    Seeing the pace of progress in photogrammetry makes me think that in a few years the technology will be at a point where making 3D scans is as fast and simple as shooting a five second video of the subject, and affordable enough for anyone. Most everyone has a decent enough camera on their phone already, it's just a matter of getting a computer beastly enough to crunch the numbers. I'm thinking some better AI will be able to fill in the present gaps in turning photos into models.

  • mjc1016 Posts: 15,001

    Stonemason said:

    Last year, when doing the Streets of Old London, I wanted some mannequins in the windows, so I clothed a DAZ figure and rendered a bunch of different views in DS, ran that through 123D Catch and got a really good-looking 3D model of a 3D model. But then I figured it was a bit iffy on the legal side, so I never used it. It's cool tech, and it would work well with drone-based photography too.

    You could have used MakeHuman models for that with no problems... in fact, if all you wanted were static mannequins, you could have just posed the MH models and used them (maybe a little higher on the poly count)... or retopoed them.

     

    pearbear said:

    Seeing the pace of progress in photogrammetry makes me think that in a few years the technology will be at a point where making 3D scans is as fast and simple as shooting a five second video of the subject, and affordable enough for anyone. Most everyone has a decent enough camera on their phone already, it's just a matter of getting a computer beastly enough to crunch the numbers. I'm thinking some better AI will be able to fill in the present gaps in turning photos into models.

    And then holo-tech won't be too far away...

    I want my Holodeck!

  • kyoto kid Posts: 42,040

    ...crikey, for Iray I would think photogrammetric (if that is a word) textures would be a real help. Granted, I'd probably need a 32 GB Quadro Pascal GPU to handle the rendering.

  • argel1200

    So how well does e.g. 123D Catch work? Good enough to scan in a small model airplane or a G.I. Joe vehicle and start breaking off, e.g., the missiles and look into some rigging, etc.? Or is there a lot of cleanup work needed before it would be ready for that?

  • pearbear Posts: 227
    edited February 2016
    argel1200 said:

    So how well does e.g. 123D Catch work? Good enough to scan in a small model airplane or a G.I. Joe vehicle and start breaking off, e.g., the missiles and look into some rigging, etc.? Or is there a lot of cleanup work needed before it would be ready for that?

    I'd recommend giving it a try to see how you can work with it, since it's free. It's fascinating trying it out and getting even rudimentary 3D models of rooms in your house or of household objects. But you probably won't want to go straight from 123D Catch's output to rigging. Even the photogrammetry programs that cost money require at least some cleanup; you get a rather messy mesh and UV map straight out of the program. I started with 123D Catch, which got me hooked and showed me it was possible to make scans with my camera (a ten-year-old Canon DSLR) and computer. The free trial of 3DF Zephyr Lite was so much more powerful that when the trial expired I felt it was worth forking over the money for it. I think it might be having a big sale on Steam right now; at least I noticed that it was yesterday.
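
    To give a concrete idea of the kind of cleanup I mean, here's a minimal sketch (assuming the Open3D library; the file names and cluster threshold are placeholders, not any specific program's output) that strips out the small floating clusters these scans always come with:

    import numpy as np
    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("raw_scan.obj")  # placeholder export

    # Label each triangle with the connected component it belongs to.
    cluster_ids, cluster_sizes, _ = mesh.cluster_connected_triangles()
    cluster_ids = np.asarray(cluster_ids)
    cluster_sizes = np.asarray(cluster_sizes)

    # Drop every cluster smaller than 500 triangles (tune per scan).
    mesh.remove_triangles_by_mask(cluster_sizes[cluster_ids] < 500)
    mesh.remove_unreferenced_vertices()

    o3d.io.write_triangle_mesh("raw_scan_clean.obj", mesh)

    That only handles the floating junk, though; retopology and re-baking the texture are still manual steps before anything is rig-ready.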

    Scanning people with only one camera is possible but not very practical, since people tend to move. I've gotten some good face scans, though, if the subject is sitting in a chair and can hold a neutral expression for a few minutes. Cleanly capturing extreme facial expressions or full-body scans is out of the question without a multi-camera rig or specialized scanning equipment. The cost of that is beyond what I want to invest in 3D scanning, so I'll wait for the next inevitable leap in the technology's affordability before trying to regularly capture good scans of people. It takes a lot of trial and error and failed attempts with a one-camera approach, but I eventually managed to get pretty good scans of my closest family members, which I think is something we'll treasure in the future.
