As their development progressed we discussed desired features quite a bit. As a filmmaker, my demands can be summarized as needing great (and sophisticated) color/texture resolution and detail, plus a modest amount of depth; an engineering-heavy group, by contrast, might need super-accurate 3D information and no color at all. Lynx Labs deployed a Raster Alignment feature in the software system of their A-cam and it was simply brilliant. It was loosely based on a similar Raster Alignment tool available in Meshlab (which I have had very little success with). The results of applying high-fidelity imagery to a mid-quality 3D scan were fantastic. So how do I go about reproducing this result with my Kinect scans?
An overcast day is great, but selecting an object in shadow is equally valuable. The reason is that we are trying to capture the subject in ambient light so that we get a clean diffuse texture (one without directional lighting baked in) that can be synthetically re-lit once it is applied to a 3D model.
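To see why the flat, ambient-lit texture matters: relighting multiplies the stored color by a new lighting term, so it only works cleanly if the texture holds pure diffuse albedo with no shading already baked in. Here is a minimal Lambertian relighting sketch in Python with numpy — the albedo, normal, and light values are invented for illustration, not taken from any real scan:

```python
import numpy as np

def relight(albedo, normal, light_dir):
    """Lambertian shading: relit color = albedo * max(0, n . l).

    This only looks right if `albedo` is a pure diffuse color with no
    baked-in directional lighting -- hence capturing in ambient light.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(n @ l))

# Hypothetical texel: mid-grey diffuse albedo, surface facing straight up,
# light arriving from 45 degrees overhead.
albedo = np.array([0.5, 0.5, 0.5])
normal = np.array([0.0, 0.0, 1.0])
light = np.array([0.0, 1.0, 1.0])
shaded = relight(albedo, normal, light)
```

If the captured texture already contained a highlight or shadow, that baked-in lighting would be multiplied by the new light term and show up twice.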
The Kinect can be a bit finicky and limited, but you need to get your scan. It can sometimes scan successfully on an overcast day, but don't count on it; wait until dark or use a tent to block out the sunlight. During this particular scan attempt it was sleeting on me (making me rather impatient and cold). The cold also makes your cables go rather rigid, so things get a little more clumsy. To power the Kinect in the field I brought a battery backup with me. I used the Kinect Fusion SDK demo to do the scanning, and it has its limitations.
Warning: it is EXTREMELY easy to lose scanner alignment on the Kinect.
Personally, I'd like to see a software package that uses a second Kinect to motion-capture your skeleton and keep track of where you're holding the device in 3D space. Really, anything that helps you avoid losing sync would be welcome. Take your time.
Just to give you an idea of how finicky this device is: I kept losing sync and couldn't figure out why. I happened to notice that I would often lose sync when I exhaled. I tested the relationship and found that the heat of my breath (remember, it was really cold outside) acted like a cloud of darkness to the scanner. Every time my breath wafted past the sensor it went blind. Haha! Crazy finicky!
Meshlab is a free, open-source 3D software platform for processing scans.
1. Import both models.
2. Press the Align button in the top toolbar.
3. In the Align Tool window, select the photo-scanned model and click "Glue Here Mesh" to make it the reference.
4. Select the Kinect-scan-derived model and select "Point Based Glueing."
5. A point-matching window will pop up. Check the box to "Allow Scaling."
6. Double-click to place at least four matching point pairs on the two models, then confirm.
7. If the alignment is reasonably close, click the "Process" button in the Align Tool window.
8. Close the Align Tool window and then right-click on your aligned Kinect-scanned model.
9. A long menu of options will drop down; select "Freeze Current Matrix." A window will pop up.
10. Hit "Apply." Now export just your aligned Kinect-scanned model. You're done here.
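Conceptually, "Freeze Current Matrix" bakes the layer's current 4x4 alignment transform into the vertex coordinates, so the exported mesh carries the alignment with it even though the transform itself is reset. A rough numpy sketch of what that freeze amounts to — the vertices and transform here are hypothetical, not Meshlab's actual internals:

```python
import numpy as np

def freeze_matrix(vertices, matrix):
    """Bake a 4x4 transform into Nx3 vertex positions via homogeneous coords."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # Nx4
    transformed = homo @ matrix.T                              # apply M to each vertex
    return transformed[:, :3] / transformed[:, 3:4]            # back to Nx3

# Hypothetical alignment found by point-based glueing:
# uniform scale by 2, then translate +1 along x.
M = np.array([
    [2.0, 0.0, 0.0, 1.0],
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 1.0, 1.0]])
frozen = freeze_matrix(verts, M)  # from here on the layer matrix is identity
```

After freezing, the exported OBJ/PLY holds the transformed coordinates directly, which is why the alignment survives the trip into other software.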
My test project was just that: a poor-looking Kinect scan meant to test whether I could bring higher-quality images into alignment with my model. The wonderful part about using this approach (versus aligning each photo in Meshlab, which I cannot get to work well) is that it saves you a lot of time by essentially aligning all the cameras at once. Basically, the camera alignment from Agisoft is reused across the new model.
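The reason all the cameras come along for free is that once the Kinect mesh is glued into the photo scan's coordinate frame, every camera pose Agisoft already solved projects correctly onto it with no per-photo realignment. A toy pinhole-projection sketch of that idea — the intrinsics, pose, vertex, and alignment transform are all invented for illustration:

```python
import numpy as np

def project(K, R, t, point):
    """Pinhole projection of a world-space 3D point to pixel coordinates."""
    cam = R @ point + t        # world -> camera space
    uv = K @ cam               # camera space -> homogeneous pixels
    return uv[:2] / uv[2]

# Hypothetical camera solved by the photo scan: identity pose,
# 1000 px focal length, principal point at (640, 480).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

# Hypothetical alignment transform from the glueing step, applied
# to a Kinect-mesh vertex to move it into the shared frame.
T = np.eye(4)
T[:3, 3] = [0.1, 0.0, 2.0]
kinect_vertex = np.array([0.0, 0.0, 0.0, 1.0])
world = (T @ kinect_vertex)[:3]
pixel = project(K, R, t, world)  # same solved camera, no re-solving needed
```

One alignment transform on the mesh stands in for realigning every photo individually, which is where the time savings come from.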
While the Kinect is a rather finicky device to scan with, it is capable of resolving scans in many situations that are inherently problematic for photo scanning (e.g., flat surfaces with little texture information). Enjoy, and please share your successes or improvements on this workflow with me.