
Tuesday, November 06, 2012

The Em2 and 3D Scanning: a few lessons learned

It's election day here in the USofA. I voted early, because my wife and I have busy teaching days on Tuesday - not having to march over to the polling station and wait in what will likely be a long line is a greatly appreciated luxury that living in a rather democratically progressive state (for now) provides.

If you haven't voted yet and are able, please do so. If it's against my candidate of choice I hope that the results at least provide clarity of purpose in ways that allow us to work together. If it helps my candidates of choice, good on you.

Last night I posted some pictures of using the Kinect scanner for 3D imaging of the Em2 stream table. In general I have to say there are a couple things that are absolutely fabulous, and a couple things that are really frustrating about this technology.

First the frustrating things:

1) Accuracy and Precision:
To be specific, the overall accuracy is quite good. That is to say, if the sensor thinks an object in front of it is 75 cm away, there's a good chance it is. There aren't a lot of wildly varying values reported in most cases. Accuracy suffers when there's a lot of infrared noise or when the surface being scanned is very reflective. This is one of the reasons I lined the base of the stream channel with a black rubber sheet - much less of a reflective "blind spot" in the middle of the scan.

Precision, however, is another story. The distance model the camera uses changes substantially beyond about 1 meter. The closest you can get to the scanned surface is about 0.3 meters, and from there out to about 1 m we've been able to pick up the presence of a single penny placed on a flat surface. Considering the D50 of the standard Em2 sediment is about 0.5 mm, it doesn't take much deposition or erosion to show up in scans. Once you go beyond a meter, though, the precision degrades to a centimeter or worse. That's fine for rough 3D scans of rooms for robot navigation, but not terribly useful for the subtle changes produced on a stream table. We're looking into data-processing methods that might improve this. But being limited to a camera-subject distance of less than 1 m leads me into the next frustrating thing.
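For the curious, that falloff is roughly what you'd expect from the geometry: for a stereo/structured-light sensor like the Kinect, the depth quantization step grows with the square of the distance. Here's a minimal back-of-the-envelope sketch; the baseline, focal length, and disparity-step values are nominal figures from the Kinect reverse-engineering literature, not anything we measured, so treat the output as order-of-magnitude only.

```python
import math

# Nominal Kinect parameters from the literature (assumptions, not measured):
BASELINE_M = 0.075             # IR projector-to-camera baseline (~7.5 cm)
FOCAL_PX = 580.0               # IR camera focal length, in pixels
DISPARITY_STEP_PX = 1.0 / 8.0  # quantization of the reported disparity

def depth_step(z_m):
    """Approximate size of one raw depth quantization step at range z_m.

    For a stereo-style sensor the step grows roughly with the square of
    the distance: dz ~= z**2 * dd / (f * b).
    """
    return z_m ** 2 * DISPARITY_STEP_PX / (FOCAL_PX * BASELINE_M)

for z in (0.5, 1.0, 2.0, 3.0):
    print(f"{z:.1f} m -> ~{depth_step(z) * 1000:.1f} mm per step")
# 0.5 m -> ~0.7 mm;  1.0 m -> ~2.9 mm;  2.0 m -> ~11.5 mm;  3.0 m -> ~25.9 mm
```

At 1 m that works out to a raw step of about 3 mm; averaging many frames over a long dwell (more on that below) is presumably what buys back the penny-scale sensitivity.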

2) Field of View:
At a distance of 1 m, the field of view of the Kinect is somewhere around 90 cm x 70 cm. The entire functional length of the Em2 is just under 2 m. To get the entire stream table in the scan we could pull the scanner back, but then the precision problem pops up. We tried using a rolling frame, but we've had issues stitching the frames together and maintaining a common reference point. The latter is more of a software problem, but the distance limit appears to put a firm cap on the area a single scanner can reliably measure. An alternative is to get another Kinect scanner. Having two scanners locked down would probably work, but that's a problem that will have to be solved later.
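The trade-off is just trigonometry. Taking the numbers above at face value (a ~90 cm swath at 1 m implies a horizontal field of view of roughly 48°; that's an inferred value, not a spec-sheet figure), a quick sketch shows how far back a single camera would have to sit to cover the whole table:

```python
import math

H_FOV_DEG = 48.0  # inferred from the ~90 cm swath observed at 1 m (assumption)

def swath_width(distance_m, fov_deg=H_FOV_DEG):
    """Width of the scanned strip at a given camera-to-surface distance."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg / 2.0))

def distance_for_width(width_m, fov_deg=H_FOV_DEG):
    """Standoff needed for the strip to span a given width."""
    return (width_m / 2.0) / math.tan(math.radians(fov_deg / 2.0))

print(f"swath at 1.0 m: {swath_width(1.0):.2f} m")                    # ~0.89 m
print(f"standoff for a 2 m table: {distance_for_width(2.0):.2f} m")   # ~2.25 m
```

A ~2.2 m standoff is well past the 1 m precision knee, which is why two close-range cameras look like the better bet.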

But now the fabulous things:
1) Accuracy and Precision:
Within a meter's distance, we've been able to resolve individual pennies that were present in one scan but removed in another. A single penny is only about 1.5 mm thick, so getting high-resolution information for a 90 cm reach of stream is really exciting. Not only can you detect subtle changes in aggradation and erosion of the active channel, but you can observe and measure bank/valley wall collapse. Want to measure the contribution from mass wasting? We've got you covered. Want to measure the rate of floodplain aggradation based on sediment supply? There's an app for that, too :)

A comparison of two scans: one before the run and the second about 20 minutes into an experimental run. Green values represent changes that are closer to the camera (aggradation/deposition); red values are further away (erosion). I've found that most of the sediment transport/deposition in the first half hour is directly related to profile adjustment and the smoothing of small bumps I introduced when leveling the sediment prior to the run.
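Computationally, a comparison like this boils down to a per-pixel difference of two time-averaged depth maps. This isn't our actual pipeline, just a minimal sketch of the idea, assuming the two scans are already registered and stored as NumPy arrays of depth in millimeters (the file names and the noise threshold are placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

# Registered, time-averaged depth maps (mm from the camera); hypothetical files.
before_mm = np.load("scan_t00.npy")
after_mm = np.load("scan_t20.npy")

# Positive diff = surface moved closer to the camera = deposition;
# negative diff = surface moved away = erosion.
diff_mm = before_mm - after_mm

# Suppress changes smaller than the scan noise (threshold is a guess).
diff_mm[np.abs(diff_mm) < 1.0] = 0.0

# Diverging colormap: green toward the camera, red away, as in the figure.
plt.imshow(diff_mm, cmap="RdYlGn", vmin=-10, vmax=10)
plt.colorbar(label="elevation change (mm)")
plt.title("Deposition (green) vs. erosion (red)")
plt.show()
```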


The top image is the distance data obtained from the Kinect. Below it is a view of the same setup (a little later in the run) showing where the stacks of pennies (1, 2, and 3 pennies tall) were added to the scene. Because the pennies sit on top of the sediment, those spots appear closer to the camera. Yes, even with a bit of noise, you can tell whether a single penny has been added or removed from the scene (at least if the scanner distance is within about 1 m).

2) Speed and Detail:
The other great thing about the setup is the speed at which we collect data. To get enough distance information, we let the camera continually scan the scene for about one minute. Imagine how long it would take to get sub-centimeter resolution from a 90 cm long channel by hand. Totally impractical. Not to mention your estimates of mass wasting would be much more difficult. Here, you can do cut/fill measurements over time scales that are quite useful.
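The one-minute dwell is doing the statistical heavy lifting: at roughly 30 frames per second, a minute of capture gives on the order of 1,800 samples per pixel, which knocks the per-frame noise down well below the raw depth step. Here's a minimal sketch of the idea (not the exact code we run), assuming the depth frames arrive as 2-D NumPy arrays, with the Kinect's zero "no data" readings masked out:

```python
import numpy as np

def average_depth(frames):
    """Average a stack of raw depth frames, ignoring invalid (0) pixels.

    frames: iterable of 2-D uint16 arrays, one per captured frame.
    Returns the mean depth per pixel as floats (NaN where no frame
    ever returned valid data).
    """
    total = None
    count = None
    for frame in frames:
        f = frame.astype(np.float64)
        valid = f > 0  # the Kinect reports 0 where it got no reading
        if total is None:
            total = np.zeros_like(f)
            count = np.zeros_like(f)
        total[valid] += f[valid]
        count[valid] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(count > 0, total / count, np.nan)

# usage: averaged = average_depth(captured_frames)
```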

Here's a complete sequence of a multi-hour run (click to embiggen). The time between scans varies from about 15 minutes to about 30 minutes. The experiment looked at simple incision of the main stream for the first couple hours, and then I turned on the valve for the tributary stream and let it run for another hour. I love the fact that you can see small-scale readjustments and intermediate stages of sediment storage/removal. Look closely at the 3D data - the little white foam blocks are 3mm tall and you can see when they've moved around. The picture's a little small to easily observe the pennies, but they do show up in a larger version (see example above).
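The storage/removal budget falls out of the same change maps: sum the positive cells for fill and the negative cells for cut. A minimal sketch; the per-pixel ground area is a calibration you'd have to supply, so the argument below is hypothetical:

```python
import numpy as np

def cut_fill(diff_mm, cell_area_mm2):
    """Volumes of deposition (fill) and erosion (cut) from a change map.

    diff_mm: elevation-change grid in mm (positive = deposition).
    cell_area_mm2: ground area represented by one pixel, in mm^2.
    Returns (fill_cm3, cut_cm3).
    """
    fill = diff_mm[diff_mm > 0].sum() * cell_area_mm2 / 1000.0   # mm^3 -> cm^3
    cut = -diff_mm[diff_mm < 0].sum() * cell_area_mm2 / 1000.0
    return fill, cut

# usage: fill_cm3, cut_cm3 = cut_fill(diff_mm, cell_area_mm2=2.25)
```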


I think our next step is to add a second Kinect to increase the scanned area and look into methods of making isopach or contour maps from the 3D data. Needless to say, I'm really excited about these preliminary results.
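The contouring part, at least, looks like low-hanging fruit once the averaged grids exist. A minimal sketch with matplotlib (the file name and cell size are placeholders); an isopach map would be the same call applied to the difference of two scans instead of a single surface:

```python
import numpy as np
import matplotlib.pyplot as plt

surface_mm = np.load("scan_t00.npy")  # averaged depth grid (hypothetical file)
ny, nx = surface_mm.shape
cell_mm = 1.5                         # ground size of one pixel (placeholder)
x = np.arange(nx) * cell_mm
y = np.arange(ny) * cell_mm

# Contours every 2 mm of relief. (Depth from the camera is inverted
# elevation; subtract from a datum first if you want true elevations.)
levels = np.arange(surface_mm.min(), surface_mm.max(), 2.0)
cs = plt.contour(x, y, surface_mm, levels=levels, cmap="viridis")
plt.clabel(cs, inline=True, fontsize=7, fmt="%.0f")
plt.gca().set_aspect("equal")
plt.xlabel("distance along table (mm)")
plt.ylabel("across table (mm)")
plt.show()
```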

Of course, none of this could have been accomplished without the dedicated help of many other people. I want to give a tip of the internet hat to Steve at LRRD, whose enthusiasm for table-size stream tables gave birth to the Em-series models in the first place. I also want to give props to my colleague Todd Zimmerman - a computer whiz who, in the span of a couple days, brought himself up to speed with the Kinect 3D data files and programmed ways of averaging large data sets and visualizing the changes between individual scans. Incidentally, both Todd and I went to the same undergraduate institution (Lawrence University in Appleton, WI), so I guess I can forgive him for being a physicist ;). Without the dedicated work of my student researcher Bryn (also a Lawrence student), this project would have stalled out well before the sediment supply and constant head tank systems were built. She's the one who spent many hours monitoring the pump during some early experiments - and the one who most appreciated the constant head tanks and the speed/convenience of measuring via remote sensing. Finally, I have to give credit to the person who gave me the idea of installing the Kinect on an Em2 in the first place, Ken Mankoff - he'd never heard of me, but he was more than happy to give Todd some suggestions on improving the Kinect data processing. Plus he's got a shiny publication that ought to turn the head of any tech-minded geoscientist considering using the Kinect in the lab OR in the field. Good stuff there.

Oh, and I've got a fantastic announcement to make after people get back from GSA this year. It's super-awesome and I hope it's a sign of really exciting things to come. :)


1 comment:

  1. Wonderful work, Matt - thanks for the props, and very glad to see you pushing the envelope with our Emriver model.
