Saturday, March 30, 2013

Hacking the Em2: Using ImageJ to create 3D surface images

I've been putzing around with ImageJ as a way to visualize the 3D data produced by the Kinect scanner. So far, I'm pleased with how quick and easy the process is, but I'm a little disappointed at the relative lack of additional analytical capabilities. However, looking through the available plug-ins and scripting commands, it may be possible to do some of the higher-level quantitative analyses by establishing a few reference points on the model and using image processing scripts to correct for lens aberrations and other errors.

The Python scripts Todd Zimmerman has been working on (which can export to an ImageJ file) have been on pause during the school year due to teaching demands, but I hope we can eventually produce some custom Python scripts that people can download. Most of our process is based on what Ken Mankoff has published on his blog, followed up with a lot of trial and error.

For this post, I'll talk a little bit about the process that I used to make images like this:



First off, this description assumes you have already installed a functioning version of kinect_record or a similar program to grab the .pgm (16-bit grayscale depth/distance data) and .ppm (8-bit RGB image) files directly from the Kinect sensor.
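Both formats are simple enough to read directly if you want to poke at the raw numbers. Here's a minimal sketch of pulling a depth frame into Python with numpy, assuming the binary (P5) 16-bit PGM variant with a comment-free header; the filename is a placeholder:

```python
import numpy as np

def read_pgm16(path):
    """Minimal reader for a binary (P5) 16-bit PGM depth frame.

    Assumes a comment-free header and big-endian samples, per the
    PGM spec; not a general-purpose PGM parser.
    """
    with open(path, "rb") as f:
        assert f.readline().strip() == b"P5"
        width, height = map(int, f.readline().split())
        maxval = int(f.readline())
        dtype = np.dtype(">u2") if maxval > 255 else np.dtype("u1")
        data = np.frombuffer(f.read(), dtype=dtype)[: width * height]
    return data.reshape(height, width)

depth = read_pgm16("frame_0001.pgm")  # hypothetical filename
print(depth.shape, depth.min(), depth.max())
```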

These currently produce 640x480 pixel images, but I'm hoping we can tap into the raw output and expand that to 1024x768. The uncertainty in the depth data jumps considerably when the sensor is more than 1 meter away from the table, so maximizing precision means sacrificing field of view. The x/y view at 1m distance is a little more than 1m across, so with one sensor we can cover about half the table. Since the very top reach of the Em2 is occupied by the flow output and the lower reach is occupied by the standpipe, you get about 60% coverage of the effective stream table. That's not bad, but we're tempted to get a second Kinect and set the pair up so their fields of view overlap in the middle, covering the upper and lower portions.
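Those coverage numbers fall straight out of the field-of-view geometry. A quick back-of-the-envelope check, assuming the commonly cited ~57 x 43 degree field of view for the original Kinect:

```python
import math

h_fov, v_fov = 57.0, 43.0   # Kinect v1 field of view, degrees (approx.)
distance = 1.0              # sensor height above the table, meters

# Footprint of the view frustum at a given distance.
width = 2 * distance * math.tan(math.radians(h_fov / 2))
height = 2 * distance * math.tan(math.radians(v_fov / 2))
print(f"{width:.2f} m x {height:.2f} m")  # -> 1.09 m x 0.79 m
```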

I've started using a "C-Stand" to mount the Kinect sensor more directly above the Em2; it's much more flexible in terms of positioning than a tripod. C-stands aren't built to be lightweight, but the steel tube construction is pretty bulletproof and stays put. Now I just need another one to mount some good directional lighting to help bring out texture and detail.



The Kinect is set up to dump the data to a Linux computer sitting on the back table. Todd wrote a script to run the program so we can create a specific folder to contain all the data and also add a text file with a brief description of the experimental conditions. It's very convenient, and our convention of naming each data folder YYYYMMDD_EXPERIMENT-TYPE means the directories are easy to sort through. That's particularly useful because the narrowed view of the stream table makes it hard to tell what we were doing from the appearance of the RGB image alone.
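I don't have Todd's script to publish here, but the gist is easy to sketch. Something like the following, where the experiment label and the kinect_record invocation are assumptions rather than his actual code:

```python
import datetime
import pathlib
import subprocess

experiment = "HIGH-DISCHARGE"  # hypothetical experiment label
stamp = datetime.date.today().strftime("%Y%m%d")
outdir = pathlib.Path(f"{stamp}_{experiment}")
outdir.mkdir(exist_ok=True)

# Record a short description of the experimental conditions alongside the data.
description = input("Brief description of experimental conditions: ")
(outdir / "README.txt").write_text(description + "\n")

# Assumes kinect_record accepts an output-directory argument.
subprocess.run(["kinect_record", str(outdir)])
```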


To get a first look at the 3D data, there are a couple of ways to go about it. I've been using the Interactive 3D Surface Plot plugin: open a single .pgm file in ImageJ and select "Plugins > Interactive 3D Surface Plot" from the top menu. You'll see the 3D viewer, with options to change the color ramp, scaling, etc. But one of the reasons we go through all the effort of getting the raw .pgm files is that we can combine several hundred images into a single stack and create an averaged view.

To create an image stack, use "File > Import > Image Sequence" and select the folder with a bunch* of .pgm files. You may need to allocate more memory to ImageJ, since a stack of 1000 images is over 1GB in size.

*I used to drag a few hundred files into ImageJ and then turn those open images into a stack, but the results were unpredictable (not all the files would be included in the stack, so I would have to stack the stacks using the "Concatenate" command under "Image > Stacks > Tools").
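Outside of ImageJ, the same stacking step is a few lines of numpy. A sketch, reusing the read_pgm16() helper from the first snippet and a hypothetical data folder name:

```python
import glob
import numpy as np

# Frames x rows x cols; sorted() keeps the frames in capture order.
paths = sorted(glob.glob("20130330_TEST-RUN/*.pgm"))  # hypothetical folder
stack = np.stack([read_pgm16(p) for p in paths])
print(stack.shape)  # e.g. (1000, 480, 640)
```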



Once I have a stack, I want to render it as a 3D surface (by default, ImageJ uses the 16-bit grayscale values, 0-65,535, for the Z-axis). I choose "Image > Stacks > Z Project" and pick "Average Intensity" to account for the variation of individual pixels. You can also select "Standard Deviation" or "Median" if you're curious to look at other characteristics of the data.
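For reference, the numpy equivalents of those Z-projections, applied to the stack from the previous sketch, are one line each:

```python
import numpy as np

# Collapse the stack along the frame axis (axis 0).
avg = stack.mean(axis=0)           # "Average Intensity"
std = stack.std(axis=0)            # "Standard Deviation"
median = np.median(stack, axis=0)  # "Median"
```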



Once in the 3D Surface Plotter, you can use one of the .ppm files to drape an RGB image onto the plot (that's how the image at the start of this post was produced).
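If you want a similar drape effect outside of ImageJ, matplotlib can texture a surface with an RGB frame. A rough sketch using the averaged depth from above and a placeholder .ppm filename (decimated, since plot_surface is slow on a full 640x480 grid):

```python
import matplotlib.pyplot as plt
import numpy as np

rgb = plt.imread("frame_0001.ppm")  # hypothetical .ppm from the same scan

step = 4  # decimate the grid to keep the plot responsive
z = avg[::step, ::step]
rows, cols = z.shape
x, y = np.meshgrid(np.arange(cols), np.arange(rows))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# facecolors drapes the RGB frame over the surface (this ignores the
# small physical offset between the Kinect's depth and RGB cameras).
ax.plot_surface(x, y, z, facecolors=rgb[::step, ::step] / 255.0,
                rstride=1, cstride=1, shade=False)
plt.show()
```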



The two images above are from a scan of my lab floor (the area of view is about 1.5m x 1m). I placed a ~2cm tall ruler along the left side of the view to aid in looking at variation of the data with distance. The top image is an averaged view of the depth data, while the lower image shows the standard deviation. The biggest variation is along the front of the scan (where there were probably some scaling/distance issues) and along the sides of the ruler. It's a handy way of quickly checking the data quality. The floor isn't actually that bumpy, either; there are some aberrations that we'll have to build into our data correction scripts.

So what can I do right now? Here are a few examples:

You can see the lower edge drops off in this surface model.

Mapping the standard deviation, you can see where the scanner was having trouble getting data. (Standing water can interfere with the IR laser the Kinect uses to measure distance.)

There's a Contour Plotter plugin you can use to map contours, though I haven't figured out how to export those contours directly onto the 3D image (I draped a screenshot of the contour output onto the 3D surface).
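For flat, map-view output, though, matplotlib will happily draw contours of the averaged depth straight over an RGB frame, which avoids the screenshot step entirely. A sketch, again with a placeholder filename:

```python
import matplotlib.pyplot as plt

rgb = plt.imread("frame_0001.ppm")  # hypothetical .ppm from the same scan

fig, ax = plt.subplots()
ax.imshow(rgb)  # depth and RGB share the same 640x480 pixel grid
cs = ax.contour(avg, levels=15, colors="yellow", linewidths=0.5)
ax.clabel(cs, inline=True, fontsize=6)  # label the contour lines
plt.show()
```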

So for now, I'm making very pretty pictures with ImageJ. If I didn't want more quantified results, this might be enough. But I'd really like to tack numbers onto this output and compare stream profiles produced by changing inputs like discharge and sediment supply.









