
Sunday, March 31, 2013

This is not a... cow?

I visited the Eau Claire Children's Museum this weekend with my sister and niece. One of the exhibits was a miniature dentist's office, complete with some kind of animal sitting in the dentist's chair. Is it a cow? A horse? I'm not really sure. Any paleontologist or biologist familiar with vertebrates will tell you that mammals can be identified to specific groups fairly confidently by the number and type of teeth in the mouth. Based on that, however, the animal sitting in the chair is a human (or other ape)...



My 13-month-old niece found the critter somewhat amusing. The adults in our group found it creepy.

Easter Monotreme

It's Easter Sunday for many people around the world - this also means that the Easter "Bunny" is showing up, hiding eggs for people to find. But rabbits and eggs don't generally co-occur. There is, however, a group of mammals that do lay eggs: monotremes, like the platypus and echidna.

The platypus also has a flattened snout that looks like a duck's bill. Instead of the Easter Bunny, I think we need to create the Easter Platypus. Besides, platypuses have a nasty claw-like spur on their back legs, and the males' spurs are venomous.


Saturday, March 30, 2013

Dinosaur WHAT now?!?

Visited the Children's Museum with my sister and niece. On our way out the door, I noticed this on one of the gift shop shelves:

ORLY?

So that's how it is in their family...

Hacking the Em2: Using ImageJ to create 3D surface images

I've been putzing around with ImageJ as a way to visualize the 3D data produced by the Kinect scanner. So far, I'm pleased with how quick and easy the process is, but I'm a little disappointed by the relative lack of additional analytical capabilities. However, looking through the available plug-ins and scripting commands, it may be possible to do some of the higher-level quantitative analyses by establishing a few reference points on the model in conjunction with image-processing scripts to correct for lens aberrations and other errors.

The Python scripts Todd Zimmerman has been working on (which can export to an ImageJ-readable file) have been on pause during the school year due to teaching demands, but I hope we can produce some custom Python scripts that people can download. Most of our process has been based on what Ken Mankoff has published on his blog, followed up with a lot of trial and error.

For this post, I'll talk a little bit about the process that I used to make images like this:



First off, this description assumes you have already installed a functioning version of kinect_record or a similar program to grab the .pgm (16-bit grayscale depth/distance data) and .ppm (8-bit RGB image) files directly from the Kinect sensor.
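If you want to poke at those depth frames outside of ImageJ, the .pgm format is simple enough to read by hand. Here's a minimal Python/NumPy sketch, assuming standard binary (P5) 16-bit PGM files; the folder and file names are placeholders, not our actual naming.

```python
# read_pgm.py - minimal sketch: load one 16-bit binary (P5) .pgm depth
# frame into a NumPy array. Assumes the standard PGM layout: "P5", width,
# height, and maxval in the header, followed by raw big-endian samples.
import numpy as np

def read_pgm16(path):
    with open(path, "rb") as f:
        # Gather the four whitespace-separated header tokens, skipping
        # any comment lines that start with '#'.
        tokens = []
        while len(tokens) < 4:
            line = f.readline()
            if line.startswith(b"#"):
                continue
            tokens += line.split()
        magic, width, height, maxval = tokens[0], int(tokens[1]), int(tokens[2]), int(tokens[3])
        assert magic == b"P5" and maxval > 255, "expected a 16-bit binary PGM"
        # The PGM spec stores 16-bit samples big-endian; use "<u2" instead
        # if your recorder writes little-endian.
        data = np.frombuffer(f.read(width * height * 2), dtype=">u2")
    return data.reshape(height, width).astype(np.uint16)

depth = read_pgm16("20130330_TEST-RUN/depth_0001.pgm")  # placeholder name
print(depth.shape, depth.min(), depth.max())
```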

These currently produce 640x480 pixel images - but I'm hoping we can tap into the raw output and expand it to 1024x768. The uncertainty in image quality jumps considerably when the sensor is more than 1 meter away from the table, so maximizing precision means sacrificing field of view. The x/y view at 1 m distance is a little more than 1 m, so with one sensor we can cover about half the table. Since the very top reach of the Em2 is occupied by the flow output and the lower reach is occupied by the standpipe, you get about 60% coverage of the effective stream table. That's not bad, but we're tempted to get a 2nd Kinect and set the pair up so their fields of view overlap in the middle, covering the upper and lower portions.

I've started using a "C-stand" to mount the Kinect sensor more directly above the Em2 - it's much more flexible in terms of positioning than a tripod. C-stands aren't built to be lightweight, but the steel-tube construction is pretty bulletproof and stays put. Now I just need another one to mount some good directional lighting to help bring out texture and detail.



The Kinect is set up to dump the data to a Linux computer sitting on the back table. Todd wrote a script to run the program so we can create a specific folder to contain all the data and also add a text file with a brief description of the experimental conditions. It's very convenient - plus, naming each data folder as YYYYMMDD_EXPERIMENT-TYPE means the directories are easy to sort through. That's particularly useful because the narrowed view of the stream table makes it hard to tell what we were doing from the appearance of the RGB image alone.
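Todd's script isn't posted here, but the idea is simple enough to sketch in Python. Everything below (the folder name, the README convention, the base path) is my own placeholder, not his actual code:

```python
# new_run.py - minimal sketch of a "start a new experiment" helper: make a
# YYYYMMDD_EXPERIMENT-TYPE folder and drop a short description file inside.
import os
from datetime import date

def new_run_folder(experiment_type, description, base="kinect_data"):
    name = "{}_{}".format(date.today().strftime("%Y%m%d"), experiment_type)
    path = os.path.join(base, name)
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, "README.txt"), "w") as f:
        f.write(description + "\n")
    return path

path = new_run_folder("MEANDER-TEST", "Em2 run: moderate discharge, no base-level change.")
print("recording into", path)
```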


There are a couple of ways to get a first look at the 3D data. I've been using the 3D Surface Plot plugin: open one .pgm file in ImageJ and, from the top menus, select "Plugins > Interactive 3D Surface Plot." You'll see the 3D viewer and options to change the color ramp, scaling, etc. But one of the reasons we go through all the effort to get the raw .pgm files is that we can combine several hundred images into a single stack and create an averaged view.
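Outside of ImageJ, you can get a similar quick look with matplotlib, using the read_pgm16 sketch from above (downsampled so the plot stays responsive; the file name is again a placeholder):

```python
# quick_view.py - minimal sketch of a first 3D look at a single depth frame,
# loosely mirroring the Interactive 3D Surface Plot plugin.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the "3d" projection

depth = read_pgm16("20130330_TEST-RUN/depth_0001.pgm").astype(float)
depth = depth[::8, ::8]  # downsample 640x480 -> 80x60 for a snappy plot
y, x = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(x, y, depth, cmap="jet")  # color ramp, like the plugin
plt.show()
```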

To create an image stack, use "File > Import > Image Sequence" to select the folder with a bunch* of .pgm files. You may need to allocate more memory to the program, since a stack of 1000 images is over 1 GB in size.

*I used to drag a few hundred files into ImageJ and then turn those open images into a stack, but the results were unpredictable (not all the files would be included in the stack, so I would have to stack the stacks using the "Concatenate" command from the "Stack" menu).



Once I have a stack, I want to render it as a 3D surface (ImageJ uses the 16-bit grayscale values, 0-65,535, for the Z-axis values by default). I choose "Stack > Z-Project" and then pick "Average Intensity" to account for the variation of individual pixels. You can also select "Standard Deviation" or "Median" if you're curious to look at other characteristics of the data.
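If you'd rather script the projection, the same idea is a few lines of NumPy (leaning on the read_pgm16 sketch from earlier; the folder name is a placeholder):

```python
# stack_average.py - minimal sketch of ImageJ's "Average Intensity" and
# "Standard Deviation" Z-projections done outside the GUI.
import glob
import numpy as np

paths = sorted(glob.glob("20130330_TEST-RUN/*.pgm"))
# float32 keeps a 1000-frame stack of 640x480 frames at roughly 1.2 GB
stack = np.stack([read_pgm16(p) for p in paths]).astype(np.float32)

mean_depth = stack.mean(axis=0)      # averages away single-pixel noise
std_depth = stack.std(axis=0)        # flags pixels the sensor struggles with
median_depth = np.median(stack, axis=0)

print("noisiest pixel varies by about", std_depth.max(), "grayscale units")
```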



When in the 3D Surface Plotter, you can use one of the .ppm files to drape an RGB image onto the plot (to produce the image at the start of this post).



The two images above are from a scan of my lab floor (area of view is about 1.5 m x 1 m). I placed a ~2 cm tall ruler along the left side of the view to aid in looking at variation of the data with distance. The top image is an averaged view of the depth data, while the lower image shows the standard deviation. The biggest variation is along the front of the scan (where there were probably some scaling/distance issues) and along the sides of the ruler. It's a handy way of quickly checking the data quality. The floor isn't really that bumpy, either - there are some aberrations that we'll have to account for in our data-correction scripts.

So what can I do right now? Here are a few examples:

You can see the lower edge drops off in this surface model.

Mapping the standard deviation, you can see where the scanner was having trouble getting data (standing water can interfere with the IR laser the Kinect uses to measure distance).

There's a Contour Plotter plugin that you can use to map contours - I haven't figured out how to export these contours directly onto the 3D image (I draped a screenshot of the contour output onto the 3D surface).

So for now, I'm making very pretty pictures with ImageJ. If I didn't want more quantified results, this might be enough. But I'd really like to tack numbers onto this output, and compare stream profiles that are produced by changing inputs like discharge and sediment supply.

Friday, March 29, 2013

A "Spring" Break project: Making a Snow Dragon

It's now been a full week since our Spring Break. And in that time, many people have commented about our big "Seekrit Projekt." Kelly blogged about it, and Neil managed to get into town to see it, too.

Sometimes big things come out of small comments. In this case, it was a thought I mentioned aloud to Kelly McCullough a few months ago: my idea was to build a large elephant out of snow in Neil's yard. Since Neil has been away and very busy, I felt he needed something a little more bizarre to come home to. But Kelly had another thought - a giant dragon, using existing landscape features. It would end up being over 220 feet long and requiring five days of construction. With the help of many volunteers (including our ever-patient and understanding spouses), we put in over 40 hours of snow moving.

Here's what I sketched up at the start:
Dragon Sketch #1

Then we walked out the general shape, and Kelly spent a couple of days using a snow blower to move lots of the snow. As we got towards the end, we made adjustments to the head, which required a re-design, since the original dragon was too much "monkey" and not enough "Asian dragon."

Dragon head sketch

Of course we had to take pictures. Not only did we get some fun shots, I also set up a time lapse camera to record the entire process. Here's the video link for those who can't see the embed.

Building the Snow Dragon from Matt Kuchta on Vimeo.

Having graduated with a double major in Geology and Studio Art, I sometimes have these competing whims. As a geology professor, I have great opportunities to indulge my geologic curiosity. Having friends with wild imaginations, motivation, and a knack for the silly gave me a great opportunity to make some Art over spring break.

Rescuing the barbarian.

Me and my dragon.

Thursday, March 21, 2013

This Is Interesting

I downloaded ImageJ and have been playing around with combining the 3D data into a composite stack. What's nice is that it will take several hundred individual scans and combine them by median or mean depth values. So of course I had to run the color-coded sediment and take a Kinect scan:


The process needs refining, but this might be exactly what I'm looking for...

The first rule of completing a big snow sculpture

Is to pace yourself. It's a marathon, not a sprint. The second rule: don't spend too much time thinking to yourself, "wow, this is huge." At least not when you're down there chipping snow away from what's going to be a claw-tipped finger...


Go home Jet Stream, you're drunk

It's been an exceptionally cold and snowy March - contrast this with last year, when we had an exceptionally warm March. Like 60°F warmer than this year. Much of the reason for this is the Jet Stream - most years it flops back and forth between winter and summer circulation patterns, and this shift in the position of the Jet is one of the reasons why those of us in the Midwest of the US get so much precipitation in spring and fall. This year, the high- and low-pressure regions that steer the Jet Stream are arranged so that it loops around much more than usual. One of these "loops" is allowing cold arctic air to circulate southward into our neck of the woods. Dr. Jeff's Wunderblog has a great summary (but DO NOT expect to read the comments and retain your faith in humanity).

The side benefit for Kelly and me is that we have nearly two feet of snow to work with for our "Seekrit Projekt." We spent another couple of hours on it yesterday - with Todd's help - and the objet d'art is shaping up nicely.


Wednesday, March 20, 2013

Collaboration: it's what makes science work

I've been gathering a ton of data on the Emriver color-coded plastic media - I've got another batch of specific gravity measurements that I'll post either today or tomorrow, and an update on using image processing software to analyze time lapse photos. But I came home after seeing a movie with my wife and some friends to find that Steve Gough had posted a shout-out with some very kind things to say about my analyses of their coded media.


I must say - their four years of R&D really shows through. Every time I look at this stuff, I notice that its characteristics are really appropriate for these small to medium scale stream systems. It's got the right amount of fine material to hold valley walls together until oversteepening due to lateral stream erosion causes bank collapse, but not so much fine material that you find it clinging to everything or that it's all fallen through the sediment net in the reservoir and gummed up the pump filter. The coarse and medium fraction is large enough to show up at reasonable time-lapse photo distances.

And the colors. Don't forget the colors. Steve mentioned my frequency distribution chart was "Tuftonian." Which it was, but that was in large part due to LRRD's choice of colors to begin with. They are easy on the eyes, show up well in person and in photographs, and look gorgeous moving around in the stream table. So thanks, Steve, Alee, Lily, Meriam, Christina, Nathan, and Jim, for the tip of the hat. I'm grateful that there are people like you working on making geoscience education easier for all of us. I feel like I've got a new car - I'm almost done driving it the required break-in distance and am ready to get onto the highway, give it some gas, and see what this thing can really do.

And I realize that many of the really high-end tools (like the Em4, or even the color-coded media, or just the basic Em2) are beyond the means of so many deserving and qualified outreach and education professionals. One of my goals is to bring a little bit of that to you and show how you can accomplish many of the same things on a reduced budget. If you haven't yet emailed LRRD and at least talked to them about what they might be able to do for you, please do. They're very busy, but they've bent over backwards to help me, and I'm sure they'd do the same for you. Or you can check in with me, or any of the 100+ owners of their stream tables around the world.



Tuesday, March 19, 2013

"Spring" Break?

It's a little snowier this year. I'm ready for the jet stream to kick itself northward and bring some warm air with it. I'm almost done with snow. ALMOST. Kelly and I have a seekrit projekt in the works, so having all this snow is actually a good thing.
What's going on? Stay tuned!

Sunday, March 17, 2013

Spring Break!

It's Spring Break around here. I have some plans for time-lapse movies of the Em2 with the color-coded sediment. I should be able to wrap up the specific gravity measurements, too.

And given the amounts of snow we have around here, Kelly McCullough and I have designs on making a giant snow dragon over at Neil's. We've done a few other silly things there in the past...



Saturday, March 16, 2013

Comet hunting in about 10 minutes

I'm going to head out and try to see comet Pan-STARRS this evening. A few days ago the overhead sky was pretty clear, but the clouds on the horizon obscured any view of the comet. Today the sky is fairly clear and hopefully the western horizon will be more visible.

UPDATE: No Joy on the comet... (sad trombone)

Using Photoshop for Stream Table Image Analysis

Yet another wonderful thing about the color-coded plastic media is the ability to analyze stream features over time. The colors tend to organize themselves into patterns, based on where the water is flowing, that correspond to areas of erosion, transport, and deposition. My hope is that I can use these colors to define active/abandoned channels in each time lapse frame - stacking frames will then show how those channel patterns change over time. But how to simplify the images? One of the challenges lies in the fact that the yellow sediment contains some red hues - a simple saturation boost would make the yellows pop out too much. So I grabbed a picture of the stream table and tried out some Photoshop filters and adjustments to simplify the image. I've put them into an animated GIF to show the differences.




I've noticed that the smallest red grains tend to show up in areas of scouring, while the white and yellow are often the first to move and the first to be deposited. Some of these adjustments really help pull out the yellow and red. Now it's time to get a time-lapse sequence and try some time series analysis.
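I did the adjustments above by hand in Photoshop, but the same simplification can be scripted. Here's a minimal Python sketch using Pillow and NumPy; the hue/saturation cutoffs are guesses I'd tune against a real frame rather than measured values, and the file name is a placeholder.

```python
# color_facies.py - minimal sketch: classify each pixel of a stream table
# photo as yellow, red, white, or brown sediment using HSV thresholds.
import numpy as np
from PIL import Image

img = Image.open("em2_frame.jpg").convert("HSV")
h, s, v = [np.asarray(band, dtype=np.float32) for band in img.split()]

# Pillow scales hue to 0-255; all cutoffs below are illustrative guesses.
yellow = (h > 25) & (h < 50) & (s > 80)
red = ((h < 15) | (h > 240)) & (s > 80)
white = (s < 40) & (v > 180)
brown = ~(yellow | red | white)

for name, mask in [("yellow", yellow), ("red", red), ("white", white), ("brown", brown)]:
    print(name, round(100.0 * mask.mean(), 1), "% of pixels")
```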

Even more grain data


I finished the sieve analysis of the Emriver coded media. I pulled about a quart of material from each of the five buckets that were delivered (about 650 g per sample) for a total of 3.2 kg sieved material.


I have also been analyzing the sieved fractions for specific gravity (SG). As mentioned before, I noticed the white material was yielding lower SG values than the other colors. Steve Gough suggested that I let the material soak a little longer. I had done that with a few samples, but I've started to let them soak overnight. The white material is still producing slightly lower values than the yellow, brown, or red. Here's a summary of the results so far (the two lowest SG values weren't soaked overnight):


Here's a set of bulk sediment measurements - I think the effect of not soaking them overnight is yielding a few slightly lower measurements, but the results suggest a bulk SG of about 1.53 - just a little shy of what the LRRD folks report (1.55).



Friday, March 15, 2013

Commenting?

It appears that at least one regular visitor hasn't been able to comment on the blog. It also appears that Google Reader's demise means I have one less aggregation system to troubleshoot - although RSS feeds are still cut short.

Have you, dear reader, been able to comment? It's kind of silly to judge this based solely on comments for obvious reasons. But if you want to send an email to capn.pituitary AT gmail dot com and let me know of your commenting woes, that would work too.

Thursday, March 14, 2013

MOAR GRAINS!

I'm still tweaking the grain size analysis. Instead of a visual estimate, I took a picture of the sieved fraction and measured the proportion of pixels in different color ranges. By selecting a "color range," I can find out how many pixels are yellow, compare that to the total number of pixels, and get a much more refined estimate.

Here's an example - I masked off a circle to isolate just the sieved sediment.


Using the color proportions measured with Photoshop (ImageJ also can do this), I modified the histogram to represent each color fraction.
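The bookkeeping behind that modified histogram looks something like this in code (all the numbers below are placeholders, not my measured values):

```python
# split_histogram.py - minimal sketch: split each sieve fraction's mass into
# per-color bars using the pixel proportions measured from the masked photo.
sieve_mass_g = {"#10": 95.0, "#14": 180.0, "#18": 210.0, "pan": 60.0}
color_proportion = {
    "#10": {"yellow": 0.70, "white": 0.20, "brown": 0.08, "red": 0.02},
    "#14": {"yellow": 0.30, "white": 0.45, "brown": 0.20, "red": 0.05},
    "#18": {"yellow": 0.05, "white": 0.25, "brown": 0.45, "red": 0.25},
    "pan": {"yellow": 0.00, "white": 0.05, "brown": 0.15, "red": 0.80},
}

for sieve, total in sieve_mass_g.items():
    grams = {color: round(total * p, 1) for color, p in color_proportion[sieve].items()}
    print(sieve, grams)
```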


Just for fun, here's an animated GIF showing each fraction (Pan-#10)

Wednesday, March 13, 2013

Specific Gravity of coded media

I've been measuring the specific gravity of the color-coded plastic media. The sieve fractions that I separated earlier yielded some interesting preliminary results:

These represent single measurements - so there may be a lot of variation that's not accounted for. I used a volumetric flask and vacuum method to remove the air bubbles. The white fraction did appear to generate more bubbles than the other color fractions, so perhaps there is something inherent in the material to explain why they seem to have a slightly lower specific gravity. Does this influence how the material sorts itself during a stream run? Maybe, but there's lots of other fluid dynamics stuff to consider, too.
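For reference, the volumetric flask arithmetic reduces to a single ratio. Here's a quick sketch (the weights are placeholders, not my lab numbers):

```python
# specific_gravity.py - minimal sketch of the volumetric flask calculation:
# SG = Ws / (Ws + Wfw - Wfws), where Ws is the dry sediment weight, Wfw the
# flask filled to the mark with water only, and Wfws the flask with both
# sediment and water (air bubbles pulled out under vacuum).
def specific_gravity(w_sediment, w_flask_water, w_flask_water_sediment):
    displaced_water = w_sediment + w_flask_water - w_flask_water_sediment
    return w_sediment / displaced_water

print(round(specific_gravity(50.0, 350.0, 367.5), 3))  # -> 1.538
```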

Here's one of Steve's Time-Lapse videos showing how running water can generate gorgeous color patterns.

Steve Gough posted a comment, but it was eaten alive by Google:
This is super interesting to us at LRRD!  We ran a lot of specific gravity tests as we developed the media; and found very close clustering, but more at about 1.60.  I'm not too concerned about a 0.05 difference between colors, but if it got bigger I would be!  Melamine has very low water absorption (for some plastics it can be 10% plus), but it's possible that's part of the problem; you might try soaking all the samples for a few days before testing.  Thanks!

Sunday, March 10, 2013

Digitizing the Em2

Some of you may be familiar with the Raspberry Pi minicomputer concept. Some of you might also be familiar with the Arduino digital interface/controller. My colleague Todd ordered a Raspberry Pi, and it arrived this last week. I couldn't help imagining how we could incorporate this tiny little computer into our Em2 hacks.

The Raspberry Pi has a few advantages over the Arduino board that make the Pi a better base platform for digitizing the Em2.

  • Linux-based operating system, programming opportunities with Python

  • HDMI output for display on a large monitor

  • USB input/output for keyboard, mouse, hard drives, Kinect scanner, etc.

  • Ethernet (10/100) connectivity

  • SD card as boot/flash drive (download/mount data and images with regular computer)

  • General purpose Input/Output pins for connecting other devices



That last item deserves emphasis - the I/O pins provide opportunities to connect to digital devices like LRRD's digital flow controller, or other sensors that could collect data, and then combine and sync all the output.

My first priority is to get the Raspberry Pi to talk to the Kinect and automate the 3D scanning process. Ideally, we'd have two Kinects to cover the entire stream table. These would then be linked to an overall timeline of a particular experiment run (with information on discharge, sediment supply, and base level). There is a digital camera in development for the Pi (5 MP) that could form the basis of photogrammetry measurements. With everything automated, the cameras could be locked into position to maintain consistency.

My dream setup includes a Raspberry Pi (or two) controlling all of the following (a rough sketch of the shared-timeline idea follows the list):
• Discharge from the digital flow controller (or monitored via a simple Venturi tube, pump voltage, etc.)
• Kinect/3D data
• Sediment supply system voltage (either the LEGO version I've got, or something more robust)
• Base level elevation measurements
• Time lapse photography
• An optical sediment sensor consisting of a UV LED to record the presence of individual fluorescent plastic bits that make up a small fraction of the sediment (as an estimate of bedload transport)
• All synchronized to a single timeline
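To make the "single timeline" idea concrete, here's a minimal Python sketch. The sensor functions are hypothetical stand-ins I made up; none of this is LRRD's or Todd's actual code.

```python
# timeline_logger.py - minimal sketch of the synchronized timeline: every
# reading gets stamped against one clock so depth scans, photos, and flow
# data can be lined up afterward.
import csv
import time

def read_discharge():    # stand-in for a flow controller / Venturi interface
    return 120.0

def read_base_level():   # stand-in for a base-level sensor
    return 14.2

with open("timeline.csv", "w", newline="") as f:
    log = csv.writer(f)
    log.writerow(["t_seconds", "discharge_mls", "base_level_mm"])
    t0 = time.time()
    for _ in range(10):  # ten samples, one per second
        log.writerow([round(time.time() - t0, 2), read_discharge(), read_base_level()])
        time.sleep(1.0)
```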

So, I'll keep on hacking and we'll see where we're at in a month, a semester, or a year. Who knows - these kinds of things always end up changing as new opportunities arise, plans prove overly ambitious, technology doesn't cooperate, or whatever. The thing to keep in mind is that (to rephrase John Lennon) "Research is what happens while you are busy making other plans."

Saturday, March 09, 2013

Emriver color-coded sediment, un-mixing the media.

My soil mechanics students had their grain size analysis lab this week. It's fun - they get to analyze granular materials by playing with sand. Specifically, the plastic sand that comes standard with the Em2 stream table. Last year I ran the standard media through a stack of sieves. This year, I ran the color-coded plastic media through the sieves.

These gradation curves are a quick way of describing and comparing different sediments. The standard media (blue curve) differs in having a little less material between 1 and 2 mm in size (but more material larger than 2.4 mm) and quite a bit more fine material around 0.5 mm. The horizontal axis is in microns (1 mm = 1,000 microns), and it's on a log scale because there's such a huge range in diameter.


Here's a frequency curve showing the relative proportions of various size fractions. The "Phi Value" is another way sedimentologists scale the wide range of particle sizes (Phi Value = -log2(diameter in mm)). So the Phi Value of a sand grain 1 mm in diameter is zero. Notice the big bump in the standard media between 1.5 and 2 phi. My own speculation is that as the material moves around, the big particles grind themselves down into particles around this size (0.3 mm).
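As a quick sanity check on the phi scale (my own snippet, nothing fancy):

```python
# phi_scale.py - the phi conversion: phi = -log2(diameter in mm)
import math

def phi(d_mm):
    return -math.log2(d_mm)

print(phi(1.0))  # -0.0, i.e. a 1 mm grain sits at zero phi
print(phi(0.3))  # ~1.74, inside the 1.5-2 phi bump mentioned above
```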

What you can't see - and what is the true brilliance behind the color-coded material - is how moving water separates the color-coded particles into various color patterns. You can see the colors in the sieve separates, too.



So let's look at the distribution of grain sizes in the color coded media a little more closely:



I like the distribution graph (given a little bit of visual "boost" by arbitrarily smoothing the line graph in Excel) because it hints at the formula for the Emriver sediment. There are four "bumps" that correspond to the sizes of the four different colors (Yellow = 1.4 mm, White = 1 mm, Brown = 0.7 mm, Red = 0.4 mm).

Here's an overlay of the color fractions - the curves below represent approximate distributions of each individual fraction.

Here's a picture from Steve, the big guy at LRRD, on his blog (Riparian Rap), showing the Em4 being filled with unmixed coded media. Given that some of the smallest yellow particles are smaller than the largest white particles, un-mixing the coded media into perfect color fractions isn't possible by mechanical means alone.

My plan now: use the grain size distributions to create "color facies" for the plastic media - providing a way to do grain size analysis with time lapse photographs.

Update: talking with Steve over email, I realized that the color fraction "curves" as drawn above may imply more quantitative "knowledge" than I really have about the distribution of each color. Here's a histogram, with the color fractions approximated by visually estimating the proportion of different colors visible in each container (vertical scale is in grams):




Thursday, March 07, 2013

Slow-Mo Sedimentation: When Stokes' Law Doesn't Apply

Thanks to the ever-helpful folks at LRRD, the colored sediment for my lab's Em2 arrived last week. I shot some high-speed video of a scoop falling through water. Based on the results, I tried again with the help of my colleague Todd Zimmerman. We tried a few different colored gels on the translucent backdrop, but the first one I used, a nice deep blue, works the best. I think that's because the red-orange hues in all of the sediment particle sizes contrast well with the blue.

Colored sediment: when Stokes Law does not apply from Matt Kuchta on Vimeo.

Contrast this video with the footage we captured a few years ago of ball bearings falling through corn syrup:
What a Drag! Falling Through Syrup from Matt Kuchta on Vimeo.

The single ball bearing is a good example of how Stokes' Law works. I blogged about it before, too. But the first video shows many particles. These particles are banging into each other, and the combined mass of the particles is pushing the water around in turbulent eddies. Stokes' Law does not apply because the settling of each particle is hindered by interactions with other particles and the surrounding fluid. In these cases, we're often without a simple, elegant equation to describe what's happening. Instead, we have to rely on empirical observations, such as the bedforms left behind in the sediment after the particles are deposited. In the end, however, many of the smallest particles are left behind to drape over the entire pile of material. So even in these chaotic, turbulent systems, Stokes' observations can still help inform us about these processes.
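For a single small sphere, Stokes' Law gives the terminal settling velocity directly. Here's a quick sketch with ballpark numbers of my own choosing (not measurements) for a small melamine grain in room-temperature water:

```python
# stokes.py - terminal settling velocity from Stokes' Law:
# v = (2/9) * (rho_particle - rho_fluid) * g * r^2 / mu
# Only valid at low Reynolds number - one lone grain, not the crowded,
# turbulent plume in the video above.
def stokes_velocity(r_m, rho_particle, rho_fluid, mu):
    g = 9.81  # gravitational acceleration, m/s^2
    return (2.0 / 9.0) * (rho_particle - rho_fluid) * g * r_m ** 2 / mu

# ~0.2 mm diameter grain (SG ~1.5) in 20 C water (mu ~1.0e-3 Pa*s);
# even this is near the edge of the Stokes regime.
v = stokes_velocity(0.0001, 1500.0, 1000.0, 1.0e-3)
print(round(v, 4), "m/s")  # about 0.011 m/s, roughly 1 cm/s
```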



Tuesday, March 05, 2013

Emriver Color-Coded sediment in action

A little test run with the new color-coded sediment that arrived last week. It's so pretty!



I seem to recall some fluvial stratigraphy diagrams tossing around facies patterns that look strikingly similar.


I'm rather excited about the 3D facies mapping possibilities here. Too bad I have so many other irons in the fire.

Saturday, March 02, 2013

Sediment Transport Porn

I think I've found the right material to use in high-speed video of sediment transport.

A link for readers whose aggregators may cut things off: https://vimeo.com/60882601

The color-coded media supplied by Little River Research and Design looks absolutely gorgeous. I got some the other day, and I just had to grab a few scoops to shoot some video Friday afternoon:

Color-Coded Sediment from Matt Kuchta on Vimeo.
Some new material arrived in the lab today. Just a quick clip of a scoop of the color-coded sediment settling onto the bottom of a glass of water.