Because sometimes water vapor goes directly to the solid phase, does not pass Go, and does not collect $200.
If you aren't moving at a snail's pace, you aren't moving at all. -Iris Murdoch
Wednesday, November 28, 2012
Volcano Webcam Photobombing
About two years ago, I was checking the Kamoamoa webcam near the Pu'u O'o vent in Hawaii Volcanoes National Park and I happened to see this:
It's an amusing reminder that for those of us far away from the volcano, there are teams of dedicated professionals working hard every day to learn more about these fascinating systems. As an educator, I also appreciate that there are people directly involved in bringing these real-time views of the world around us into our living rooms. Thanks!
Tuesday, November 27, 2012
It's Time for another BOOK RELEASE!
My pal Kelly McCullough has another book in the "Broken Blade" series coming out today. It's amazing to think about how fast time has flown by since I first drew the maps for the series. Go buy a copy! If you're in western Wisconsin or the Twin Cities area, be sure to check out the signing events! He'll be at Har Mar this Thursday evening and at Uncle Hugo's on Saturday. Or go grab a copy from your local bookseller. They don't have it? Ask them to get copies. Live far away? There's a Kindle version, too!
How many fantasy books can you name that contain maps drafted by an actual geologist? I'm glad to say that Kelly's use of geology and geography avoids many of the irritating geologic pitfalls that this genre is prone to. Besides, you know you want to read a story about an ex-assassin who has a dragon familiar that is literally a shadow.
Who knows - if enough people buy his books I might need to revisit and update/expand the maps I drew. And that would make both of us happy.
Tuesday, November 20, 2012
Random Tuesday Thots
I had many things I wanted to do today. The student projects in my soil science class are starting to ramp up and the drying oven is on just about 24/7 (actually only about 20/7, because there's a 12-hr timer on the oven). But of all the things that I want/need to do the day before Thanksgiving break, dealing with a broken sink is not one of them. The connecting nut between the downspout/trap assembly and the sink basin has sheared off. It's a good thing I had put a bin underneath the trap last year when I noticed a few drips - otherwise I'd have about half a gallon of mud, water, and methylene blue all over my floor. Sigh. Being in charge of a lab is hard work - I hope the students get something valuable from the opportunity to work on semi-independent projects for half a semester. Now I just need to get the sink fixed so that they can do the washing up...
My sister's birthday was yesterday. She's six years younger than I am and is a high school English teacher. Her husband is a college chemistry prof. Our mom is a K-12 librarian/learning technology specialist, our dad is a high school chemistry/biology teacher, our maternal grandpa was a high school chemistry teacher, and our grandmother was an elementary school special education teacher. I think my sister and I were doomed to careers in education from the get-go. Makes me wonder what her 9-month-old daughter is going to do. Is she predisposed to the kinds of activities that favor education? Is being raised by educators giving her the kinds of experiences that foster being a good teacher? I don't know, but she is getting to be quite the cute kid (here's a photo my sister took the other day).
Looks like blogging is in the family, too. My sister has her own blog - go over and say hi if you'd like. And don't ask her why I'm so weird - she doesn't know either.
Saturday, November 17, 2012
Slow-Motion Saturday
I spent what little good light we have during these fall days shooting some slow-motion footage of bursting water balloons. But I went for a more dramatic approach:
WaterBalloonGlissando from Matt Kuchta on Vimeo.
I've dropped balloons onto thumbtacks, stuck them with needles, even shot them with BBs. But there's something absolutely amazing and vaguely disturbing about using a straight razor. This also marks my first time using my new C-stand. It's just about the best thing ever for hanging the balloons, or for holding the razor while I throw balloons at it. C-stands also make excellent holders for a plexiglass blast shield (1/4" to 1/2") to protect the camera from flying debris.
Here's what the setup looked like with the balloon hanging off the C-stand. I shot the video with the balloons lit from behind by the sun. It made for a wonderful effect with the balloons, but I think I need to find a better backdrop.
The demon barber of water balloons...
Star Trails
Went out early this morning, looking for some meteors. I saw a dozen or so, including a couple of very bright ones, right about on par with the estimates for this year's Leonids. It may have been a low-key year as far as the Leonids go, but it was the best conjunction of time, light, and sky for me.
I took pictures of star trails - no meteors showed up in my pictures, but I do kind of like this shot of Orion. I used a trick I learned several years ago from an astrophotographer - open the shutter, then cover the lens for 30 seconds to a minute, then uncover the lens once more to make the trails.
Thursday, November 15, 2012
This doesn't seem good...
I was setting up a spot in my lab where my students could measure soil pH - and as I took one of the probes out of the box, I noticed little black spots clinging to the side of the storage solution container:
Impressive, since the storage solution has a pH of 4.0. Although perhaps not so much any more...
Tuesday, November 13, 2012
Hacking the Em2: The Dye Job
The ground plastic that comes with the Em2 is really neat stuff. It's melamine, a thermoset resin that's been ground up. Melamine is interesting: mostly, it's just a bunch of carbon, nitrogen, and hydrogen linked into polymer chains. This is great for the plastic itself - there's not much for it to react with in the things it touches, which is why it works so well as the material in kitchen spoons and bowls. Turmeric, tomatoes, and food dyes don't "stick" and stain the plastic.
This is also a challenge to those of us trying to "colorize" specific size ranges of the plastic sand. Steve Gough has some interesting descriptions of LRRD's early experiments in adding color to the plastic. From Steve's comments, it sounds like they were using some kind of common fabric dye (like RIT) that requires boiling water. This kind of dye often forms an "anionic" dye ion (dye-on?) that bonds with positively charged parts of the clothing molecules. The problem is that there aren't really any sites for this kind of reaction in melamine.
There is another group of dyes where the "dye-on" is positively charged. These cationic dyes are often used to color nylon and other synthetic fabrics. It's quite possible they would work with melamine, but most cationic dyes are rather nasty. The dye cations can easily attach themselves to proteins, which makes them great for tissue stains in biology lab, but not something you want to ingest in any quantity.
The type of dye I'm experimenting with right now is called a "reactive dye" - instead of relying on an ionic charge, it forms a covalent bond with the target item (like wool). The reactive part of the dye molecule is a "dichlorotriazine" unit (two chlorines stuck on an N-C ring). The reaction tosses out one of the chlorines in favor of a covalent bond with the oxygen from a hydroxyl (OH) group on the target. This gives the dye a stronger hold, increasing its ability to resist coming out in the wash.
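In schematic shorthand (my sketch, glossing over the alkaline conditions that actually drive the reaction), the substitution looks like:

    Dye-Cl + HO-Target → Dye-O-Target + HCl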
A diagram of the dye molecule bonding with the target via a hydroxyl group on the target. The chlorine goes into solution at the end of the experiment.
So what I'm hoping is that this chlorine substitution can occur with the melamine polymers via their N-H bonds, or that some of the melamine polymers have hydroxyl groups attached. The examples of melamine resin I've found on the web all differ from one another - some had OH groups galore, others had none. At this point, I'm just going to experiment and see how colorfast this stuff is.
I've sieved a bunch of the plastic to separate the sand into various size fractions; my experiments use the #40-#45 sieve fraction (about 0.4 mm). Since the fines generally have a dark appearance, a distinctly colored fraction could help bring out the areas where fines are being deposited.
Here are the results so far. There isn't much difference in color intensity between most of these containers, despite the fact that they range from 7 hours to 4 days of soaking in the dye.
A dye concentration of 0.1 g dye powder to 10 g sand seems to be just about right. And there doesn't seem to be much difference between 7 hours at 150°F and 48 hours at room temperature - but we'll see whether there's a difference in how well the dye stays with the melamine after repeated washing, mixing with an 800:1 dilution of bleach, and churning around in a stand mixer to simulate repeated use in the stream table.
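For scaling a batch up or down, the arithmetic is trivial; here's a little Python helper (my own convenience sketch - the 1% dye ratio and the 800:1 bleach dilution are just the numbers from above):

    # Amounts for a dye batch at 0.1 g dye : 10 g sand (1% by weight),
    # plus bleach per liter of wash water at an 800:1 dilution.
    def batch_amounts(sand_g, dye_frac=0.01, bleach_dilution=800):
        dye_g = sand_g * dye_frac
        bleach_ml_per_l = 1000 / bleach_dilution
        return dye_g, bleach_ml_per_l

    dye, bleach = batch_amounts(250)  # e.g., a 250 g batch of sieved melamine
    print(f"{dye:.1f} g dye, {bleach:.2f} mL bleach per liter of wash")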
At this point, I think I've got something that will stand up to at least a few weeks of use in the stream table. But in terms of long-term stability and predictability, I don't see this replacing the expensive (and very beautiful) color-coded material.
A few words of caution. First, the dye is often referred to by its trade name, "Procion MX," and ships as a dry powder. Don't mix a bunch and then let it sit around, because the dye will hydrolyze in water (using the OH in H2O) - even during the dyeing itself, a lot of dye may hydrolyze and never react with the target material.
Second, the powdered dye is listed as an irritant - repeated exposure might lead to an allergic reaction, so wear rubber gloves and mix it under a fume hood or with a dust mask. Work small, because a little color goes a long way: the tiny red fraction of the LRRD colored media accounts for less than 5% by weight of the total material. Once in liquid, the dye will stain most fabrics, so proceed with care.
Of course there's the ultimate test - how does it work in the stream table? Once the colorfast tests are done, I'll run some through the stream table and see what happens.
Monday, November 12, 2012
Administrivia
The many levels of college deans, provosts, vice-chancellors, chancellors, and presidents can be described as "Administrata."
Friday, November 09, 2012
Taking it with you: A review of the Kiboko 22L camera bag
Gullfoss and Rainbow, Iceland Canon 7D 24mm Tilt-Shift Lens.
Over the years, I've used several smaller over-the-shoulder bags with waist belts. The one I have now is a pretty decent design. It's roomy, and I can toss my camera and a couple of lenses in there no problem. But to carry ALL THE THINGS, I needed a larger, backpack-style bag. I went so far as to pick up a used Lowe Mini-Trekker at one point, but its large access flap made it difficult to fish one particular item out without exposing the entire collection to the elements, or risking a spill out of the bag. So I was really excited to see that the Kiboko 22L camera bag from Gura Gear had a butterfly-style set of access flaps. Some people have found them awkward. I've found them invaluable.
So, a little bit about the bag itself. I went with the "22L+" size, which claims, among other things, that you can fit a whole bunch of stuff into the bag, including a 17" MacBook Pro. I didn't bring my 17" computer on the trip, but before I left I verified that yes, this giant surfboard of a computer does fit in the zippered slot behind the main bag. They also advertise room for five or six small-to-medium lenses, a camera flash, and a couple of camera bodies. Based on my experience, the advertised capacity for "stuff" is just about spot-on. There's a more detailed review at the Luminous Landscape photography blog - I don't need to spell out the features here; you can read up on the architecture of the bag there.
Taking in the view at Reykjavik Harbor (photo by Kelly McCullough).
Thingvellir, Iceland
Stones of Stenness, Mainland Orkney Scotland.
Here's an interesting anecdote about the design of the bag: I was photographing fulmars and guillemots on Mainland Orkney, completely engrossed in the view, and when I stopped to take my bag off my shoulders and set it on the ground, I discovered that I hadn't zipped the main flaps. So there I was, traipsing across 100-foot cliffs and bare rock with a completely unzipped camera bag! But thanks to the design, nothing fell out. The flaps stayed shut. I might have saved myself several times the cost of the bag in lens repair/replacement with that stunt alone. Lesson learned. The positioning of the zipper pulls makes it easy enough to verify things are closed.
Atop Arthur's Seat, Edinburgh Scotland (photo by Kelly McCullough)
Necropolis, Glasgow Scotland
On the Royal Mile, Edinburgh Scotland (photo by Kelly McCullough)
The camera bag held up very well, allowing me to lug about 12 kg (25 lbs) of equipment around the hard cobblestone streets of Edinburgh for about six hours without getting too tired. Plus there was plenty of room to stick a water bottle and some snacks to keep going.
Tilt-Shift Edinburgh
Squeezing up the stairs on the Scott Monument, Edinburgh Scotland (photo by Kelly McCullough)
It's rather big - which is great for holding gear, but made navigating tight paths a little challenging. I had to learn that I was skinnier facing forwards than sideways when I had the Kiboko on my shoulders. But the bag comfortably fit into the overhead compartment of the IcelandAir 757. I even had a short moment to check to see if it would fit underneath the seat in front of me. I was able to stash it under there, but this wouldn't likely work on airlines with less room beneath the seats.
As far as science goes, this bag isn't great for carrying messy gear - all the little pockets and folds are liable to trap grit, which would be bad for cameras and other electronics. But there's plenty of room for stashing a few well-sealed samples - the adjustable compartments make it easy to stash away a chunk of basalt or an ammonite fossil without the rocks mashing into the more fragile lenses.
Basically, it's been worth it for me. This thing isn't cheap, but if you've got a few thousand dollars' worth of photo equipment (GigaPans, anyone?), it's a very useful item. You can carry a tripod on the side, or bungee it to the top handle and carry it strapped to the center of the back (where it's less off-center). Plus there's an iPad or laptop pocket built in. My verdict: probably not what I'll bring along for grungy fieldwork - at least, I'll leave it in the van while I'm digging, then bring it out for the slightly less dirty aspects of exploration and photography. Just the kind of thing for a trip to northern Europe in the summer...
Panoramic view from the Quirang, Isle of Skye
Thursday, November 08, 2012
Mounting Cameras to view the Em2
Here's a helpful suggestion. Two, really. Many of us with the Em-series stream tables, or other linear physical models, often want to take time lapse photos of the stream table. To do this, you set up your camera, take a bunch of images in a sequence, and then link the images together into a movie. I've also been using the X-Box Kinect sensor to make 3D scans of the Em2 - a setup that also requires a good, stable view of the stream table.
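If you've never assembled the frames into a movie, here's one minimal way to do it with Python and OpenCV (the folder name, codec, and frame rate are placeholders, not a prescription):

    # Stitch sequentially numbered JPEGs into a time lapse movie.
    import glob
    import cv2

    frames = sorted(glob.glob("timelapse/*.jpg"))
    h, w = cv2.imread(frames[0]).shape[:2]

    # at 15 fps, one frame every 30 s compresses ~7.5 minutes of run time
    # into each second of video
    writer = cv2.VideoWriter("em2_run.avi",
                             cv2.VideoWriter_fourcc(*"MJPG"), 15, (w, h))
    for name in frames:
        writer.write(cv2.imread(name))
    writer.release()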
8-16Run01Kinect from Matt Kuchta on Vimeo.
7-16 Em2 run, 3D KinectData from Matt Kuchta on Vimeo.
How do you mount the camera so that it is both stable and oriented to get the best view?
First, you can use a tripod. In general this works rather well, but the camera is going to be elevated and looking down on the stream table at an angle, which is a particularly big problem with tripods that come with a fixed central column. This might look okay in a time lapse video, but is unacceptable for 3D scanning, since one edge of your picture is going to be very close to the scene and the other edge is going to be over 2m away from it.
Tripods with a repositionable central column are a much better option, since you can cantilever the center column out over the table and get closer to a top-down view, making the edges of the scene more equidistant. The side effect, though, is that you lose subject distance and must go with a wider-angle lens, introducing the potential for greater distortion. I use both a Gitzo carbon-fiber tripod and a Manfrotto aluminum tripod (something similar at B&H Photo here) for time lapse work. I've also found this acceptable for the Kinect, since I don't need to be far from the stream table to keep the sensor within 1 m of the subject. I went with a high-end carbon-fiber tripod because I do a lot of hiking and nature photography - if you're just doing studio work it's overkill, but I am absolutely in love with my CF Gitzo 'pod. Links are to B&H items because their product search is the most customer-friendly of the bunch, but you can find similar items at Adorama, Hunts, Amazon, or your local camera shop. The tripods they sell at places like Wal-Mart and Best Buy trend toward the short, cheap, and useless end of the product spectrum.
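To put a number on that wide-angle trade-off, here's a quick back-of-the-envelope sketch (Python; the 22.3 mm sensor width is my assumption for an APS-C body like the 7D):

    # Focal length needed to cover a scene of a given width from a given
    # distance, for a sensor of a given width (thin-lens approximation).
    import math

    def focal_length_needed(scene_width_m, distance_m, sensor_width_mm=22.3):
        half_angle = math.atan(scene_width_m / 2 / distance_m)
        return sensor_width_mm / (2 * math.tan(half_angle))

    # covering a 2 m stream table from 2 m away vs. 1.2 m away:
    print(f"{focal_length_needed(2.0, 2.0):.0f} mm")  # ~22 mm
    print(f"{focal_length_needed(2.0, 1.2):.0f} mm")  # ~13 mm - much wider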
Here I've got the carbon fiber tripod holding the camera and the aluminum tripod holding the Kinect sensor. Both tripods are equipped with a ballhead - another handy item, since you get nearly full 3-axis movement for adjustments.
A second option, and the one I'm going to switch to, is a C-stand (C for century). These are a bit more robust, and their construction and design lets you cantilever items much further out than a tripod's center column. It'll need to be stabilized with some weights at its base, but I think the smaller footprint and longer reach will quickly endear it to this process.
Tuesday, November 06, 2012
The Em2 and 3D Scanning: a few lessons learned
It's election day here in the USofA. I voted early, because my wife and I have busy teaching days on Tuesday - not having to then march over to the polling station and wait in what should be a long line is a greatly appreciated luxury that living in a rather democratically progressive state (for now) provides.
If you haven't voted yet and are able, please do so. If it's against my candidate of choice I hope that the results at least provide clarity of purpose in ways that allow us to work together. If it helps my candidates of choice, good on you.
Last night I posted some pictures of using the Kinect scanner for 3D imaging of the Em2 stream table. In general I have to say there are a couple things that are absolutely fabulous, and a couple things that are really frustrating about this technology.
First the frustrating things:
1) Accuracy and Precision:
To be specific, the overall accuracy is quite good. That is to say, if the sensor thinks an object in front of it is 75 cm away, there's a good chance it is; there aren't a lot of wildly varying values reported in most cases. Accuracy suffers when there's a lot of infrared noise, or when the surface being scanned is very reflective. This is one of the reasons I lined the base of the stream channel with a black rubber sheet - much less of a reflective "blind spot" in the middle of the scan.
Precision, however, is another story. The distance model the camera uses really changes beyond about 1 meter. The closest you can get to the scanned surface is about 0.3 meters, and anywhere from there out to about 1 m we've been able to pick up the presence of a single penny placed on a flat surface. Considering the D50 of the standard Em2 sediment is about 0.5 mm, it doesn't take much deposition or erosion to show up in scans. Once you go beyond a meter, though, the precision drops to just under a centimeter - not bad for rough 3D scans of rooms for robot navigation, but not terribly useful for the subtle changes produced on a stream table. We're looking into data processing that might improve this. But being limited to a camera-subject distance of less than 1 m leads me to the next frustrating thing.
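One trick already in the mix is simple frame averaging: per-pixel noise should shrink roughly as one over the square root of the number of frames, which is why we let the camera scan continuously for a minute (more on that below). A minimal sketch (Python; it assumes the frames from one scan were saved as a frames x rows x columns array of depths in meters, which is my own convention, not an official format):

    # Average a stack of noisy Kinect depth frames from a single scan.
    import numpy as np

    stack = np.load("scan_0830.npy").astype(float)  # hypothetical (N, rows, cols) file
    stack[stack == 0] = np.nan                      # zero depth = Kinect dropout
    mean_depth = np.nanmean(stack, axis=0)          # the averaged scan

    # estimated uncertainty of the mean, per pixel
    noise = np.nanstd(stack, axis=0) / np.sqrt(stack.shape[0])
    print("median per-pixel uncertainty (mm):", np.nanmedian(noise) * 1000)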
2) Field of View:
At a distance of 1 m, the field of view on the Kinect is somewhere around 90 cm x 70 cm. The entire functional length of the Em2 is just under 2 m. To get the entire stream table in the scan, we could pull the scanner back, but then the precision problem pops up. We tried using a rolling frame, but we've had issues stitching the frames together and maintaining a common reference point. The latter issue is more of a computing problem, but the distance limit appears to set a firm upper bound on the area a single scanner can reliably measure. An alternative is to get another Kinect scanner. Having two scanners locked down would probably work, but that's a problem to solve later.
But now the fabulous things:
1) Accuracy and Precision:
Within a meter's distance, we've been able to resolve individual pennies that were present in one scan, but removed in another. A single penny isn't much more than a millimeter in thickness, so getting high-resolution information of a 90 cm reach of stream is really exciting. Not only can you detect subtle changes in aggradation and erosion of the active channel, but you can observe and measure bank/valley wall collapse. Want to measure the contribution from mass wasting? We've got you covered. Want to measure the rate of floodplain aggradation based on sediment supply? There's an app for that, too :)
A comparison of two scans one before and the second at about 20 minutes into an experimental run. Green values represent changes that are closer to the camera (aggradation/deposition), red is further away (erosion). I've found that most of the sediment transport/depo that occurs in the first half hour is directly related to profile adjustment and smoothing out the small bumps that I introduced when smoothing out the sediment prior to the run.
The top image is the distance data obtained from the Kinect. Below is a view of the same setup (a little later in the run) showing where the stacks of pennies (1, 2, and 3 pennies tall) were added to the scene. They were added, so that spot appears closer to the camera. Yes, even with a bit of noise, you can tell if a single penny has been added or removed from the scene (at least if the scanner distance is within about 1m).
2) Speed and Detail:
The other great thing about the setup is the speed at which we collect data. To get enough distance information, we let the camera continually scan the scene for about 1 minute. Imagine how long it would take to get sub-centimeter resolution along a 90 cm reach of channel by hand - totally impractical. Not to mention your estimates of mass wasting would be much more difficult. Here, you can do cut/fill measurements over time scales that are quite useful.
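Once you've got two averaged scans, the cut/fill arithmetic is just a grid difference. A rough sketch (the file names and the per-pixel ground footprint are assumptions for illustration):

    # Estimate cut and fill volumes between two averaged scans.
    import numpy as np

    before = np.load("scan_before.npy")  # hypothetical averaged depth grids (m)
    after = np.load("scan_after.npy")
    dz = before - after                  # positive = toward camera = deposition

    cell_area = 0.0015 ** 2              # assumed ~1.5 mm ground footprint per pixel
    fill = dz[dz > 0].sum() * cell_area  # deposition volume, m^3
    cut = -dz[dz < 0].sum() * cell_area  # erosion volume, m^3
    print(f"fill: {fill * 1e6:.0f} cm^3, cut: {cut * 1e6:.0f} cm^3")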
Here's a complete sequence of a multi-hour run (click to embiggen). The time between scans varies from about 15 minutes to about 30 minutes. The experiment looked at simple incision of the main stream for the first couple hours, and then I turned on the valve for the tributary stream and let it run for another hour. I love the fact that you can see small-scale readjustments and intermediate stages of sediment storage/removal. Look closely at the 3D data - the little white foam blocks are 3mm tall and you can see when they've moved around. The picture's a little small to easily observe the pennies, but they do show up in a larger version (see example above).
I think our next step is to add a second Kinect to increase the scanned area and look into methods of making isopach or contour maps from the 3D data. Needless to say, I'm really excited about these preliminary results.
Of course all of this could not have been accomplished without the dedicated help of many other people. I want to give a tip of the internet hat to Steve at LRRD whose enthusiasm for table-size stream tables gave birth to the Em-series models in the first place. I also want to give props to my colleague Todd Zimmerman - a computer whiz who in the span of a couple days brought himself up to speed with the Kinect 3D data files to program ways of averaging large data sets and visualize the changes between individual scans. Incidentally, both Todd and I went to the same undergraduate institution (Lawrence University in Appleton, WI), so I guess I can forgive him for being a physicist ;). Without the dedicated work of my student researcher Bryn (also a Lawrence student), this project would have stalled out well before the sediment supply and constant head tank systems were built. She's the one who spent many hours monitoring the pump during some early experiments - and the one who most appreciated the constant head tanks and the speed/convenience of measuring via remote sensing. Finally, I have to give credit to the person who gave me the idea of installing the Kinect on an Em2 in the first place, Ken Mankoff - he's never heard of me, but he was more than happy to give Todd some suggestions on improving the Kinect data processing. Plus he's got a shiny publication that ought to turn the head of any tech-minded geoscientist considering using the Kinect in the lab OR in the field. Good stuff there.
Oh, and I've got a fantastic announcement to make after people get back from GSA this year. It's super-awesome and I hope it's a sign of really exciting things to come. :)
Monday, November 05, 2012
Hacking the Em2 Part the Fourth: 3D Scanning
Okay, here's the last post in a series on hacking the Em2. At least for now. Once I got all the modifications done to control discharge and sediment flux, I needed a way to measure the influence of all this stuff. The simplest way is to run up and down the channel with a ruler. But that's fairly time-intensive, particularly if you make measurements over short intervals.
Enter the X-Box Kinect sensor. It uses an IR laser pattern to measure distances, and over short (<1 m) distances it can detect millimeter-scale changes - such precision is amazing. Thanks to work done by Ken Mankoff (http://kenmankoff.com/2012/09/12/accepted-with-minor-revisions-the-kinect-a-low-cost-high-resolution-short-range-3d-camera), we had a working methodology to start with. With lots of computing help from my colleague Todd (who blogs at "Talking Physics"), we had the ability to collect and analyze lots of spatial data. The big question was "how do we scan the stream table?"
Our first design was a kind of "roller skate" frame that traversed the entire length of the stream table. There were lots of things that worked with this design, but we ended up having trouble matching subsequent scans. There's still lots of potential there, but we didn't have the time needed to do the matching.
The scans from the "roller" worked pretty well, but they were sometimes distorted. Comparisons between separate scans were also difficult to line up.
So we went with a "locked down" setup for the scanner. To keep the precision that comes with a short scanning distance, we were limited to a narrower view of the stream table. This lets us directly compare individual scans to study changes over time (or from changes in discharge, etc.).
Here's the setup, with the Kinect and time-lapse camera arranged to record movies and spatial data simultaneously.
Here's a quick example of what we've done so far (click to embiggen).
Kinect scanners are sold separately for about $150 or so. I got mine used at a game store for about $70. Don't forget to pick up a power supply (which comes with the USB cable that will interface with the computer).
Some software that we've used:
Meshlab: for viewing the point cloud data
RGBDemo: for making quick 3D surfaces
OpenKinect: Information about grabbing the 3D data
To process the raw data, Todd combined the distance file data using a Python script (running on a Linux box) and then compared the averaged distance values from one scan to another.
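For what it's worth, here's a minimal sketch of that comparison step (Python; not Todd's actual script - the file names and the 2 mm noise floor are my placeholders). It renders a green-is-deposition, red-is-erosion change map like the comparison images in the post above:

    # Compare two averaged scans and render a change map: green where the
    # surface moved toward the camera (deposition), red where it receded
    # (erosion). Sketch only -- assumes each scan is a 2D grid of depths (m).
    import numpy as np
    import matplotlib.pyplot as plt

    before = np.load("scan_t0.npy")     # hypothetical averaged scans
    after = np.load("scan_t1.npy")
    change = before - after             # positive = closer to camera = deposition
    change[np.abs(change) < 0.002] = 0  # suppress anything below a ~2 mm noise floor

    plt.imshow(change, cmap="RdYlGn", vmin=-0.01, vmax=0.01)
    plt.colorbar(label="change in distance (m)")
    plt.title("Em2 surface change between scans")
    plt.savefig("change_map.png", dpi=150)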
Hacking the Em2, Part 3: Keeping Your Head
I'm reading people's updates about GSA and miss being able to check in on everyone, especially Steve and Co. from LRRD. Fortunately, they've been updating with photos from the convention floor. While my Em2 shipped with some T-shirts, their bowling shirts are pretty sharp. Besides, Steve kind of has a "dude abides" air about him - when he's not working 80 hours a week making cool fluvial geomodels.
With all the modifications I've made to my Em2, one of the challenges I kept running into was related to discharge. While the standard electric control on the pump does a good job at keeping the pump running evenly, if you drastically change the level of water in the reservoir, you'll get big changes in the pump's output (due to the change in hydraulic gradient).
To keep the discharge from dropping or jumping up too quickly, I went with a "constant head" tank. It's basically a tank that, once full, allows any excess to drain back to the reservoir. Water is forced out the bottom, and since the height of the water column inside the central tank stays the same, the water comes out at the same pressure (any extra water that would have raised the level drains out through the overflow). The pressure head is kept constant - hence, "constant head."
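To see why constant head means constant discharge, treat the valve/outlet like a small orifice: by Torricelli's law the outflow depends on the head above it, so fixing the head fixes the flow. A quick sketch with made-up numbers (not measurements from my tank):

    # Discharge through a small orifice under a constant head h:
    # Q = Cd * A * sqrt(2 * g * h)
    import math

    def outlet_discharge(head_m, orifice_diam_m, cd=0.6):
        area = math.pi * (orifice_diam_m / 2) ** 2
        return cd * area * math.sqrt(2 * 9.81 * head_m)  # m^3/s

    # e.g., 20 cm of head through a 6 mm opening
    q = outlet_discharge(0.20, 0.006)
    print(f"{q * 1000 * 60:.1f} L/min")  # about 2 L/min for these numbers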
Our design is based on one described in Thomas Hickson's "desktop delta" model. Discharge is through the elbow on the left side, while the overflow drains back to the reservoir via the outlet on the right.
An added bonus is that I'm able to divert the pump's output into two tanks. By adjusting the discharge valves on the tanks, I can run a main stream at one rate and a tributary at a smaller one. I've got two 3' lengths of aluminum L-channel set up to support the tanks near the top of the model.
Oh, remember the notch gage I calibrated earlier? I can use it to calibrate the valves on the constant head tanks. Eventually I hope to make them adjustable while in operation, but for now I've just set each valve at a particular point and kept it there.
Another handy detail: I can plug the outlet next to the constant head tank if I need to quickly shut off the flow - stop the water, make measurements of the channel or whatever, and when I'm done, just remove the stopper. With the overflow redirected to the reservoir, I can keep it plugged indefinitely. And since the pressure head is constant, the water doesn't shoot out of the end when I unplug it. Made of win.
UPDATE: LRRD makes an electronic flow controller, based on the Arduino open-source control boards. I bet this could be used to limit the pump output directly, or to control a solenoid that adjusts the valve on the constant head tanks. Oh, the possibilities...