I’m going to be doing a lot of soldering in the near future, so for the sake of my respiratory system, I figured it would be useful to have a fume extractor. I already had a fan, and air filters are cheap, so I printed out a frame to interface the two. The full writeup and files are at Thingiverse.
I bought an InkShield from the Kickstarter a few months ago, mostly out of a desire to support an interesting Open Hardware project. It wasn’t until yesterday that I thought of something useful to do with it. Instead of doing that, I made this project: the Semi-Automatic Paintbrush. Using an infrared camera, an InkShield, an ink cartridge with an infrared LED stuck to it, and your arm, you can copy great works of art, or just any old picture.
The desktop-side software is called paintbrush.py. It conveniently uses the homography module I wrote a year ago to map what the IR camera sees to the coordinate system of the canvas. The mapping is calibrated interactively by placing the cartridge/LED at each of the four corners of the canvas and pressing a key when prompted. After that, the motion of the LED is tracked, the corresponding region of the image is found, and the script sends serial commands to an Arduino with the InkShield, telling it which nozzles to fire at what duty cycle to achieve the correct level of gray, or in this case, green. The painted regions are tracked to prevent flooding.
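For the curious, the core of that homography mapping is small enough to sketch here. This is a minimal direct linear transform version assuming numpy and exactly four correspondences; the actual module linked above is what the script uses.

import numpy as np

def homography(src, dst):
    # Build and solve the 8x8 DLT system for four point correspondences.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)

def map_point(H, x, y):
    # Apply the homography and dehomogenize to get canvas coordinates.
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

Calibration then amounts to feeding the four observed corner positions and the known canvas corners into homography(), and running every tracked LED position through map_point().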
As you can see from the image above, the results are not going to end up in the Louvre, but they do have a kind of partially mechanical, partially organic flavor to them. If you have an InkShield, an IR LED, and a pygame-supported IR camera (I use a modified PS3 Eye), and you’re interested in making your own lazy artwork, the script is available on github under an ISC License. The Arduino sketch requires the InkShield library and is LGPL. Usage instructions for the script are included within it.
Lytro recently made its namesake light field camera available for preordering. The light field camera comes closer to capturing the plenoptic function than a standard camera: instead of only summing photons to arrive at chromaticity and luminosity at each pixel, it additionally determines directional information. It does so by placing an array of microlenses above the sensor, each of which represents a light field pixel and covers a region of sensor pixels. Each sensor pixel then captures a ray arriving from a specific direction at its parent microlens. Ren Ng’s thesis is full of fascinating uses for this, but it seems Lytro is primarily focusing on the ability to refocus the light field image.
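Refocusing itself is conceptually simple. Here is a toy shift-and-add version, the textbook technique from Ng’s thesis rather than Lytro’s actual pipeline, assuming numpy and a 4D light field indexed as L[u, v, y, x]; alpha picks the virtual focal plane.

import numpy as np

def refocus(lightfield, alpha):
    # Average all angular samples, shifting each sub-aperture image
    # in proportion to its offset from the center of the aperture.
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)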
There is very little information available about the format the camera captures the light field in, but I suspect it will be possible to use the files for other purposes, like viewing parallax and perspective changes on a single capture. So far, the information we have is that the 8 gigabyte model can store 350 images, the sensor can capture 11 megarays, and the examples in the online gallery have resolutions of 831×831 to 1080×1080. Since the sensor in a light field camera captures one ray per pixel, we can assume the physical sensor is 11 megapixels. Conveniently, 350 11-megapixel images at 2 bytes per pixel add up to roughly 8 gigabytes. This suggests the format may be either a raw 16 bit Bayer array straight off the sensor or a processed and packed RGB array. As for the microlens array, I suspect it is a roughly 831×831 grid of hexagonal lenses, each of which covers roughly 16 sensor pixels, for a total sensor resolution of 3324×3324 pixels. We probably won’t know for sure until the cameras ship in early 2012.
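The back-of-the-envelope math, in runnable form:

images = 350
rays = 11e6              # one ray per sensor pixel
bytes_per_pixel = 2      # a 16 bit sample per pixel
print(images * rays * bytes_per_pixel / 1e9)  # ~7.7 GB, close to the 8 GB capacity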
In the meantime, we do have some sample images to play with, but not in the format captured by the camera. The Lytro desktop app apparently exports compressed representations of the light field to reduce file sizes and rendering requirements for web display. The .lfp files are simply a set of JPEGs representing the unique, visually interesting sections of the light field. That is, a set in which each image shows a different area in focus. It appears to build this set dynamically, picking the minimum number of images necessary to show every focusable object in a narrow depth of field. These images are stored along with their estimated depths and a depth lookup table for the image. This allows for HTML5 and Flash applications like the one embedded above, in which the user clicks on a region of the image, the depth of that region is looked up, and the image focused closest to that depth is displayed.
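In other words, the viewer logic is just a nearest-depth lookup. A sketch, with names of my own invention rather than anything from Lytro’s player:

def image_for_click(depth_lut, focus_stack, x, y):
    # depth_lut: 2D array of estimated depths, one entry per region
    # focus_stack: list of {'depth': float, 'jpeg': ...} entries
    target = depth_lut[y][x]
    return min(focus_stack, key=lambda f: abs(f['depth'] - target))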
To allow for viewing the files offline and to satisfy my curiosity, I wrote a tool called lfpsplitter that reads in an .lfp and writes out its component images as .jpg files, and the depth lookup table and image metadata as plain text files. It is available on github, along with a README describing the file format in detail. Until we have Lytro cameras and .lfp files of our own to play with, you can find example files by examining the html source of Lytro’s gallery page.
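If you just want the images and don’t care about the metadata, you don’t even need to parse the container properly. A naive splitter can scan for JPEG start/end markers; this is a quick hack, not how lfpsplitter itself works (it parses the sections described in the README):

import sys

def split_jpegs(path):
    # Scan for JPEG SOI (FF D8 FF) and EOI (FF D9) markers and dump
    # each span to its own file. Crude: an EOI byte pair occurring
    # inside compressed data will truncate that image.
    data = open(path, 'rb').read()
    count, start = 0, 0
    while True:
        start = data.find(b'\xff\xd8\xff', start)
        if start < 0:
            break
        end = data.find(b'\xff\xd9', start)
        if end < 0:
            break
        open('%s_%02d.jpg' % (path, count), 'wb').write(data[start:end + 2])
        count += 1
        start = end + 2
    return count

if __name__ == '__main__':
    print('%d images written' % split_jpegs(sys.argv[1]))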
Update: Given the animated parallax shift image of Walt Mossberg on the Lytro blog, it seems that each microlens covers an area 5 pixels across horizontally. Perhaps the sensor is 4096×4096 and 11 megarays describes the number of pixels getting useful photons, or the microlenses are arranged in a honeycomb pattern with a maximum width of 5px.
Thingiverse is an enormous resource for mostly open source, ready-to-print 3D objects. It conveniently has both a built-in understanding of derivative objects and a web-based 3D object viewer. It has no mechanism for combining the two though, making comparing any two objects a matter of either visual guesswork or downloading the files and comparing them against each other.
Continuing on the theme of interesting things to do with an interactive globe, I added a live satellite tracking mode to sosg. The program polls a local PREDICT server for location and visibility information about the satellites it is tracking. It draws the name of each satellite and the path it is following in red. It also draws a little icon that turns green when the satellite is visible overhead. There is enough civilization in close proximity to my apartment that I can’t actually see one pass by, but it is nice to know they are there above me. As before, the ISC licensed code is available on github at git://github.com/nrpatel/SnowGlobe.git
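Polling PREDICT is pleasantly low-tech: the server answers single-line UDP queries. A minimal sketch, assuming a server started with predict -s on its default port; the reply field order here is my reading of PREDICT’s docs, so treat it as illustrative.

import socket

def get_sat(name, host='localhost', port=1210):
    # Ask PREDICT for a satellite's current state; the reply is a
    # newline-delimited record beginning with name, longitude, latitude.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    sock.sendto(b'GET_SAT ' + name.encode(), (host, port))
    fields = sock.recv(1024).decode().splitlines()
    return float(fields[1]), float(fields[2])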
Science On a Sphere is a NOAA project typically using four projectors to display planetary data on a six foot diameter sphere. As a federal agency, NOAA publishes data that is not copyrightable. These public domain datasets are pretty impressive, ranging from plate tectonics to solar storms. They are also insanely high resolution, with mp4 videos and jpg images at 2048×1024 and 4096×2048.
To shrink this four-projector, five-computer, high-resolution science center exhibit down to a picoprojector, old laptop, and bathroom lighting fixture setup, I had to move beyond my unoptimized python scripts to SDL, OpenGL, libvlc, and GLSL. I wrote a program called sosg, Science On a Snow Globe, which reads in images and videos and displays them in the correct format for Snow Globe. Doing the equirectangular to fisheye transform in a fragment shader is extremely lightweight, even with GMA graphics. Using libvlc makes video decoding quite performant as well, despite the resolution.
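The fragment shader math fits in a handful of lines, so here is the gist of it sketched in Python rather than GLSL. This assumes the simplest case, an ideal equidistant fisheye with the projected circle wrapping the whole sphere; the real shader also folds in the calibration terms.

import math

def fisheye_to_equirect(px, py, cx, cy, R):
    # For a screen pixel inside the projected circle (center cx, cy,
    # radius R in pixels), find the equirectangular texture coordinate
    # to sample, both in [0, 1]. The angle around the center becomes
    # longitude; the distance from the center becomes the polar angle.
    dx, dy = (px - cx) / R, (py - cy) / R
    r = math.hypot(dx, dy)
    if r > 1.0:
        return None  # outside the projected circle
    lon = math.atan2(dy, dx)
    polar = r * math.pi  # 0 at the bottom pole, pi at the top
    return (lon + math.pi) / (2 * math.pi), polar / math.pi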
The program is the closest I’ve written to “shippable” in recent memory, but there are some rough spots. I ran into a bottleneck trying to decode high resolution jpgs in real time, so currently sosg does not support the image slideshows that the full size SOS does. It also doesn’t attempt to read in .sos playlist information. Basically, it is not an SOS replacement, just a cheap and cheerful way to view planetary data on your desktop. Unlike the original, it is also available under a permissive license and can be cloned directly from git://github.com/nrpatel/SnowGlobe.git.
While capturing the time-lapse last week, John and I ran into two irritating issues. The first is that the moving platform carries the object being printed in and out of the camera’s focal plane, which makes for a jarring video. The second is that because the interval between photos is constant, some large, slow layers get multiple shots while several consecutive quick layers can be skipped entirely. The solution to both is to have the printer itself trigger the camera at the right moments.
I wrote a Skeinforge photograph plugin that inserts a new G-code command, M240, which tells the printer to trigger a photograph. The module offers three modes. End of Layer, as demonstrated by Yoda above, is the simplest. It takes one picture at the start of the first layer and then another at the end of each layer of the print, resolving only the second of the aforementioned issues. Corner of Layer takes a picture at the minimum Y,X of each layer. Least Change between Layers tries to take shots that are as close as possible to each other from layer to layer. I had the most visually interesting results with the last setting, as shown in the Flower print up top. The module can be downloaded from github, and installation instructions are included within its text.
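A standalone approximation of the End of Layer mode is easy to sketch as a G-code post-processor: emit an M240 just before each new layer begins, which is right after the previous layer finishes. This assumes Skeinforge-style ‘(<layer>’ comments; the real plugin hooks into the Skeinforge tool chain instead.

def insert_photo_triggers(gcode_lines):
    # Insert an M240 photo trigger ahead of each layer-change comment.
    out = []
    for line in gcode_lines:
        if line.lstrip().startswith('(<layer>'):
            out.append('M240 ; trigger the camera\n')
        out.append(line)
    return out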
The other half of the control scheme is triggering the camera from the RepRap. Since I didn’t want to risk coupling my T2i directly to the printer, I went for emulating a Canon RC-1 Remote, which has been thoroughly reverse engineered. The hardware is simply an 850nm infrared LED in series with a 180 ohm resistor connected to one of the I/O pins on the Arduino Mega. I chose pin 23 because I could solder to it without pulling my RAMPS board off. The software side is equally simple. For this, I forked the excellent Sprinter firmware to respond to M240 and send the correct pulse over the IR LED. My fork is on github, but the diff that adds M240 support is the interesting bit.
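For reference, the timing the firmware has to produce looks roughly like this. The constants are my reading of the published reverse-engineering, so verify against your camera; the firmware diff is the authoritative bit.

# Canon RC-1 protocol: two bursts of 16 cycles of a ~32.7 kHz carrier,
# with ~7.33 ms between burst starts for an immediate shutter release
# (~5.36 ms selects the camera's 2-second-delay mode instead).
CARRIER_HZ = 32700
BURST_CYCLES = 16
BURST_GAP_MS = 7.33  # start-to-start, immediate release

def rc1_pulse_train():
    # Return (led_on_us, led_off_us) pairs for one shutter release.
    half_us = 1e6 / CARRIER_HZ / 2
    burst = [(half_us, half_us)] * BURST_CYCLES
    gap_us = BURST_GAP_MS * 1000 - BURST_CYCLES * 2 * half_us
    return burst + [(0.0, gap_us)] + burst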
John visited recently and suggested that we bring another photographic production to the world: this time, a time-lapse of the RepRap printing out an interesting looking object. After some frustrating attempts to install the Canon EOS Utility, we just used an intervalometer directly on my T2i with the Magic Lantern firmware. In case you want to try it out and to save me a lot of Googling in the future, here are the mencoder parameters to generate a sanely sized video from high resolution stills.
mencoder -ovc lavc -lavcopts vcodec=mjpeg -mf fps=10:type=jpg -vf scale=960:720 'mf://*.JPG' -o timelapse.avi
Depending on which project gets swapped into my next free time slot, I may have another post soon exploring an extension on this that John and I discussed.
Since reading Snow Crash, I’ve been drawn to the idea of having my own personal Earth. Because I’m stuck in reality and the virtual version of it is always 5 years away, I’m building a physical artifact that approximates the idea: an interactive spherical display. This is of course something that exists and can likely be found at your local science center. The ones they use are typically 30-100″ in diameter and cost enough that they don’t have prices publicly listed. Snow Globe is my 8″ diameter version that costs around $200 to build if you didn’t buy a Microvision SHOWWX for $600 when they launched like I did.
The basic design here is to shoot a picoprojector through a 180° fisheye lens into a frosted glass globe. The projector is a SHOWWX since I already have one, but it likely works better than any of the non-laser alternatives since you avoid having to deal with keeping the surface of the sphere focused. Microvision also publishes some useful specs, and if you ask nicely, they’ll email you a .STL model of their projector. The lens is an Opteka fisheye designed to be attached to handheld camcorders. It is by far the cheapest 180° lens I could find with a large enough opening to project through. The globe, as in my last dome-based project, is for use on lighting fixtures. This time I bought one from the local hardware store for $6 instead of taking the one in my bathroom.
I’ve had a lot of fun recently copying keys and people, but my objective in building a 3D printer was to make it easier to do projects like this one. Designing a model in OpenSCAD, printing it, tweaking it, and repeating as necessary is much simpler than any other fabrication technique I’m capable of. In this case, I printed a mount that attaches the lens to the correct spot in front of the projector at a 12.15° angle to center the projected image. I also printed brackets to attach the globe to the lens/projector mount. The whole thing is sitting on a GorillaPod until I get around to building something more permanent.
Actually calibrating a projector with slight pincushion through a $25 lens into a bathroom fixture, attached together with some guesswork and a 3D printer, is well beyond my linear algebra skill, so I simplified the calibration procedure down to four terms. For starters, we need to find the radius in pixels of the circle being projected and the x and y position of the center of that circle. The more difficult part, which tested my extremely rusty memory of trigonometry, is figuring out how to map the hemisphere coming out of the fisheye lens to the spherical display surface. For that, we have a single number for the distance from the center of the sphere to the lens, in terms of a ratio of the projected radius. The math is all available in the code, but the calibration script I wrote is pretty simple to use. It uses pygame to project longitude lines and latitude color sections as in the image above. You use the arrow keys to line up the longitude lines correctly to arrive at the x and y position, the plus and minus keys to adjust the radius until it fits the full visible area of the sphere, and 9 and 0 to adjust the lens offset until the latitudes look properly aligned. What you end up with is close enough to correct to look good, though as you can see in the images, the projector doesn’t quite fit the lens or fill the sphere. The script saves the calibration information in a pickle file for use elsewhere.
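For the trigonometry-inclined, the lens offset term boils down to one triangle. A ray leaving the lens at angle theta from its axis hits the sphere at a polar angle given by the law of sines; this is my reconstruction of the geometry, with the offset d expressed in sphere radii, not the exact code.

import math

def ray_to_polar(theta, d):
    # Triangle between the lens, the sphere's center, and the point
    # where the ray lands: sin(angle at the surface) = d * sin(theta),
    # and the polar angle from the bottom pole is the sum of the two.
    return theta + math.asin(min(1.0, d * math.sin(theta)))

With d = 0 the lens sits at the sphere’s center and polar = theta, covering only a hemisphere; with the lens right at the surface, polar = 2 * theta, which is exactly the simple full-sphere mapping sketched earlier.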
Going back to the initial goal, I wrote a script to turn equirectangular projected maps of the Earth into roughly azimuthal equidistant projected images calibrated for a Snow Globe like the one above. There are plenty of maps of the former projection available freely, like Natural Earth and Blue Marble. Written in python, the script is quite slow, but it serves as a proof of concept. The script, along with the calibration script and the models for the 3D printed mounts are all available on github. I’ve finally fully accepted git and no longer see a point in attaching the files to these posts themselves. I put a Part One in the title to warn you that this blog is going to be all Snow Globe all the time for the foreseeable future. Up next is writing a faster interface to interactively display to it in real time, and if I think of a good way to do it, touch input is coming after that.
Download from github: