I’m going to be doing a lot of soldering in the near future, so for the sake of my respiratory system, I figured it would be useful to have a fume extractor. I already had a fan, and air filters are cheap, so I printed out a frame to interface the two. The full writeup and files are at Thingiverse.
I bought an InkShield from the Kickstarter a few months ago, mostly out of a desire to support an interesting Open Hardware project. It wasn’t until yesterday that I thought of something to do with it. Instead of anything useful, I made this project, called the Semi-Automatic Paintbrush. Using an infrared camera, an InkShield, an ink cartridge with an infrared LED stuck to it, and your arm, you can copy great works of art, or just any old picture.
The desktop-side software is called paintbrush.py. It conveniently uses the homography module I wrote a year ago to map what the IR camera sees to the coordinate system of the canvas. The mapping is calibrated interactively by placing the cartridge/LED at each of the four corners of the canvas and pressing a key when prompted. After that, the motion of the LED is tracked, the corresponding region of the image is found, and the script sends serial commands to an Arduino with the InkShield, telling it which nozzles to fire at what duty cycle to achieve the correct level of gray, or in this case, green. The painted regions are tracked to prevent flooding.
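The corner calibration boils down to computing a homography: four corner correspondences pin down a 3×3 matrix that maps camera coordinates to canvas coordinates. The following is a minimal sketch of that idea, not the actual homography module, and the function names are hypothetical:

```python
import numpy as np

def compute_homography(src, dst):
    """Solve for the 3x3 homography H mapping src points to dst points
    via the direct linear transform, given four correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null space of A: the right singular vector
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(np.array(A))
    return vt[-1].reshape(3, 3)

def camera_to_canvas(H, point):
    """Apply H to a camera-space point, dividing out the projective scale."""
    x, y, w = H @ np.array([point[0], point[1], 1.0])
    return (x / w, y / w)
```

With the four calibration presses as `src` and the unit square (or canvas dimensions) as `dst`, every subsequent tracked LED position can be pushed through `camera_to_canvas` to find which region of the source image to paint.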
As you can see from the image above, the results are not going to end up in the Louvre, but they do have a kind of partially mechanical, partially organic flavor to them. If you have an InkShield, an IR LED, and a pygame-supported IR camera (I use a modified PS3 Eye), and you’re interested in making your own lazy artwork, the script is available on github under an ISC License. The Arduino sketch requires the InkShield library and is LGPL. Usage instructions for the script are contained within it.
Lytro recently made its namesake light field camera available for preordering. The light field camera reaches closer to the plenoptic function than a standard camera in that instead of only summing the photons to arrive at chromaticity and luminosity at each pixel, it additionally determines directional information. It does so by placing an array of microlenses above the sensor, each of which represents a light field pixel and covers a region of sensor pixels. Each sensor pixel then captures a ray arriving at a specific direction at its parent microlens. Ren Ng’s thesis is full of fascinating uses for this, but it seems Lytro is primarily focusing on the ability to refocus the light field image.
There is very little information available about the format the camera captures the light field in, but I suspect it will be possible to use the files for other purposes, like viewing parallax and perspective changes from a single capture. So far, the information we have is that the 8 gigabyte model can store 350 images, the sensor can capture 11 megarays, and the examples in the online gallery have resolutions of 831×831 to 1080×1080. Since the sensor in a light field camera captures one ray per pixel, we can assume the physical sensor is 11 megapixels. Conveniently, 350 images of 11 megapixels at 2 bytes per pixel add up to roughly 8 gigabytes. This suggests the format may be either a raw 16 bit Bayer array off of the sensor or a processed and packed RGB array. As for the microlens array, I suspect that it is a roughly 831×831 grid of hexagonal lenses, each of which covers a roughly 16 pixel (4×4) area, for a total sensor resolution of 3324×3324 pixels. We probably won’t know for sure until the cameras ship in early 2012.
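The back-of-the-envelope arithmetic behind these guesses is easy to check:

```python
rays = 11_000_000       # 11 megarays, one ray per sensor pixel
images = 350            # capacity of the 8 GB model
bytes_per_pixel = 2     # e.g. a 16 bit raw Bayer value

total_bytes = rays * images * bytes_per_pixel
print(total_bytes)      # 7,700,000,000 bytes, close to 8 GB of storage

# If each hexagonal microlens covers roughly a 4x4 pixel region:
lenses_across = 831
print(lenses_across * 4)  # 3324, giving a 3324x3324 pixel sensor
```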
In the meantime, we do have some sample images to play with, but not in the format captured by the camera. The Lytro desktop app apparently exports compressed representations of the light field to reduce file sizes and rendering requirements for web display. The .lfp files are simply a set of JPEGs representing the unique visually interesting sections of the light field. That is, a set in which each image shows a different area in focus. It appears to pick these dynamically, choosing the minimum number of images necessary to show every focusable object in a narrow depth of field. These images are stored along with their estimated depths and a depth lookup table for the image. This allows for HTML5 and Flash applications like the one embedded above, in which the user clicks on a region of the image, the value of that region is looked up, and the depth image closest to that value is displayed.
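The click-to-refocus logic those viewer apps perform is simple. Here is the idea sketched in Python with hypothetical names (the real applications are Flash/HTML5; this is just the lookup):

```python
def image_for_click(depth_table, focus_stack, x, y):
    """Given a per-region depth lookup table (rows of depth values) and a
    stack of (estimated_depth, image) pairs, return the image whose focal
    depth is closest to the depth at the clicked region."""
    clicked_depth = depth_table[y][x]
    # Pick the focus-stack entry minimizing distance to the clicked depth.
    best_depth, best_image = min(
        focus_stack, key=lambda entry: abs(entry[0] - clicked_depth)
    )
    return best_image
```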
To allow for viewing the files offline and to satisfy my curiosity, I wrote a tool called lfpsplitter that reads in an .lfp and writes out its component images as .jpg files and the depth lookup table and image metadata as plain text files. It is available on github, along with a README describing the file format in detail. Until we have Lytro cameras and .lfp files of our own to play with, you can find example files by examining the html source of Lytro’s gallery page.
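The README covers the actual container layout, but if you just want to pull the embedded JPEGs out of a blob, scanning for the JPEG start/end markers is a crude but serviceable fallback. This is a sketch of that shortcut, not how lfpsplitter itself parses the file:

```python
def extract_jpegs(blob):
    """Crudely split out JPEG streams embedded in a binary blob by scanning
    for the SOI (FF D8) and EOI (FF D9) markers. Can be fooled by marker
    bytes appearing inside other data, so treat the output with suspicion."""
    jpegs = []
    start = blob.find(b"\xff\xd8")
    while start != -1:
        end = blob.find(b"\xff\xd9", start)
        if end == -1:
            break
        jpegs.append(blob[start:end + 2])
        start = blob.find(b"\xff\xd8", end)
    return jpegs
```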
Update: Given the animated parallax shift image of Walt Mossberg on the Lytro blog, it seems that each microlens covers an area 5 pixels across horizontally. Perhaps the sensor is 4096×4096 and 11 megarays describes the number of pixels getting useful photons, or the microlenses are arranged in a honeycomb pattern with a maximum width of 5px.
Thingiverse is an enormous resource for mostly open source, ready to print 3D objects. It conveniently has both a built in understanding of derivative objects and a web based 3D object viewer. It has no mechanism for combining the two, though, so comparing any two objects is a matter of either visual guesswork or downloading the files and comparing them against each other yourself.
Continuing on the theme of interesting things to do with an interactive globe, I added a live satellite tracking mode to sosg. The program polls a local PREDICT server for location and visibility information about the satellites being tracked by it. It draws the name of each satellite and the path it is following in red. It also draws a little icon that turns green when the satellite is visible overhead. There is enough civilization in close proximity to my apartment that I can’t actually see one pass by, but it is nice to know they are there above me. As before, the ISC licensed code is available on github at git://github.com/nrpatel/SnowGlobe.git
Science On a Sphere is a NOAA project typically using four projectors to display planetary data on a six-foot-diameter sphere. As a federal agency, NOAA publishes data that is not copyrightable. These public domain datasets are pretty impressive, ranging from plate tectonics to solar storms. They are also insanely high resolution, with mp4 videos and jpg images at 2048×1024 and 4096×2048.
To shrink this four projector, five computer, high resolution science center exhibit down to a picoprojector, old laptop, bathroom lighting fixture setup, I had to move beyond my unoptimized python scripts to SDL, OpenGL, libvlc, and GLSL. I wrote a program called sosg, Science On a Snow Globe, which reads in images and videos and displays them in the correct format for Snow Globe. Doing the equirectangular to fisheye transform in a fragment shader is extremely lightweight, even with GMA graphics. Using libvlc makes video decoding quite performant as well, despite the resolution.
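The heart of the shader is a per-pixel coordinate mapping from the fisheye output back into the equirectangular source texture. Sketched in Python rather than GLSL, and assuming an equidistant fisheye that covers the full sphere (the actual shader’s projection parameters may differ):

```python
import math

def fisheye_to_equirect(x, y):
    """Map a point in the unit fisheye disc (x, y in [-1, 1]) to
    equirectangular texture coordinates (u, v in [0, 1]).
    Assumes an equidistant fisheye: radius from the center is
    proportional to the angle from the pole."""
    r = math.hypot(x, y)            # 0 at the pole, 1 at the opposite pole
    theta = math.atan2(y, x)        # angle around the disc = longitude
    u = (theta / math.pi + 1.0) / 2.0   # longitude [-pi, pi] -> [0, 1]
    v = r                               # latitude, pole to pole -> [0, 1]
    return u, v
```

In the fragment shader this runs once per output pixel, with the resulting (u, v) used as the texture lookup into the equirectangular frame, which is why it stays lightweight even on GMA graphics.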
The program is the closest I’ve written to “shippable” in recent memory, but there are some rough spots. I ran into a bottleneck trying to decode high resolution jpgs in real time, so currently sosg does not support the image slideshows that the full size SOS does. It also doesn’t attempt to read in .sos playlist information. Basically, it is not an SOS replacement, just a cheap and cheerful way to view planetary data on your desktop. Unlike the original, it is also available under a permissive license and can be cloned directly from git://github.com/nrpatel/SnowGlobe.git.
While capturing the time-lapse last week, John and I ran into two irritating issues. The first is that the moving platform causes the object being printed to come in and out of the focal plane of the camera, which makes for a jarring video. The second is that because the interval between photos is constant, some large and slow layers get multiple shots while several consecutive quick layers can be skipped entirely. The solution to both is to have the printer itself trigger the camera remotely at the right moments.
I wrote a Skeinforge photograph plugin that inserts a new G-code command, M240, which tells the printer to trigger a photograph. The module offers three modes. End of Layer, as demonstrated by Yoda above, is the simplest. It takes one picture at the start of the first layer and then another at the end of each layer of the print, resolving only the second of the aforementioned issues. Corner of Layer takes a picture at the minimum Y,X of each layer. Least Change between Layers tries to take shots that are as close as possible to each other from layer to layer. I had the most visually interesting results with the last setting, as shown in the Flower print up top. The module can be downloaded from github, and installation instructions are included within its text.
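The End of Layer mode amounts to a simple G-code pass. Here is a standalone sketch of the same idea (not the Skeinforge plugin itself, which hooks into the toolchain differently), assuming Skeinforge-style layer comments; the marker string and function name are assumptions:

```python
def add_photo_triggers(gcode_lines, layer_marker="(<layer>"):
    """Emit an M240 (trigger photograph) at the start of the first layer
    and at the end of every layer, mimicking End of Layer mode as a
    standalone G-code post-processor."""
    out = []
    layers_seen = 0
    for line in gcode_lines:
        if line.startswith(layer_marker):
            # Start of the first layer; for every later layer this is
            # equivalently the end of the previous one.
            out.append("M240")
            layers_seen += 1
        out.append(line)
    if layers_seen:
        out.append("M240")  # end of the final layer
    return out
```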
The other half of the control scheme is triggering the camera from the RepRap. Since I didn’t want to risk coupling my T2i directly to the printer, I went for emulating a Canon RC-1 Remote, which has been thoroughly reverse engineered. The hardware is simply an 850nm infrared LED in series with a 180 ohm resistor connected to one of the I/O pins on the Arduino Mega. I chose pin 23 because I could solder to it without pulling my RAMPS board off. The software side is equally simple. For this, I forked the excellent Sprinter firmware to respond to M240 and send the correct pulse over the IR LED. My fork is on github, but the diff that adds M240 support is the interesting bit.
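For reference, the commonly cited reverse-engineered RC-1 timing is two bursts of 16 cycles of a roughly 32.8 kHz carrier, separated by about 7.33 ms for an immediate shutter release or about 5.36 ms for the 2 second delayed mode. Treat those figures as assumptions to verify against the reverse engineering writeups; this small calculator just produces the durations the firmware has to bit-bang:

```python
CARRIER_HZ = 32_768       # ~32.8 kHz carrier (commonly cited figure)
BURST_CYCLES = 16         # cycles per burst
SHOOT_NOW_GAP_MS = 7.33   # gap between bursts for immediate release
DELAYED_GAP_MS = 5.36     # gap for the 2 s delayed release

def rc1_timing(delayed=False):
    """Return (burst_duration_ms, gap_ms) for the two bursts of the
    Canon RC-1 protocol, as reverse engineered by others."""
    burst_ms = BURST_CYCLES / CARRIER_HZ * 1000.0
    gap_ms = DELAYED_GAP_MS if delayed else SHOOT_NOW_GAP_MS
    return burst_ms, gap_ms
```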
John visited recently and suggested that we bring another photographic production to the world: this time, a time-lapse of the RepRap printing out an interesting looking object. After some frustrating attempts to install the Canon EOS Utility, we just used an intervalometer directly on my T2i with the Magic Lantern firmware. In case you want to try it out and to save me a lot of Googling in the future, here are the mencoder parameters to generate a sanely sized video from high resolution stills.
mencoder -ovc lavc -lavcopts vcodec=mjpeg -mf fps=10:type=jpg -vf scale=960:720 'mf://*.JPG' -o timelapse.avi
Depending on which project gets swapped into my next free time slot, I may have another post soon exploring an extension on this that John and I discussed.