I’ll have more detailed posts about the Adjacent Reality Tracker in the future, but here is a preview of some of what you can expect to see at our booth in Fiesta Hall at the Bay Area Maker Faire on May 19th-20th this year. In the meantime, you can follow Donnie and me working on the thing at GitHub.
Thingiverse is an enormous resource for mostly open source, ready-to-print 3D objects. It conveniently has both a built-in understanding of derivative objects and a web-based 3D object viewer. It has no mechanism for combining the two, though, making comparing any two objects a matter of either visual guesswork or downloading the files and comparing them against each other.
Continuing on the theme of interesting things to do with an interactive globe, I added a live satellite tracking mode to sosg. The program polls a local PREDICT server for location and visibility information about the satellites being tracked by it. It draws the name of each satellite and the path it is following in red. It also draws a little icon that turns green when the satellite is visible overhead. There is enough civilization in close proximity to my apartment that I can’t actually see one pass by, but it is nice to know they are there above me. As before, the ISC licensed code is available on github at git://github.com/nrpatel/SnowGlobe.git
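The polling itself is simple. PREDICT’s server mode (predict -s) answers single-line UDP queries, by default on port 1210, with commands like GET_LIST and GET_SAT; this is just a sketch of the transport, not sosg’s actual code, and the meaning of the reply fields is documented in the PREDICT manual rather than parsed here.

```python
import socket

PREDICT_PORT = 1210  # PREDICT's default UDP server port

def query_predict(command, host="localhost", port=PREDICT_PORT, timeout=2.0):
    """Send one command to a PREDICT server and return the reply lines.

    PREDICT in server mode answers newline-separated UDP queries such
    as "GET_LIST" (names of tracked satellites) or "GET_SAT ISS"
    (position and visibility data for one satellite).
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(command.encode("ascii"), (host, port))
        reply, _ = sock.recvfrom(4096)
    finally:
        sock.close()
    return reply.decode("ascii", "replace").strip().split("\n")
```

A tracker like sosg would call something like this once per frame per satellite and feed the positions to the renderer.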
Science On a Sphere is a NOAA project that typically uses four projectors to display planetary data on a six-foot-diameter sphere. As a federal agency, NOAA publishes data that is not copyrightable. These public domain datasets are pretty impressive, ranging from plate tectonics to solar storms. They are also insanely high resolution, with mp4 videos and jpg images at 2048×1024 and 4096×2048.
To shrink this four-projector, five-computer, high resolution science center exhibit down to a picoprojector, an old laptop, and a bathroom lighting fixture, I had to move beyond my unoptimized python scripts to SDL, OpenGL, libvlc, and GLSL. I wrote a program called sosg, Science On a Snow Globe, which reads in images and videos and displays them in the correct format for Snow Globe. Doing the equirectangular to fisheye transform in a fragment shader is extremely lightweight, even with GMA graphics. Using libvlc makes video decoding quite performant as well, despite the resolution.
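The shader itself lives in the repository, but the transform it performs can be sketched in NumPy. This is a simplified equidistant-fisheye version that ignores the lens-offset calibration term: each output pixel’s polar coordinates become a latitude/longitude lookup into the equirectangular source.

```python
import numpy as np

def equirect_to_fisheye(eq, size):
    """Resample an equirectangular map (H x W) into a square
    equidistant-fisheye image (size x size).

    Center of the output is one pole, the rim is the opposite pole;
    the angle around the center is longitude.
    """
    h, w = eq.shape[:2]
    ys, xs = np.mgrid[0:size, 0:size]
    dx = (xs - size / 2.0) / (size / 2.0)
    dy = (ys - size / 2.0) / (size / 2.0)
    rho = np.sqrt(dx ** 2 + dy ** 2)       # 0 at center, 1 at the rim
    lam = np.arctan2(dy, dx)               # longitude around the axis
    phi = np.clip(rho, 0, 1) * np.pi       # polar angle, pole to pole
    u = ((lam + np.pi) / (2 * np.pi) * (w - 1)).astype(int) % w
    v = (phi / np.pi * (h - 1)).astype(int)
    out = eq[v, u]
    out[rho > 1] = 0                       # black outside the circle
    return out
```

The fragment shader does exactly this math per pixel on the GPU, which is why it stays cheap even on GMA graphics.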
The program is the closest I’ve written to “shippable” in recent memory, but there are some rough spots. I ran into a bottleneck trying to decode high resolution jpgs in real time, so currently sosg does not support the image slideshows that the full size SOS does. It also doesn’t attempt to read in .sos playlist information. Basically, it is not an SOS replacement, just a cheap and cheerful way to view planetary data on your desktop. Unlike the original, it is also available under a permissive license and can be cloned directly from git://github.com/nrpatel/SnowGlobe.git.
Since reading Snow Crash, I’ve been drawn to the idea of having my own personal Earth. Because I’m stuck in reality and the virtual version of it is always 5 years away, I’m building a physical artifact that approximates the idea: an interactive spherical display. This is of course something that exists and can likely be found at your local science center. The ones they use are typically 30-100″ in diameter and cost enough that they don’t have prices publicly listed. Snow Globe is my 8″ diameter version that costs around $200 to build if you didn’t buy a Microvision SHOWWX for $600 when they launched like I did.
The basic design here is to shoot a picoprojector through a 180° fisheye lens into a frosted glass globe. The projector is a SHOWWX since I already have one, but it likely works better than any of the non-laser alternatives, since it avoids having to keep the surface of the sphere in focus. Microvision also publishes some useful specs, and if you ask nicely, they’ll email you a .STL model of their projector. The lens is an Opteka fisheye designed to be attached to handheld camcorders. It is by far the cheapest 180° lens I could find with a large enough opening to project through. The globe, as in my last dome-based project, is one meant for lighting fixtures. This time I bought one from the local hardware store for $6 instead of taking the one in my bathroom.
I’ve had a lot of fun recently copying keys and people, but my objective in building a 3D printer was to make it easier to do projects like this one. Designing a model in OpenSCAD, printing it, tweaking it, and repeating as necessary is much simpler than any other fabrication technique I’m capable of. In this case, I printed a mount that attaches the lens to the correct spot in front of the projector at a 12.15° angle to center the projected image. I also printed brackets to attach the globe to the lens/projector mount. The whole thing is sitting on a GorillaPod until I get around to building something more permanent.
Actually calibrating a projector with slight pincushion through a $25 lens into a bathroom fixture, attached together with some guesswork and a 3D printer, is well beyond my linear algebra skill, so I simplified the calibration procedure down to four terms. We need to find the radius in pixels of the circle being projected and the x and y position of the center of that circle for starters. The more difficult part, which tested my extremely rusty memory of trigonometry, is figuring out how to map the hemisphere coming out of the fisheye lens to the spherical display surface. For that, we have a single number for the distance from the center of the sphere to the lens, in terms of a ratio of the projected radius.

The math is all available in the code, but the calibration script I wrote is pretty simple to use. It uses pygame to project longitude lines and latitude color sections as in the image above. You use the arrow keys to line up the longitude lines correctly to arrive at the x and y position, the plus and minus keys to adjust the radius until it fits the full visible area of the sphere, and 9 and 0 to adjust the lens offset until the latitudes look properly aligned. What you end up with is close enough to correct to look good, though as you can see in the images, the projector doesn’t quite fit the lens or fill the sphere. The script saves the calibration information in a pickle file for use elsewhere.
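Stripped of the pygame drawing, the calibration state boils down to four numbers and a key map. This is a sketch with hypothetical parameter names (cx, cy, radius, offset) and step sizes, not the script’s actual code; the real thing redraws the test pattern after every change.

```python
import pickle

def adjust_calibration(cal, key, step=1):
    """Apply one calibration keypress to a dict of the four terms.

    Arrow keys move the projected circle's center, +/- grow and
    shrink its pixel radius, and 9/0 nudge the lens-to-sphere-center
    distance ratio.
    """
    moves = {
        "left": ("cx", -step), "right": ("cx", step),
        "up": ("cy", -step), "down": ("cy", step),
        "-": ("radius", -step), "+": ("radius", step),
        "9": ("offset", -0.01), "0": ("offset", 0.01),
    }
    if key in moves:
        name, delta = moves[key]
        cal[name] += delta
    return cal

def save_calibration(cal, path):
    """Persist the calibration dict as a pickle for other scripts."""
    with open(path, "wb") as f:
        pickle.dump(cal, f)
```

Any script that renders to the globe can then unpickle the same four terms instead of recalibrating.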
Going back to the initial goal, I wrote a script to turn equirectangular projected maps of the Earth into roughly azimuthal equidistant projected images calibrated for a Snow Globe like the one above. There are plenty of maps of the former projection available freely, like Natural Earth and Blue Marble. Written in python, the script is quite slow, but it serves as a proof of concept. The script, along with the calibration script and the models for the 3D printed mounts are all available on github. I’ve finally fully accepted git and no longer see a point in attaching the files to these posts themselves. I put a Part One in the title to warn you that this blog is going to be all Snow Globe all the time for the foreseeable future. Up next is writing a faster interface to interactively display to it in real time, and if I think of a good way to do it, touch input is coming after that.
Download from github: git://github.com/nrpatel/SnowGlobe.git
Math. It turns out it’s not quite like riding a bike. A year since college, and two since my last computer vision course, my knowledge of linear algebra is basically nil. Several projects I’m stewing on are bottlenecked on this. I decided to relearn some basics and create a tool I’ve wanted for a while: a method to quickly and easily calculate the homography between a camera and a projector. That is, a transformation that allows you to map points from the camera plane to the display plane. This opens up a world of possibilities, like virtual whiteboards and interactive displays.
I won’t go into detail about deriving the transformation matrix, as there is information elsewhere better than I could present. The calculation requires four or more matching point pairs between the two planes. Finding the points manually is a pain, so I wrote a script that uses Pygame and NumPy to do it interactively. The script works as follows:
1. Point an infrared camera at a projector screen, with both connected to the same computer.
2. Run the script.
3. Align a lit IR LED to the green X on the projector screen, and press any key.
4. Repeat step 3 until you have four points (or more, depending on the script mode), at which point,
5. The script will calculate the homography, print it out, and save it as a NumPy file.
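For the curious, the core calculation — a minimal sketch, not the script’s exact implementation — is the standard direct linear transform: each point pair contributes two linear equations, and with the bottom-right entry of the matrix fixed to 1, the remaining eight fall out of a least-squares solve.

```python
import numpy as np

def compute_homography(src, dst):
    """Solve for the 3x3 homography H mapping src points to dst points.

    src, dst: sequences of four or more matching (x, y) pairs.
    Each pair contributes two rows to the linear system A h = b,
    with the h33 entry of H fixed to 1.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float),
                            rcond=None)
    return np.append(h, 1).reshape(3, 3)

def apply_homography(H, point):
    """Map one (x, y) point through H in homogeneous coordinates."""
    p = H @ np.array([point[0], point[1], 1.0])
    return (p[0] / p[2], p[1] / p[2])
```

With more than four pairs the least-squares solve averages out measurement noise, which is why the script accepts extra points.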
The script in its current form uses any Pygame supported infrared camera. Today, that is likely a modded PS3 Eye or other webcam, unless you’re lucky enough to have a Point Grey IR camera. I do not, so I hot glued an IR filter from eBay to the front of my Eye, which I may have forgotten to return to CMU’s ECE Department when I graduated. Floppy disk material and exposed film can also function as IR filters on the cheap; just be sure to pop the visible light filter out of the camera first.
It would be overly optimistic of me to believe there are many people in the world with both the hardware and the desire to run this script. Luckily, due to the magic of open source software and the modularity of Python, individual classes and methods from the file are potentially useful. It should also be relatively straightforward to modify it to accept other types of input, like a regular webcam with color tracking or a Wii Remote. I will add the latter myself if I can find a reasonable Python library to interface with one.
Once you have the transformation matrix, you can simply dot the matrix with a point from the camera to get the corresponding point on the projector. The blackboard script, demonstrated by my roommate above and downloadable below, shows a use case for this: a drawing app using an IR LED as a sort of spray can. The meat of it, converting a point from camera to projector coordinates, is basically:
# use homogeneous coordinates
p = numpy.array([point[0], point[1], 1])
# convert the point from camera to display coordinates
p = numpy.dot(matrix, p)
# normalize it
point = (p[0]/p[2], p[1]/p[2])
The homography.py script takes the filename of the matrix file to output to as its only argument. It also has an option, “-l”, that allows you to use four or more points randomly placed around the screen, rather than the four corner points. This could come in handy if your camera doesn’t cover the entire field of view of the projector. You can hit the right arrow key to skip points you can’t reach and escape to end and calculate. The blackboard.py script takes the name of the file the homography script spits out as its only argument. Both require Pygame 1.9 or newer, as it contains the camera module. Both are also licensed for use under the ISC license, which is quite permissive.
nrp@Mediabox:~/Desktop$ python homography.py blah
First display point (0, 0)
Point from source (18, 143). Need more points? True
New display point (1920, 0)
Point from source (560, 137). Need more points? True
New display point (0, 1080)
Point from source (32, 446). Need more points? True
New display point (1920, 1080)
Point from source (559, 430). Need more points? False
Saving matrix to blah.npy
array([[ 3.37199729e+00, -1.55801855e-01, -3.84162860e+01],
[ 3.78207304e-02, 3.41647264e+00, -4.89236361e+02],
[ -6.36755677e-05, -8.73581448e-05, 1.00000000e+00]])
nrp@Mediabox:~/Desktop$ python blackboard.py blah.npy
This idea, like most good ones, came to me while I was in my bathroom. I recently took delivery of a Microvision SHOWWX laser pico projector, and I was trying to come up with a quick project to show the cool stuff it could do. I noticed that the light above my mirror was diffused by a large frosted glass dome, which led to this strange project. It’ll probably make more sense around Halloween.
The software is pretty similar to what I wrote the last time I did face tracking: OpenCV’s very convenient Haar feature detection along with Pygame to do the image display. I tried taking a picture of my own eye, but Flickr user Sarah Cartwright’s is much more photogenic and available under Creative Commons Attribution-Share Alike.
A regular pico projector would probably work as well, but the contrast ratio on this thing means it looks like the iris is being projected by itself. I have some much cooler projects planned that will really take advantage of the focus free nature of the laser projector. In the meantime, the code for this project is available below. The image is CC A-SA, the code is public domain, and the Haar file is under the Intel License Agreement like the rest of OpenCV. I also attached the Python script itself for perusal by search engines or folks who just want to see the code.
This is what happens (best case) when geeks have too much free time. My friend John Martin and I decided to take part in Star Wars Uncut, an experiment to recreate Star Wars: A New Hope in a series of several hundred 15 second chunks, each created by random people across the internet. I chose a scene with a pleasant blend of dialog and pyrotechnics.
John has an inordinate quantity of Star Wars merchandise, so we went with stop motion animation for the scene, something neither of us was familiar with. We found the actual action figures for almost every part of the 15 seconds, and really only had to improvise on the explosions, as we would rather not blow up collectibles.
With assistance from Peter Martin and Meg Blake, we fabricated Y-wings out of soda bottles, cardboard tubes, cardboard, and spray paint. We filled each with a mixture of half potassium nitrate and half sugar, lit it with a fuse, and dragged it with a string as we took pictures. As you can see by the video above, the results are reasonable for an afternoon of filming and a $0 budget.