Snow Globe Redux: Gakken Worldeye Projector Upgrade

Worldeye projector upgrade

A few months' hiatus from this blog turned into five and a half years, but that is a much longer story.  This one is about the state of desktop spherical displays in 2018.  In 2011, I hacked together the Snow Globe spherical display from a laser pico-projector, an off-the-shelf fish-eye lens, a bathroom light fixture, and some shader code.  I had hoped to make it easy for folks to build their own version by publishing everything, but the lens ended up being unobtanium.  Judging by the comments on the post, nobody was able to properly replicate the build.

Gakken Worldeye Teardown

A few months ago, Palmer Luckey gave me a heads up that a company in Japan called Gakken had made a consumer version of the idea and that, like everything in the world, there were sellers on eBay and Amazon importing it into the US.  The Gakken Worldeye sounded like it could fulfill the dream of a desktop spherical display, so I bought one to use and another to tear down.  It ended up being a hemispherical display with a pretty decent projection surface but a terrible projector and even worse driving electronics.  The guts of the sphere are above.  There is a VGA-resolution TI DLP that is cropped by the lens to a 480-pixel circle.  The Worldeye takes 720p input over HDMI, which is then downsampled and squashed horizontally to fit that circle by an MStar video bridge.  Between the poor projector resolution and the questionable resampling, the results look extremely blurry.

Sony spherical projector

I figured it would be possible to improve on the sphere by keeping the display surface and lens while swapping out the projector and electronics.  In the time since the ShowWX used in the Snow Globe was released, Microvision has developed higher resolution laser scanning projector modules in conjunction with Sony and others.  I picked up a Sony MP-CL1 with one of these modules, which is natively 1280×720.  This should have been a decent improvement over the 848×480 in the ShowWX.  I then CAD’d up and 3D printed a holder to mount it along with the original Worldeye lens into the globe.

Higher resolution Gakken Worldeye

The results are a bit underwhelming.  The image looks better than the stock Worldeye, but it is still quite blurry.  I realized afterwards that the sphere diameter is too small to take advantage of the projector resolution.  At around a 5″ radius, the surface of the sphere is getting around 1.8 pixels per mm (assuming uniform distortion).  The laser beam coming out of the projector is well over 1 mm wide, and probably closer to 1.5 mm, so neighboring pixels blend heavily into each other.  The lens MTF is probably also pretty poor, which doesn’t help the sharpness issue.  If you’re interested in trying this out anyway, the .scad and .stl files are up on Thingiverse, and the code for the Science on a Snow Globe application to display equirectangular images and videos is on GitHub.  To answer the question I opened with: spherical displays are more accessible in 2018 than they were in 2011, but they don’t seem to be any better in quality.  Hopefully someone takes the initiative to solve this.
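
As a sanity check on that pixel pitch figure, here is a quick back-of-the-envelope calculation (my own numbers, assuming the 720-pixel image height gets spread over the pole-to-pole arc of a 5″ radius hemisphere):

import math

# assumes the 720 pixel image height covers the pole-to-pole arc of the hemisphere
radius_mm = 5 * 25.4          # ~127 mm
arc_mm = math.pi * radius_mm  # half the sphere circumference, ~399 mm
print(720 / arc_mm)           # ~1.8 pixels per mm, each lit by a >1 mm wide beam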

Blinded by the Light: DIY Retinal Projection

Retinal Projection

After grabbing a couple of Microvision SHOWWX laser picoprojectors when they went up on Woot a few months back, I started looking for ways to use them.  Microvision started out of a 1994 project at the University of Washington HITLab to develop laser-based virtual retinal displays: displays that project an image directly onto the user’s retina.  This allows for a potentially very compact see-through display that is only visible to the user.  The system they developed reflected lasers off of a mechanical resonant scanner to deflect them vertically and horizontally, placing pixels at the right locations to form an image.  The lasers were modulated to vary the brightness of the pixels.  The SHOWWX is essentially this setup after 15 years of development to make it inexpensive and miniaturize it to pocket size.  The rest of the retinal display system was a set of optics designed to reduce the scanned image down to a point at the user’s pupil.  I thought I would try to shrink and cheapen that part of it as well.

The setup I built is basically what Michael Tidwell describes in his Virtual Retinal Displays thesis.  The projected image passes through a beamsplitter where some of the light is reflected away, reflects off of a spherical concave mirror to converge back down to a point, and hits the other side of the beamsplitter, where some of the light passes through and the rest is reflected to the user’s pupil along with light passing through the splitter from the outside world.  For the sake of cost savings, all of my mirrors are from the bargain bin of Anchor Optics.  The key to the project is picking the right size and focal length of the spherical mirror.  The larger setup in the picture below uses a 57 mm focal length mirror, which results in a fairly large rig with the laser scanner sitting at twice the focal length (the center of curvature) away from the mirror.  The smaller setup has a focal length around 27 mm, which results in an image that is too close to focus on unless I take my contact lenses out.  The mirror also has to be large enough to cover most of the projected image, which means the radius should be at least ~0.4x the focal length to cover the 24.3° height and at most ~0.8x to cover the full 43.2° width coming from a SHOWWX.  Note that this also puts the field of view of the virtual image entering the eye somewhere between a 24.3° diameter circle and a 24.3° by 43.2° rounded rectangle.
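
Those radius ratios fall out of simple geometry, sketched below with my own numbers: with the scanner at the center of curvature (twice the focal length), the mirror radius needed to cover a given full projection angle is 2f·tan(angle/2).

import math

# assumes the scanner sits at the center of curvature, 2x the focal length away
def mirror_radius_mm(focal_length_mm, full_angle_deg):
    return 2 * focal_length_mm * math.tan(math.radians(full_angle_deg) / 2)

f = 57.0  # mm, the larger mirror above
print(mirror_radius_mm(f, 24.3) / f)  # ~0.43x f to cover the 24.3 degree height
print(mirror_radius_mm(f, 43.2) / f)  # ~0.79x f to cover the 43.2 degree width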

Projection Rig

Aside from my inability to find properly shaped mirrors, the big weakness of this rig is the size of the exit pupil.  The exit pupil is basically the useful size of the image leaving the system; in this case, it is the width of the point that hits the user’s pupil.  If that point is too small, eye movement will cause the eye’s pupil to miss the image entirely.  Because the projector is at the center of curvature of the mirror (see the optical invariant), the exit pupil is the same width as the laser beams coming out of the projector: around 1.5 mm wide.  This makes it completely impractical to use head mounted or, really, any other way.  I paused work on this project a few months ago with the intention of coming back to it when I could think of a way around this.  With usable see-through consumer head mounted displays just around the bend though, I figured it was time to abandon the project and publish the mistakes I’ve made in case it helps anyone else.

If you do want to build something like this, keep in mind that the title of this post is only half joking.  I don’t normally use bold, but this is extra important: If you don’t significantly reduce the intensity of light coming from the projector, you will damage your eyes, possibly permanently.  The HITLab system had a maximum laser power output of around 2 μW.  The SHOWWX has a maximum of 200 mW, which is 100,000x as much!  Some folks at the HITLab published a paper on retinal display safety and determined that the maximum permissible exposure from a long term laser display source is around 150 μW, so I needed to reduce the power by at least 10,000x to have a reasonable safety margin.  As you can see in the picture above, I glued an ND1024 neutral density filter over the exit of the projector, which reduces the output to 0.1%.  Additionally, the beamsplitter I picked reflects away 10% of the light after it exits the projector and discards 90% of what bounces off of the concave mirror.  Between the ND filter, the beamsplitter, and setting the projector to its lowest brightness setting, the system should be safe to use.  The STL file and a fairly ugly parametric OpenSCAD file for the 3D printed rig to hold it all together are below.

blinded.scad
blinded.stl
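
For those following along with the safety math, here is a rough sketch of the attenuation figures above (my own arithmetic, assuming ~90% of the light survives the first pass through the splitter and only ~10% of the return leg is redirected toward the eye):

# rough laser safety arithmetic -- do your own math before pointing this at an eye
max_output_w = 200e-3     # SHOWWX peak output, 200 mW
nd1024 = 1.0 / 1024.0     # ND1024 filter transmits roughly 0.1%
splitter_pass1 = 0.9      # ~10% reflected away on the way to the mirror
splitter_pass2 = 0.1      # ~10% of the returning light reaches the eye

at_eye_w = max_output_w * nd1024 * splitter_pass1 * splitter_pass2
print(at_eye_w * 1e6, "uW")  # ~17.6 uW, under the ~150 uW long term exposure figure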

Reverse Engineering the Lytro .LFP File Format

Lytro Microlens Array

After getting my Lytro camera yesterday, I set about answering the questions I had about the light field capture format from the last time around.  Lytro may be focusing (pun absolutely intended) on the Facebook-using crowd with their camera and software, but their file format suggests they don’t mind nerds like us poking around.  The file structure is the same as what they use for their compressed web display .lfp files, complete with a plain text table of contents, so I was able to reuse the lfpsplitter tool I wrote earlier with some minor modifications.  The README with the tool describes in detail the format of the file and how to parse it.

The table of contents in the raw .lfp files gives away most of the camera’s secrets.  It contains a bunch of useful metadata and calibration data like the focal length, sensor temperature, exposure length, and zoom length.  It also gives away the fact that the camera contains a 3-axis accelerometer, storing the orientation of the camera with respect to gravity in each image.  The physical sensor is 3280 by 3280 pixels, and the raw file just contains a BGGR Bayer array of it at 12 bits per pixel.  Saving the array and converting it to TIFF using the raw2tiff command below shows that each microlens is about 10 pixels in diameter, with some vignetting at the edges.

raw2tiff -w 3280 -l 3280 -d short IMG_0004_imageRef0.raw output.tif

Syncing the camera to Lytro’s desktop software backs it up the first time.  Amazingly, the backup file uses the same structure as both .lfp file types.  The file contains a huge amount of factory calibration data, like an array of hot or stuck pixels and color calibration under different lighting conditions.  Incredibly, it also reveals that there is functioning Wi-Fi on board the camera, with files named “C:\\CALIB\\WIFI_PING_RESULT.TXT” and “C:\\CALIB\\WIFI_MAC_ADDR.TXT”, which matches what the FCC teardowns show.  There is no mention of Bluetooth support though, despite the chipset supporting it.  In any case, it seems there is a lot of cool stuff coming via firmware updates.

Hopefully one of those updates enables a USB Mass Storage mode, as there does not appear to be any way to get files off of the camera in Linux. I had to borrow my roommate’s MacBook Air for this escapade. The camera shows up as a SCSI CD drive, but mounting /dev/sr0 only shows a placeholder message intended for Windows users.

Thank you for purchasing your Lytro camera.  Unfortunately, we do not have a
Windows version of our desktop application at this time.  Please check out
http://support.lytro.com for the latest info on Windows support.

It was pretty trivial to write the lfpsplitter to get the raw data shown above, but doing anything useful with it will take more effort.  Normally simple stuff like demosaicing the Bayer array will likely be complicated by the need to avoid the gaps between microlenses and to not distort the ray direction information.  Getting high quality results will probably also require applying the calibration information from the camera backups.  A first party light field editing library would be wonderful, but Lytro probably has other priorities.
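
As a starting point, here is a naive preview-quality demosaic of the sensor dump, assuming the output.tif produced by the raw2tiff command above.  Averaging each 2x2 BGGR cell down to one RGB pixel ignores the microlens boundaries entirely, so it is only useful for eyeballing the data, not for recovering ray directions:

import cv2
import numpy as np

# load the 16-bit grayscale TIFF produced by raw2tiff
bayer = cv2.imread("output.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)

# BGGR layout: row 0 is B G B G ..., row 1 is G R G R ...
b  = bayer[0::2, 0::2]
g1 = bayer[0::2, 1::2]
g2 = bayer[1::2, 0::2]
r  = bayer[1::2, 1::2]

rgb = np.dstack([r, (g1 + g2) / 2.0, b])
rgb = (rgb / rgb.max() * 255.0).astype(np.uint8)
cv2.imwrite("preview.png", cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR))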

You can grab my lfpsplitter tool from GitHub at git://github.com/nrpatel/lfptools.git and I uploaded an example .lfp you can use with it if you want to play with light field captures without the $400 hardware commitment.

Semi-Automatic Paintbrush

The Mona Lisa

I bought an InkShield from the Kickstarter a few months ago, mostly out of a desire to support an interesting Open Hardware project.  It wasn’t until yesterday that I thought of something useful to do with it.  Instead of doing that, I made this project, called the Semi-Automatic Paintbrush.  Using an infrared camera, an InkShield, an ink cartridge with an infrared LED stuck to it, and your arm, you can copy great works of art, or just any old picture.

The desktop-side software involved is called paintbrush.py.  It conveniently uses the homography module I wrote a year ago to map what the IR camera sees to the coordinate system of the canvas.  The mapping is calibrated interactively by placing the cartridge/LED at each of the four corners of the canvas and pressing a key when prompted.  After that, the motion of the LED is tracked, the corresponding region of the image is found, and the script sends serial commands to an Arduino with the InkShield telling it which nozzles to fire at what duty cycle to achieve the correct level of gray, or in this case, green.  The painted regions are tracked to prevent flooding.
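
The same calibration and mapping step can be sketched with OpenCV’s built-in homography functions (shown here instead of the homography module the script actually uses; the corner pixel positions below are made up, and camera capture and InkShield serial I/O are left out):

import numpy as np
import cv2

# LED positions in camera pixels when held at the four canvas corners (hypothetical)
camera_corners = np.float32([[102, 80], [530, 95], [518, 410], [96, 396]])
# the same corners in canvas coordinates, e.g. millimeters
canvas_corners = np.float32([[0, 0], [300, 0], [300, 200], [0, 200]])

H, _ = cv2.findHomography(camera_corners, canvas_corners)

def camera_to_canvas(x, y):
    """Map a tracked LED position from camera pixels to canvas coordinates."""
    point = np.float32([[[x, y]]])
    return cv2.perspectiveTransform(point, H)[0, 0]

print(camera_to_canvas(300, 250))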

As you can see from the image above, the results are not going to end up in the Louvre, but they do have a kind of partially mechanical, partially organic flavor to them.  If you have an InkShield, an IR LED, and a pygame-supported IR camera (I use a modified PS3 Eye), and you’re interested in making your own lazy artwork, the script is available on GitHub under an ISC License.  The Arduino sketch requires the InkShield library and is LGPL.  Usage instructions for the script are included with it.

Augmented Reality for 3D Printing

Yoda

My roommate’s struggles designing his first 3D printed part gave me the idea to write an augmented reality viewer that lets you preview and interact with STL models in the real world without having to commit an object to plastic. This is actually sort of an update on part of a project I did for a Computational Photography course three years ago, but not terrible looking this time. I used the ArUco library to track the fiducial markers, largely because there is a JavaScript version if I ever want to make it web-based. The program, which I uncreatively named arstl, reads in ASCII and binary STL files and displays them on top of the tracked marker. Right now, it uses a pretty basic OpenGL shader for a shiny plastic look, but I plan on making a more convincingly plastic one with bump mapping and subsurface scattering soon. As usual, the code is up on GitHub under an ISC License. The STL parsing part of it is in the public domain, in case anyone finds it useful.
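
For reference, the binary STL half of that parsing is simple enough to sketch in a few lines (my own minimal Python version, not the public domain C parser from the repo): an 80-byte header, a uint32 triangle count, then 50 bytes per triangle holding a normal, three vertices, and a 2-byte attribute.

import struct

def read_binary_stl(path):
    """Return a list of (normal, [v0, v1, v2]) tuples from a binary STL file."""
    triangles = []
    with open(path, "rb") as f:
        f.read(80)  # header, safe to ignore
        (count,) = struct.unpack("<I", f.read(4))
        for _ in range(count):
            values = struct.unpack("<12fH", f.read(50))
            normal = values[0:3]
            vertices = [values[3:6], values[6:9], values[9:12]]
            triangles.append((normal, vertices))
    return triangles

print(len(read_binary_stl("model.stl")), "triangles")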
