Maker Ant Farm: Minecraft Skin Generation with a Kinect

Since my seemingly fragile 3D printer had never left my desk before and even in prime condition could only print an object every 10 minutes or so, I decided that I needed a backup project for the Bay Area Maker Faire last month.  I conscripted Will to help me out on a purely software, Kinect-based project.  After downscoping our ideas several times as the Faire weekend approached, we eventually settled on generating Minecraft player skins of visitors.  The printer ended up working fine (and more reliably than the software-only project), but the Minecraft “Maker Ant Farm” was more of a crowd pleaser.

A visitor would stand in front of the Kinect and enter the field goal/psi calibration pose.  We used OpenNI and NITE to find their pose and segment them out of the background for a preview display.  Using OpenCV, we mapped body parts to the corresponding sections of the Minecraft skin texture.  Since we could only see the fronts and parts of the sides of a person, we just made up what the back would look like based on the front.  This was of course imprecise and resulted in heads that often looked like they had massive bald spots.  Rather than trying to write some kind of intelligent texture fill algorithm on a short schedule, we just gave all of the skins yellow hard hats (not blonde hair, contrary to popular opinion).

After generating the skin, we loaded it back onto ShnitzelKiller’s player rig in Panda3D.  I had planned on writing full skeletal tracking for the rig, but ran out of time and settled on just having it follow the position and rotation of the user and perform an animated walk.  After walking around a bit watching a low-res version of themselves, the user could enter a Twitter handle or email address to keep the skin.  The blocky doppelgänger was then dropped onto a Minecraft server instance we had running as a bot that did simple things like walk around in circles or drown.
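
The actual code is on GitHub (see below), but here is a minimal sketch of the texture-mapping idea, written against the modern cv2 API rather than what we actually used.  The part_crops dictionary is a hypothetical stand-in for the segmented body-part images the NITE skeleton gives you, and the destination rectangles follow the classic 64x32 Minecraft skin layout.

import cv2
import numpy as np

# (x, y, w, h) destination rectangles for the front faces in the
# classic 64x32 Minecraft skin texture.
SKIN_FRONT_RECTS = {
    "head":      (8, 8, 8, 8),
    "body":      (20, 20, 8, 12),
    "right_arm": (44, 20, 4, 12),
    "right_leg": (4, 20, 4, 12),
}

def build_skin(part_crops):
    """part_crops maps a part name to its BGR crop from the camera frame."""
    skin = np.zeros((32, 64, 3), dtype=np.uint8)
    for part, (x, y, w, h) in SKIN_FRONT_RECTS.items():
        crop = part_crops.get(part)
        if crop is None:
            continue
        # Downscale each segmented body part into its tiny texture cell.
        skin[y:y + h, x:x + w] = cv2.resize(crop, (w, h), interpolation=cv2.INTER_AREA)
    # The Kinect only sees the front, so fake the back of the head by
    # mirroring the front (hence the bald spots and the hard hats).
    skin[8:16, 24:32] = cv2.flip(skin[8:16, 8:16], 1)
    return skin

A cv2.imwrite("skin.png", build_skin(part_crops)) at the end gives you something you can load onto the rig or a Minecraft server.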

[Image: a generated Minecraft skin]

Despite some crashiness in NITE and the extremely short timeframe we wrote the project in, it ended up working reasonably well.  Thanks to the low-resolution style and implied insistence on imagination in Minecraft, the players avoid looking like the ghastly zombies in Kinect Me.  You can see examples of some of the generated skins on @MakerAntFarm.  I hate not releasing code, but I almost hate releasing this code more.  It is very likely the worst I have ever hacked together, and I can’t help but suspect it will be held against me at some point.  Nonetheless, for the greater good, it’s up on GitHub.  There are vague instructions on how one might use it in the README.  Good luck, and I’m sorry.

Evil Eye: Microvision SHOWWX as a Face Tracking Eyeball

This idea, like most good ones, came to me while I was in my bathroom.  I recently took delivery of a Microvision SHOWWX laser pico projector, and I was trying to come up with a quick project to show the cool stuff it could do.  I noticed that the light above my mirror was diffused by a large frosted glass dome, which led to this strange project.  It’ll probably make more sense around Halloween.

[Image: Microvision SHOWWX setup]

The software is pretty similar to what I wrote the last time I did face tracking:  OpenCV’s very convenient Haar feature detection, along with Pygame for the image display.  I tried taking a picture of my own eye, but Flickr user Sarah Cartwright’s is much more photogenic and available under a Creative Commons Attribution-Share Alike license.
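
The full script is attached below, but as a rough sketch of the loop, assuming the modern cv2 API (the original used the older bindings), the stock frontal face cascade copied alongside the script, and a hypothetical iris.png cropped out of that eye photo:

import cv2
import pygame

# The stock frontal face cascade, copied next to the script.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

pygame.init()
# The SHOWWX is an 848x480 (WVGA) projector.
screen = pygame.display.set_mode((848, 480), pygame.FULLSCREEN)
iris = pygame.image.load("iris.png").convert_alpha()

running = True
while running:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=4)
    screen.fill((255, 255, 255))  # the frosted dome is the white of the eye
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # Map the face's position in the camera frame to an iris position
        # on the projected image, mirrored so the eye follows the viewer.
        fx = 1.0 - (x + w / 2.0) / frame.shape[1]
        fy = (y + h / 2.0) / frame.shape[0]
        screen.blit(iris, (int(fx * 848) - iris.get_width() // 2,
                           int(fy * 480) - iris.get_height() // 2))
    pygame.display.flip()
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False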

[Image: inside the eye]

A regular pico projector would probably work as well, but the contrast ratio on this thing makes it look like the iris is being projected by itself.  I have some much cooler projects planned that will really take advantage of the focus-free nature of the laser projector.  In the meantime, the code for this project is available below.  The image is CC BY-SA, the code is public domain, and the Haar file is under the Intel License Agreement, like the rest of OpenCV.  I also attached the Python script itself for perusal by search engines or folks who just want to see the code.

Download:
EvilEye.zip
EvilEye.py

A Dirty Hack for OpenCV on the OLPC XO

OpenCV has issues using the camera on the XO, arising from the way it negotiates color palette settings with the OV7670 v4l2 driver.  I haven’t delved deep enough into v4l2 yet to come up with a proper fix for either the driver or OpenCV.  However, I have a temporary fix that (probably) works fine but is very ugly.

The easy and dirty solution is to install the opencv, opencv-devel, and opencv-python packages with yum, download the libhighgui.so.1.0.0 that I compiled, and copy it to /usr/lib/libhighgui.so.1.0.0.  In short:

sudo yum install opencv opencv-devel opencv-python
sudo wget https://eclecti.cc/files/2008/03/libhighguiso100 -O /usr/lib/libhighgui.so.1.0.0
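
To check that the patched library actually works, something like the following should grab and save a frame.  This assumes the old SWIG-style bindings that the opencv-python package of that era shipped, so the module and function names may differ on other versions:

from opencv import highgui

# Open the camera and pull a single frame through the patched highgui.
capture = highgui.cvCreateCameraCapture(0)
frame = highgui.cvQueryFrame(capture)
if frame is None:
    print "Capture failed - the palette negotiation bug may still be biting"
else:
    highgui.cvSaveImage("test.png", frame)
    print "Saved test.png"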

Continue reading for details on the problem and the actual temporary fix that I did.

Face Detection on the OLPC XO

[Image: sample face detection output]

Ok, here is some working code for face detection using the XO’s webcam.  It requires the opencv, opencv-python, and xawtv packages.  Xawtv can be skipped if you change the code to use gstreamer instead, but gstreamer takes around 3 seconds to initialize the camera, take a picture, and save it.  Xawtv, on the other hand, takes about half a second.  The face detection algorithm, from OpenCV itself, takes anywhere from 0.1 to 3 seconds depending on the parameters you choose.  I chose ones that seem to work almost all of the time and take about 0.25 seconds.
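
For a sense of the flow, here is a condensed sketch of the capture-then-detect step.  It is written against the modern cv2 API for readability rather than the bindings the attached script actually uses, and the parameter values are illustrative of the speed/accuracy tradeoff, not the exact ones in face.py:

import subprocess
import cv2

# xawtv's streamer grabs a single frame far faster than gstreamer on the XO.
subprocess.call(["streamer", "-c", "/dev/video0", "-o", "shot.jpeg"])

img = cv2.imread("shot.jpeg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# A coarser scale step and a minimum window size are what keep the
# detection time down to a fraction of a second on the XO's hardware.
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=4, minSize=(40, 40))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.png", img)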

Object detection is pretty fascinating stuff, and well beyond the scope of this blog post, so I’ll just refer you to this pdf.  By training and customizing Haar files, one could detect an endless variety of objects.  Gesture detection and sign language to text are two of the more interesting possibilities.  If I can find image databases, I might try training a Haar filter for hand detection.
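
The training pipeline looks roughly like this; take the binary names and flags as approximate, since they vary by OpenCV version and packaging, and positives.txt/negatives.txt are hypothetical annotation and background lists you would build from such a database:

opencv_createsamples -info positives.txt -vec hands.vec -num 1000 -w 20 -h 20
opencv_haartraining -data hand_cascade -vec hands.vec -bg negatives.txt -npos 1000 -nneg 2000 -nstages 20 -w 20 -h 20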

The attached code is basically just a proof of concept.  I’m going to try a v4l2 Python library and package something up as a usable XO activity this weekend.  To run this, just put both of the files below in the same directory and run “python face.py”.  Assuming you’re on an XO and have opencv, opencv-python, and xawtv installed, it should pause for a second while the camera turns on and captures a picture, then display the image with any faces enclosed by green rectangles.  It works best if most of your head is within the frame and there is a decent amount of light.

Face detection script
The Haar cascade file that it uses
