Speeding Up Skeinforge with PyPy

PyPy 1.5 vs CPython 2.6.6 for Skeinforge

Now that I’ve recovered from Maker Faire, I can continue documenting what I did.  In the lead-up to the event, I tried to streamline the FaceCube project as much as possible so visitors wouldn’t have to waste precious Faire time waiting for a print to start.  On the hardware side, I kept the extruder and heated bed warmed up to operating temperature and (literally) hot swapped 4″x4″ pieces of glass so that prints could run back to back.  I updated the FaceCube script to do capture, cleaning, meshing, scaling, and running through OpenSCAD with a single button press.  The remaining bottleneck was running Skeinforge on my laptop, which is geriatric in computer years.  Skeinforge is an amazing utility, but written in Python, it is slower than a drunk sloth.

There are ways of speeding up drunk sloths though.  Psyco is commonly recommended, but it does not support 64-bit architectures.  My roommate Will came up with a plan to run a Skeinforge server on PyPy on a faster computer and have a client on my laptop send STLs to it for skeining.  We ran out of time on that, but we did get PyPy running normal Skeinforge on my laptop.  As of PyPy 1.5, there is support for Tkinter.  Following those instructions, you can install PyPy and Tkinter and run Skeinforge on 64-bit Linux:

wget https://bitbucket.org/pypy/pypy/downloads/pypy-1.5-linux64.tar.bz2
tar -xjvf pypy-1.5-linux64.tar.bz2
cd pypy-c-jit-43780-b590cf6de419-linux64
wget http://peak.telecommunity.com/dist/ez_setup.py
./bin/pypy ez_setup.py
./bin/easy_install tkinter-pypy
./bin/pypy ~/path_to_skeinforge/skeinforge.py

The fonts may look slightly different, but the application should behave the same.  Export times should drop over the first couple of files you put through as the JIT compiler optimizes, and then stay low as long as you keep the process running.  On my laptop with a 2.00 GHz Core 2 Duo, Skeinforge runs 2 to 3 times faster on PyPy than on stock CPython 2.6.6.  The tested objects were a Weighted Storage Cube, a Flower, Whistle v2, and the Prusa Mendel vertex.
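
As for the skeining server we ran out of time on, the idea is simple enough to sketch.  Below is a minimal, untested outline: a client sends an STL over a socket, and the server skeins it under PyPy and streams the G-code back.  It assumes Skeinforge’s behavior of crafting a file passed on the command line and writing the G-code next to it; the exact invocation and output filename vary by version, so treat this as a sketch rather than a finished tool.

# Rough sketch of Will's skeining server idea: run this under PyPy on a
# fast machine and send it STLs from the slow laptop. Untested; the
# Skeinforge invocation and output filename vary by version.
import socket
import subprocess
import tempfile

def skein(stl_path):
    # Skeinforge crafts a file passed on the command line; by default
    # the G-code lands next to the STL with an _export suffix.
    subprocess.check_call(["./bin/pypy", "skeinforge.py", stl_path])
    return stl_path[:-len(".stl")] + "_export.gcode"

def serve(port=9876):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", port))
    listener.listen(1)
    while True:
        conn, _ = listener.accept()
        stl = tempfile.NamedTemporaryFile(suffix=".stl", delete=False)
        # the client sends the STL, then shuts down its write side
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            stl.write(chunk)
        stl.close()
        # skein it and stream the resulting G-code back
        with open(skein(stl.name), "rb") as gcode:
            conn.sendall(gcode.read())
        conn.close()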

Gestural Printing: Jumping the Shark on Kinect Hacks

We’ve seen a seemingly endless array of amazing Kinect hacks over the last few months, from superhero generators to obstacle avoiding quadcopters.  However, it was only a matter of time before someone came up with a hack so inane and irrelevant that it would bring shame to the entire hobby.  That time is now, and that someone is me.  I bring to you, gestural 3D printing!  Using the Kinect to track your hand, you can draw one layer at a time, with the printer following your every move.  Pushing forward extrudes plastic, while pulling your hand back will start a new layer.  Who needs difficult and confusing CAD software when you can just directly draw the object you want to print?

Really though, you can only get through 4 or 5 layers before your arm feels like it’s going to fall off, and the resulting object will look like a stringy blob of plastic vomit.  The source is in the FaceCube GitHub repository.  I don’t recommend actually using it, but if for some reason you want to, the dependencies are mindbogglingly complex.  You’ll need to install OpenNI and NITE to start with; this guide at Keyboardmods is helpful.  You’ll also need my branch of OSCeleton, which improves hand tracking.  With the Kinect hooked up, you can run ./osceleton -n -f to start hand tracking and serve the data over Open Sound Control.  You can then run the gestureprinter.py script, which requires pyOSC, pygame, and the RepRapArduinoSerialSender script from Skeinforge, which is also in the FaceCube repository.  Of course, you’ll also need both a Kinect and a 3D printer that is compatible with the G-code that RepRap firmwares use.  The script is set up for my printer specifically, but it should be straightforward to tweak for others if you dare.
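
If you’re curious how little there is to it, here’s a conceptual sketch of the main loop, not the real gestureprinter.py.  pyserial stands in for the RepRapArduinoSerialSender the actual script uses, and the "/hand" OSC address and coordinate layout are assumptions; check what your OSCeleton build actually sends.  The layer-change gesture is left out for brevity.

# Conceptual sketch of gestural printing, not the real gestureprinter.py.
# Assumes OSCeleton is publishing hand positions on its default port 7110;
# the "/hand" address and argument order depend on the OSCeleton build.
import serial
from OSC import OSCServer

printer = serial.Serial("/dev/ttyUSB0", 115200)
extruded = [0.0]  # running extruder position in mm of filament

def on_hand(addr, tags, args, source):
    x, y, z = args[-3:]  # normalized hand position from the Kinect
    bed_x = 20 + x * 160  # map the hand onto a 160mm square of the bed
    bed_y = 20 + y * 160
    if z < 0.5:  # pushing your hand forward extrudes plastic
        extruded[0] += 0.1
        printer.write("G1 X%.1f Y%.1f E%.2f F1800\n" % (bed_x, bed_y, extruded[0]))
    else:  # otherwise just move the head
        printer.write("G1 X%.1f Y%.1f F3000\n" % (bed_x, bed_y))

server = OSCServer(("127.0.0.1", 7110))  # OSCeleton's default port
server.addMsgHandler("/hand", on_hand)
server.serve_forever()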

Gestural Print

FaceCube: Copy Real Life with a Kinect and 3D Printer

Thumbs Up

This project is a tangent off of something cool I’ve been hacking on in small pieces over the last few months.  I probably would not have gone down this tangent had it not been for the recent publication of Fabricate Yourself.  Nothing irks (or rather, inspires) me more than when someone does something cool and then releases only a description and pictures of it.  Thus, I’ve written FaceCube, my own open source take on automatic creation of solid models of real life objects using the libfreenect python wrapper, pygame, NumPy, MeshLab, and OpenSCAD.

The process is currently multi-step, but I hope to have it down to one button press in the future.  First, run facecube.py, which brings up a psychedelic preview image showing the closest 10 cm of stuff to the Kinect.  Use the up and down arrow keys to adjust that distance threshold.  Pressing spacebar toggles pausing capture to make it easier to pick objects.  Click on an object in the preview to segment it out.  Everything else will disappear; clicking elsewhere will clear the choice.  You can still use the arrow keys while it is paused and segmented to adjust the depth of what you want to capture.  You can also use the H and G keys to adjust hole filling to smooth out noise and fill small holes in the object.  If the object is intended to have holes in it, press D to enable donut mode, which leaves the holes open.  Once you are satisfied, you can press P to take a screenshot or S to save the object as a PLY format point cloud.
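
Under the hood, most of this is just NumPy array operations on the Kinect depth map.  Here is a condensed sketch of the thresholding, hole filling, and click-to-segment steps; the function names and parameters are illustrative rather than lifted from facecube.py:

# Condensed sketch of the capture logic; names and defaults are
# illustrative, not copied from facecube.py.
import freenect
import numpy as np
from scipy import ndimage

def grab_mask(window=100, fill=3):
    # Grab a raw 11-bit depth frame and keep only the stuff nearest
    # the Kinect. window is in raw depth units, adjusted up and down
    # like the arrow keys do in facecube.py.
    depth, _ = freenect.sync_get_depth()
    near = depth.min()  # closest reading in the frame
    mask = depth < near + window
    # smooth out speckle noise and fill small holes, as the H and G keys do
    mask = ndimage.binary_closing(mask, iterations=fill)
    mask = ndimage.binary_fill_holes(mask)  # skip this line for donut mode
    return depth, mask

def segment_at(mask, x, y):
    # Keep only the connected blob under a click, like clicking an
    # object in the preview window.
    labels, count = ndimage.label(mask)
    return labels == labels[y, x]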

FaceCube Segmented
Point Cloud

You can then open the PLY file in MeshLab to turn it into a solid STL.  I followed a guide to figure out how to do that and created a filter script, attached below.  To use it, click Filters -> Show current filter script, click Open Script, choose meshing.mlx, and click Apply Script.  You may have to click in the preview, but after a few seconds, it will report that it successfully created a mesh.  You can click Render -> Render Mode -> Flat Lines to see what it looks like.  You can then click File -> Save As, and save it as an STL.  You can probably get better results if you manually pick the right filters for your object, but this script will be enough most of the time.
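
If you plan to do this often, MeshLab also ships with a headless meshlabserver binary that can apply the same filter script without any clicking.  Assuming it is on your PATH, a couple of lines of Python automate the step:

# Apply the meshing.mlx filter script headlessly with meshlabserver
# instead of clicking through the GUI. Assumes meshlabserver is on PATH.
import subprocess

def mesh_point_cloud(ply_in, stl_out, script="meshing.mlx"):
    subprocess.check_call(["meshlabserver", "-i", ply_in,
                           "-o", stl_out, "-s", script])

mesh_point_cloud("face.ply", "face.stl")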

MeshLab
OpenSCAD
Repsnapper

You can then open the STL in OpenSCAD or Blender and scale and modify it to your heart’s (or printer’s) content.  Of course, the real magic comes when you take advantage of all that OpenSCAD has to offer.  Make a copy of yourself frozen in carbonite, put your face on a gear, or make paperweights shaped like your foot.  This is also where the name FaceCube comes from.  My original goal going into this, I think at my roommate’s suggestion, was to create ice cube trays in the shapes of people’s faces.  This is easy in OpenSCAD; it involves just subtracting the face object from a cube:

difference() {
	// the ice cube tray blank
	cube([33,47,17]);
	// scale the scanned face down, flip it, and sink it into the top of the tray
	scale([0.15,0.15,0.15]) translate([85,140,120]) rotate([180,0,0]) import_stl("face.stl");
}

FaceCube Tray

Since all of the cool kids are apparently doing it, I’ve put this stuff into a GitHub repository.  Go ahead and check it out, err… git clone it out.  The facecube.py script requires the libfreenect python wrapper from the unstable branch and recent versions of pygame, numpy, and scipy.  You’ll need any recent version of MeshLab or Blender after that to do the meshing.  I’ve been using this on Ubuntu 10.10, but it should work without much trouble on Windows or OS X.  The latest code will be on git, but if you are averse to it for whatever reason, I’ve attached the script and the meshlab filter script below.  Since Thingiverse is the place for this sort of thing, I’ve also posted it along with some sample objects as thing:6839.

Download:
git clone git@github.com:nrpatel/FaceCube.git

facecube.py
meshing.mlx

AC Powered Heated Build Platform for RepRap

Heated build platform

One of the unpleasant surprises you come across when first learning how to operate a RepRap is that any object longer than an inch or so in any dimension printed in ABS will warp quite a lot as the lower layers cool.  The workaround, other than switching to another plastic, is to print onto a heated build platform.  There are a few varieties available to buy, but I decided to build one out of parts I could get at Halted.  I found a ~2mm thick sheet of aluminum in roughly the correct dimensions with holes predrilled for $2, and a few 50 watt resistors for about $2 each.

Thermistor

My primary design goal was to avoid putting more load on my (fused) RAMPS board and mini-ATX power supply by powering the bed directly off of AC.  Since the bed is a purely resistive load, this is also by far the most efficient way of doing it.  I connected the resistors in series with 16 gauge high temperature teflon insulated wire and JB-Welded them to the sheet.  I also used JB-Weld to mount a thermistor near the middle resistor to get temperature readings.  I then mounted the board on springs above the normal build platform and covered the surface with Kapton tape.  The relay is switched by one of the MOSFETs on the RAMPS board.  A red LED indicates that the relay is powered, and there is a flyback diode across the relay coil.  The Arduino Mega was resetting randomly partway through prints until I added a decoupling capacitor in parallel with the coil as well.

Schematic

Overall, it works well.  With roughly 80 watts of power, it heats up to 110C in around 5 minutes, which is sufficient for ABS.  I managed to print a 150mm long object with no warping.  I’ve also been using it with PLA at 60C.  Right now it poses a mild electrocution hazard sitting on my desk, but I plan on printing out an enclosure for it as soon as I figure out how to use OpenSCAD.
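
Those numbers pass a back-of-envelope sanity check.  Ignoring losses to the air and frame, and guessing at a Mendel-sized plate since I didn’t measure mine, the heat capacity of the aluminum alone predicts a warm-up time in the right ballpark:

# Back-of-envelope warm-up estimate. The plate size is a guess at a
# Mendel-sized bed, and losses to the air and frame are ignored.
side = 21.4         # cm, assumed square plate
thickness = 0.2     # cm, the ~2mm sheet
density = 2.7       # g/cm^3 for aluminum
c = 0.90            # J/(g*K), specific heat of aluminum
power = 80.0        # W
delta_t = 110 - 20  # C, room temperature up to ABS bed temperature

mass = side * side * thickness * density  # ~250 g
energy = mass * c * delta_t               # ~20 kJ
print("%d seconds to reach 110C" % (energy / power))  # ~250 s, close to observed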

Easy Interactive Camera-Projector Homography in Python

Camera and Projector

Math.  It turns out it’s not quite like riding a bike.  A year since college, and two since my last computer vision course, my knowledge of linear algebra is basically nil.  Several projects I’m stewing on are bottlenecked on this.  I decided to relearn some basics and create a tool I’ve wanted for a while: a method to quickly and easily calculate the homography between a camera and a projector.  That is, a transformation that allows you to map points from the camera plane to the display plane.  This opens up a world of possibilities, like virtual whiteboards and interactive displays.

I won’t go into detail about deriving the transformation matrix, as there is better information elsewhere than I could present.  The calculation requires four or more matching point pairs between the two planes.  Finding the points manually is a pain, so I wrote a script that uses Pygame and NumPy to do it interactively.  The script works as follows:

  1. Point an infrared camera at a projector screen, with both connected to the same computer.
  2. Run the script.
  3. Align a lit IR LED with the green X on the projector screen, and press any key.
  4. Repeat step 3 until you have four points (or more, depending on the script mode), at which point,
  5. The script will calculate the homography, print it out, and save it as a NumPy file.
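
For reference, the core calculation is only a few lines of NumPy: a standard direct linear transform solved with SVD.  This is a sketch of the math rather than a copy of what homography.py does:

# Direct linear transform: estimate the 3x3 homography mapping camera
# points to display points from four or more correspondences.
import numpy as np

def find_homography(camera_pts, display_pts):
    # stack two equations per point pair
    rows = []
    for (x, y), (u, v) in zip(camera_pts, display_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the homography is the right singular vector with the smallest
    # singular value, scaled so the bottom right element is 1
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1]
    return (h / h[8]).reshape(3, 3)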

The script in its current form uses any Pygame supported infrared camera.  Today, that likely means a modded PS3 Eye or other webcam, unless you’re lucky enough to have a Point Grey IR camera.  I do not, so I hot glued an IR filter from eBay to the front of the Eye that I may have forgotten to return to CMU’s ECE Department when I graduated.  Floppy disk material and exposed film can also function as IR filters on the cheap; just be sure to pop the visible light filter out of the camera first.

It would be overly optimistic of me to believe there are many people in the world with both the hardware and the desire to run this script.  Luckily, due to the magic of open source software and the modularity of Python, individual classes and methods from the file are potentially useful on their own.  It should also be relatively straightforward to modify it to accept other types of input, like a regular webcam with color tracking or a Wii Remote.  I will add the latter myself if I can find a reasonable Python library to interface with one.

Once you have the transformation matrix, you can simply dot the matrix with a point from the camera to get the corresponding point on the projector.  The blackboard script, demonstrated by my roommate above and downloadable below, shows a use case for this: a drawing app using an IR LED as a sort of spray can.  The meat of it, converting a point from camera to projector coordinates, is basically:

# use homogeneous coordinates
p = numpy.array([point[0],point[1],1])
# convert the point from camera to display coordinates
p = numpy.dot(matrix,p)
# normalize it
point = (p[0]/p[2], p[1]/p[2])

The homography.py script takes the filename of the matrix file to output to as its only argument.  It also has an option, “-l”, that allows you to use four or more points randomly placed around the screen, rather than the four corner points.  This could come in handy if your camera doesn’t cover the entire field of view of the projector.  You can hit the right arrow key to skip points you can’t reach and escape to end and calculate.  The blackboard.py script takes the name of the file the homography script spits out as its only argument.  Both require Pygame 1.9 or newer, as it contains the camera module.  Both are also licensed for use under the ISC license, which is quite permissive.

nrp@Mediabox:~/Desktop$ python homography.py blah
First display point (0, 0)
Point from source (18, 143). Need more points? True
New display point (1920, 0)
Point from source (560, 137). Need more points? True
New display point (0, 1080)
Point from source (32, 446). Need more points? True
New display point (1920, 1080)
Point from source (559, 430). Need more points? False
Saving matrix to blah.npy
 array([[  3.37199729e+00,  -1.55801855e-01,  -3.84162860e+01],
       [  3.78207304e-02,   3.41647264e+00,  -4.89236361e+02],
       [ -6.36755677e-05,  -8.73581448e-05,   1.00000000e+00]])
nrp@Mediabox:~/Desktop$ python blackboard.py blah.npy

Download:
homography.py
blackboard.py

Normal people don’t have these problems

Auxiliary Input

I drive the least interesting car in the world, a gray 2004 Toyota Camry.  To stave off death from boredom while driving, I use a tape adapter connected to an A2DP receiver to wirelessly listen to music from my iPhone.  A few weeks ago, the tape deck developed an extremely irritating clicking noise.  The easy options, the ones that most Camry owners would choose, would be to turn on the radio, get an FM transmitter for the iPhone, listen to CDs, or just let the tape deck click.  Being an engineer, I refuse to pick the easy option, going instead for the one that seems best, which inevitably devolves into a weekend of hacking, cursing, and setting things on fire, with a best case of eventually restoring the object of interest to something resembling a functioning state.

The plan was to enable direct auxiliary input, which the Internet claimed was possible on this model.  The backup plan was to resolve the clicking noise by just unplugging the tape motor, which we assumed was unnecessary to operate the adapter.  My roommate Will and I popped the head unit out of the car with much effort and two trips to the hardware store.  Apparently stealing a radio isn’t easy, even from your own car.

Attempting to power the unit off of the 12 V rail of a 350 W ATX power supply resulted in it shutting off immediately.  With a 450 W PSU connected to the battery and accessory voltage lines, a tiny laptop speaker from my spare parts bin connected to the massive amp, and another speaker being used as an antenna, we managed to pull in the beautiful sound of late ’70s hits on FM radio.

I’m not entirely sure what happened, but Will managed to break the tape deck in about a dozen different ways over the course of the next couple of hours.  It wouldn’t play at all without the drive gears spinning, it refused to eject tapes because it believed it was empty, and still, the clicking gear clicked.

It was then up to me to get aux in working.  Apparently, Toyotas from around my model year use AVC-Lan, a communications bus based on IEBus for the head unit to connect to things like a CD changer.  There is some pretty solid research across the web on how to emulate a device on the bus to message the head unit to use its aux input.  I used the circuit and software from SigmaObjects, as it required only parts I already had or could get from Halted.  The code there is designed for an ATmega8, but with some trial and error, I managed to port it to the current ATmega48/88/168 series.  Unfortunately, the code doesn’t mention being under any permissive license, so I can’t share my modifications.

While hooking it up to the head unit, the wire connecting the device to ground actually burst into flames, burning part of a connector and melting copper.  On later inspection, we found that the wires were made of extremely high gauge strands internally, and it is likely that only a single thin strand was carrying the current in the area that caught fire.  Miraculously, neither the radio nor the ATmega168 was damaged by the incident.  I switched to lower gauge, higher quality wire, checked my car insurance terms relating to explosion due to user error, and continued.

Burnt Wire

After a few more hours of debugging, I realized that I had the two data lines backwards, and after switching them it worked instantly.  We repurposed the TAPE button to switch to aux in.  I taped the board down, put the head unit back together, and we stuck it back into the car.  There was an unnerving startup delay the first few times, but I am now the proud owner of a gray 2004 Toyota Camry with auxiliary audio input… and no tape deck.