Danger Room ran this robotic packmule video last year, commenting mainly on the engineering. The thing is a wonder: when it’s given a bump or slips on the ice, it reacts incredibly fluidly to stay on its feet.
As amazing as that is, however, I find myself even more agog at how we feel (how I feel, anyway) as we watch it scramble to stay upright.
This doubtless happens because the engineers, obviously taking their mechanics from real animals, programmed reactive motions that make the thing look incredibly lifelike. In fact, this beast never seems more lifelike than when it’s struggling. Watch, for instance, when the engineer gives the mule a kick (at 0:40) and then when it slips on ice (at 0:53).
Even more than its lifelike anatomy (setting aside its lack of a head, as it were), it is these movements that lead us to see this machine as lifelike. The guy kicks it and you feel sorry for the thing, not just because he kicks it but because of the way it scrambles. (If it just fell over, you’d just laugh.) And when the mule hits that ice and scrambles, I find myself really pulling for it to stay upright. But of course there’s no beast here to feel sorry for or pull for. It’s a machine. Yet its flailing, so familiar to anyone who has seen human or beast struggle not to fall, instantly evokes sympathy. We all understand falling to be not just humiliating but dangerous, even mortally so. So we pull for this thing.
Meanwhile, the movements of this humanoid robot actually make it more alien, at least to my eye:
Its humanlike anatomy makes it simultaneously more interesting and more threatening; when the guy pushes it, I don’t so much care. I want to catch the mule but step away from this headless humanoid.
Is this because it’s safer to empathize with a packmule?
I’m not sure what to make of this, other than that lifelike movement exerts an incredible power over our conception of the thing moving and over our capacity for sympathy for it. I suspect robotics researchers haven’t studied the effect of lifelike movement in robots as much as they have the power of realistic facial expressions. (If you know, please chime in in the comments.) There’s good reason for that, of course: facial expressions tell us a world about someone. If a robot can get facial expressions right, it will seem pretty lifelike. Yet I suspect a robot with lifelike facial movements will seem creepy if its body movements are off. Or maybe such robots will just seem creepy until they’re completely lifelike, at which point they’ll go from creepy to terrifying, like Ahnold.
I’d hate to leave facial expressions out of this, so I’ll close with a pointer to a rather eerie video of Jules, a robotic head at the Bristol Robotics Laboratory (UK), imitating the facial expressions of an actress. (I’m not able to pull the vid over, so you have to go here and then scroll down a bit to see it.) It’s hard to watch this without feeling, against your better knowledge, that Jules understands the feelings the actress is trying to convey.
I find Jules pretty creepy. I like the packmule better.