Just for reference, the “Uncanny Valley” is not some cute comment on life in Silicon Valley – it’s a popular concept in computer animation that refers to the challenges inherent in trying to produce “realistic” computer animation of human characters. I wrote a blog post on the concept back in 2006:
Playstation 3, Uncanny Valley & Product Design
The Uncanny Valley is a theory borrowed from robotics: when something is clearly non-human, like a puppy or a teddy bear, people will anthropomorphize it and enjoy its “human-like” qualities. However, when something comes very close to human likeness without quite achieving it, like a humanoid robot, people start to dislike it strongly as they fixate on some key missing detail. Think of the uneasy feeling around corpses, zombies, or prosthetics.
Well, much to the potential delight of 30 Rock fans, we may be closer to crossing that valley than we thought.
Meet Emily.
Emily is not real. She is computer animated, leveraging new techniques for incorporating involuntary eye movement and other incredibly subtle cues from a real actress to generate a realistic effect. She still comes across as a bit stiff, but not in an unnatural way.
Here is an explanation from the original article in The Times (UK):
Researchers at a Californian company which makes computer-generated imagery for Hollywood films started with a video of an employee talking. They then broke the facial movements down into dozens of smaller movements, each of which was given a ‘control system’.
The team at Image Metrics – which produced the animation for the Grand Theft Auto computer game – then recreated the gestures, movement by movement, in a model. The aim was to overcome the traditional difficulties of animating a human face, for instance that the skin looks too shiny, or that the movements are too symmetrical.
“Ninety per cent of the work is convincing people that the eyes are real,” Mike Starkenburg, chief operating officer of Image Metrics, said.
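The “control system” decomposition described above resembles what animators call blendshape rigging: store the face as a neutral pose plus a small per-vertex offset for each control, then blend weighted offsets per frame. Here is a minimal toy sketch of that idea in Python; the control names and geometry are invented for illustration and are not Image Metrics’ actual pipeline.

```python
# Toy blendshape-style rig: a neutral pose plus per-control vertex deltas.
# A frame of animation is just one weight per control.

NEUTRAL = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]  # tiny 2D stand-in "face"

# Each control maps to per-vertex offsets; names here are hypothetical.
CONTROLS = {
    "brow_raise":   [(0.0, 0.0), (0.0, 0.0), (0.0, 0.2)],
    "mouth_corner": [(0.1, 0.05), (-0.1, 0.05), (0.0, 0.0)],
}

def pose(weights):
    """Blend weighted control deltas onto the neutral pose."""
    out = []
    for i, (x, y) in enumerate(NEUTRAL):
        for name, w in weights.items():
            dx, dy = CONTROLS[name][i]
            x += w * dx
            y += w * dy
        out.append((round(x, 3), round(y, 3)))
    return out

# One animation frame: half a brow raise, a full mouth-corner pull.
frame = pose({"brow_raise": 0.5, "mouth_corner": 1.0})
print(frame)  # -> [(0.1, 0.05), (0.9, 0.05), (0.5, 1.1)]
```

The appeal of this decomposition is that each captured facial gesture can be re-expressed as a handful of weight curves over time, rather than raw geometry, which is what makes the movements editable movement by movement.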
If the historical pace of innovation in this area is any indication, we are likely less than three years away from seeing this technique used in a mass-market short medium (commercial, animated short, small film segment) and within five years of seeing it used in a long medium (video game, television show, full-length feature).
Amazing.
On a related note, this concept of more intensive real-life motion capture driving computer animation seems to be taking hold aggressively in commercial entertainment as well. My son’s new favorite show is Sid the Science Kid, a new innovation from the wizards at The Jim Henson Company. It uses real-time motion capture of a live actor to generate a computer-animated show that can be produced in real time. A fascinating blend of puppetry techniques and computer animation makes it possible, and the result is a computer-animated character who displays realistic faults and behaviors on screen. Here is the Muppet Wikia entry on the show.
It stands to reason that as more performances are captured, and computational storage and processing power increase, it will be relatively trivial to assemble a library of realistic behaviors and actions that will generate truly realistic, but completely artificial, performances.
Neat stuff. I’ll have to ask the kids about Sid (they spend too much time watching PBS Kids).
Here’s a link to a motion capture dance animation I saw recently at Dartmouth. The normal version appears to be on their borked QuickTime Streaming Server, so the link is only to the 3gp version.
If the motion capture is of high enough resolution, re-rendering the performances with better techniques or alternate characters ought to provide ongoing fun for quite a while. I wonder if they’re going to extend copyright law to cover motion capture data sets… Anyway, don’t tell George Lucas.