Tuesday, February 19, 2008
Unreal Engine 3 Motion Capture by Mova
Mova Contour Reality Capture in Unreal Engine 3 from Joystiq on Vimeo.
Traditional, point-based motion capture (the kind brought to you by guys in black suits with reflective balls) has been great for developers who want to capture basic skeletal motion for their in-game characters. But for realistic facial work, even setups with hundreds of reflective dots leave developers with rough, blocky data that requires a lot of post-production work before it even starts to approach the uncanny valley.
Enter motion capture company Mova, whose Contour Reality Capture system uses an array of cameras to create 100,000-polygon facial models accurate to within a tenth of a millimeter -- no special reflective balls required. At this year's GDC, the company is trying to attract the game industry's attention by unveiling examples of its facial modeling running in real time on the popular Unreal Engine 3. Continue reading for exclusive, real-time video of the technology and excerpts from an interview with Mova founder Steve Perlman.
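To get a rough sense of the gap in data density between the two approaches, here's a quick back-of-the-envelope comparison. Only the 100,000-polygon figure comes from Mova; the marker count, vertex count, and frame rate below are illustrative assumptions.

```python
# Back-of-the-envelope comparison of per-frame animation data:
# a marker-based facial rig versus a dense Contour-style mesh.
# Marker count, vertex count, and frame rate are assumptions for illustration;
# only the 100,000-polygon figure comes from Mova's description.

BYTES_PER_POINT = 3 * 4        # x, y, z stored as 32-bit floats

markers = 200                  # assumed "hundreds of reflective dots"
mesh_vertices = 50_000         # assumed vertex count for a ~100,000-polygon mesh
fps = 30                       # assumed capture/playback rate

marker_rate = markers * BYTES_PER_POINT * fps        # bytes per second
mesh_rate = mesh_vertices * BYTES_PER_POINT * fps    # bytes per second

print(f"Marker rig: {marker_rate / 1e3:.0f} KB/s of positional data")
print(f"Dense mesh: {mesh_rate / 1e6:.1f} MB/s of positional data")
print(f"Roughly {mesh_rate / marker_rate:.0f}x more raw data per second")
```

Under these assumptions the dense mesh carries a few hundred times more positional data per second than a marker rig, which is the scale of the difference Perlman is pointing at.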
"This pushes Unreal Engine 3 to its very limit ... it's about as photo-real as you can get in real time."
So says Steve Perlman, the man behind Apple's QuickTime, Microsoft's WebTV and the Mova Contour Reality Capture system that created the above real-time video, shown running on dual NVIDIA 8800 GTXs in SLI. Frankly, we find it pretty hard to disagree with his assessment. From the little twitches of the eyelids to the extreme curl of the mouth to the weathered wrinkles in the face, you'd be hard-pressed to find a more realistic facial image running on any real-time system. Perlman says the company has been working privately with developers for some time to adapt the system for video game use.
"People have never had this kind of data available before in a game context ... their heads are spinning," he said. "What you're seeing right there is the result of, having time to wrap our heads around this thing and see how we're going to use it, and yes, we can in fact get a face that looks almost photo-real -- you know, not quite, but almost photo-real -- running in a game engine today."
Believe it or not, though, the Contour system can create even more detailed animation when processing time isn't an issue. Check out the video below, which shows how Reality Capture data can look when pre-rendered for a movie or cutscene.
"You can see the difference then between what's achievable in cinema and what's achievable right now in video games," Perlman says. "But next generation game machines, they'll be able to essentially show in real time what we can do currently in non-real-time using renderers. ... Next generation, you're going to have interactive sequences where people think there's a live person in the game."
Perlman says the cost of a Contour motion-capture session isn't appreciably higher than that of a traditional marker-based capture session: anywhere from a few thousand to a few hundred thousand dollars, depending on the length and complexity of the shoot. The real savings, he says, come in post-production. "Unlike marker-based capture, which has a big manual clean-up process before you see results, with Contour it's purely computational," Perlman said. "We've talked to people, and one of the reasons they announce delays for complex games is that they're fighting to try and make the faces look good. With Contour, you send the guy in, he does a shoot, and we send you a face that looks nearly perfect. It's no longer one of the risk issues for your schedule."
The Contour system generates so much data, Perlman says, that the full value of the captured performance won't be apparent until hardware speeds improve. "With markers, you kind of get the resolution of what those markers are and that's it," Perlman says. "When a next-generation game system comes out, or they decide they want to do something for a feature film, you can't really use the data. With Contour, it's actually capturing the data at much higher resolutions than any system in the world, even for feature films, can currently use. What we do is we store that data away ... and when a next generation video game machine comes out and they want the data at higher resolution, they can."
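Perlman doesn't say how lower-resolution versions get derived from that archived data, but the general idea of down-sampling a dense capture to fit today's hardware can be sketched with a generic vertex-clustering decimation. Everything here -- the function name, the cell size, the vertex counts -- is an illustrative assumption, not Mova's actual pipeline, and face re-indexing is omitted for brevity.

```python
import numpy as np

def decimate_by_clustering(vertices: np.ndarray, cell_size: float) -> np.ndarray:
    """Collapse vertices that fall into the same spatial grid cell.

    A generic, illustrative stand-in for deriving a lower-resolution
    mesh from archived full-resolution capture data. vertices is an
    (N, 3) array of x, y, z positions.
    """
    # Assign each vertex to a grid cell.
    cells = np.floor(vertices / cell_size).astype(np.int64)
    # Map every vertex to its occupied cell, then keep one
    # representative point (the centroid) per cell.
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, vertices)
    return sums / counts[:, None]

# Example: a dense capture frame reduced for current hardware, while the
# original full-resolution frame stays archived for future re-processing.
full_res_frame = np.random.rand(50_000, 3) * 200.0   # placeholder positions, in millimeters
game_res_frame = decimate_by_clustering(full_res_frame, cell_size=2.0)
print(full_res_frame.shape, "->", game_res_frame.shape)
```

The point of the sketch is the workflow Perlman describes: the archive keeps the full-resolution frames, and coarser versions are generated on demand as target hardware improves.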
Perlman couldn't reveal which companies are currently using this technology, but said he expects the first games with Contour captures could come out in 2008, depending on developer schedules. The system should be in "wide use" around the game industry by 2009, he said.