By Michael Mullaney
It was the premier international soccer tournament that inspired Richard Radke to try to help machines see the world in the same way that people do.
At the time a graduate student at Princeton University, Radke worked with his adviser to develop new technology that would allow Japanese television producers airing the 2002 FIFA World Cup to combine several photos and video streams on the fly, enabling viewers to experience the game from previously unimagined and seemingly impossible perspectives.
The project, a collaboration with IBM Corp.’s Tokyo Research Laboratory, was the first foray into the field of machine vision for Radke, who today is an associate professor in the Department of Electrical, Computer, and Systems Engineering. Even though he wasn’t much of a soccer fan, something clicked. “I don’t know if they ever ended up broadcasting any of the stuff we did, but in the end I wasn’t so worried about it because the experience opened my eyes to a new field of research,” Radke says.
In the rapidly expanding field of computer vision, Richard Radke is scanning environments, and the human body, to help machines perceive images in the same way people do.
At 33, Radke is a rising star in the rapidly expanding field of machine vision. But his research defies easy categorization.
At its core, machine vision is a vast puzzle of algorithms and computer code. The end results, however, are more easily understood, as are their real-world applications. From algorithms to help treat breast and prostate cancers, to intelligence-gathering tools for battlefields and disaster zones, to building-sized laser scans of the Rensselaer campus, Radke’s work holds the potential to impact, and even shape, the future of how humans utilize and interact with technology.
After the World Cup project, Radke went on to earn his doctorate in electrical engineering from Princeton. He joined Rensselaer as an assistant professor in summer 2001 and, intrigued by the notion of using a network of cameras to gain new glimpses into the world, Radke dedicated himself to developing a new framework for distributed computer vision.
“If you had a bunch of cameras that were dropped from a helicopter over a terrain like a battlefield or a disaster area, how can you get these cameras to talk with each other and solve computer vision problems, assuming you don’t have any central computer to collect all of the data?” he says. “People had been doing computer vision on very powerful computers, but not in terms of embedded devices, where you have tens, hundreds, or even thousands of cameras that are each attached to their own tiny computer.”
Traditional computer vision methods generally assume a small, fixed number of stationary cameras with signals that can be simultaneously processed by a central computer. Developing new methods that could allow a network of cameras randomly dispersed throughout an environment to autonomously collect data and shape it into coherent processed information would be significantly more challenging.
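The core difficulty Radke describes, getting scattered cameras to agree on a shared answer with no central computer, is often illustrated with consensus averaging: each node repeatedly nudges its own estimate toward its neighbors' values until the whole network converges. The sketch below is a minimal, illustrative version of that idea, not Radke's actual algorithm; the topology, measurements, and step size are all assumptions chosen for the example.

```python
# Hypothetical camera network: four nodes in a chain, each holding one
# local measurement (e.g., an estimate of some shared scene quantity).
measurements = {0: 4.0, 1: 8.0, 2: 6.0, 3: 2.0}
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def consensus_average(values, neighbors, step=0.3, iterations=200):
    """Synchronous gossip iterations: each node updates using only its
    neighbors' current values -- no central computer ever sees all data."""
    x = dict(values)
    for _ in range(iterations):
        updates = {}
        for node, xi in x.items():
            # Purely local communication: node sees only adjacent cameras.
            updates[node] = xi + step * sum(x[j] - xi for j in neighbors[node])
        x = updates
    return x

result = consensus_average(measurements, neighbors)
# Every node converges toward the network-wide mean of the measurements,
# even though no single camera ever collects all four readings.
```

The step size must be small enough for the iteration to remain stable on the given topology; with only local exchanges, every node still ends up agreeing on the global average.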