Rensselaer’s cognitive science researchers model human cognition to better understand, and enhance, human intelligence and performance. They use eye-gaze recording systems, video and audio recording systems, and powerful computers to monitor volunteers who undertake such tasks as programming a simulated VCR. The researchers collect data on eye movements, keystrokes, and other actions and use this information to build simulated human users. They seek both to learn more about human thought processes and to create synthetic thinking systems for a variety of purposes.
Under a contract with the Office of Naval Research (ONR), Wayne Gray, director of Rensselaer’s CogWorks Laboratory, and Research Assistant Professor Michael Schoelles lead a team that is trying to resolve the “paradox of the active user.” Gray explains that users of a particular piece of electronic equipment may be operating in an environment in which they must deal with 19 other devices. A software developer may write a manual outlining the best way to use a specific tool, such as a calendar on a cell phone, but users who already have a calendar on their computers will value completing the task more than reading the manual. They may enter dates in the way they already know, even if it is not the most efficient approach.
Ron Sun uses both psychological experiments with human subjects and computational simulation and modeling to understand human psychology. He is particularly interested in human skill learning, ranging from highly intellectual to sensorimotor tasks, and he is working to develop more unified cognitive architectures that center on learning.

Cognitive Engineering: “Milliseconds Matter”
The Cognitive Science Department is positioned directly between theoretical work and applied problems. Knowledge of cognitive systems contributes to solving real problems, while knowledge gained from solving the problems constantly tests and improves the theories. Many of the research programs in the department involve cognitive engineering, designing smooth interfaces between humans and machines.
Gray, for example, has put his models to work in a number of practical studies, from locating enemy submarines hiding in deep water to developing intelligent tutors. Under a contract with the Air Force Office of Scientific Research, he is studying how radar operators decide which blips on the screen pose the most serious threats.
One side of a computer screen displays the numbered blips. On the other side, the test subjects can call up boxes with information on specific blips. Creating and reading the boxes takes time, so operators subconsciously make least-effort tradeoffs. With the motto “milliseconds matter,” Gray has shown that knowledge-acquisition costs of just a few hundred milliseconds can lead people to rely on error-prone memory rather than shift their attention and eyes to a nearby position on the screen.
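The tradeoff Gray describes can be pictured as a simple comparison of expected time costs: looking at the screen is reliable but slow, while memory retrieval is fast but sometimes wrong and must be redone. The following is a minimal sketch of that idea, not Gray's actual model; all timing values and the error probability are hypothetical numbers chosen for illustration.

```python
# Illustrative least-effort tradeoff: a model user picks whichever
# strategy has the lower expected time cost. All numbers below are
# hypothetical, for illustration only.

def expected_cost(access_ms, error_prob=0.0, redo_ms=0.0):
    """Expected time for one strategy: the base access time plus the
    expected extra time spent recovering from errors."""
    return access_ms + error_prob * redo_ms

# Strategy 1: shift eyes to the information box and read it (reliable,
# but the perceptual-motor access costs a few hundred milliseconds).
look_cost = expected_cost(access_ms=500)

# Strategy 2: retrieve the value from memory (fast, but error-prone;
# an error forces a costly correction later).
memory_cost = expected_cost(access_ms=150, error_prob=0.2, redo_ms=1200)

preferred = "memory" if memory_cost < look_cost else "look"
print(f"look: {look_cost:.0f} ms, memory: {memory_cost:.0f} ms -> {preferred}")
```

With these particular numbers the error-prone memory strategy still wins on expected time, which mirrors the behavior Gray observed: small perceptual access costs tip people toward unreliable memory.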
Gray and Qiang Ji, associate professor of electrical, computer, and systems engineering, are funded by the ONR to develop a system that can offer extra help to humans when they need it. Ji engineers highly sophisticated computer-imaging systems that can study a human face and analyze the size and shape of such features as the eyes and the mouth to warn, for example, that the human is too tired to continue driving. In this project, Ji is developing methods to warn when the human at the computer is confused, and Gray will determine at what point in the reasoning process the problem has occurred. They will then develop AI tools to intervene by such actions as changing the software environment or highlighting pertinent data on the screen.
Brett Fajen has support from the National Science Foundation to study visual perception and the control of braking. As part of this work, he places human subjects in front of a projection screen and monitors their use of a joystick as they react to images such as stop signs. His goals are both to find methods to improve traffic safety and to better understand how people learn to improve their performance.
A Long, Long Road
There are currently some robots that might be compared to lower-level animals such as insects or rodents, Cassimatis says. Some versions, for example, can move well through an environment, replicating the motions of grasshoppers. If there is a sudden, unexpected change in the environment, however, the grasshopper will adapt and outperform the robot.
But such physical maneuvers are not what distinguishes humans, he adds. Humans can use cognitive models to achieve the results they want. If a man wants to jump farther, he can figure out how to train to improve his performance. That is far beyond what a grasshopper can do, let alone a robot. As for conversation, that is a very difficult application, he says.
Cassimatis compares the development of robots that can converse naturally to a 100-mile foot race. “We’re a lot farther along than we were 10 or 20 years ago,” he says. “We’ve covered 15 miles, which was hard, but we still have 85 to go.”
Every day in their laboratories, Rensselaer researchers are making strides down that long, long road.
Opinions expressed in these pages do not necessarily reflect the views of the editors or the policies of the Institute.
© 2005 Rensselaer Polytechnic Institute. All rights reserved worldwide.
Rensselaer Polytechnic Institute (RPI), 110 8th St., Troy, NY 12180. (518) 276-6000