Rensselaer Research Review
Computational Cognitive Modeling

Rensselaer’s cognitive science researchers model human cognition to better understand, and enhance, human intelligence and performance. They use eye-gaze recording systems, video and audio recording systems, and powerful computers to monitor volunteers who undertake such tasks as programming a simulated VCR. The researchers collect data on eye movements, keystrokes, and other actions and use this information to build simulated human users. They seek both to learn more about human thought processes and to create synthetic thinking systems for a variety of purposes.

Under a contract with the Office of Naval Research, Gray, director of Rensselaer’s CogWorks Laboratory, and Research Assistant Professor Michael Schoelles lead a team that is trying to resolve the “paradox of the active user.” Gray explains that users of a particular piece of electronic equipment may be operating in an environment in which they must deal with 19 other devices. A software developer may write a manual outlining the best way to use a specific tool, such as the calendar on a cell phone, but users who already have a calendar on their computers value completing the task more than reading the manual. They may enter dates into the new calendar the way they already know, even if it is not the most efficient approach.

Ron Sun combines psychological experiments with human subjects and computational modeling and simulation to understand human psychology. He is particularly interested in human skill learning, from highly intellectual tasks to sensorimotor tasks, and he is working to develop more unified cognitive architectures centered on learning.

In work supported by the Army Research Institute (ARI), he has built and continues to develop CLARION (Connectionist Learning with Adaptive Rule Induction On-line), a cognitive architecture that captures a variety of cognitive processes in a unified way. It takes a multi-level approach, modeling multiple-agent interactions, individual cognitive agents, and the components of those agents.
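
To give a flavor of what a two-level architecture of this kind involves, the sketch below pairs an implicit, reinforcement-driven level with an explicit rule level that is filled in “bottom-up” from reliably successful actions. It is a minimal illustration only; the class, parameters, and thresholds are assumptions made for this example and do not reproduce CLARION’s actual design.

```python
# Minimal sketch of a two-level (implicit + explicit) agent, loosely inspired
# by dual-level architectures such as CLARION. All names and numbers here are
# illustrative assumptions, not the real CLARION implementation.
import random
from collections import defaultdict

class TwoLevelAgent:
    def __init__(self, actions, lr=0.1, epsilon=0.1):
        self.actions = actions
        self.lr = lr                 # learning rate for the implicit level
        self.epsilon = epsilon       # exploration probability
        self.q = defaultdict(float)  # implicit level: (state, action) -> learned value
        self.rules = {}              # explicit level: state -> action rules

    def choose(self, state):
        # Prefer an explicit rule when one matches the current state.
        if state in self.rules:
            return self.rules[state]
        # Otherwise fall back to the implicit level (epsilon-greedy choice).
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward):
        # Implicit level: simple incremental value update from reinforcement.
        self.q[(state, action)] += self.lr * (reward - self.q[(state, action)])
        # Bottom-up extraction: promote a reliably rewarded action to an explicit rule.
        if reward > 0 and self.q[(state, action)] > 0.5:
            self.rules[state] = action

# Toy usage: the agent gradually learns that action "B" pays off in state "s1"
# and then captures that knowledge as an explicit rule.
agent = TwoLevelAgent(actions=["A", "B"])
for _ in range(50):
    a = agent.choose("s1")
    agent.learn("s1", a, reward=1.0 if a == "B" else 0.0)
print(agent.rules)  # typically {'s1': 'B'} once the rule has been extracted
```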

Cognitive Engineering: “Milliseconds Matter”

The Cognitive Science Department sits squarely between theoretical work and applied problems. Knowledge of cognitive systems contributes to solving real problems, while knowledge gained from solving those problems constantly tests and improves the theories. Many of the research programs in the department involve cognitive engineering: designing smooth interfaces between humans and machines.

Gray, for example, has put his models to work in a number of practical studies, including efforts to locate enemy submarines hiding in deep water and to develop intelligent tutors. Under a contract with the Air Force Office of Scientific Research, he is studying how radar operators decide which blips on the screen pose the most serious threats.

One side of a computer screen displays the numbered blips. On the other side, the test subjects can call up boxes with information on specific blips. Creating and reading the boxes takes time, and operators subconsciously make least-effort tradeoffs. With the motto “milliseconds matter,” Gray has shown that knowledge acquisition costs measured in hundreds of milliseconds can lead people to rely on error-prone memory rather than shift their eyes and attention to a nearby spot on the screen.
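
As a purely illustrative aside, the tradeoff can be framed as an expected-time calculation. The numbers in the sketch below are made up for illustration, not drawn from these experiments; they show how a few hundred milliseconds of perceived savings can tilt people toward error-prone memory.

```python
# Back-of-the-envelope comparison of "look again" vs. "trust memory."
# All values are assumed for illustration only.
look_cost_ms = 400         # assumed cost of shifting gaze to the info box and reading it
recall_cost_ms = 150       # assumed cost of retrieving the value from memory
recall_error_rate = 0.2    # assumed chance the remembered value is wrong
error_penalty_ms = 2000    # assumed cost of acting on a wrong value and recovering

expected_recall_ms = recall_cost_ms + recall_error_rate * error_penalty_ms
print(f"look: {look_cost_ms} ms, recall (expected): {expected_recall_ms:.0f} ms")
# Recalling feels cheaper in the moment (150 ms vs. 400 ms), yet its expected
# cost once errors are included (550 ms here) is higher -- the immediate
# savings of a few hundred milliseconds is what pulls people toward memory.
```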

To combat terrorism, intelligence agencies must come up with newer, faster, and more efficient ways of extracting vital information from the vast amount of digital data that is available on the Web and in specialized networks. Gray and Bringsjord are part of a federally funded team that is working on this problem. Much of Rensselaer’s role is to create cognitive models, or “simulated cyborgs,” to rapidly evaluate different combinations of next-generation technological tools. The cyborgs will show what types of technology work best for a given assignment.

Gray and Qiang Ji, assistant professor of electrical, computer, and systems engineering, are funded by the ONR to develop a system that can offer extra help to humans when they need it. Ji engineers highly sophisticated computer-imaging systems that can study a human face and analyze the size and shape of such features as the eyes and the mouth to warn, for example, that the human is too tired to continue driving. In this project, Ji is developing methods to warn when the human at the computer is confused, and Gray will determine at what point in the reasoning process the problem has occurred. They will then develop AI tools to intervene by such actions as changing the software environment or highlighting pertinent data on the screen.

Screenshot from one of the simulated environments used in the driving lab.

Image from the virtual reality lab shows one of the virtual environments that Rensselaer cognitive science researchers are using to study interceptive action.

Fajen has support from the National Science Foundation to study visual perception and the control of braking. As part of this work, he places human subjects in front of a projection screen and monitors their use of a joystick as they react to images such as stop signs. His goals are both to find methods to improve traffic safety and to better understand how people learn to improve their performance.

Michael Kalsher, associate professor of cognitive science, studies issues that affect the effectiveness of warning labels. A member of the American National Standards Institute committee that sets the rules for these warnings, he says warnings must be understandable to the target audience, and they have to communicate a strong message that people will remember and be motivated to obey. Kalsher also collaborates with scientists at the Army Research Laboratory at Aberdeen on the use of laptops and other technology to improve communication in noisy environments such as tanks and helicopters, where seconds can make the difference between life and death.

A Long, Long Road

Bringsjord says that despite all the work being done, researchers are still a long, long way from building artificial systems that can think, converse, and act like humans.

There are currently some robots that might be compared to lower-level animals such as insects or rodents, he says. Some versions, for example, can move well through an environment, replicating the motions of grasshoppers. If there is a sudden, unexpected change in the environment, however, the grasshopper will adapt and outperform the robot.

But such physical maneuvers are not what distinguishes humans, he adds. Humans can use cognitive models to achieve the results they want. If a man wants to jump farther, he can figure out how to train to improve his performance. That is far beyond what a grasshopper can do, let alone a robot. As for conversation, that is a very difficult application, he says.

Cassimatis compares the development of robots that can converse naturally to a 100-mile foot race. “We’re a lot farther along than we were 10 or 20 years ago,” he says. “We’ve covered 15 miles, which was hard, but we still have 85 to go.”

Every day in their laboratories, Rensselaer researchers are making strides down that long, long road.

Related Web site:

Rensselaer Department of Cognitive Science

Additional Information:

New Ph.D. Program in Cognitive Science at Rensselaer Polytechnic Institute

NSF Grant Supports Braking Research

RPI Opens New Social and Behavioral Research Laboratory in Downtown Troy

