Making A.I. Systems that See the World as Humans Do
2017-01-30 10:59:56

A Northwestern University team has developed a new computational model that performs at human levels on a standard intelligence test, the Raven's Progressive Matrices. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.

“The model performs in the 75th percentile for American adults, making it better than average,” said Northwestern Engineering’s Ken Forbus. “The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition.”

According to dailytech.info, the new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus' laboratory. The platform can solve visual problems and understand sketches, giving immediate, interactive feedback. CogSketch also incorporates a computational model of analogy based on Northwestern psychology professor Dedre Gentner's structure-mapping theory.
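
Gentner's structure-mapping theory holds that analogy works by aligning the relational structure of two descriptions rather than their surface features. The toy Python sketch below illustrates that core idea on hand-made relational facts; the scene descriptions, the best_mapping helper, and the brute-force search are illustrative assumptions and are not part of CogSketch or the Structure-Mapping Engine itself.

```python
from itertools import permutations

# Toy relational descriptions: each fact is (relation, arg1, arg2).
# The solar-system / atom scenes are illustrative assumptions, not CogSketch data.
BASE = [
    ("attracts", "sun", "planet"),
    ("more_massive", "sun", "planet"),
    ("revolves_around", "planet", "sun"),
]
TARGET = [
    ("attracts", "nucleus", "electron"),
    ("more_massive", "nucleus", "electron"),
    ("revolves_around", "electron", "nucleus"),
]


def entities(facts):
    """Collect the distinct entities mentioned in a list of relational facts."""
    return sorted({arg for _, a, b in facts for arg in (a, b)})


def match_count(mapping, base, target):
    """Count base facts that land on an identical relation in the target
    once their arguments are translated through the candidate mapping."""
    target_set = set(target)
    return sum(
        (rel, mapping[a], mapping[b]) in target_set
        for rel, a, b in base
    )


def best_mapping(base, target):
    """Brute-force search for the one-to-one entity correspondence that
    aligns the most relations -- a stand-in for structure mapping's
    preference for consistent, systematic relational alignments."""
    base_ents, target_ents = entities(base), entities(target)
    best, best_score = {}, -1
    for perm in permutations(target_ents, len(base_ents)):
        mapping = dict(zip(base_ents, perm))
        score = match_count(mapping, base, target)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score


if __name__ == "__main__":
    mapping, aligned = best_mapping(BASE, TARGET)
    print(f"Best correspondence: {mapping} ({aligned} relations aligned)")
    # Expected output: sun -> nucleus, planet -> electron, 3 relations aligned.
```

CogSketch and the Structure-Mapping Engine work over far richer representations and use systematicity-guided matching rather than exhaustive search; the sketch only conveys the basic alignment idea.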

“Most artificial intelligence research today concerning vision focuses on recognition, or labeling what is in a scene rather than reasoning about it,” Forbus said. “But recognition is only useful if it supports subsequent reasoning. Our research provides an important step toward understanding visual reasoning more broadly.”
