Virtual Reality Helps Scientists Read Robots’ Minds, Here’s How



photo courtesy of Melanie Gonick, MIT

A few weeks ago, GFR reported on a robot that had trouble figuring out how to “save” robots representing humans during a study. The automaton was often unable to decide which of two human-bots to save, leaving it stymied in a state of paralysis and resulting in the “death” of both. While it was clear that the robot was having an Asimovian breakdown because it couldn’t save everyone, researchers couldn’t tell what its reasoning was, or how exactly the programming functioned (or didn’t, as the case may be). But now, thanks to another advancement at MIT, we may be able to read robots’ minds, or at the very least gain some insight into their intentions.

The scientists used a simpler task than the one that stymied the robot before. This time, instead of saving a human, the robot only had to reach the other side of the room without crashing into a “pedestrian.” What the robot has to “think” about, then, is the best route: the one that minimizes encounters with the pedestrian while still getting it across the room as quickly as possible. Thanks to a new visualization system, called “measurable virtual reality” (MVR) by its creators, scientists can see the robot’s “thoughts,” or at least its decision process.
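For readers curious what that kind of trade-off looks like in practice, here is a minimal sketch of the idea, not the MIT system itself: a robot planning a route across a grid-shaped room, where each step has a base cost and cells near the pedestrian cost extra. The cheapest route then balances speed against keeping its distance. All names, weights, and the grid setup are illustrative assumptions.

```python
import heapq

def plan_route(width, height, start, goal, pedestrian, proximity_penalty=5.0):
    """Find the cheapest route from start to goal on a width x height grid.

    Illustrative sketch only: uses Dijkstra's algorithm with a cost of 1
    per step, plus an extra penalty for cells within two squares of the
    pedestrian, so the planner weighs avoidance against a quick crossing.
    """
    def cell_cost(cell):
        # Chebyshev distance to the pedestrian; closer cells cost more.
        dist = max(abs(cell[0] - pedestrian[0]), abs(cell[1] - pedestrian[1]))
        return 1.0 + (proximity_penalty / (1 + dist) if dist <= 2 else 0.0)

    frontier = [(0.0, start, [start])]  # (total cost, cell, path so far)
    seen = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return cost, path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in seen:
                heapq.heappush(frontier, (cost + cell_cost(nxt), nxt, path + [nxt]))
    return None  # no route exists

# Cross a 10 x 5 room; a high penalty makes the planner detour around
# the pedestrian rather than barrel straight through.
cost, path = plan_route(10, 5, start=(0, 2), goal=(9, 2), pedestrian=(5, 2))
```

With the penalty set high, the cheapest path bends around the pedestrian; drop it to zero and the planner reverts to the straight-line dash across the room.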