Remote-Controlled Space Robots Will Work Better When They Can See The Future

By Joelle Renstrom | Updated


While remote-controlled robots are awesome and can do loads of tasks that are too difficult and dangerous for humans, there’s still the problem of time delay. The farther away the robot, the longer the lag, which means that space rovers are affected most of all. By the time a warning message travels through space, one of our precious rovers could conceivably be at the bottom of a crater, or a probe might be nestled in the arms of an alien (okay, that would be pretty cool). To address these concerns, the good folks at NASA’s Jet Propulsion Laboratory have been working on a predictive system that will make robot space exploration faster and safer.

Even though a signal can reach Mars in as little as 11 minutes, when scientists try to communicate with the Curiosity rover, the round-trip lag is closer to 40 minutes. So scientists send commands in batches, kind of like opting to receive a daily digest of all the emails sent by members of a Google Group instead of each individual message. The commands are sequential: the rover performs one, then waits for the next set of instructions before executing another. The pace at which these rovers can respond is excruciatingly slow. It might take days to perform movements that would take only minutes if a human operator were in the same room. The general feeling is that if slow is safe, then slow it will be.
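To get a feel for why batching matters, here's a rough back-of-the-envelope sketch. The delay and execution times are illustrative placeholders, not actual mission figures, and the one-command-per-round-trip model is a deliberate simplification:

```python
# Rough sketch of commanding a rover under a long round-trip delay.
# All numbers are illustrative, not actual mission figures.

ONE_WAY_DELAY_MIN = 20        # one-way light time to Mars (varies with orbits)
ROUND_TRIP_MIN = 2 * ONE_WAY_DELAY_MIN
COMMAND_EXEC_MIN = 5          # time the rover spends executing one command

def total_time(num_commands: int, batch_size: int) -> int:
    """Minutes to run all commands when each batch costs one round trip."""
    batches = -(-num_commands // batch_size)  # ceiling division
    return batches * ROUND_TRIP_MIN + num_commands * COMMAND_EXEC_MIN

# One command per round trip vs. a "daily digest" batch of ten:
print(total_time(10, 1))    # 10 round trips -> 450 minutes
print(total_time(10, 10))   # 1 round trip  -> 90 minutes
```

Even in this toy model, batching ten commands cuts the total time by a factor of five, which is exactly why mission teams bundle instructions instead of sending them one at a time.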

JPL’s solution to this rather inefficient approach is to create an interface that simulates a robot’s immediate surroundings. In the video above, this is the “committed state,” or the area where a robot might be once time delay is taken into account. Then, by using predictive analysis, or, if you want to be more dramatic, by looking into the future, the software enables the robot to adjust its decisions and movements far faster than it can right now. Essentially, this new interface would let human operators give spontaneous directions when they see something interesting or dangerous. The simulation software builds in a degree of uncertainty, since it can’t know every minute detail, and operators can then take that uncertainty into account. Scientists are testing the interface as though they’re playing a video game to see which strategies enhance accuracy and efficiency.
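The "committed state" idea can be sketched in a few lines: given the rover's last reported position and speed, estimate where it will be by the time a new command arrives, along with an error bound for the uncertainty. Everything here, including the names, the numbers, and the linear uncertainty model, is a simplified illustration rather than JPL's actual software:

```python
# Minimal sketch of a "committed state" estimate: where will the rover be
# by the time our next command arrives? Names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class RoverState:
    x: float            # position along current heading, meters
    speed: float        # meters per second
    drift: float        # growth of position uncertainty, meters per second

def committed_state(state: RoverState, one_way_delay_s: float):
    """Predict position (and an error bound) one one-way delay from now.
    Anything the rover does inside that window is already 'committed':
    no command we send can change it."""
    predicted_x = state.x + state.speed * one_way_delay_s
    error = state.drift * one_way_delay_s
    return predicted_x, error

# A rover creeping along at 4 cm/s with a 20-minute one-way delay:
pos, err = committed_state(
    RoverState(x=0.0, speed=0.04, drift=0.005),
    one_way_delay_s=20 * 60,
)
print(f"predicted {pos:.0f} m ahead, +/- {err:.0f} m")
```

The operator's display would draw that whole uncertainty band, not a single point, which is what lets a human safely issue a spontaneous course correction despite never seeing the rover's true present state.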

Curiosity is currently using an autonavigation system to drive to Mount Sharp (see video below), but scientists believe this new system could be implemented soon, maybe even in time for Curiosity to try it. The biggest obstacle at the moment is limited bandwidth due to the small number of communications satellites orbiting Mars, which limits the number of commands that can be sent each day. But as the number of satellites increases, and as we put rovers and probes near asteroids, the moon, and other places, I predict this system will see a lot of use.
