Even though many people believe robots and other automated systems will put workers out of a job (others believe they will usher in a new era of innovation and resourcefulness), research conducted by MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) reveals that workers actually prefer that robots take the lead in manufacturing tasks. The study explores both sides of robotic workers: on the upside, they free humans from tasks characterized by the “three D’s” — tasks that are dirty, dangerous, and/or dull. Of course, if robots do assume those jobs, what’s left for humans? Oversight? Programming? Perhaps collaboration with the robots?
As NASA debates whether to send more people to the moon, as well as whether, how, and when to try a manned mission to Mars, it has decided to fund a new kind of robot for space exploration: tumbling cubes.
Ever since I read From the Mixed-Up Files of Mrs. Basil E. Frankweiler, which chronicles the adventures of two kids who take up residence in New York City’s Metropolitan Museum of Art, I’ve had dreams of thwarting security and bunking in a museum (preferably a museum of science, but I’m not picky). Turns out, I’m not the only one. A bunch of robots are fulfilling this dream right now.
Starting last night, a group of robots is spending five nights in Tate Britain, examining centuries of British art under the cover of darkness — and on film. The Workers, the design studio that recently won the inaugural IK Prize with this project, developed the idea when one of its members was working on an individual project at the Tate that required him to be there after hours. He found the experience fascinating, just like in the book (okay, maybe not just like in the book, but close enough), and wanted to figure out a way for others to see what it was like.
While significant strides have been made recently in natural language processing, one of the current drawbacks for most robots is their inability to understand language that isn’t written in code. For programmers, a future full of robotic servants, coworkers, and mates might seem pretty exciting, but for those of us who would rely on spoken language to communicate with robots, it seems a little more daunting. Cornell’s Robot Learning Lab is hard at work on this problem, trying to teach robots to follow verbal instructions.
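To see why this is hard, here’s a toy sketch of the naïve approach: matching keywords in a spoken instruction to canned sequences of robot primitives. This is not how Cornell’s lab actually does it, and every trigger word and action name below is invented for illustration.

```python
# Toy keyword-matching sketch (hypothetical; not Cornell's system).
# Each trigger word maps to a canned sequence of robot primitives.
ACTION_TEMPLATES = {
    "boil": ["grasp(pot)", "fill(pot, water)", "place(pot, stove)", "turn_on(stove)"],
    "serve": ["grasp(cup)", "pour(pot, cup)", "place(cup, table)"],
}

def ground(instruction):
    """Return the primitive actions whose trigger word appears in the instruction."""
    plan = []
    for trigger, primitives in ACTION_TEMPLATES.items():
        if trigger in instruction.lower():
            plan.extend(primitives)
    return plan

print(ground("Please boil some water"))
```

The sketch breaks down immediately: “make some tea” matches no trigger, even though it implies boiling water. That gap between surface wording and intent is exactly what makes teaching robots to follow spoken language so daunting.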
Language itself is often vague and broad — take Isaac Asimov’s three laws of robotics, for instance. The first law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. At first glance, that might seem clear enough, but what exactly constitutes harm? Asimov himself posed this question in the story “Liar!,” which features a mind-reading robot named Herbie. Herbie lies to his human colleagues because he knows what they want him to say — he tells an ambitious human that he’s next in line for a big promotion, and he tells a heartsick human that her feelings for her coworker are reciprocated. Herbie lies because telling people what they don’t want to hear would be emotionally harmful, but of course when they realize Herbie has been lying they’re humiliated and harmed anyway. Asimov’s law is typically interpreted as preventing physical harm, but Herbie’s reading of the law makes sense, given the different types of harm one can experience. If a robot were to be programmed with such a law, it would also have to be programmed with an understanding of all the different interpretations of the word harm, as well as relative harm (a scratch versus a bullet wound, etc.).
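The point about relative harm can be made concrete with a purely illustrative sketch. None of this comes from Asimov or any real robot; the severity scale and its values are invented:

```python
# Invented severity scale for a first-law check (illustrative only).
# A numeric scale can rank physical harms, but some harms don't fit on it.
HARM_SEVERITY = {
    "scratch": 1,
    "bullet_wound": 9,
    "humiliation": None,  # emotional harm: not on a physical scale at all
}

def may_act(predicted_harm):
    """Permit an action only if its predicted harm is known and minor."""
    severity = HARM_SEVERITY.get(predicted_harm)
    if severity is None:
        # Like Herbie, the robot has no way to weigh this kind of harm.
        raise ValueError(f"cannot evaluate harm: {predicted_harm!r}")
    return severity <= 2

print(may_act("scratch"))  # prints True
```

Even this crude scale covers only physical harm; the emotional harm at the heart of Herbie’s dilemma falls outside it entirely, which is precisely the failure mode “Liar!” dramatizes.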
I was never very good at making paper airplanes, but I’ve always had a thing for origami. My brain and my hands don’t seem to be naturally inclined toward the strategic folding of paper into three-dimensional shapes — although in elementary school I did master the art of making the fortune teller, otherwise known as the cootie catcher. Now that origami is being integrated into robotics, its geek cred has skyrocketed and I might need to give it another go.
It makes sense if you think about it. Turning two-dimensional materials into a three-dimensional shape that can actually do something is a process perfectly suited to robotics, particularly to robot wheels, as demonstrated by a couple of research groups who presented at this month’s IEEE International Conference on Robotics and Automation.
If you’ve ever lived in New York City, chances are you’ve had occasion to dial 311 at some point. 311 is the city’s 911 for non-emergencies, so when people convene under your window at 4:30 am for an impromptu party, 311 is the number you call (provided you don’t go out and join them). The network fields 60,000 complaints and questions per day via phone, text, app, and website—there’s not much downtime for those on the receiving end of New Yorker complaints. And that’s precisely why—you know what’s coming here—the 311 center is adding robotic systems to answer the easier questions.
Since the center’s inception in 2003, humans have fielded these queries 24/7. It’s surprising that’s gone on so long in the age of automated operators. In fact, the 311 center is so busy that a Microsoft researcher likened it to a “NASA control center.” All that manpower is, of course, expensive, so after visiting the center, Microsoft began devising automated software that can answer the easy, factual questions, such as queries about school closings or parking regulations. My initial thought is that people calling with these questions should use this miraculous invention called the Internet. However, it’s true that local questions are harder to answer online than general ones.