
Help Researchers Figure Out How To Help Robots Understand Your Commands


While significant strides have been made recently in natural language processing, one of the current drawbacks for most robots is the inability to understand language that isn’t coded in ones and zeroes. For programmers, a future full of robotic servants, coworkers, and mates might seem pretty exciting, but for those of us who would rely on spoken language to communicate with robots, it seems a little more daunting. Cornell’s Robot Learning Lab is hard at work on this problem, trying to teach robots to take verbal instructions.
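
To get a feel for the gap Cornell is bridging, it helps to see what even a crude mapping from spoken commands to machine instructions looks like. The sketch below is purely illustrative: the verb lexicon and action primitives are invented here, whereas a lab like Cornell’s presumably learns these groundings from data rather than hard-coding them.

```python
# Toy illustration of grounding a spoken command in robot action primitives.
# The verb lexicon and primitive names are invented for this example.

ACTION_TEMPLATES = {
    "pick up": ["move_arm_to({obj})", "close_gripper()", "lift()"],
    "bring":   ["move_arm_to({obj})", "close_gripper()",
                "navigate_to({dest})", "open_gripper()"],
}

def ground_command(command: str) -> list[str]:
    """Map a natural-language command onto a sequence of robot primitives."""
    for verb, template in ACTION_TEMPLATES.items():
        if command.lower().startswith(verb):
            rest = command[len(verb):].strip()
            # Naive argument split on " to " -- a real system parses properly.
            obj, _, dest = rest.partition(" to ")
            return [step.format(obj=obj or rest, dest=dest or "?")
                    for step in template]
    raise ValueError(f"Don't know how to ground: {command!r}")

print(ground_command("bring the coffee mug to the kitchen table"))
# ['move_arm_to(the coffee mug)', 'close_gripper()',
#  'navigate_to(the kitchen table)', 'open_gripper()']
```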

Language itself is often vague and broad. Take Isaac Asimov’s three laws of robotics, for instance. The first law is that a robot cannot harm a human, or allow a human to come to harm. At first glance that might seem clear enough, but what exactly constitutes harm? Asimov himself posed this question in the story “Liar!”, which features a mind-reading robot named Herbie. Herbie lies to his human colleagues because he knows what they want to hear: he tells an ambitious human that he’s next in line for a big promotion, and he tells a heartsick human that her feelings for her coworker are reciprocated. Herbie lies because telling people what they don’t want to hear would harm them emotionally, but of course when they realize Herbie has been lying, they’re humiliated and suffer harm anyway. Asimov’s law is typically interpreted as preventing physical harm, but Herbie’s reading of it makes sense, given the different kinds of harm a person can experience. A robot programmed with such a law would also have to be programmed with an understanding of every interpretation of the word harm, as well as relative harm (a scratch versus a bullet wound, and so on).
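
Making that concrete, a robot’s code would need something like a taxonomy of harms with severities attached, so it could weigh one harm against another instead of treating “harm” as a single forbidden outcome. Here’s a deliberately simplistic sketch; the categories and the 0-to-10 weights are invented for illustration:

```python
# A deliberately simplistic taxonomy of harms. The categories and the
# 0-to-10 severity weights are invented here for illustration.
from enum import Enum

class Harm(Enum):
    # (category, severity on an arbitrary 0-10 scale)
    SCRATCH      = ("physical", 1)
    HUMILIATION  = ("emotional", 3)
    HEARTBREAK   = ("emotional", 5)
    BULLET_WOUND = ("physical", 9)

def least_harmful(options: dict) -> str:
    """Pick the action whose predicted harm is least severe.

    This is Herbie's dilemma in "Liar!": when every available action
    causes *some* harm, the robot must compare severities instead of
    treating harm as a single forbidden outcome.
    """
    return min(options, key=lambda action: options[action].value[1])

# Telling the truth stings now; being caught in the lie cuts deeper later.
print(least_harmful({"tell_truth": Harm.HUMILIATION,
                     "keep_lying": Harm.HEARTBREAK}))  # -> tell_truth
```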


These Robots Have Origami Wheels That Can Change Shape


I was never very good at making paper airplanes, but I’ve always had a thing for origami. My brain and my hands don’t seem to be naturally inclined toward the strategic folding of paper into three-dimensional shapes, although in elementary school I did master the art of making the fortune teller, otherwise known as the cootie catcher. Now that origami is being integrated into robotics, its geek cred has skyrocketed and I might need to give it another go.

It makes sense if you think about it. Turning two-dimensional materials into a three-dimensional shape that can actually do something is a process perfectly suited for robotics, particularly for their wheels, as demonstrated by a couple of research groups who presented at this month’s IEEE International Conference on Robotics and Automation.
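
Part of the appeal of a shape-shifting wheel comes down to simple mechanics: at a fixed motor speed, a larger wheel covers more ground per revolution but delivers less pushing force, so folding the wheel down trades top speed for the torque needed to climb over obstacles. A back-of-the-envelope sketch, with purely illustrative numbers rather than anything from the ICRA presentations:

```python
import math

# Back-of-the-envelope tradeoff for a variable-radius wheel.
# All numbers are illustrative, not taken from the ICRA presentations.
MOTOR_RPM = 120        # fixed motor output speed
MOTOR_TORQUE = 0.8     # newton-meters available at the axle

for radius_cm in (3.0, 6.0):                          # folded vs. expanded
    r = radius_cm / 100                                # convert to meters
    ground_speed = (MOTOR_RPM / 60) * 2 * math.pi * r  # v = omega * r
    push_force = MOTOR_TORQUE / r                      # F = tau / r
    print(f"r = {radius_cm:.0f} cm: {ground_speed:.2f} m/s, "
          f"{push_force:.0f} N of drive force")

# r = 3 cm: 0.38 m/s, 27 N of drive force
# r = 6 cm: 0.75 m/s, 13 N of drive force
```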


Robotic Systems To Field 311 Calls In New York City


If you’ve ever lived in New York City, chances are you’ve had occasion to dial 311 at some point. 311 is the city’s 911 for non-emergencies, so when people convene under your window at 4:30 am for an impromptu party, 311 is the number you call (provided you don’t go out and join them). The network fields 60,000 complaints and questions per day via phone, text, app, and website, so there’s not much downtime for those on the receiving end of New Yorker complaints. And that’s precisely why (you know what’s coming here) the 311 center is adding robotic systems to answer the easier questions.

Since the service’s inception in 2003, humans have fielded these queries 24/7, which is surprising in the age of automated operators. In fact, the 311 center is so busy that a Microsoft researcher likened it to a “NASA control center.” All that manpower is, of course, expensive, so after visiting the center, Microsoft began devising software that can answer the easy, factual questions, such as queries about school closings or parking regulations. My initial thought is that people calling with these questions should use this miraculous invention called the Internet, but it’s true that local questions are harder to answer online than general ones.
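
The “easy, factual questions” tier is exactly what a simple question-matching program handles well: compare the caller’s words against a bank of known questions, answer when the match looks confident, and hand everything else to a human. The sketch below is only a toy version of that idea, not Microsoft’s actual system; the questions, answers, and word-overlap scoring are all invented:

```python
# A toy FAQ matcher for the "easy, factual" tier of 311 questions.
# The entries and the word-overlap score are invented for illustration.

FAQ = {
    "are schools open today": "All NYC public schools are open today.",
    "is alternate side parking in effect": "Alternate side rules are in effect today.",
    "when is trash pickup": "Collection runs on your district's regular schedule.",
}

def answer(question: str):
    """Return the best FAQ answer, or None to hand off to a human operator."""
    q_words = set(question.lower().replace("?", "").split())
    best_key = max(FAQ, key=lambda k: len(q_words & set(k.split())))
    overlap = len(q_words & set(best_key.split()))
    return FAQ[best_key] if overlap >= 2 else None  # punt when unsure

print(answer("Are the schools open today after the storm?"))
# All NYC public schools are open today.
print(answer("There's a loud party under my window"))
# None -- this one still goes to a human operator
```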


Post-Apocalyptic Bikers, Colonial Marines, And Steampunk Robots (Oh My)


Argentinian artist Ignacio Bazán Lazcano had us at “badass post-apocalyptic bikers,” but he ensured our permanent loyalty when he threw “Colonial Marines” and “steampunk robot cavalry” into the equation. His DeviantArt page is chock-full of all sorts of awesome stuff, but after stumbling across it, I was particularly drawn to three of his projects. First up is “Rebel Bikes,” which imagines heavily armed motorcycles perfectly suited for crossing the heat-blasted wastes in search of gas and supplies. I love the sheer amount of detail he’s worked into these pics.


Godzilla Anatomy And A Handy Visual Guide To Robots


Much like the Spanish Inquisition, nobody expects a kaiju rampage. Well, except maybe the people in Tokyo. If they aren’t expecting a kaiju rampage 24/7, they are clearly not learning from previous experience. But regardless of whether you hang your hat in Japan or not, you could always benefit from boning up on giant monster anatomy, just in case you wind up being the plucky everyman tasked with bringing one of the big brutes down. The handy-dandy Godzilla anatomy chart above should be a perfect place to begin your education.


Robots May Soon Be Able To Sweat And Get Goosebumps


Arthur C. Clarke said that “any sufficiently advanced technology is indistinguishable from magic.” Techno-magician Marco Tempest would agree. In his TED robot demo, he acknowledges that one of the reasons robots make people nervous is that “we cannot read their intentions,” which also makes it difficult for us to work closely with them. Tempest suggests that one way to feel more comfortable with robots is to “add a layer of deception,” or the illusion that a machine is thinking or feeling before we have the actual technology to allow for those processes. Researchers at Japan’s Kansai University are doing just that: they’re building robots that react involuntarily, the way humans do, by sweating and getting goosebumps.

The researchers also acknowledge that one of the biggest challenges in robotics is that we don’t know what robots are “thinking.” Sure, robots can exhibit expressions or mimic behavior, but those are essentially illusions designed to put humans at ease. Their robot’s goosebumps might be raised by a cold wind or a chill-inducing story, and they’ve also built a robotic head capable of sweating, which makes me think of one of my favorite scenes in Battlestar Galactica: just before Starbuck begins interrogating Leoben in the first-season episode “Flesh and Bone,” she notices that the Cylon is sweating. It gives her pause, as “Cylons shouldn’t sweat.” It’s a small detail, but it’s a huge invasion of the human realm. The Kansai researchers are intentionally blurring the boundaries between human and machine in small but significant ways.
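
In engineering terms, what the Kansai team is after amounts to wiring a robot’s internal state to actuators that humans already know how to read. The toy mapping below is a guess at the general shape of such a system; the signal names, weights, and thresholds are invented, not taken from the Kansai University hardware:

```python
# Toy mapping from a robot's internal state to involuntary human-readable
# cues. Signal names, weights, and thresholds are all invented here.

def involuntary_cues(cpu_load: float, ambient_temp_c: float,
                     startle_level: float) -> dict:
    """Return actuator intensities (0..1) for sweat and goosebump hardware."""
    # Sweat when "working hard" or hot, the way a human does under exertion.
    sweat = min(1.0, max(0.0, 0.7 * cpu_load + 0.05 * (ambient_temp_c - 25)))
    # Raise goosebumps for cold air or a sudden scare (the chilling story).
    goosebumps = 0.0
    if ambient_temp_c < 15 or startle_level > 0.6:
        goosebumps = min(1.0, max((15 - ambient_temp_c) / 15, startle_level))
    return {"sweat_rate": round(sweat, 2),
            "goosebump_height": round(goosebumps, 2)}

print(involuntary_cues(cpu_load=0.9, ambient_temp_c=28, startle_level=0.1))
# {'sweat_rate': 0.78, 'goosebump_height': 0.0}
```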
