
Robots can learn new skills through trial and error


Researchers at the University of California, Berkeley, have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence. They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks -- putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more -- without pre-programmed details about its surroundings. 
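For readers curious what "learning through trial and error" looks like in practice, the sketch below shows a generic reinforcement learning loop: the agent tries actions, observes the reward, and updates its estimates of which actions work. This is not the Berkeley team's algorithm (their system trains neural network policies from real robot sensor data); the toy environment, the tabular Q-values, and all parameter values here are illustrative assumptions only.

```python
# Illustrative sketch only: a generic trial-and-error reinforcement learning loop,
# NOT the Berkeley team's method. Tabular Q-learning on a made-up one-dimensional
# "reach the target" task.
import random

NUM_POSITIONS = 10      # toy state space: positions 0..9 on a line
TARGET = 9              # reward is given only for reaching this position
ACTIONS = [-1, +1]      # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2  # learning rate, discount, exploration rate

# Q-table: learned estimate of long-term reward for each (state, action) pair.
q = {(s, a): 0.0 for s in range(NUM_POSITIONS) for a in ACTIONS}

def step(state, action):
    """Toy environment: move, stay within bounds, reward 1.0 only at the target."""
    next_state = max(0, min(NUM_POSITIONS - 1, state + action))
    reward = 1.0 if next_state == TARGET else 0.0
    return next_state, reward, next_state == TARGET

for episode in range(500):
    state = 0
    for _ in range(50):  # cap episode length
        # Trial and error: mostly exploit what was learned so far, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Update the value estimate from the observed outcome.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state
        if done:
            break

print("Learned action at the start position:", max(ACTIONS, key=lambda a: q[(0, a)]))
```

The same loop structure, with richer observations and a learned policy in place of the table, is what lets a single piece of learning software be reused across many different tasks.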

"What we're reporting on here is a new approach to empowering a robot to learn," said Professor Pieter Abbeel in UC Berkeley's Department of Electrical Engineering and Computer Sciences. "The key is that when a robot is faced with something new, we won't have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it." 

The latest developments will be presented on Thursday, May 28, in Seattle at the International Conference on Robotics and Automation (ICRA). Abbeel is leading the project with fellow UC Berkeley faculty member Trevor Darrell, the director of the Berkeley Vision and Learning Center. Other members of the team are postdoctoral researcher Sergey Levine and Ph.D. student Chelsea Finn.

The work is part of a new People and Robots Initiative at UC's Center for Information Technology Research in the Interest of Society (CITRIS). The new multi-campus, multidisciplinary research initiative seeks to keep the dizzying advances in artificial intelligence, robotics and automation aligned to human needs. 

"Most robotic applications are in controlled environments where objects are in predictable positions," said Darrell. "The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings."
