Learning to Open New Doors
Abstract

As robots enter novel, uncertain home and office environments, they are increasingly able to navigate these environments successfully. To be practically deployed, however, robots should also be able to manipulate their environment to gain access to new spaces, such as by opening a door or operating an elevator. This remains a challenging problem because a robot will likely encounter doors (and elevators) it has never seen before. Door handles vary widely in appearance, yet because similar function implies similar form, they share general visual features. These shared features can be extracted to give a robot the information needed to manipulate a specific object and carry out a task. For example, opening a door requires the robot to identify the following properties: (a) the location of the door handle's axis of rotation, (b) the size of the handle, and (c) the type of handle (left-turn or right-turn). Given these keypoints, the robot can plan the sequence of control actions required to open the door. We identify these "visual keypoints" using vision-based learning algorithms, and our system assumes no prior knowledge of the 3D location or shape of the door handle. By experimentally verifying our algorithms on doors not present in the training set, we take a step toward enabling a robot to reach new spaces in an unfamiliar building by opening doors and operating elevators, even ones it has never seen before.
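
The abstract does not give an implementation, but the three properties it lists can be represented concretely. The following Python sketch (all names and values hypothetical, not from the paper) illustrates how detected keypoints might be packaged and how the handle type could determine the sign of the turn command passed to a controller:

```python
# Hypothetical representation of the "visual keypoints" described above:
# (a) axis of rotation, (b) handle size, (c) turn direction.
from dataclasses import dataclass

@dataclass
class DoorHandleKeypoints:
    axis_xyz: tuple       # (a) estimated 3D location of the handle's rotation axis, in meters
    length_m: float       # (b) estimated handle size
    turn_direction: str   # (c) "left" or "right"

def turn_angle(kp: DoorHandleKeypoints, magnitude_rad: float = 1.0) -> float:
    """Signed rotation (radians) to apply about the handle axis,
    negative for a left-turn handle, positive for a right-turn handle."""
    sign = -1.0 if kp.turn_direction == "left" else 1.0
    return sign * magnitude_rad

kp = DoorHandleKeypoints(axis_xyz=(0.8, 0.1, 1.0), length_m=0.12, turn_direction="left")
print(turn_angle(kp))  # -1.0
```

A real system would estimate these fields from images and depth data; the sketch only shows how the three keypoint properties suffice to parameterize a turning motion.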