Blog Category: Robotics

Improving User Interfaces for Robot Teleoperation


The FXPAL robotics research group has recently explored technologies for improving the usability of mobile telepresence robots. We evaluated a prototype head-tracked stereoscopic (HTS) teleoperation interface for a remote collaboration task. The results of this study indicate that using an HTS system reduces task errors and improves perceived collaboration success and viewing experience.

We also developed a new focus plus context viewing technique for mobile robot teleoperation. It lets us use wide-angle camera images that provide rich contextual visual awareness of the robot’s surroundings while preserving a distortion-free region in the center of the camera view.
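The details of the projection are in the paper; as a rough sketch of the idea, the following Python/OpenCV snippet builds a radial remap in which a central region passes through unchanged while the periphery is compressed. The focus radius and compression factor here are placeholder values, not the parameters from our system.

```python
import cv2
import numpy as np

def focus_plus_context_maps(width, height, focus_radius=0.4, compression=2.0):
    """Build remap tables for a radial focus-plus-context warp.

    Pixels within `focus_radius` (a fraction of the half-diagonal) are
    passed through unchanged, keeping the center distortion-free; pixels
    beyond it sample increasingly far out in the source image, squeezing
    the wide-angle periphery into the remaining screen space.
    """
    cx, cy = width / 2.0, height / 2.0
    xs, ys = np.meshgrid(np.arange(width, dtype=np.float32),
                         np.arange(height, dtype=np.float32))
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx * dx + dy * dy)
    r0 = focus_radius * np.sqrt(cx * cx + cy * cy)

    # Piecewise radial gain: identity inside the focus region, then a
    # steeper slope that reaches further into the source frame.
    r_src = np.where(r <= r0, r, r0 + (r - r0) * compression)
    gain = r_src / np.maximum(r, 1e-6)
    map_x = (cx + dx * gain).astype(np.float32)
    map_y = (cy + dy * gain).astype(np.float32)
    return map_x, map_y

# Usage: warp each incoming wide-angle frame before display.
# map_x, map_y = focus_plus_context_maps(frame.shape[1], frame.shape[0])
# warped = cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```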

To this, we added a semi-automatic robot control method that lets operators navigate the telepresence robot by pointing and clicking directly on the camera image feed. This through-the-screen interaction paradigm has the advantage of decoupling operators from the robot control loop, freeing them for other tasks besides driving the robot.
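One simple way to realize such click-to-drive control is to back-project the clicked pixel onto the floor plane and hand that point to the robot’s navigation stack as a goal. The sketch below assumes an idealized pinhole camera with known intrinsics, mounting height, and pitch; the function name and numbers are illustrative rather than taken from our implementation.

```python
import numpy as np

def click_to_ground_goal(u, v, K, cam_height, cam_pitch):
    """Back-project an image click (u, v) onto the floor plane.

    Assumes a pinhole camera with intrinsic matrix K, mounted cam_height
    meters above the floor and pitched down by cam_pitch radians. Axes
    follow the usual camera convention: x right, y down, z forward.
    Returns (x, z) in a floor-aligned frame, or None if the click is at
    or above the horizon.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame

    # Undo the downward pitch so the ray is expressed in a frame whose
    # y axis is the true vertical (still pointing down).
    c, s = np.cos(cam_pitch), np.sin(cam_pitch)
    ray = np.array([[1, 0, 0],
                    [0, c, s],
                    [0, -s, c]]) @ ray

    if ray[1] <= 1e-9:
        return None  # no intersection with the floor
    t = cam_height / ray[1]  # scale so the ray descends cam_height meters
    return t * ray[0], t * ray[2]

# Example: a click slightly below the center of a 640x480 feed.
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
print(click_to_ground_goal(320, 300, K, cam_height=1.2, cam_pitch=0.0))
```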

As a result of this work, we presented two papers at the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), and received the Best Paper Award in the Design category for “Look Where You’re Going: Visual Interfaces for Robot Teleoperation.”

Using Stereo Vision to Operate Mobile Telepresence Robots


The use of mobile telepresence robots (MTRs) is increasing, yet very few MTRs have autonomous navigation systems. Teleoperation is therefore usually still a manual task, and one that often suffers from user experience problems. We believe this may be due to (1) the fixed viewpoint and limited field of view of a 2D camera system and (2) the difficulty of judging distances caused by the lack of depth perception.
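The depth cue that stereo restores comes from disparity: a feature shifted by d pixels between the two views lies at depth Z = f × B / d, for focal length f (in pixels) and camera baseline B (in meters). The numbers in the snippet below are illustrative only:

```python
# Stereo restores absolute distance via the disparity relation Z = f * B / d:
# f is the focal length in pixels, B the baseline between the two cameras
# in meters, and d the horizontal disparity of a feature in pixels.
focal_px = 525.0      # focal length (illustrative)
baseline_m = 0.065    # camera separation, close to human interpupillary distance
disparity_px = 17.0   # measured shift of a feature between left/right views

depth_m = focal_px * baseline_m / disparity_px
print(f"estimated depth: {depth_m:.2f} m")  # -> estimated depth: 2.01 m
```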

To improve the experience of teleoperating the robot, we evaluated the use of stereo video coupled with a head-tracked and head-mounted display.

To do this, we installed a brushless gimbal with a stereo camera pair on a robot platform. We used an Oculus Rift (DK1) device for visualization and head tracking.


Stereobot telepresence robot (left) and stereo gimbal system (right).
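At its core, head-tracked viewing means continuously mapping the headset’s orientation onto the gimbal’s two axes. The sketch below shows one plausible version of that mapping; the quaternion convention, axis limits, and function name are assumptions for illustration, not our controller’s actual code.

```python
import math

def head_pose_to_gimbal(q, yaw_limit=math.radians(90), pitch_limit=math.radians(45)):
    """Map a head-tracker orientation quaternion (w, x, y, z) to clamped
    yaw/pitch setpoints for a two-axis camera gimbal. Roll is dropped,
    since a two-axis gimbal cannot reproduce it."""
    w, x, y, z = q
    # Standard quaternion-to-Euler extraction (aerospace ZYX convention).
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    pitch = math.asin(max(-1.0, min(1.0, 2.0 * (w * y - z * x))))
    # Respect the gimbal's mechanical range.
    yaw = max(-yaw_limit, min(yaw_limit, yaw))
    pitch = max(-pitch_limit, min(pitch_limit, pitch))
    return yaw, pitch

# Example: head turned a little to the left and tilted slightly down.
print(head_pose_to_gimbal((0.98, 0.0, -0.10, 0.17)))
```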

We conducted a preliminary user study to gather qualitative feedback about telepresence navigation tasks using stereo vs. a 2D camera feed, and high vs. low camera placement. In a simulated telepresence scenario, participants were asked to drive the robot from an office to a meeting location, have a conversation with a tester, and then drive back to the starting location.

An ANOVA on System Usability Scale (SUS) scores, with visualization type and camera placement as factors, showed a significant effect of visualization type on the score. However, the higher SUS scores went to navigation with the 2D camera feed. Camera placement height had no significant effect.
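For readers unfamiliar with SUS, each ten-item questionnaire reduces to a 0–100 score, and the analysis is a standard two-way ANOVA. The sketch below shows both steps in Python with pandas and statsmodels, using a small made-up data frame rather than our study data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def sus_score(items):
    """Score one participant's ten SUS responses (each 1-5): odd items
    contribute (response - 1), even items (5 - response), and the sum
    is scaled by 2.5 onto a 0-100 range."""
    odd = sum(r - 1 for r in items[0::2])
    even = sum(5 - r for r in items[1::2])
    return (odd + even) * 2.5

# Hypothetical scores, one row per participant/condition (not study data).
df = pd.DataFrame({
    "sus":           [72.5, 80.0, 55.0, 60.0, 77.5, 82.5, 52.5, 57.5],
    "visualization": ["2d", "2d", "stereo", "stereo"] * 2,
    "placement":     ["high", "low"] * 4,
})

# Two-way ANOVA with visualization type and camera placement as factors.
model = ols("sus ~ C(visualization) * C(placement)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```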

Two main reasons could explain the lower ratings for stereo: (1) about half of the users experienced at least some form of disorientation, which may have been due to their unfamiliarity with immersive VR headsets, but also to the sensory conflict of being visually immersed in a moving environment while the other bodily senses report sitting still; and (2) the video transmission quality was not optimal, both because objects in the building interfered with the analog video signal and because of the relatively low display resolution of the Oculus Rift DK1.

In the future we intend to improve the visual quality of the stereo output by using a better video transmission link and head-worn display. We also intend to evaluate robot navigation tasks using a full VR view, which will use the robot’s sensors and localization system to display the robot correctly within a virtual representation of our building.

Towels! And Open Source Robotics


“Cloth Grasp Point Detection based on Multiple-view Geometric Cues with Application to Robotic Towel Folding.” Just watch it:

This is a PR2 robot from Willow Garage, being used in a project led by Berkeley grad student Jeremy Maitin-Shepard. (The paper on the folding application is here.) The PR2 and its cousin the Texai have visited us at FXPAL; we’re hoping to improve our acquaintance soon (stay tuned!).

The roboticists at Willow Garage take a very interesting approach: growing the robotics community through open source development. They also loan their hardware to other research labs on a case-by-case basis, again to encourage development on their ROS platform.

What is ROS? From the Willow Garage site:

ROS, Willow Garage’s software platform, stands for two things: Robot Operating System, a loose analogy to a computer operating system, and Robot Open Source. All of the software in development at Willow Garage is released under a BSD license at code.ros.org/gf/projects/ros. It is completely open source and free for others to use, change and commercialize upon — our primary goal is to enable code reuse in robotics research and development. Willow Garage is strongly committed to developing open source and reusable software. With the help of an international robotics community, we’ve also released all of the software we are building on ROS at code.ros.org in the “ros-pkg” and “wg-ros-pkg” projects.
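To give a flavor of what code built on ROS looks like, here is a minimal rospy node that drives a robot base forward. Note that it uses the present-day rospy API and arbitrary example values; it is a generic sketch, not FXPAL or Willow Garage code:

```python
#!/usr/bin/env python
# Minimal ROS node using rospy, the Python client library that ships with
# ROS. It publishes velocity commands on cmd_vel, the conventional topic
# for driving a mobile base. Rate and speed are arbitrary example values.
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    rospy.init_node("simple_driver")
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)          # 10 Hz command loop
    cmd = Twist()
    cmd.linear.x = 0.2             # creep forward at 0.2 m/s
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    try:
        drive_forward()
    except rospy.ROSInterruptException:
        pass
```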