
Faculty of Computer Science

Research on Intelligent Robotics

Research on intelligent robotics is conducted within the Knowledge Representation and Reasoning (KRR) group at the Artificial Intelligence institute of the TU Dresden.
Intelligent robotics is concerned with perception, control, and reasoning and planning in the face of uncertainty. Building on the fields of mathematical statistics, computer vision, and artificial intelligence, we want to endow robots with a new level of autonomy and robustness in real-world situations.

Originally, our focus was on applying reasoning-about-actions methods to mobile robots. In 2000, we acquired two Pioneer 2 mobile robots, which have since been used in an office delivery application. In particular, we developed new methods for execution monitoring, localisation, and navigation.

Pioneer 2 robots Astra and Bender
Bumblebee stereo camera

More recently, we have also become involved in research on vision-based localisation and mapping techniques. We would like to enable a mobile robot equipped with a single-lens or stereo camera to perceive its operating environment. That is, the robot should be able to detect obstacles, build a map of the environment, and localise itself with respect to this map during operation, mainly using visual information.

We have close relationships with researchers in the Computational Logic group (specialised in reasoning about actions), the Intelligent Systems and Neural Computation groups (specialised in computer vision), and the Photogrammetry group.

The mobile robots

The Pioneer 2 is our main platform for research and student projects. The first robot, Bender, is equipped with a laser range finder, a pan-tilt colour camera, and a gripper. The second one, Astra, does not have a laser range finder but can be used with different kinds of cameras.

Pioneer 2 robot Astra
Pioneer 2 robot Bender

Robot vision

Mobile robots perceive their environment through a set of sensors. We can distinguish between distance sensors, such as sonars and laser range finders, and vision sensors, such as monocular cameras and stereo cameras. Time-of-flight distance sensors are mainly used for navigation tasks, including obstacle avoidance, localisation and mapping in office-like environments. As we move to more complex robot applications in natural environments, such as household robots, vision sensors become increasingly useful.
Robot vision is concerned with the use of vision sensors in mobile robotics applications. In general, the methods applied cover the whole range of computer vision techniques, from low-level processing and segmentation through object recognition to scene understanding.

Stereo vision as a range sensing technology is not new, but it is gaining popularity in mobile robotics as hardware performance has caught up with the processing requirements of this type of sensor. Stereo depth extraction is, however, subject to a number of problems. The smoothing effect of mask-based stereo matching algorithms, occlusions, and poor image content can lead to incorrect and incomplete depth information.
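The principle behind such window-based matching can be illustrated in a few lines. The sketch below is a deliberately naive NumPy implementation, not an efficient or production stereo matcher; the window size and disparity range are arbitrary choices for the example. For each pixel in the left image, it finds the horizontal shift (disparity) that minimises the sum of absolute differences over a small window:

```python
import numpy as np

def disparity_sad(left, right, max_disp=16, win=2):
    """Naive block-matching stereo: for each left-image pixel, find the
    disparity d minimising the sum of absolute differences (SAD) between
    a (2*win+1)^2 window in the left image and the window shifted d
    pixels to the left in the right image. Illustrative, not optimised."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch -
                            right[y - win:y + win + 1,
                                  x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

The smoothing effect mentioned above comes directly from the window: every pixel inside the window votes for one disparity, so depth discontinuities narrower than the window are blurred.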

Rectified right camera image
Disparity image

In our current research, we investigate methods to deal with this kind of uncertainty. We are developing representations of stereo data that contain meaningful confidence measures and facilitate the use of probabilistic methods in subsequent data-interpretation steps.
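One simple confidence measure of this kind, shown here purely as an illustrative assumption rather than our actual representation, is the peak ratio: the best matching cost is compared against the second-best cost at a non-adjacent disparity. A ratio near 1 means the cost curve is flat and the match is ambiguous; a large ratio means a distinct minimum:

```python
import numpy as np

def match_confidence(costs):
    """Peak-ratio confidence for one pixel's matching-cost curve.
    Compares the best cost against the second-best cost at a
    non-adjacent disparity; values near 1 indicate ambiguous matches."""
    costs = np.asarray(costs, dtype=np.float64)
    best = int(np.argmin(costs))
    # exclude the winning disparity and its immediate neighbours
    mask = np.ones(len(costs), dtype=bool)
    mask[max(0, best - 1):best + 2] = False
    second = costs[mask].min()
    return second / max(costs[best], 1e-9)
```

Such a per-pixel score can then be carried along with the disparity value, so that later probabilistic interpretation steps can down-weight unreliable depth estimates.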

For navigation tasks, it may be sufficient for the robot to distinguish between obstacles and open space. In other, more challenging applications, however, the robot may need to infer the identity of individual objects from the presence of surfaces of a particular shape and configuration. That is, the robot has to build a geometric model of the objects in the scene. This is precisely what we are concerned with.

Rectified right camera image
Detected surfaces

We are also interested in various real-time techniques for people tracking. In the course Interaction in Robot Art and Artificial Worlds, we have developed the Hen House robot art installation. In this interactive game, a monocular over-head camera is used to track the players.
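With a static over-head camera, a player can already be tracked by simple frame differencing. The sketch below is an illustrative assumption about how such tracking can work, not the installation's actual code; the threshold and the single-player simplification are choices made for the example. It subtracts a reference background image, thresholds the difference, and returns the centroid of the changed pixels:

```python
import numpy as np

def track(frame, background, thresh=30):
    """Frame-differencing tracker for a static over-head camera:
    threshold |frame - background| and return the centroid (x, y) of
    the changed pixels, or None if nothing moved."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))
```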

For further information, please see the page about Robot Vision projects.

Localisation and mapping

Robot localisation is the problem of estimating the robot's coordinates relative to an external reference frame. Here, the robot is given a suitable map of its environment, but to localise itself relative to this map, it needs to consult its sensor data. If the robot does not have access to a map of the environment, a problem arises which is known as simultaneous localisation and mapping (SLAM). In SLAM, the robot acquires a map of its environment while simultaneously localising itself with respect to this map. We are specifically interested in visual SLAM, in which the robot uses cameras as the main sensors for localisation and mapping.

We have developed a sequential position estimation technique for the Pioneer 2 robot based on multiple view geometry. We have also investigated recursive estimation techniques using the Kalman filter. Currently, we are developing extended techniques for visual SLAM that allow the mobile robot to estimate its position using a small set of tracked features and simultaneously obtain information about object surfaces in real time.
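To illustrate the recursive estimation idea, here is a minimal linear Kalman filter for 2-D robot position under a constant-velocity motion model. This is a textbook sketch in NumPy with assumed noise parameters, not our visual-SLAM implementation, and it omits the nonlinear measurement models a camera actually requires:

```python
import numpy as np

def kf_step(x, P, z, dt=1.0, q=0.01, r=0.5):
    """One predict/update cycle of a linear Kalman filter.
    State x = [px, py, vx, vy]; measurement z = noisy position
    (e.g. a visually estimated robot position)."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                          # constant-velocity motion
    H = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.]])                # we observe position only
    Q = q * np.eye(4)                                # process noise (assumed)
    R = r * np.eye(2)                                # measurement noise (assumed)
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Fed a sequence of noisy position fixes, the filter converges to both the position and the unobserved velocity, which is the essential property exploited when tracking a moving camera.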

Multiple view geometry method
Kalman filter method

For further information, please see the page about SLAM projects.

Intelligent execution monitoring

We face difficulties if we want a mobile robot to perform complex tasks in real-world environments. The information the robot has about its operating environment might be out of date, incomplete, or uncertain. The execution of actions might fail for a multitude of reasons. Ideally, we want the robot to reason about unexpected situations and to infer possible explanations in order to recover from the failure situation. This is what we refer to as intelligent execution monitoring.
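The first step of execution monitoring, detecting discrepancies between the effects an action model predicts and what the sensors report, can be sketched in a few lines. The fluent names and the `monitor` helper below are illustrative assumptions for the example, not our actual system:

```python
def monitor(expected, observed):
    """Return the fluents whose observed value contradicts the value
    predicted by the action model, as {fluent: (expected, observed)}."""
    return {f: (v, observed[f])
            for f, v in expected.items()
            if f in observed and observed[f] != v}

# hypothetical situation: after executing deliver(letter, room123)
expected = {"at(robot)": "room123", "holding(letter)": False}
observed = {"at(robot)": "corridor", "holding(letter)": False}
failures = monitor(expected, observed)
```

A non-empty result would then trigger the interesting part, reasoning about possible explanations (for instance, that the door to the room was closed) and selecting a recovery action.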

Navigating the corridor
Interacting with people

For further information, please see the page about Reasoning projects.

Last update: Wed, 30 Aug 2006 11:36:52
Author: Axel Großmann