Biorobotics

Saharan Ants and Robots

RESEARCH PROJECT DESCRIPTION

This research focuses on employing a synthetic methodology for the study of natural systems: the main idea is to use robots to model animal behavior. Over the last six years we have successfully applied this methodology to the study of the navigation behavior of insects.

Tiny animals like bees and ants are capable of complex and robust navigation behavior. For example, the desert ant Cataglyphis is able to explore its desert habitat for hundreds of meters while foraging and then return to its nest precisely and in a straight line. In addition, Cataglyphis ants are social animals: they share the same shelter, communicate with their nest mates, recognize objects, reproduce, and do many other things in their everyday struggle for survival. With a body of less than 14 mm and a brain of less than one cubic millimeter, they outperform any autonomous machine we have built so far. Cataglyphis ants rely mainly on three navigation strategies: path integration, visual piloting, and systematic search. Path integration is the continuous updating of a vector pointing home by integrating all angles steered and all distances covered during the course. To do this, the insect must first acquire the elementary pieces of information needed for the task: the orientation in which it is traveling and the distance that it covers. Both bees and ants rely on information they can get from the sky to determine their orientation. More specifically, their "compass" is based on sensing skylight patterns, and especially the polarization pattern of the sky. Distance information is gained either through step counting or through optical flow.

Our goal is to model parts of the insect's navigation behavior using robots and computer simulations. Namely, we apply the synthetic methodology to gain additional insights into the navigation behavior of bees and ants. Inspired by the insect's navigation system, we have developed mechanisms for path integration and visual piloting that were successfully employed on the mobile robots Sahabot 1 and Sahabot 2.
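
To make the path-integration bookkeeping concrete, here is a minimal sketch (names and structure are illustrative, not the Sahabot implementation): each movement segment, given as a compass heading and a distance estimate, is added to a running position estimate, from which the home vector can be read off at any time.

    import math

    class PathIntegrator:
        """Minimal home-vector bookkeeping: each movement segment,
        given as a compass heading and a distance estimate, is added
        to a running position estimate relative to the nest."""

        def __init__(self):
            self.x = 0.0  # east of the nest
            self.y = 0.0  # north of the nest

        def step(self, heading_rad, distance):
            """Integrate one segment of the outbound journey."""
            self.x += distance * math.cos(heading_rad)
            self.y += distance * math.sin(heading_rad)

        def home_vector(self):
            """Direction (rad) and length of the straight path back home."""
            return math.atan2(-self.y, -self.x), math.hypot(self.x, self.y)

However tortuous the outbound path, home_vector() always yields the direct bearing and distance back to the start, which is precisely the quantity the ant is assumed to maintain.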

As (almost) always, there is some good and some bad news. On the one hand, the results obtained so far provide support for the underlying biological models. Moreover, by taking the parsimonious navigation strategies of insects as a guideline, we have developed computationally cheap navigation methods for mobile robots. On the other hand, this work unveiled further questions that have to be addressed and gave us a feeling of how complex the task of modeling the whole insect might be.

Previous, current and future work

a) An embedded neural model of the insect's polarized light compass: Our previous work with the polarization vision system focused on the early processing stages of polarized light information as well as on testing the validity of different hypotheses about the mechanisms the insect might be using to extract compass information from the sky. This was an exploration stage, and it offered us an existence proof that such a system can successfully operate in the real world. So far, however, we have not tackled the problem of how these mechanisms might be implemented neurally. In this project we are currently developing a neural model of the polarized light compass that takes into account existing neurophysiological as well as behavioral data. We will test the model against recordings of real polarization-sensitive neurons (POL-neurons) found in insects. The model will later be embedded in the mobile robot Sahabot 2 and tested under realistic conditions outdoors. Important questions to be investigated are the influence of changing skylight conditions (e.g., clouds or haze) and the performance of such a system in cluttered natural environments, such as forests, where only part of the sky is available to the animals (a full view of the sky is normally occluded by vegetation and other objects). The experiments we are currently performing include simultaneous recordings of the polarization-sensitive sensors, the neural activity of the simulated polarized light compass, the behavior of the robot, and the current environmental conditions (the view of the sky as seen by a CCD camera built into the robot).
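
For orientation, the following sketch shows one conventional way a compass bearing could be read out from three polarization-opponent (POL-OP) units with analyzer orientations of 0°, 60° and 120°, the arrangement used on Sahabot 2 (Lambrinos et al., 2000). Each unit is idealized as the log-ratio of two photodiodes behind crossed polarizers, giving a response that varies sinusoidally with the e-vector orientation; the phase of that modulation is then recovered with a single-frequency fit. This is a caricature, not the neural model under development: it ignores intensity normalization, the degree of polarization, and the resolution of the remaining 180° ambiguity (which the robot's ambient light sensors can help resolve).

    import math

    # Analyzer orientations of the three POL-OP units (radians).
    ANALYZER_ANGLES = (0.0, math.pi / 3, 2 * math.pi / 3)

    def pol_op_response(i_parallel, i_crossed):
        """Idealized polarization-opponent signal: log-ratio of the two
        photodiodes behind crossed polarizers."""
        return math.log(i_parallel / i_crossed)

    def evector_orientation(responses):
        """Recover the e-vector orientation (modulo 180 degrees) from the
        three POL-OP responses, each assumed to vary as
        cos(2 * (phi - analyzer_angle)), via a one-frequency Fourier fit."""
        c = sum(r * math.cos(2 * a) for r, a in zip(responses, ANALYZER_ANGLES))
        s = sum(r * math.sin(2 * a) for r, a in zip(responses, ANALYZER_ANGLES))
        return 0.5 * math.atan2(s, c)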

Figure 1. The Saharan ant Cataglyphis

Figure 2. The Saharan robot Sahabot 2

This work is important for many reasons; let us mention the two most important ones. First, recordings from real polarization-sensitive neurons in freely running insects under real-world conditions (e.g., a cloudy sky or a view occluded by trees) are currently very difficult, if not impossible, mainly due to technological limitations. By testing the model compass under these conditions we provide a way to bridge this gap between neurophysiology and behavior. Second, by embedding the neural model of the polarized light compass in the robot we will obtain a complete and robust polarization-based orientation system for robotic applications.

b) Optical flow: In our path integration experiments we used proprioceptive information (wheel encoders) to estimate the distance traveled by the agent. An alternative way to gain this information is to use optical flow. Recent experiments with bees [Srinivasan et al., 1997a] provide evidence that the insect might be using optical flow induced by egomotion to estimate the distance traveled. A technical solution to the visual odometer problem has been proposed by [Srinivasan et al., 1997b]. Their odometer was based on an image interpolation algorithm rather than optical flow. Moreover, the visual system of their robot consisted of two panoramic cameras mounted on the front and rear parts of the robot, which limited the biological plausibility of the system and made it less practical for robotic applications.

We will test this hypothesis on our mobile robot Sahabot 2, which is already equipped with panoramic vision. In a first stage, we will develop a visual odometer that uses the optical flow in the lower part of the visual field to estimate distance in a simple setup where the robot only has to travel a certain distance. The real distance traveled will be recorded manually and compared with the distance estimated by the visual odometer. Based on the insights gained from these experiments, we will develop a model that fuses optical flow (visual odometer) with proprioceptive information (wheel encoders) to provide more reliable distance cues. In a later stage we will embed this model in the path integration system and perform systematic experiments on the complete system.
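
As a rough illustration of such a visual odometer, the sketch below integrates the lateral flow of ground texture seen at a fixed depression angle below the horizon: for a ground point viewed roughly perpendicular to the direction of travel at slant range R, an image shift of w radians corresponds to a ground displacement of about w * R. The flat-ground geometry, the per-frame flow input and the naive weighted fusion with the wheel encoders are assumptions made for the sketch, not the model to be developed.

    import math

    class FlowOdometer:
        """Accumulate distance from the lateral image flow of ground
        texture, assuming flat ground and a known camera height."""

        def __init__(self, camera_height, depression_angle):
            # Slant range to the observed ground point (flat-ground model).
            self.slant_range = camera_height / math.sin(depression_angle)
            self.distance = 0.0

        def update(self, flow_rad_per_frame):
            """Add the ground displacement implied by one frame's flow."""
            self.distance += abs(flow_rad_per_frame) * self.slant_range

    def fuse_distances(d_visual, d_wheels, w_visual=0.5):
        """Placeholder fusion of the two cues as a weighted average."""
        return w_visual * d_visual + (1.0 - w_visual) * d_wheels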

An advantage of optical flow is that it can be evaluated in different parts of the visual field for different purposes. Flow in the periphery, i.e., lateral flow as well as the motion of the ground, can be used to estimate speed or distance covered, whereas flow in the frontal part of the visual field can be used for detecting objects and avoiding obstacles. So far, we have not used obstacle avoidance in our robot experiments, mainly because most experiments were performed outdoors in an empty experimental arena. Performing experiments indoors or in cluttered outdoor environments will require implementing robust obstacle avoidance mechanisms. We will draw on existing work (e.g., Franceschini et al., 1992) and develop obstacle avoidance mechanisms that make use of the frontal part of the visual field obtained from the panoramic camera. This work will be carried out in coordination with the flying robots project.
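
One simple way to exploit the frontal flow, sketched below purely for illustration (it is not Franceschini's elementary-motion-detector scheme, nor necessarily the mechanism we will implement), is to estimate the time to contact from image looming: for an object whose angular size theta grows at rate dtheta/dt, the time to contact is approximately theta / (dtheta/dt), and the robot can turn away from the side that would be reached first. The threshold and the left/right split of the flow field are placeholders.

    def time_to_contact(angular_size, expansion_rate):
        """Tau from looming: angular size divided by its growth rate
        (small-angle approximation)."""
        if expansion_rate <= 0.0:
            return float("inf")  # not approaching
        return angular_size / expansion_rate

    def avoidance_turn(tau_left, tau_right, tau_threshold=1.0):
        """Return a steering command: 0 to keep heading, +1/-1 to turn
        away from the side with the smaller time to contact."""
        if min(tau_left, tau_right) > tau_threshold:
            return 0.0
        return 1.0 if tau_left < tau_right else -1.0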

c) Long-range navigation and route learning: During the course of the project we have been investigating the use of visual landmarks in navigation. We have performed experiments using a biological model of visual landmark navigation, the so-called 'snapshot' model, and we have proposed an alternative, more parsimonious model, the average landmark vector (ALV) model. Both models are based on the idea of storing a single snapshot (either a complete panoramic image or a compressed representation in the form of an average landmark vector) at the goal position, which can be used on subsequent visits to pinpoint the goal by matching the stored snapshot with the current view. This method works well when the agent is close to the goal, but it is no longer reliable when the agent is far away, because the landmark arrangement in the current image may be very different from that obtained near the goal. It has been suggested [Wehner et al., 1996] that long-range navigation might be accomplished by taking a sequence of snapshots en route and following the right route by "replaying" this snapshot album, i.e., by sequentially matching the snapshots. Recent experiments with wood ants [Judd and Collett, 1998] and desert ants [Collett et al., 1998] provide evidence for this hypothesis (for a review see Srinivasan, 1998). For example, it has been observed that on a familiar route, when Cataglyphis ants can use visual landmarks to steer their course, they adopt a fixed path consisting of several segments pointing in different directions. Such multi-segment trajectories might be composed of stored local movement vectors associated with landmarks (or snapshots) that are recalled at the appropriate place.
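
To recall how compact the ALV representation is, here is a minimal sketch, assuming that landmark bearings have already been segmented from the panoramic image and rotated into a compass-fixed frame: the ALV is the mean of the unit vectors pointing at the visible landmarks, and the home vector is, up to a gain, the difference between the current ALV and the one stored at the goal.

    import math

    def average_landmark_vector(bearings):
        """Mean of the unit vectors pointing at the visible landmarks
        (bearings in radians, in a compass-fixed frame)."""
        n = len(bearings)
        return (sum(math.cos(b) for b in bearings) / n,
                sum(math.sin(b) for b in bearings) / n)

    def homing_vector(current_alv, stored_alv):
        """ALV homing: move along the difference between the current
        ALV and the ALV stored at the goal."""
        return (current_alv[0] - stored_alv[0],
                current_alv[1] - stored_alv[1])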

We are currently developing a model of this route-following behavior based on our previous work on path integration and visual piloting. Multiple snapshots taken at different locations en route to the goal, together with path integration information (direction and distance between the locations), will be used to enable the learning of specific routes in the environment. We will try to address the following questions:

When is the generation of a new snapshot necessary? This relates to the kind of measure that can be used to trigger the generation and storage of a new snapshot. Possible candidates are the discrepancy between the current view and the last stored snapshot (if it exceeds a certain threshold, a new snapshot is taken) and the physical distance traveled since the location where the last snapshot was taken (a sketch of such a triggering rule follows these questions).

What is contained in the insect's neural snapshots? Is it a complete panoramic image, or a partial view centered around 'important' landmarks, as has recently been postulated? And, related to this, how is 'important' defined, i.e., by what criteria are landmarks selected (e.g., visual features, distance from the goal, etc.)?

How does path integration interact with the landmark navigation system? Do the two inhibit each other, that is, does the agent use only one of them at a time, depending on what kind of information is available, or do they work in parallel?
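
Returning to the first of these questions, the sketch below implements one hypothetical triggering rule: a new snapshot, together with the local path-integration vector accumulated since the previous one, is stored whenever the view discrepancy or the distance traveled exceeds a threshold. Both thresholds and the discrepancy measure are placeholders to be determined experimentally, not parameters of the biological model.

    import math

    class RouteLearner:
        """Store (snapshot, local vector) pairs along a route, triggered
        by view discrepancy or by distance traveled (placeholder rule)."""

        def __init__(self, view_threshold=0.3, distance_threshold=2.0):
            self.view_threshold = view_threshold
            self.distance_threshold = distance_threshold
            self.route = []            # list of (snapshot, (dx, dy)) pairs
            self.dx = self.dy = 0.0    # path integration since last snapshot

        def update(self, current_view, heading_rad, step_distance, discrepancy):
            """'discrepancy' compares current_view with the last stored
            snapshot (e.g., a pixel-wise image difference), computed elsewhere."""
            self.dx += step_distance * math.cos(heading_rad)
            self.dy += step_distance * math.sin(heading_rad)
            if (discrepancy > self.view_threshold
                    or math.hypot(self.dx, self.dy) > self.distance_threshold):
                self.route.append((current_view, (self.dx, self.dy)))
                self.dx = self.dy = 0.0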

The model will first be tested in realistic computer simulations (using a 3D simulator capable of rendering complex scenes) and will then be embedded in the mobile robot Sahabot 1 and tested in both outdoor and office environments. This project is carried out in cooperation with the route learning in ants and robots project.

Links

More details about the Sahabot project can be found at the AILab and at Neurobiology.

References

Srinivasan, M. V., Zhang, S. W., and Bidwell, N. J. (1997a). Visually mediated odometry in honeybees. Journal of Experimental Biology, 200:2513-2522.

Srinivasan, M. V., Chahl, J. S., and Zhang, S. W. (1997b). Robot navigation by visual dead-reckoning: inspiration from insects. International Journal of Pattern Recognition and Artificial Intelligence, 11(1):35-47.

Wehner, R., Michel, B., and Antonsen, P. (1996). Visual navigation in insects: Coupling of egocentric and geocentric information. Journal of Experimental Biology, 199:129-140.

Judd, S. P. D., and Collett, T. S. (1998). Multiple stored views and landmark guidance in ants. Nature, 392:710-714.

Collett, M., Collett, T. S., Bisch, S., and Wehner, R. (1998). Local and global vectors in desert ant navigation. Nature, 394:269-272.

Srinivasan, M. V. (1998). Ants match as they march. Nature, 392:660-661.

Franceschini, N., Pichon, J.-M., and Blanes, C. (1992). From insect vision to robot vision. Philosophical Transactions of the Royal Society of London B, 337:283-294.

Netter, T., and Franceschini, N. (?). Towards nap-of-the-earth flight using optical flow. ?

Lambrinos, D., Möller, R., Labhart, T., Pfeifer, R., and Wehner, R. (2000). A mobile robot employing insect strategies for navigation. Robotics and Autonomous Systems, 30:39-64.

Sahabot Technical Data

  • size: 0.42 x 0.42 x 0.28 m
  • weight: about 8 kg
  • on-board PC104 computer
  • on-board Intel 196KD 16-bit, 20 MHz microcontroller for low-level sensor/actuator processing
  • ethernet radio link
  • two 27 Watt DC motors
  • 35:1 reduction gear
  • 4 driven wheels, differential steering (option of replacing two of these wheels with freely spinning caster wheels)
  • two batteries (12V, 3.2 Ah)
  • four fans for temperature control

Sahabot Special Sensors

  • 360 degree panoramic digital camera with conical mirror
  • 10 ambient light sensors
  • 6 UV polarized light sensors
  • electronic compass
  • temperature sensors