Abstract:
In this paper we propose a novel method to solve the kidnapped
robot localization problem. A mobile robot plans its sensing actions
for localization using inference on a learned Bayesian network.
Concretely, we represent the contextual relations among the local
sensing results, the actions, and the global localization belief using a
Bayesian network. The Bayesian network structure is learned from
complete environment information data using the K2 algorithm combined with a GA.
The mobile robot actively plans its sensing actions to obtain sensing
information by taking into account the trade-off between the global
localization belief and the sensing cost. We have validated
the learning and planning algorithms by simulation
experiments in an office environment.
Keywords: Bayesian network, Structure learning, Sensor planning, Mobile robot,
Localization
Sensing Planning for Mobile Robot Localization
- Use of Learning Bayesian Network Structure -
*Hongjun Zhou (Chuo University), Shigeyuki Sakane (Chuo University)
We propose a novel localization method which models
the local sensing results, the global localization, and the actions in a
Bayesian network (BN) [1]. The mobile robot plans
efficient sensing actions to localize itself using this BN.
The BN predicts the possible actions and the sensor data
associated with these actions, and then uses these predicted actions and
sensor data to evaluate the actions by a criterion of sensing efficiency
(the balance between the global localization belief and the sensing cost). The
structure and the parameters (CPTs) of the BN are learned from
the environment information data without heuristics.
We performed simulation experiments in an office
environment (Fig.5). In order to obtain the
complete environment information, the mobile robot must traverse all of
the corridors and intersections. To solve this problem, we employ the
framework of the Chinese postman problem (Fig.1).
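As a sketch of this step, the length of a shortest closed walk covering every corridor can be computed for a toy graph as follows. The node names and corridor lengths here are illustrative, not the paper's actual office map, and the brute-force pairing of odd-degree nodes stands in for the minimum-weight matching a full Chinese postman solver would use.

```python
from itertools import permutations

# Toy office graph (illustrative, not the paper's map): nodes are
# intersections, edges are corridors with their lengths.
EDGES = [("A", "B", 2), ("B", "C", 2), ("A", "C", 3), ("C", "D", 1)]

def adjacency(edges):
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    return adj

def shortest(adj, src):
    # Dijkstra; a simple O(V^2) scan suffices for graphs this small.
    dist, todo = {src: 0}, set(adj)
    while todo:
        u = min((n for n in todo if n in dist), key=dist.get, default=None)
        if u is None:
            break
        todo.remove(u)
        for v, w in adj[u]:
            dist[v] = min(dist.get(v, float("inf")), dist[u] + w)
    return dist

def cpp_tour_length(edges):
    """Chinese postman tour length: total edge weight plus a
    minimum-weight pairing of the odd-degree intersections
    by shortest paths (duplicated corridors)."""
    adj = adjacency(edges)
    odd = [n for n in adj if len(adj[n]) % 2 == 1]
    base = sum(w for _, _, w in edges)
    if not odd:
        return base
    best = float("inf")
    for perm in permutations(odd):  # brute-force pairing; fine for few odd nodes
        pairs = zip(perm[::2], perm[1::2])
        best = min(best, sum(shortest(adj, a)[b] for a, b in pairs))
    return base + best

print(cpp_tour_length(EDGES))  # 8 units of corridor + 1 duplicated edge = 9
```

Here intersections C and D have odd degree, so the corridor C-D must be traversed twice.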
Figure 1:
(left) A graph representing an office.
(right) A path as a solution of Chinese postman problem.
The mobile robot is driven by a potential field method in the corridors.
A laser range sensor is assumed, from which we obtain
180-direction distance information in front of the
mobile robot. The mobile robot is guided at each intersection by a
command from a command list which was generated
beforehand by the next-node algorithm [2]. The labels
of the intersections (Fig.3)
are given by a human when the mobile
robot arrives at each intersection. The landmarks (hollows in our
experiments) are detected by
filtering the range data on the two sides of the robot. To
simplify the problem, uncertainty of the local distance,
i.e., the distance between two neighboring landmarks or between
an intersection and its neighboring landmark,
is ignored in the current system.
The geometrical features of the intersections are recognized
by a supervised learning algorithm, support vector
machines (SVMs).
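A minimal sketch of this recognition step, assuming scikit-learn is available: each 180-direction range scan is summarized here by two made-up features (number of corridor openings and an auxiliary scan statistic), and a linear SVM is trained on toy examples of three intersection types. The features, labels, and training data are our own illustration, not the paper's.

```python
from sklearn.svm import SVC

# Illustrative training set (not the paper's data): each row summarizes a
# 180-direction range scan as (number of openings, auxiliary scan statistic).
X = [[2, 60], [2, 70], [3, 60], [3, 70], [4, 60], [4, 70]]
y = ["L", "L", "T", "T", "cross", "cross"]  # intersection types

# Linear SVM; scikit-learn handles multiclass via one-vs-one voting.
clf = SVC(kernel="linear").fit(X, y)

# A new scan with three openings should be recognized as a T intersection.
print(clf.predict([[3, 65]])[0])  # T
```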
We define the environment information between two neighboring
intersections to be an information segment (Sg). One
segment involves (1) the two intersection labels, (2) the landmark information
between the two intersections, (3) the geometrical features of the
intersections, read as the mobile robot enters each
one, and (4) the action by which the mobile robot enters the segment. The environment data
of two neighboring Sg are recorded as one data case, and
the recorded sensing data are used to learn the BN's
structure and parameters.
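The record layout described above can be sketched as a pair of data classes; the field names are our own, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Segment:
    """One information segment (Sg) between two neighboring intersections."""
    entry_label: str                     # label of the entry intersection
    exit_label: str                      # label of the exit intersection
    landmarks: List[Tuple[str, float]]   # (geometrical feature, local distance)
    entry_feature: str                   # feature sensed when entering the Sg
    exit_feature: str                    # feature sensed when leaving the Sg
    action: str                          # how the robot entered: forward/left/right

@dataclass
class DataCase:
    """One learning example: two neighboring segments."""
    sg1: Segment
    sg2: Segment

# Hypothetical example case with three hollow landmarks in total.
case = DataCase(
    Segment("A", "B", [("hollow", 1.5)], "L", "T", "forward"),
    Segment("B", "C", [("hollow", 2.0), ("hollow", 0.5)], "T", "cross", "left"),
)
print(len(case.sg1.landmarks) + len(case.sg2.landmarks))  # 3
```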
We apply a score-based search method, the
K2 algorithm [3], for structure learning of the BN.
In order to reduce the search scope, the structure search
of the K2 algorithm relies on an ordering of the nodes (i.e., the causal attributes of
an attribute should appear earlier in the ordering).
We apply a genetic algorithm (GA) to search for the best
ordering as in Ref. [4]; based on this ordering, K2 can
learn the best structure from the environment information data.
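The core of K2 is the Cooper-Herskovits score of a node given a candidate parent set; for each node, in the GA-found ordering, K2 greedily adds the predecessor that most increases this score. A minimal log-space sketch of the scoring function (variable names are ours):

```python
import math
from collections import Counter

def k2_log_score(data, child, parents, arity):
    """Cooper-Herskovits K2 log-score of one node given a parent set.
    `data` is a list of dicts mapping variable name -> discrete value;
    `arity[child]` is the number of values the child can take."""
    r = arity[child]
    counts = Counter()   # N_ijk: (parent config, child value) -> count
    totals = Counter()   # N_ij : parent config -> count
    for case in data:
        cfg = tuple(case[p] for p in parents)
        counts[(cfg, case[child])] += 1
        totals[cfg] += 1
    score = 0.0
    for cfg, n_ij in totals.items():
        # log[(r-1)! / (N_ij + r - 1)!], via lgamma(n) = log((n-1)!)
        score += math.lgamma(r) - math.lgamma(n_ij + r)
        for v in range(r):
            score += math.lgamma(counts[(cfg, v)] + 1)  # log(N_ijk!)
    return score

# Toy data where b deterministically copies a: the score should prefer
# the structure with a as parent of b.
data = [{"a": 0, "b": 0}] * 5 + [{"a": 1, "b": 1}] * 5
arity = {"a": 2, "b": 2}
with_parent = k2_log_score(data, "b", ["a"], arity)
without = k2_log_score(data, "b", [], arity)
print(with_parent > without)  # True
```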
As an example, the variables are defined as nodes of a Bayesian
network as follows. The nodes H, M, and T denote the order of the
intersections (see Footnote 1). The
intersection-label variables take 12 values. The nodes Action1
and Action2 denote the actions with which the mobile robot
enters the two segments of one case, respectively. The
action variables take the values go forward, turn left, and turn right. The
nodes Hf, Mf, and Tf are defined by the geometrical features of the
intersections, sensed when the mobile robot enters
each one. We denote the landmarks in each segment by landmark nodes; these
nodes also include the local distance information. We represent a landmark as a vector
(geometrical feature, local distance), and the
landmark variables take four values.
We also define a mediating variable by the label
of every data case.
After 100 generations of the GA search, we
obtained the best ordering. Using this ordering, the K2
algorithm generates the BN structure shown in Fig.2.
Figure 2:
The BN structure learned by K2 and the GA.
The planning system consists of three phases: (1) inference for localization, (2) prediction for sensor planning,
and (3) sensor planning for localization. Initially,
the mobile robot moves along a certain corridor. Whenever the mobile
robot obtains new sensing information, the system uses the BN
to infer the global localization belief. At an
intersection, if the gathered sensing
information is not sufficient for localization,
in other words, if the global localization belief does
not exceed a certain threshold, the system predicts the possible
actions and the associated sensing information in the Bayesian
network. Using this information, the robot selects an
optimal sensing action to perform active sensing and localize itself.
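The three-phase loop can be sketched as follows. The threshold, the candidate actions, and the predicted beliefs and costs are illustrative numbers, and `predict_belief` stands in for a query on the learned BN.

```python
# Illustrative threshold on the global localization belief.
THRESHOLD = 0.8

def plan_sensing_action(belief, candidates, predict_belief, cost):
    """Phase (3): if the current belief is too ambiguous, pick the action
    whose predicted sensing outcome gives the best trade-off between
    localization belief and sensing cost."""
    if max(belief.values()) >= THRESHOLD:
        return None  # already localized; no extra sensing needed
    return max(candidates, key=lambda a: predict_belief(a) - cost[a])

# Toy numbers: turning right is predicted to disambiguate the pose cheaply.
belief = {"A": 0.4, "B": 0.35, "C": 0.25}       # ambiguous posterior
pred = {"turn left": 0.5, "turn right": 0.9}     # predicted beliefs from the BN
cost = {"turn left": 0.3, "turn right": 0.1}     # sensing costs

best = plan_sensing_action(belief, ["turn left", "turn right"],
                           lambda a: pred[a], cost)
print(best)  # turn right
```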
In the example, the mobile robot starts from an intersection and
perceives two hollows on the two sides of the corridor. The global localization belief is inferred by
the learned BN. The probabilities of the nodes H, M,
and T are shown in Figs.4 and 5. We
find the belief of each intersection is too ambiguous to localize
the mobile robot (Fig.5), so the sensor planner runs
at that intersection. The predicted actions are turn left and turn
right; the sensor planner evaluates the global localization belief
and the sensing cost for each predicted action together with the
sensing information predicted for that action. If the
mobile robot turns left,
it cannot obtain enough sensing information for global localization until it reaches
the Tail intersection (J); if the mobile
robot turns right, it can obtain a sufficient
belief (Fig.5) at a lower sensing cost. Therefore,
the optimal action at this intersection is determined to be turn
right.
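Phase (1) of this walkthrough, the belief update, amounts to a Bayes-rule posterior over the current intersection given the observed landmarks. The priors and likelihoods below are made-up numbers, not the learned CPTs, chosen to show how similar likelihoods leave the belief ambiguous.

```python
# Toy belief update: posterior over the current intersection given the
# evidence "two hollows observed". Numbers are illustrative only.
prior = {"A": 0.5, "B": 0.3, "C": 0.2}          # P(intersection)
likelihood = {"A": 0.6, "B": 0.6, "C": 0.05}    # P(two hollows | intersection)

joint = {h: prior[h] * likelihood[h] for h in prior}
z = sum(joint.values())
belief = {h: joint[h] / z for h in joint}        # normalized posterior

# A and B keep similar posteriors (0.61 vs 0.37): the belief stays
# ambiguous, which is what triggers the sensor planner.
print(max(belief, key=belief.get))  # A
```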
Figure 3:
An example of the experiments on sensor planning for mobile robot
localization. (In the figure, the real numbers in ( ), in ( ) with a black
square, and in ( ) with a hatched square represent the probabilities of nodes T, M, and H,
respectively.)
In this paper, we have proposed a novel sensor planning method for
mobile robot localization using a Bayesian
network. The BN structure is learned from environment data using the
K2 algorithm combined with a GA.
The sensor planner predicts the
possible actions and the sensing information associated with these
actions at an ambiguous intersection, and selects the optimal
sensing action for global localization
by taking into account the trade-off between the global localization
belief and the sensing cost.
In the future, we will validate the system using a real mobile robot.
[1] F. V. Jensen, ``Bayesian Networks and Decision Graphs,'' Springer, 2001.
[2] J. Edmonds and E. L. Johnson, ``Matching, Euler Tours and the Chinese
Postman,'' Mathematical Programming, 5, pp.88-124, 1973.
[3] G. Cooper and E. Herskovits, ``A Bayesian method for the induction of
probabilistic networks from data,'' Machine Learning, 9:309-347, 1992.
[4] P. Larranaga, C. Kuijpers, R. Murga, and Y. Yurramendi, ``Learning Bayesian
network structures by searching for the best ordering with genetic
algorithms,'' IEEE Transactions on Systems, Man and Cybernetics, Vol. 26,
No. 4, pp.487-493, 1996.
Footnotes
1. H is the first intersection of two
neighboring Sg, M is the middle intersection, and T is the last one.