SLAM (Simultaneous Localization and Mapping)

1. Introduction to SLAM

SLAM stands for Simultaneous Localization and Mapping.

SLAM problems arise when the robot does not have access to a map of the environment, nor does it know its own pose. Instead, all it is given are measurements z1:t and controls u1:t. The term "simultaneous localization and mapping" describes the resulting problem: In SLAM, the robot acquires a map of its environment while simultaneously localizing itself relative to this map.

The inputs to the SLAM problem are the controls u_{1:t} and the measurements z_{1:t}; the desired outputs are the map m and the robot path x_{1:t}.

References:
Probabilistic Robotics, by Sebastian Thrun, 10 Simultaneous Localization and Mapping
Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms
Informal notes on visual SLAM (blog, in Chinese): http://www.cnblogs.com/gaoxiang12/p/3695962.html

1.1. Online SLAM and Full SLAM

Online SLAM estimates only the map and the robot's current pose, i.e. it solves:
p(x_t, m \mid z_{1:t}, u_{1:t})

Full SLAM estimates the map and the robot's entire path, i.e. it solves:
p(x_{1:t}, m \mid z_{1:t}, u_{1:t})
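
The two are related: the online SLAM posterior is obtained from the full SLAM posterior by integrating out the past poses x_1, ..., x_{t-1} (see Probabilistic Robotics, Chapter 10):

p(x_t, m \mid z_{1:t}, u_{1:t}) = \int \cdots \int p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) \, dx_1 \cdots dx_{t-1}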

2. EKF SLAM

EKF SLAM was the de facto method for SLAM until the introduction of FastSLAM.

EKF SLAM is similar to EKF Localization; the difference is that, besides the robot pose, EKF SLAM also estimates the coordinates of the landmarks.

Recall that estimating with a Kalman filter requires a "theoretical estimate" (used in the prediction step) and a "measurement" (used in the update step). The Kalman filter can be viewed as blending the theoretical estimate with the measurement to obtain an estimate closer to the true value.

EKF SLAM estimates the robot pose and the landmark coordinates simultaneously. For the robot pose, the theoretical estimate comes from the robot's motion model; for a landmark, the theoretical estimate is the predicted range-bearing observation given by the following expression (taken from SLAM for Dummies):
\begin{bmatrix} \text{range} \\ \text{bearing} \end{bmatrix} = \begin{bmatrix} \sqrt{(\lambda_x - x)^2 + (\lambda_y - y)^2} + v_r \\ \operatorname{atan2}(\lambda_y - y,\; \lambda_x - x) - \theta + v_\theta \end{bmatrix}
where λ_x, λ_y are the landmark coordinates, x, y are the estimated robot coordinates, θ is the robot bearing (heading direction), and v_r, v_θ are the range and bearing measurement noise terms.
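
A minimal Python sketch of this predicted (noise-free) range-bearing measurement; the function name and the angle wrapping are illustrative assumptions, not taken from SLAM for Dummies:

  import math

  def predict_measurement(x, y, theta, lx, ly):
      """Predicted range and bearing of landmark (lx, ly) seen from robot pose (x, y, theta)."""
      dx, dy = lx - x, ly - y
      expected_range = math.hypot(dx, dy)               # sqrt((lx - x)^2 + (ly - y)^2)
      bearing = math.atan2(dy, dx) - theta              # landmark angle relative to the heading
      bearing = (bearing + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi)
      return expected_range, bearing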

References:
SLAM - Simultaneous Localization and Mapping, by Wolfram Burgard, Diego Tipaldi
SLAM for Dummies
Simultaneous localization and mapping with the extended Kalman filter

2.1. Data Association

The sensor returns landmark measurements, but which landmark does each measurement correspond to? Answering this question is the task of data association.

Data association is the process of associating uncertain measurements to known tracks.
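
A common baseline is maximum-likelihood (nearest-neighbor) association with a Mahalanobis-distance gate. The sketch below is illustrative only, assuming a predicted measurement z_hat and innovation covariance S are already available for each known landmark:

  import numpy as np

  def associate(z, predictions, gate=9.21):
      """Nearest-neighbor data association.
      z           : observed measurement, shape (2,)
      predictions : list of (landmark_id, z_hat, S), z_hat shape (2,), S shape (2, 2)
      gate        : chi-square threshold (9.21 ~ 99% for 2 degrees of freedom)
      Returns the best landmark_id, or None if no known landmark passes the gate."""
      best_id, best_d2 = None, gate
      for landmark_id, z_hat, S in predictions:
          nu = z - z_hat                                 # innovation
          d2 = float(nu @ np.linalg.inv(S) @ nu)         # squared Mahalanobis distance
          if d2 < best_d2:
              best_id, best_d2 = landmark_id, d2
      return best_id

Measurements that fall outside the gate of every known landmark are typically treated as new landmarks.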

Reference:
http://ais.informatik.uni-freiburg.de/teaching/ws10/robotics2/pdfs/rob2-15-dataassociation.pdf

2.2. EKF SLAM: Complexity

Cost per step: quadratic in the number of landmarks n, i.e. O(n^2)
Total cost to build a map with n landmarks: O(n^3)
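
The quadratic per-step cost comes from the size of the joint covariance: with a 3-dimensional pose and n two-dimensional landmarks the state has dimension 3 + 2n, and each EKF update touches the full covariance matrix. Assuming on the order of n steps are needed to observe all n landmarks, the totals follow:

x \in \mathbb{R}^{3+2n}, \quad \Sigma \in \mathbb{R}^{(3+2n) \times (3+2n)} \;\Rightarrow\; O(n^2) \text{ per update}, \qquad O(n) \text{ updates} \;\Rightarrow\; O(n^3) \text{ total}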

Reference: http://ais.informatik.uni-freiburg.de/teaching/ss09/robotics/slides/j_slam.pdf

3. FastSLAM

If a particle filter is applied directly to the SLAM problem, the map representation has far too many dimensions and the computation becomes infeasible. For example, a 20 m × 10 m map at a resolution of 5 cm × 5 cm has 400 × 200 = 80,000 cells, so there are 2^{80,000} possible maps! Reference: http://people.eecs.berkeley.edu/~pabbeel/cs287-fa12/slides/RBPF.pdf

The FastSLAM algorithm is an application of the Rao-Blackwellised particle filter (RBPF) to the robot SLAM problem.

Rao-Blackwellised Particle Filter (RBPF, Rao-Blackwellized Particle Filtering)
Sampling in a high-dimensional state space makes a plain particle filter very inefficient. For some state-space models, however, the posterior of part of the state conditioned on the remaining part can be obtained analytically; for example, states that are conditionally linear-Gaussian can be estimated with a Kalman filter, while a particle filter handles the remaining states. The resulting hybrid filter reduces the dimension of the space the particle filter has to sample, and the variance of the RBPF importance weights is far lower than that of plain SIR, which provides the theoretical basis for solving SLAM with particle filters. Montemerlo et al. first applied the Rao-Blackwellised particle filter to robot SLAM in 2002 and named the resulting algorithm FastSLAM.

Translated from: http://baike.baidu.com/view/2238505.htm

The Rao-Blackwellized Particle Filter (RBPF) is a combination of the particle filter and the Kalman filter, where each particle has a Kalman filter associated with it.

RBPF references:
Rao-Blackwellized Particle Filters and Localization: http://www2.informatik.uni-freiburg.de/~grisetti/teaching/ls-slam/lectures/pdf/ls-slam-06-rbpfloc.pdf
Thomas Schön, Fredrik Gustafsson, and Per-Johan Nordlund. “Marginalized Particle Filters for Mixed Linear/Nonlinear State-Space Models”. IEEE Transactions on Signal Processing, 53(7):2279-2289, Jul. 2005: http://user.it.uu.se/~thosc112/pubpdf/schongn2005.pdf

FastSLAM references:
Probabilistic Robotics, by Sebastian Thrun, 13 The FastSLAM Algorithm
http://ais.informatik.uni-freiburg.de/teaching/ss15/robotics/slides/14-slam-fastslam.pdf

3.1. The Basic Idea of FastSLAM (a Hybrid Filter)

FastSLAM is based on the following factorization of the SLAM posterior:
p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) = p(x_{1:t} \mid z_{1:t}, u_{1:t}) \, p(m \mid x_{1:t}, z_{1:t})

This factorization is derived as follows (note: the step p(m \mid x_{1:t}, z_{1:t}, u_{1:t}) = p(m \mid x_{1:t}, z_{1:t}) holds because, when estimating the map, the path x_{1:t} can be taken to already contain all the information in u_{1:t}):
p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) \overset{\text{chain rule}}{=} p(x_{1:t} \mid z_{1:t}, u_{1:t}) \, p(m \mid x_{1:t}, z_{1:t}, u_{1:t}) = p(x_{1:t} \mid z_{1:t}, u_{1:t}) \, p(m \mid x_{1:t}, z_{1:t})

The core idea of Feature-based FastSLAM is to use a particle filter to estimate x_{1:t} (each particle represents one candidate path) and, for each particle (candidate path), to use Extended Kalman Filters to estimate the map m. In other words, the particle filter solves p(x_{1:t} \mid z_{1:t}, u_{1:t}) and Extended Kalman Filters solve p(m \mid x_{1:t}, z_{1:t}). In Feature-based FastSLAM, the map m is represented by N landmarks. Feature-based FastSLAM is illustrated in Figure 1.

robot_slam_fastslam.png

Figure 1: Feature-based FastSLAM

Suppose there are M particles and the map associated with each particle contains N landmarks, each estimated by its own Extended Kalman Filter. Feature-based FastSLAM therefore maintains M × N Extended Kalman Filters in total.
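
A minimal sketch of the per-particle data structure this implies (class and variable names are illustrative; a full implementation also needs motion sampling, the per-landmark EKF update, weighting, and resampling):

  import numpy as np

  class LandmarkEKF:
      """2-D landmark estimate: mean and covariance of one small EKF."""
      def __init__(self, mean, cov):
          self.mean = np.asarray(mean, dtype=float)   # (2,) landmark position
          self.cov = np.asarray(cov, dtype=float)     # (2, 2) position covariance

  class Particle:
      """One FastSLAM particle: a pose hypothesis plus its own landmark EKFs."""
      def __init__(self, pose):
          self.pose = np.asarray(pose, dtype=float)   # (x, y, theta)
          self.weight = 1.0
          self.landmarks = {}                         # landmark_id -> LandmarkEKF

  # M particles, each maintaining its own map of N landmark EKFs,
  # gives M * N small EKFs in total.
  particles = [Particle(pose=np.zeros(3)) for _ in range(100)]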

3.2. Grid-based FastSLAM

Feature-based FastSLAM = "Particle Filter" + "Extended Kalman Filter"
Grid-based FastSLAM = "Particle Filter" + "Occupancy Grid Mapping (Mapping With Known Poses)"
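
In Grid-based FastSLAM each particle carries its own occupancy grid and updates it with the particle's pose treated as known, typically via the standard log-odds rule. A minimal sketch (the grid size and the inverse-sensor-model increments below are assumptions for illustration):

  import numpy as np

  L_OCC, L_FREE = 0.85, -0.4      # assumed log-odds increments of the inverse sensor model

  def update_beam(log_odds, hit_cell, free_cells):
      """Log-odds occupancy update for one laser beam, given a known particle pose.
      hit_cell   : (row, col) where the beam ended (occupied evidence)
      free_cells : cells the beam passed through (free evidence)"""
      for r, c in free_cells:
          log_odds[r, c] += L_FREE
      r, c = hit_cell
      log_odds[r, c] += L_OCC
      return log_odds

  # one grid per particle, e.g. 400 x 200 cells for the 20 m x 10 m / 5 cm example above
  grid = np.zeros((400, 200))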

References:
Probabilistic Robotics, by Sebastian Thrun, 13.10 Grid-based FastSLAM
http://ais.informatik.uni-freiburg.de/teaching/ss15/robotics/slides/15-slam-gridrbpf.pdf

3.3. GMapping (Implements and Improves Grid-based FastSLAM 2.0)

Using GMapping, you can create an occupancy grid map from laser and pose data collected by a mobile robot.

GMapping implements the Grid-based FastSLAM 2.0 algorithm, with two main improvements:
(1) Improved proposal distribution
(2) Adaptive resampling
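
Adaptive resampling is based on the effective sample size N_eff of the normalized importance weights; GMapping resamples only when N_eff drops below a threshold (N/2 in the papers below). A minimal sketch of that test:

  import numpy as np

  def effective_sample_size(weights):
      """N_eff = 1 / sum(w_i^2) for normalized weights w_i."""
      w = np.asarray(weights, dtype=float)
      w = w / w.sum()
      return 1.0 / np.sum(w ** 2)

  def should_resample(weights, threshold_ratio=0.5):
      """Resample only when N_eff falls below threshold_ratio * N (N/2 by default)."""
      return effective_sample_size(weights) < threshold_ratio * len(weights)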

References:
http://irawiki.disco.unimib.it/irawiki/index.php/ROS_GMapping
http://people.eecs.berkeley.edu/~pabbeel/cs287-fa11/slides/gmapping.pdf

3.3.1. GMapping-related Papers

Giorgio Grisetti, Cyrill Stachniss, and Wolfram Burgard: Improved Techniques for Grid Mapping with Rao-Blackwellized Particle Filters, IEEE Transactions on Robotics, Volume 23, pages 34-46, 2007 (pdf link)
Giorgio Grisetti, Cyrill Stachniss, and Wolfram Burgard: Improving Grid-based SLAM with Rao-Blackwellized Particle Filters by Adaptive Proposals and Selective Resampling, In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), 2005 (pdf link)

4. Graph-based SLAM

The SLAM problem can be reformulated as a graph optimization problem.
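
Nodes of the graph are robot poses (and possibly landmarks), and edges encode relative constraints derived from odometry and observations; the back end then solves a nonlinear least-squares problem of the form

x^{*} = \operatorname*{argmin}_{x} \sum_{(i,j)} e_{ij}(x_i, x_j)^{\top} \, \Omega_{ij} \, e_{ij}(x_i, x_j)

where e_{ij} is the error between the measured and the predicted relative transformation between nodes i and j, and \Omega_{ij} is the information matrix of that constraint (see the tutorial by Grisetti et al. listed below).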

References:
Probabilistic Robotics, by Sebastian Thrun, 11 The GraphSLAM Algorithm
Graph-based SLAM using Least Squares: http://ais.informatik.uni-freiburg.de/teaching/ws11/robotics2/pdfs/rob2-09-least-squares-slam.pdf
A Tutorial on Graph-Based SLAM: http://www2.informatik.uni-freiburg.de/~stachnis/pdf/grisetti10titsmag.pdf
SLAM Back-End, by Giorgio Grisetti: http://kaspar.informatik.uni-freiburg.de/~kitutorial/pdfs/ki-pose-slam.pdf
g2o: A general Framework for (Hyper) Graph Optimization: https://svn.openslam.org/data/svn/g2o/trunk/g2o/doc/g2o.pdf

5. VSLAM (Visual SLAM)

Visual SLAM (vision-based SLAM) uses camera images to build a map of a new environment while localizing the robot within it.

Reference: http://frc.ri.cmu.edu/~kaess/vslam_cvpr14/

Author: cig01

Created: <2016-06-26 Sun>

Last updated: <2017-12-13 Wed>
