SE(3) constraints for robotics
2021-09-08
This document summarises some common maths used for state estimation of rigid bodies, such as in robotics.
1 Transformation parameterisation
Rigid transformations in 3 dimensions form the special Euclidean group, $SE(3)$, and can be written in homogeneous form:
$$
\begin{aligned}
\mathbf T_{4\times 4} = \begin{bmatrix}
\mathbf R_{3\times 3} & \mathbf t_{3\times 1}\\
\mathbf 0_{1\times 3} & 1
\end{bmatrix} \in SE(3).
\end{aligned}
$$
The rotation matrix $\mathbf R$ belongs to the special orthogonal group $SO(3)$, which means that it is orthogonal (its columns are unit-length and mutually orthogonal) and has determinant $+1$.
1.1 Transforming a point
An element $\mathbf T \in SE(3)$ may transform a 3D point $\mathbf p \in \mathbb R^3$:
$$
\begin{aligned}
\mathbf p_{3\times 1} &= \begin{bmatrix}x\\y\\z\end{bmatrix}.
\end{aligned}
$$
In this document we use it interchangeably with the homogeneous representation:
$$
\begin{aligned}
\mathbf p_{4\times 1} = \begin{bmatrix}x\\ y\\ z\\ 1\end{bmatrix}
\end{aligned}
$$
so that points may be transformed rigidly
$$
\begin{aligned}
\mathbf T \mathbf p \equiv \mathbf R \mathbf p + \mathbf t.
\end{aligned}
$$
1.2 The Lie algebra
Each element of $SE(3)$ is associated with an element of the corresponding Lie algebra, $\mathfrak{se}(3)$:
$$
\begin{aligned}
\bm \xi_{4\times 4} &= \begin{bmatrix}
[\bm \omega]_\times & \bm \tau\\
\mathbf 0_{1 \times 3} & 0
\end{bmatrix} \in \mathfrak{se}(3)\\
&= \begin{bmatrix}
0 & -\omega_3 & \omega_2 & \tau_1\\
\omega_3 & 0 & -\omega_1 & \tau_2\\
-\omega_2 & \omega_1 & 0 & \tau_3\\
0 & 0 & 0 & 0
\end{bmatrix} \in \mathfrak{se}(3)
\end{aligned}
$$
where $[\bm \omega]_\times$ is the skew-symmetric matrix form of the cross product by $\bm \omega$. We may also write it as a $6\times 1$ vector:
$$
\begin{aligned}
\bm \xi_{6\times 1} = \begin{bmatrix}
\bm \omega_{3\times 1}\\
\bm \tau_{3\times 1}
\end{bmatrix}.
\end{aligned}
$$
We will use the $6\times 1$ and $4\times 4$ representations interchangeably depending on context.
Note that some other textbooks put the translational part on top and the rotational part below. It doesn't matter much, but it affects the notation for things like the adjoint action and the differentials below.
1.3 The exponential map
The group $SE(3)$ and its algebra $\mathfrak{se}(3)$ are related by the exponential map:
$$
\begin{aligned}
\exp &: \mathfrak{se}(3) \rightarrow SE(3)\\
\log &: SE(3) \rightarrow \mathfrak{se}(3).
\end{aligned}
$$
The definition of $\exp$ is based on the Taylor series:
$$
\begin{aligned}
\exp(\bm \xi) = \mathbf I_{4\times 4} + \bm \xi_{4\times 4}
+ \frac{1}{2!} \bm \xi_{4\times 4}^2
+ \frac{1}{3!} \bm \xi_{4\times 4}^3 + \ldots
\end{aligned}
$$
Closed forms exist; see J. L. Blanco's report [jlblanco].
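As a concrete illustration, here is a minimal numerical sketch of the exponential map, assuming the $[\bm\omega, \bm\tau]$ ordering used in this document. It evaluates the matrix exponential directly with `scipy.linalg.expm`; the closed forms in Blanco's report are faster but equivalent.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """3x3 skew-symmetric matrix such that skew(w) @ p == np.cross(w, p)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def hat(xi):
    """Map a 6-vector [omega, tau] to its 4x4 se(3) matrix form."""
    out = np.zeros((4, 4))
    out[:3, :3] = skew(xi[:3])
    out[:3, 3] = xi[3:]
    return out

def exp_se3(xi):
    """exp: se(3) -> SE(3), returning a 4x4 homogeneous transform."""
    return expm(hat(xi))

# A small twist gives a transform close to the identity.
T = exp_se3(np.array([0.01, -0.02, 0.03, 0.1, 0.0, -0.05]))
```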
Note that $SE(3)$ is not commutative. The adjoint action relates the two orders of composition:
$$
\begin{aligned}
\mathbf T \exp(\bm \delta) = \exp\left(\operatorname{Ad}(\mathbf T) \bm \delta\right) \mathbf T.
\end{aligned}
$$
The adjoint is a $6\times 6$ matrix:
$$
\begin{aligned}
\operatorname{Ad}(\mathbf T) = \begin{bmatrix}
\mathbf R & \mathbf 0_{3\times 3}\\
[\mathbf t]_\times \mathbf R & \mathbf R
\end{bmatrix}.
\end{aligned}
$$
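A corresponding sketch of the adjoint, reusing the `skew()` and `exp_se3()` helpers above; the docstring states the identity from the equation above.

```python
def adjoint(T):
    """Ad(T), so that T @ exp_se3(d) == exp_se3(adjoint(T) @ d) @ T."""
    R, t = T[:3, :3], T[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, :3] = skew(t) @ R
    Ad[3:, 3:] = R
    return Ad
```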
1.4 Notation summary
In general,
- bold lowercase refers to vectors (e.g. translation $\mathbf t$)
- bold uppercase refers to matrices (e.g. transformation $\mathbf T$)
- non-bold lowercase refers to scalars (e.g. time $t$)
Here we define and summarise some notation for the following sections.
| Description | Notation |
| --- | --- |
| skew-symmetric cross product matrix $\begin{bmatrix}0 & -t_3 & t_2\\ t_3 & 0 & -t_1\\ -t_2 & t_1 & 0\end{bmatrix}$ | $[\mathbf t]_\times$ |
| translational part of $\mathbf T \in SE(3)$ | $\mathbf t(\mathbf T)$ |
| rotational part of $\mathbf T \in SE(3)$ | $\mathbf R(\mathbf T)$ |
| translational part of $\bm \xi \in \mathfrak{se}(3)$ | $\bm \tau(\bm \xi)$ |
| rotational part of $\bm \xi \in \mathfrak{se}(3)$ | $\bm \omega(\bm \xi)$ |
| compose $\mathbf T_1, \mathbf T_2 \in SE(3)$ | $\mathbf T_1 \mathbf T_2$ |
| exp of $\bm \xi \in \mathfrak{se}(3)$ | $\exp(\bm \xi)$ |
| log of $\mathbf T \in SE(3)$ | $\log(\mathbf T)$ |
| exp of $\bm \omega \in \mathfrak{so}(3)$ | $\exp(\bm \omega)$ |
| log of $\mathbf R \in SO(3)$ | $\log(\mathbf R)$ |
| inverse of $\bm \xi \in \mathfrak{se}(3)$ | $-\bm \xi$ |
| inverse of $\mathbf T \in SE(3)$ | $\mathbf T^{-1}$ |
| inverse of $\bm \omega \in \mathfrak{so}(3)$ | $-\bm \omega$ |
| inverse of $\mathbf R \in SO(3)$ | $\mathbf R^T$ |
| adjoint of $\mathbf T \in SE(3)$ | $\operatorname{Ad}(\mathbf T)$ |
| $i$th element of $\lbrace \bm \xi \mid \bm \xi \in \mathfrak{se}(3)\rbrace$ | $\bm \xi_i$ |
| $i$th element of $\lbrace \mathbf T \mid \mathbf T \in SE(3)\rbrace$ | $\mathbf T_i$ |
| transform $\mathbf p \in \mathbb R^3$ by $\mathbf T \in SE(3)$ | $\mathbf T \mathbf p$ |
| transform $\mathbf p \in \mathbb R^3$ by $\bm \xi \in \mathfrak{se}(3)$ | $\exp(\bm \xi) \mathbf p$ |
Table 1: Summary of notation. For arrays of elements (e.g. $\bm \xi_i$, $\mathbf T_i$), operations are applied elementwise.
2 Derivatives
Here, we only differentiate with respect to $\bm \delta \in \mathfrak{se}(3)$ around the point $\bm \delta = \mathbf 0$. In other words, we have a function $\mathbf F(\mathbf T)$ where $\mathbf T \in SE(3)$, and we would like to perturb $\mathbf T$ by a very small $\bm \delta$.
Suppose $\bm \delta = \begin{bmatrix}\bm \omega & \bm \tau\end{bmatrix}^T$.
If $\mathbf F$ has dimension $n$, then the resulting derivative is an $n\times 6$ Jacobian:
$$
\begin{aligned}
\mathbf J_{n\times 6} \equiv \left.\frac{\partial \mathbf F(\mathbf T \oplus \bm \delta)}{\partial \bm \delta}\right]_{\bm \delta = \mathbf 0} &=
\begin{bmatrix}
\frac{\partial F_1}{\partial \omega_1} &
\frac{\partial F_1}{\partial \omega_2} &
\frac{\partial F_1}{\partial \omega_3} &
\frac{\partial F_1}{\partial \tau_1} &
\frac{\partial F_1}{\partial \tau_2} &
\frac{\partial F_1}{\partial \tau_3} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
\frac{\partial F_n}{\partial \omega_1} &
\frac{\partial F_n}{\partial \omega_2} &
\frac{\partial F_n}{\partial \omega_3} &
\frac{\partial F_n}{\partial \tau_1} &
\frac{\partial F_n}{\partial \tau_2} &
\frac{\partial F_n}{\partial \tau_3} \\
\end{bmatrix}
\end{aligned}
$$
where $\mathbf F(\mathbf T \oplus \bm \delta) = \mathbf F(\exp(\bm \delta)\mathbf T)$ in the case of left-perturbations and $\mathbf F(\mathbf T\exp(\bm \delta))$ in the case of right-perturbations.
The adjoint action can be used to relate the left and right derivatives by using the chain rule.
$$
\begin{aligned}
\frac{\partial \mathbf F(\mathbf A \exp(\bm \delta) \mathbf B)}{\partial \bm \delta}
&= \frac{\partial \mathbf F(\exp(\operatorname{Ad}(\mathbf A) \bm \delta) \mathbf{AB})}{\partial \bm \delta}\\
&= \frac{\partial}{\partial \bm \delta} \mathbf F(\exp(\underbrace{\operatorname{Ad}(\mathbf A) \bm \delta}_{\bm \epsilon}) \mathbf{AB})\\
&= \frac{\partial}{\partial \bm \epsilon} \mathbf F(\exp(\bm \epsilon) \mathbf{AB}) \frac{\partial \bm \epsilon}{\partial \bm \delta}\\
&= \frac{\partial}{\partial \bm \epsilon} \mathbf F(\exp(\bm \epsilon) \mathbf{AB}) \operatorname{Ad}(\mathbf A).
\end{aligned}
$$
We also have a derivative of the log function:
$$
\begin{aligned}
\mathbf D_\text{log}(\mathbf T) &\equiv \left.\frac{\partial}{\partial \bm \delta}\right]_{\bm \delta = \mathbf 0_{6\times 1}}\log(\exp(\bm \delta) \mathbf T)\\
&= \begin{bmatrix}
\mathbf D_\text{log}(\bm \omega) &
\mathbf 0_{3\times 3} \\
-\mathbf D_\text{log}(\bm \omega) \mathbf B \mathbf D_\text{log}(\bm \omega) &
\mathbf D_\text{log}(\bm \omega)
\end{bmatrix}
\end{aligned}
$$
where
$$
\begin{aligned}
\mathbf D_\text{log}(\bm \omega) &= \mathbf I - \frac{1}{2} [\bm \omega]_\times + e_\theta [\bm \omega]_\times^2\\
\mathbf B &\equiv b_\theta [\mathbf u]_\times + c_\theta (\bm \omega \mathbf u^T + \mathbf u \bm \omega^T) + (\bm \omega^T \mathbf u) \cdot \mathbf W(\bm \omega)\\
\theta &= \|\bm \omega\|\\
a_\theta &= \frac{\sin\theta}{\theta}\\
b_\theta &= \frac{1-\cos\theta}{\theta^2}\\
c_\theta &= \frac{1-a_\theta}{\theta^2}\\
e_\theta &= \frac{b_\theta - 2c_\theta}{2a_\theta}
\end{aligned}
$$
The exact derivation, including the definitions of $\mathbf u$ and $\mathbf W$, is in Ethan Eade's report [eade].
Let $\mathbf T \in SE(3)$ and $\mathbf p \in \mathbb R^3$; then the following table lists some useful derivatives.
As you will see later, these are essentially all the derivatives you need; all other derivatives can be derived from them, often with the help of the adjoint.
| Function $\mathbf F(\bm \delta)$ | Derivative $\left.\frac{\partial \mathbf F}{\partial \bm\delta} \right]_{\bm \delta = \mathbf 0}$ |
| --- | --- |
| $\exp(\bm \delta) \mathbf T \mathbf p$ | $\begin{bmatrix}-[\mathbf T \mathbf p]_\times & \mathbf I\end{bmatrix}$ |
| $\mathbf T \exp(\bm \delta) \mathbf p$ | $\mathbf R(\mathbf T) \begin{bmatrix}-[\mathbf p]_\times & \mathbf I\end{bmatrix}$ |
| $\log(\exp(\bm \delta) \mathbf T)$ | $\mathbf D_\text{log} (\mathbf T)$ |
Table 2: Summary of derivatives.
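The rows of Table 2 are easy to validate numerically. Below is a finite-difference sanity check of the first row (a sketch, reusing the `skew()` and `exp_se3()` helpers sketched earlier; the test values are arbitrary).

```python
def transform_point(T, p):
    return T[:3, :3] @ p + T[:3, 3]

def numeric_left_jacobian(T, p, eps=1e-6):
    """Central differences of exp(delta) T p with respect to delta at 0."""
    J = np.zeros((3, 6))
    for k in range(6):
        d = np.zeros(6)
        d[k] = eps
        J[:, k] = (transform_point(exp_se3(d) @ T, p)
                   - transform_point(exp_se3(-d) @ T, p)) / (2 * eps)
    return J

T = exp_se3(np.array([0.2, -0.1, 0.3, 1.0, 2.0, -0.5]))
p = np.array([0.4, -1.2, 0.7])
J_analytic = np.hstack([-skew(transform_point(T, p)), np.eye(3)])
assert np.allclose(numeric_left_jacobian(T, p), J_analytic, atol=1e-5)
```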
3 Optimisation
Suppose we have a trajectory $\mathbf S(t) \in SE(3)$ that is a smooth curve. We would like to minimise some objective function:
$$
\begin{aligned}
\operatorname{cost}(\mathbf S) = \mathbf F(\mathbf S)^T \mathbf F(\mathbf S).
\end{aligned}
$$
This is a nonlinear least squares problem. During the optimisation, we perturb the trajectory by a small perturbation $\bm \delta(t) \in \mathfrak{se}(3)$:
$$
\begin{aligned}
\mathbf S_{\text{new}}(t) = \mathbf S(t) \oplus \bm\delta(t)
\end{aligned}
$$
where the $\oplus$ operator can either be the left-update:
$$
\begin{aligned}
\mathbf S \oplus \bm \delta \equiv \exp(\bm\delta) \mathbf S
\end{aligned}
$$
or the right-update:
$$
\begin{aligned}
\mathbf S \oplus \bm \delta \equiv \mathbf S \exp(\bm\delta).
\end{aligned}
$$
Both are valid; the choice affects the numerical properties of the problem.
In either case, we linearise the problem around $\bm \delta = \mathbf 0$.
Let the Jacobian matrix be:
$$
\begin{aligned}
\mathbf J = \left. \frac{\partial}{\partial \bm \delta} \mathbf F(\mathbf S \oplus \bm\delta) \right]_{\bm \delta = \mathbf 0}.
\end{aligned}
$$
Then we can solve for $\bm \delta$ using Gauss-Newton:
$$
\begin{aligned}
\mathbf J^T \mathbf J \bm \delta_\text{GN} = -\mathbf J^T \mathbf F
\end{aligned}
$$
or steepest descent:
$$
\begin{aligned}
\bm \delta_\text{s} = -\mathbf J^T \mathbf F
\end{aligned}
$$
or more sophisticated algorithms. In practice, we use Powell’s Dog Leg. The update is
$$
\begin{aligned}
\bm \delta = c_\text{GN} \bm \delta_\text{GN} + c_\text{s} \bm \delta_\text{s}
\end{aligned}
$$
where the scalar weights $c_\text{GN}$ and $c_\text{s}$ are chosen such that $\|\bm \delta\| \leq \Delta$, where the scalar parameter $\Delta$ is the radius of the *trust region*. When $\Delta$ is small, the problem is behaving badly and the quadratic approximation that Gauss-Newton relies on is not accurate; in this case $c_\text{GN}$ is zero, and the optimiser takes timid steps along the steepest descent direction. When $\Delta$ is large, the quadratic approximation is good and we take bigger steps in the Gauss-Newton direction.
Heuristics are used to determine when to increase Δ \Delta Δ and when to decrease it.
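A minimal sketch of this trust-region logic is below. It implements the standard Powell dog-leg step rather than the exact weighting scheme described above, and is illustrative only.

```python
import numpy as np

def dogleg_step(J, F, radius):
    """One Powell dog-leg step for the linearised problem ||F + J d||^2."""
    g = J.T @ F                                # gradient of 0.5 * ||F||^2
    gn = np.linalg.solve(J.T @ J, -g)          # Gauss-Newton step
    if np.linalg.norm(gn) <= radius:
        return gn                              # quadratic model trusted
    # Cauchy point: best step along the steepest descent direction.
    sd = -g * (g @ g) / (g @ (J.T @ (J @ g)))
    if np.linalg.norm(sd) >= radius:
        return sd * (radius / np.linalg.norm(sd))
    # Otherwise walk from the Cauchy point towards the Gauss-Newton step
    # until the trust region boundary is hit.
    diff = gn - sd
    a, b, c = diff @ diff, 2 * (sd @ diff), sd @ sd - radius**2
    s = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return sd + s * diff
```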
3.1 Optimisation under uncertainty
When uncertainty is present, the data is associated with a covariance matrix $\bm \Sigma$, an $n\times n$ matrix where $n$ is the size of the residual. The Gauss-Newton update then becomes:
$$
\begin{aligned}
\mathbf J^T \bm \Sigma^{-1} \mathbf J \bm \delta = -\mathbf J^T \bm \Sigma^{-1} \mathbf F
\end{aligned}
$$
The matrix $\bm \Sigma^{-1}$ is also sometimes called the *information matrix*. In practice, it is useful to factor this matrix, for example by taking the matrix square root. Let $\mathbf W = \bm \Sigma^{-\frac{1}{2}}$; then,
$$
\begin{aligned}
(\mathbf W\mathbf J)^T (\mathbf W\mathbf J) \bm \delta = -(\mathbf W \mathbf J)^T (\mathbf W \mathbf F).
\end{aligned}
$$
In other words, we just pre-multiply the Jacobian and residual by a weight matrix. This is a form of *whitening*. In the common case where $\bm \Sigma$ is a diagonal matrix, we simply divide each row of the Jacobian and the residual by the corresponding standard deviation.
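A short sketch of that diagonal case, with made-up numbers for the Jacobian, residual, and per-row standard deviations:

```python
import numpy as np

J = np.random.default_rng(0).normal(size=(3, 6))   # hypothetical residual-block Jacobian
F = np.array([0.3, -0.1, 0.02])                    # hypothetical residual
sigma = np.array([0.05, 0.05, 0.2])                # per-row standard deviations

W = np.diag(1.0 / sigma)                           # Sigma^{-1/2} for a diagonal Sigma
Jw, Fw = W @ J, W @ F                              # whitened Jacobian and residual
# This block contributes Jw.T @ Jw and -Jw.T @ Fw to the Gauss-Newton system.
H, b = Jw.T @ Jw, -Jw.T @ Fw
```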
Another common factorisation is the eigendecomposition of the covariance matrix $\bm \Sigma$:
$$
\begin{aligned}
\bm \Sigma = \mathbf Q \bm \Lambda \mathbf Q^T
\end{aligned}
$$
where $\mathbf Q$ is the orthogonal matrix whose columns are the orthonormal eigenvectors of $\bm \Sigma$ and $\bm \Lambda$ is the diagonal matrix whose entries are the corresponding eigenvalues. Then,
$$
\begin{aligned}
\mathbf W = \bm \Lambda^{-\frac{1}{2}} \mathbf Q^T.
\end{aligned}
$$
This is useful, for example, in the case of surfel matches. Recall that a covariance matrix is always symmetric positive semidefinite (and must be positive definite for $\mathbf W$ to exist), so the eigendecomposition is straightforward.
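A sketch of this factorisation for an illustrative $3\times 3$ covariance:

```python
import numpy as np

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.00],
                  [0.00, 0.00, 0.25]])
eigvals, Q = np.linalg.eigh(Sigma)        # Sigma = Q diag(eigvals) Q^T
W = np.diag(eigvals ** -0.5) @ Q.T        # whitening matrix Lambda^{-1/2} Q^T
assert np.allclose(W @ Sigma @ W.T, np.eye(3))
```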
3.2 Robust loss functions
A least squares problem can be thought of as minimising the negative log likelihood function of a Gaussian. The likelihood function is of the form
$$
\begin{aligned}
\mathcal L = \prod \exp(-f_i^2)
\end{aligned}
$$
Then the negative log likelihood has the least squares form:
$$
\begin{aligned}
\operatorname{cost} = -\log \mathcal L = \sum f_i^2.
\end{aligned}
$$
However, in many cases the random variable is not Gaussian. The Gaussian has very thin tails, making outliers extremely unlikely under the model. In real life there are often many outliers, which calls for a more heavy-tailed distribution. To model such situations, we need robust loss functions.
Instead of optimising $\sum f_i^2$, we optimise:
$$
\begin{aligned}
\operatorname{cost} = \sum \rho(f_i)^2
\end{aligned}
$$
where $\rho$ is a robust loss function. We can then weight each row or block of the Jacobian $\mathbf J$ and the residual $\mathbf F$ with a robust *reweighting factor*:
$$
\begin{aligned}
r_i \equiv \frac{\partial \rho(f_i)}{\partial f_i}.
\end{aligned}
$$
| Name | Definition |
| --- | --- |
| `TrivialLoss` | $\rho(x) = x$ |
| `CauchyLoss` | $\rho(x) = \log(1 + x)$ |
| `HuberLoss` | $\rho(x) = \begin{cases} x & x \leq k^2\\ 2 k\sqrt{x} - k^2 & x > k^2\end{cases}$ |
Table 3: Some loss functions for $x = |f|$.
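As a hedged illustration, the sketch below uses the common iteratively reweighted least squares scheme for a Huber-type loss on $|f|$ (not necessarily the exact reweighting factor defined above); `k` is a hypothetical tuning constant.

```python
import numpy as np

def huber_weight(f, k=1.345):
    """IRLS weight for a Huber-type loss: 1 for inliers, k/|f| for outliers."""
    a = abs(f)
    return 1.0 if a <= k else k / a

residuals = np.array([0.1, -0.3, 5.0])     # hypothetical residuals; 5.0 is an outlier
row_scale = np.sqrt([huber_weight(f) for f in residuals])
# Multiply each row of J and each entry of F by row_scale before forming
# J^T J and J^T F; this implements the reweighted least squares step.
```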
4 Trajectory representation
The goal is to recover the trajectory of a vehicle as a parametric curve $\mathbf S(t) \in SE(3)$, a function of time $t$.
We may assume the curve does not oscillate faster than some frequency (say, 10 Hz).
Now we should find a trajectory representation that satisfies the following properties:
- For some finite period of time, the trajectory should be approximately correct and differentiable.
- For any given real number $t$ within the domain of the curve, we can efficiently evaluate $\mathbf S(t)$ in constant time.
- Local perturbations $\bm \delta(t)$ may be applied to the curve, so that $\mathbf S_\text{new}(t) = \mathbf S(t) \oplus \bm \delta(t)$, and the perturbation $\bm \delta(t)$ may be parameterised with a finite number of parameters, each with compact support.
- The trajectory must be independent of the choice of the inertial reference frame.
Our solution to satisfy these requirements is a piecewise linear trajectory.
The trajectory is represented as a sequence of elements $\mathbf S_i$ that represent $\mathbf S(t_i)$ for times $t_i$ sampled with even spacing at a high frequency (say, 100 Hz).
To evaluate $\mathbf S(t)$, we use geodesic interpolation for $t_i \leq t < t_{i + 1}$:
$$
\begin{aligned}
\mathbf S(t) = \exp\left(\frac{t - t_i}{t_{i+1} - t_i} \log(\mathbf S_{i + 1} \mathbf S_i^{-1})\right) \mathbf S_i.
\end{aligned}
$$
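A sketch of this interpolation, using `scipy.linalg.logm` as a stand-in for a closed-form $SE(3)$ logarithm (an assumption; any proper log map would do):

```python
import numpy as np
from scipy.linalg import expm, logm

def interpolate_pose(T_i, T_ip1, t, t_i, t_ip1):
    """Geodesic interpolation between two poses for t_i <= t < t_{i+1}."""
    alpha = (t - t_i) / (t_ip1 - t_i)
    delta = np.real(logm(T_ip1 @ np.linalg.inv(T_i)))   # 4x4 se(3) matrix
    return expm(alpha * delta) @ T_i
```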
To perturb this spline with a curve $\bm \delta(t)$, we update each $\mathbf S_i$ like so:
$$
\begin{aligned}
\mathbf S_{i, \text{new}} = \mathbf S_i \oplus \bm \delta(t_i)
\end{aligned}
$$
Alternative approaches for parameterising trajectories exist, but are not covered here.
5 Parameterisation of the perturbation
When applying perturbations, $\bm \delta(t)$ is a smooth curve which may be thought of as a vector of infinite dimension.
To ensure that the problem is computationally tractable and that $\mathbf S_{\text{new}}$ remains twice-differentiable, we parameterise the perturbation $\bm \delta$ by a finite vector $\bm \xi$.
The vector $\bm \xi$ is the concatenation of many six-dimensional vectors $\bm \xi_i \in \mathfrak{se}(3)$, such that
$$
\begin{aligned}
\bm \delta(t) = \sum_i^n \bm \xi_i \beta((t - i)\Delta t).
\end{aligned}
$$
Notice that, since $\bm \delta(t)$ only applies small local perturbations, it is possible to add the $\bm \xi_i$ together, treating them as vectors in $\mathbb R^6$.
The scalar-valued function $\beta$ is a basis function with compact support, which means that it is nonzero on a finite contiguous interval of $t$ and zero everywhere else.
A good choice is the piecewise polynomial for a cardinal cubic B-spline:
$$
\begin{aligned}
\beta(t) = \begin{cases}
\frac{1}{6}t^3 & t \in [0,1]\\
\frac{1}{6}\left(-3t^3 + 12t^2 - 12t+4 \right)& t \in [1,2]\\
\frac{1}{6}\left(3t^3 - 24t^2 +60t-44 \right) & t \in [2,3]\\
\frac{1}{6}\left(-t^3 + 12t^2 -48t+64 \right) & t \in [3,4]\\
0 & t \notin [0,4]
\end{cases}
\end{aligned}
$$
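A direct transcription of this basis function (a sketch; time scaling and knot indexing are handled by the caller):

```python
def beta(t):
    """Cardinal cubic B-spline basis, supported on [0, 4]."""
    if 0.0 <= t < 1.0:
        return t**3 / 6.0
    if 1.0 <= t < 2.0:
        return (-3*t**3 + 12*t**2 - 12*t + 4) / 6.0
    if 2.0 <= t < 3.0:
        return (3*t**3 - 24*t**2 + 60*t - 44) / 6.0
    if 3.0 <= t <= 4.0:
        return (-t**3 + 12*t**2 - 48*t + 64) / 6.0
    return 0.0

# The pieces join continuously; the peak value at the centre is 2/3.
assert abs(beta(2.0) - 2.0 / 3.0) < 1e-12
```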
We can now redefine the Jacobian to be with respect to the parameters:
$$
\begin{aligned}
\mathbf J = \left. \frac{\partial}{\partial \bm \xi} \mathbf F(\mathbf S \oplus \bm\delta) \right|_{\bm \delta = 0}.
\end{aligned}
$$
Since $\beta((t - i)\Delta t)$ is a constant scalar for a given $t$,
$$
\begin{aligned}
\frac{\partial}{\partial \bm \xi_i} = \beta((t - i)\Delta t) \frac{\partial}{\partial \bm \delta(t)}.
\end{aligned}
$$
The elements $\bm \xi_i \in \mathfrak{se}(3)$ are known as the spline *knots*. Since each knot is a $6\times 1$ vector, the number of columns of $\mathbf J$ is $6 \times n_\text{knots}$, where $n_\text{knots}$ is the number of spline knots.
As you can see, the derivative with respect to the trajectory at any time $t$ is a linear combination of the derivatives with respect to up to four spline knots.
In the next sections we will compute derivatives
$$
\begin{aligned}
\frac{\partial}{\partial \bm \delta(t)}
\end{aligned}
$$
knowing that each of these blocks will contribute up to four blocks, weighted by the scalar weights $\beta$, to the full Jacobian used when optimising the spline knots $\bm \xi$.
6 Constraints
The function $\mathbf F$ is known as the *residual*. It consists of many constraints:
$$
\begin{aligned}
\mathbf F = \begin{bmatrix}
\mathbf F_\text{position}\\
\mathbf F_\text{loop}\\
\mathbf F_\text{gravity}\\
\vdots
\end{bmatrix}
\end{aligned}
$$
For people familiar with the Ceres library, each constraint is a residual block .
6.1 Position constraint
The position constraint seeks to penalise the distance between the pose’s translational component and a 3D point.
For example, the 3D point may be the GPS position $\mathbf p(t)$ measured at time $t$.
The residual is a $3\times 1$ vector:
$$
\begin{aligned}
\mathbf F_\text{position} = \mathbf t(\mathbf T(t)) - \mathbf p(t)
\end{aligned}
$$
Recall from the notation section that $\mathbf t(\mathbf T)$ is the translational component of the $SE(3)$ element $\mathbf T$.
6.1.2 Left Jacobian
The left Jacobian is $3 \times 6$:
$$
\begin{aligned}
\mathbf J_\text{left}
\equiv \frac{\partial \mathbf F_\text{position}}{\partial \bm \delta}
&=
\frac{\partial \mathbf t(\exp(\bm \delta) \mathbf T(t))}{\partial \bm \delta}\\
&= \begin{bmatrix}
-[\mathbf t(\mathbf T(t))]_\times & \mathbf I_{3 \times 3}
\end{bmatrix}
\end{aligned}
$$
The trick is to view $\mathbf t(\mathbf T) = \mathbf T \mathbf p$ where $\mathbf p = \mathbf 0$.
Then we can apply the Jacobian for transforming a point in the derivatives table.
6.1.3 Right Jacobian
The right Jacobian is $3 \times 6$:
$$
\begin{aligned}
\mathbf J_\text{right}
\equiv \frac{\partial \mathbf F_\text{position}}{\partial \bm \delta}
&=
\frac{\partial \mathbf t(\mathbf T(t)\exp(\bm \delta))}{\partial \bm \delta}\\
&= \begin{bmatrix}
\mathbf 0_{3 \times 3} & \mathbf R(\mathbf T(t))
\end{bmatrix}
\end{aligned}
$$
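A small sketch of this constraint, reusing `skew()` from the earlier sketches; `p_gps` stands for the measured 3D point:

```python
import numpy as np

def position_residual(T, p_gps):
    return T[:3, 3] - p_gps                            # t(T(t)) - p(t)

def position_jacobian_left(T):
    return np.hstack([-skew(T[:3, 3]), np.eye(3)])     # 3x6

def position_jacobian_right(T):
    return np.hstack([np.zeros((3, 3)), T[:3, :3]])    # 3x6
```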
6.2 Loop closure constraint
Suppose that we have aligned the poses from two points in time along a trajectory, e.g. using ICP.
This produces a relative transformation $\mathbf A$.
The residual is a $6\times 1$ vector:
$$
\begin{aligned}
\mathbf F_\text{loop closure}
= \log\left(
\underbrace{\mathbf T(t_1)^{-1} \mathbf T(t_2)}_\text{trajectory} \mathbf A
\right)
\end{aligned}
$$
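A sketch of this residual, again using the matrix logarithm as a stand-in for a closed-form $SE(3)$ log, with a `vee()` helper that extracts the $[\bm\omega, \bm\tau]$ vector from a $4\times 4$ Lie algebra matrix:

```python
import numpy as np
from scipy.linalg import logm

def vee(xi_hat):
    """Extract the 6-vector [omega, tau] from a 4x4 se(3) matrix."""
    return np.array([xi_hat[2, 1], xi_hat[0, 2], xi_hat[1, 0],
                     xi_hat[0, 3], xi_hat[1, 3], xi_hat[2, 3]])

def loop_closure_residual(T1, T2, A):
    return vee(np.real(logm(np.linalg.inv(T1) @ T2 @ A)))
```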
6.2.2 Left Jacobians
$$
\begin{aligned}
\mathbf J_{\text{left}, t_1}
&\equiv \frac{\partial \mathbf F_\text{loop closure}}{\partial \bm \delta(t_1)}\\
&=
\frac{\partial
\log\left(
(\exp(\bm \delta) \mathbf T(t_1))^{-1} \mathbf T(t_2) \mathbf A
\right)
}{\partial \bm \delta}\\
&=
\frac{\partial
\log\left(
\mathbf T(t_1)^{-1} \exp(-\bm \delta) \mathbf T(t_2) \mathbf A
\right)
}{\partial \bm \delta}\\
&= -
\frac{\partial
\log\left(
\mathbf T(t_1)^{-1} \exp(\bm \delta) \mathbf T(t_2) \mathbf A
\right)
}{\partial \bm \delta}\\
&= -
\frac{\partial
\log\left(
\exp(\operatorname{Ad}(\mathbf T(t_1)^{-1}) \bm \delta) \mathbf T(t_1)^{-1} \mathbf T(t_2) \mathbf A
\right)
}{\partial \bm \delta}\\
&= -
\frac{\partial
\log\left(
\exp(\bm \epsilon) \mathbf T(t_1)^{-1} \mathbf T(t_2) \mathbf A
\right)
}{\partial \bm \epsilon}\operatorname{Ad}(\mathbf T(t_1)^{-1}) \\
&= -
\mathbf D_\text{log}\left(
\mathbf T(t_1)^{-1} \mathbf T(t_2) \mathbf A
\right)
\operatorname{Ad}(\mathbf T(t_1)^{-1})
\end{aligned}
$$
$$
\begin{aligned}
\mathbf J_{\text{left}, t_2}
&\equiv \frac{\partial \mathbf F_\text{loop closure}}{\partial \bm \delta(t_2)}\\
&=
\frac{\partial
\log\left(
\mathbf T(t_1)^{-1} \exp(\bm \delta) \mathbf T(t_2) \mathbf A
\right)
}{\partial \bm \delta}\\
&=
\mathbf D_\text{log}\left(
\mathbf T(t_1)^{-1} \mathbf T(t_2) \mathbf A
\right)
\operatorname{Ad}(\mathbf T(t_1)^{-1})
\end{aligned}
$$
6.2.3 Right Jacobians
$$
\begin{aligned}
\mathbf J_{\text{right}, t_1}
&\equiv \frac{\partial \mathbf F_\text{loop closure}}{\partial \bm \delta(t_1)}\\
&=
\frac{\partial
\log\left(
(\mathbf T(t_1) \exp(\bm \delta))^{-1} \mathbf T(t_2) \mathbf A
\right)
}{\partial \bm \delta}\\
&=
\frac{\partial
\log\left(
\exp(-\bm \delta) \mathbf T(t_1)^{-1} \mathbf T(t_2) \mathbf A
\right)
}{\partial \bm \delta}\\
&= -
\mathbf D_\text{log}\left(
\mathbf T(t_1)^{-1} \mathbf T(t_2) \mathbf A
\right)
\end{aligned}
$$
$$
\begin{aligned}
\mathbf J_{\text{right}, t_2}
&\equiv \frac{\partial \mathbf F_\text{loop closure}}{\partial \bm \delta(t_2)}\\
&=
\frac{\partial
\log\left(
\mathbf T(t_1)^{-1} \mathbf T(t_2) \exp(\bm \delta) \mathbf A
\right)
}{\partial \bm \delta}\\
&=
\mathbf D_\text{log}\left(
\mathbf T(t_1)^{-1} \mathbf T(t_2) \mathbf A
\right)
\operatorname{Ad}(\mathbf T(t_1)^{-1}\mathbf T(t_2))
\end{aligned}
$$
6.3 Gravity constraint
The gravity constraint seeks to penalise the distance between the up vector of the vehicle, $\mathbf R(\mathbf T(t)) \mathbf u$, and the true up vector $\mathbf u_\text{true} = [0, 0, 1]^T$.
For example, if the sensor were mounted perfectly upright, then $\mathbf u = \mathbf u_\text{true}$.
We can also use the accelerometer reading to obtain a different $\mathbf u$ at each point in time, as long as we remember to subtract the inertial acceleration.
The residual is a $3 \times 1$ vector:
$$
\begin{aligned}
\mathbf F_\text{gravity} = \left(\mathbf R(\mathbf T(t)) \mathbf u - \mathbf u_\text{true}\right)
\end{aligned}
$$
6.3.2 Left Jacobian
Since the gravity constraint depends only on rotation and not on translation, we only care about the derivative with respect to $\bm \omega(\bm \delta)$, which we hereafter write as $\bm \omega$.
$$
\begin{aligned}
\mathbf J_\text{left} \equiv \frac{\partial \mathbf F_\text{gravity}}{\partial \bm \omega}
&=\frac{\partial \exp(\bm \omega) \mathbf R(\mathbf T(t))\mathbf u}{\partial \bm \omega}\\
&= -[\mathbf R(\mathbf T(t)) \mathbf u]_\times
\end{aligned}
$$
6.3.3 Right Jacobian
$$
\begin{aligned}
\mathbf J_\text{right} \equiv \frac{\partial \mathbf F_\text{gravity}}{\partial \bm \omega}
&= \frac{\partial \mathbf R(\mathbf T(t)) \exp(\bm \omega)\mathbf u}{\partial \bm \omega}\\
&= -\mathbf R(\mathbf T(t)) [\mathbf u]_\times
\end{aligned}
$$
Recall that $\mathbf R(\mathbf T)$ is the rotational component of the pose $\mathbf T$.
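A sketch of this constraint, reusing `skew()` from the earlier sketches; here the sensor up vector `u` is taken as perfectly upright for illustration:

```python
import numpy as np

u_true = np.array([0.0, 0.0, 1.0])

def gravity_residual(T, u=u_true):
    return T[:3, :3] @ u - u_true

def gravity_jacobian_left(T, u=u_true):
    return -skew(T[:3, :3] @ u)          # 3x3, with respect to omega only

def gravity_jacobian_right(T, u=u_true):
    return -T[:3, :3] @ skew(u)          # 3x3, with respect to omega only
```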
6.4 Point constraint
We are aligning a "moving" or "map" point $\mathbf m$ to a "static" or "scene" point $\mathbf s$ by transforming the moving point with the pose $\mathbf T$.
The matrix $\mathbf A$ can be used to encode the uncertainty of the point $\mathbf s$, e.g. as the square root of an information matrix (the inverse of a covariance matrix).
- If you are registering the point to a line, you can let $\mathbf A$ be $2\times 3$. A common choice is to compute an orthonormal basis from the direction of the ray, e.g. using Duff et al.'s approach. This is commonly used in visual odometry.
- If you are registering the point to a plane, you can let $\mathbf A$ be $1\times 3$.
The residual is a $k\times 1$ vector, where $\mathbf A$ is $k\times 3$ (e.g. $3\times 1$ for a point-to-point match):
$$
\begin{aligned}
\mathbf F_\text{point} = \mathbf A (\mathbf T \mathbf m - \mathbf s).
\end{aligned}
$$
6.4.2 Left Jacobian
The left Jacobian is $k\times 6$, where $\mathbf A$ is $k\times 3$.
$$
\begin{aligned}
\mathbf J_\text{left} &= \mathbf A \begin{bmatrix}
-[\mathbf T \mathbf m]_\times & \mathbf I
\end{bmatrix}
\end{aligned}
$$
6.4.3 Right Jacobian
The right Jacobian is $k\times 6$, where $\mathbf A$ is $k\times 3$.
$$
\begin{aligned}
\mathbf J_\text{right} &= \mathbf A \mathbf R(\mathbf T)\begin{bmatrix}
-[\mathbf m]_\times & \mathbf I
\end{bmatrix}
\end{aligned}
$$
Recall that $\mathbf R(\mathbf T)$ is the rotational component of the pose $\mathbf T$.
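A sketch of this constraint, reusing `skew()` from the earlier sketches; `A` is the $k \times 3$ weighting matrix described above (identity for a plain point-to-point match):

```python
import numpy as np

def point_residual(T, m, s, A):
    return A @ (T[:3, :3] @ m + T[:3, 3] - s)

def point_jacobian_left(T, m, A):
    Tm = T[:3, :3] @ m + T[:3, 3]
    return A @ np.hstack([-skew(Tm), np.eye(3)])              # k x 6

def point_jacobian_right(T, m, A):
    return A @ T[:3, :3] @ np.hstack([-skew(m), np.eye(3)])   # k x 6
```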
6.5 Velocity constraint
The velocity constraint seeks to penalise deviations of the vehicle velocity from a 6-degree-of-freedom velocity estimate obtained from another source.
For ease of implementation, the vehicle velocity is obtained by numerical differentiation, e.g. by evaluating the pose at times $t_1$ and $t_2$.
Let $h = 1 / (t_2 - t_1)$.
The residual is a $6\times 1$ vector.
For concise notation, let us define the matrix $\bm \Delta = \mathbf T(t_2) \mathbf T(t_1)^{-1} \in SE(3)$, so that
$$
\begin{aligned}
\mathbf F_\text{velocity} &= h \log\left(\bm \Delta\right) - \bm \xi_\text{velocity}\\
&= h \log\left(
\mathbf T(t_2)
\mathbf T(t_1)^{-1}
\right) - \bm \xi_\text{velocity}
\end{aligned}
$$
6.5.2 Left Jacobian
Consider differentiating with respect to left-updates of $\mathbf T(t_1)$.
The Jacobian is $6\times 6$:
$$
\begin{aligned}
\mathbf J_\text{left}(t_1)
\equiv \frac{\partial \mathbf F_\text{velocity}}{\partial \bm \delta(t_1)}
&= \frac{\partial}{\partial \bm \delta}
h \log\left(
\mathbf T(t_2)
\left(
\exp(\bm \delta)
\mathbf T(t_1)
\right)^{-1}
\right)\\
&= \frac{\partial}{\partial \bm \delta}
h \log\left(
\underbrace{
\mathbf T(t_2)
\mathbf T(t_1)^{-1}
}_{\bm \Delta}
\exp(-\bm \delta)
\right)\\
&= \frac{\partial}{\partial \bm \delta}
h \log\left(\exp(\underbrace{-\operatorname{Ad}(\bm \Delta)\bm \delta}_{\bm \epsilon}) \bm \Delta \right) \\
&= \frac{\partial}{\partial \bm \epsilon}
h \log\left(\exp(\bm \epsilon) \bm \Delta \right) \frac{\partial \bm \epsilon}{\partial \bm \delta} \\
&= -h\mathbf D_\text{log}\left(\bm \Delta \right) \operatorname{Ad}(\bm \Delta)\\
&= -h\mathbf D_\text{log}(\mathbf T(t_2)\mathbf T(t_1)^{-1}) \operatorname{Ad}(
\mathbf T(t_2)
\mathbf T(t_1)^{-1}
).
\end{aligned}
$$
Now, consider differentiating with respect to left-updates of $\mathbf T(t_2)$:
$$
\begin{aligned}
\mathbf J_\text{left}(t_2)
\equiv \frac{\partial \mathbf F_\text{velocity}}{\partial \bm \delta(t_2)}
&= \frac{\partial}{\partial \bm \delta}
h \log\left(
\exp(\bm \delta)
\mathbf T(t_2)
\mathbf T(t_1)^{-1}
\right)\\
&= h\mathbf D_\text{log}(
\mathbf T(t_2)
\mathbf T(t_1)^{-1}
).
\end{aligned}
$$
6.6 Regularisation constraint
The regularisation constraint implements a basic Tikhonov regulariser.
It seeks to dampen the problem to avoid divergent oscillations, overfitting, or other types of poor convergence.
The residual is $6\times 1$:
$$
\begin{aligned}
\mathbf F_\text{regulariser} = \bm \xi.
\end{aligned}
$$
The Jacobian is the identity:
$$
\begin{aligned}
\mathbf J_\text{regulariser} = \mathbf I.
\end{aligned}
$$
7 References
- [jlblanco] Blanco, J. L. (2020). *A tutorial on SE(3) transformation parameterizations and on-manifold optimization.*
- [eade] Eade, E. (2018). *Derivative of the Exponential Map.*