
System identification: final report

Modified 2018-05-27 by Andrea Censi


The final result

Modified 2018-06-25 by Andrea Censi

To reproduce the results, see the operation manual, which includes detailed instructions for a demo.

Mission and Scope

Modified 2018-02-14 by RomeoStaub

Motivation

Modified 2018-02-28 by SofianeGadi

The mission is to make the controller more robust to different configurations of the robot. The chosen approach is to obtain a mathematical model of the Duckiebot in order to understand its behavior. The mathematical model can then be used to design a controller that achieves the desired behavior and performance robustly.

The Duckiebot is in a differential-drive configuration: it actuates each wheel with a separate DC motor. By applying the same torque to both wheels the Duckiebot goes straight, and by applying different torques it turns. A schematic overview of the model can be seen in Figure 3.4 [1].

A mathematical model for a differential-drive robot will be derived. This model provides a simple method of maintaining the robot's position and velocity estimates in the absence of computationally expensive position updates from external sources such as the mounted camera.

The derived model describes the expected output of the pose (e.g. position, velocity) w.r.t. a fixed inertial frame for a certain voltage input. The model makes several assumptions, such as rigid body motion, symmetry, pure rolling and no lateral slipping. Most important of all, the model assumes the knowledge of certain constants that characterize the DC motors as well as the robot’s geometry.

However, no two Duckiebots show exactly the same behavior, which can be very problematic. You might have noticed that your vehicle does not really go in a straight line when you command it to. For example, when the same voltage is supplied to both motors, the Duckiebot will not go straight as might be expected. The vehicle might also not drive at the velocity you command.

Therefore, these constants need to be identified individually for every single robot. The process of determining them is called system identification. This can be done by odometry calibration: we determine the model parameters by finding the values that best fit the position measurements we can collect.

Hence, once these kinematic parameters are identified, we are able to reconstruct the robot's velocity from the given voltage input.

Increasing the accuracy of the Duckiebot's odometry reduces operational cost, as the robot requires fewer absolute position updates from the camera. When the Duckiebot is crossing an intersection, forward kinematics is used; therefore, safe crossing performance is closely related to having well-calibrated odometry parameters.

Existing solution

Modified 2018-02-14 by RomeoStaub

The existing mathematical model was the following:

\begin{align} V_{l} &= (g-t)(v_{A}-\omega L) \label{eq:V_l}\tag{1} \\ V_{r} &= (g+t)(v_{A}+\omega L) \label{eq:V_r}\tag{2} \end{align}

Note that if the gain $g = 1.0$ and trim $t = 0.0$, the wheel voltages are exactly the linear velocity minus or plus the angular velocity times half the baseline length, $V_{l,r}=v_A \mp \omega L$. With gain $g \gt{} 1.0$ the vehicle goes faster given the same velocity command, and with gain $g \lt{} 1.0$ it goes slower. With trim $t \gt{} 0$, the right wheel will turn slightly more than the left wheel given the same velocity command; with trim $t \lt{} 0$, the left wheel will turn slightly more than the right wheel.
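As a minimal sketch, the gain-and-trim mapping can be written as a small function. The function name and the default half-baseline value are illustrative assumptions, not the actual Duckietown implementation; the convention assumed is that positive trim speeds up the right wheel:

```python
def inverse_kinematics(v_a, omega, gain=1.0, trim=0.0, half_baseline=0.05):
    """Map a body velocity command (v_a, omega) to left/right wheel voltages.

    Assumes the convention that positive trim makes the right wheel turn
    more; the default half_baseline of 0.05 m is illustrative only.
    """
    v_l = (gain - trim) * (v_a - omega * half_baseline)
    v_r = (gain + trim) * (v_a + omega * half_baseline)
    return v_l, v_r

# g = 1.0, t = 0.0: both wheels receive v_A -/+ omega * L
v_l, v_r = inverse_kinematics(0.2, 0.0)

# t > 0: the right wheel is commanded slightly more than the left
v_l2, v_r2 = inverse_kinematics(0.2, 0.0, trim=0.05)
```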

The parameters $g$ and $t$ were to be set manually during the wheels calibration procedure.

The current implementation of the calibration procedure can be found in the #wheel-calibration section.

Hereby, the Duckiebot is placed on a line (e.g. a strip of tape). Afterwards the joystick demo is launched with the following command:

duckiebot $ roslaunch duckietown_demos joystick.launch veh:=${VEHICLE_NAME}


Now the human operator commands the Duckiebot to go straight for around 2 m.

Observe the Duckiebot from the point where it started moving and annotate on which side of the tape the Duckiebot drifted (Figure 3.6).

If the Duckiebot drifted to the left side of the tape, decrease the value of $t$. Note that the service call sets the absolute trim value, for example:

duckiebot $ rosservice call /${VEHICLE_NAME}/inverse_kinematics_node/set_trim -- 0.01


Or set the trim to a negative value, e.g. $-0.01$:

duckiebot $ rosservice call /${VEHICLE_NAME}/inverse_kinematics_node/set_trim -- -0.01


This procedure is repeated until there is less than around $10\,cm$ of drift over a distance of two meters. The speed of the Duckiebot can be adjusted by setting the gain:

duckiebot $ rosservice call /${VEHICLE_NAME}/inverse_kinematics_node/set_gain -- 1.1


The parameters of the Duckiebot are saved in the file

duckietown/config/baseline/calibration/kinematics/{VEHICLE_NAME}.yaml
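The file stores the kinematic parameters as plain YAML key-value pairs. A representative example is sketched below; the exact set of keys and the values shown are illustrative, not taken from an actual calibration file:

```yaml
# Illustrative kinematics calibration file (values are examples only)
gain: 1.0        # overall speed scaling g
trim: 0.01       # left/right asymmetry correction t
baseline: 0.1    # distance between the wheels [m]
radius: 0.0318   # wheel radius [m]
k: 27.0          # motor constant
limit: 1.0       # saturation limit on wheel commands
```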


Opportunity

Modified 2018-02-14 by RomeoStaub

• Human in the loop
  ◦ The car is not able to calibrate itself without human input
  ◦ The procedure is laborious and can be long
• Lack of precision
  ◦ The calibration is only done for a straight line
  ◦ The speed of the Duckiebot is not known

A crucial step should be to take the human out of the loop. This means that the car will calibrate itself, without any human input.

There were several possible approaches discussed to overcome the shortcomings of the current calibration:

• Localization based calibration
  ◦ E.g. determine relative pose w.r.t. chessboard from successive images
• Closed loop calibration
  ◦ Modify the trim while the Duckiebot is following a loop until satisfactory
• Motion blur based calibration
  ◦ Reconstruct dynamics from blurred images

Because we needed very precise measurements of the Duckiebot's position, the localization based calibration was chosen. To simplify the calibration procedure, we also decided to use the same chessboard as for the camera calibration. However, since the computational power needed to detect the chessboard was considerable, we had to run the chessboard detection on the laptop.

We also kept a kinematic model, without including any dynamics, and made some assumptions about the physics of the Duckiebot: the wheels do not slip, and the velocity of each wheel is proportional to the voltage applied. Hence, if the results do not meet our expectations or if the Duckiebot's configuration changes, the model can be changed or made more complex.

Preliminaries

Modified 2018-06-25 by Andrea Censi

Definition of the problem

Modified 2018-02-28 by tanij

The approach we chose to improve the behaviour of the Duckiebots was to derive a model with some parameters, and to identify these parameters for each Duckiebot independently. Hence, we first construct a theoretical model and then fit it to the position measurements we get from the camera and the chessboard.

Odometry formulation

Modified 2018-06-25 by Andrea Censi

The general problem definition for the odometry is to find the most likely calibration parameters given the Duckiebot model #duckiebot-modeling and a set of discrete measurements from which the output can be estimated [2]. The model of the system [2], with the notation explained in Table 3.4, can be described as:

\begin{align} \dot{x} &= f(p;x,u) \label{eq:model1}\tag{11} \\ y & = g(x) \label{eq:model2}\tag{12} \\ \mathcal{M} & = \{ m_k=m(t_k),\ t_1 \lt{} \dots \lt{} t_k \lt{} \dots \lt{} t_n \} \label{eq:measurements}\tag{13} \\ \hat{\mathcal{Y}} & = \{ \hat{y}_{k}=h(m_k),\ k=1, \dots ,n \} \label{eq:outputestimates}\tag{14} \end{align}

The model $f(\cdot)$ can be a kinematic model, a constrained dynamic model, or a more general dynamic model. The pose $g(\cdot)$ can be the robot pose or the sensor pose. The measurements $m_k$ can come from “internal” sensors, e.g. wheel encoders, IMUs, etc., or from “external” sensors such as lidar, infrared sensors, or cameras.

For our project, our set of measurements was obtained with the camera: we put the Duckiebot in front of a chessboard, and for every image we derived the position of the Duckiebot relative to the chessboard, $\big( \hat{x}_i, \hat{y}_i \big)$.

At the same time, from our kinematic model, we could estimate the position of the Duckiebot $\big( x_i, y_i\big)$ recursively with the formula:

\begin{align} x_{k+1} & = x_k+v_A \cdot \cos(\theta_k) \label{eq:1}\tag{15} \\ y_{k+1} & = y_k+v_A \cdot \sin(\theta_k) \label{eq:2}\tag{16} \end{align}

Because $v_A=\frac{c_r \cdot V_r+c_l \cdot V_l}{2}$, we can express every position $(x_i,y_i)$ of the Duckiebot with the help of the parameters:

\begin{align} x_{k+1} & = x_k+\frac{c_r \cdot V_r+c_l \cdot V_l}{2} \cdot \cos(\theta_k) \label{eq:xk}\tag{17} \\ y_{k+1} & = y_k+\frac{c_r \cdot V_r+c_l \cdot V_l}{2} \cdot \sin(\theta_k) \label{eq:yk}\tag{18} \end{align}
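The recursion above can be sketched as a single Euler integration step. The heading update is not written out in the report; here it is assumed to follow the standard differential-drive form $\omega = \frac{c_r V_r - c_l V_l}{2L}$, and an explicit sampling time `dt` is added (the report's update equations leave it implicit):

```python
import math

def integrate_pose(x, y, theta, v_l, v_r, c_l, c_r, L, dt):
    """One Euler step of the kinematic odometry model.

    v_a matches the report's v_A = (c_r * V_r + c_l * V_l) / 2; the
    heading rate omega = (c_r * V_r - c_l * V_l) / (2 * L) is an assumed
    companion equation not written out in the report.
    """
    v_a = (c_r * v_r + c_l * v_l) / 2.0
    omega = (c_r * v_r - c_l * v_l) / (2.0 * L)
    x += v_a * math.cos(theta) * dt
    y += v_a * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal voltages and equal motor constants: the pose moves straight ahead
pose = integrate_pose(0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.05, 0.1)
```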

By minimizing the difference between the measured position of the Duckiebot $\big(\hat{x}_i,\hat{y}_i\big)$ and its theoretical position $\big(x_i,y_i\big)$ given by our model at every time $t_i$, we can estimate the parameters $c_r$, $c_l$ and $L$.

We used the L2-norm:

\begin{align} \begin{pmatrix}c_{l}^{\star}\\c_{r}^{\star}\\L^{\star}\end{pmatrix}= \underset{c_l,c_r,L}{\mathrm{argmin}} \begin{pmatrix} x_1-\hat{x}_1\\y_1-\hat{y}_1\\\vdots\\x_n-\hat{x}_n\\y_n-\hat{y}_n\end{pmatrix}^T \begin{pmatrix} x_1-\hat{x}_1\\y_1-\hat{y}_1\\\vdots\\x_n-\hat{x}_n\\y_n-\hat{y}_n\end{pmatrix} \end{align}

Contribution / Added functionality

Modified 2018-06-25 by Andrea Censi

The calibration procedure consists of two parts:

• Recording a rosbag of different Duckiebot maneuvers in front of a chessboard

• Offline processing of the rosbag to find the odometry parameters via a fit

To reproduce the results, see the operation manual, which includes detailed instructions for a demo.

Formal performance evaluation / Results

Modified 2018-02-14 by RomeoStaub

Future avenues of development

Modified 2018-02-28 by tanij

• Dynamic model of the Duckiebot
  ◦ Since the kinematic model seems to be insufficient for the rotation, a dynamic model should be developed.

• Caster wheel identification
  ◦ The initial aim was to include the kinematics of the caster wheel; however, due to time constraints, we stuck to the roller wheel.

• Position estimation based on AprilTags
  ◦ Because of the noisy position measurements with the chessboard, other methods could be used, such as AprilTags. It could even be possible to put several AprilTags along a track, so that we do not have to replace the Duckiebot at the beginning of the track after each movement.

• Simultaneous odometry and camera calibration
  ◦ To take the human even further out of the loop, an automatic camera and odometry calibration could be implemented.
