
Autonomy overview


This unit introduces some basic concepts ubiquitous in autonomous vehicle navigation.

Basic Building Blocks of Autonomy


The minimal backbone processing pipeline for autonomous vehicle navigation is shown in Figure 2.2.

The basic building blocks of any autonomous vehicle

For an autonomous vehicle to function, it must achieve some level of performance for all of these components. The level of performance required of each component depends on the task. In the remainder of this section, we will discuss some of the most basic options. In the next section we will briefly introduce some of the more advanced options that are used in state-of-the-art autonomous vehicles.

Sensors


Some common sensors used for autonomous navigation: Velodyne 3D laser scanner, camera, automotive radar, GPS receiver, and inertial measurement unit.


Sensor A sensor is a device or mechanism capable of generating a measurement of some external physical quantity.

In general, sensors are of two major types. Passive sensors generate measurements without affecting the environment that they are measuring. Examples include inertial sensors, odometers, GPS receivers, and cameras. Active sensors emit some form of energy into the environment in order to make a measurement. Examples of this type of sensor include Light Detection And Ranging (LiDAR), Radio Detection And Ranging (RaDAR), and Sound Navigation and Ranging (SoNAR). All of these sensors emit energy (from different parts of the spectrum) into the environment and then detect some property of the energy that is reflected back (e.g., the time of flight or the phase shift of the signal).
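As a concrete illustration of how an active sensor turns the detected echo into a measurement, here is a minimal sketch of converting a round-trip time of flight into a range; the propagation speeds and the example echo time are illustrative, not taken from any particular sensor.

```python
# Minimal sketch: converting a time-of-flight measurement into a range,
# as an active sensor (LiDAR/RaDAR/SoNAR) would. Constants and example
# values are illustrative assumptions, not those of a specific device.

SPEED_OF_SOUND = 343.0   # m/s, relevant for a sonar-style sensor
SPEED_OF_LIGHT = 3.0e8   # m/s, relevant for lidar/radar

def range_from_time_of_flight(t_round_trip, propagation_speed):
    """The emitted energy travels to the target and back,
    so the one-way range is half the round-trip distance."""
    return propagation_speed * t_round_trip / 2.0

# e.g. a sonar echo returning after 5.8 ms corresponds to roughly 1 m
print(range_from_time_of_flight(5.8e-3, SPEED_OF_SOUND))  # ~0.99 m
```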

Raw Data Processing


The raw data that comes from a sensor needs to be processed in order to become useful, and even understandable to a human.

First, calibration is usually required to convert units, for example from a voltage to a physical quantity. As a simple example, consider a thermometer, which measures temperature via an expanding liquid (usually mercury). The calibration is the known mapping from the amount of expansion of the liquid to temperature. In this case it is a linear mapping, and it is used to put the markings on the thermometer that make it useful as a sensor.
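As a sketch of what such a calibration looks like in software, the following hypothetical example applies a linear gain/offset mapping to a raw reading; the numbers are made up and stand in for values that would come from a calibration procedure against a reference.

```python
# Minimal sketch of a calibration step: a linear mapping from a raw sensor
# reading (e.g. an ADC voltage) to a physical quantity. The gain and offset
# below are hypothetical placeholders.

def calibrate_linear(raw_value, gain, offset):
    """Map a raw reading to physical units via y = gain * x + offset."""
    return gain * raw_value + offset

# Hypothetical thermometer-like sensor: 0.5 V corresponds to 0 degC, 10 mV per degC
temperature_c = calibrate_linear(raw_value=0.73, gain=100.0, offset=-50.0)
print(temperature_c)  # 23.0 degC for a 0.73 V reading
```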

We will distinguish between two fundamentally different types of calibration.

Intrinsic Calibration An intrinsic calibration is required to determine the sensor-specific parameters that are internal to a given sensor.

Extrinsic Calibration An extrinsic calibration is required to determine the external configuration of the sensor with respect to some reference frame.

For more information about reference frames, check out Unit B-6 - Reference frames.

Calibration is a very important consideration in robotics. In the field, even the most advanced algorithms will fail if the sensors are not properly calibrated.
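To make the distinction concrete, the following sketch shows where intrinsic and extrinsic parameters enter a simple pinhole camera projection; the camera matrix, rotation, and translation values are hypothetical placeholders, not those of any calibrated camera.

```python
import numpy as np

# Sketch: how intrinsic and extrinsic calibration parameters enter a pinhole
# camera projection. All numbers are hypothetical.

K = np.array([[320.0,   0.0, 320.0],   # intrinsic: focal lengths and principal point (pixels)
              [  0.0, 320.0, 240.0],
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                          # extrinsic: rotation of the camera w.r.t. a reference frame
t = np.array([0.0, 0.0, 0.1])          # extrinsic: translation (metres)

def project(point_world):
    """Project a 3D point (in the reference frame) into pixel coordinates."""
    p_cam = R @ point_world + t        # reference frame -> camera frame (extrinsic)
    p_img = K @ p_cam                  # camera frame -> image plane (intrinsic)
    return p_img[:2] / p_img[2]        # perspective division

print(project(np.array([0.2, 0.0, 1.0])))  # pixel location of a point about 1 m ahead
```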

Once we have properly calibrated data in some meaningful units, we often do some preprocessing to reduce the overall size of the data. This is particularly true for sensors that generate a lot of data, like cameras. Rather than deal with every pixel value generated by the camera, we process an image to generate feature points of interest. In “classical” computer vision many different feature descriptors have been proposed (Harris, BRIEF, BRISK, SURF, SIFT, etc.), and more recently Convolutional Neural Networks (CNNs) have been used to learn such features.

The important property of these features is that they should be as easy to associate as possible across frames. In order to achieve this, the feature descriptors should be invariant to nuisance parameters.
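As a rough sketch of this feature extraction and association step, the example below detects and matches ORB features between two frames, assuming OpenCV is available; ORB stands in here for any of the descriptors listed above, and the image filenames are placeholders.

```python
import cv2

# Sketch of "classical" feature extraction and matching between two frames.
# Assumes OpenCV is installed; the image filenames are placeholders.

img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)   # feature points + descriptors, frame 1
kp2, des2 = orb.detectAndCompute(img2, None)   # feature points + descriptors, frame 2

# Brute-force matching with Hamming distance (appropriate for binary descriptors)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} feature associations between the two frames")
```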

Top: A raw image with feature points indicated. Bottom: Lines projected onto the ground plane using the extrinsic calibration (ground projection).

State Estimation


Now that we have used our sensors to generate a set of meaningful measurements, we need to combine these measurements to produce an estimate of the underlying hidden state of the robot, and possibly of the environment.

State The state $\state_t \in \statesp$ is a *sufficient statistic* of the environment, i.e. it contains all the information required for the robot to carry out its task in that environment. This can (and usually does) include the *configuration* of the robot itself.

What variables are maintained in the state space $\statesp$ depends on the problem at hand. For example, we may just be interested in a single robot’s configuration in the plane, in which case $\state_t \equiv \pose_t$. However, in other cases, such as simultaneous localization and mapping, we may also be tracking the map in the state space.

According to Bayesian principles, any system parameters that are not fully known and deterministic should be maintained in the state space.

In general, we do not have direct access to the values in $\state_t$; instead, we rely on our (noisy) sensor measurements to tell us something about them, and then we infer the values.
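A minimal sketch of this inference, assuming a scalar state (for example the robot's lateral offset in the lane) and Gaussian noise, is a one-dimensional Kalman-filter-style predict/update cycle; the noise parameters and measurement values below are illustrative, not those of any real filter in the Duckietown stack.

```python
# Minimal sketch of state estimation: a 1D Kalman-filter-style update that
# fuses noisy measurements of a hidden scalar state. All parameters are
# illustrative assumptions.

def kalman_step(x, P, u, z, q=0.01, r=0.1):
    """One predict/update cycle.
    x, P : current state estimate and its variance
    u    : commanded change in state (motion model x <- x + u)
    z    : noisy measurement of the state
    q, r : assumed process and measurement noise variances"""
    # predict
    x_pred = x + u
    P_pred = P + q
    # update
    K = P_pred / (P_pred + r)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for u, z in [(0.05, 0.12), (0.05, 0.18), (0.05, 0.21)]:
    x, P = kalman_step(x, P, u, z)
    print(f"estimate {x:.3f}  variance {P:.4f}")
```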

The video is at https://vimeo.com/232324847.

Lane Following in Duckietown. *Top Right*: Raw image; *Bottom Right*: Line detections; *Top Left*: Line projections and estimate of robot position within the lane (green arrow); *Bottom Left*: Control signals sent to wheels.

The animation in Figure 2.8 shows the lane following procedure. The output of the state estimator is shown as the green arrow in the top left pane.

Actuation


The innermost control loop deals with actually tracking the correct voltage to be sent to the motors. This is generally executed as close to the hardware level as possible. For example, the Duckiebot uses a stepper motor HAT; see the parts list (opmanual_duckiebot/acquiring-parts-c0).
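As a hedged illustration of this innermost loop, the sketch below maps a commanded wheel velocity to a saturated PWM duty cycle; the gain, trim, and limits are hypothetical and are not the actual Duckiebot motor driver values.

```python
# Sketch of the innermost actuation step: mapping a commanded wheel speed to
# a PWM duty cycle for a motor driver (e.g. a motor HAT). The gain, trim and
# saturation limit are hypothetical placeholders.

def wheel_command_to_pwm(v_wheel, gain=0.6, trim=0.0, max_pwm=1.0):
    """Convert a desired wheel velocity [m/s] to a signed duty cycle in [-1, 1]."""
    duty = gain * v_wheel + trim
    return max(-max_pwm, min(max_pwm, duty))   # saturate at the hardware limits

print(wheel_command_to_pwm(0.5))    # 0.3
print(wheel_command_to_pwm(2.5))    # saturates at 1.0
```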


Infrastructure and Prior Information


In general, we can make the autonomous navigation problem simpler by exploiting existing structure, infrastructure, and contextual prior knowledge.

Infrastructure example: Maps or GPS satellites

Structure example: Known color and location of lane markings

Contextual prior knowledge example: Cars tend to follow the Rules of the Road

Advanced Building Blocks of Autonomy


The basic building blocks enable static navigation in Duckietown. However, many other components are necessary for more realistic scenarios.

Object Detection


Advanced Autonomy: Object Detection

One key requirement is the ability to detect objects in the world, such as (but not limited to) signs, other robots, and people.

SLAM


The simultaneous localization and mapping (SLAM) problem involves estimating not only the robot state but also the map at the same time, and it is a fundamental capability for mobile robotics. In autonomous driving, the most common application for SLAM is actually in the map-building task. Once a map is built, it can be pre-loaded and then used for pure localization. A demonstration of this in Duckietown is shown in Figure 2.18.

The video is at https://vimeo.com/232333888.

Localization in Duckietown
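To give a feel for the estimation problem underlying SLAM, here is a toy one-dimensional pose-graph sketch: noisy odometry constraints plus a single loop-closure constraint are fused by least squares. All measurement values are invented for illustration, and a real SLAM system would of course also estimate landmarks or a map in 2D or 3D.

```python
import numpy as np

# Toy 1D pose-graph: estimate poses x0..x3 from noisy relative measurements.
# Each constraint is (i, j, measured displacement x_j - x_i); values are made up.
odometry = [(0, 1, 1.1), (1, 2, 0.9), (2, 3, 1.05)]
loop_closure = [(0, 3, 2.9)]            # revisiting the start constrains x3 - x0 directly

n = 4                                   # number of poses
A, b = [], []
for i, j, d in odometry + loop_closure:
    row = np.zeros(n)
    row[j], row[i] = 1.0, -1.0          # encodes x_j - x_i = d
    A.append(row)
    b.append(d)

# anchor the first pose at 0 to remove the global translation ambiguity
row = np.zeros(n)
row[0] = 1.0
A.append(row)
b.append(0.0)

x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(x)    # smoothed poses consistent with both the odometry and the loop closure
```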
