Exercise: Augmented Reality


Jonathan Michaux and Dzenan Lapandic


Specification of dt_augmented_reality


In this assignment you will write a ROS package that performs the augmented reality exercise. The program will be invoked with the following syntax:

$ roslaunch dt_augmented_reality-![robot name] augmenter.launch map_file:=![map file] robot_name:=![robot name] local:=1

where map file is a YAML file containing the map (see the Specification of the map section below).

If robot name is not given, it defaults to the hostname.

The program does the following:

  1. It loads the intrinsic / extrinsic calibration parameters for the given robot.
  2. It reads the map file.
  3. It listens to the image topic /![robot name]/camera_node/image/compressed.
  4. It reads each image, projects the map features onto the image, and then writes the resulting image to the topic

    /![robot name]/AR/![map file basename]/image/compressed

where map file basename is the basename of the file without the extension.
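
Put together, these steps amount to a small node that subscribes to the camera topic and republishes onto the AR topic. The following is a minimal wiring sketch in Python with rospy; apart from the two topic names above, the node name, parameter names, and the on_image handler are illustrative assumptions, not part of the provided template.

#!/usr/bin/env python
# Minimal wiring sketch; node and parameter names are illustrative assumptions.
import os
import rospy
from sensor_msgs.msg import CompressedImage

class ARNodeSketch(object):
    def __init__(self):
        robot_name = rospy.get_param("~robot_name")   # the launch file would default this to the hostname
        map_file = rospy.get_param("~map_file")       # path to the YAML map
        map_basename = os.path.splitext(os.path.basename(map_file))[0]

        # 3. listen to the camera images
        self.sub = rospy.Subscriber(
            "/%s/camera_node/image/compressed" % robot_name,
            CompressedImage, self.on_image, queue_size=1)

        # 4. publish the augmented images
        self.pub = rospy.Publisher(
            "/%s/AR/%s/image/compressed" % (robot_name, map_basename),
            CompressedImage, queue_size=1)

    def on_image(self, msg):
        # Projecting the map onto the image happens here (see the Augmenter
        # sketch below); this placeholder just republishes the raw image.
        self.pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("augmented_reality_node_sketch")
    ARNodeSketch()
    rospy.spin()
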

We provide you with a ROS package template that contains the AugmentedRealityNode. By default, launching the AugmentedRealityNode should publish raw images from the camera on the new /![robot name]/AR/![map file basename]/image/compressed topic.

In order to complete this exercise, you will have to fill in the missing details of the Augmenter class by doing the following:

  1. Implement a method called process_image that undistorts raw images.
  2. Implement a method called ground2pixel that transforms points in ground coordinates (i.e. the robot reference frame) to pixels in the image.
  3. Implement a method called callback that writes the augmented image to the appropriate topic.
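
For orientation, here is one possible skeleton for the Augmenter class. The three method names come from the list above; the method bodies (undistortion via cv2.undistort, a ground-to-image homography for ground2pixel, decoding and drawing with OpenCV in callback) and the constructor arguments are assumptions about one reasonable way to fill them in, not the reference solution.

import cv2
import numpy as np
from sensor_msgs.msg import CompressedImage

class AugmenterSketch(object):
    # One possible skeleton; the method bodies are assumptions, not the reference solution.

    def __init__(self, camera_matrix, dist_coeffs, H_ground2img, segments, publisher):
        self.K = camera_matrix      # 3x3 intrinsics from the robot's calibration
        self.D = dist_coeffs        # distortion coefficients
        self.H = H_ground2img       # assumed 3x3 homography: ground plane -> image plane
        self.segments = segments    # [(point, point, bgr_color), ...] read from the map file
        self.pub = publisher        # publisher for the AR image topic

    def process_image(self, raw_image):
        # Undistort the raw camera frame using the intrinsic calibration.
        return cv2.undistort(raw_image, self.K, self.D)

    def ground2pixel(self, point):
        # Project a ground point (x, y) in the robot frame to pixel coordinates.
        q = self.H.dot(np.array([point[0], point[1], 1.0]))
        return int(round(q[0] / q[2])), int(round(q[1] / q[2]))

    def callback(self, msg):
        # Decode the compressed image, augment it, and publish the result.
        image = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_COLOR)
        image = self.process_image(image)
        for p0, p1, color in self.segments:
            cv2.line(image, self.ground2pixel(p0), self.ground2pixel(p1), color, 2)
        out = CompressedImage()
        out.format = "jpeg"
        out.data = cv2.imencode(".jpg", image)[1].tobytes()
        self.pub.publish(out)
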

Specification of the map


The map file contains a 3D polygon, defined as a list of points and a list of segments that join those points.

The format is similar to standard data structures used in 3D computer graphics, with a few differences:

  1. Points are referred to by name.
  2. It is possible to specify a reference frame for each point. (This helps make the format a general tool for debugging various types of problems.)

Here is an example of the file contents, hopefully self-explanatory.

The following map file describes three points and two segments.

points:
    # define three named points: center, left, right
    center: [axle, [0, 0, 0]] # [reference frame, coordinates]
    left: [axle, [0.5, 0.1, 0]]
    right: [axle, [0.5, -0.1, 0]]
segments:
- points: [center, left]
  color: [rgb, [1, 0, 0]]
- points: [center, right]
  color: [rgb, [1, 0, 0]]
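
As a point of reference, a map file like the one above could be parsed into drawable segments along these lines. This is a sketch assuming PyYAML; load_map_segments is a hypothetical helper, and the colors are converted from RGB in [0, 1] to OpenCV's BGR in [0, 255].

import yaml

def load_map_segments(map_file):
    # Read a map YAML file and return a list of (point, point, bgr_color) tuples.
    with open(map_file) as f:
        data = yaml.safe_load(f)

    points = data["points"]       # name -> [reference frame, [x, y, z]]
    segments = []
    for seg in data["segments"]:
        name0, name1 = seg["points"]
        _, p0 = points[name0]     # the reference frame is ignored in this sketch
        _, p1 = points[name1]
        color_space, rgb = seg["color"]
        bgr = tuple(int(255 * c) for c in reversed(rgb))   # RGB [0,1] -> BGR [0,255]
        segments.append((p0, p1, bgr))
    return segments
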
