
Harder Better Faster Stronger : Project Instructions

Modified 2020-12-23 by Raphael Jean

This project is an adaptation of the https://github.com/sercant/mobile-segmentation repository. The authors presented a computationally efficient approach to semantic segmentation that achieves a high mean intersection over union (mIoU) of 70.33% on the Cityscapes challenge. The proposed network is capable of running in real time on mobile devices.

This model also performs well in the Duckietown Simulator: it reaches an mIoU of 75.60% on the Duckietown Segmentation Dataset.

This page describes how to implement the project Harder, Better, Faster, Stronger and obtain results similar to ours.

  • Duckiebot in configuration DB19
  • Duckietown without intersections
  • Camera calibration completed
  • Laptop with the Duckietown Shell installed and set up

Video of expected results

If all the instructions are followed and the entire setup is successful, you should get performance on the AIDO LF challenge similar to what is shown below:


The video is at https://vimeo.com/493139195.

Lane Following Performance on the AIDO LF challenge


The video is at https://vimeo.com/493555586.

Lane Following Performance on the Duckiebot

Laptop setup notes

The laptop must have the Duckietown Shell, Docker, etc., configured as described in Unit C-1 - Laptop Setup.

Duckietown setup notes

We make the following assumptions about the Duckietown setup:

  • The Duckietown contains no intersections
  • There is enough lighting around the track

Duckiebot setup notes

The Duckiebot must be in DB19 configuration.

Pre-flight checklist

The pre-flight checklist describes the steps that are sufficient to ensure the demo will run correctly:

Check: A Python virtual environment is set up, since it prevents conflicts with the Python installation on the computer.

Check: All the packages listed in the requirements.txt file are installed in your virtual environment.
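These two checks can be automated. The sketch below is our own illustration, not part of the project: it verifies the interpreter is running inside a virtualenv and reports requirements whose modules cannot be found (note that a PyPI name may differ from its import name, e.g. Pillow vs PIL).

```python
# Hypothetical pre-flight check; the project itself does not ship this script.
import importlib.util
import re
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the venv while base_prefix
    # points at the system interpreter.
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

def requirement_name(line: str) -> str:
    # Strip version specifiers: "tensorflow>=1.15,<2" -> "tensorflow".
    return re.split(r"[<>=!~\s;\[]", line.strip(), maxsplit=1)[0]

def missing_packages(requirement_lines):
    # Return the requirements whose top-level module cannot be located.
    names = [requirement_name(l) for l in requirement_lines
             if l.strip() and not l.lstrip().startswith("#")]
    return [n for n in names if importlib.util.find_spec(n) is None]
```

Run it against the lines of requirements.txt before launching anything else.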

Instructions

Getting ready

  1. Download or generate a “raw” Duckietown dataset. Refer to the Duckietown Dataset Generator.
  2. Convert the dataset to an MS COCO-compatible format that can be used by most segmentation models. Refer to the Conversion Jupyter Notebooks.
  3. Prepare the COCO-compatible dataset for training. Example scripts and code are available under the dataset folder. The dataset should be in tfrecord format.
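The scripts under the dataset folder handle step 3; as a rough illustration only, tfrecord datasets are conventionally split into shards with names like `train-00000-of-00004.tfrecord`. The helper below sketches that convention (the exact naming used by the project's scripts is an assumption on our part):

```python
# Illustrative sharding helpers, modeled on common tfrecord tooling;
# not taken from the project's dataset scripts.
def shard_filename(split: str, shard_id: int, num_shards: int) -> str:
    # e.g. ("train", 0, 4) -> "train-00000-of-00004.tfrecord"
    return "%s-%05d-of-%05d.tfrecord" % (split, shard_id, num_shards)

def split_into_shards(items, num_shards):
    # Distribute dataset entries round-robin across the shards.
    shards = [[] for _ in range(num_shards)]
    for i, item in enumerate(items):
        shards[i % num_shards].append(item)
    return shards
```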

Model zoo

Please refer to the original repository for pre-trained models.

Training

To learn more about the available flags you can check common.py and the specific script that you are trying to run (e.g. train.py).

2-3 epochs of fine-tuning should be enough; more would likely cause overfitting. The model is already pre-trained on Cityscapes, so the final training is essentially domain adaptation.

The “output_stride” parameter can be used to make the network work on smaller-resolution images. The network was originally designed for 640x480 images with an output stride of 16. For smaller images, such as the ones we use in Duckietown, the bottleneck is too narrow. Reducing the output stride to 8 for 320x240 and to 4 for 160x120 relieves this bottleneck. The only drawback is that these networks take as much time on the lower-resolution images as the original takes on 640x480 images.

Long story short: Next time, we should generate 640x480 datasets, because lowering the resolution will not help!
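The arithmetic behind this: the spatial size of the deepest feature map is roughly the input resolution divided by the output stride, so shrinking the input while shrinking the stride in the same proportion leaves the bottleneck (and the compute spent there) unchanged.

```python
def bottleneck_size(width: int, height: int, output_stride: int):
    # The deepest feature map is roughly the input size divided
    # by the output stride (integer division as a simplification).
    return (width // output_stride, height // output_stride)

# 640x480 at stride 16, 320x240 at stride 8 and 160x120 at stride 4
# all yield the same 40x30 bottleneck, which is why the smaller
# inputs do not run any faster.
```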

Example Training Configuration

Training on Duckietown:

python train.py \
    --model_variant=shufflenet_v2 \
    --tf_initial_checkpoint=./checkpoints/model.ckpt \
    --training_number_of_steps=12000 \
    --base_learning_rate=0.001 \
    --fine_tune_batch_norm=True \
    --initialize_last_layer=False \
    --output_stride=4 \
    --train_crop_size=120 \
    --train_crop_size=160 \
    --train_batch_size=16 \
    --dataset=duckietown \
    --train_split=train \
    --dataset_dir=./dataset/duckietown2/merged_with_real/tfrecords \
    --save_summaries_images \
    --train_logdir=./logs \
    --loss_function=sce

Training on Duckietown with Bezier:

python train.py \
    --model_variant=shufflenet_v2 \
    --tf_initial_checkpoint=./checkpoints/model.ckpt \
    --training_number_of_steps=120000 \
    --base_learning_rate=0.001 \
    --fine_tune_batch_norm=True \
    --initialize_last_layer=False \
    --output_stride=8 \
    --train_crop_size=240 \
    --train_crop_size=320 \
    --train_batch_size=16 \
    --dataset=duckietown \
    --train_split=train \
    --dataset_dir=./dataset/duckietown/bezier/tfrecords \
    --save_summaries_images \
    --train_logdir=./logs \
    --loss_function=sce

Training on Cityscapes:

python train.py \
    --model_variant=shufflenet_v2 \
    --tf_initial_checkpoint=./checkpoints/model.ckpt \
    --training_number_of_steps=120000 \
    --base_learning_rate=0.001 \
    --fine_tune_batch_norm=True \
    --initialize_last_layer=False \
    --output_stride=16 \
    --train_crop_size=769 \
    --train_crop_size=769 \
    --train_batch_size=16 \
    --dataset=cityscapes \
    --train_split=train \
    --dataset_dir=./dataset/cityscapes/tfrecord \
    --train_logdir=./logs \
    --loss_function=sce

Training with an 8 GB commodity GPU:

python train.py \
    --model_variant=shufflenet_v2 \
    --tf_initial_checkpoint=./checkpoints/model.ckpt \
    --training_number_of_steps=120000 \
    --base_learning_rate=0.001 \
    --fine_tune_batch_norm=True \
    --initialize_last_layer=False \
    --output_stride=16 \
    --train_crop_size=769 \
    --train_crop_size=769 \
    --train_batch_size=3 \
    --dataset=cityscapes \
    --train_split=train \
    --dataset_dir=./dataset/cityscapes/tfrecord \
    --train_logdir=./logs \
    --loss_function=sce

Important: To use the DPC architecture in your model, you should also set this parameter:

--dense_prediction_cell_json=./core/dense_prediction_cell_branch5_top1_cityscapes.json

Evaluation

The trained model can be evaluated by executing the following commands on your laptop terminal:

Example evaluation configuration

Duckietown “merged_with_real”:

python evaluate.py \
    --model_variant=shufflenet_v2 \
    --eval_crop_size=120 \
    --eval_crop_size=160 \
    --output_stride=4 \
    --eval_logdir=./logs/eval \
    --checkpoint_dir=./logs \
    --dataset=duckietown \
    --dataset_dir=./dataset/duckietown2/merged_with_real/tfrecords

Duckietown “bezier”:

python evaluate.py \
    --model_variant=shufflenet_v2 \
    --eval_crop_size=240 \
    --eval_crop_size=320 \
    --output_stride=8 \
    --eval_logdir=./logs/eval \
    --checkpoint_dir=./logs \
    --dataset=duckietown \
    --dataset_dir=./dataset/duckietown2/bezier/tfrecords

Cityscapes:

python evaluate.py \
    --model_variant=shufflenet_v2 \
    --eval_crop_size=1025 \
    --eval_crop_size=2049 \
    --output_stride=4 \
    --eval_logdir=./logs/eval \
    --checkpoint_dir=./logs \
    --dataset=cityscapes \
    --dataset_dir=./dataset/cityscapes/tfrecord

Visualize

To visualize the results of the trained model, run the following commands in a terminal on your laptop:

Duckietown: to visualize segmentation for the Duckietown dataset:

python visualize.py --checkpoint_dir logs \
    --vis_logdir logs \
    --dataset_dir dataset/duckietown2/merged_with_real/tfrecords/ \
    --output_stride 4 \
    --dataset duckietown

Cityscapes: to visualize segmentation for the Cityscapes dataset:

python visualize.py --checkpoint_dir checkpoints \
    --vis_logdir logs \
    --dataset_dir dataset/cityscapes/tfrecord/

Important: If you are trying to evaluate a checkpoint that uses DPC architecture, you should also set this parameter:

--dense_prediction_cell_json=./core/dense_prediction_cell_branch5_top1_cityscapes.json

Running on Duckietown

A pure pursuit controller takes as input the points generated from the segmentation mask.

See the solution folder
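The idea can be sketched as follows (function and parameter names are our own illustration, not the solution's API): pure pursuit picks a look-ahead target among the points from the mask and turns with curvature proportional to the lateral offset of that target.

```python
import math

def pure_pursuit(target_x: float, target_y: float,
                 speed: float, lookahead: float) -> float:
    """Angular velocity steering toward (target_x, target_y).

    The target is expressed in the robot frame: x forward, y to the left.
    Standard pure pursuit law: omega = 2 * v * sin(alpha) / L, where
    alpha is the angle to the target and L the look-ahead distance.
    """
    alpha = math.atan2(target_y, target_x)
    return 2.0 * speed * math.sin(alpha) / lookahead
```

A target straight ahead produces no turning; a target to the left yields a positive (counter-clockwise) angular velocity.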

Troubleshooting

Here are some common problems you might encounter with our solution.

The Duckiebot does not move.

Check the battery charge. Hard-reboot the Duckiebot. (Obvious, but still worth mentioning.)

Segmentation Quality is poor, and the Duckiebot crashes.

We fine-tuned different models for different tasks. Please make sure that the model specified in the Object Detection Node is the right one. See the following two files:

  • exercise_ws/src/object_detection/src/object_detection_node.py
  • exercise_ws/src/object_detection/include/object_detection/model.py

Segmentation is good, but the Duckiebot moves erratically at high speed.

Change the speed, turn_speed, K and D parameters in the lane controller node until the Duckie becomes well-behaved.
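As a rough guide to what K and D do (this is NOT the lane controller's actual code, just the usual proportional-derivative steering law such gains plug into): K reacts to the current deviation, D damps its rate of change.

```python
# Hedged sketch of a PD steering term; parameter names K and D mirror
# the lane controller's tunables, the function itself is hypothetical.
def pd_steering(error: float, prev_error: float,
                dt: float, K: float, D: float) -> float:
    # error: deviation from the lane center (lateral or heading).
    return K * error + D * (error - prev_error) / dt
```

Raising K makes the correction stronger (and, past a point, oscillatory); raising D damps the oscillation but amplifies sensor noise.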

Duckiebot drives fine, but loses control and starts oscillating.

Go slower! Lower “turn_speed” and “speed”. A good idea is to start with low speeds (0.2 - 0.3) and then increase speed and gains (K, D) iteratively.

Duckiebot oscillates even at slow speeds.

Lower the K gain until it stops oscillating. Increase the “D” gain to get better performance in curves. Don’t increase it too much though; it will oscillate again at some point!

Duckiebot has no fear and can crash into objects.

Go slower! The perception system is limited by the network latency and the CPU power of your machine. Running at 30 FPS on a desktop allows going faster than running at 12 frames per second on a 3-year-old laptop. A faster CPU will give the Duckiebot better reflexes. If you have time, run inference on a GPU; this lets a laptop process frames much faster and gives the Duckiebot better reaction times.

Demo failure demonstration

Here is a video of the Duckiebot going way too fast. It will eventually crash; it is just a matter of time!

The video is at https://vimeo.com/494207868.

A Duckie driving too fast!