
Behavior Cloning

Modified 2021-11-03 by tanij

In this part, you can find the steps required to make a submission based on Behavior Cloning with TensorFlow for the lane following task, using data from the real world or from the simulator. It can be used as a strong starting point for any of the challenges.

Requires: That you have made a submission with the TensorFlow template.

Result: You could win the AI-DO!



This baseline refers to the approach for behavior cloning for autonomous vehicles described in this paper: End to End Learning for Self-Driving Cars. It was created by Frank (Chude Qian) for his submission to AI-DO 3 at NeurIPS 2019. The submission was very successful in the simulator challenge; however, it was not the best for the real-world challenges.

A detailed description of the specific implementation of this baseline can be found on the summary poster: Teaching Cars to Drive Themselves.


Modified 2021-11-03 by tanij

Clone the baseline Behavior Cloning repository:

$ git clone -b daffy

$ cd challenge-aido_LF-baseline-behavior-cloning

The code is structured into 5 folders:

  1. Teach your Duckiebot to drive itself in duckieSchool.

  2. (Optional) Store all the logs that can be used for training using duckieLog.

  3. Train your model using TensorFlow in duckieTrainer.

  4. (Optional) Hold all previous models you generated in duckieModels.

  5. Submit your solution via the duckieChallenger folder.

The duckieSchool

Modified 2021-11-03 by tanij

Inside this folder you will find two types of duckieSchool: the simulator-based duckieGym and the real-robot-based duckieRoad.

Installing duckietown Gym

Modified 2021-10-17 by Andrea Censi

To install the Duckietown Gym and all the necessary dependencies:

$ pip3 install -r requirements.txt

Use joystick to drive

Modified 2021-11-03 by tanij

Before you use the script, make sure you have the joystick connected to your computer.

To run the script, use the following command:

$ python3

The system uses an Xbox One S controller to drive around. The left stick's up/down axis controls the speed, and the right stick's left/right axis controls the steering (angular velocity). The right trigger enables the “DRS” mode, which drives the vehicle forward at full speed. (Note: no angular velocity is applied while this mode is enabled.)
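The stick-to-command mapping described above can be sketched as a small function. This is a hypothetical illustration, not the baseline's actual code: the function name, gain, and axis conventions are assumptions, with gamepad axes taken to be in [-1, 1] (trigger in [0, 1]).

```python
def joystick_to_command(left_y, right_x, right_trigger, max_speed=1.0):
    """Map joystick axes to a (linear, angular) velocity command.

    left_y: left stick up/down (up is usually negative on gamepads).
    right_x: right stick left/right.
    right_trigger: "DRS" mode trigger in [0, 1].
    """
    if right_trigger > 0.5:
        # "DRS" mode: full speed ahead, no angular velocity.
        return max_speed, 0.0
    linear = -left_y * max_speed   # push stick up -> drive forward
    angular = -right_x             # steer opposite the stick deflection
    return linear, angular
```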

In addition, every 1500 steps in the simulator, the recording pauses and plays back. You will have the chance to review the result and decide whether to keep the log. The logs are recorded in two formats: raw_log saves all the raw information for future re-processing, and training_data saves the log in a directly feedable form.

Options for joystick script

Modified 2021-11-03 by tanij

For driving a Duckiebot with a joystick in a simulator, you have the following options:

  1. --env-name: currently the default is None.

  2. --map-name: This sets the map you choose to run. Currently, it is set as small_loop_cw.

  3. --draw-curve: This draws the lane-following curve. Default is False. However, if you are new to the system, you should set this to True to familiarize yourself with it.

  4. --draw-bbox: This helps draw out the collision detection bounding boxes. Default is set as False.

  5. --domain-rand: This enables domain randomization. Default is set as True.

  6. --playback: This enables playback after each record section for you to inspect the log you just took. Default is set as True.

  7. --distortion: This enables distortion so that the view resembles a fisheye lens. Default is True.

  8. --raw_log: This additionally records a high-resolution version of the log instead of only the down-sampled version. Default is True. Note: if you disable this option, playback will be disabled too.

  9. --steps: This sets how many steps are recorded per session. Default is 1500.

  10. --nb-episodes: This controls how many episodes (a.k.a. sessions) you drive.

  11. --logfile: This specifies where your log file is stored. By default the log file is saved in the current folder.

  12. --downscale: This option currently is disabled.

  13. --filter-bad-data: This option only logs driving that is better than the last state, using the reward feedback from the Duckietown Gym to track the reward status.
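Flags like these are typically declared with argparse. The sketch below mirrors the interface described above; defaults not stated in the text (such as the number of episodes) are illustrative, and the real script may declare its options differently.

```python
import argparse

def make_parser():
    # Sketch of the command-line options listed above; not the actual script.
    p = argparse.ArgumentParser(
        description="Drive a Duckiebot in the simulator with a joystick")
    p.add_argument("--env-name", default=None)
    p.add_argument("--map-name", default="small_loop_cw", help="map to run")
    p.add_argument("--draw-curve", action="store_true",
                   help="draw the lane-following curve")
    p.add_argument("--draw-bbox", action="store_true",
                   help="draw collision-detection bounding boxes")
    p.add_argument("--steps", type=int, default=1500,
                   help="steps recorded per session")
    p.add_argument("--nb-episodes", type=int, default=10,
                   help="number of driving sessions (illustrative default)")
    p.add_argument("--logfile", default=None,
                   help="where to store the log file")
    p.add_argument("--filter-bad-data", action="store_true",
                   help="only keep driving better than the last state")
    return p
```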

Additionally, some other features have been hard-coded:

  1. The training images are stored in the YUV color space; you can change this on line 258.

  2. The frames are sized 150x200, per the original paper's recommendation. This may not be the most effective resolution.

  3. The logger resets if it detects driving out of bounds.
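The YUV conversion mentioned in point 1 can be sketched in plain numpy. This is an illustrative BT.601-style conversion; the actual script may use OpenCV or slightly different coefficients.

```python
import numpy as np

def rgb_to_yuv(frame):
    """Convert an RGB uint8 frame to YUV (BT.601-style coefficients).

    Illustrative sketch of the color-space step described above;
    the exact conversion in the logging script may differ.
    """
    rgb = frame.astype(np.float32) / 255.0
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    u = 0.492 * (rgb[..., 2] - y)  # scaled (B - Y)
    v = 0.877 * (rgb[..., 0] - y)  # scaled (R - Y)
    return np.stack([y, u, v], axis=-1)
```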

Automated log generation using pure pursuit

Modified 2021-11-03 by tanij

This baseline also provides an option to automatically generate training samples using the pure pursuit control algorithm.

The configurable parameters are similar to the human driver agent case described above.

If you would like to mass-generate training samples on a headless server, you will find the necessary tools under the util folder.
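For intuition, the core of a pure pursuit controller steers toward a look-ahead point on the lane center. The sketch below is a generic illustration, not the baseline's actual implementation; the function name and gain are hypothetical.

```python
import math

def pure_pursuit_steering(pos, heading, lookahead_point, gain=1.0):
    """Steer proportionally to the heading error toward a look-ahead point.

    pos, lookahead_point: (x, y) positions; heading: radians.
    Returns an angular velocity command.
    """
    dx = lookahead_point[0] - pos[0]
    dy = lookahead_point[1] - pos[1]
    target_heading = math.atan2(dy, dx)
    # Wrap the error to [-pi, pi] so the robot turns the short way around.
    err = (target_heading - heading + math.pi) % (2 * math.pi) - math.pi
    return gain * err
```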

To start pure pursuit data generation:

$ python3

Log using an actual Duckiebot

Modified 2021-11-03 by tanij

To log using an actual Duckiebot, refer to this tutorial on how to record a rosbag on a Duckiebot.

Once you have obtained the ROS bag, you can use the folder duckieRoad to process that log.

Process a log from an actual Duckiebot

Modified 2021-11-03 by tanij

You will find the following files in the duckieRoad directory.

├── Dockerfile                      # Sets up the Docker image.
├── bag_files                       # Put your ROS bags here.
│   ├── ROSBAG1                     # Your first ROS bag.
│   ├── ROSBAG2                     # Another ROS bag.
│   └── ...
├── converted                       # Stores the converted log used to train the Duckiebot.
├── src                             # Scripts to convert a ROS bag to a pickle log.
│   ├──                 # Logger used to write the pickle log.
│   ├──   # Helper functions for the script.
│   └──             # Conversion script. Set your Duckiebot
│                                     name and the topic to convert here.
├── MakeFile                        # Make file.
├── requirements.txt                # Used by Docker to set up dependencies.
└──                     # Converts the pickle2-style log produced to pickle3.

You should change line 83 to the correct VEHICLE_NAME.

First, put your ROS bags in the bag_files folder. Then build the extraction container:

$ make make_extract_container

Next start the conversion docker:

$ make start_extract_data

It will automatically mount the bags folder as well as the converted folder.

NOTE: When you run the make file, make sure you are in duckieRoad not in the src folder!
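The pickle2-to-pickle3 conversion mentioned in the file tree above usually amounts to loading the Python 2 pickles (decoding bytes as latin1) and re-saving them with a Python 3 protocol. The sketch below is illustrative: the function name and log layout are assumptions, not the repository's actual converter.

```python
import pickle

def convert_pickle(in_path, out_path):
    """Re-save a stream of Python 2 pickle records with pickle protocol 3.

    Illustrative sketch: reads records until EOF, decoding legacy bytes
    as latin1, then writes them back out one by one.
    """
    records = []
    with open(in_path, "rb") as f:
        while True:
            try:
                records.append(pickle.load(f, encoding="latin1"))
            except EOFError:
                break
    with open(out_path, "wb") as f:
        for rec in records:
            pickle.dump(rec, f, protocol=3)
    return len(records)
```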

The duckieLog

Modified 2020-11-16 by frank-qcd-qk

This folder is for you to put all of your duckie logs in. Some helper functions are provided; however, they might not be the most efficient. They are here for your reference.

The log viewer

Modified 2020-11-16 by frank-qcd-qk

To view the logs, under duckieLog folder:

$ python3 util/ --log_name YOUR_LOG_FILE_NAME.log

The log combiner

Modified 2020-11-16 by frank-qcd-qk

To combine the logs, under duckieLog folder:

$ python3 util/ --log1 dataset1.log --log2 dataset2.log --output newdataset.log
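Combining two pickle-stream logs amounts to appending the records of one file after the other. The sketch below is illustrative of that idea; the real util script may use a different log layout.

```python
import pickle

def combine_logs(log1_path, log2_path, output_path):
    """Concatenate the pickle records of two log files into one output file.

    Illustrative sketch: each input is read record by record until EOF,
    and every record is re-pickled into the output.
    """
    with open(output_path, "wb") as out:
        for path in (log1_path, log2_path):
            with open(path, "rb") as f:
                while True:
                    try:
                        pickle.dump(pickle.load(f), out)
                    except EOFError:
                        break
```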

The duckieTrainer

Modified 2020-03-22 by frank-qcd-qk

This section describes everything you need to know to use the duckieTrainer.

Folder structure

Modified 2020-11-16 by frank-qcd-qk

In this folder you can find the following files:

├── __pycache__                     # Python compile cache.
├── trainlogs                       # Training logs for TensorBoard.
│   ├── Date 1                      # Your training on Date 1.
│   ├── Date 2                      # Your training on Date 2.
│   └── ...
├── trainedModel                    # Your trained model is here.
│   ├── FrankNetBest_Loss.h5        # Lowest training loss model.
│   ├── FrankNetBest_Validation.h5  # Lowest validation loss model.
│   └── FrankNet.h5                 # The last model of the training.
├──                   # The deep learning model.
├──                    # Helper file for reading the log.
├──                        # The training setup.
├── requirements.txt                # Required pip3 packages for training.
└── train.log                       # Your training data.

Environment Setup

Modified 2020-11-16 by frank-qcd-qk

To set up your environment, I strongly urge you to train the model on a system with a GPU. TensorFlow and GPUs can sometimes be confusing, and I recommend referring to the TensorFlow documentation for detailed information.

Currently, the system requires TensorFlow 2.2.1. To setup TensorFlow, you can refer to the official TensorFlow install guide here.

Additionally, this training system uses scikit-learn and numpy. A requirements.txt file is provided to help you install all the necessary packages:

$ pip3 install -r requirements.txt

Model Adjustment

Modified 2020-03-08 by frank-qcd-qk

To change the model, modify the file that contains the model architecture. Currently it uses a parallel architecture that generates the linear and angular velocities separately. It might perform better if they were not set up separately.

To change your training parameters, you can find EPOCHS, LEARNING RATE, and BATCH SIZE at the beginning of the training script. You should tune these values for your own training data.
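As a sketch, the constants at the top of the training script might look like the following; the values and exact names here are illustrative, not taken from the repository.

```python
# Illustrative training hyperparameters; tune these for your own dataset.
EPOCHS = 100          # number of passes over the training log
LEARNING_RATE = 1e-4  # optimizer step size
BATCH_SIZE = 64       # samples per gradient update
```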

Before Training

Modified 2020-03-22 by frank-qcd-qk

Before you start training, make sure your log is stored at the root of the duckieTrainer folder and named train.log.

Make sure you have saved all desired trained models into duckieModels. Trust me, you do not want your overnight training overwritten by accident; I have lost an overnight training result that way.

Train it

Modified 2020-03-08 by frank-qcd-qk

To train your model:

$ python3

To observe using tensorboard, run this command in the duckieTrainer directory:

$ tensorboard --logdir logs

You should also be able to see your training status at http://localhost:6006/. If your computer is accessible from other computers, you can also see it by visiting http://TRAINERIP:6006.

Things to improve

Modified 2020-03-08 by frank-qcd-qk

There are a lot of things that could be improved, as this was an overnight hack for me. The data loading could be more efficient: currently it just loads everything and stores it in a global variable. The training loss reference might not be the best, and the optimizer might be improved. Most importantly, the way of choosing which model to use could be drastically improved.


Modified 2020-03-22 by frank-qcd-qk

Symptom: tensorflow.python.framework.errors_impl.InternalError: CUDA runtime implicit initialization on GPU:0 failed. Status: out of memory

Resolution: Currently there is no known fix other than crossing your fingers, running again, and reducing your batch size.

The duckieModels

Modified 2020-03-08 by frank-qcd-qk

This is a folder created just for you to keep track of all your potential models. There is nothing functional in it.

The duckieChallenger

Modified 2020-11-16 by frank-qcd-qk

This is the folder from which you submit to the challenge. It is structured as follows:

├── Dockerfile                      # Docker file used for building the container.
|                                     Modify this file if you add files, etc.
├──                   # Helper file for all helper functions.
├── requirements.txt                # All required pip3 packages.
├──                     # Your actual solution.
└── submission.yaml                 # Submission configuration.

After you put your trained model FrankNet.h5 in this folder, you can proceed with a normal submission:

$ dts challenges submit

Or run locally:

$ dts challenges evaluate

An example submission looks like this


Modified 2020-11-16 by frank_qcd_qk

We would like to thank: Anthony Courchesne and Kay (Kaiyi) Chen for their help and support during the development of this baseline.