Getting Started with Autonomous Driving Using Donkey Car

I’ve been interested in robotics and artificial intelligence for a long time, but with the start of the new year I decided to expand my comfort zone and start experimenting with actual hardware using the Raspberry Pi and Python (Tensorflow, Keras and OpenCV). I began this project to advance my understanding of programming and machine learning. It was hard for me to acquire hands-on experience with these things by just playing around with software; I like to connect things to the physical world and see them actually working and moving around. While searching, I came across a community devoted to building an open source DIY autonomous RC car, with the goal of teaching people the basics of machine learning and computer vision. The Donkey Car is a popular platform that gives folks who just want to learn more about autonomous vehicles an easy way to get up and running quickly.

Components

The actual hardware needed is minimal and can be found online at relatively low cost. To get up and running with your own AI-piloted car you’ll need:

  • small scale RC car

  • Raspberry Pi

    • wide angle Raspberry Pi Camera

    • micro sd card

  • servo driver board

  • usb battery

  • jumper wires

[Image: Radio controlled car]

A standard RC car has one main motor for propulsion and a servo to control the steering. An Electronic Speed Controller (ESC) regulates the voltage the drive motor receives, and therefore the speed of the vehicle. In addition, a radio receiver communicates with the handheld controller and provides input to the ESC and steering servo.

The Donkey Car can take advantage of several modern single board computers (SBCs), mainly the Raspberry Pi and Nvidia Jetson Nano. The Jetson Nano was specifically developed for machine learning, machine vision, and video processing applications, but that might be a future upgrade; for now I’m using the Raspberry Pi 4B, which has more than enough processing power. A USB power brick powers the Raspberry Pi. The Pi receives input from a camera at the front of the car and outputs signals through its general purpose input/output (GPIO) pins to a servo driver board, which controls the steering and throttle.
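To get a feel for how the Pi drives the hardware, here is a minimal sketch assuming the common PCA9685 servo driver board and Adafruit’s legacy Adafruit_PCA9685 Python library; the channel numbers and pulse values are illustrative and depend on your wiring, servo and ESC:

import Adafruit_PCA9685

# the servo driver board sits on the Pi's I2C bus
pwm = Adafruit_PCA9685.PCA9685()
pwm.set_pwm_freq(60)  # 60 Hz is typical for hobby servos and ESCs

STEERING_CHANNEL = 1  # assumed wiring
THROTTLE_CHANNEL = 0  # assumed wiring

# pulse widths are 12-bit tick counts (0-4095); these are rough examples
pwm.set_pwm(STEERING_CHANNEL, 0, 380)  # roughly centered steering
pwm.set_pwm(THROTTLE_CHANNEL, 0, 370)  # roughly neutral throttle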

[Image: Electronics layout - the Raspberry Pi computer, the brains of the car]

The additional electronics mount easily on top of the frame of the RC car. There are also official Donkey Car mounting plates and roll cages available on Thingiverse for 3D printing.

Software Setup

Getting the hardware set up is the easy part; everything sort of snaps together like Lego. Next we have to install software on the host PC and the Raspberry Pi. Setting up Donkey Car is fairly straightforward, although you do need a basic understanding of UNIX and how to work with a Raspberry Pi. The guide on the official Donkey Car website is really great and will walk you through most of the steps.

Host PC Software Setup Overview:

  • Download and install MiniConda

  • In terminal, create a new project folder

  • Change into the project folder and install the donkey library

  • Create a Python virtual environment

  • Install Tensorflow-GPU

  • Create a local working directory in the donkey environment

Raspberry Pi Software Setup Overview:

  • Flash Raspbian Lite OS onto the sd card

  • Set up wifi and SSH for first boot

  • Connect the host PC to the Pi via SSH

  • Update the Pi and install all dependencies

  • Create a new directory and install the donkey library and tensorflow
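On the host side, the donkey install itself is only a few commands. Roughly, following the flow in the official docs (the ~/mycar path is just an example):

git clone https://github.com/autorope/donkeycar
cd donkeycar
pip install -e .[pc]

# generate a car application folder with the default drive script
donkey createcar --path ~/mycar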

Installing Tensorflow GPU support on my Windows PC was the only area that caused me some headache. There are quite a few steps involved in getting Tensorflow to work with the Nvidia GPU I’m using, and it is easy to misstep and mess up the installation. A full guide to installing Tensorflow GPU support on Windows can be found here. However, after some digging online I found a much easier installation method, which assesses which versions of the supporting libraries (CUDA, cuDNN) are required and installs them correctly without changing any path environment variables. First, create a new environment in Anaconda, then use Conda-Forge to install the GPU build of Tensorflow:

# create a new conda environment (choose your own name and Python version)
conda create -n [name] python=[version]

# install the GPU build of Tensorflow from the conda-forge channel
conda install -c conda-forge tensorflow-gpu
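Once the environment is built, it’s worth confirming that Tensorflow can actually see the GPU before moving on; a quick check from a Python prompt:

import tensorflow as tf

# prints True if the GPU build is working (on newer Tensorflow
# versions use tf.config.list_physical_devices('GPU') instead)
print(tf.test.is_gpu_available())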
Donkeycar Library

The Donkeycar library is a high level Python library for training and piloting autonomous vehicles with neural networks. The software includes everything necessary to control the car’s hardware, as well as the utilities required to build and train a neural network model on the host PC. Once the base software is installed and running, it is easy to modify it and include additional “parts” to take advantage of auxiliary sensors, actuators, and neural networks beyond the standard Tensorflow library.

There is one main drive loop which controls the functions of the car. This makes it very easy to add or change parts (see the sketch after the diagram below).

The main drive loop and its parts:

  • Camera. Inputs: camera sensor data. Outputs: image array.

  • Joystick. Inputs: image array. Outputs: user steering angle, user throttle position.

  • Pilot. Inputs: drive mode. Outputs: run AI pilot (y/n).

  • AI Pilot. Inputs: image array, run pilot. Outputs: pilot steering angle, pilot throttle.

  • Drive Mode. Inputs: user mode, user angle, user throttle, pilot angle, pilot throttle. Outputs: angle, throttle.

  • Steering PWM. Inputs: angle. Outputs: steering PWM signal.

  • Throttle PWM. Inputs: throttle. Outputs: throttle PWM signal.

  • Write Tub Data. Inputs: image array, user angle, user throttle, user mode. Outputs: tub record.
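In code, the drive loop is a donkeycar Vehicle object that parts are added to, each declaring the named channels it reads and writes. Here is a minimal sketch using the library’s Vehicle API with a made-up part (the BrightnessMeter is purely hypothetical):

import donkeycar as dk

class BrightnessMeter:
    # hypothetical part: reports the average brightness of each camera frame
    def run(self, image_array):
        if image_array is None:
            return 0.0
        return float(image_array.mean())

V = dk.vehicle.Vehicle()
# each part is wired into the loop by channel name
V.add(BrightnessMeter(), inputs=['cam/image_array'], outputs=['brightness'])
V.start(rate_hz=20)  # run the loop 20 times per second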


Training a Model

Getting ready to collect training data for the neural network involves a bit of preparation to ensure that high quality, controlled data is being collected. To begin my data collection, I made a small track in a spare bedroom with some painters tape, and made sure there was adequate lighting as well. Collecting the training data is fairly simple: you connect to the vehicle via SSH and start the drive loop, then open a web browser and navigate to your Pi’s IP address on port 8887. From there you simply drive around the track while the car records images along with the throttle and steering positions.
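The whole session boils down to a couple of commands; roughly, assuming the default car application generated by donkey createcar (substitute your Pi’s address for <pi-ip>):

ssh pi@<pi-ip>          # connect to the car
cd ~/mycar
python manage.py drive  # start the drive loop and web controller

# then browse to http://<pi-ip>:8887 on the host PC and drive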

[Image: Practice track made with painters tape]

Many people have recommendations on the best way to train the Donkey Car, and I learned a lot by listening to other people’s advice on the Slack channel and community forums.

I found the best formula for me was driving about 30% of the time very slowly and carefully right on the center line, 40% normal driving, 20% swerving back and forth slightly over the center line, and 10% swerving between the extremes of the track boundary. I mixed up these driving behaviors and changed my driving style every one or two laps on the track.

The data collected consists of a JSON file with the throttle and steering positions, along with the captured image. Once enough training data has been generated (between 5,000 and 20,000 images) you can send the data stored on the Raspberry Pi back to the host PC for training. Once all the data was in the correct directory, I opened MiniConda, activated the donkey environment and ran the training script. It is easy to use the default settings and simply run the training script, but you can also pass in arguments to change the type of model. It took 15-20 minutes to finish training my model.
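For reference, the transfer and training steps looked roughly like this; the paths are examples, and the exact train command varies between Donkey Car versions:

# copy the recorded tub data from the Pi to the host PC
rsync -rv pi@<pi-ip>:~/mycar/data/ ~/mycar/data/

# inside the donkey environment on the host PC
python manage.py train --tub ./data --model ./models/mypilot.h5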

[Image: 8760_cam-image_array_.jpg, a sample training image]

The data in a .json file and associated image:

{
    "cam/image_array": "8760_cam-image_array_.jpg",
    "user/angle": -0.6305429242835779,
    "user/throttle": 0.36653920102542187,
    "user/mode": "user",
    "milliseconds": 674866
}
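Because each record is just a small JSON file next to its image, it is easy to inspect the data yourself; a short sketch, assuming a record file named record_8760.json in the current directory:

import json
from PIL import Image

with open('record_8760.json') as f:
    record = json.load(f)

image = Image.open(record['cam/image_array'])  # the captured frame
print(record['user/angle'], record['user/throttle'])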

Results - A.I. Piloted Car

My first model didn’t perform that well, but it worked well enough to surprise me. I was mostly concerned with finishing a proof of concept to make sure I had set up the software correctly, and didn’t expect much; even with my sloppy training data, I was very surprised at how well the car performed.

With my next attempt, I spent much more time following a careful driving routine and ensuring all the data was high quality. Any time I made a mistake, I was sure to delete those records before continuing. I also collected about twice the amount of training data, and the results were much better. In the future I am going to add additional sensors, such as sonar, and train a behavioral model to attempt lane changing and better throttle control.
