Getting Started with Autonomous Driving Using Donkey Car
I’ve been interested in robotics and artificial intelligence for a long time, but with the start of the new year I decided to step outside my comfort zone and start experimenting with actual hardware using the Raspberry Pi and Python (TensorFlow, Keras, and OpenCV). I began this project to advance my understanding of programming and machine learning. I am intensely interested in artificial intelligence and machine learning, but it was hard to get hands-on experience with these things by just playing around with software. I like to connect things to the physical world and see them actually working and moving around. I came across a community devoted to building an open source DIY autonomous RC car, with the goal of teaching people the basics of machine learning and computer vision. The Donkey Car is a popular platform that gives folks who just want to learn more about autonomous vehicles an easy way to get up and running quickly.
Components
The actual hardware needed is very minimal and can be found online at relatively low cost. To get up and running with your own AI-piloted car you’ll need:
Small-scale RC car
Raspberry Pi
Wide-angle Raspberry Pi camera
MicroSD card
Servo driver board
USB battery pack
Jumper wires
The Donkey Car can take advantage of several modern single-board computers (SBCs), mainly the Raspberry Pi and the Nvidia Jetson Nano. The Jetson Nano was developed specifically for machine learning, machine vision, and video processing applications, so it might be a future upgrade. For now I’m using the Raspberry Pi 4B, which has more than enough processing power. To power the Raspberry Pi we’ll use a USB power brick. The Pi will receive input from a camera at the front of the car, and output signals to a servo driver board to control the steering and throttle using the general-purpose input/output (GPIO) pins on the Pi.
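Under the hood, the servo driver board turns a normalized steering (or throttle) command into a PWM pulse by simple linear interpolation between two calibrated endpoints. A minimal sketch of that mapping (the pulse values below are hypothetical placeholders; real values must be found by calibrating your own car):

```python
def angle_to_pulse(angle, left_pulse=460, right_pulse=290):
    """Map a normalized steering angle in [-1, 1] to a PWM pulse width.

    left_pulse/right_pulse are made-up example values; the real numbers
    depend on your servo and come from a calibration step.
    """
    angle = max(-1.0, min(1.0, angle))  # clamp to the valid range
    # Linear interpolation between the two calibrated endpoints.
    return round(left_pulse + (angle + 1) / 2 * (right_pulse - left_pulse))

print(angle_to_pulse(-1.0))  # full left  -> 460
print(angle_to_pulse(0.0))   # center     -> 375
print(angle_to_pulse(1.0))   # full right -> 290
```

The same idea applies to the throttle channel, just with different endpoints for forward, stopped, and reverse.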
Software Setup
Getting the hardware set up is the easy part; everything snaps together like Lego bricks. Next we have to install software on the host PC and the Raspberry Pi. Setting up Donkey Car is fairly straightforward, although you do need a basic understanding of UNIX and of working with a Raspberry Pi. The guide on the official Donkey Car website is really great and will walk you through most of the steps.
Installing TensorFlow with GPU support on my Windows PC was the only area that caused me some headaches. There are quite a few steps involved in getting TensorFlow to work with the Nvidia GPU I’m using, and it is easy to misstep and break the installation. A full guide to installing TensorFlow GPU support on Windows can be found here. However, after some digging online I found a much easier installation method, which determines the correct versions of CUDA and cuDNN and installs them without changing any PATH environment variables. First, create a new environment in Anaconda, then use conda-forge to install the TensorFlow GPU package:
conda create -n [name] python=[version]
conda activate [name]
conda install -c conda-forge tensorflow-gpu
Donkeycar Library
The Donkeycar library is a high level Python library created to train and pilot autonomous vehicles with neural networks. The software has everything necessary to control the car’s hardware, and also the utilities required to train and build a neural network model with the host PC. Once the base software is installed and running, it is easy to modify it and include additional “parts” to take advantage of auxiliary sensors, actuators, and neural networks other than the standard Tensorflow library.
There is one main drive loop that controls the functions of the car, which makes it very easy to add or change parts.
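The “parts” idea can be illustrated with a stripped-down sketch: each part exposes a run method, and the drive loop calls every part in turn, wiring its outputs into a shared memory dictionary. The class and method names here are simplified stand-ins for illustration, not the actual donkeycar API:

```python
class ConstantThrottle:
    """A trivial 'part': every loop iteration it outputs a fixed throttle."""
    def run(self):
        return 0.3

class Vehicle:
    """Minimal drive loop: call each part and store its outputs in shared memory."""
    def __init__(self):
        self.parts = []
        self.mem = {}

    def add(self, part, outputs):
        self.parts.append((part, outputs))

    def update(self):
        # One pass of the drive loop.
        for part, outputs in self.parts:
            result = part.run()
            if not isinstance(result, tuple):
                result = (result,)
            for name, value in zip(outputs, result):
                self.mem[name] = value

v = Vehicle()
v.add(ConstantThrottle(), outputs=['throttle'])
v.update()
print(v.mem['throttle'])  # -> 0.3
```

Swapping the camera, pilot, or actuators then just means registering a different part with the vehicle, without touching the loop itself.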
Training a Model
Getting ready to collect training data for the neural network involves a bit of preparation and setup to ensure that high-quality, controlled data is being collected. To begin my data collection, I made a small track in a spare bedroom with some painter’s tape, and made sure there was adequate lighting as well. Collecting the training data is fairly simple: you connect to the vehicle via SSH, open a web browser, and navigate to your Pi’s IP address on port 8887. The next step is simply to drive around the track while the car records images along with the throttle and steering positions.
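Each record pairs a captured image with the driver’s inputs at that instant, stored as JSON. A small sketch of writing and reading back one such record (the field names below are illustrative and may not match the exact keys donkeycar writes):

```python
import json
import os
import tempfile

# A hypothetical record pairing an image filename with the driver's inputs.
record = {
    "cam/image_array": "1_cam-image_array_.jpg",
    "user/angle": -0.25,    # normalized steering, -1 (left) to 1 (right)
    "user/throttle": 0.4,   # normalized throttle
}

path = os.path.join(tempfile.gettempdir(), "record_1.json")
with open(path, "w") as f:
    json.dump(record, f)

# Reading the record back, as the training script would.
with open(path) as f:
    loaded = json.load(f)
print(loaded["user/angle"], loaded["user/throttle"])  # -> -0.25 0.4
```

During training, the steering and throttle values become the labels the network learns to predict from the image.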
The data collected consists of a JSON file with the throttle and steering positions, along with the captured image. Once enough training data has been generated (between 5,000 and 20,000 images) you can send the data stored on the Raspberry Pi back to the host PC for training. Once all the data was in the correct directory, I used Miniconda, activated the Donkey environment, and ran the training script. It is easy to use the default settings and simply run the training script, but you can also pass in arguments to change the type of model. It took 15-20 minutes to finish training my model.
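Before the training script fits the model, the records are typically shuffled and split into training and validation sets. A minimal sketch of that step (the 80/20 split shown here is a common default; treat the helper function itself as illustrative rather than donkeycar’s actual code):

```python
import random

def split_records(records, train_frac=0.8, seed=42):
    """Shuffle records reproducibly, then split into train/validation lists."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical record filenames standing in for a real tub of data.
records = [f"record_{i}.json" for i in range(10000)]
train, val = split_records(records)
print(len(train), len(val))  # -> 8000 2000
```

The validation set is what lets you watch for overfitting: if training loss keeps falling while validation loss rises, the model is memorizing your track rather than learning to drive it.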
Results - A.I. Piloted Car
My first model worked well enough to surprise me, though it didn’t perform particularly well. I was mostly concerned with getting a proof of concept finished to make sure I had set everything up correctly, and I didn’t expect much. Even so, I was very surprised at how well the car performed given my sloppy training data.
With my next attempt, I spent much more time following a careful driving routine and ensuring all the data was high quality. Any time I made a mistake, I was sure to delete those records before continuing to record. I also collected about twice the amount of training data, and the results were much better. In the future I am going to add additional sensors such as sonar, and train a behavioral model to attempt lane changing and better throttle control.