GPU Accelerated Docker Containers

This week Ryan is back with a blog that combines two of our recent topics - machine learning and Docker! Find out how easy it is to train neural networks in a Docker container using your GPU, and pick up a couple of good book recommendations along the way.

As part of my continuing adventures into machine learning, I recently purchased a couple of great books.

The State of the Art

Having worked through a fantastic foundational and quite mathematical machine learning course (now archived) by Andrew Ng on Coursera, I was looking for something to give me an up-to-date and concise view of the bigger picture.

I wanted a reference for the different tools and techniques available, how they are applied and the kinds of problems they help solve.

I found The Hundred-Page Machine Learning Book by Andriy Burkov, which provides exactly this, striking a great balance between brevity and packing in essential information.

Practical Application

With the mathematical foundations and bigger picture covered, the next book is very much aimed at getting hands-on, practical experience applying machine learning techniques to real-world problems.

So much so, in fact, that it is in the name: Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow, Third Edition by Aurélien Géron.

The Third Edition was published on October 31st 2022, the same day that I received it in the post!

Being so up-to-date, it even includes entries discussing cutting-edge technologies such as image-generating diffusion models like DALL-E.

Training on the GPU

Being such a practically focussed book, it comes accompanied by an extensive repository of Jupyter notebooks, which anyone can access.

Many of these notebooks train neural networks using TensorFlow, which can run on either a CPU or a GPU.

GPUs are highly optimised for the parallel matrix operations at the heart of neural network training, so they are the natural choice if you have access to one.

The notebooks can be opened and executed in your browser using the cloud service Google Colab, which is the recommended approach for most users given it is free and very easy to get started.

What if, as in my case, you have a ridiculously overpriced graphics card and are looking to validate your purchase? 😸

In that case, you may wish to run the notebooks directly on your own machine, making use of that hardware.

To run TensorFlow with GPU acceleration you will, in most cases, need a modern NVIDIA GPU that supports CUDA. Alternatively, you can use almost any modern CPU; it will just be slower for many operations. See the hardware requirements for more info.
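Before going any further, it is worth checking what your machine actually reports. A quick sanity check (assuming the NVIDIA driver is installed; on an up-to-date Windows driver, `nvidia-smi` is also exposed inside WSL automatically):

```shell
# Ask the driver which GPU it sees, plus the driver version and total memory.
nvidia-smi --query-gpu=name,driver_version,memory.total --format=csv
```

If this prints your card's details, TensorFlow should be able to find it too.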

Setting Up

If you decide to run the notebooks on your own machine, you have two options.

  1. Install all of the software (Python, TensorFlow, Scikit-Learn, NumPy, Matplotlib, etc.) plus the required GPU libraries (the CUDA runtime, cuDNN, etc.) on your machine and configure it all yourself.

  2. Build or download a Docker image with everything set up for you, and launch it with a single command.
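Option 2 really can be a single command. As an illustrative sketch using the official TensorFlow GPU Jupyter image (the book's repository ships its own Docker setup, so treat this image name and mount path as assumptions):

```shell
# Run a Jupyter server with GPU-enabled TensorFlow, passing the GPU through
# to the container and mounting the current directory for your notebooks.
docker run --gpus all -p 8888:8888 \
    -v "$PWD":/tf/notebooks \
    tensorflow/tensorflow:latest-gpu-jupyter
```

The `--gpus all` flag is what hands the container access to your GPU; without it, TensorFlow will silently fall back to the CPU.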

Obviously, the second option is much more appealing.

WSL, GPUs and Docker

The repository documentation provides instructions for using a Docker image with GPU acceleration enabled; however, they are listed as 'Linux (experimental)', and I am using Windows 11.

As you may know, Docker Desktop on Windows uses the Windows Subsystem for Linux (WSL 2), which ships a full Linux kernel with Windows and allows you to install and run distros seamlessly from your desktop.

Recently, NVIDIA started bundling CUDA support for WSL into its mainline driver packages, and Windows 11 now natively supports GPU acceleration for containers.

This means that everything essentially 'just works': make sure your drivers and WSL are up to date and you should be Good To Go™.
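One way to confirm this, before launching anything heavier, is to update WSL and then run `nvidia-smi` inside a throwaway CUDA container (the image tag below is illustrative; any recent CUDA base image should do):

```shell
# In PowerShell: make sure the WSL kernel is current.
wsl --update

# Then verify the GPU is visible from inside a container.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If you see the same GPU details as on the host, the pass-through is working.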

Once the container is up and running you can open a browser, following a link from the console output, and start exploring the included notebooks or creating your own.
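A quick way to confirm that TensorFlow itself can see the GPU is a one-liner in the container's Python environment (this uses the standard `tf.config` API; it prints an empty list if only the CPU is available):

```shell
# List the GPU devices TensorFlow has detected.
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```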

I did go down a bit of a rabbit hole installing the NVIDIA CUDA Toolkit for WSL on my Ubuntu distro, but this was (in retrospect, obviously!) unnecessary, as it is only needed if you want to compile new CUDA apps. There are also many out-of-date guides from when the process was more arduous, requiring preview drivers and the like, but these can now safely be ignored.


As you can see, getting set up with GPU acceleration in Docker on Windows 11 using WSL is easy, in fact pretty much seamless.

Using Docker makes creating, updating and deleting your development environment fast and simple, freeing you up to focus on learning.

The books I have mentioned are wonderful resources and I highly recommend them to those looking to get into the subject.