Setting Up a Computer for Deep Learning

Deep learning is all the rage these days, achieving ever better results on various machine learning problems. While the field is evolving rapidly, it is also becoming more and more accessible to experimentation. Anyone can download state-of-the-art models and run them using bleeding-edge research software, thanks to most leading implementations being open source and/or available at no cost.

Though cloud providers offer easy solutions for environment setup and computation via GPU instances, as well as preconfigured virtual machines and containers, there's something to be said for running your own rig with capable hardware.

In this post, I'll detail the setup of a computer for deep learning purposes, step by step. It assumes that the reader is somewhat familiar with software relevant to deep learning.

There are many great resources out there, but the two I used the most while setting up my own machine were Sai Soundararaj's guide and Roelof Pieters' blogpost.


I won't delve too much into hardware details, but in order to achieve decent performance you'll need a relatively new NVIDIA GPU, as most of the GPU-accelerated software is built upon their CUDA framework.

You can check out a short summary of information relevant to the choice of components here.

In my own case, I went for this computer, which has the following specs:

Motherboard: ASUS B150M-PLUS, Socket-1151
CPU: Intel Core i5-6500 Skylake Processor
RAM: Kingston ValueRam DDR4 2133MHz 16GB
GPU: MSI GeForce GTX 1060 6GB OC
Hard drive: Samsung PM961 SSD 256GB M.2 NVMe 2800/1100MB/s
Extensions: ASUS PCE-N15 N300 Wireless Adapter
Power supply: Cooler Master B500 V2 KOMPLETT Edition
Case: Komplett Carbide SPEC-03 Midi Tower

The parts that reference "Komplett" are just OEM items - rebranded versions made for the retailer, e.g. differently colored or featuring the retailer's logo.

The GTX 1060 GPU isn't top of the line when it comes to processing power - and it doesn't support SLI - but it's an OK starting point, not to mention reasonably priced.


The most natural choice of OS for these sorts of things is Linux. I chose the Ubuntu distribution, as it comes with a lot of practical software out of the box. It is a very popular distro, and it's generally not very difficult to find solutions for any problems that may arise with it online.

Additionally, most relevant machine learning software and packages provide installation instructions for Ubuntu.

If you also choose Ubuntu, you should go for the latest LTS (Long Term Support) version, which is 16.04 at the time of writing.


First off, you'll need to add the proprietary GPU drivers PPA.

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update

Then you'll need to install the latest NVIDIA drivers. This is nvidia-375 at the time of writing, but you can run apt-cache search nvidia and see if any later versions exist.

sudo apt-get install nvidia-375
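After installing (and rebooting), it's worth confirming that the driver actually loaded. A small sketch of a check, using the nvidia-smi utility that ships with the driver:

```shell
# Check that nvidia-smi (installed alongside the driver) is available,
# then print GPU model, driver version and memory usage.
driver_present() {
  command -v nvidia-smi >/dev/null 2>&1
}

if driver_present; then
  nvidia-smi
else
  echo "nvidia-smi not found - the driver did not install correctly"
fi
```

If nvidia-smi prints a table listing your GTX 1060 and the driver version, you're good to go.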

CUDA and related libraries

You'll also be needing the NVIDIA CUDA Toolkit – which enables GPU-acceleration for non-graphics applications – and a library called cuDNN, which contains deep neural network primitives. The latter purportedly gives a speedup of at least 44%, and some users report over 6x speedups with Torch and Caffe.

For CUDA, go to the CUDA download page and select "Linux", "x86_64", "Ubuntu", "16.04", and "deb(local)", as seen in the image below.

CUDA download page

When the download is finished, just open the file (e.g. double-click it). This will present an Ubuntu Software window; click "Install".

Then go to the download location in a terminal, and type the installation instructions from the download page:

sudo dpkg -i cuda-repo-ubuntu1604-8-0-local_8.0.44-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda

You will then need to add CUDA to your path:

echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
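Before moving on, you can sanity-check that the new entries actually ended up in your environment. A small helper sketch (the /usr/local/cuda prefix is the default used by the .deb package):

```shell
# Returns success if directory $1 appears in the colon-separated list $2
path_contains() {
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *) return 1 ;;
  esac
}

if path_contains /usr/local/cuda/bin "$PATH"; then
  echo "CUDA binaries are on PATH"
else
  echo "CUDA binaries missing from PATH - re-check ~/.bashrc"
fi
```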

To get cuDNN, you must register an account with NVIDIA. It may take up to a couple of days for your account to be approved.

cuDNN page

After you have logged in and downloaded cuDNN, go to the download location in a terminal, and type:

tar xvzf cudnn-8.0-linux-x64-v5.1.tgz
cd cuda
sudo cp -P include/cudnn.h /usr/local/cuda/include
sudo cp -P lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*

Now reboot your machine (sudo shutdown -r now).
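After the reboot, you can verify that the cuDNN headers were copied into place by reading the version macros out of cudnn.h, for instance with a small awk sketch like this:

```shell
# Print the major version number defined in a given cudnn.h header
cudnn_major() {
  awk '/#define CUDNN_MAJOR/ { print $3; exit }' "$1"
}

CUDNN_H=/usr/local/cuda/include/cudnn.h
if [ -f "$CUDNN_H" ]; then
  echo "cuDNN major version: $(cudnn_major "$CUDNN_H")"
else
  echo "cudnn.h not found at $CUDNN_H"
fi
```

For the 5.1 release downloaded above, this should report major version 5.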


You can now optionally download and install OpenBLAS, an open-source implementation of BLAS that speeds up common linear algebra operations.

sudo apt-get install git gfortran
mkdir /tmp/git
cd /tmp/git
git clone https://github.com/xianyi/OpenBLAS.git
cd OpenBLAS
make FC=gfortran -j $(($(nproc) + 1))
sudo make PREFIX=/usr/local install
echo 'export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
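The -j flag in the make invocation above controls the number of parallel build jobs; running one more job than there are CPU cores keeps the processor busy while other jobs wait on disk I/O. The arithmetic expands like this:

```shell
# nproc reports the number of available cores; add one extra job
JOBS=$(($(nproc) + 1))
echo "building with $JOBS parallel jobs"
```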


Install the latest version of the Anaconda Python 2 distribution, like so (the 4.2.0 installer was the latest at the time of writing; substitute the most recent Anaconda2 installer listed at https://repo.continuum.io/archive/ if a newer one exists):

curl -o ~/Downloads/Anaconda2-4.2.0-Linux-x86_64.sh https://repo.continuum.io/archive/Anaconda2-4.2.0-Linux-x86_64.sh

bash ~/Downloads/Anaconda2-4.2.0-Linux-x86_64.sh

Make sure you answer 'yes' when the installer asks to prepend the Anaconda2 install location to your PATH in its final step. Then run source ~/.bashrc to make Anaconda available.


First, create a conda environment called learning, then activate it:

conda create -n learning python=2.7
source activate learning

You can now install some deep learning frameworks...


TensorFlow

sudo apt-get install python-dev
# Set TF_BINARY_URL to the GPU-enabled wheel for your platform, as listed on
# the TensorFlow install page; this one is the Python 2.7 Linux GPU wheel:
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-0.12.1-cp27-none-linux_x86_64.whl
pip install --ignore-installed --upgrade $TF_BINARY_URL


Theano

sudo apt-get install libopenblas-dev
conda install numpy scipy mkl nose sphinx pydot-ng
echo 'export CUDA_ROOT=/usr/local/cuda' >> ~/.bashrc
source ~/.bashrc
pip install git+https://github.com/Theano/Theano.git

cd /tmp
git clone https://github.com/Theano/libgpuarray.git
cd libgpuarray
mkdir Build
cd Build
# you can pass -DCMAKE_INSTALL_PREFIX=/path/to/somewhere to install to an alternate location
sudo apt-get install cmake
cmake .. -DCMAKE_BUILD_TYPE=Release # or Debug if you are investigating a crash
sudo make install
cd ..
sudo ldconfig
# Write the Theano config; the -D_FORCE_INLINES flag works around a glibc bug
cat > ~/.theanorc <<- EOM
[global]
floatX = float32
device = gpu0

[nvcc]
fastmath = True
flags=-D_FORCE_INLINES
EOM
python -c "from theano import *"
echo 'Theano config written to ~/.theanorc'


Keras

pip install h5py
pip install keras
python -c "import keras"
echo 'You can specify your Keras backend (tensorflow|theano) in ~/.keras/keras.json after Keras has run at least once.'
echo "By default, TensorFlow is used."
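As an example, a ~/.keras/keras.json that switches to the Theano backend might look like the following (written as a shell heredoc to match the rest of this post; note that it overwrites any existing config, and that the "th" dim ordering is the one matching the Theano backend in Keras 1.x):

```shell
# Point Keras at the Theano backend (Keras 1.x config format)
mkdir -p ~/.keras
cat > ~/.keras/keras.json <<- EOM
{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
EOM
```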


pip install --upgrade


conda install matplotlib jupyter pandas scikit-image scikit-learn

Now deactivate the env by typing source deactivate.

You can use this environment as a base for new experiments, and clone it using conda create --name <NAME> --clone learning.

Remote control

I've also set up TeamViewer on my own machine, so that I can get a quick graphical interface remotely.

To start TeamViewer at login, find the options menu item and check the relevant box.

If you want to be able to reboot remotely, you can edit the file /etc/lightdm/lightdm.conf.d/50-myconfig.conf, and add

[SeatDefaults]
autologin-user=<your-username>

(replacing <your-username> with your actual user) to autologin (i.e. skip password entry) on boot.

Wrapping up

If you're setting up your own machine or environment for deep learning, I hope you've found this post useful.

Regrettably, I haven't implemented support for comments on my site at the time of writing, but if you spot any errors, or omissions that would be helpful to include, please feel free to contact me.

I also hope to be blogging a bit more about machine learning experiments going forward, so stay tuned.
