Machine learning is, apparently, increasingly becoming part of university curricula, and this is how I found myself faced with the requirement of running TensorFlow.
However, if you’re on a budget – which will often be the case if you’re a student – you might find yourself trying to run demanding software on older hardware. My PC still sports an AMD R9 280X, a card whose design dates back to 2011. The R9 280X, being a rebrand of the HD7970, is based on the GCN 1.0 architecture, unlike some of the other cards from the 2xx generation.
The disadvantage of the old design becomes clear when trying to run computational software on the GPU. While there used to be OpenCL support for this card under Linux, that support was dropped somewhere during the kernel 4.x era.
To make things a little more annoying, TensorFlow by default only supports CUDA – and therefore nVidia cards – for GPGPU acceleration. Luckily, there are solutions to all these limitations, if you’re willing to run some experimental drivers.
The AMDGPU Driver
The AMDGPU driver is the successor to the old open-source radeon driver. It offers support for Vulkan, which can improve performance of games running under Wine, and is generally more up-to-date. It is even still getting improvements for older graphics cards, so running it is generally desirable if you want to make optimal use of your older AMD GPU under Linux.
However, GPUs of the Southern Islands (GCN 1.0) generation, such as the HD7950, HD7970, R9 280 and R9 280X, are not supported by this driver by default. Some features are still missing from the driver, and support for these cards has been labeled experimental. While that may change in the future, for now, if you want to make use of this driver, you’ll have to add
radeon.si_support=0 amdgpu.si_support=1 to your kernel parameters. As far as I know, this only works for kernel versions 5.8 and up.
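On Ubuntu, kernel parameters are set through GRUB; a minimal sketch of the change (assuming the stock /etc/default/grub layout) looks like this:

```shell
# In /etc/default/grub, append the two switches to the existing line, e.g.:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.si_support=0 amdgpu.si_support=1"

sudo update-grub   # regenerate the GRUB configuration
sudo reboot        # amdgpu should bind to the card on the next boot
```

Afterwards, you can verify which driver the card is actually using with lspci -k (look for "Kernel driver in use: amdgpu").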
So, now that you have forced Linux to load the AMDGPU driver rather than the radeon driver, how can you gain GPU acceleration in applications such as TensorFlow? Luckily, Keras can run on backends other than TensorFlow, and PlaidML provides an OpenCL backend for Keras! Unfortunately, this will not immediately work with the old GCN 1.0 cards, as OpenCL is no longer officially supported for them.
This does not mean that getting OpenCL to work is very difficult, though – especially since I’ve already done all the hard work of finding out what to do for you. Start by installing the basic OpenCL libraries:
sudo apt install mesa-opencl-icd opencl-c-headers opencl-clhpp-headers opencl-headers ocl-icd-libopencl1
Next up, set up a basic environment for testing PlaidML:
mkdir plaidml-tests # Or any other folder name you like
cd plaidml-tests
python3 -m virtualenv ./venv
source venv/bin/activate
python -m pip install plaidml-keras plaidbench
Run plaidml-setup to select the device you want to use for running PlaidML. You’ll probably notice that your GPU is missing from the list. PlaidML needs OpenCL 1.2 to make use of your GPU, but if you run
clinfo at this point, you should get output saying that there is actually a proper OpenCL 1.2 device available!
Some additional steps are necessary to set up your AMD GPU for running with PlaidML. While the open-source AMDGPU driver includes OpenGL support, OpenCL support is actually proprietary. Officially, only devices that can run the AMDGPU-PRO driver – a user-space driver with proprietary AMD code that runs on top of AMDGPU – are supported for compute applications. Fortunately, getting it to work for our old cards is easy – even if I’m not exactly sure why it works.
Download the newest version of the AMDGPU-PRO package from the AMD website (20.45 at the time of writing) and unpack it. Now, rather than actually installing AMDGPU-PRO by running
amdgpu-pro-install and breaking your graphics driver, you only want to install the OpenCL components.
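For the 20.xx installers, an OpenCL-only, headless install is typically invoked with flags along these lines (the exact flags are an assumption on my part – check ./amdgpu-pro-install --help first):

```shell
# From inside the unpacked amdgpu-pro directory: install only the legacy
# OpenCL user-space components, without the display driver or DKMS module.
./amdgpu-pro-install --opencl=legacy --headless --no-dkms
```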
Installing just those components gives you everything needed to run OpenCL on your card, without actually breaking your graphics driver. Now isn’t that neat.
You might notice that this still installs some AMDGPU-PRO packages. This is true and actually something we want: without these components, OpenCL would not work. However, if you try to run a full installation of the AMDGPU-PRO driver, your system will become unbootable, because that driver does not actually support the old graphics cards.
After rebooting, if you run plaidml-setup again in your environment, you should be given the option to enable experimental devices, and your GPU should be among them, available for you to select. Try running
plaidbench keras mobilenet to get an idea of the performance.
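Once everything works, you don’t even have to modify your scripts to benefit: standalone Keras picks its backend from an environment variable, so you can route an existing training script (train.py is just a placeholder name here) through PlaidML like this:

```shell
# Tell standalone Keras to use the PlaidML backend for this invocation only.
KERAS_BACKEND=plaidml.keras.backend python train.py
```

Alternatively, calling plaidml.keras.install_backend() at the top of a script, before importing keras, achieves the same thing.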
Now, I don’t know if there’s just someone over at AMD who wants to secretly and unofficially still provide support for these old cards or whether it is a total accident that this happens to work. While I’d like to think the former is true, I’m just happy that it works.
How well does it work?
Well… The performance is not great.
| Device | FPS | Execution Time [s] |
|---|---|---|
| Intel Xeon E3 1231v3 (i7 4770) | 11.9 | 86.9 |
| AMD Radeon R9 280X | 107 | 14.0 |
| nVidia Quadro M1200 (GTX960M) | 230 | 7.54 |
My desktop’s GPU offers an almost tenfold performance increase over my CPU, which definitely makes GPU acceleration worthwhile. However, even when also running through PlaidML, my laptop’s GPU is much, much faster than my desktop card, even though in most other scenarios the R9 280X will easily outperform the GTX960M.
All in all, getting OpenCL to work on your old graphics card under Linux is really not that hard, once you’ve found some completely undocumented behaviour that makes it happen.
There’s a lot of old information on the internet – some of it even recommending fglrx – that says it can’t be done. My system is proof to the contrary. For reference, I use Ubuntu 20.04 with Linux kernel 5.8.0-40-generic, which is just the default kernel right now.
Still, newer graphics cards – even ones that seem far less powerful if you compare only gaming benchmarks – contain more advanced GPGPU features that software like PlaidML or TensorFlow can take advantage of, given the appropriate drivers, to provide huge speed gains. So while you’re on a budget, you can at least make use of your old GPU for machine learning; but if you want to get serious about it, it’s probably better to save up the money for a newer device to work with.