Our previous guide covered installing PyTorch. So why does TensorFlow need a separate guide of its own? Would not running a few commands install TensorFlow on that same setup? In practice there are differences to consider on the current Ubuntu server release: a slightly buggy package can bring the server down, symlink workarounds somehow work, and most humans forget exactly what they did to fix things.
We have already discussed GPU computing as the minimum theoretical background, and in an earlier guide we showed how to install the Nvidia CUDA toolkit on OS X. Here is a practical guide on how to install TensorFlow on an Ubuntu 18.04 server with an Nvidia GPU. The installation demands a server with an Nvidia graphics card; such dedicated servers are available for various purposes, including gaming. Installing on localhost for intense, time-consuming work is not recommended, for the sake of the device's lifespan. The graphics card must support at least Nvidia compute capability 3.0 to be useful for more than just TensorFlow.
Steps To Install TensorFlow on Ubuntu 18.04 Server
We are assuming a 64-bit OS and a card like the GeForce 740M. SSH into the server, then update and upgrade:

```bash
apt update -y
apt upgrade -y
```
Run this long command to install the Python libraries and build dependencies:

```bash
sudo apt install openjdk-8-jdk git python-dev python3-dev python-numpy python3-numpy python-six python3-six build-essential python-pip python3-pip python-virtualenv swig python-wheel python3-wheel libcurl3-dev libcupti-dev
```
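After the packages install, a quick sanity check confirms that NumPy, which both the TensorFlow build and runtime rely on, is importable and working (run it under python3 as well, since both interpreters received NumPy above):

```python
# Minimal sanity check for the NumPy package installed above
import numpy as np

a = np.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]
print(a.sum())                  # 15
```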
Also run:

```bash
sudo apt install libcurl4-openssl-dev
```
We can see which graphics card hardware is installed by running:

```bash
sudo lshw -C display | grep product
```
We need the Nvidia driver installed. We can check over SSH whether a graphics driver is present, and which one:

```bash
nvidia-smi
```
Here is Ubuntu's graphics-drivers PPA; browse it:

https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa
nvidia-graphics-drivers-396 is the newest but probably not well tested yet. We can add the PPA and install the nvidia-graphics-drivers-390 driver instead:

```bash
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt upgrade
ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
```
If autoinstall somehow does not work, then run:

```bash
sudo apt install nvidia-390
```
Now run the command again:

```bash
nvidia-smi
```
This time you should get meaningful output. We should hold that driver version so it does not get upgraded:

```bash
sudo apt-mark hold nvidia-driver-390
```
Install the Linux headers matching the running kernel:

```bash
sudo apt install linux-headers-$(uname -r)
```
We need gcc, g++ and related compilers for the next steps:

```bash
apt install gcc g++ gcc-6 g++-6 gcc-4.8 g++-4.8
# if the gcc-4.8 package is not found, run:
# sudo add-apt-repository ppa:ubuntu-toolchain-r/test
# sudo apt update
# sudo apt install gcc-4.8 g++-4.8
```
Now we have to install the CUDA toolkit packages from apt:

```bash
apt install nvidia-cuda-toolkit libcupti-dev
```
Reboot:

```bash
sudo reboot
```
Now we need to install Nvidia's own CUDA 9.0 toolkit as well. Download the runfile installer from:

https://developer.nvidia.com/cuda-toolkit

Then run (the --toolkit flag installs only the toolkit, skipping the bundled display driver, which would conflict with the 390 driver installed above):

```bash
cd Downloads/
sudo sh cuda_9.0.176_384.81_linux.run --override --silent --toolkit
```
Next, you will need to install cuDNN and NCCL. For that, as with the older PyTorch guide, you need to log in with an Nvidia account, which is easy. You will get a link to something like cuDNN v7.1.x Library for Linux. Download that archive and upload it to the server via FTP. The URLs are:

https://developer.nvidia.com/rdp/cudnn-download
https://developer.nvidia.com/nccl
Find the directory where you installed CUDA; the steps below copy the files over to /usr/local/cuda/. Move the uploaded archive next to that directory and run these commands (be careful about the version-numbered file name; below is an example of the format):

```bash
tar -xzvf cudnn-9.0-linux-x64-v7.1.tgz
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
```
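Before moving on, it is worth confirming which cuDNN version the copied header actually declares. The sketch below reads the version defines out of a cudnn.h-style header; on the server you would point it at /usr/local/cuda/include/cudnn.h, while here it runs against a mock header (7.1.4 is an assumed example version) so the logic is self-contained:

```shell
# Extract MAJOR.MINOR.PATCHLEVEL from a cudnn.h-style header.
# On the real server, replace "$header" with /usr/local/cuda/include/cudnn.h
header=$(mktemp)
cat > "$header" <<'EOF'
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 1
#define CUDNN_PATCHLEVEL 4
EOF
awk '/CUDNN_(MAJOR|MINOR|PATCHLEVEL)/ {v[$2]=$3}
END {print v["CUDNN_MAJOR"] "." v["CUDNN_MINOR"] "." v["CUDNN_PATCHLEVEL"]}' "$header"
# prints 7.1.4 for this mock header
rm -f "$header"
```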
Copying the files this way saves space and avoids warnings from apt. Open a profile file such as ~/.bashrc:

```bash
nano ~/.bashrc
```
Add these lines:

```bash
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
```
Reload:

```bash
source ~/.bashrc
sudo ldconfig
echo $CUDA_HOME
```
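To see what the exported LD_LIBRARY_PATH line actually does: it is a colon-separated list of directories the dynamic linker searches, and the .bashrc entry appends the CUDA and CUPTI library directories to whatever value was already set. A standalone sketch of the mechanics (the starting value /usr/lib is an assumed placeholder):

```shell
# Simulate the .bashrc append, starting from an assumed prior value
LD_LIBRARY_PATH="/usr/lib"
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
echo "$LD_LIBRARY_PATH"
# prints: /usr/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
```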
Install Bazel (Keras is installed here too, for later use):

```bash
sudo apt install curl
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
sudo apt update -y
sudo apt upgrade -y
sudo apt install bazel
sudo apt upgrade bazel
pip install keras
```
That is it for the prerequisites. Now clone the TensorFlow source and check out a release branch:

```bash
cd ~
git clone https://github.com/tensorflow/tensorflow
cd ~/tensorflow
# check the current release branch number from the browser
git checkout r1.11
```
Create the build configuration by running:

```bash
./configure
```
You'll get this kind of interactive output (answers shown after the prompts):

```
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3
Do you wish to build TensorFlow with jemalloc as malloc support? [Y/n]: Y
Do you wish to build TensorFlow with Google Cloud Platform support? [Y/n]: N
Do you wish to build TensorFlow with Hadoop File System support? [Y/n]: N
Do you wish to build TensorFlow with Amazon S3 File System support? [Y/n]: N
Do you wish to build TensorFlow with Apache Kafka Platform support? [y/N]: N
Do you wish to build TensorFlow with XLA JIT support? [y/N]: N
Do you wish to build TensorFlow with GDR support? [y/N]: N
Do you wish to build TensorFlow with VERBS support? [y/N]: N
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: N
Do you wish to build TensorFlow with CUDA support? [y/N]: Y
Please specify the CUDA SDK version you want to use, e.g. 7.0. [Leave empty to default to CUDA 9.0]: 9.0
Please specify the location where CUDA 9.1 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda
Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7.0]: 7.1
Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda
Do you wish to build TensorFlow with TensorRT support? [y/N]: N
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 5.0] 3.0
Do you want to use clang as CUDA compiler? [y/N]: N
Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]: /usr/bin/gcc-4.8
Do you wish to build TensorFlow with MPI support? [y/N]: N
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]: -march=native
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
```
Build TensorFlow:

```bash
sudo bazel build --config=opt --config=cuda --action_env="/usr/local/cuda/lib64" //tensorflow/tools/pip_package:build_pip_package
```
Last steps:

```bash
bazel-bin/tensorflow/tools/pip_package/build_pip_package tensorflow_pkg
cd tensorflow_pkg/
sudo pip3 install tensorflow-<name_of_generated_file>.whl
```
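The generated wheel's name encodes the TensorFlow version and Python tag, so globbing for it is easier than typing it out. The file name below is only an illustration of the format (1.11.0 and cp36 assumed from the r1.11 branch and Ubuntu 18.04's Python 3.6), demonstrated against a mock directory so the snippet is self-contained:

```shell
# Mock the tensorflow_pkg directory so the glob is demonstrable anywhere;
# on the server you would simply run: sudo pip3 install tensorflow_pkg/*.whl
d=$(mktemp -d)
touch "$d/tensorflow-1.11.0-cp36-cp36m-linux_x86_64.whl"
for f in "$d"/*.whl; do basename "$f"; done
rm -rf "$d"
```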
Check whether your build is working by changing into another directory and running python3:

```python
import tensorflow as tf
hello = tf.constant('Hello World!')
sess = tf.Session()
print(sess.run(hello))
```
You'll get Hello World! as output. TensorFlow also has example models:

https://github.com/tensorflow/models
You can run:

```bash
git clone https://github.com/tensorflow/models.git
cd models/tutorials/image/imagenet
python classify_image.py
```
That is about it for the basic setup and testing.