## Install GPU versions of Tensorflow and Pytorch under Windows in 2021

Recently, the Bitcoin surge has faded and graphics card prices have dropped, so keep an eye on the market; when the time is right, you can pick up a card for deep learning. Most readers found me through the course-project tutorial series, which used TensorFlow to train several object-classification models, and the trained models were released along with the open-source code. Some readers reported that running the training themselves was quite slow. In most cases that is because they were running in a CPU environment; training on a GPU is roughly 10 times faster than on a CPU.

!!! Note: for deep learning, be sure to choose an Nvidia graphics card.

Graphics card brands fall into two camps: AMD and Nvidia. For deep learning, you must choose an Nvidia card. AMD cards offer good value for money, but their deep-learning support is poor, so before buying, confirm that the card is from Nvidia. Below is Nvidia's logo.

## Get to know the graphics card ( GPU )

Before buying a graphics card, let's use an Nvidia card as an example to explain what the model number means. `GALAXY` is the board manufacturer, `GeForce` is the product line, and `GTX` indicates the card's tier. The number is usually four digits: the first two give the generation (`16` means a 16-series card), the third digit gives the performance level within that generation (higher is better), and the last digit is usually 0. Some models carry a suffix: `SE` marks a cut-down version, while `Ti` and `Super` mark enhanced versions; for example, the 1660 Ti is an enhanced 1660. `6G` means the card has 6 GB of video memory, and the flashy name at the end is set by the manufacturer. It is mostly a marketing gimmick and can be ignored.
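The naming rules above can be sketched as a toy parser. The `decode_model` function and its field names are my own invention for illustration, not any official Nvidia naming scheme:

```python
# Toy sketch: decode an Nvidia model string like "GALAXY GeForce GTX 1660 Ti 6G"
# using the rules described above. Illustrative only, not an official spec.
import re

def decode_model(name: str) -> dict:
    m = re.search(r"(\d{4})\s*(Ti|Super|SE)?", name)
    if not m:
        raise ValueError(f"no model number found in {name!r}")
    number, suffix = m.group(1), m.group(2)
    return {
        "generation": number[:2],        # first two digits: generation, e.g. "16"
        "tier": number[2],               # third digit: performance level
        "variant": suffix or "standard", # Ti/Super = enhanced, SE = cut-down
    }

print(decode_model("GALAXY GeForce GTX 1660 Ti 6G"))
# {'generation': '16', 'tier': '6', 'variant': 'Ti'}
```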

It is also important to learn to read a card's spec sheet: architecture, process node, stream processors, core clock, memory clock, memory bus width, and memory capacity. As an example, here are the specs of the 3090.

1. Architecture: like the layout of a road network; the better the layout, the smoother things run.
2. Process node: the smaller the node, the higher the precision and the more performance the chip can deliver.
3. Rasterizers and stream processors: like the workforce; the more workers, the stronger the execution.
4. Core clock: like a sports car's 0-100 acceleration; it determines how quickly the card responds.
5. Memory clock: like a speed-limit sign; it determines the maximum operating speed.
6. Memory bus width: like the number of lanes; it determines how much data can move at once.
7. Memory capacity: like a road's load limit; it determines the maximum amount the card can carry.

Take PUBG as an example: the game is heavy on video memory, because map data is loaded into VRAM, and the more detailed the 3D assets, the more video memory is required. For deep learning, two specs matter most: video memory capacity and the number of CUDA cores. The bigger both are, the better.

If you have a desktop, you can simply buy a discrete graphics card and plug it into the motherboard; just check that your power supply can feed it.

If you don't have a desktop, you can also consider a laptop with an Nvidia graphics card; the Lenovo Legion series is a good choice.

## Install graphics driver

The first step is to install the graphics driver, which you download from Nvidia's official website. First, check your card's model in Device Manager. Here is mine.

After the installation is complete, restart the computer and run `nvidia-smi` in cmd. Output like the following means the driver is installed correctly.
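If you'd rather script that check than eyeball nvidia-smi's output, the same information can be queried programmatically. A sketch that degrades gracefully on machines without an Nvidia driver (the `gpu_summary` helper is my own):

```python
# Sketch: script the driver check instead of reading nvidia-smi by eye.
# Returns None when no Nvidia driver/CLI is present, so it is safe anywhere.
import shutil
import subprocess

def gpu_summary():
    """One line per GPU: name, driver version, total video memory."""
    if shutil.which("nvidia-smi") is None:
        return None  # nvidia-smi is not on PATH: driver not installed
    result = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,driver_version,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return result.stdout.strip() or None

print(gpu_summary())  # e.g. "GeForce RTX 3090, 456.71, 24576 MiB", or None
```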

## Install Anaconda

Longtime readers should already know how to use Anaconda and PyCharm; new readers can simply follow the tutorial below.

How to configure the virtual environment of anaconda in pycharm - Dejahu's Blog - CSDN Blog

After the installation is complete, run `conda` in cmd. If the following information is printed, Anaconda is installed correctly.

After the installation is complete, be sure to switch to a domestic mirror to speed up downloads of third-party libraries. Run the following commands in cmd.

```shell
conda config --remove-key channels
conda config --set show_channel_urls yes
pip config set global.index-url https://mirrors.ustc.edu.cn/pypi/web/simple
```

## Install the GPU version of Tensorflow


### Create and activate a virtual environment

Open cmd and create a virtual environment first. Run the following two commands to complete the creation and activation of the virtual environment.

```shell
conda create -n dejahu-tf python==3.7.3
conda activate dejahu-tf
```

### Install

We first need to install CUDA and cuDNN using conda.

```shell
conda install cudatoolkit=10.1
conda install cudnn==7.6.5
```

Then use pip to install the GPU build of TensorFlow.

```shell
pip install tensorflow-gpu==2.3.0
```

### Test if GPU is available

Now test whether the GPU is available from the command line. First run `python` to enter the Python interpreter.

Enter the following two statements; if the output is `True`, the GPU can be used.

```python
import tensorflow as tf
print(tf.test.is_gpu_available())
```

If your output is similar to mine, you can happily use the GPU version of TensorFlow.

With the GPU build of TensorFlow installed, the GPU is used by default; you don't need to specify it in your code.
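If you do want to inspect or pin devices explicitly, TensorFlow 2 exposes them through `tf.config`. A minimal sketch, assuming the `tensorflow-gpu==2.3.0` install from above:

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means it will run on the CPU.
gpus = tf.config.list_physical_devices("GPU")
print(gpus)

# Optionally pin a computation to a specific device.
with tf.device("/GPU:0" if gpus else "/CPU:0"):
    x = tf.random.normal((1024, 1024))
    y = tf.matmul(x, x)
print(y.device)
```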

## Install the GPU version of Pytorch


### Create and activate a virtual environment

Open cmd and create a virtual environment first. Run the following two commands to complete the creation and activation of the virtual environment.

```shell
conda create -n dejahu-torch python==3.7.3
conda activate dejahu-torch
```

### Install

Here we use conda to install the GPU build of PyTorch, which is very convenient: just enter the following command in the activated virtual environment. This installs the latest version of PyTorch.

```shell
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
```

If you have a 30-series card, it needs CUDA 11 support; run the following command instead.

```shell
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
```

If you need to pin specific versions, do it like this.

```shell
conda install pytorch==1.5.0 torchvision==0.6.0 cudatoolkit=10.2 -c pytorch
```

### Test if GPU is available

Now test whether the GPU is available from the command line. First run `python` to enter the Python interpreter.

Enter the following two statements; if the output is `True`, the GPU can be used.

```python
import torch
print(torch.cuda.is_available())
```

If your output is similar to mine, you can happily use the GPU version of PyTorch.

Note, however, that PyTorch does not pick the GPU automatically: you specify a device and move your model and tensors to it with the `to()` method.
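A minimal sketch of that pattern (the `Linear` layer here is just a stand-in for any model):

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)  # move the model's parameters
x = torch.randn(8, 4).to(device)          # move the input tensor
y = model(x)                              # the forward pass runs on `device`
print(y.device)
```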

## Backup plan

In addition, network issues may require you to install CUDA and cuDNN manually. This method requires the CUDA and cuDNN versions to match, so it is not recommended; the commands are as follows.

```shell
conda install cudatoolkit=10.1
conda install cudnn==7.6.5
```


The Nvidia driver version and the CUDA version must be compatible:
https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html

The CUDA version and the cuDNN version must be compatible:
https://developer.nvidia.com/rdp/cudnn-archive
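To check which CUDA and cuDNN versions a given PyTorch install was actually built against, so you can compare them with the tables linked above, the runtime exposes them directly. A quick sketch:

```python
import torch

print(torch.__version__)               # PyTorch version
print(torch.version.cuda)              # CUDA version it was built with (None on CPU-only builds)
print(torch.backends.cudnn.version())  # bundled cuDNN version, e.g. 7605 means 7.6.5
```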