From the course: PyTorch Essential Training: Deep Learning

Creating a tensor GPU example

- [Instructor] GPUs were originally developed for rendering computer graphics, so when I was a kid, every gamer talked about them. Meanwhile, because neural networks demand fast computational processing, GPUs now play a crucial role in deep learning. In PyTorch, the CUDA library is instrumental in detecting, activating, and harnessing the power of GPUs. Just as for the CPU, we'll explore a simple example. First, before using GPUs, we are going to check if they are configured and ready to use. We are going to import PyTorch and print the PyTorch version. Next, by calling the torch.cuda.is_available function, we'll move the tensors to the GPU device if one is available. So, our device has GPU support. We are going to create two tensors, and let's just call them tens_a and tens_b. Now let's apply a simple arithmetic operation, in our case, multiplication. We can do that using the asterisk operator and store the result in a third tensor, which we'll call…
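The steps described above can be sketched as follows. The tensor values and the name of the result tensor (here `tens_c`) are assumptions for illustration; the transcript only names `tens_a` and `tens_b`.

```python
import torch

# Print the PyTorch version
print(torch.__version__)

# Select the GPU if CUDA is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create two tensors on the chosen device (values are illustrative)
tens_a = torch.tensor([1.0, 2.0, 3.0], device=device)
tens_b = torch.tensor([4.0, 5.0, 6.0], device=device)

# Element-wise multiplication using the asterisk operator;
# the result tensor's name (tens_c) is assumed
tens_c = tens_a * tens_b
print(tens_c)
```

On a machine with a CUDA-capable GPU, `tens_c` lives on the GPU; otherwise the same code runs unchanged on the CPU.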
