From the course: PyTorch Essential Training: Deep Learning

Tensor operations

- Managing and transforming large data sets in deep learning can be frustrating. If your tensor operations aren't optimized, you're likely wasting time and computational power, making the task of handling data cumbersome and less effective. Let's learn to efficiently split, combine, and manipulate tensors, which leads to more robust and scalable deep learning models.

Let's perform indexing and slicing on tensors. If you have used NumPy, I have great news: indexing and slicing of tensors is done the same way as for NumPy arrays. If you're new to this, don't worry. Indexing means accessing a single element of a tensor, and slicing means accessing a range of elements.

Now let's create a one-dimensional tensor, call it 1-dim tensor. If we want to access and display the third element of our tensor, we just have to type print 1-dim tensor two. When we run the cell, notice we got tensor three. In PyTorch, indexing returns a tensor, so when we pass a tensor element to a function like print and want a plain Python value, we first have to convert it by calling the item function. Let's do that and rerun our code. And you can see, we got the value as a number.

Next, let's see how slicing is done. The format we use is start colon end colon step. We need to specify a start index and an end index separated by a colon. The step is optional, and it defines the number of indexes to move forward while slicing an object. If the step is not indicated, we move without skipping any index. The output tensor will contain all the elements from the starting index (inclusive) to the ending index (exclusive). So, let's say we want to get the elements between the first and the fourth element. We will type 1-dim tensor one colon four.

Next, let's see how to do indexing and slicing on a two-dimensional tensor. We'll create a four by six tensor called 2-dim tensor. To access the fourth element of the second row, we'll type: 2-dim tensor one three.
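The steps described so far can be sketched in code as follows. This is a minimal sketch: the variable names (`one_dim_tensor`, `two_dim_tensor`) and the tensor values are illustrative assumptions, not necessarily the ones used in the course notebook.

```python
import torch

# A one-dimensional tensor with illustrative values 1..6
one_dim_tensor = torch.tensor([1, 2, 3, 4, 5, 6])

# Indexing: access the third element (index 2)
print(one_dim_tensor[2])         # → tensor(3), a zero-dimensional tensor
print(one_dim_tensor[2].item())  # → 3, converted to a plain Python number

# Slicing with start:end (step omitted): indices 1, 2, 3 (end is exclusive)
print(one_dim_tensor[1:4])       # → tensor([2, 3, 4])

# A 4x6 two-dimensional tensor with illustrative values
two_dim_tensor = torch.tensor([[ 1,  2,  3,  4,  5,  6],
                               [ 7,  8,  9, 10, 11, 12],
                               [13, 14, 15, 16, 17, 18],
                               [19, 20, 21, 22, 23, 24]])

# Indexing: fourth element of the second row
print(two_dim_tensor[1][3])      # → tensor(10)
```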
When we want to access two or more elements at the same time, we perform slicing. Let's say we want to access the first three elements of the first row, and the first four elements of the second row. We'll type print first three elements of the first row: 2-dim tensor zero, zero colon three. And we'll print the first four elements of the second row: 2-dim tensor one, zero colon four.

Sometimes we want to use indexing to extract the data that meets some criteria. For example, if we would like to keep only the elements that are less than 11, we would type 2-dim tensor, square brackets, 2-dim tensor less than 11, and display it.

PyTorch has many different built-in functions that we can use to access, combine, and split tensors. Let's use the two most common ones in our example. If we want to combine tensors, there is a function called torch stack. It concatenates a sequence of tensors along a new dimension. Let's create a new tensor called combined tensor by combining our two-dimensional tensor using the torch stack function.

The second useful function is for splitting tensors, and it is called torch unbind. If we want to split our two-dimensional tensor into four tensors, let's call them first tensor, second tensor, and so on. We will type: first tensor, second tensor, third tensor, fourth tensor equals torch unbind 2-dim tensor. And display it. As you can see, the torch unbind function splits the tensor according to the number of rows, but we can select along which dimension we want it to work. Now, if we pass dim equals one, which corresponds to the column dimension, the tensor will split into six smaller tensors, with the corresponding values of each column.
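The slicing, boolean indexing, stacking, and unbinding steps above can be sketched like this. The variable names and tensor values are illustrative assumptions; in particular, stacking the tensor with itself is just one way to demonstrate `torch.stack`, which requires a sequence of same-shaped tensors.

```python
import torch

two_dim_tensor = torch.tensor([[ 1,  2,  3,  4,  5,  6],
                               [ 7,  8,  9, 10, 11, 12],
                               [13, 14, 15, 16, 17, 18],
                               [19, 20, 21, 22, 23, 24]])

# Slicing several elements at once
print(two_dim_tensor[0, 0:3])  # first three elements of the first row
print(two_dim_tensor[1, 0:4])  # first four elements of the second row

# Boolean indexing: keep only the elements less than 11
print(two_dim_tensor[two_dim_tensor < 11])  # → tensor([1, ..., 10])

# torch.stack concatenates a sequence of tensors along a NEW dimension
combined_tensor = torch.stack([two_dim_tensor, two_dim_tensor])
print(combined_tensor.shape)  # → torch.Size([2, 4, 6])

# torch.unbind splits along dim 0 (the rows) by default: four 1-D tensors
first_tensor, second_tensor, third_tensor, fourth_tensor = torch.unbind(two_dim_tensor)
print(first_tensor)  # → tensor([1, 2, 3, 4, 5, 6])

# With dim=1 it splits along the columns instead: six 1-D tensors
columns = torch.unbind(two_dim_tensor, dim=1)
print(len(columns))  # → 6
```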