From the course: PyTorch Essential Training: Deep Learning


Automatic differentiation (Autograd)

- [Instructor] When we train a neural network, we follow two simple steps: forward propagation and backward propagation. In forward propagation, we input data into the network. It runs the input data through each of its functions and makes its best guess about the correct output. Then we compare the predicted output with the actual output. We calculate the loss function to determine the difference between the predicted output and the actual output. In the next step, called backward propagation, the neural network adjusts its parameters in proportion to the error in its guess. So after we compute the loss function, we take the derivative of the loss function with respect to the parameters of our neural network. Lastly, we iteratively update the weight parameters accordingly so that the loss function returns the smallest possible loss. We call this step iterative optimization, as we use an optimizer to perform the update of the parameters. We call this process gradient-based optimization…
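To make this concrete, below is a minimal sketch of one training iteration in PyTorch showing how autograd fits into these steps. The tiny network, the random data, the mean squared error loss, and the SGD learning rate are all illustrative assumptions, not details from the course.

```python
import torch
import torch.nn as nn

# Illustrative example only: a tiny network and random data (assumed, not from the course).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(16, 4)    # a batch of 16 samples with 4 features each
targets = torch.randn(16, 1)   # the corresponding "actual" outputs

# Forward propagation: run the input data through the network to get predictions.
predictions = model(inputs)

# Compute the loss: the difference between the predicted and actual outputs.
loss = loss_fn(predictions, targets)

# Backward propagation: autograd computes the derivative of the loss
# with respect to every parameter in the network.
optimizer.zero_grad()
loss.backward()

# Iterative optimization: the optimizer updates the parameters
# in the direction that reduces the loss.
optimizer.step()
```

In a full training loop, these steps repeat over many batches, which is what makes the optimization iterative.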