This story was originally published on HackerNoon at: https://hackernoon.com/understanding-stochastic-average-gradient. Techniques like Stochastic Gradient Descent (SGD) are designed to improve calculation performance, but at the cost of convergence accuracy.
This story was written by [@kustarev](https://hackernoon.com/u/kustarev). Learn more about this writer on [@kustarev's about page](https://hackernoon.com/about/kustarev), and for more stories, visit [hackernoon.com](https://hackernoon.com).
Gradient descent is a popular optimization algorithm used to locate the minima of a given objective function. The algorithm follows the gradient of the objective function down the function's slope until it reaches the lowest point.
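As a rough illustration, here is a minimal gradient descent sketch in Python; the objective, learning rate, and iteration count are arbitrary choices for the example, not values taken from the article.

```python
import numpy as np

def gradient_descent(grad_fn, x0, learning_rate=0.1, n_iters=100):
    """Repeatedly step against the gradient of the objective."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - learning_rate * grad_fn(x)  # move down the slope
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=[0.0])
print(minimum)  # approaches 3.0
```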
Full Gradient Descent (FG) and Stochastic Gradient Descent (SGD) are two popular variations of the algorithm. FG uses the entire dataset during each iteration and provides a high convergence rate at a high computation cost. SGD uses only a subset of the data at each iteration, making it far more computationally efficient, but its convergence is noisier and less certain.
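The difference between the two variants comes down to how much data each update touches. A minimal sketch, assuming a simple least-squares objective (not specified in the article), might look like this:

```python
import numpy as np

def full_gradient_step(w, X, y, lr=0.01):
    """Full Gradient Descent: one update uses every sample in the dataset."""
    grad = X.T @ (X @ w - y) / len(y)  # average gradient over all samples
    return w - lr * grad

def sgd_step(w, X, y, lr=0.01, rng=np.random.default_rng()):
    """Stochastic Gradient Descent: one update uses a single random sample."""
    i = rng.integers(len(y))
    grad = X[i] * (X[i] @ w - y[i])    # noisy estimate of the full gradient
    return w - lr * grad
```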
Stochastic Average Gradient (SAG) is another variation that combines the benefits of the previous two algorithms. It keeps a memory of past per-sample gradients and, while touching only a subset of the dataset per iteration, steps using their average, achieving a high convergence rate at low computation cost. The algorithm can be further modified to improve its efficiency using vectorization and mini-batches.
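A hedged sketch of the basic SAG update, again assuming a least-squares objective and leaving out the vectorization and mini-batch refinements:

```python
import numpy as np

def sag(X, y, lr=0.01, n_iters=1000, seed=0):
    """Stochastic Average Gradient: keep a memory of per-sample gradients
    and step with their average, refreshing one entry per iteration."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    grad_memory = np.zeros((n, d))  # last seen gradient for each sample
    grad_sum = np.zeros(d)          # running sum of the stored gradients
    for _ in range(n_iters):
        i = rng.integers(n)
        new_grad = X[i] * (X[i] @ w - y[i])
        grad_sum += new_grad - grad_memory[i]  # swap in sample i's new gradient
        grad_memory[i] = new_grad
        w -= lr * grad_sum / n      # step with the average of stored gradients
    return w
```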