
Stochastic Gradient Descent (SGD) with Python



In last week’s blog post, we discussed gradient descent, a first-order optimization algorithm that can be used to learn a set of classifier coefficients for parameterized learning.

However, the “vanilla” implementation of gradient descent can be prohibitively slow to run on large datasets ― in fact, it can even be considered computationally wasteful.

Instead, we should apply Stochastic Gradient Descent (SGD), a simple modification to the standard gradient descent algorithm that computes the gradient and updates our weight matrix W on small batches of training data, rather than the entire training set itself.

While this leads to “noisier” weight updates, it also allows us to take more steps along the gradient (1 step for each batch versus 1 step per epoch), ultimately leading to faster convergence with no negative effect on loss or classification accuracy.

To learn more about Stochastic Gradient Descent, keep reading.

Looking for the source code to this post?

Jump right to the downloads section.

Stochastic Gradient Descent (SGD) with Python

Taking a look at last week’s blog post, it should be (at least somewhat) obvious that the gradient descent algorithm will run very slowly on large datasets. The reason for this “slowness” is that each iteration of gradient descent requires us to compute a prediction for every point in our training data.

For image datasets such as ImageNet where we have over 1.2 million training images, this computation can take a long time.

It also turns out that computing predictions for every training data point before taking a step and updating our weight matrix W is computationally wasteful (and doesn’t help us in the long run).

Instead, what we should do is batch our updates.

Updating our gradient descent optimization algorithm

Before I discuss Stochastic Gradient Descent in more detail, let’s first look at the original gradient descent pseudocode and then the updated SGD pseudocode, both inspired by the CS231n course slides.

Below follows the pseudocode for vanilla gradient descent:

while True:
	Wgradient = evaluate_gradient(loss, data, W)
	W += -alpha * Wgradient

And here we can see the pseudocode for Stochastic Gradient Descent:

while True:
	batch = next_training_batch(data, 256)
	Wgradient = evaluate_gradient(loss, batch, W)
	W += -alpha * Wgradient
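In practice, the infinite while True loop above is usually organized into a fixed number of epochs, with one weight update per mini-batch inside each epoch. A minimal sketch of that structure (still pseudocode, and treating next_training_batch as something that yields successive batches, just like the next_batch generator implemented later in this post):

for epoch in range(numEpochs):
	# one full pass over the training data, one update per mini-batch
	for batch in next_training_batch(data, 256):
		Wgradient = evaluate_gradient(loss, batch, W)
		W += -alpha * Wgradient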

As you can see, the implementations are quite similar.

The only difference between vanilla gradient descent and Stochastic Gradient Descent is the addition of the next_training_batch function. Instead of computing our gradient over the entire dataset, we instead sample our data, yielding a batch.

We then evaluate the gradient on this batch and update our weight matrix W.

Note: From an implementation perspective, we also randomize our training samples before applying SGD.
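A minimal sketch of that shuffling step (assuming the training data is stored in NumPy arrays X and y, as in the implementation later in this post):

import numpy as np

# shuffle the feature vectors and their labels in unison before batching
idxs = np.random.permutation(X.shape[0])
X = X[idxs]
y = y[idxs]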

Batching gradient descent for machine learning

After looking at the pseudocode for SGD, you’ll immediately notice the introduction of a new parameter: the batch size.

In a “purist” implementation of SGD, your mini-batch size would be set to 1. However, we often use mini-batches that are > 1. Typical values include 32, 64, 128, and 256.

So, why are these values common choices for the mini-batch size?

To start, using batches > 1 helps reduce variance in the parameter update, ultimately leading to more stable convergence.

Secondly, optimized matrix operation libraries are often more efficient when the input matrix size is a power of 2.

In general, the mini-batch size is not a hyperparameter that you should worry much about. You basically determine how many training examples will fit on your GPU/main memory and then use the nearest power of 2 as the batch size.
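For example, one hypothetical way to turn “roughly how many examples fit in memory” into a batch size (this helper is purely illustrative and not part of the post’s code):

import math

def nearest_power_of_two(n):
	# round n to a nearby power of 2, e.g. 200 -> 256, 100 -> 128
	return 2 ** int(round(math.log2(n)))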

Implementing Stochastic Gradient Descent (SGD) with Python

We are now ready to update our code from last week’s blog post on vanilla gradient descent. Since I have already reviewed this code in detail earlier, I’ll defer an exhaustive review of each line of code to last week’s post.

That said, I will still be pointing out the salient, important lines of code in this example.

To get started, open up a new file, name it sgd.py, and insert the following code:

# import the necessary packages
import matplotlib.pyplot as plt
from sklearn.datasets.samples_generator import make_blobs
import numpy as np
import argparse

def sigmoid_activation(x):
	# compute and return the sigmoid activation value for a
	# given input value
	return 1.0 / (1 + np.exp(-x))

def next_batch(X, y, batchSize):
	# loop over our dataset `X` in mini-batches of size `batchSize`
	for i in np.arange(0, X.shape[0], batchSize):
		# yield a tuple of the current batched data and labels
		yield (X[i:i + batchSize], y[i:i + batchSize])

Lines 2-5 start by importing our required Python packages. Then, Line 7 defines our sigmoid_activation function used during the training process.

In order to apply Stochastic Gradient Descent, we need a function that yields mini-batches of training data ― and that is exactly what the next_batch function on Lines 12-16 does.

The next_batch function requires three parameters:

X: Our training dataset of feature vectors.
y: The class labels associated with each of the training data points.
batchSize: The size of each mini-batch that will be returned.

Lines 14-16 then loop over our training examples, yielding subsets of both X and y as mini-batches.
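As a quick sanity check, here is how next_batch behaves on a tiny toy dataset (hypothetical values used purely for illustration; this assumes the imports from the code block above):

# ten toy samples with one feature each, plus matching labels
Xtoy = np.arange(10).reshape((10, 1))
ytoy = np.arange(10)

for (batchX, batchY) in next_batch(Xtoy, ytoy, 4):
	print(batchX.ravel(), batchY)

# the loop above yields mini-batches of 4, 4, and 2 samples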

Next, let’s parse our command line arguments:

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-e", "--epochs", type=float, default=100,
	help="# of epochs")
ap.add_argument("-a", "--alpha", type=float, default=0.01,
	help="learning rate")
ap.add_argument("-b", "--batch-size", type=int, default=32,
	help="size of SGD mini-batches")
args = vars(ap.parse_args())

Lines 19-26 then parse our command line arguments: the number of --epochs we’ll use when training, the learning rate --alpha for gradient descent, and the --batch-size of each mini-batch used in SGD.
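Once the rest of the script is in place, it can be launched from the command line with, for example, python sgd.py --epochs 100 --alpha 0.01 --batch-size 32 (these happen to be the default values defined in the parser above).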
