Gradient descent with Python


Over the past few weeks we’ve been studying the fundamental building blocks of machine learning and neural network classifiers.

We started with an introduction to linear classification that discussed the concept of parameterized learning, and how this type of learning enables us to define a scoring function that maps our input data to output class labels.

This scoring function is defined in terms of parameters; specifically, our weight matrix W and our bias vector b. Our scoring function accepts these parameters as inputs and returns a predicted class label for each input data point.
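As a quick illustrative sketch, the scoring function of a linear classifier boils down to a single matrix-vector product plus the bias. The shapes and values below are assumptions for illustration, not code from the original post:

import numpy as np

# illustrative sizes: a 3072-dimensional input (e.g., a flattened 32x32x3 image) and 3 classes
rng = np.random.RandomState(42)
W = rng.randn(3, 3072)          # weight matrix: one row of weights per class
b = rng.randn(3)                # bias vector: one bias per class
x = rng.randn(3072)             # a single input data point

scores = W.dot(x) + b           # the scoring function f(x; W, b)
predicted_label = int(np.argmax(scores))   # the class with the highest score wins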

From there, we discussed two common loss functions: Multi-class SVM loss and cross-entropy loss (commonly referred to in the same breath as “Softmax classifiers”). Loss functions, at the most basic level, are used to quantify how “good” or “bad” a given predictor (i.e., a set of parameters) is at classifying the input data points in our dataset.
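As a reminder of what a loss function looks like in code, here is a minimal, assumed sketch of cross-entropy loss for a single data point (the softmax/cross-entropy details are covered in the earlier post; the function below is for illustration only):

import numpy as np

def cross_entropy_loss(scores, true_label):
    # convert raw class scores to probabilities via softmax
    # (subtracting the max is a standard trick for numerical stability)
    exp_scores = np.exp(scores - np.max(scores))
    probs = exp_scores / np.sum(exp_scores)
    # the loss is the negative log probability assigned to the correct class
    return -np.log(probs[true_label])

# example: three class scores where class 1 is the true label
print(cross_entropy_loss(np.array([2.0, 5.0, -1.0]), true_label=1))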

Given these building blocks, we can now move on to arguably the most important aspect of machine learning, neural networks, and deep learning ― optimization.

Throughout this discussion we’ve learned that high classification accuracy is dependent on finding a set of weights W such that our data points are correctly classified. Given W, we can compute our output class labels via our scoring function. And finally, we can determine how good/poor our classifications are given some W via our loss function.

But how do we go about finding and obtaining a weight matrix W that obtains high classification accuracy?

Do we randomly initialize W , evaluate, and repeat over and over again, hoping that at some point we land on a W that obtains reasonable classification accuracy?

Well, we could ― and in some cases that might work just fine.

But in most situations, we instead need to define an optimization algorithm that allows us to iteratively improve our weight matrix W .

In today’s blog post, we’ll be looking at arguably the most common algorithm used to find optimal values of W ― gradient descent.

Gradient descent with Python

The gradient descent algorithm comes in two flavors:

1. The standard “vanilla” implementation.
2. The optimized “stochastic” version that is more commonly used.

Today we’ll be reviewing the basic vanilla implementation to form a baseline for our understanding. Then next week I’ll be discussing the stochastic version of gradient descent.

Gradient descent is an optimization algorithm

The gradient descent method is an iterative optimization algorithm that operates over a loss landscape.

We can visualize our loss landscape as a bowl, similar to the one you may eat cereal or soup out of:


Figure 1: A plot of our loss landscape. We typically see this landscape depicted as a “bowl”. Our goal is to move towards the basin of this bowl, where there is minimal loss.

The surface of our bowl is called our loss landscape , which is essentially a plot of our loss function.

The difference between our loss landscape and your cereal bowl is that your cereal bowl only exists in three dimensions, while your loss landscape exists in many dimensions , perhaps tens, hundreds, or even thousands of dimensions.

Each position along the surface of the bowl corresponds to a particular loss value given our set of parameters, W (weight matrix) and b (bias vector).

Our goal is to try different values of W and b , evaluate their loss, and then take a step towards more optimal values that will (ideally) have lower loss.

Iteratively repeating this process will allow us to navigate our loss landscape, following the gradient of the loss function (the bowl), and find a set of parameters that have minimum loss and high classification accuracy.
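Written out, each of those steps nudges the parameters in the direction of the negative gradient of the loss L, scaled by a small learning rate (denoted \alpha here to match the pseudocode later in this post):

W \leftarrow W - \alpha \, \nabla_{W} L(W, b)
b \leftarrow b - \alpha \, \nabla_{b} L(W, b)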

The “gradient” in gradient descent

To make our explanation of gradient descent a little more intuitive, let’s pretend that we have a robot ― let’s name him Chad:


Figure 2: Introducing our robot, Chad, who will help us understand the concept of gradient descent.

We place Chad on a random position in our bowl (i.e., the loss landscape):


Figure 3: Chad is placed on a random position on the loss landscape. However, Chad has only one sensor ― the loss value at the exact position he is standing at. Using this sensor (and this sensor alone), how is he going to get to the bottom of the basin?

It’s now Chad’s job to navigate to the bottom of the basin (where there is minimum loss).

Seems easy enough, right? All Chad has to do is orient himself such that he’s facing “downhill” and then ride the slope until he reaches the bottom of the basin.

But we have a problem: Chad isn’t a very smart robot.

Chad only has one sensor ― this sensor allows him to take his weight matrix W and compute a loss function L.

Therefore, Chad is able to compute his (relative) position on the loss landscape, but he has absolutely no idea in which direction he should take a step to move himself closer to the bottom of the basin.

What is Chad to do?

The answer is to apply gradient descent.

All we need to do is follow the slope of the gradient of our loss function with respect to W. We can compute this gradient across all dimensions using the following equation:


\frac{df(x)}{dx} = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}

In > 1 dimensions, our gradient becomes a vector of partial derivatives.

The problem with this equation is that:

1. It’s an approximation to the gradient.
2. It’s very slow.

In practice, we use the analytic gradient instead. This method is exact and fast, but extremely challenging to implement because it requires working out the partial derivatives by hand using multivariable calculus. You can read more about the numeric and analytic gradients here.
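For intuition, here is a minimal sketch of that numeric gradient, assuming a loss_fn callable that maps a weight matrix to a scalar loss (the function name and step size h are assumptions for illustration):

import numpy as np

def numerical_gradient(loss_fn, W, h=1e-5):
    # approximate dL/dW one entry at a time using the finite-difference definition above
    grad = np.zeros_like(W)
    base_loss = loss_fn(W)
    it = np.nditer(W, flags=["multi_index"])
    while not it.finished:
        idx = it.multi_index
        old_value = W[idx]
        W[idx] = old_value + h                     # nudge a single dimension of W
        grad[idx] = (loss_fn(W) - base_loss) / h   # slope along that dimension
        W[idx] = old_value                         # restore the original value
        it.iternext()
    return grad

Every entry of W requires its own evaluation of loss_fn, which is exactly why this approximation becomes impractically slow for anything beyond toy problems.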

For the sake of this discussion, simply try to internalize what gradient descent is doing: attempting to optimize our parameters for low loss and high classification accuracy.

Pseudocode for gradient descent

Below I have included some Python-like pseudocode of the standard, vanilla gradient descent algorithm, inspired by the CS231n slides:

while True:
    Wgradient = evaluate_gradient(loss, data, W)
    W += -alpha * Wgradient

This pseudocode is essentially what all variations of gradient descent are built off of.

We start off on Line 1 by looping until some condition is met. Normally this condition is either:

A specified number of epochs has passed
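
To tie everything in this post together, below is a small, self-contained sketch of the vanilla loop applied to a toy two-class problem. The dataset, the sigmoid activation, the cross-entropy-style gradient, the learning rate, and the fixed epoch count are all illustrative assumptions and not the code from this post’s downloads:

import numpy as np

# toy two-class dataset: 100 points in 2D, labels are 0 or 1 (purely illustrative)
rng = np.random.RandomState(1)
X = np.vstack([rng.randn(50, 2) + 2, rng.randn(50, 2) - 2])
y = np.hstack([np.ones(50), np.zeros(50)])
X = np.hstack([X, np.ones((X.shape[0], 1))])   # "bias trick": fold b into W as an extra column

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = rng.randn(X.shape[1])   # randomly initialize the weights (here a 3-entry vector)
alpha = 0.05                # learning rate (step size along the negative gradient)
epochs = 100                # stopping condition: a specified number of epochs has passed

for epoch in range(epochs):
    preds = sigmoid(X.dot(W))
    # gradient of the (averaged) cross-entropy loss for a sigmoid model: X^T (preds - y) / N
    Wgradient = X.T.dot(preds - y) / X.shape[0]
    W += -alpha * Wgradient   # the vanilla gradient descent update
    loss = -np.mean(y * np.log(preds + 1e-7) + (1 - y) * np.log(1 - preds + 1e-7))
    if epoch % 10 == 0:
        print("epoch {}: loss {:.4f}".format(epoch, loss))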
