1.5. Stochastic Gradient Descent

Stochastic Gradient Descent (SGD) is a simple yet very efficient approach to fitting linear classifiers and regressors under convex loss functions such as (linear) Support Vector Machines and Logistic Regression. Even though SGD has been around in the machine learning community for a long time, it has received a considerable amount of attention just recently in the context of large-scale learning.

SGD has been successfully applied to large-scale and sparse machine learning problems often encountered in text classification and natural language processing. Given that the data is sparse, the classifiers in this module easily scale to problems with more than 10^5 training examples and more than 10^5 features.

Strictly speaking, SGD is merely an optimization technique and does not correspond to a specific family of machine learning models. It is only a way to train a model. Often, an instance of SGDClassifier or SGDRegressor will have an equivalent estimator in the scikit-learn API, potentially using a different optimization technique. For example, using SGDClassifier(loss='log_loss') results in logistic regression, i.e. a model equivalent to LogisticRegression which is fitted via SGD instead of being fitted by one of the other solvers in LogisticRegression. Similarly, SGDRegressor(loss='squared_error', penalty='l2') and Ridge solve the same optimization problem, via different means.
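The correspondence can be checked directly in code. Below is a minimal sketch on a synthetic dataset; the mapping C = 1 / (alpha * n_samples) used to match the two regularization parameterizations is approximate, and the dataset and settings are chosen only for illustration:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression, SGDClassifier

    X, y = make_classification(n_samples=1000, random_state=0)

    alpha = 1e-4
    # Logistic regression fitted by SGD ...
    sgd = SGDClassifier(loss="log_loss", alpha=alpha, max_iter=1000).fit(X, y)
    # ... and the same model fitted by one of the LogisticRegression solvers;
    # C is chosen so that the regularization strengths roughly match.
    lr = LogisticRegression(C=1.0 / (alpha * len(X))).fit(X, y)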

The advantages of Stochastic Gradient Descent are:

  • Efficiency.

  • Ease of implementation (lots of opportunities for code tuning).

The disadvantages of Stochastic Gradient Descent include:

  • SGD requires a number of hyperparameters such as the regularization parameter and the number of iterations.

  • SGD is sensitive to feature scaling.

Warning

Make sure you permute (shuffle) your training data before fitting the model or use shuffle=True to shuffle after each iteration (used by default). Also, ideally, features should be standardized using e.g. make_pipeline(StandardScaler(), SGDClassifier()) (see Pipelines).

1.5.1. Classification#

The class SGDClassifier implements a plain stochastic gradient descent learning routine which supports different loss functions and penalties for classification. Below is the decision boundary of a SGDClassifier trained with the hinge loss, equivalent to a linear SVM.

As with other classifiers, SGD has to be fitted with two arrays: an array X of shape (n_samples, n_features) holding the training samples, and an array y of shape (n_samples,) holding the target values (class labels) for the training samples:

>>> from sklearn.linear_model import SGDClassifier
>>> X = [[0., 0.], [1., 1.]]
>>> y = [0, 1]
>>> clf = SGDClassifier(loss="hinge", penalty="l2", max_iter=5)
>>> clf.fit(X, y)
SGDClassifier(max_iter=5)

After being fitted, the model can then be used to predict new values:

>>> clf.predict([[2., 2.]])
array([1])

SGD fits a linear model to the training data. The coef_ attribute holds the model parameters:

>>> clf.coef_
array([[9.9..., 9.9...]])

The intercept_ attribute holds the intercept (aka offset or bias):

>>> clf.intercept_
array([-9.9...])

Whether or not the model should use an intercept, i.e. a biased hyperplane, is controlled by the parameter fit_intercept.

The signed distance to the hyperplane (computed as the dot product between the coefficients and the input sample, plus the intercept) is given by SGDClassifier.decision_function:

>>> clf.decision_function([[2., 2.]])
array([29.6...])

The concrete loss function can be set via the loss parameter. SGDClassifier supports the following loss functions:

  • loss="hinge": (soft-margin) linear Support Vector Machine,

  • loss="modified_huber": smoothed hinge loss,

  • loss="log_loss": logistic regression,

  • and all of the regression losses listed below.

Please refer to the mathematical section below for formulas. The first two loss functions are lazy: they only update the model parameters if an example violates the margin constraint, which makes training very efficient and may result in sparser models (i.e. with more zero coefficients), even when the L2 penalty is used.

Using loss="log_loss" or loss="modified_huber" enables thepredict_proba method, which gives a vector of probability estimates\(P(y|x)\) per sample \(x\):

>>> clf = SGDClassifier(loss="log_loss", max_iter=5).fit(X, y)>>> clf.predict_proba([[1., 1.]]) array([[0.00..., 0.99...]])

The concrete penalty can be set via the penalty parameter. SGD supports the following penalties:

  • penalty="l2": L2 norm penalty on coef_.

  • penalty="l1": L1 norm penalty on coef_.

  • penalty="elasticnet": Convex combination of L2 and L1;(1 - l1_ratio) * L2 + l1_ratio * L1.

The default setting is penalty="l2". The L1 penalty leads to sparse solutions, driving most coefficients to zero. The Elastic Net [11] solves some deficiencies of the L1 penalty in the presence of highly correlated attributes. The parameter l1_ratio controls the convex combination of L1 and L2 penalty.
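As a minimal sketch (values chosen only for illustration), an Elastic Net penalty mixing 15% L1 with 85% L2 can be requested as follows:

    from sklearn.linear_model import SGDClassifier

    # 15% L1 / 85% L2: some coefficients are driven to exactly zero while
    # correlated features are still shrunk together.
    clf = SGDClassifier(loss="hinge", penalty="elasticnet", l1_ratio=0.15, alpha=1e-4)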

SGDClassifier supports multi-class classification by combining multiple binary classifiers in a “one versus all” (OVA) scheme. For each of the \(K\) classes, a binary classifier is learned that discriminates between that and all other \(K-1\) classes. At testing time, we compute the confidence score (i.e. the signed distances to the hyperplane) for each classifier and choose the class with the highest confidence. The Figure below illustrates the OVA approach on the iris dataset. The dashed lines represent the three OVA classifiers; the background colors show the decision surface induced by the three classifiers.

In the case of multi-class classification coef_ is a two-dimensional array of shape (n_classes, n_features) and intercept_ is a one-dimensional array of shape (n_classes,). The i-th row of coef_ holds the weight vector of the OVA classifier for the i-th class; classes are indexed in ascending order (see attribute classes_). Note that, in principle, since they allow creating a probability model, loss="log_loss" and loss="modified_huber" are more suitable for one-vs-all classification.
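A minimal sketch on the iris dataset illustrates the shapes of the per-class OVA parameters:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import SGDClassifier

    X, y = load_iris(return_X_y=True)
    clf = SGDClassifier(max_iter=1000).fit(X, y)
    print(clf.classes_)          # [0 1 2], ascending order
    print(clf.coef_.shape)       # (3, 4): one weight vector per class
    print(clf.intercept_.shape)  # (3,): one intercept per class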

SGDClassifier supports both weighted classes and weighted instances via the parameters class_weight and sample_weight. See the examples below and the docstring of SGDClassifier.fit for further information.
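A minimal sketch with toy arrays (the weight values are arbitrary, for illustration only): class weights are passed to the constructor, per-sample weights to fit:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    X = np.array([[0., 0.], [1., 1.], [2., 2.], [3., 3.]])
    y = np.array([0, 0, 1, 1])

    # Penalize mistakes on class 1 five times more (or use class_weight="balanced").
    clf = SGDClassifier(class_weight={0: 1.0, 1: 5.0})
    clf.fit(X, y, sample_weight=[1.0, 1.0, 0.5, 2.0])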

SGDClassifier supports averaged SGD (ASGD) [10]. Averaging can be enabled by setting average=True. ASGD performs the same updates as the regular SGD (see Mathematical formulation), but instead of using the last value of the coefficients as the coef_ attribute (i.e. the values of the last update), coef_ is set instead to the average value of the coefficients across all updates. The same is done for the intercept_ attribute. When using ASGD the learning rate can be larger and even constant, leading on some datasets to a speed up in training time.
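A minimal sketch of enabling averaging, here combined with a constant learning rate (the eta0 value is an arbitrary illustration):

    from sklearn.linear_model import SGDClassifier

    # coef_ and intercept_ will hold the averages over all updates.
    clf = SGDClassifier(average=True, learning_rate="constant", eta0=0.01)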

For classification with a logistic loss, another variant of SGD with an averaging strategy is available with the Stochastic Average Gradient (SAG) algorithm, available as a solver in LogisticRegression.

Examples

  • SGD: Maximum margin separating hyperplane

  • Plot multi-class SGD on the iris dataset

  • SGD: Weighted samples

  • Comparing various online solvers

  • SVM: Separating hyperplane for unbalanced classes (See the Note in the example)

1.5.2. Regression#

The class SGDRegressor implements a plain stochastic gradient descent learning routine which supports different loss functions and penalties to fit linear regression models. SGDRegressor is well suited for regression problems with a large number of training samples (> 10,000); for other problems we recommend Ridge, Lasso, or ElasticNet.

The concrete loss function can be set via the loss parameter. SGDRegressor supports the following loss functions:

  • loss="squared_error": Ordinary least squares,

  • loss="huber": Huber loss for robust regression,

  • loss="epsilon_insensitive": linear Support Vector Regression.

Please refer to the mathematical section below for formulas. The Huber and epsilon-insensitive loss functions can be used for robust regression. The width of the insensitive region has to be specified via the parameter epsilon. This parameter depends on the scale of the target variables.
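A minimal sketch of robust regression with the Huber loss on synthetic data; the epsilon value is illustrative and should be tuned to the scale of the targets:

    from sklearn.datasets import make_regression
    from sklearn.linear_model import SGDRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_regression(n_samples=20000, n_features=10, noise=5.0, random_state=0)
    reg = make_pipeline(StandardScaler(),
                        SGDRegressor(loss="huber", epsilon=1.0, penalty="l2"))
    reg.fit(X, y)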

The penalty parameter determines the regularization to be used (see description above in the classification section).

SGDRegressor also supports averaged SGD [10] (here again, see description above in the classification section).

For regression with a squared loss and an L2 penalty, another variant of SGD with an averaging strategy is available with the Stochastic Average Gradient (SAG) algorithm, available as a solver in Ridge.

1.5.3. Online One-Class SVM#

The class sklearn.linear_model.SGDOneClassSVM implements an online linear version of the One-Class SVM using stochastic gradient descent. Combined with kernel approximation techniques, sklearn.linear_model.SGDOneClassSVM can be used to approximate the solution of a kernelized One-Class SVM, implemented in sklearn.svm.OneClassSVM, with a linear complexity in the number of samples. Note that the complexity of a kernelized One-Class SVM is at best quadratic in the number of samples. sklearn.linear_model.SGDOneClassSVM is thus well suited for datasets with a large number of training samples (> 10,000) for which the SGD variant can be several orders of magnitude faster.
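A minimal sketch of the kernel-approximation combination on random toy data; the gamma, n_components and nu values are placeholders for illustration:

    import numpy as np
    from sklearn.kernel_approximation import Nystroem
    from sklearn.linear_model import SGDOneClassSVM
    from sklearn.pipeline import make_pipeline

    rng = np.random.RandomState(0)
    X = rng.randn(10000, 2)

    # Approximate an RBF-kernel One-Class SVM: explicit feature map + linear SGDOneClassSVM.
    clf = make_pipeline(Nystroem(gamma=0.1, n_components=100, random_state=0),
                        SGDOneClassSVM(nu=0.05, random_state=0))
    clf.fit(X)
    pred = clf.predict(X)  # +1 for inliers, -1 for outliers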

Mathematical details#

Its implementation is based on that of stochastic gradient descent. Indeed, the original optimization problem of the One-Class SVM is given by

\[\begin{split}\begin{aligned}
\min_{w, \rho, \xi} & \quad \frac{1}{2}\Vert w \Vert^2 - \rho + \frac{1}{\nu n} \sum_{i=1}^n \xi_i \\
\text{s.t.} & \quad \langle w, x_i \rangle \geq \rho - \xi_i \quad 1 \leq i \leq n \\
& \quad \xi_i \geq 0 \quad 1 \leq i \leq n
\end{aligned}\end{split}\]

where \(\nu \in (0, 1]\) is the user-specified parameter controlling the proportion of outliers and the proportion of support vectors. Getting rid of the slack variables \(\xi_i\) this problem is equivalent to

\[\min_{w, \rho} \frac{1}{2}\Vert w \Vert^2 - \rho + \frac{1}{\nu n} \sum_{i=1}^n \max(0, \rho - \langle w, x_i \rangle) \, .\]

Multiplying by the constant \(\nu\) and introducing the intercept \(b = 1 - \rho\) we obtain the following equivalent optimization problem

\[\min_{w, b} \frac{\nu}{2}\Vert w \Vert^2 + b\nu + \frac{1}{n} \sum_{i=1}^n \max(0, 1 - (\langle w, x_i \rangle + b)) \, .\]

This is similar to the optimization problems studied in section Mathematical formulation with \(y_i = 1, 1 \leq i \leq n\) and \(\alpha = \nu/2\), \(L\) being the hinge loss function and \(R\) being the L2 norm. We just need to add the term \(b\nu\) in the optimization loop.

As with SGDClassifier and SGDRegressor, SGDOneClassSVM supports averaged SGD. Averaging can be enabled by setting average=True.

1.5.4. Stochastic Gradient Descent for sparse data#

Note

The sparse implementation produces slightly different results from the dense implementation, due to a shrunk learning rate for the intercept. See Implementation details.

There is built-in support for sparse data given in any matrix in a format supported by scipy.sparse. For maximum efficiency, however, use the CSR matrix format as defined in scipy.sparse.csr_matrix.
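A minimal sketch with random sparse data (sizes chosen for illustration): the estimator is fitted directly on a CSR matrix without densifying it:

    import numpy as np
    import scipy.sparse as sp
    from sklearn.linear_model import SGDClassifier

    rng = np.random.RandomState(0)
    X = sp.random(1000, 100000, density=0.001, format="csr", random_state=rng)
    y = rng.randint(2, size=1000)

    clf = SGDClassifier(max_iter=10).fit(X, y)  # no dense conversion needed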

Examples

  • Classification of text documents using sparse features

1.5.5. Complexity#

The major advantage of SGD is its efficiency, which is basically linear in the number of training examples. If X is a matrix of size (n, p), training has a cost of \(O(k n \bar p)\), where k is the number of iterations (epochs) and \(\bar p\) is the average number of non-zero attributes per sample.

Recent theoretical results, however, show that the runtime to get some desired optimization accuracy does not increase as the training set size increases.

1.5.6. Stopping criterion#

The classes SGDClassifier and SGDRegressor provide two criteria to stop the algorithm when a given level of convergence is reached:

  • With early_stopping=True, the input data is split into a training set and a validation set. The model is then fitted on the training set, and the stopping criterion is based on the prediction score (using the score method) computed on the validation set. The size of the validation set can be changed with the parameter validation_fraction.

  • With early_stopping=False, the model is fitted on the entire input data and the stopping criterion is based on the objective function computed on the training data.

In both cases, the criterion is evaluated once per epoch, and the algorithm stops when the criterion does not improve n_iter_no_change times in a row. The improvement is evaluated with absolute tolerance tol, and the algorithm stops in any case after a maximum number of iterations max_iter.
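A minimal sketch of the first criterion (the values are illustrative): hold out 10% of the training data and stop once the validation score has failed to improve by tol for 5 consecutive epochs:

    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier(early_stopping=True, validation_fraction=0.1,
                        n_iter_no_change=5, tol=1e-3, max_iter=1000)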

1.5.7. Tips on Practical Use#

  • Stochastic Gradient Descent is sensitive to feature scaling, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0,1] or [-1,+1], or standardize it to have mean 0 and variance 1. Note that the same scaling must be applied to the test vector to obtain meaningful results. This can be easily done using StandardScaler:

    from sklearn.preprocessing import StandardScaler
    scaler = StandardScaler()
    scaler.fit(X_train)  # Don't cheat - fit only on training data
    X_train = scaler.transform(X_train)
    X_test = scaler.transform(X_test)  # apply same transformation to test data

    # Or better yet: use a pipeline!
    from sklearn.linear_model import SGDClassifier
    from sklearn.pipeline import make_pipeline
    est = make_pipeline(StandardScaler(), SGDClassifier())
    est.fit(X_train, y_train)  # the pipeline scales and fits in one step
    est.predict(X_test)

    If your attributes have an intrinsic scale (e.g. word frequencies or indicator features) scaling is not needed.

  • Finding a reasonable regularization term \(\alpha\) is best done using automatic hyper-parameter search, e.g. GridSearchCV or RandomizedSearchCV, usually in the range 10.0**-np.arange(1,7) (see the sketch after this list).

  • Empirically, we found that SGD converges after observing approximately 10^6 training samples. Thus, a reasonable first guess for the number of iterations is max_iter = np.ceil(10**6 / n), where n is the size of the training set.

  • If you apply SGD to features extracted using PCA we found that it is often wise to scale the feature values by some constant c such that the average L2 norm of the training data equals one.

  • We found that Averaged SGD works best with a larger number of features and a higher eta0.
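A minimal sketch of the regularization search suggested above, on a synthetic dataset (the dataset and cv setting are illustrative only):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import GridSearchCV

    X_train, y_train = make_classification(n_samples=2000, random_state=0)

    max_iter = int(np.ceil(10 ** 6 / len(X_train)))  # rule of thumb from the tip above
    param_grid = {"alpha": 10.0 ** -np.arange(1, 7)}
    search = GridSearchCV(SGDClassifier(max_iter=max_iter), param_grid, cv=5)
    search.fit(X_train, y_train)
    print(search.best_params_)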


1.5.8. Mathematical formulation#

We describe here the mathematical details of the SGD procedure. A good overview with convergence rates can be found in [12].

Given a set of training examples \((x_1, y_1), \ldots, (x_n, y_n)\) where \(x_i \in \mathbf{R}^m\) and \(y_i \in \mathbf{R}\) (\(y_i \in \{-1, 1\}\) for classification), our goal is to learn a linear scoring function \(f(x) = w^T x + b\) with model parameters \(w \in \mathbf{R}^m\) and intercept \(b \in \mathbf{R}\). In order to make predictions for binary classification, we simply look at the sign of \(f(x)\). To find the model parameters, we minimize the regularized training error given by

\[E(w,b) = \frac{1}{n}\sum_{i=1}^{n} L(y_i, f(x_i)) + \alpha R(w)\]

where \(L\) is a loss function that measures model (mis)fit and \(R\) is a regularization term (aka penalty) that penalizes model complexity; \(\alpha > 0\) is a hyperparameter that controls the regularization strength.
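As a minimal NumPy sketch (toy code, not the library implementation), the objective above can be written out directly, here with the hinge loss for \(L\) and the L2 norm for \(R\):

    import numpy as np

    def objective(w, b, X, y, alpha):
        scores = X @ w + b
        hinge = np.maximum(0.0, 1.0 - y * scores)  # L(y_i, f(x_i)) for each sample
        l2 = 0.5 * np.dot(w, w)                    # R(w)
        return hinge.mean() + alpha * l2           # E(w, b)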

Loss functions details#

Different choices for \(L\) entail different classifiers or regressors:

  • Hinge (soft-margin): equivalent to Support Vector Classification. \(L(y_i, f(x_i)) = \max(0, 1 - y_i f(x_i))\).

  • Perceptron: \(L(y_i, f(x_i)) = \max(0, - y_i f(x_i))\).

  • Modified Huber: \(L(y_i, f(x_i)) = \max(0, 1 - y_i f(x_i))^2\) if \(y_i f(x_i) > -1\), and \(L(y_i, f(x_i)) = -4 y_i f(x_i)\) otherwise.

  • Log Loss: equivalent to Logistic Regression. \(L(y_i, f(x_i)) = \log(1 + \exp (-y_i f(x_i)))\).

  • Squared Error: Linear regression (Ridge or Lasso depending on \(R\)). \(L(y_i, f(x_i)) = \frac{1}{2}(y_i - f(x_i))^2\).

  • Huber: less sensitive to outliers than least-squares. It is equivalent to least squares when \(|y_i - f(x_i)| \leq \varepsilon\), and \(L(y_i, f(x_i)) = \varepsilon |y_i - f(x_i)| - \frac{1}{2}\varepsilon^2\) otherwise.

  • Epsilon-Insensitive: (soft-margin) equivalent to Support Vector Regression. \(L(y_i, f(x_i)) = \max(0, |y_i - f(x_i)| - \varepsilon)\).

All of the above loss functions can be regarded as an upper bound on the misclassification error (Zero-one loss) as shown in the Figure below.
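A minimal NumPy sketch (for illustration, not the library code) of the classification losses above, written as functions of the margin \(m = y_i f(x_i)\):

    import numpy as np

    def hinge(m):
        return np.maximum(0.0, 1.0 - m)

    def perceptron(m):
        return np.maximum(0.0, -m)

    def log_loss(m):
        return np.log1p(np.exp(-m))

    def modified_huber(m):
        return np.where(m > -1.0, np.maximum(0.0, 1.0 - m) ** 2, -4.0 * m)

    def zero_one(m):
        return (m <= 0.0).astype(float)  # the misclassification error they bound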

Popular choices for the regularization term \(R\) (the penalty parameter) include:

  • L2 norm: \(R(w) := \frac{1}{2} \sum_{j=1}^{m} w_j^2 = \frac{1}{2} ||w||_2^2\),

  • L1 norm: \(R(w) := \sum_{j=1}^{m} |w_j|\), which leads to sparse solutions.

  • Elastic Net: \(R(w) := \frac{\rho}{2} \sum_{j=1}^{m} w_j^2 + (1-\rho) \sum_{j=1}^{m} |w_j|\), a convex combination of L2 and L1, where \(\rho\) is given by 1 - l1_ratio.

The Figure below shows the contours of the different regularization terms in a 2-dimensional parameter space (\(m=2\)) when \(R(w) = 1\).

1.5.8.1. SGD#

Stochastic gradient descent is an optimization method for unconstrained optimization problems. In contrast to (batch) gradient descent, SGD approximates the true gradient of \(E(w,b)\) by considering a single training example at a time.

The class SGDClassifier implements a first-order SGD learning routine. The algorithm iterates over the training examples and for each example updates the model parameters according to the update rule given by

\[w \leftarrow w - \eta \left[\alpha \frac{\partial R(w)}{\partial w}+ \frac{\partial L(w^T x_i + b, y_i)}{\partial w}\right]\]

where \(\eta\) is the learning rate which controls the step-size in the parameter space. The intercept \(b\) is updated similarly but without regularization (and with additional decay for sparse matrices, as detailed in Implementation details).

The learning rate \(\eta\) can be either constant or gradually decaying. For classification, the default learning rate schedule (learning_rate='optimal') is given by

\[\eta^{(t)} = \frac {1}{\alpha (t_0 + t)}\]

where \(t\) is the time step (there are a total of n_samples * n_iter time steps), \(t_0\) is determined based on a heuristic proposed by Léon Bottou such that the expected initial updates are comparable with the expected size of the weights (assuming that the norm of the training samples is approx. 1). The exact definition can be found in _init_t in BaseSGD.
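A minimal NumPy sketch (toy code, not the library implementation) of the update rule and the 'optimal' schedule above, using the hinge loss and L2 penalty; here \(t_0\) is simply passed in as a constant instead of being derived from Bottou's heuristic:

    import numpy as np

    def sgd_hinge_l2(X, y, alpha=1e-4, epochs=5, t0=1.0, seed=0):
        rng = np.random.RandomState(seed)
        n_samples, n_features = X.shape
        w, b, t = np.zeros(n_features), 0.0, 0
        for _ in range(epochs):
            for i in rng.permutation(n_samples):
                eta = 1.0 / (alpha * (t0 + t))        # learning_rate='optimal'
                grad_w = alpha * w                    # gradient of the L2 penalty
                if y[i] * (X[i] @ w + b) < 1.0:       # hinge loss is active
                    grad_w = grad_w - y[i] * X[i]
                    b += eta * y[i]                   # intercept: no regularization
                w -= eta * grad_w
                t += 1
        return w, b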

For regression the default learning rate schedule is inverse scaling (learning_rate='invscaling'), given by

\[\eta^{(t)} = \frac{eta_0}{t^{power\_t}}\]

where \(eta_0\) and \(power\_t\) are hyperparameters chosen by the user via eta0 and power_t, resp.

For a constant learning rate use learning_rate='constant' and use eta0 to specify the learning rate.

For an adaptively decreasing learning rate, use learning_rate='adaptive' and use eta0 to specify the starting learning rate. When the stopping criterion is reached, the learning rate is divided by 5 instead of stopping the algorithm; the algorithm only stops when the learning rate goes below 1e-6.
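For reference, a minimal sketch of selecting each schedule (the eta0 and power_t values are illustrative):

    from sklearn.linear_model import SGDClassifier, SGDRegressor

    clf_optimal = SGDClassifier(learning_rate="optimal", alpha=1e-4)     # classification default
    reg_invscale = SGDRegressor(learning_rate="invscaling",
                                eta0=0.01, power_t=0.25)                 # regression default
    clf_constant = SGDClassifier(learning_rate="constant", eta0=0.01)
    clf_adaptive = SGDClassifier(learning_rate="adaptive", eta0=0.01)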

The model parameters can be accessed through the coef_ and intercept_ attributes: coef_ holds the weights \(w\) and intercept_ holds \(b\).

When using Averaged SGD (with the average parameter), coef_ is set to the average weight across all updates: coef_ \(= \frac{1}{T} \sum_{t=0}^{T-1} w^{(t)}\), where \(T\) is the total number of updates, found in the t_ attribute.

1.5.9. Implementation details#

The implementation of SGD is influenced by the Stochastic Gradient SVM of [7]. Similar to SvmSGD, the weight vector is represented as the product of a scalar and a vector which allows an efficient weight update in the case of L2 regularization. In the case of sparse input X, the intercept is updated with a smaller learning rate (multiplied by 0.01) to account for the fact that it is updated more frequently. Training examples are picked up sequentially and the learning rate is lowered after each observed example. We adopted the learning rate schedule from [8]. For multi-class classification, a “one versus all” approach is used. We use the truncated gradient algorithm proposed in [9] for L1 regularization (and the Elastic Net). The code is written in Cython.
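A minimal sketch (toy code, not the actual Cython implementation) of the scalar-times-vector representation mentioned above: the L2 shrinkage only rescales the scalar, so an update driven by a sparse example touches only its non-zero features:

    import numpy as np

    class ScaledWeights:
        """Represents w = scale * v."""

        def __init__(self, n_features):
            self.v = np.zeros(n_features)
            self.scale = 1.0

        def l2_shrink(self, eta, alpha):
            # w <- (1 - eta * alpha) * w, in O(1) instead of O(n_features)
            self.scale *= 1.0 - eta * alpha

        def add(self, eta, grad_idx, grad_val):
            # w <- w - eta * grad, touching only the non-zero entries of the gradient
            self.v[grad_idx] -= eta * grad_val / self.scale

        def dot(self, x_idx, x_val):
            return self.scale * np.dot(self.v[x_idx], x_val)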

