Table of Contents: 1. Gradient Descent; 2. Mini-Batch Gradient Descent; 3. Momentum; 4. Adam; 5. Models with Different Optimizers (5.1 Mini-batch Gradient Descent, 5.2 Mini-batch Gradient Descent with Momentum, 5.3 Mini-batch Gradient Descent with Adam, 5.4 Comparison and Summary); Quiz: see the reference post

Notes: 02. Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization, W2. Optimization Algorithms
Import some packages:
```python
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets

from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *

%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```

1. Gradient Descent
(Batch) Gradient Descent updates every layer $l$:

$$W^{[l]} = W^{[l]} - \alpha \, dW^{[l]}$$
$$b^{[l]} = b^{[l]} - \alpha \, db^{[l]}$$

where $l$ is the layer index and $\alpha$ is the learning rate.
```python
# GRADED FUNCTION: update_parameters_with_gd

def update_parameters_with_gd(parameters, grads, learning_rate):
    """
    Update parameters using one step of gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters to be updated:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients to update each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    learning_rate -- the learning rate, scalar.

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    L = len(parameters) // 2  # number of layers in the neural networks

    # Update rule for each parameter
    for l in range(L):
        ### START CODE HERE ### (approx. 2 lines)
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
        ### END CODE HERE ###

    return parameters
```

Stochastic Gradient Descent (SGD)
SGD uses only 1 example per gradient update. When the training set is large, SGD is fast, but its optimization path oscillates.

Code differences:
(Batch) Gradient Descent:
```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    # Forward propagation
    a, caches = forward_propagation(X, parameters)
    # Compute cost.
    cost = compute_cost(a, Y)
    # Backward propagation.
    grads = backward_propagation(a, caches, parameters)
    # Update parameters.
    parameters = update_parameters(parameters, grads)
```
Stochastic Gradient Descent:
```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
    for j in range(0, m):
        # Forward propagation
        a, caches = forward_propagation(X[:, j], parameters)
        # Compute cost
        cost = compute_cost(a, Y[:, j])
        # Backward propagation
        grads = backward_propagation(a, caches, parameters)
        # Update parameters.
        parameters = update_parameters(parameters, grads)
```

The three methods (batch gradient descent, mini-batch gradient descent, SGD) differ only in how many examples are used for a single gradient update.
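For completeness, here is the mini-batch loop sketched in the same pseudo-code style as the two loops above. It assumes the random_mini_batches helper built in Section 2 and the same placeholder names (data_input, labels, layers_dims, num_epochs, mini_batch_size); it is a sketch, not a complete program.

```python
# Mini-batch Gradient Descent (sketch):
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_epochs):
    minibatches = random_mini_batches(X, Y, mini_batch_size)
    for (minibatch_X, minibatch_Y) in minibatches:
        # Forward propagation
        a, caches = forward_propagation(minibatch_X, parameters)
        # Compute cost
        cost = compute_cost(a, minibatch_Y)
        # Backward propagation
        grads = backward_propagation(a, caches, parameters)
        # Update parameters.
        parameters = update_parameters(parameters, grads)
```

A fully working version of this loop, with momentum and Adam as optional update rules, is the model() function in Section 5.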
With well-tuned hyperparameters, mini-batch gradient descent usually outperforms both batch gradient descent and SGD, especially when the training set is large.
2. Mini-Batch Gradient Descent
How to build mini-batches from the training set $(X, Y)$:

Step 1: Shuffle the data randomly. X and Y are shuffled synchronously so that the example-label correspondence is preserved.
Step 2: Partition the shuffled data into chunks of size mini_batch_size. The last mini-batch may be smaller, which is fine.
```python
# GRADED FUNCTION: random_mini_batches

def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    np.random.seed(seed)  # To make your "random" minibatches the same as ours
    m = X.shape[1]        # number of training examples
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = math.floor(m / mini_batch_size)  # number of mini batches of size mini_batch_size in your partitionning
    for k in range(0, num_complete_minibatches):
        ### START CODE HERE ### (approx. 2 lines)
        mini_batch_X = shuffled_X[:, k * mini_batch_size : (k+1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k+1) * mini_batch_size]
        ### END CODE HERE ###
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handling the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        ### START CODE HERE ### (approx. 2 lines)
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size :]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size :]
        ### END CODE HERE ###
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches
```

3. Momentum
Gradient descent with momentum reduces the oscillation of mini-batch gradient descent.

The reason: momentum takes past gradients into account to smooth out the current update, so the gradient direction does not change abruptly.

Initialize the velocity of the gradients to 0:
```python
# GRADED FUNCTION: initialize_velocity

def initialize_velocity(parameters):
    """
    Initializes the velocity as a python dictionary with:
                - keys: "dW1", "db1", ..., "dWL", "dbL"
                - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Arguments:
    parameters -- python dictionary containing your parameters.
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl

    Returns:
    v -- python dictionary containing the current velocity.
                    v['dW' + str(l)] = velocity of dWl
                    v['db' + str(l)] = velocity of dbl
    """
    L = len(parameters) // 2  # number of layers in the neural networks
    v = {}

    # Initialize velocity
    for l in range(L):
        ### START CODE HERE ### (approx. 2 lines)
        v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
        v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
        ### END CODE HERE ###

    return v
```

For each layer $l$, the momentum update is:

$$\begin{cases} v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\ W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}} \end{cases}$$

$$\begin{cases} v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\ b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}} \end{cases}$$
```python
# GRADED FUNCTION: update_parameters_with_momentum

def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
    """
    Update parameters using Momentum

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients for each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    v -- python dictionary containing the current velocity:
                    v['dW' + str(l)] = ...
                    v['db' + str(l)] = ...
    beta -- the momentum hyperparameter, scalar
    learning_rate -- the learning rate, scalar

    Returns:
    parameters -- python dictionary containing your updated parameters
    v -- python dictionary containing your updated velocities
    """
    L = len(parameters) // 2  # number of layers in the neural networks

    # Momentum update for each parameter
    for l in range(L):
        ### START CODE HERE ### (approx. 4 lines)
        # compute velocities
        v["dW" + str(l+1)] = beta * v["dW" + str(l+1)] + (1 - beta) * grads["dW" + str(l+1)]
        v["db" + str(l+1)] = beta * v["db" + str(l+1)] + (1 - beta) * grads["db" + str(l+1)]
        # update parameters
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v["dW" + str(l+1)]
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v["db" + str(l+1)]
        ### END CODE HERE ###

    return parameters, v
```

Note:
The velocity v is initialized to 0, so the algorithm needs a few iterations to "build up" velocity before it starts taking larger steps. With $\beta = 0$ this reduces to standard gradient descent without momentum.

How to choose $\beta$:

The larger $\beta$ is, the more past gradients are taken into account and the smoother the updates become, but too large a value over-smooths. Typical values lie between 0.8 and 0.999; if unsure, 0.9 is a reasonable default. Tune it on a validation set and observe how it affects the cost function.
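To see the smoothing effect numerically, here is a minimal NumPy sketch (illustration only, not part of the assignment; the noisy signal and the helper name ema are made up for this demo). It applies the exponentially weighted average used by momentum to a noisy 1-D "gradient" sequence for several values of $\beta$.

```python
import numpy as np

np.random.seed(1)
# A noisy 1-D "gradient" signal: a constant trend of 1.0 plus Gaussian noise
grads = 1.0 + 0.5 * np.random.randn(200)

def ema(signal, beta):
    """Exponentially weighted average: v = beta * v + (1 - beta) * g, starting from v = 0."""
    v = 0.0
    out = []
    for g in signal:
        v = beta * v + (1 - beta) * g
        out.append(v)
    return np.array(out)

for beta in [0.0, 0.5, 0.9, 0.99]:
    smoothed = ema(grads, beta)
    # look at the second half only, after the average has warmed up from v = 0
    print("beta = %.2f -> std of smoothed gradients: %.3f" % (beta, smoothed[100:].std()))

# Larger beta averages over more past gradients, so the output is smoother,
# but it also reacts more slowly to changes (and starts near 0, as noted above).
```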
4. Adam
See the course notes.
For each layer $l$:

$$\begin{cases}
v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J}}{\partial W^{[l]}} \\
v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\
s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) \left(\frac{\partial \mathcal{J}}{\partial W^{[l]}}\right)^2 \\
s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}
\end{cases}$$

(the analogous update applies to $b^{[l]}$)
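The bias-correction terms $1-(\beta_1)^t$ and $1-(\beta_2)^t$ matter mostly in the first iterations: since $v$ and $s$ start at 0, the raw averages are biased toward 0 early on. A tiny scalar sketch (illustration only, not assignment code) shows the effect:

```python
beta1 = 0.9
g = 2.0   # pretend every gradient is the constant 2.0
v = 0.0
for t in range(1, 6):
    v = beta1 * v + (1 - beta1) * g        # raw moving average, starts at 0
    v_corrected = v / (1 - beta1 ** t)     # bias-corrected estimate
    print("t = %d: v = %.4f, v_corrected = %.4f" % (t, v, v_corrected))

# Without correction v crawls up from 0 (0.2000, 0.3800, 0.5420, ...);
# with correction the true average 2.0 is recovered from the very first step.
```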
v and s are initialized to 0:
```python
# GRADED FUNCTION: initialize_adam

def initialize_adam(parameters):
    """
    Initializes v and s as two python dictionaries with:
                - keys: "dW1", "db1", ..., "dWL", "dbL"
                - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Arguments:
    parameters -- python dictionary containing your parameters.
                    parameters["W" + str(l)] = Wl
                    parameters["b" + str(l)] = bl

    Returns:
    v -- python dictionary that will contain the exponentially weighted average of the gradient.
                    v["dW" + str(l)] = ...
                    v["db" + str(l)] = ...
    s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
                    s["dW" + str(l)] = ...
                    s["db" + str(l)] = ...
    """
    L = len(parameters) // 2  # number of layers in the neural networks
    v = {}
    s = {}

    # Initialize v, s. Input: "parameters". Outputs: "v, s".
    for l in range(L):
        ### START CODE HERE ### (approx. 4 lines)
        v["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
        v["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
        s["dW" + str(l+1)] = np.zeros(parameters["W" + str(l+1)].shape)
        s["db" + str(l+1)] = np.zeros(parameters["b" + str(l+1)].shape)
        ### END CODE HERE ###

    return v, s
```

Iterative update:
```python
# GRADED FUNCTION: update_parameters_with_adam

def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate=0.01,
                                beta1=0.9, beta2=0.999, epsilon=1e-8):
    """
    Update parameters using Adam

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients for each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    learning_rate -- the learning rate, scalar.
    beta1 -- Exponential decay hyperparameter for the first moment estimates
    beta2 -- Exponential decay hyperparameter for the second moment estimates
    epsilon -- hyperparameter preventing division by zero in Adam updates

    Returns:
    parameters -- python dictionary containing your updated parameters
    v -- Adam variable, moving average of the first gradient, python dictionary
    s -- Adam variable, moving average of the squared gradient, python dictionary
    """
    L = len(parameters) // 2   # number of layers in the neural networks
    v_corrected = {}           # Initializing first moment estimate, python dictionary
    s_corrected = {}           # Initializing second moment estimate, python dictionary

    # Perform Adam update on all parameters
    for l in range(L):
        # Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
        ### START CODE HERE ### (approx. 2 lines)
        v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1) * grads["dW" + str(l+1)]
        v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1) * grads["db" + str(l+1)]
        ### END CODE HERE ###

        # Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
        ### START CODE HERE ### (approx. 2 lines)
        v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - np.power(beta1, t))
        v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - np.power(beta1, t))
        ### END CODE HERE ###

        # Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
        ### START CODE HERE ### (approx. 2 lines)
        s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * grads["dW" + str(l+1)]**2
        s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * grads["db" + str(l+1)]**2
        ### END CODE HERE ###

        # Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
        ### START CODE HERE ### (approx. 2 lines)
        s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - np.power(beta2, t))
        s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - np.power(beta2, t))
        ### END CODE HERE ###

        # Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
        ### START CODE HERE ### (approx. 2 lines)
        parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * v_corrected["dW" + str(l+1)] / (np.sqrt(s_corrected["dW" + str(l+1)]) + epsilon)
        parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * v_corrected["db" + str(l+1)] / (np.sqrt(s_corrected["db" + str(l+1)]) + epsilon)
        ### END CODE HERE ###

    return parameters, v, s
```

5. Models with Different Optimizers
Dataset: the following dataset is used for testing, with a 3-layer neural network model.

- Mini-batch Gradient Descent: uses update_parameters_with_gd()
- Mini-batch Momentum: uses initialize_velocity() and update_parameters_with_momentum()
- Mini-batch Adam: uses initialize_adam() and update_parameters_with_adam()
```python
def model(X, Y, layers_dims, optimizer, learning_rate=0.0007, mini_batch_size=64, beta=0.9,
          beta1=0.9, beta2=0.999, epsilon=1e-8, num_epochs=10000, print_cost=True):
    """
    3-layer neural network model which can be run in different optimizer modes.

    Arguments:
    X -- input data, of shape (2, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    layers_dims -- python list, containing the size of each layer
    learning_rate -- the learning rate, scalar.
    mini_batch_size -- the size of a mini batch
    beta -- Momentum hyperparameter
    beta1 -- Exponential decay hyperparameter for the past gradients estimates
    beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
    epsilon -- hyperparameter preventing division by zero in Adam updates
    num_epochs -- number of epochs
    print_cost -- True to print the cost every 1000 epochs

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    L = len(layers_dims)  # number of layers in the neural networks
    costs = []            # to keep track of the cost
    t = 0                 # initializing the counter required for Adam update
    seed = 10             # For grading purposes, so that your "random" minibatches are the same as ours

    # Initialize parameters
    parameters = initialize_parameters(layers_dims)

    # Initialize the optimizer
    if optimizer == "gd":
        pass  # no initialization required for gradient descent
    elif optimizer == "momentum":
        v = initialize_velocity(parameters)
    elif optimizer == "adam":
        v, s = initialize_adam(parameters)

    # Optimization loop
    for i in range(num_epochs):
        # Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
        seed = seed + 1
        minibatches = random_mini_batches(X, Y, mini_batch_size, seed)

        for minibatch in minibatches:
            # Select a minibatch
            (minibatch_X, minibatch_Y) = minibatch

            # Forward propagation
            a3, caches = forward_propagation(minibatch_X, parameters)

            # Compute cost
            cost = compute_cost(a3, minibatch_Y)

            # Backward propagation
            grads = backward_propagation(minibatch_X, minibatch_Y, caches)

            # Update parameters
            if optimizer == "gd":
                parameters = update_parameters_with_gd(parameters, grads, learning_rate)
            elif optimizer == "momentum":
                parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
            elif optimizer == "adam":
                t = t + 1  # Adam counter
                parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
                                                               t, learning_rate, beta1, beta2, epsilon)

        # Print the cost every 1000 epoch
        if print_cost and i % 1000 == 0:
            print("Cost after epoch %i: %f" % (i, cost))
        if print_cost and i % 100 == 0:
            costs.append(cost)

    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('epochs (per 100)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters
```

5.1 Mini-batch Gradient Descent
```python
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer="gd")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axes.set_ylim([-1, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```

Performance: Accuracy: 0.79 (mini-batch gradient descent).
5.2 Mini-batch Gradient Descent with Momentum
```python
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta=0.9, optimizer="momentum")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axes.set_ylim([-1, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```

Performance: Accuracy: 0.79 (mini-batch gradient descent with momentum). Because this example is too simple, the advantage of momentum does not show up; on larger datasets it would do better than the model without momentum.
5.3 Mini-batch Gradient Descent with Adam
```python
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer="adam")

# Predict
predictions = predict(train_X, train_Y, parameters)

# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5, 2.5])
axes.set_ylim([-1, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```

Performance: Accuracy: 0.9366666666666666 (mini-batch gradient descent with Adam).

5.4 Comparison and Summary
| Optimization method | Accuracy | Cost shape |
| --- | --- | --- |
| Gradient descent | 79.7% | Oscillating (my result was smooth) |
| Momentum | 79.7% | Oscillating (my result was smooth; advice welcome) |
| Adam | 94% | Smoother |
- Momentum usually helps, but with a small learning rate and an overly simple dataset its advantage is not visible here.
- Adam clearly outperforms mini-batch gradient descent and momentum. If run for more iterations, all three methods would reach very good results, but Adam converges faster.
- Advantages of Adam: relatively low memory requirements (though higher than plain gradient descent or gradient descent with momentum), and it usually works well even with little hyperparameter tuning (apart from the learning rate).

My CSDN blog: https://michael.blog.csdn.net/