Project repository: https://github.com/VainF/Torch-Pruning

Torch-Pruning is a model-pruning library dedicated to PyTorch. It uses the DepGraph technique to analyze the dependencies between a model's layers, and combining DepGraph with existing pruning criteria (such as Magnitude Pruning or Taylor Pruning) yields good pruning results.

This post restructures the official project examples into four parts: an explanation of the pruning technique, saving and loading pruned models, basic usage of the pruner, and concrete pruning use cases. It also draws on external sources to analyze how pruning affects model accuracy.

1. Basic Information

1.1 Installation

Open https://github.com/VainF/Torch-Pruning and download the project. Then, in a terminal, enter the project directory and run `pip install -r requirements.txt` to install the dependencies, followed by `pip install -e .` to install the project in editable mode. To verify the installation, run `python -c "import torch_pruning"`; if no error is printed, the installation succeeded.

1.2 How DepGraph Works

In structural pruning, a group is defined as the smallest removable unit in a deep network. Each group consists of multiple interdependent layers that must be pruned simultaneously to keep the resulting structure consistent. Deep networks, however, exhibit intricate dependencies between layers, which poses a major challenge for structural pruning. This work addresses the challenge with an automated mechanism called DepGraph, which makes parameter grouping straightforward and facilitates pruning a wide variety of deep networks.

Pruning a layer directly breaks the dependencies between layers and makes the forward pass fail. In the code below, removing channels 0 and 1 from `model.conv1` leaves the following `bn1` layer with an input that no longer matches its parameters, so the forward call raises an error:

```python
from torchvision.models import resnet18
import torch_pruning as tp
import torch

model = resnet18().eval()
tp.prune_conv_out_channels(model.conv1, idxs=[0, 1])  # remove channel 0 and channel 1
output = model(torch.randn(1, 3, 224, 224))  # test forward; this raises a shape error
```

Even if we also prune the subsequent layers by hand, the code below still fails, because layers coupled through the residual connection still expect 64 channels:

```python
model = resnet18(pretrained=True).eval()
tp.prune_conv_out_channels(model.conv1, idxs=[0, 1])
tp.prune_batchnorm_out_channels(model.bn1, idxs=[0, 1])
tp.prune_batchnorm_in_channels(model.layer1[0].conv1, idxs=[0, 1])
output = model(torch.randn(1, 3, 224, 224))
```

With DepGraph the pruning code looks like this: first build the dependency graph with `tp.DependencyGraph().build_dependency`, then call `DG.get_pruning_group` to obtain the group of layers coupled with the target layer, and finally check the group and prune it:

```python
import torch
from torchvision.models import resnet18
import torch_pruning as tp

model = resnet18(pretrained=True).eval()

# 1. Build the dependency graph for resnet18.
DG = tp.DependencyGraph().build_dependency(model, example_inputs=torch.randn(1, 3, 224, 224))

# 2. Specify the to-be-pruned channels. Here we prune the channels indexed by [2, 6, 9].
group = DG.get_pruning_group(model.conv1, tp.prune_conv_out_channels, idxs=[2, 6, 9])

# 3. Prune all grouped layers that are coupled with model.conv1 (included).
print(group)
if DG.check_pruning_group(group):  # avoid fully pruning a layer, i.e., channels=0
    group.prune()

# 4. Save & load.
model.zero_grad()  # we don't want to store gradient information
torch.save(model, 'model.pth')  # save the whole model, without .state_dict()
model = torch.load('model.pth')  # load the model object
```
Running the code prints the output below; you can see that the group captures every layer that depends on `model.conv1`:

```
--------------------------------
          Pruning Group
--------------------------------
[0] prune_out_channels on conv1 (Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)) => prune_out_channels on conv1 (Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)), idxs=[2, 6, 9] (Pruning Root)
[1] prune_out_channels on conv1 (Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)) => prune_out_channels on bn1 (BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)), idxs=[2, 6, 9]
[2] prune_out_channels on bn1 (BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)) => prune_out_channels on _ElementWiseOp_20(ReluBackward0), idxs=[2, 6, 9]
[3] prune_out_channels on _ElementWiseOp_20(ReluBackward0) => prune_out_channels on _ElementWiseOp_19(MaxPool2DWithIndicesBackward0), idxs=[2, 6, 9]
[4] prune_out_channels on _ElementWiseOp_19(MaxPool2DWithIndicesBackward0) => prune_out_channels on _ElementWiseOp_18(AddBackward0), idxs=[2, 6, 9]
[5] prune_out_channels on _ElementWiseOp_19(MaxPool2DWithIndicesBackward0) => prune_in_channels on layer1.0.conv1 (Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)), idxs=[2, 6, 9]
[6] prune_out_channels on _ElementWiseOp_18(AddBackward0) => prune_out_channels on layer1.0.bn2 (BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)), idxs=[2, 6, 9]
[7] prune_out_channels on _ElementWiseOp_18(AddBackward0) => prune_out_channels on _ElementWiseOp_17(ReluBackward0), idxs=[2, 6, 9]
[8] prune_out_channels on _ElementWiseOp_17(ReluBackward0) => prune_out_channels on _ElementWiseOp_16(AddBackward0), idxs=[2, 6, 9]
[9] prune_out_channels on _ElementWiseOp_17(ReluBackward0) => prune_in_channels on layer1.1.conv1 (Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)), idxs=[2, 6, 9]
[10] prune_out_channels on _ElementWiseOp_16(AddBackward0) => prune_out_channels on layer1.1.bn2 (BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)), idxs=[2, 6, 9]
[11] prune_out_channels on _ElementWiseOp_16(AddBackward0) => prune_out_channels on _ElementWiseOp_15(ReluBackward0), idxs=[2, 6, 9]
[12] prune_out_channels on _ElementWiseOp_15(ReluBackward0) => prune_in_channels on layer2.0.downsample.0 (Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)), idxs=[2, 6, 9]
[13] prune_out_channels on _ElementWiseOp_15(ReluBackward0) => prune_in_channels on layer2.0.conv1 (Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)), idxs=[2, 6, 9]
[14] prune_out_channels on layer1.1.bn2 (BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)) => prune_out_channels on layer1.1.conv2 (Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)), idxs=[2, 6, 9]
[15] prune_out_channels on layer1.0.bn2 (BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)) => prune_out_channels on layer1.0.conv2 (Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)), idxs=[2, 6, 9]
--------------------------------
```

A quick sanity check of the pruned model follows.
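This check is a minimal sketch, continuing from the DepGraph snippet above (`model` and `tp` are assumed to still be in scope); it uses `tp.utils.count_ops_and_params`, which the later examples also rely on:

```python
# Sanity check after DepGraph pruning (run right after group.prune() above).
out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000]); the forward pass that failed under manual pruning now succeeds

# conv1 now has 61 output channels (64 minus the 3 pruned ones); count the savings:
macs, nparams = tp.utils.count_ops_and_params(model, torch.randn(1, 3, 224, 224))
print(f"MACs: {macs / 1e9:.4f} G, Params: {nparams / 1e6:.4f} M")
```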
1.3 Saving and Loading Pruned Models

Because pruning changes the network structure, saving only the model parameters is not enough: the original architecture can no longer host them. The model structure therefore has to be saved together with the parameters, and loaded together as well:

```python
model.zero_grad()  # we don't want to store gradient information
torch.save(model, 'model.pth')  # save the whole model, without .state_dict()
model = torch.load('model.pth')  # load the pruned model
```

Alternatively, use `tp.state_dict` to extract the relevant parameters for saving, and `tp.load_state_dict` to apply the pruned parameters to an unpruned model, turning it into the pruned one. A sketch of the failure mode that makes this machinery necessary follows the code below.

```python
# Save the pruned state_dict, which includes both pruned parameters and modified attributes.
state_dict = tp.state_dict(pruned_model)  # the pruned model, e.g., a resnet18-half
torch.save(state_dict, 'pruned.pth')

# Create a new, unpruned model, e.g., resnet18.
new_model = resnet18().eval()

# Load the pruned state_dict into the unpruned model.
loaded_state_dict = torch.load('pruned.pth', map_location='cpu')
tp.load_state_dict(new_model, state_dict=loaded_state_dict)
print(new_model)  # this will be a pruned model
```
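As a minimal sketch of why a plain `state_dict` round-trip does not work (here `pruned_model` is hypothetical, e.g., a resnet18 whose channels were halved as in section 2.1 below):

```python
import torch
from torchvision.models import resnet18

# pruned_model: assumed to be a 50%-pruned resnet18 (hypothetical; see section 2.1).
torch.save(pruned_model.state_dict(), 'pruned_sd.pth')

new_model = resnet18()  # a fresh, unpruned architecture
# Raises RuntimeError with size-mismatch messages: e.g., the checkpoint holds a
# [32, 3, 7, 7] tensor for conv1.weight while the unpruned model expects [64, 3, 7, 7].
new_model.load_state_dict(torch.load('pruned_sd.pth'))
```

This is exactly the gap that saving the whole model object, or `tp.state_dict`/`tp.load_state_dict`, closes.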
2. Basic Pruning Examples

2.1 Pruning to a Predefined Target Structure

The code below prunes with the TaylorImportance criterion and excludes the output layer from pruning. The MagnitudePruner is configured to remove 50% of the channels, spread over `iterative_steps` rounds, with fine-tuning after every round. Overall, pruning toward a predefined target structure is the least effective strategy, a conclusion based on the data analyzed in https://blog.csdn.net/a486259/article/details/140407147.

```python
import torch
from torchvision.models import resnet18
import torch_pruning as tp

# model = resnet18(pretrained=True)
model = resnet18()

# Importance criterion
example_inputs = torch.randn(1, 3, 224, 224)
imp = tp.importance.TaylorImportance()

ignored_layers = []
for m in model.modules():
    if isinstance(m, torch.nn.Linear) and m.out_features == 1000:
        ignored_layers.append(m)  # DO NOT prune the final classifier!

iterative_steps = 5  # progressive pruning
pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    iterative_steps=iterative_steps,
    ch_sparsity=0.5,  # remove 50% channels: ResNet18 {64, 128, 256, 512} => ResNet18_Half {32, 64, 128, 256}
    # pruning_ratio=0.5,  # newer alias for ch_sparsity
    ignored_layers=ignored_layers,
)

base_macs, base_nparams = tp.utils.count_ops_and_params(model, example_inputs)
for i in range(iterative_steps):
    if isinstance(imp, tp.importance.TaylorImportance):
        # Taylor expansion requires gradients for importance estimation
        loss = model(example_inputs).sum()  # a dummy loss for TaylorImportance
        loss.backward()  # before pruner.step()
    pruner.step()
    macs, nparams = tp.utils.count_ops_and_params(model, example_inputs)
    print(f"iter {i} | rate: {macs/base_macs:.4f} {nparams/base_nparams:.4f}")
    # finetune your model here
    # finetune(model)
    # ...
print(model)
```

The output is shown below: `macs` and `nparams` decrease step by step, and in the final model structure every channel count is halved, with only the output layer left untouched (the printout is abridged; layer2 through layer4 follow the same halved pattern):

```
iter 0 | rate:0.8092 0.8111
iter 1 | rate:0.6469 0.6445
iter 2 | rate:0.4971 0.4979
iter 3 | rate:0.3718 0.3695
iter 4 | rate:0.2674 0.2614
ResNet(
  (conv1): Conv2d(3, 32, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(...)
  )
  (layer2): Sequential(...)  # 32 -> 64 channels, first block with downsample
  (layer3): Sequential(...)  # 64 -> 128 channels, first block with downsample
  (layer4): Sequential(...)  # 128 -> 256 channels, first block with downsample
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=256, out_features=1000, bias=True)
)
```

A sketch of the fine-tuning step referenced by the `finetune(model)` placeholder follows.
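The `# finetune(model)` comment in the loop above is intentionally left empty, since the recovery training depends on your task. A minimal sketch of such a step (the `train_loader` and hyperparameters here are hypothetical, not part of the library):

```python
import torch

def finetune(model, train_loader, epochs=1, lr=1e-4, device='cpu'):
    """One short recovery phase between pruning steps (a sketch, not a recipe)."""
    model.train().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    model.eval()
    return model
```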
2.2 Automatic Structural Pruning

Here "automatic" means there is an overall target (how much of the total channel budget to prune) but no predetermined target structure: some layers may lose many channels, others few. Compared with the code in 2.1, the main change is the extra argument `global_pruning=True`, and this scheme is more effective than pruning toward a fixed structure. It is like layoffs: forcing every department to cut the same fraction of staff versus fixing the overall fraction at the company level and letting per-department cuts follow importance rankings. The second approach is necessarily more effective; the first keeps the top-ranked but still unproductive employees of inefficient departments.

```python
import torch
from torchvision.models import resnet18
import torch_pruning as tp

# model = resnet18(pretrained=True)
model = resnet18()

# Importance criterion
example_inputs = torch.randn(1, 3, 224, 224)
imp = tp.importance.TaylorImportance()

ignored_layers = []
for m in model.modules():
    if isinstance(m, torch.nn.Linear) and m.out_features == 1000:
        ignored_layers.append(m)  # DO NOT prune the final classifier!

iterative_steps = 3  # progressive pruning
pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    iterative_steps=iterative_steps,
    pruning_ratio=0.5,  # remove 50% of the channels
    ignored_layers=ignored_layers,
    global_pruning=True,
)

base_macs, base_nparams = tp.utils.count_ops_and_params(model, example_inputs)
for i in range(iterative_steps):
    if isinstance(imp, tp.importance.TaylorImportance):
        # Taylor expansion requires gradients for importance estimation
        loss = model(example_inputs).sum()  # a dummy loss for TaylorImportance
        loss.backward()  # before pruner.step()
    pruner.step()
    macs, nparams = tp.utils.count_ops_and_params(model, example_inputs)
    print(f"iter {i} | rate: {macs/base_macs:.4f} {nparams/base_nparams:.4f}")
    # finetune your model here
    # finetune(model)
    # ...
print(model)
```

2.3 Parameters of MagnitudePruner

Specifying per-layer pruning ratios

The `pruning_ratio_dict` argument sets the pruning ratio of `model.layer2` to 20%. This is useful when prior experience suggests controlling the pruning ratio of particular layers:

```python
import torch
from torchvision.models import resnet18
import torch_pruning as tp

model = resnet18()
example_inputs = torch.randn(1, 3, 224, 224)
imp = tp.importance.MagnitudeImportance(p=2)

pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    imp,
    pruning_ratio=0.5,
    pruning_ratio_dict={model.layer2: 0.2},
)
pruner.step()
print(model)
```

After running, the channel widths go from ResNet18's {64, 128, 256, 512} to {32, 102, 128, 256}: layer2 keeps 102 channels (128 × 0.8, rounded down) instead of being halved.

Capping the maximum pruning ratio

The `max_pruning_ratio` argument sets an upper bound on how much any single layer may be pruned, preventing a layer from being pruned too heavily, or removed outright, under sparse or automatic (global) pruning. A minimal sketch follows.
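This sketch reuses the resnet18 setup from above; the cap value of 0.9 is just an illustrative choice, not a recommendation:

```python
import torch
from torchvision.models import resnet18
import torch_pruning as tp

model = resnet18()
example_inputs = torch.randn(1, 3, 224, 224)
imp = tp.importance.MagnitudeImportance(p=2)

pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    pruning_ratio=0.5,
    global_pruning=True,    # global ranking can prune some layers very aggressively...
    max_pruning_ratio=0.9,  # ...so cap any single layer at 90% channel removal
)
pruner.step()
print(model)
```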
Iteration count and the pruning scheduler

`iterative_steps` is useful when you intend to prune the model over several rounds. By default, the pruner gradually increases the model's sparsity until the desired `pruning_ratio` is reached. The code below reaches the pruning target in 5 steps:

```python
import torch
from torchvision.models import resnet18
import torch_pruning as tp

model = resnet18()
example_inputs = torch.randn(1, 3, 224, 224)
imp = tp.importance.MagnitudeImportance(p=2)

iterative_steps = 5  # progressive pruning
pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    iterative_steps=iterative_steps,
    pruning_ratio=0.5,  # remove 50% channels: ResNet18 {64, 128, 256, 512} => ResNet18_Half {32, 64, 128, 256}
)

# Prune the model, iteratively if necessary.
base_macs, base_nparams = tp.utils.count_ops_and_params(model, example_inputs)
for i in range(iterative_steps):
    pruner.step()
    macs, nparams = tp.utils.count_ops_and_params(model, example_inputs)
    print("Round %d/%d, Params: %.2f M" % (i + 1, iterative_steps, nparams / 1e6))
    # finetune your model here
    # finetune(model)
    # ...
print(model)
```

The corresponding output:

```
Round 1/5, Params: 9.44 M
Round 2/5, Params: 7.45 M
Round 3/5, Params: 5.71 M
Round 4/5, Params: 4.20 M
Round 5/5, Params: 2.93 M
```

Ignoring layers

This mainly serves to keep the output layer intact so that pruning does not change the model's output shape. The layers to skip are passed as objects via the `ignored_layers` argument:

```python
import torch
from torchvision.models import resnet18
import torch_pruning as tp

model = resnet18()
example_inputs = torch.randn(1, 3, 224, 224)
imp = tp.importance.MagnitudeImportance(p=2)

pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    pruning_ratio=0.5,  # remove 50% channels
    ignored_layers=[model.conv1, model.fc],  # ignore the first & last layers
)
pruner.step()
print(model)
```

Channel rounding

It is widely held that channel counts that are multiples of 16 run most efficiently on GPUs. The `round_to` argument keeps channel counts at multiples of a given number:

```python
import torch
from torchvision.models import resnet18
import torch_pruning as tp

model = resnet18()
example_inputs = torch.randn(1, 3, 224, 224)
imp = tp.importance.MagnitudeImportance(p=2)

pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    pruning_ratio=0.3,  # remove 30% of the channels
    round_to=10,  # round to multiples of 10. Note: 10x is not a good practice.
)
pruner.step()
print(model)
```

channel_groups

Some layers, such as `nn.GroupNorm` and `nn.Conv2d`, have a `groups` argument that introduces extra dependencies within the layer. After pruning, it is essential that all groups keep the same size. To satisfy this requirement, the `channel_groups` argument enables manual grouping of these channels. The code below keeps the parameters of `model.group_conv1` in groups of 8:

```python
pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs=example_inputs,
    importance=importance,
    iterative_steps=1,
    pruning_ratio=0.5,
    channel_groups={model.group_conv1: 8},  # for Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), groups=8)
)
```

Pruning extra parameters

Sometimes a model has trainable parameters that do not live inside conventional layers such as conv or fc. Pass these additional prunable parameters to the pruner via `unwrapped_parameters`:

```python
from torchvision.models.convnext import CNBlock, ConvNeXt

unwrapped_parameters = []
for m in model.modules():
    if isinstance(m, CNBlock):
        unwrapped_parameters.append((m.layer_scale, 0))

pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    pruning_ratio=0.5,
    unwrapped_parameters=unwrapped_parameters,
)
```

Restricting the pruning scope

The `root_module_types` argument specifies the "root", or first layer, of each group. In many cases we focus on pruning linear and convolution (Conv) layers; to enable pruning specifically for these layers, pass `root_module_types=[nn.Conv2d, nn.Linear]`. This ensures pruning is applied to the desired layers:

```python
pruner = tp.pruner.MagnitudePruner(
    model,
    example_inputs,
    importance=imp,
    pruning_ratio=0.5,
    root_module_types=[nn.Conv2d, nn.Linear],
)
```
3. Application Cases

3.1 Pruning timm Models

The official example is examples/timm_models/prune_timm_models.py, shown below. The notable trick is the `num_heads` argument, which adds support for transformer attention layers:

```python
import os, sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))))
os.environ['TIMM_FUSED_ATTN'] = '0'

import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Sequence
import timm
from timm.models.vision_transformer import Attention
import torch_pruning as tp
import argparse

parser = argparse.ArgumentParser(description='Prune timm models')
parser.add_argument('--model', default=None, type=str, help='model name')
parser.add_argument('--pruning_ratio', default=0.5, type=float, help='channel pruning ratio')
parser.add_argument('--global_pruning', default=False, action='store_true', help='global pruning')
parser.add_argument('--pretrained', default=False, action='store_true', help='use pretrained weights')
parser.add_argument('--list_models', default=False, action='store_true', help='list all models in timm')
args = parser.parse_args()

def main():
    timm_models = timm.list_models()
    if args.list_models:
        print(timm_models)
    if args.model is None:
        return
    assert args.model in timm_models, 'Model %s is not in timm model list: %s' % (args.model, timm_models)

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model = timm.create_model(args.model, pretrained=args.pretrained, no_jit=True).eval().to(device)

    imp = tp.importance.GroupNormImportance()
    print('Pruning %s...' % args.model)
    input_size = model.default_cfg['input_size']
    example_inputs = torch.randn(1, *input_size).to(device)
    test_output = model(example_inputs)

    ignored_layers = []
    num_heads = {}
    for m in model.modules():
        if hasattr(m, 'head'):  # isinstance(m, nn.Linear) and m.out_features == model.num_classes
            ignored_layers.append(model.head)
            print('Ignore classifier layer: ', m.head)
        # Attention layers
        if hasattr(m, 'num_heads'):
            if hasattr(m, 'qkv'):
                num_heads[m.qkv] = m.num_heads
                print('Attention layer: ', m.qkv, m.num_heads)
            elif hasattr(m, 'qkv_proj'):
                num_heads[m.qkv_proj] = m.num_heads

    print('Before pruning')
    print(model)
    base_macs, base_params = tp.utils.count_ops_and_params(model, example_inputs)

    pruner = tp.pruner.MetaPruner(
        model,
        example_inputs,
        global_pruning=args.global_pruning,  # if False, a uniform pruning ratio is assigned to different layers
        importance=imp,                      # importance criterion for parameter selection
        iterative_steps=1,                   # the number of iterations to achieve the target pruning ratio
        pruning_ratio=args.pruning_ratio,    # target pruning ratio
        num_heads=num_heads,
        ignored_layers=ignored_layers,
    )

    for g in pruner.step(interactive=True):
        g.prune()

    # Update the attention-layer attributes after pruning.
    for m in model.modules():
        if hasattr(m, 'num_heads'):
            if hasattr(m, 'qkv'):
                m.num_heads = num_heads[m.qkv]
                m.head_dim = m.qkv.out_features // (3 * m.num_heads)
            elif hasattr(m, 'qkv_proj'):
                m.num_heads = num_heads[m.qkv_proj]
                m.head_dim = m.qkv_proj.out_features // (3 * m.num_heads)

    print('After pruning')
    print(model)
    test_output = model(example_inputs)
    pruned_macs, pruned_params = tp.utils.count_ops_and_params(model, example_inputs)
    print('MACs: %.4f G => %.4f G' % (base_macs / 1e9, pruned_macs / 1e9))
    print('Params: %.4f M => %.4f M' % (base_params / 1e6, pruned_params / 1e6))

if __name__ == '__main__':
    main()
```
3.2 Pruning LLMs

examples/LLMs/prune_llama.py provides a pruning example for LLaMA models. The core code is below. Once again, the transformer structure is recorded via `num_heads`, and after pruning, the updated head counts are written back into the corresponding model attributes. Compared with the original script, the accuracy-evaluation code has been removed here:

```python
# Code adapted from
# https://github.com/IST-DASLab/sparsegpt/blob/master/datautils.py
# https://github.com/locuslab/wanda

import os, sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))))

import argparse
import numpy as np
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForCausalLM
from importlib.metadata import version

print('torch', version('torch'))
print('transformers', version('transformers'))
print('accelerate', version('accelerate'))
print('# of gpus: ', torch.cuda.device_count())

def get_llm(model_name, cache_dir='./cache'):
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        cache_dir=cache_dir,
        device_map='auto',
    )
    model.seqlen = model.config.max_position_embeddings
    return model

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--model', type=str, help='LLaMA model')
    parser.add_argument('--seed', type=int, default=0, help='Seed for sampling the calibration data.')
    parser.add_argument('--nsamples', type=int, default=128, help='Number of calibration samples.')
    parser.add_argument('--pruning_ratio', type=float, default=0, help='Sparsity level')
    parser.add_argument('--cache_dir', default='./cache', type=str)
    parser.add_argument('--save', type=str, default=None, help='Path to save results.')
    parser.add_argument('--save_model', type=str, default=None, help='Path to save the pruned model.')
    parser.add_argument('--eval_zero_shot', action='store_true')
    args = parser.parse_args()

    # Setting seeds for reproducibility
    np.random.seed(args.seed)
    torch.random.manual_seed(args.seed)

    model_name = args.model.split('/')[-1]
    print(f'loading llm model {args.model}')
    model = get_llm(args.model, args.cache_dir)
    model.eval()
    tokenizer = AutoTokenizer.from_pretrained(args.model, use_fast=False)

    device = torch.device('cuda:0')
    if '30b' in args.model or '65b' in args.model:
        # for 30b and 65b we use device_map to load onto multiple A6000 GPUs
        device = model.hf_device_map['lm_head']
    print('use device ', device)

    ##############
    # Pruning
    ##############
    print('----------------- Before Pruning -----------------')
    print(model)
    text = 'Hello world.'
    inputs = torch.tensor(tokenizer.encode(text)).unsqueeze(0).to(model.device)

    import torch_pruning as tp
    num_heads = {}
    for name, m in model.named_modules():
        if name.endswith('self_attn'):
            num_heads[m.q_proj] = model.config.num_attention_heads
            num_heads[m.k_proj] = model.config.num_key_value_heads
            num_heads[m.v_proj] = model.config.num_key_value_heads
    head_pruning_ratio = args.pruning_ratio
    hidden_size_pruning_ratio = args.pruning_ratio

    pruner = tp.pruner.MagnitudePruner(
        model,
        example_inputs=inputs,
        importance=tp.importance.GroupNormImportance(),
        global_pruning=False,
        pruning_ratio=hidden_size_pruning_ratio,
        ignored_layers=[model.lm_head],
        num_heads=num_heads,
        prune_num_heads=True,
        prune_head_dims=False,
        head_pruning_ratio=head_pruning_ratio,
    )
    pruner.step()

    # Update model attributes
    num_heads = int((1 - head_pruning_ratio) * model.config.num_attention_heads)
    num_key_value_heads = int((1 - head_pruning_ratio) * model.config.num_key_value_heads)
    model.config.num_attention_heads = num_heads
    model.config.num_key_value_heads = num_key_value_heads
    for name, m in model.named_modules():
        if name.endswith('self_attn'):
            m.hidden_size = m.q_proj.out_features
            m.num_heads = num_heads
            m.num_key_value_heads = num_key_value_heads
        elif name.endswith('mlp'):
            model.config.intermediate_size = m.gate_proj.out_features

    print('----------------- After Pruning -----------------')
    print(model)
    # ppl_test = eval_ppl(args, model, tokenizer, device)
    # print(f'wikitext perplexity {ppl_test}')

    if args.save_model:
        model.save_pretrained(args.save_model)
        tokenizer.save_pretrained(args.save_model)

if __name__ == '__main__':
    main()
```

3.3 Pruning Object-Detection Models

Torch-Pruning ships pruning examples for YOLOv8, YOLOv7, and YOLOv5, and for YOLOv8 it also provides a post-pruning training strategy. The main tricks are turning otherwise non-prunable layers into prunable ones and handling the C2f module, whose split operation is unfriendly to pruning indices. A follow-up post will cover YOLOv8 pruning in detail.

4. Other Notes

4.1 Importance Criteria in the Pruner

torch_pruning/pruner/importance.py defines many importance criteria for pruning:

```python
__all__ = [
    # Base Class
    "Importance",

    # Basic Group Importance
    "GroupNormImportance",
    "GroupTaylorImportance",
    "GroupHessianImportance",

    # Aliases
    "MagnitudeImportance",
    "TaylorImportance",
    "HessianImportance",

    # Other Importance
    "BNScaleImportance",
    "LAMPImportance",
    "RandomImportance",
]
```

Overall, TaylorImportance works best; you can simply use it throughout. A sketch of how the criteria are swapped follows.
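As a minimal sketch (reusing the resnet18 setup from section 2): the choice of criterion is the only line that changes, except that gradient-based criteria such as TaylorImportance additionally need a backward pass before `pruner.step()`, as in section 2.1:

```python
import torch
from torchvision.models import resnet18
import torch_pruning as tp

model = resnet18()
example_inputs = torch.randn(1, 3, 224, 224)

# Pick one criterion; the rest of the pipeline stays identical.
imp = tp.importance.TaylorImportance()          # generally the best default, per the note above
# imp = tp.importance.MagnitudeImportance(p=2)  # gradient-free alternative
# imp = tp.importance.BNScaleImportance()       # uses BatchNorm scaling factors

pruner = tp.pruner.MagnitudePruner(model, example_inputs, importance=imp, pruning_ratio=0.5)

if isinstance(imp, tp.importance.TaylorImportance):
    loss = model(example_inputs).sum()  # dummy loss; its gradients feed the importance estimate
    loss.backward()
pruner.step()
print(model)
```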
4.2 Impact of Pruning on Accuracy

The post https://blog.csdn.net/a486259/article/details/140407147?spm=1001.2014.3001.5501 essentially establishes that pruning 50% of the channels has no measurable impact on model accuracy. Here, the data from the papers behind Torch-Pruning is re-examined to gauge how the speedup gained by pruning affects accuracy.

Taking the data of "DepGraph: Towards Any Structural Pruning" as an example, pruning supports up to a 6x speedup while preserving model performance.

Taking the data of "LLM-Pruner: On the Structural Pruning of Large Language Models" as an example, pruning 10% of the parameters with the Vector importance criterion has little effect on zero-shot accuracy, and Figure 4 of the paper further suggests that, with the right pruning method, removing 50% of the parameters still has limited impact on performance.

The data of "Structural Pruning for Diffusion Models" likewise shows that pruning roughly 50% of the channels has little effect on the results.
