1. Introduction

Object detection is a core task in computer vision, widely used in autonomous driving, security surveillance, industrial inspection, and many other fields. YOLOv5, a recent member of the YOLO family, performs well in practice thanks to its efficiency and accuracy. However, as application scenarios become more complex, a plain convolutional network can hit a performance ceiling when dealing with cluttered backgrounds and multi-scale targets. Adding an attention mechanism is an effective way to address this. This article explains in detail how to introduce the SE (Squeeze-and-Excitation) attention mechanism into YOLOv5 by modifying the model configuration file and the code, and compares training results before and after the change.

Compared with earlier YOLO versions, YOLOv5 improves the model structure, training strategy, and data augmentation, which noticeably raises both performance and efficiency. Its main features include:

- Optimized model structure: a redesigned backbone and path-aggregation neck improve feature extraction and fusion.
- Data augmentation: methods such as Mosaic and MixUp improve the model's generalization.
- Improved training strategy: refined label assignment and other training tricks raise training efficiency and detection accuracy.

Still, as task complexity grows, an unmodified convolutional network does not always handle multi-scale targets well; introducing SE attention offers a new way to improve detection accuracy.

2. YOLOv5 and the SE attention mechanism

2.1 YOLOv5 overview

YOLOv5 is widely used for object detection because of its efficiency and accuracy. Its main structural components are:

- Backbone: extracts features from the input image.
- Neck: fuses features to improve multi-scale perception.
- Head: produces predictions from the extracted features.

2.2 SE attention overview

SE (Squeeze-and-Excitation) is a lightweight attention module that explicitly models the dependencies between channels to strengthen the network's representational power. It consists of two key parts:

- Squeeze: global average pooling compresses the spatial dimensions of the feature map to 1×1, producing a per-channel descriptor.
- Excitation: two fully connected layers followed by a Sigmoid produce per-channel weights that recalibrate the channel responses of the feature map.

With SE modules in place, YOLOv5 can focus on informative feature channels and suppress less useful ones, which improves overall performance.

3. Implementing SE attention in YOLOv5

3.1 Modifying the model configuration file

To bring SE attention into YOLOv5, three files need changes: common.py, yolo.py, and the model configuration derived from yolov5s.yaml. Create a new configuration file, yolov5_se.yaml, that inserts an SE module into the backbone. Note that once the SE layer is inserted, the layer indices referenced later in the file (the Concat "from" indices and the Detect inputs) must be renumbered; the SE module can also be placed elsewhere, for example before the P3 output in the head. The modified configuration is shown below:

```yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]       # P3/8
  - [30,61, 62,45, 59,119]      # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],    # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],    # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],    # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],   # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SENet, [1024]],        # 9 SEAttention
   [-1, 1, SPPF, [1024, 5]],      # 10
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 14

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   # [-1, 1, SENet, [1024]],   # SEAttention (optional position)
   [-1, 3, C3, [256, False]],  # 18 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   # [-1, 1, SENet, [1024]],    # SEAttention (optional position)
   [-1, 3, C3, [512, False]],   # 21 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   # [-1, 1, SENet, [1024]],    # SEAttention (optional position)
   [-1, 3, C3, [1024, False]],  # 24 (P5/32-large)

   [[18, 21, 24], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
```

3.2 Implementing the SE attention module

Next, the SE module has to be implemented in YOLOv5's common.py. The implementation is shown below:

```python
import torch
import torch.nn as nn


class SENet(nn.Module):
    # c1: input channels; the remaining arguments only exist so the constructor matches
    # the (c1, c2, n, shortcut, g, e) signature that parse_model expects
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        super(SENet, self).__init__()
        # squeeze: global average pooling -> c x 1 x 1
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        # excitation: two fully connected layers with a reduction ratio of 16
        self.l1 = nn.Linear(c1, c1 // 16, bias=False)
        self.relu = nn.ReLU(inplace=True)
        self.l2 = nn.Linear(c1 // 16, c1, bias=False)
        self.sig = nn.Sigmoid()

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avgpool(x).view(b, c)
        y = self.l1(y)
        y = self.relu(y)
        y = self.l2(y)
        y = self.sig(y)
        y = y.view(b, c, 1, 1)
        # re-weight the input feature map channel by channel
        return x * y.expand_as(x)
```
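Before wiring the module into the network, a quick standalone check can confirm that it behaves as expected. The snippet below is a minimal sketch (the channel count and feature-map size are arbitrary choices, not values from the original post); it only verifies that SE re-weights channels without changing the tensor shape:

```python
import torch

# Hypothetical smoke test for the SENet module defined above.
se = SENet(256, 256)              # c1 = c2 = 256; c2/n/shortcut/g/e are unused by SENet itself
x = torch.randn(2, 256, 40, 40)   # batch of 2, 256 channels, 40x40 feature map
y = se(x)

assert y.shape == x.shape         # channel re-weighting keeps the channel/spatial shape
print(y.shape)                    # torch.Size([2, 256, 40, 40])
```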
3.3 Using the SE attention module

To build SE layers in YOLOv5's backbone and neck from the YAML file, the parse_model function in models/yolo.py has to recognise the new SENet class. The modified parse_model is shown below:

```python
def parse_model(d, ch):  # model_dict, input_channels(3)
    # Parse a YOLOv5 model.yaml dictionary
    LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10}  {'module':<40}{'arguments':<30}")
    anchors, nc, gd, gw, act = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'], d.get('activation')
    if act:
        Conv.default_act = eval(act)  # redefine default activation, i.e. Conv.default_act = nn.SiLU()
        LOGGER.info(f"{colorstr('activation:')} {act}")  # print
    na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors  # number of anchors
    no = na * (nc + 5)  # number of outputs = anchors * (classes + 5)

    layers, save, c2 = [], [], ch[-1]  # layers, savelist, ch out
    for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']):  # from, number, module, args
        m = eval(m) if isinstance(m, str) else m  # eval strings
        for j, a in enumerate(args):
            with contextlib.suppress(NameError):
                args[j] = eval(a) if isinstance(a, str) else a  # eval strings

        n = n_ = max(round(n * gd), 1) if n > 1 else n  # depth gain
        if m in {
                Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
                BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x,
                SENet}:  # SENet added so SE layers get the same channel scaling as other modules
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, 8)

            args = [c1, c2, *args[1:]]
            # SENet added here as well; the other attention classes (CBAM*, CA*, ECANet, GAMNet)
            # come from the author's extended common.py -- remove any that you have not defined
            if m in {BottleneckCSP, C3, C3TR, C3Ghost, C3x, CBAMBottleneck, CABottleneck, CBAMC3, SENet,
                     CANet, CAC3, CBAM, ECANet, GAMNet}:
                args.insert(2, n)  # number of repeats
                n = 1
        elif m is nn.BatchNorm2d:
            args = [ch[f]]
        elif m is Concat:
            c2 = sum(ch[x] for x in f)
        # TODO: channel, gw, gd
        elif m in {Detect, Segment}:
            args.append([ch[x] for x in f])
            if isinstance(args[1], int):  # number of anchors
                args[1] = [list(range(args[1] * 2))] * len(f)
            if m is Segment:
                args[3] = make_divisible(args[3] * gw, 8)
        elif m is Contract:
            c2 = ch[f] * args[0] ** 2
        elif m is Expand:
            c2 = ch[f] // args[0] ** 2
        else:
            c2 = ch[f]

        m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args)  # module
        t = str(m)[8:-2].replace('__main__.', '')  # module type
        np = sum(x.numel() for x in m_.parameters())  # number params
        m_.i, m_.f, m_.type, m_.np = i, f, t, np  # attach index, 'from' index, type, number params
        LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f}  {t:<40}{str(args):<30}')  # print
        save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist
        layers.append(m_)
        if i == 0:
            ch = []
        ch.append(c2)
    return nn.Sequential(*layers), sorted(save)
```

3.4 Model training and performance comparison

With the configuration file and code changes in place, the model can be trained. The COCO dataset or any custom dataset can be used for training and validation; here a custom dataset, camel_elephant_training, containing only the two classes camel and elephant, was trained for 100 epochs.

After training, the AP (average precision) metric can be used to compare the model with and without SE attention. In general, adding the SE module makes YOLOv5 perform noticeably better on cluttered backgrounds and multi-scale targets.

Because of limited time, only 100 epochs were run, whereas 150-200 epochs would normally be recommended; judging from the train/obj_loss curve, the loss still had room to fall.

3.5 Training steps

1. Set up the training environment and make sure YOLOv5 and its dependencies are installed.
2. Download the COCO dataset, or prepare a custom dataset.
3. Modify the training launch so that it loads the new configuration file yolov5_se.yaml.
4. Start training and monitor the loss and accuracy during the run (a typical launch is sketched below).
5. After training, evaluate the model on the validation set.
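As referenced in step 4, here is a minimal sketch of how training could be launched from Python, assuming the stock YOLOv5 repository layout and its train.run() helper (the programmatic equivalent of `python train.py ...`). The dataset file camel_elephant.yaml and the batch size are illustrative placeholders, not values from the original post; only the 100-epoch count comes from the post:

```python
# Minimal sketch: launch YOLOv5 training with the SE-enabled config.
# Assumes it is run from the YOLOv5 repo root.
import train  # YOLOv5's train.py

train.run(
    data='data/camel_elephant.yaml',  # hypothetical dataset description (2 classes)
    cfg='models/yolov5_se.yaml',      # the SE-enabled model config from section 3.1
    weights='yolov5s.pt',             # start from pretrained YOLOv5s weights
    epochs=100,                       # the post trained for 100 epochs
    imgsz=640,                        # training image size
    batch_size=16,                    # illustrative; adjust to GPU memory
)
```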
3.6 Model deployment

The trained weights can be converted to the .onnx format with export.py and then deployed on virtually any platform. The ONNX-specific part of export.py is reproduced below; the full script exposes the same run()/parse_opt() command-line interface for TorchScript, OpenVINO, TensorRT, CoreML, TensorFlow (SavedModel, GraphDef, TFLite, Edge TPU, TF.js) and PaddlePaddle exports as well.

```python
@try_export  # decorator defined earlier in export.py; adds timing and error handling around the export
def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX:')):
    # YOLOv5 ONNX export
    check_requirements('onnx')
    import onnx

    LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
    f = file.with_suffix('.onnx')

    output_names = ['output0', 'output1'] if isinstance(model, SegmentationModel) else ['output0']
    if dynamic:
        dynamic = {'images': {0: 'batch', 2: 'height', 3: 'width'}}  # shape(1,3,640,640)
        if isinstance(model, SegmentationModel):
            dynamic['output0'] = {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)
            dynamic['output1'] = {0: 'batch', 2: 'mask_height', 3: 'mask_width'}  # shape(1,32,160,160)
        elif isinstance(model, DetectionModel):
            dynamic['output0'] = {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)

    torch.onnx.export(
        model.cpu() if dynamic else model,  # --dynamic only compatible with cpu
        im.cpu() if dynamic else im,
        f,
        verbose=False,
        opset_version=opset,
        do_constant_folding=True,
        input_names=['images'],
        output_names=output_names,
        dynamic_axes=dynamic or None)

    # Checks
    model_onnx = onnx.load(f)  # load onnx model
    onnx.checker.check_model(model_onnx)  # check onnx model

    # Metadata
    d = {'stride': int(max(model.stride)), 'names': model.names}
    for k, v in d.items():
        meta = model_onnx.metadata_props.add()
        meta.key, meta.value = k, str(v)
    onnx.save(model_onnx, f)

    # Simplify
    if simplify:
        try:
            cuda = torch.cuda.is_available()
            check_requirements(('onnxruntime-gpu' if cuda else 'onnxruntime', 'onnx-simplifier>=0.4.1'))
            import onnxsim

            LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
            model_onnx, check = onnxsim.simplify(model_onnx)
            assert check, 'assert check failed'
            onnx.save(model_onnx, f)
        except Exception as e:
            LOGGER.info(f'{prefix} simplifier failure: {e}')
    return f, model_onnx
```

The resulting .onnx file carries the stride and class-name metadata written above and can be loaded by ONNX Runtime, TensorRT, OpenVINO, or any other ONNX-compatible inference engine.
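For reference, here is a minimal sketch of invoking the conversion from Python through the run() entry point of export.py (the equivalent of `python export.py --weights ... --include onnx` on the command line); the weights path is a placeholder for wherever train.py saved the best checkpoint:

```python
# Minimal sketch: convert trained weights to ONNX with YOLOv5's export.py.
import export  # YOLOv5's export.py

export.run(
    weights='runs/train/exp/weights/best.pt',  # trained SE-YOLOv5 checkpoint (placeholder path)
    imgsz=(640, 640),                          # export resolution (h, w)
    include=('onnx',),                         # only produce a .onnx file
    opset=12,                                  # ONNX opset version
    simplify=True,                             # run onnx-simplifier on the exported graph
)
```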
4. Conclusion

This article showed how to add the SE attention mechanism to YOLOv5: modifying the model configuration file, implementing the module, adjusting parse_model, training, and comparing the results. With SE modules, YOLOv5's detection accuracy on multi-scale targets and cluttered backgrounds improves. Future work can explore other attention mechanisms, such as CBAM and ECA, to push YOLOv5's performance further. Thank you for your support.