
Integrating MobileNetV4 into YOLOv8

1 Introduction

MobileNetV4 is the latest model in the MobileNet family, optimized for mobile devices. It introduces the Universal Inverted Bottleneck (UIB) and the Mobile MQA attention mechanism, improving both inference speed and efficiency. Combined with a refined neural architecture search (NAS) and distillation, MobileNetV4 achieves efficient and accurate performance across a range of hardware platforms, reaching 87% top-1 accuracy on ImageNet-1K with a runtime of 3.8 ms on the Pixel 8 EdgeTPU.

For a detailed description of MobileNetV4, see the paper: [2404.10518] MobileNetV4 - Universal Models for the Mobile Ecosystem.

This post explains how to integrate MobileNetV4 into YOLOv8. Without further ado, on to the code.

2 Integrating MobileNetV4 into YOLOv8

2.1 Step 1

First, locate the directory ultralytics/nn/modules and create a file named MobileNetV4.py there (you can name it however you like), then copy the MobileNetV4 core code into it:

from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F

__all__ = ['MobileNetV4ConvLarge', 'MobileNetV4ConvSmall', 'MobileNetV4ConvMedium',
           'MobileNetV4HybridMedium', 'MobileNetV4HybridLarge']

MNV4ConvSmall_BLOCK_SPECS = {
    "conv0": {
        "block_name": "convbn",
        "num_blocks": 1,
        "block_specs": [[3, 32, 3, 2]]
    },
    "layer1": {
        "block_name": "convbn",
        "num_blocks": 2,
        "block_specs": [[32, 32, 3, 2], [32, 32, 1, 1]]
    },
    "layer2": {
        "block_name": "convbn",
        "num_blocks": 2,
        "block_specs": [[32, 96, 3, 2], [96, 64, 1, 1]]
    },
    "layer3": {
        "block_name": "uib",
        "num_blocks": 6,
        "block_specs": [
            [64, 96, 5, 5, True, 2, 3],
            [96, 96, 0, 3, True, 1, 2],
            [96, 96, 0, 3, True, 1, 2],
            [96, 96, 0, 3, True, 1, 2],
            [96, 96, 0, 3, True, 1, 2],
            [96, 96, 3, 0, True, 1, 4],
        ]
    },
    "layer4": {
        "block_name": "uib",
        "num_blocks": 6,
        "block_specs": [
            [96, 128, 3, 3, True, 2, 6],
            [128, 128, 5, 5, True, 1, 4],
            [128, 128, 0, 5, True, 1, 4],
            [128, 128, 0, 5, True, 1, 3],
            [128, 128, 0, 3, True, 1, 4],
            [128, 128, 0, 3, True, 1, 4],
        ]
    },
    "layer5": {
        "block_name": "convbn",
        "num_blocks": 2,
        "block_specs": [[128, 960, 1, 1], [960, 1280, 1, 1]]
    }
}

MNV4ConvMedium_BLOCK_SPECS = {
    "conv0": {
        "block_name": "convbn",
        "num_blocks": 1,
        "block_specs": [[3, 32, 3, 2]]
    },
    "layer1": {
        "block_name": "fused_ib",
        "num_blocks": 1,
        "block_specs": [[32, 48, 2, 4.0, True]]
    },
    "layer2": {
        "block_name": "uib",
        "num_blocks": 2,
        "block_specs": [
            [48, 80, 3, 5, True, 2, 4],
            [80, 80, 3, 3, True, 1, 2],
        ]
    },
    "layer3": {
        "block_name": "uib",
        "num_blocks": 8,
        "block_specs": [
            [80, 160, 3, 5, True, 2, 6],
            [160, 160, 3, 3, True, 1, 4],
            [160, 160, 3, 3, True, 1, 4],
            [160, 160, 3, 5, True, 1, 4],
            [160, 160, 3, 3, True, 1, 4],
            [160, 160, 3, 0, True, 1, 4],
            [160, 160, 0, 0, True, 1, 2],
            [160, 160, 3, 0, True, 1, 4],
        ]
    },
    "layer4": {
        "block_name": "uib",
        "num_blocks": 11,
        "block_specs": [
            [160, 256, 5, 5, True, 2, 6],
            [256, 256, 5, 5, True, 1, 4],
            [256, 256, 3, 5, True, 1, 4],
            [256, 256, 3, 5, True, 1, 4],
            [256, 256, 0, 0, True, 1, 4],
            [256, 256, 3, 0, True, 1, 4],
            [256, 256, 3, 5, True, 1, 2],
            [256, 256, 5, 5, True, 1, 4],
            [256, 256, 0, 0, True, 1, 4],
            [256, 256, 0, 0, True, 1, 4],
            [256, 256, 5, 0, True, 1, 2],
        ]
    },
    "layer5": {
        "block_name": "convbn",
        "num_blocks": 2,
        "block_specs": [[256, 960, 1, 1], [960, 1280, 1, 1]]
    }
}

MNV4ConvLarge_BLOCK_SPECS = {
    "conv0": {
        "block_name": "convbn",
        "num_blocks": 1,
        "block_specs": [[3, 24, 3, 2]]
    },
    "layer1": {
        "block_name": "fused_ib",
        "num_blocks": 1,
        "block_specs": [[24, 48, 2, 4.0, True]]
    },
    "layer2": {
        "block_name": "uib",
        "num_blocks": 2,
        "block_specs": [
            [48, 96, 3, 5, True, 2, 4],
            [96, 96, 3, 3, True, 1, 4],
        ]
    },
    "layer3": {
        "block_name": "uib",
        "num_blocks": 11,
        "block_specs": [
            [96, 192, 3, 5, True, 2, 4],
            [192, 192, 3, 3, True, 1, 4],
            [192, 192, 3, 3, True, 1, 4],
            [192, 192, 3, 3, True, 1, 4],
            [192, 192, 3, 5, True, 1, 4],
            [192, 192, 5, 3, True, 1, 4],
            [192, 192, 5, 3, True, 1, 4],
            [192, 192, 5, 3, True, 1, 4],
            [192, 192, 5, 3, True, 1, 4],
            [192, 192, 5, 3, True, 1, 4],
            [192, 192, 3, 0, True, 1, 4],
        ]
    },
    "layer4": {
        "block_name": "uib",
        "num_blocks": 13,
        "block_specs": [
            [192, 512, 5, 5, True, 2, 4],
            [512, 512, 5, 5, True, 1, 4],
            [512, 512, 5, 5, True, 1, 4],
            [512, 512, 5, 5, True, 1, 4],
            [512, 512, 5, 0, True, 1, 4],
            [512, 512, 5, 3, True, 1, 4],
            [512, 512, 5, 0, True, 1, 4],
            [512, 512, 5, 0, True, 1, 4],
            [512, 512, 5, 3, True, 1, 4],
            [512, 512, 5, 5, True, 1, 4],
            [512, 512, 5, 0, True, 1, 4],
            [512, 512, 5, 0, True, 1, 4],
            [512, 512, 5, 0, True, 1, 4],
        ]
    },
    "layer5": {
        "block_name": "convbn",
        "num_blocks": 2,
        "block_specs": [[512, 960, 1, 1], [960, 1280, 1, 1]]
    }
}

def mhsa(num_heads, key_dim, value_dim, px):
    if px == 24:
        kv_strides = 2
    elif px == 12:
        kv_strides = 1
    query_h_strides = 1
    query_w_strides = 1
    use_layer_scale = True
    use_multi_query = True
    use_residual = True
    return [
        num_heads, key_dim, value_dim, query_h_strides, query_w_strides, kv_strides,
        use_layer_scale, use_multi_query, use_residual
    ]

MNV4HybridConvMedium_BLOCK_SPECS = {
    "conv0": {
        "block_name": "convbn",
        "num_blocks": 1,
        "block_specs": [[3, 32, 3, 2]]
    },
    "layer1": {
        "block_name": "fused_ib",
        "num_blocks": 1,
        "block_specs": [[32, 48, 2, 4.0, True]]
    },
    "layer2": {
        "block_name": "uib",
        "num_blocks": 2,
        "block_specs": [
            [48, 80, 3, 5, True, 2, 4],
            [80, 80, 3, 3, True, 1, 2],
        ]
    },
    "layer3": {
        "block_name": "uib",
        "num_blocks": 8,
        "block_specs": [
            [80, 160, 3, 5, True, 2, 6],
            [160, 160, 0, 0, True, 1, 2],
            [160, 160, 3, 3, True, 1, 4],
            [160, 160, 3, 5, True, 1, 4, mhsa(4, 64, 64, 24)],
            [160, 160, 3, 3, True, 1, 4, mhsa(4, 64, 64, 24)],
            [160, 160, 3, 0, True, 1, 4, mhsa(4, 64, 64, 24)],
            [160, 160, 3, 3, True, 1, 4, mhsa(4, 64, 64, 24)],
            [160, 160, 3, 0, True, 1, 4],
        ]
    },
    "layer4": {
        "block_name": "uib",
        "num_blocks": 12,
        "block_specs": [
            [160, 256, 5, 5, True, 2, 6],
            [256, 256, 5, 5, True, 1, 4],
            [256, 256, 3, 5, True, 1, 4],
            [256, 256, 3, 5, True, 1, 4],
            [256, 256, 0, 0, True, 1, 2],
            [256, 256, 3, 5, True, 1, 2],
            [256, 256, 0, 0, True, 1, 2],
            [256, 256, 0, 0, True, 1, 4, mhsa(4, 64, 64, 12)],
            [256, 256, 3, 0, True, 1, 4, mhsa(4, 64, 64, 12)],
            [256, 256, 5, 5, True, 1, 4, mhsa(4, 64, 64, 12)],
            [256, 256, 5, 0, True, 1, 4, mhsa(4, 64, 64, 12)],
            [256, 256, 5, 0, True, 1, 4],
        ]
    },
    "layer5": {
        "block_name": "convbn",
        "num_blocks": 2,
        "block_specs": [[256, 960, 1, 1], [960, 1280, 1, 1]]
    }
}

MNV4HybridConvLarge_BLOCK_SPECS = {
    "conv0": {
        "block_name": "convbn",
        "num_blocks": 1,
        "block_specs": [[3, 24, 3, 2]]
    },
    "layer1": {
        "block_name": "fused_ib",
        "num_blocks": 1,
        "block_specs": [[24, 48, 2, 4.0, True]]
    },
    "layer2": {
        "block_name": "uib",
        "num_blocks": 2,
        "block_specs": [
            [48, 96, 3, 5, True, 2, 4],
            [96, 96, 3, 3, True, 1, 4],
        ]
    },
    "layer3": {
        "block_name": "uib",
        "num_blocks": 11,
        "block_specs": [
            [96, 192, 3, 5, True, 2, 4],
            [192, 192, 3, 3, True, 1, 4],
            [192, 192, 3, 3, True, 1, 4],
            [192, 192, 3, 3, True, 1, 4],
            [192, 192, 3, 5, True, 1, 4],
            [192, 192, 5, 3, True, 1, 4],
            [192, 192, 5, 3, True, 1, 4, mhsa(8, 48, 48, 24)],
            [192, 192, 5, 3, True, 1, 4, mhsa(8, 48, 48, 24)],
            [192, 192, 5, 3, True, 1, 4, mhsa(8, 48, 48, 24)],
            [192, 192, 5, 3, True, 1, 4, mhsa(8, 48, 48, 24)],
            [192, 192, 3, 0, True, 1, 4],
        ]
    },
    "layer4": {
        "block_name": "uib",
        "num_blocks": 14,
        "block_specs": [
            [192, 512, 5, 5, True, 2, 4],
            [512, 512, 5, 5, True, 1, 4],
            [512, 512, 5, 5, True, 1, 4],
            [512, 512, 5, 5, True, 1, 4],
            [512, 512, 5, 0, True, 1, 4],
            [512, 512, 5, 3, True, 1, 4],
            [512, 512, 5, 0, True, 1, 4],
            [512, 512, 5, 0, True, 1, 4],
            [512, 512, 5, 3, True, 1, 4],
            [512, 512, 5, 5, True, 1, 4, mhsa(8, 64, 64, 12)],
            [512, 512, 5, 0, True, 1, 4, mhsa(8, 64, 64, 12)],
            [512, 512, 5, 0, True, 1, 4, mhsa(8, 64, 64, 12)],
            [512, 512, 5, 0, True, 1, 4, mhsa(8, 64, 64, 12)],
            [512, 512, 5, 0, True, 1, 4],
        ]
    },
    "layer5": {
        "block_name": "convbn",
        "num_blocks": 2,
        "block_specs": [[512, 960, 1, 1], [960, 1280, 1, 1]]
    }
}

MODEL_SPECS = {
    "MobileNetV4ConvSmall": MNV4ConvSmall_BLOCK_SPECS,
    "MobileNetV4ConvMedium": MNV4ConvMedium_BLOCK_SPECS,
    "MobileNetV4ConvLarge": MNV4ConvLarge_BLOCK_SPECS,
    "MobileNetV4HybridMedium": MNV4HybridConvMedium_BLOCK_SPECS,
    "MobileNetV4HybridLarge": MNV4HybridConvLarge_BLOCK_SPECS,
}

def make_divisible(
        value: float,
        divisor: int,
        min_value: Optional[float] = None,
        round_down_protect: bool = True,
) -> int:
    """
    This function is copied from here:
    https://github.com/tensorflow/models/blob/master/official/vision/modeling/layers/nn_layers.py

    This is to ensure that all layers have channels that are divisible by 8.

    Args:
        value: A float of original value.
        divisor: An int of the divisor that needs to be checked upon.
        min_value: A float of minimum value threshold.
        round_down_protect: A bool indicating whether rounding down more than 10% will be allowed.

    Returns:
        The adjusted value in int that is divisible against divisor.
    """
    if min_value is None:
        min_value = divisor
    new_value = max(min_value, int(value + divisor / 2) // divisor * divisor)
    # Make sure that round down does not go down by more than 10%.
    if round_down_protect and new_value < 0.9 * value:
        new_value += divisor
    return int(new_value)

def conv_2d(inp, oup, kernel_size=3, stride=1, groups=1, bias=False, norm=True, act=True):
    conv = nn.Sequential()
    padding = (kernel_size - 1) // 2
    conv.add_module('conv', nn.Conv2d(inp, oup, kernel_size, stride, padding, bias=bias, groups=groups))
    if norm:
        conv.add_module('BatchNorm2d', nn.BatchNorm2d(oup))
    if act:
        conv.add_module('Activation', nn.ReLU6())
    return conv

class InvertedResidual(nn.Module):
    def __init__(self, inp, oup, stride, expand_ratio, act=False, squeeze_excitation=False):
        super(InvertedResidual, self).__init__()
        self.stride = stride
        assert stride in [1, 2]
        hidden_dim = int(round(inp * expand_ratio))
        self.block = nn.Sequential()
        if expand_ratio != 1:
            self.block.add_module('exp_1x1', conv_2d(inp, hidden_dim, kernel_size=3, stride=stride))
        if squeeze_excitation:
            self.block.add_module('conv_3x3',
                                  conv_2d(hidden_dim, hidden_dim, kernel_size=3, stride=stride, groups=hidden_dim))
        self.block.add_module('red_1x1', conv_2d(hidden_dim, oup, kernel_size=1, stride=1, act=act))
        self.use_res_connect = self.stride == 1 and inp == oup

    def forward(self, x):
        if self.use_res_connect:
            return x + self.block(x)
        else:
            return self.block(x)

class UniversalInvertedBottleneckBlock(nn.Module):
    def __init__(self,
                 inp,
                 oup,
                 start_dw_kernel_size,
                 middle_dw_kernel_size,
                 middle_dw_downsample,
                 stride,
                 expand_ratio):
        """An inverted bottleneck block with optional depthwises.
        Referenced from here:
        https://github.com/tensorflow/models/blob/master/official/vision/modeling/layers/nn_blocks.py
        """
        super().__init__()
        # Starting depthwise conv.
        self.start_dw_kernel_size = start_dw_kernel_size
        if self.start_dw_kernel_size:
            stride_ = stride if not middle_dw_downsample else 1
            self._start_dw_ = conv_2d(inp, inp, kernel_size=start_dw_kernel_size, stride=stride_,
                                      groups=inp, act=False)
        # Expansion with 1x1 convs.
        expand_filters = make_divisible(inp * expand_ratio, 8)
        self._expand_conv = conv_2d(inp, expand_filters, kernel_size=1)
        # Middle depthwise conv.
        self.middle_dw_kernel_size = middle_dw_kernel_size
        if self.middle_dw_kernel_size:
            stride_ = stride if middle_dw_downsample else 1
            self._middle_dw = conv_2d(expand_filters, expand_filters, kernel_size=middle_dw_kernel_size,
                                      stride=stride_, groups=expand_filters)
        # Projection with 1x1 convs.
        self._proj_conv = conv_2d(expand_filters, oup, kernel_size=1, stride=1, act=False)
        # Ending depthwise conv: not used in this implementation.
        # _end_dw_kernel_size = 0
        # self._end_dw = conv_2d(oup, oup, kernel_size=_end_dw_kernel_size, stride=stride, groups=inp, act=False)

    def forward(self, x):
        if self.start_dw_kernel_size:
            x = self._start_dw_(x)
        x = self._expand_conv(x)
        if self.middle_dw_kernel_size:
            x = self._middle_dw(x)
        x = self._proj_conv(x)
        return x

class MultiQueryAttentionLayerWithDownSampling(nn.Module):
    def __init__(self, inp, num_heads, key_dim, value_dim, query_h_strides, query_w_strides, kv_strides,
                 dw_kernel_size=3, dropout=0.0):
        """Multi Query Attention with spatial downsampling.
        Referenced from here:
        https://github.com/tensorflow/models/blob/master/official/vision/modeling/layers/nn_blocks.py

        3 parameters are introduced for the spatial downsampling:
        1. kv_strides: downsampling factor on Key and Values only.
        2. query_h_strides: vertical strides on Query only.
        3. query_w_strides: horizontal strides on Query only.

        This is an optimized version.
        1. Projections in Attention are explicitly written out as 1x1 Conv2D.
        2. Additional reshapes are introduced to bring up to a 3x speed up.
        """
        super().__init__()
        self.num_heads = num_heads
        self.key_dim = key_dim
        self.value_dim = value_dim
        self.query_h_strides = query_h_strides
        self.query_w_strides = query_w_strides
        self.kv_strides = kv_strides
        self.dw_kernel_size = dw_kernel_size
        self.head_dim = key_dim // num_heads

        if self.query_h_strides > 1 or self.query_w_strides > 1:
            self._query_downsampling_norm = nn.BatchNorm2d(inp)
        self._query_proj = conv_2d(inp, num_heads * key_dim, 1, 1, norm=False, act=False)

        if self.kv_strides > 1:
            self._key_dw_conv = conv_2d(inp, inp, dw_kernel_size, kv_strides, groups=inp, norm=True, act=False)
            self._value_dw_conv = conv_2d(inp, inp, dw_kernel_size, kv_strides, groups=inp, norm=True, act=False)
        self._key_proj = conv_2d(inp, key_dim, 1, 1, norm=False, act=False)
        self._value_proj = conv_2d(inp, key_dim, 1, 1, norm=False, act=False)
        self._output_proj = conv_2d(num_heads * key_dim, inp, 1, 1, norm=False, act=False)
        self.dropout = nn.Dropout(p=dropout)

    def forward(self, x):
        batch_size, seq_length, _, _ = x.size()
        if self.query_h_strides > 1 or self.query_w_strides > 1:
            # Downsample the query spatially before projecting it.
            q = F.avg_pool2d(x, (self.query_h_strides, self.query_w_strides))
            q = self._query_downsampling_norm(q)
            q = self._query_proj(q)
        else:
            q = self._query_proj(x)
        px = q.size(2)
        q = q.view(batch_size, self.num_heads, -1, self.key_dim)  # [batch_size, num_heads, seq_length, key_dim]

        if self.kv_strides > 1:
            k = self._key_dw_conv(x)
            k = self._key_proj(k)
            v = self._value_dw_conv(x)
            v = self._value_proj(v)
        else:
            k = self._key_proj(x)
            v = self._value_proj(x)
        k = k.view(batch_size, self.key_dim, -1)  # [batch_size, key_dim, seq_length]
        v = v.view(batch_size, -1, self.key_dim)  # [batch_size, seq_length, key_dim]

        # calculate attention score
        attn_score = torch.matmul(q, k) / (self.head_dim ** 0.5)
        attn_score = self.dropout(attn_score)
        attn_score = F.softmax(attn_score, dim=-1)

        context = torch.matmul(attn_score, v)
        context = context.view(batch_size, self.num_heads * self.key_dim, px, px)
        output = self._output_proj(context)
        return output

class MNV4LayerScale(nn.Module):
    def __init__(self, init_value):
        """LayerScale as introduced in CaiT: https://arxiv.org/abs/2103.17239
        Referenced from here:
        https://github.com/tensorflow/models/blob/master/official/vision/modeling/layers/nn_blocks.py

        As used in MobileNetV4.

        Attributes:
            init_value (float): value to initialize the diagonal matrix of LayerScale.
        """
        super().__init__()
        self.init_value = init_value

    def forward(self, x):
        gamma = self.init_value * torch.ones(x.size(-1), dtype=x.dtype, device=x.device)
        return x * gamma

class MultiHeadSelfAttentionBlock(nn.Module):
    def __init__(self,
                 inp,
                 num_heads,
                 key_dim,
                 value_dim,
                 query_h_strides,
                 query_w_strides,
                 kv_strides,
                 use_layer_scale,
                 use_multi_query,
                 use_residual=True):
        super().__init__()
        self.query_h_strides = query_h_strides
        self.query_w_strides = query_w_strides
        self.kv_strides = kv_strides
        self.use_layer_scale = use_layer_scale
        self.use_multi_query = use_multi_query
        self.use_residual = use_residual
        self._input_norm = nn.BatchNorm2d(inp)
        if self.use_multi_query:
            self.multi_query_attention = MultiQueryAttentionLayerWithDownSampling(
                inp, num_heads, key_dim, value_dim, query_h_strides, query_w_strides, kv_strides)
        else:
            self.multi_head_attention = nn.MultiheadAttention(inp, num_heads, kdim=key_dim)

        if self.use_layer_scale:
            self.layer_scale_init_value = 1e-5
            self.layer_scale = MNV4LayerScale(self.layer_scale_init_value)

    def forward(self, x):
        # Not using CPE, skipped
        # input norm
        shortcut = x
        x = self._input_norm(x)
        # multi query
        if self.use_multi_query:
            x = self.multi_query_attention(x)
        else:
            x = self.multi_head_attention(x, x, x)[0]
        # layer scale
        if self.use_layer_scale:
            x = self.layer_scale(x)
        # use residual
        if self.use_residual:
            x = x + shortcut
        return x

def build_blocks(layer_spec):
    if not layer_spec.get('block_name'):
        return nn.Sequential()
    block_names = layer_spec['block_name']
    layers = nn.Sequential()
    if block_names == "convbn":
        schema_ = ['inp', 'oup', 'kernel_size', 'stride']
        for i in range(layer_spec['num_blocks']):
            args = dict(zip(schema_, layer_spec['block_specs'][i]))
            layers.add_module(f"convbn_{i}", conv_2d(**args))
    elif block_names == "uib":
        schema_ = ['inp', 'oup', 'start_dw_kernel_size', 'middle_dw_kernel_size', 'middle_dw_downsample',
                   'stride', 'expand_ratio', 'msha']
        for i in range(layer_spec['num_blocks']):
            args = dict(zip(schema_, layer_spec['block_specs'][i]))
            msha = args.pop("msha") if "msha" in args else 0
            layers.add_module(f"uib_{i}", UniversalInvertedBottleneckBlock(**args))
            if msha:
                msha_schema_ = ['inp', 'num_heads', 'key_dim', 'value_dim', 'query_h_strides',
                                'query_w_strides', 'kv_strides', 'use_layer_scale', 'use_multi_query',
                                'use_residual']
                args = dict(zip(msha_schema_, [args['oup']] + msha))
                layers.add_module(f"msha_{i}", MultiHeadSelfAttentionBlock(**args))
    elif block_names == "fused_ib":
        schema_ = ['inp', 'oup', 'stride', 'expand_ratio', 'act']
        for i in range(layer_spec['num_blocks']):
            args = dict(zip(schema_, layer_spec['block_specs'][i]))
            layers.add_module(f"fused_ib_{i}", InvertedResidual(**args))
    else:
        raise NotImplementedError
    return layers

class MobileNetV4(nn.Module):
    def __init__(self, model):
        # MobileNetV4ConvSmall  MobileNetV4ConvMedium  MobileNetV4ConvLarge
        # MobileNetV4HybridMedium  MobileNetV4HybridLarge
        """Params to initiate MobileNetV4
        Args:
            model: supports 5 types of models as indicated in
            https://github.com/tensorflow/models/blob/master/official/vision/modeling/backbones/mobilenet.py
        """
        super().__init__()
        assert model in MODEL_SPECS.keys()
        self.model = model
        self.spec = MODEL_SPECS[self.model]

        # conv0
        self.conv0 = build_blocks(self.spec['conv0'])
        # layer1
        self.layer1 = build_blocks(self.spec['layer1'])
        # layer2
        self.layer2 = build_blocks(self.spec['layer2'])
        # layer3
        self.layer3 = build_blocks(self.spec['layer3'])
        # layer4
        self.layer4 = build_blocks(self.spec['layer4'])
        # layer5
        self.layer5 = build_blocks(self.spec['layer5'])
        self.width_list = [i.size(1) for i in self.forward(torch.randn(1, 3, 640, 640))]

    def forward(self, x):
        x0 = self.conv0(x)
        x1 = self.layer1(x0)
        x2 = self.layer2(x1)
        x3 = self.layer3(x2)
        x4 = self.layer4(x3)
        # x5 = self.layer5(x4)
        # x5 = nn.functional.adaptive_avg_pool2d(x5, 1)
        return [x1, x2, x3, x4]

def MobileNetV4ConvSmall():
    model = MobileNetV4('MobileNetV4ConvSmall')
    return model

def MobileNetV4ConvMedium():
    model = MobileNetV4('MobileNetV4ConvMedium')
    return model

def MobileNetV4ConvLarge():
    model = MobileNetV4('MobileNetV4ConvLarge')
    return model

def MobileNetV4HybridMedium():
    model = MobileNetV4('MobileNetV4HybridMedium')
    return model

def MobileNetV4HybridLarge():
    model = MobileNetV4('MobileNetV4HybridLarge')
    return model
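
Before touching YOLOv8 itself, it is worth a quick standalone sanity check of the file. This snippet is my addition (not in the original post); appended under a main guard it is harmless and simply prints the channel widths that parse_model will later read from width_list:

if __name__ == '__main__':
    # Quick standalone check of MobileNetV4.py (my addition, not part of the original post).
    model = MobileNetV4ConvSmall()
    print(model.width_list)  # channel widths of the four returned feature maps
    outs = model(torch.randn(1, 3, 640, 640))
    for o in outs:
        print(o.shape)  # strides 4, 8, 16, 32 relative to the 640x640 input

For the small variant the printed width_list should be [32, 64, 96, 128], matching the conv0/layer1 through layer4 output channels in the spec tables above.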
2.2 Step 2

Import the module in tasks.py (ultralytics/nn/tasks.py). The original post shows this step as a screenshot; the import is sketched below.
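
A sketch of the import, assuming the file from step 2.1 was saved as ultralytics/nn/modules/MobileNetV4.py (adjust the path if you named it differently; exact placement in the original post is only given as an image):

# In ultralytics/nn/tasks.py, near the other module imports:
from ultralytics.nn.modules.MobileNetV4 import (MobileNetV4ConvSmall, MobileNetV4ConvMedium,
                                                MobileNetV4ConvLarge, MobileNetV4HybridMedium,
                                                MobileNetV4HybridLarge)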
2.3 Step 3

Add the two lines shown in the annotated screenshot; a plausible reconstruction follows below.
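
The annotated screenshot is not reproduced in this copy, so the exact two lines are uncertain. The code added in steps 2.4 and 2.6 reads a backbone flag before anything else assigns it, so one of the two lines almost certainly initializes that flag near the top of parse_model. Treat this as an assumption, not the verbatim original:

# In parse_model() in tasks.py, before the layer-building loop (assumed reconstruction):
backbone = False  # flipped to True in step 2.4 when a MobileNetV4 backbone is built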

2.4 Step 4

In tasks.py, add the following branch at the position shown in the screenshot (inside parse_model's module-dispatch chain):

        elif m in {MobileNetV4ConvLarge, MobileNetV4ConvSmall,
                   MobileNetV4ConvMedium, MobileNetV4HybridMedium, MobileNetV4HybridLarge}:
            m = m(*args)
            c2 = m.width_list
            backbone = True

2.5 Step 5

In tasks.py, add the code shown in the annotated screenshot at the indicated position; a reconstruction is sketched below.
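
The step 2.5 screenshot is likewise missing. The sketch below is a reconstruction of mine, written only so that it stays consistent with the names (m_, t, m.np, backbone) used by the given step 2.4 and 2.6 code and with the hasattr(m, 'backbone') check in step 2.7 — adjust it to match your ultralytics version:

        # Assumed reconstruction of the module-instantiation part of parse_model:
        if backbone:
            m_ = m  # the MobileNetV4 module was already instantiated in step 2.4
            m_.backbone = True  # flag checked by _predict_once in step 2.7
        else:
            m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args)  # module
        t = str(m)[8:-2].replace('__main__.', '')  # module type
        m.np = sum(x.numel() for x in m_.parameters())  # number of params
        m_.i, m_.f, m_.type = i + 4 if backbone else i, f, t  # index, 'from' index, type

The i + 4 offset exists because the single backbone entry stands in for what used to be layers 0-4 of a standard YOLOv8 backbone (see the index comments in the YAML at the end).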

2.6 Step 6

The code at the position shown in tasks.py needs to be replaced. Replace it with the following (the tail of parse_model's layer loop; note the i + 4 offset and the channel bookkeeping for the backbone's multi-scale outputs):

        if verbose:
            LOGGER.info(f'{i:>3}{str(f):>20}{n_:>3}{m.np:10.0f}  {t:<45}{str(args):<30}')  # print
        save.extend(x % (i + 4 if backbone else i) for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist
        layers.append(m_)
        if i == 0:
            ch = []
        if isinstance(c2, list):
            ch.extend(c2)
            if len(c2) != 5:
                ch.insert(0, 0)  # pad so the shifted layer indices map onto the right channels
        else:
            ch.append(c2)

2.7 Step 7

This change goes in BaseModel's _predict_once method, in the earlier part of tasks.py. Replace the existing method with the following:

    def _predict_once(self, x, profile=False, visualize=False, embed=None):
        y, dt, embeddings = [], [], []  # outputs
        for m in self.model:
            if m.f != -1:  # if not from previous layer
                x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers
            if profile:
                self._profile_one_layer(m, x, dt)
            if hasattr(m, 'backbone'):
                x = m(x)
                if len(x) != 5:  # 0 - 5
                    x.insert(0, None)
                for index, i in enumerate(x):
                    if index in self.save:
                        y.append(i)
                    else:
                        y.append(None)
                x = x[-1]  # pass the last output on to the next layer
            else:
                x = m(x)  # run
                y.append(x if m.i in self.save else None)  # save output
            if visualize:
                feature_visualization(x, m.type, m.i, save_dir=visualize)
            if embed and m.i in embed:
                embeddings.append(nn.functional.adaptive_avg_pool2d(x, (1, 1)).squeeze(-1).squeeze(-1))  # flatten
                if m.i == max(embed):
                    return torch.unbind(torch.cat(embeddings, 1), dim=0)
        return x

2.8 Step 8

Comment out the code shown in the screenshot (in ultralytics/utils/torch_utils.py) and modify it as shown in the second screenshot.

2.9 Step 9

Comment out the code shown in the screenshot (in tasks.py) and change s to 640; a sketch follows.
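
The step 2.9 screenshot is missing from this copy, but ultralytics derives the detection stride in DetectionModel.__init__ from a dummy forward pass whose size is set by s, so the edit is most likely the one sketched here (exact line numbers vary by version; treat the location as an assumption):

# In DetectionModel.__init__ in ultralytics/nn/tasks.py:
# s = 256  # 2x min stride   <- original line, commented out
s = 640  # match the 640x640 dummy input MobileNetV4 uses to build width_list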
That completes the changes. There are a lot of small details here, so be careful and edit exactly as described: with this many steps, a mistake can be very hard to track down.

Copy the YAML file below and run it.

The YAML file:

# Ultralytics YOLO, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768]  # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512]  # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]  # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  # MobileNetV4ConvSmall, MobileNetV4ConvMedium, MobileNetV4ConvLarge,
  # MobileNetV4HybridMedium and MobileNetV4HybridLarge are all supported;
  # swap the module name on the next line to use a different variant.
  - [-1, 1, MobileNetV4ConvSmall, []]  # 4
  - [-1, 1, SPPF, [1024, 5]]  # 5

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]  # 6
  - [[-1, 3], 1, Concat, [1]]  # 7 cat backbone P4
  - [-1, 3, C2f, [512]]  # 8
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]  # 9
  - [[-1, 2], 1, Concat, [1]]  # 10 cat backbone P3
  - [-1, 3, C2f, [256]]  # 11 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]  # 12
  - [[-1, 8], 1, Concat, [1]]  # 13 cat head P4
  - [-1, 3, C2f, [512]]  # 14 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]  # 15
  - [[-1, 5], 1, Concat, [1]]  # 16 cat head P5
  - [-1, 3, C2f, [1024]]  # 17 (P5/32-large)
  - [[11, 14, 17], 1, Detect, [nc]]  # Detect(P3, P4, P5)
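
With all the edits in place, training goes through the normal ultralytics API. A minimal usage sketch — the YAML file name and the dataset below are placeholders of mine, not from the original post:

from ultralytics import YOLO

model = YOLO('yolov8-mobilenetv4.yaml')  # build YOLOv8 from the YAML above, saved locally
model.train(data='coco128.yaml', epochs=100, imgsz=640)  # imgsz matches the s = 640 change in step 2.9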
There are quite a few edits this time, so double-check everything carefully. And that's it — you've made it to the end; if this helped, leave a like!