1. Introduction
The Real-Time DEtection TRansformer (RT-DETR) is a real-time, end-to-end object detector that eliminates the speed and accuracy penalties introduced by Non-Maximum Suppression (NMS). By combining an efficient hybrid encoder with uncertainty-minimal query selection, RT-DETR improves speed while preserving accuracy, meeting real-time detection requirements. Experiments show that RT-DETR reaches 53.1 AP on the COCO dataset and runs at 108 FPS on a T4 GPU, outperforming previous state-of-the-art YOLO detectors. In addition, RT-DETR supports flexible speed tuning, making it suitable for a range of deployment scenarios.
AIFI (Attention-based Intra-scale Feature Interaction) is a component proposed in the paper "DETRs Beat YOLOs on Real-time Object Detection" and is part of the Real-Time Detection Transformer (RT-DETR) model. It targets the high computational cost caused by multi-scale feature interaction in the Transformer encoder, enabling more efficient real-time object detection. A detailed analysis of AIFI's strengths and weaknesses follows.
Advantages:
1. Lower computational cost. AIFI performs intra-scale self-attention only on the highest-level feature map (S5), which significantly reduces the computational burden (see the rough illustration after this list). High-level features are rich in semantic information and are crucial for object localization and recognition, whereas self-attention over low-level features can introduce redundancy and confusion, so this targeted treatment improves efficiency.
2. Better feature utilization. By focusing self-attention on high-level features, AIFI captures the relationships between conceptual entities more effectively, which benefits the subsequent modules that perform precise object localization and classification.
3. Reduced memory footprint. Restricting self-attention to a single-scale feature map helps lower memory usage, which matters for real-time applications running on resource-constrained hardware.
4. A better accuracy-speed trade-off. In the paper's ablation, the variant DS5, which applies intra-scale interaction only on S5, is about 35% faster than the baseline variant while also improving accuracy by 0.4% AP. This suggests AIFI has a positive effect on model performance while remaining real-time.
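To make the first point concrete, here is a rough back-of-the-envelope illustration of my own (the numbers are assumed, not taken from the paper): for a 640x640 input with strides 8/16/32, the S3/S4/S5 maps hold 80x80, 40x40 and 20x20 tokens, and since self-attention cost grows roughly with the square of the token count, attending over S5 alone involves far fewer query-key pairs than attending over all scales concatenated.

tokens = {"S3": 80 * 80, "S4": 40 * 40, "S5": 20 * 20}  # assumed 640x640 input, strides 8/16/32

# Self-attention scales roughly with N^2 query-key pairs.
all_scale_pairs = sum(tokens.values()) ** 2   # attention over S3+S4+S5 concatenated
s5_only_pairs = tokens["S5"] ** 2             # AIFI-style attention on S5 only

print(f"all scales: {all_scale_pairs:,} pairs")               # 70,560,000
print(f"S5 only:    {s5_only_pairs:,} pairs")                 # 160,000
print(f"reduction:  {all_scale_pairs / s5_only_pairs:.0f}x")  # ~441x fewer pairs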
Drawbacks:
1. No low-level intra-scale interaction. AIFI skips intra-scale interaction on the lower-level feature maps, so the model may lose some information grounded in local detail. Low-level features typically carry edges, textures and other basic cues that also matter for fine-grained recognition; although in practice this omission did not degrade performance, it could in principle limit the model in certain scenarios.
2. Design complexity. AIFI modifies the Transformer encoder structure, including how features from different scales are integrated and how the scope of self-attention is constrained. This adds architectural complexity and can make the model harder to understand and debug.
3. Sensitivity to parameter tuning. Any design that changes feature selection and interaction can be sensitive in its overall effect, so AIFI's settings (e.g., the number of Transformer layers and the feature-fusion scheme) need careful tuning to reach the best performance.
In summary, AIFI is an innovative design within RT-DETR: through a carefully designed feature-interaction strategy, it effectively improves both the speed and accuracy of real-time object detection. Despite some potential limitations, such as the simplified treatment of low-level features, the experimental results demonstrate its overall effectiveness and practicality.

2. Preparation
First, create a new file named aifi.py under the models folder of the YOLOv5/v7 project and paste in the following code:
import torch
import torch.nn as nn


class TransformerEncoderLayer(nn.Module):
    """Defines a single layer of the transformer encoder."""

    def __init__(self, c1, cm=2048, num_heads=8, dropout=0.0, act=nn.GELU(), normalize_before=False):
        """Initialize the TransformerEncoderLayer with specified parameters."""
        super().__init__()
        self.ma = nn.MultiheadAttention(c1, num_heads, dropout=dropout, batch_first=True)
        # Implementation of Feedforward model
        self.fc1 = nn.Linear(c1, cm)
        self.fc2 = nn.Linear(cm, c1)

        self.norm1 = nn.LayerNorm(c1)
        self.norm2 = nn.LayerNorm(c1)
        self.dropout = nn.Dropout(dropout)
        self.dropout1 = nn.Dropout(dropout)
        self.dropout2 = nn.Dropout(dropout)

        self.act = act
        self.normalize_before = normalize_before

    @staticmethod
    def with_pos_embed(tensor, pos=None):
        """Add position embeddings to the tensor if provided."""
        return tensor if pos is None else tensor + pos

    def forward_post(self, src, src_mask=None, src_key_padding_mask=None, pos=None):
        """Performs forward pass with post-normalization."""
        q = k = self.with_pos_embed(src, pos)
        src2 = self.ma(q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0]
        src = src + self.dropout1(src2)
        src = self.norm1(src)
        src2 = self.fc2(self.dropout(self.act(self.fc1(src))))
        src = src + self.dropout2(src2)
        return self.norm2(src)

    def forward_pre(self, src, src_mask=None, src_key_padding_mask=None, pos=None):
        """Performs forward pass with pre-normalization."""
        src2 = self.norm1(src)
        q = k = self.with_pos_embed(src2, pos)
        src2 = self.ma(q, k, value=src2, attn_mask=src_mask, key_padding_mask=src_key_padding_mask)[0]
        src = src + self.dropout1(src2)
        src2 = self.norm2(src)
        src2 = self.fc2(self.dropout(self.act(self.fc1(src2))))
        return src + self.dropout2(src2)

    def forward(self, src, src_mask=None, src_key_padding_mask=None, pos=None):
        """Forward propagates the input through the encoder module."""
        if self.normalize_before:
            return self.forward_pre(src, src_mask, src_key_padding_mask, pos)
        return self.forward_post(src, src_mask, src_key_padding_mask, pos)


class AIFI(TransformerEncoderLayer):
    """Defines the AIFI transformer layer."""

    def __init__(self, c1, cm=2048, num_heads=8, dropout=0, act=nn.GELU(), normalize_before=False):
        """Initialize the AIFI instance with specified parameters."""
        super().__init__(c1, cm, num_heads, dropout, act, normalize_before)

    def forward(self, x):
        """Forward pass for the AIFI transformer layer."""
        c, h, w = x.shape[1:]
        pos_embed = self.build_2d_sincos_position_embedding(w, h, c)
        # Flatten [B, C, H, W] to [B, HxW, C]
        x = super().forward(x.flatten(2).permute(0, 2, 1), pos=pos_embed.to(device=x.device, dtype=x.dtype))
        return x.permute(0, 2, 1).view([-1, c, h, w]).contiguous()

    @staticmethod
    def build_2d_sincos_position_embedding(w, h, embed_dim=256, temperature=10000.0):
        """Builds 2D sine-cosine position embedding."""
        grid_w = torch.arange(int(w), dtype=torch.float32)
        grid_h = torch.arange(int(h), dtype=torch.float32)
        grid_w, grid_h = torch.meshgrid(grid_w, grid_h, indexing="ij")
        assert embed_dim % 4 == 0, "Embed dimension must be divisible by 4 for 2D sin-cos position embedding"
        pos_dim = embed_dim // 4
        omega = torch.arange(pos_dim, dtype=torch.float32) / pos_dim
        omega = 1.0 / (temperature ** omega)

        out_w = grid_w.flatten()[..., None] @ omega[None]
        out_h = grid_h.flatten()[..., None] @ omega[None]

        return torch.cat([torch.sin(out_w), torch.cos(out_w), torch.sin(out_h), torch.cos(out_h)], 1)[None]
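After saving aifi.py, a quick sanity check of my own (not part of the original tutorial) is to confirm that the module preserves the [B, C, H, W] shape, which is what lets it drop into a YOLO backbone or head; run it from the project root so that models.aifi resolves:

import torch
from models.aifi import AIFI

m = AIFI(256)                    # c1 must equal the incoming channel count
x = torch.randn(1, 256, 20, 20)  # e.g. a P5 feature map for a 640x640 input
y = m(x)
print(y.shape)                   # expected: torch.Size([1, 256, 20, 20])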
Next, add the following import at the top of models/yolo.py in the YOLOv5/v7 project:
from models.aifi import AIFI
Then search for def parse_model(d, ch):
Locate the chain of module-dispatch branches inside it and add the following branch (an illustration of what it does follows below):

    elif m is AIFI:
        args = [ch[f], *args]
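For orientation, the snippet below is a standalone illustration of what this branch does (AIFIStub and the hard-coded values are mine, purely for demonstration): parse_model keeps a running list ch of per-layer output channels, and the new elif simply prepends ch[f], so the YAML args [256] become AIFI(512, 256) when the preceding layer outputs 512 channels.

class AIFIStub:
    """Stand-in for AIFI so this illustration runs without the real module."""
    def __init__(self, c1, cm=2048, num_heads=8):
        self.c1, self.cm, self.num_heads = c1, cm, num_heads

ch = [512]           # pretend the layer this module reads from outputs 512 channels
f, args = -1, [256]  # the YAML row [-1, 1, AIFI, [256]]

args = [ch[f], *args]  # the new branch: prepend the input channel count
print(args)            # [512, 256]
m = AIFIStub(*args)    # -> AIFI(c1=512, cm=256), matching the "models.aifi.AIFI [512, 256]" entry below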
3. Modifying YOLOv7-tiny

After completing Section 2, create a new file named yolov7-tiny-aifi.yaml under the models folder of the YOLOv7 project and paste in the following code.
# parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# yolov7-tiny backbone
backbone:
  # [from, number, module, args] c2, k=1, s=1, p=None, g=1, act=True
  [[-1, 1, Conv, [32, 3, 2, None, 1, nn.LeakyReLU(0.1)]],  # 0-P1/2
   [-1, 1, Conv, [64, 3, 2, None, 1, nn.LeakyReLU(0.1)]],  # 1-P2/4

   [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 7

   [-1, 1, MP, []],  # 8-P3/8
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 14

   [-1, 1, MP, []],  # 15-P4/16
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 21

   [-1, 1, MP, []],  # 22-P5/32
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [512, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 28
  ]

# yolov7-tiny head
head:
  [[-1, 1, AIFI, [256]],  # 29
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [21, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # route backbone P4
   [[-1, -2], 1, Concat, [1]],

   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 39

   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [14, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # route backbone P3
   [[-1, -2], 1, Concat, [1]],

   [-1, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [32, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [32, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 49

   [-1, 1, Conv, [128, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, 39], 1, Concat, [1]],

   [-1, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [64, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [64, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 57

   [-1, 1, Conv, [256, 3, 2, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, 29], 1, Concat, [1]],

   [-1, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-2, 1, Conv, [128, 1, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [-1, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [[-1, -2, -3, -4], 1, Concat, [1]],
   [-1, 1, Conv, [256, 1, 1, None, 1, nn.LeakyReLU(0.1)]],  # 65

   [49, 1, Conv, [128, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [57, 1, Conv, [256, 3, 1, None, 1, nn.LeakyReLU(0.1)]],
   [65, 1, Conv, [512, 3, 1, None, 1, nn.LeakyReLU(0.1)]],

   [[66, 67, 68], 1, IDetect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

Expected printout when the model is built:

   from  n  params  module  arguments
  0  -1  1  928  models.common.Conv  [3, 32, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
  1  -1  1  18560  models.common.Conv  [32, 64, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
  2  -1  1  2112  models.common.Conv  [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
  3  -2  1  2112  models.common.Conv  [64, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
  4  -1  1  9280  models.common.Conv  [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
  5  -1  1  9280  models.common.Conv  [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
  6  [-1, -2, -3, -4]  1  0  models.common.Concat  [1]
  7  -1  1  8320  models.common.Conv  [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
  8  -1  1  0  models.common.MP  []
  9  -1  1  4224  models.common.Conv  [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 10  -2  1  4224  models.common.Conv  [64, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 11  -1  1  36992  models.common.Conv  [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 12  -1  1  36992  models.common.Conv  [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 13  [-1, -2, -3, -4]  1  0  models.common.Concat  [1]
 14  -1  1  33024  models.common.Conv  [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 15  -1  1  0  models.common.MP  []
 16  -1  1  16640  models.common.Conv  [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 17  -2  1  16640  models.common.Conv  [128, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 18  -1  1  147712  models.common.Conv  [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 19  -1  1  147712  models.common.Conv  [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 20  [-1, -2, -3, -4]  1  0  models.common.Concat  [1]
 21  -1  1  131584  models.common.Conv  [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 22  -1  1  0  models.common.MP  []
 23  -1  1  66048  models.common.Conv  [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 24  -2  1  66048  models.common.Conv  [256, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 25  -1  1  590336  models.common.Conv  [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 26  -1  1  590336  models.common.Conv  [256, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 27  [-1, -2, -3, -4]  1  0  models.common.Concat  [1]
 28  -1  1  525312  models.common.Conv  [1024, 512, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 29  -1  1  1315584  models.aifi.AIFI  [512, 256]
 30  -1  1  65792  models.common.Conv  [512, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 31  -1  1  0  torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
 32  21  1  33024  models.common.Conv  [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 33  [-1, -2]  1  0  models.common.Concat  [1]
 34  -1  1  16512  models.common.Conv  [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 35  -2  1  16512  models.common.Conv  [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 36  -1  1  36992  models.common.Conv  [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 37  -1  1  36992  models.common.Conv  [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 38  [-1, -2, -3, -4]  1  0  models.common.Concat  [1]
 39  -1  1  33024  models.common.Conv  [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 40  -1  1  8320  models.common.Conv  [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 41  -1  1  0  torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
 42  14  1  8320  models.common.Conv  [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 43  [-1, -2]  1  0  models.common.Concat  [1]
 44  -1  1  4160  models.common.Conv  [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 45  -2  1  4160  models.common.Conv  [128, 32, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 46  -1  1  9280  models.common.Conv  [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 47  -1  1  9280  models.common.Conv  [32, 32, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 48  [-1, -2, -3, -4]  1  0  models.common.Concat  [1]
 49  -1  1  8320  models.common.Conv  [128, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 50  -1  1  73984  models.common.Conv  [64, 128, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
 51  [-1, 39]  1  0  models.common.Concat  [1]
 52  -1  1  16512  models.common.Conv  [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 53  -2  1  16512  models.common.Conv  [256, 64, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 54  -1  1  36992  models.common.Conv  [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 55  -1  1  36992  models.common.Conv  [64, 64, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 56  [-1, -2, -3, -4]  1  0  models.common.Concat  [1]
 57  -1  1  33024  models.common.Conv  [256, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 58  -1  1  295424  models.common.Conv  [128, 256, 3, 2, None, 1, LeakyReLU(negative_slope=0.1)]
 59  [-1, 29]  1  0  models.common.Concat  [1]
 60  -1  1  98560  models.common.Conv  [768, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 61  -2  1  98560  models.common.Conv  [768, 128, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 62  -1  1  147712  models.common.Conv  [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 63  -1  1  147712  models.common.Conv  [128, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 64  [-1, -2, -3, -4]  1  0  models.common.Concat  [1]
 65  -1  1  131584  models.common.Conv  [512, 256, 1, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 66  49  1  73984  models.common.Conv  [64, 128, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 67  57  1  295424  models.common.Conv  [128, 256, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 68  65  1  1180672  models.common.Conv  [256, 512, 3, 1, None, 1, LeakyReLU(negative_slope=0.1)]
 69  [66, 67, 68]  1  17132  models.yolo.IDetect  [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

Model Summary: 250 layers, 6771468 parameters, 6771468 gradients, 12.9 GFLOPS
If running it prints the text above, the modification to YOLOv7-tiny was successful.
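To produce that printout I simply build the model from the new YAML. The following is a minimal sketch assuming the stock Model class in the YOLOv7 repo's models/yolo.py (running models/yolo.py directly with its --cfg argument should give an equivalent table, but exact flags and logging behavior can vary between repo versions):

import torch
from models.yolo import Model  # YOLOv7's model builder; the layer table is logged while parsing

model = Model('models/yolov7-tiny-aifi.yaml', ch=3)  # nc and anchors are read from the YAML
model.eval()
_ = model(torch.zeros(1, 3, 640, 640))               # dummy forward pass to confirm the shapes line up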
4. Modifying YOLOv5s
After completing Section 2, create a new file named yolov5s-aifi.yaml under the models folder of the YOLOv5 project and paste in the following code.
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.50 # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, Conv, [512, 1]],  # 9
   [-1, 1, AIFI, [1024, 8]],  # 10
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

Expected printout when the model is built:

   from  n  params  module  arguments
  0  -1  1  3520  models.common.Conv  [3, 32, 6, 2, 2]
  1  -1  1  18560  models.common.Conv  [32, 64, 3, 2]
  2  -1  1  18816  models.common.C3  [64, 64, 1]
  3  -1  1  73984  models.common.Conv  [64, 128, 3, 2]
  4  -1  2  115712  models.common.C3  [128, 128, 2]
  5  -1  1  295424  models.common.Conv  [128, 256, 3, 2]
  6  -1  3  625152  models.common.C3  [256, 256, 3]
  7  -1  1  1180672  models.common.Conv  [256, 512, 3, 2]
  8  -1  1  1182720  models.common.C3  [512, 512, 1]
  9  -1  1  131584  models.common.Conv  [512, 256, 1]
 10  -1  1  789760  models.aifi.AIFI  [256, 1024, 8]
 11  -1  1  0  torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
 12  [-1, 6]  1  0  models.common.Concat  [1]
 13  -1  1  361984  models.common.C3  [512, 256, 1, False]
 14  -1  1  33024  models.common.Conv  [256, 128, 1, 1]
 15  -1  1  0  torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
 16  [-1, 4]  1  0  models.common.Concat  [1]
 17  -1  1  90880  models.common.C3  [256, 128, 1, False]
 18  -1  1  147712  models.common.Conv  [128, 128, 3, 2]
 19  [-1, 14]  1  0  models.common.Concat  [1]
 20  -1  1  296448  models.common.C3  [256, 256, 1, False]
 21  -1  1  590336  models.common.Conv  [256, 256, 3, 2]
 22  [-1, 10]  1  0  models.common.Concat  [1]
 23  -1  1  1182720  models.common.C3  [512, 512, 1, False]
 24  [17, 20, 23]  1  16182  models.yolo.Detect  [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]

Model Summary: 271 layers, 7155190 parameters, 7155190 gradients, 15.8 GFLOPs
If running it prints the text above, the modification to YOLOv5s was successful.
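One detail worth noting when adapting this YAML (my own reading of parse_model, so treat the numbers as illustrative): the AIFI row [-1, 1, AIFI, [1024, 8]] is expanded with the actual channel count of layer 9, which for yolov5s is scaled by width_multiple = 0.5, so the printed entry becomes models.aifi.AIFI [256, 1024, 8].

width_multiple = 0.50                  # yolov5s scaling factor
conv9_out = int(512 * width_multiple)  # layer 9 Conv [512, 1] -> 256 actual output channels
                                       # (the repo uses make_divisible, which also gives 256 here)
aifi_args = [conv9_out, 1024, 8]       # parse_model prepends ch[f]; [1024, 8] is passed through unscaled
print(aifi_args)                       # [256, 1024, 8] -> AIFI(c1=256, cm=1024, num_heads=8)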
5. Modifying YOLOv5n
After completing Section 2, create a new file named yolov5n-aifi.yaml under the models folder of the YOLOv5 project and paste in the following code.
nc: 80 # number of classes
depth_multiple: 0.33 # model depth multiple
width_multiple: 0.25 # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, Conv, [512, 1]],  # 9
   [-1, 1, AIFI, [1024, 8]],  # 10
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

Expected printout when the model is built:

   from  n  params  module  arguments
  0  -1  1  1760  models.common.Conv  [3, 16, 6, 2, 2]
  1  -1  1  4672  models.common.Conv  [16, 32, 3, 2]
  2  -1  1  4800  models.common.C3  [32, 32, 1]
  3  -1  1  18560  models.common.Conv  [32, 64, 3, 2]
  4  -1  2  29184  models.common.C3  [64, 64, 2]
  5  -1  1  73984  models.common.Conv  [64, 128, 3, 2]
  6  -1  3  156928  models.common.C3  [128, 128, 3]
  7  -1  1  295424  models.common.Conv  [128, 256, 3, 2]
  8  -1  1  296448  models.common.C3  [256, 256, 1]
  9  -1  1  33024  models.common.Conv  [256, 128, 1]
 10  -1  1  329856  models.aifi.AIFI  [128, 1024, 8]
 11  -1  1  0  torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
 12  [-1, 6]  1  0  models.common.Concat  [1]
 13  -1  1  90880  models.common.C3  [256, 128, 1, False]
 14  -1  1  8320  models.common.Conv  [128, 64, 1, 1]
 15  -1  1  0  torch.nn.modules.upsampling.Upsample  [None, 2, 'nearest']
 16  [-1, 4]  1  0  models.common.Concat  [1]
 17  -1  1  22912  models.common.C3  [128, 64, 1, False]
 18  -1  1  36992  models.common.Conv  [64, 64, 3, 2]
 19  [-1, 14]  1  0  models.common.Concat  [1]
 20  -1  1  74496  models.common.C3  [128, 128, 1, False]
 21  -1  1  147712  models.common.Conv  [128, 128, 3, 2]
 22  [-1, 10]  1  0  models.common.Concat  [1]
 23  -1  1  296448  models.common.C3  [256, 256, 1, False]
 24  [17, 20, 23]  1  8118  models.yolo.Detect  [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [64, 128, 256]]

Model Summary: 271 layers, 1930518 parameters, 1930518 gradients, 4.3 GFLOPs
If running it prints the text above, the modification to YOLOv5n was successful.
More articles are on the way, with a focus on staying concise and accurate. Feel free to follow me so we can explore these topics together.