RT-DETR Improvement Strategies [Attention Mechanisms] | WACV-2024 D-LKA: Deformable Large Kernel Attention for Large-Scale and Irregularly Shaped Targets

1. Introduction

This post documents how the D-LKA module is used to improve the RT-DETR object detection model. D-LKA combines the broad receptive field of large convolution kernels with the flexibility of deformable convolution, handling complex image information effectively. We integrate it into RT-DETR and build secondary improvements on top of it, so that the network can combine information across multiple dimensions, better highlight important features, and improve feature extraction for targets of different scales and irregular shapes.



2. D-LKA Overview

2.1 Design Motivation

  • Addressing the limitations of plain convolution and attention
    • Conventional CNNs struggle with objects of varying scale in image segmentation: if an object exceeds the receptive field of the corresponding layer, it is under-segmented, while a receptive field much larger than the object lets background information unduly influence the prediction.
    • Vision Transformers (ViT) can aggregate global information through attention, but are limited in modeling local information and have difficulty capturing local texture.
  • Exploiting volumetric context at reasonable computational cost
    • Most current methods process 3D volumetric data slice by slice (pseudo-3D), losing crucial inter-slice information and degrading overall model performance.
    • What is needed is a method that fully understands volumetric context without excessive computational overhead, while also accounting for the frequently deformed lesion shapes found in medical imaging.

2.2 Principle

2.2.1 Large Kernel Attention (LKA)

  • Building a comparable receptive field: a large convolution kernel can be decomposed into a depth-wise convolution, a depth-wise dilated convolution, and a 1×1 convolution. This provides a receptive field similar to self-attention with far fewer parameters and less computation.
  • Parameter and FLOP counts
    • For a 2D input of size $H×W$ with $C$ channels, the depth-wise kernel is $DW=(2d-1)×(2d-1)$ and the depth-wise dilated kernel is $DW\text{-}D=\lceil K/d\rceil×\lceil K/d\rceil$, where $K$ is the target kernel size and $d$ the dilation rate. Parameters: $P(K,d)=C\left(\lceil K/d\rceil^{2}+(2d-1)^{2}+3+C\right)$; FLOPs: $F(K,d)=P(K,d)×H×W$.
    • For a 3D input of size $H×W×D$ with $C$ channels: $P_{3d}(K,d)=C\left(\lceil K/d\rceil^{3}+(2d-1)^{3}+3+C\right)$; FLOPs: $F_{3d}(K,d)=P_{3d}(K,d)×H×W×D$.
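These counting formulas are easy to sanity-check in code. A minimal sketch (pure Python; the values K=21, d=3, C=64 are illustrative, matching the common 21×21 LKA decomposition):

```python
from math import ceil

def lka_params_2d(K: int, d: int, C: int) -> int:
    """P(K, d) = C(ceil(K/d)^2 + (2d-1)^2 + 3 + C): depth-wise dilated conv,
    depth-wise conv, bias terms, and the 1x1 channel-mixing conv."""
    return C * (ceil(K / d) ** 2 + (2 * d - 1) ** 2 + 3 + C)

def lka_flops_2d(K: int, d: int, C: int, H: int, W: int) -> int:
    """F(K, d) = P(K, d) * H * W: FLOPs scale with spatial resolution."""
    return lka_params_2d(K, d, C) * H * W

# Emulating a 21x21 kernel with dilation 3 on a 64-channel feature map:
print(lka_params_2d(21, 3, 64))         # 64 * (49 + 25 + 3 + 64) = 9024
print(lka_flops_2d(21, 3, 64, 80, 80))  # 9024 * 80 * 80 = 57753600
```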

2.2.2 Deformable Large Kernel Attention (D-LKA)

  • Introducing deformable convolution: on top of LKA, deformable convolutions adjust the sampling grid through learned offsets, allowing the kernel to deform freely.
  • Forming an adaptive kernel: an additional convolutional layer learns the deformation from the feature map and produces an offset field. Because the deformation is learned from the features themselves, the result is an adaptive convolution kernel; this flexible kernel shape improves the representation of deformed objects and sharpens object-boundary delineation.


2.3 Structure

2.3.1 2D D-LKA Block

  • Overall structure: consists of LayerNorm, deformable LKA, and a Multi-Layer Perceptron (MLP), with residual connections to ensure effective feature propagation.
  • Computation
    • $x_{1}=\text{D-LKA-Attn}(LN(x_{in}))+x_{in}$
    • $x_{out}=MLP(LN(x_{1}))+x_{1}$
    • $MLP(x)=Conv_{1}(GeLU(Conv_{d}(Conv_{1}(x))))$, where $x_{in}$ is the input feature, $LN$ is layer normalization, $\text{D-LKA-Attn}$ is deformable large kernel attention, $Conv_{d}$ is a depth-wise convolution, $Conv_{1}$ is a 1×1 (linear) layer, and $GeLU$ is the activation function.
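The two residual equations can be sketched as a PyTorch module. This is a shape-level sketch only: the attention here is a placeholder 1×1 conv standing in for D-LKA-Attn (the real attention is implemented in Section 3), and GroupNorm(1, ch) plays the role of channel-wise LayerNorm:

```python
import torch
import torch.nn as nn

class DLKABlock2D(nn.Module):
    """x1 = Attn(LN(x)) + x ; x_out = MLP(LN(x1)) + x1,
    with MLP(x) = Conv1(GeLU(Conv_d(Conv1(x))))."""
    def __init__(self, ch: int, expansion: int = 4):
        super().__init__()
        hidden = ch * expansion
        self.norm1 = nn.GroupNorm(1, ch)  # channel-wise LayerNorm stand-in
        self.attn = nn.Conv2d(ch, ch, 1)  # placeholder for deformable LKA attention
        self.norm2 = nn.GroupNorm(1, ch)
        self.mlp = nn.Sequential(
            nn.Conv2d(ch, hidden, 1),                                # Conv1 (expand)
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),  # Conv_d (depth-wise)
            nn.GELU(),
            nn.Conv2d(hidden, ch, 1),                                # Conv1 (project)
        )

    def forward(self, x):
        x1 = self.attn(self.norm1(x)) + x      # first residual branch
        return self.mlp(self.norm2(x1)) + x1   # second residual branch

out = DLKABlock2D(32)(torch.randn(2, 32, 20, 20))
print(tuple(out.shape))  # (2, 32, 20, 20)
```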

2.3.2 3D D-LKA Block

  • Overall structure: layer normalization followed by D-LKA Attention with a residual connection, then a 3×3×3 convolution layer and a 1×1×1 convolution layer.
  • Computation
    • $x_{1}=\text{D-Attn}(LN(x_{in}))+x_{in}$
    • $x_{out}=Conv_{1}(Conv_{3}(x_{1}))+x_{1}$, where $x_{in}$ is the input feature, $LN$ is layer normalization, $\text{D-Attn}$ is deformable large kernel attention, $Conv_{1}$ is a 1×1×1 (linear) layer, $Conv_{3}$ is a feed-forward block of two convolution layers with an activation function, and $x_{out}$ is the output feature.

2.3.3 Networks Built on the D-LKA Block

  • 2D D-LKA Net
    • Encoder: uses MaxViT as the encoder component for efficient feature extraction. A convolutional stem first reduces the input to $\frac{H}{4}×\frac{W}{4}×C$, followed by four stages of MaxViT blocks, each followed by a downsampling layer.
    • Decoder: four stages of D-LKA layers, each with two D-LKA blocks, followed by a patch-expanding layer that upsamples the resolution and reduces the channel dimension; a final linear layer produces the output.


  • 3D D-LKA Net
    • Encoder-decoder design: a patch-embedding layer reduces the input from $(H×W×D)$ to $(\frac{H}{4}×\frac{W}{4}×\frac{D}{2})$. The encoder has three D-LKA stages of three D-LKA blocks each, with downsampling after every stage; the central bottleneck contains two more groups of D-LKA blocks.
    • The decoder mirrors the encoder, using transposed convolutions to double the feature resolution while reducing the channel count. Each decoder stage uses three D-LKA blocks to promote long-range feature dependencies; the segmentation output is produced by 3×3×3 and 1×1×1 convolution layers, with convolutional skip connections between encoder and decoder.


2.4 Advantages

  • Balances contextual information and local descriptors: the D-LKA module balances context processing with the preservation of local descriptors within the architecture, enabling precise semantic segmentation.
  • Dynamically adaptive receptive field: the receptive field adapts to the data, overcoming the fixed filter masks inherent to conventional convolution.
  • Works on both 2D and 3D data: 2D and 3D versions of the D-LKA Net architecture were developed; the 3D variant of the D-LKA mechanism suits volumetric context and exchanges information seamlessly across volumes.
  • Computationally efficient: relying on the D-LKA concept alone, the architecture achieves strong computational efficiency and excellent results on various segmentation benchmarks, establishing the method as a new state of the art. Although deformable LKA adds parameters and FLOPs, its efficient implementation can even reduce inference time under batched processing.

Paper: https://arxiv.org/pdf/2309.00121.pdf
Code: https://github.com/mindflow-institue/deformableLKA

3. D-LKA Implementation Code

The implementation of D-LKA and the improved modules is as follows:

import torch
import torch.nn as nn
import torchvision
from ultralytics.nn.modules.conv import LightConv
import torch.nn.functional as F
 
class DeformConv(nn.Module):
 
    def __init__(self, in_channels, groups, kernel_size=(3, 3), padding=1, stride=1, dilation=1, bias=True):
        super(DeformConv, self).__init__()
 
        self.offset_net = nn.Conv2d(in_channels=in_channels,
                                    out_channels=2 * kernel_size[0] * kernel_size[1],
                                    kernel_size=kernel_size,
                                    padding=padding,
                                    stride=stride,
                                    dilation=dilation,
                                    bias=True)
 
        self.deform_conv = torchvision.ops.DeformConv2d(in_channels=in_channels,
                                                        out_channels=in_channels,
                                                        kernel_size=kernel_size,
                                                        padding=padding,
                                                        groups=groups,
                                                        stride=stride,
                                                        dilation=dilation,
                                                        bias=False)
 
    def forward(self, x):
        offsets = self.offset_net(x)
        out = self.deform_conv(x, offsets)
        return out

class deformable_LKA(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.conv0 = DeformConv(dim, kernel_size=(5,5), padding=2, groups=dim)
        self.conv_spatial = DeformConv(dim, kernel_size=(7,7), stride=1, padding=9, groups=dim, dilation=3)
        self.conv1 = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        u = x.clone()
        attn = self.conv0(x)
        attn = self.conv_spatial(attn)
        attn = self.conv1(attn)
 
        return u * attn
 
class deformable_LKA_Attention(nn.Module):
    def __init__(self, d_model):
        super().__init__()
 
        self.proj_1 = nn.Conv2d(d_model, d_model, 1)
        self.activation = nn.GELU()
        self.spatial_gating_unit = deformable_LKA(d_model)
        self.proj_2 = nn.Conv2d(d_model, d_model, 1)
 
    def forward(self, x):
        shortcut = x.clone()
        x = self.proj_1(x)
        x = self.activation(x)
        x = self.spatial_gating_unit(x)
        x = self.proj_2(x)
        x = x + shortcut
        return x

def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p

class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""
 
    default_act = nn.SiLU()  # default activation
 
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()
 
    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))
 
    def forward_fuse(self, x):
        """Apply convolution and activation without batch normalization (fused inference)."""
        return self.act(self.conv(x))

class HGBlock_DLKA(nn.Module):
    """
    HG_Block of PPHGNetV2 with deformable LKA attention added after the excitation conv.

    https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/modeling/backbones/hgnet_v2.py
    """

    def __init__(self, c1, cm, c2, k=3, n=6, lightconv=False, shortcut=False, act=nn.ReLU()):
        """Initializes the HG block with n conv blocks, squeeze/excitation convs, and a DLKA attention layer."""
        super().__init__()
        block = LightConv if lightconv else Conv
        self.m = nn.ModuleList(block(c1 if i == 0 else cm, cm, k=k, act=act) for i in range(n))
        self.sc = Conv(c1 + n * cm, c2 // 2, 1, 1, act=act)  # squeeze conv
        self.ec = Conv(c2 // 2, c2, 1, 1, act=act)  # excitation conv
        self.add = shortcut and c1 == c2
        self.cv = deformable_LKA_Attention(c2)
        
    def forward(self, x):
        """Forward pass of a PPHGNetV2 backbone layer."""
        y = [x]
        y.extend(m(y[-1]) for m in self.m)
        y = self.cv(self.ec(self.sc(torch.cat(y, 1))))
        return y + x if self.add else y

class ResNetBlock(nn.Module):
    """ResNet block with standard convolution layers."""

    def __init__(self, c1, c2, s=1, e=4):
        """Initialize convolution with given parameters."""
        super().__init__()
        c3 = e * c2
        self.cv1 = Conv(c1, c2, k=1, s=1, act=True)
        self.cv2 = Conv(c2, c2, k=3, s=s, p=1, act=True)
        self.cv3 = Conv(c2, c3, k=1, act=False)
        self.cv4 = deformable_LKA_Attention(c2)
        self.shortcut = nn.Sequential(Conv(c1, c3, k=1, s=s, act=False)) if s != 1 or c1 != c3 else nn.Identity()

    def forward(self, x):
        """Forward pass through the ResNet block."""
        return F.relu(self.cv3(self.cv4(self.cv2(self.cv1(x)))) + self.shortcut(x))

class ResNetLayer_DLKA(nn.Module):
    """ResNet layer with multiple ResNet blocks."""

    def __init__(self, c1, c2, s=1, is_first=False, n=1, e=4):
        """Initializes the ResNetLayer given arguments."""
        super().__init__()
        self.is_first = is_first

        if self.is_first:
            self.layer = nn.Sequential(
                Conv(c1, c2, k=7, s=2, p=3, act=True), nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
            )
        else:
            blocks = [ResNetBlock(c1, c2, s, e=e)]
            blocks.extend([ResNetBlock(e * c2, c2, 1, e=e) for _ in range(n - 1)])
            self.layer = nn.Sequential(*blocks)

    def forward(self, x):
        """Forward pass through the ResNet layer."""
        return self.layer(x)


4. Improved Modules

4.1 Improvement 1⭐

Improvement method: integrate the DLKA module into HGBlock (see Section 5 for the installation steps).

The first improvement modifies the HGBlock module in RT-DETR by inserting DLKA into it.

The modified code is as follows:

The HGBlock module is extended with the DLKA module and renamed HGBlock_DLKA:

class HGBlock_DLKA(nn.Module):
    """
    HG_Block of PPHGNetV2 with deformable LKA attention added after the excitation conv.

    https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/modeling/backbones/hgnet_v2.py
    """

    def __init__(self, c1, cm, c2, k=3, n=6, lightconv=False, shortcut=False, act=nn.ReLU()):
        """Initializes the HG block with n conv blocks, squeeze/excitation convs, and a DLKA attention layer."""
        super().__init__()
        block = LightConv if lightconv else Conv
        self.m = nn.ModuleList(block(c1 if i == 0 else cm, cm, k=k, act=act) for i in range(n))
        self.sc = Conv(c1 + n * cm, c2 // 2, 1, 1, act=act)  # squeeze conv
        self.ec = Conv(c2 // 2, c2, 1, 1, act=act)  # excitation conv
        self.add = shortcut and c1 == c2
        self.cv = deformable_LKA_Attention(c2)
        
    def forward(self, x):
        """Forward pass of a PPHGNetV2 backbone layer."""
        y = [x]
        y.extend(m(y[-1]) for m in self.m)
        y = self.cv(self.ec(self.sc(torch.cat(y, 1))))
        return y + x if self.add else y


4.2 Improvement 2⭐

Improvement method: integrate the DLKA module into ResNetLayer (see Section 5 for the installation steps).

The second improvement modifies the ResNetLayer module in RT-DETR by inserting DLKA into it.

The modified code is as follows:

The ResNetLayer is extended with the DLKA module (inside its ResNetBlock) and renamed ResNetLayer_DLKA:

class ResNetBlock(nn.Module):
    """ResNet block with standard convolution layers."""

    def __init__(self, c1, c2, s=1, e=4):
        """Initialize convolution with given parameters."""
        super().__init__()
        c3 = e * c2
        self.cv1 = Conv(c1, c2, k=1, s=1, act=True)
        self.cv2 = Conv(c2, c2, k=3, s=s, p=1, act=True)
        self.cv3 = Conv(c2, c3, k=1, act=False)
        self.cv4 = deformable_LKA_Attention(c2)
        self.shortcut = nn.Sequential(Conv(c1, c3, k=1, s=s, act=False)) if s != 1 or c1 != c3 else nn.Identity()

    def forward(self, x):
        """Forward pass through the ResNet block."""
        return F.relu(self.cv3(self.cv4(self.cv2(self.cv1(x)))) + self.shortcut(x))

class ResNetLayer_DLKA(nn.Module):
    """ResNet layer with multiple ResNet blocks."""

    def __init__(self, c1, c2, s=1, is_first=False, n=1, e=4):
        """Initializes the ResNetLayer given arguments."""
        super().__init__()
        self.is_first = is_first

        if self.is_first:
            self.layer = nn.Sequential(
                Conv(c1, c2, k=7, s=2, p=3, act=True), nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
            )
        else:
            blocks = [ResNetBlock(c1, c2, s, e=e)]
            blocks.extend([ResNetBlock(e * c2, c2, 1, e=e) for _ in range(n - 1)])
            self.layer = nn.Sequential(*blocks)

    def forward(self, x):
        """Forward pass through the ResNet layer."""
        return self.layer(x)
    


Note❗: the module names that need to be declared in Section 5 are HGBlock_DLKA and ResNetLayer_DLKA.


5. Installation Steps

5.1 Step 1

① Create a new AddModules folder under ultralytics/nn/ to hold the module code.

② Create DLKA.py inside the AddModules folder and paste the code from Section 3 into it.


5.2 Step 2

AddModules 文件夹下新建 __init__.py (已有则不用新建),在文件内导入模块: from .DLKA import *


5.3 Step 3

ultralytics/nn/modules/tasks.py 文件中,需要在两处位置添加各模块类名称。

First, import the modules.


Second, register the HGBlock_DLKA and ResNetLayer_DLKA modules in the parse_model function, alongside the existing HGBlock and ResNetLayer entries so that their channel arguments are parsed the same way.



6. YAML Model Files

6.1 Model Variant 1

Using ultralytics/cfg/models/rt-detr/rtdetr-l.yaml as the starting point, create a model file for your own dataset in the same directory, named rtdetr-l-HGBlock_DLKA.yaml.

Copy the contents of rtdetr-l.yaml into rtdetr-l-HGBlock_DLKA.yaml and set nc to the number of target classes in your dataset.

📌 The modification replaces HGBlock in the backbone with HGBlock_DLKA.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr

# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, HGStem, [32, 48]] # 0-P2/4
  - [-1, 6, HGBlock, [48, 128, 3]] # stage 1

  - [-1, 1, DWConv, [128, 3, 2, 1, False]] # 2-P3/8
  - [-1, 6, HGBlock, [96, 512, 3]] # stage 2

  - [-1, 1, DWConv, [512, 3, 2, 1, False]] # 4-P4/16
  - [-1, 6, HGBlock_DLKA, [192, 1024, 5, True, False]] # cm, c2, k, light, shortcut
  - [-1, 6, HGBlock_DLKA, [192, 1024, 5, True, True]]
  - [-1, 6, HGBlock_DLKA, [192, 1024, 5, True, True]] # stage 3

  - [-1, 1, DWConv, [1024, 3, 2, 1, False]] # 8-P5/32
  - [-1, 6, HGBlock, [384, 2048, 5, True, False]] # stage 4

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 10 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]] # 12, Y5, lateral_convs.0

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [7, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 14 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepC3, [256]] # 16, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]] # 17, Y4, lateral_convs.1

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [3, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 19 input_proj.0
  - [[-2, -1], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, RepC3, [256]] # X3 (21), fpn_blocks.1

  - [-1, 1, Conv, [256, 3, 2]] # 22, downsample_convs.0
  - [[-1, 17], 1, Concat, [1]] # cat Y4
  - [-1, 3, RepC3, [256]] # F4 (24), pan_blocks.0

  - [-1, 1, Conv, [256, 3, 2]] # 25, downsample_convs.1
  - [[-1, 12], 1, Concat, [1]] # cat Y5
  - [-1, 3, RepC3, [256]] # F5 (27), pan_blocks.1

  - [[21, 24, 27], 1, RTDETRDecoder, [nc]] # Detect(P3, P4, P5)

6.2 Model Variant 2⭐

Using ultralytics/cfg/models/rt-detr/rtdetr-resnet50.yaml as the starting point, create a model file for your own dataset in the same directory, named rtdetr-ResNetLayer_DLKA.yaml.

Copy the contents of rtdetr-resnet50.yaml into rtdetr-ResNetLayer_DLKA.yaml and set nc to the number of target classes in your dataset.

📌 The modification replaces the ResNetLayer modules in the backbone with ResNetLayer_DLKA.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet50 object detection model with P3-P5 outputs.

# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, ResNetLayer_DLKA, [3, 64, 1, True, 1]] # 0
  - [-1, 1, ResNetLayer_DLKA, [64, 64, 1, False, 3]] # 1
  - [-1, 1, ResNetLayer_DLKA, [256, 128, 2, False, 4]] # 2
  - [-1, 1, ResNetLayer_DLKA, [512, 256, 2, False, 6]] # 3
  - [-1, 1, ResNetLayer_DLKA, [1024, 512, 2, False, 3]] # 4

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 5
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]] # 7

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [3, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 9
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepC3, [256]] # 11
  - [-1, 1, Conv, [256, 1, 1]] # 12

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [2, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 14
  - [[-2, -1], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, RepC3, [256]] # X3 (16), fpn_blocks.1

  - [-1, 1, Conv, [256, 3, 2]] # 17, downsample_convs.0
  - [[-1, 12], 1, Concat, [1]] # cat Y4
  - [-1, 3, RepC3, [256]] # F4 (19), pan_blocks.0

  - [-1, 1, Conv, [256, 3, 2]] # 20, downsample_convs.1
  - [[-1, 7], 1, Concat, [1]] # cat Y5
  - [-1, 3, RepC3, [256]] # F5 (22), pan_blocks.1

  - [[16, 19, 22], 1, RTDETRDecoder, [nc]] # Detect(P3, P4, P5)
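With both yaml files in place, training follows the standard ultralytics workflow (a sketch; `my_dataset.yaml` is a hypothetical path to your own dataset configuration):

```python
from ultralytics import RTDETR

# Build the modified architecture from the custom yaml (random initialization)
model = RTDETR("ultralytics/cfg/models/rt-detr/rtdetr-l-HGBlock_DLKA.yaml")

# my_dataset.yaml is hypothetical -- point this at your own dataset config
model.train(data="my_dataset.yaml", epochs=100, imgsz=640)
```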


7. Verifying the Result

Printing the network shows that HGBlock_DLKA and ResNetLayer_DLKA have been added to the model, which can now be trained.

rtdetr-l-HGBlock_DLKA

rtdetr-l-HGBlock_DLKA summary: 718 layers, 61,074,047 parameters, 61,074,047 gradients, 197.7 GFLOPs

                   from  n    params  module                                       arguments                     
  0                  -1  1     25248  ultralytics.nn.modules.block.HGStem          [3, 32, 48]                   
  1                  -1  6    155072  ultralytics.nn.modules.block.HGBlock         [48, 48, 128, 3, 6]           
  2                  -1  1      1408  ultralytics.nn.modules.conv.DWConv           [128, 128, 3, 2, 1, False]    
  3                  -1  6    839296  ultralytics.nn.modules.block.HGBlock         [128, 96, 512, 3, 6]          
  4                  -1  1      5632  ultralytics.nn.modules.conv.DWConv           [512, 512, 3, 2, 1, False]    
  5                  -1  6  11117332  ultralytics.nn.AddModules.DLKA.HGBlock_DLKA  [512, 192, 1024, 5, 6, True, False]
  6                  -1  6  11477780  ultralytics.nn.AddModules.DLKA.HGBlock_DLKA  [1024, 192, 1024, 5, 6, True, True]
  7                  -1  6  11477780  ultralytics.nn.AddModules.DLKA.HGBlock_DLKA  [1024, 192, 1024, 5, 6, True, True]
  8                  -1  1     11264  ultralytics.nn.modules.conv.DWConv           [1024, 1024, 3, 2, 1, False]  
  9                  -1  6   6708480  ultralytics.nn.modules.block.HGBlock         [1024, 384, 2048, 5, 6, True, False]
 10                  -1  1    524800  ultralytics.nn.modules.conv.Conv             [2048, 256, 1, 1, None, 1, 1, False]
 11                  -1  1    789760  ultralytics.nn.modules.transformer.AIFI      [256, 1024, 8]                
 12                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]              
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 14                   7  1    262656  ultralytics.nn.modules.conv.Conv             [1024, 256, 1, 1, None, 1, 1, False]
 15            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 16                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 17                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]              
 18                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 19                   3  1    131584  ultralytics.nn.modules.conv.Conv             [512, 256, 1, 1, None, 1, 1, False]
 20            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 21                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 22                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]              
 23            [-1, 17]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 24                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 25                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]              
 26            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 27                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 28        [21, 24, 27]  1   7303907  ultralytics.nn.modules.head.RTDETRDecoder    [1, [256, 256, 256]]          

rtdetr-ResNetLayer_DLKA

rtdetr-ResNetLayer_DLKA summary: 785 layers, 69,680,675 parameters, 69,680,675 gradients, 276.9 GFLOPs

                   from  n    params  module                                       arguments                     
  0                  -1  1      9536  ultralytics.nn.AddModules.DLKA.ResNetLayer_DLKA[3, 64, 1, True, 1]           
  1                  -1  1   1429884  ultralytics.nn.AddModules.DLKA.ResNetLayer_DLKA[64, 64, 1, False, 3]         
  2                  -1  1   4554832  ultralytics.nn.AddModules.DLKA.ResNetLayer_DLKA[256, 128, 2, False, 4]       
  3                  -1  1  17693048  ultralytics.nn.AddModules.DLKA.ResNetLayer_DLKA[512, 256, 2, False, 6]       
  4                  -1  1  26738620  ultralytics.nn.AddModules.DLKA.ResNetLayer_DLKA[1024, 512, 2, False, 3]      
  5                  -1  1    524800  ultralytics.nn.modules.conv.Conv             [2048, 256, 1, 1, None, 1, 1, False]
  6                  -1  1    789760  ultralytics.nn.modules.transformer.AIFI      [256, 1024, 8]                
  7                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]              
  8                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
  9                   3  1    262656  ultralytics.nn.modules.conv.Conv             [1024, 256, 1, 1, None, 1, 1, False]
 10            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 11                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 12                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]              
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 14                   2  1    131584  ultralytics.nn.modules.conv.Conv             [512, 256, 1, 1, None, 1, 1, False]
 15            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 16                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 17                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]              
 18            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 19                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 20                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]              
 21             [-1, 7]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 22                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 23        [16, 19, 22]  1   7303907  ultralytics.nn.modules.head.RTDETRDecoder    [1, [256, 256, 256]]          