

RT-DETR Improvement Strategies [Attention Mechanisms] | 2023 MCAttention: Multi-Scale Cross-Axis Attention for Multi-Scale Features and Global Context

一、Introduction

This article documents an improvement to RT-DETR object detection based on the MCA attention module. Plain axial attention struggles to build long-range interactions, which makes it hard to capture the spatial structures or shapes needed in segmentation tasks. The MCA attention module instead builds an interaction between two parallel axial attentions, making more effective use of multi-scale features and global context. When used to improve RT-DETR, it adapts to object shapes and captures global information about the targets more effectively.



二、How MCANet Works

MCANet: Medical Image Segmentation with Multi-Scale Cross-Axis Attention

MCANet (Medical Image Segmentation with Multi-Scale Cross-Axis Attention) is a network for medical image segmentation whose core component is Multi-Scale Cross-Axis Attention (MCA).

2.1 How MCA works:

  1. Axial attention recap
    • Axial attention decomposes self-attention into two parts that compute attention along the horizontal and vertical dimensions respectively. Building on this, Axial-DeepLab aggregates features sequentially along the horizontal and vertical directions, making it possible to capture global information.
    • Axial attention is more efficient than full self-attention, reducing the computational complexity from $O(HW \times HW)$ to $O(HW \times (H + W))$.
    • However, many medical image segmentation datasets are relatively small, so axial attention struggles to build long-range interactions and to capture the spatial structures or shapes that segmentation requires.
  2. Multi-scale cross-axis attention
    • MCA consists of two parallel branches that compute horizontal and vertical axial attention respectively. Each branch uses three 1D convolutions with different kernel sizes to encode multi-scale context along one spatial dimension, and then aggregates features along the other spatial dimension through cross-axis attention.
    • Taking the top branch as an example, given the feature map $F$ (a combination of the feature maps from the encoder's last three stages), it is encoded by three parallel 1D convolutions; the outputs are fused by summation and fed into a $1\times1$ convolution: $F_{x} = Conv_{1\times1}\left(\sum_{i=0}^{2}Conv1D_{i}^{x}(Norm(F))\right)$, where $Conv1D_{i}^{x}(\cdot)$ is a 1D convolution along the $x$-axis, $Norm(\cdot)$ is layer normalization, and $F_{x}$ is the output. The 1D convolution kernel sizes are set to $1\times7$, $1\times11$ and $1\times21$. The bottom branch output $F_{y}$ is obtained in the same way.
    • The top branch then feeds $F_{x}$ into $y$-axis attention. To better exploit the multi-scale convolutional features from both spatial directions, cross-attention is computed between $F_{x}$ and $F_{y}$: $F_{x}$ serves as the key and value matrices and $F_{y}$ as the query, giving $F_{T} = MHCA_{y}(F_{y}, F_{x}, F_{x})$, where $MHCA_{y}(\cdot,\cdot,\cdot)$ denotes multi-head cross-attention along the $y$-axis. The bottom branch encodes context along the $x$-axis in the same way: $F_{B} = MHCA_{x}(F_{x}, F_{y}, F_{y})$, where $MHCA_{x}(\cdot,\cdot,\cdot)$ denotes multi-head cross-attention along the $x$-axis. A minimal implementation sketch of these equations is given after this list.
    • The MCA output is $F_{out} = Conv_{1\times1}(F_{T}) + Conv_{1\times1}(F_{B}) + F$.
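
Below is a minimal PyTorch sketch of these equations. It is not the authors' released implementation: the kernel sizes and the query/key/value assignment follow the description above, while the normalization layer (GroupNorm as a stand-in for layer normalization), the depthwise 1D convolutions, and the head count are assumptions.

import torch
import torch.nn as nn

class CrossAxisAttention(nn.Module):
    """Hypothetical sketch of MCA's multi-scale cross-axis attention."""

    def __init__(self, channels, num_heads=8):
        super().__init__()
        self.norm = nn.GroupNorm(1, channels)  # stand-in for the paper's normalization
        # three parallel 1D convolutions along the x axis (kernels 1x7, 1x11, 1x21)
        self.conv_x = nn.ModuleList(
            nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels) for k in (7, 11, 21)
        )
        # three parallel 1D convolutions along the y axis (kernels 7x1, 11x1, 21x1)
        self.conv_y = nn.ModuleList(
            nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels) for k in (7, 11, 21)
        )
        self.proj_x = nn.Conv2d(channels, channels, 1)
        self.proj_y = nn.Conv2d(channels, channels, 1)
        self.mhca_y = nn.MultiheadAttention(channels, num_heads, batch_first=True)  # attention along y
        self.mhca_x = nn.MultiheadAttention(channels, num_heads, batch_first=True)  # attention along x
        self.out_t = nn.Conv2d(channels, channels, 1)
        self.out_b = nn.Conv2d(channels, channels, 1)

    def forward(self, f):  # f: (B, C, H, W)
        b, c, h, w = f.shape
        n = self.norm(f)
        f_x = self.proj_x(sum(conv(n) for conv in self.conv_x))  # F_x: multi-scale context along x
        f_y = self.proj_y(sum(conv(n) for conv in self.conv_y))  # F_y: multi-scale context along y
        # top branch: cross attention along the y axis, query = F_y, key/value = F_x
        q = f_y.permute(0, 3, 2, 1).reshape(b * w, h, c)
        kv = f_x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        f_t, _ = self.mhca_y(q, kv, kv)
        f_t = f_t.reshape(b, w, h, c).permute(0, 3, 2, 1)
        # bottom branch: cross attention along the x axis, query = F_x, key/value = F_y
        q = f_x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        kv = f_y.permute(0, 2, 3, 1).reshape(b * h, w, c)
        f_b, _ = self.mhca_x(q, kv, kv)
        f_b = f_b.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return self.out_t(f_t) + self.out_b(f_b) + f  # F_out = Conv1x1(F_T) + Conv1x1(F_B) + F

With channels=64 (divisible by the assumed 8 heads) and a 32x32 input, CrossAxisAttention(64)(torch.randn(1, 64, 32, 32)) returns a tensor of the same shape.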


2.2 Advantages of MCA:

  1. Lightweight multi-scale convolutions: an effective way to handle lesion regions or organs of various sizes and shapes.
  2. A novel attention mechanism: unlike most previous work, MCA does not apply axial attention directly to capture global context; it builds an interaction between two parallel axial attentions, making more effective use of multi-scale features and global context.
  3. A lightweight decoder: the decoder of the tiny variant has only 0.14M parameters, making it better suited to practical applications.

Paper: https://arxiv.org/pdf/2312.08866v1
Code: https://github.com/haoshao-nku/medical_seg

三、MCA Implementation Code

The implementation code of the MCA module is as follows:

import math
import torch
import torch.nn as nn

from ultralytics.nn.modules.conv import LightConv

class StdPool(nn.Module):
    """Computes the per-channel standard deviation of the feature map."""

    def __init__(self):
        super(StdPool, self).__init__()

    def forward(self, x):
        b, c, _, _ = x.size()
        std = x.view(b, c, -1).std(dim=2, keepdim=True)
        std = std.reshape(b, c, 1, 1)
        return std
        
class MCAGate(nn.Module):
    def __init__(self, k_size, pool_types=['avg', 'std']):
        """Constructs an MCA gate.
        Args:
            k_size: kernel size of the 1D conv used for local cross-channel interaction
            pool_types: pooling operators ('avg', 'max', 'std') used to summarize the spatial dims
        """
        super(MCAGate, self).__init__()
        self.pools = nn.ModuleList([])
        for pool_type in pool_types:
            if pool_type == 'avg':
                self.pools.append(nn.AdaptiveAvgPool2d(1))
            elif pool_type == 'max':
                self.pools.append(nn.AdaptiveMaxPool2d(1))
            elif pool_type == 'std':
                self.pools.append(StdPool())
            else:
                raise NotImplementedError
 
        self.conv = nn.Conv2d(1, 1, kernel_size=(1, k_size),  stride=1, padding=(0, (k_size - 1) // 2), bias=False)
        self.sigmoid = nn.Sigmoid()
        self.weight = nn.Parameter(torch.rand(2))
 
    def forward(self, x):
        feats = [pool(x) for pool in self.pools]
 
        if len(feats) == 1:
            out = feats[0]
        elif len(feats) == 2:
            # learnable weighted fusion of the avg-pooled and std-pooled descriptors
            weight = torch.sigmoid(self.weight)
            out = 1/2*(feats[0] + feats[1]) + weight[0] * feats[0] + weight[1] * feats[1]
        else:
            assert False, "Unexpected number of pooling results"
 
        out = out.permute(0, 3, 2, 1).contiguous()  # move the pooled channel descriptor to the last axis
        out = self.conv(out)                        # 1D conv for local cross-channel interaction
        out = out.permute(0, 3, 2, 1).contiguous()  # restore the original layout
 
        out = self.sigmoid(out)
        out = out.expand_as(x)
 
        return x * out
 
class MCA(nn.Module):
    def __init__(self, channel, no_spatial=False): 
        """Constructs a MCA module.
        Args:
            inp: Number of channels of the input feature maps
            no_spatial: whether to build channel dimension interactions
        """
        super(MCA, self).__init__()
        lambd = 1.5
        gamma = 1
        # adaptive kernel size for the channel-interaction gate, forced to be odd
        # e.g. channel = 1024 -> temp = round(|(10 - 1) / 1.5|) = 6 -> kernel = 5
        temp = round(abs((math.log2(channel) - gamma) / lambd))
        kernel = temp if temp % 2 else temp - 1
 
        self.h_cw = MCAGate(3)  # gate applied with C and H swapped (height-channel interaction)
        self.w_hc = MCAGate(3)  # gate applied with C and W swapped (width-channel interaction)
        self.no_spatial = no_spatial
        if not no_spatial:
            self.c_hw = MCAGate(kernel)  # channel gate on the original (C, H, W) layout

    def forward(self, x):
        # branch 1: swap C and H, apply the gate, swap back
        x_h = x.permute(0, 2, 1, 3).contiguous()
        x_h = self.h_cw(x_h)
        x_h = x_h.permute(0, 2, 1, 3).contiguous()

        # branch 2: swap C and W, apply the gate, swap back
        x_w = x.permute(0, 3, 2, 1).contiguous()
        x_w = self.w_hc(x_w)
        x_w = x_w.permute(0, 3, 2, 1).contiguous()
 
        if not self.no_spatial:   
            x_c = self.c_hw(x)
            x_out = 1 / 3 * (x_c + x_h + x_w)
        else:
            x_out = 1 / 2 * (x_h + x_w)
 
        return x_out
def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p

class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""
 
    default_act = nn.SiLU()  # default activation
 
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()
 
    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))
 
    def forward_fuse(self, x):
        """Perform transposed convolution of 2D data."""
        return self.act(self.conv(x))

class HGBlock_MCA(nn.Module):
    """
    HG_Block of PPHGNetV2 with 2 convolutions and LightConv.

    https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/modeling/backbones/hgnet_v2.py
    """

    def __init__(self, c1, cm, c2, k=3, n=6, lightconv=False, shortcut=False, act=nn.ReLU()):
        """Initializes a CSP Bottleneck with 1 convolution using specified input and output channels."""
        super().__init__()
        block = LightConv if lightconv else Conv
        self.m = nn.ModuleList(block(c1 if i == 0 else cm, cm, k=k, act=act) for i in range(n))
        self.sc = Conv(c1 + n * cm, c2 // 2, 1, 1, act=act)  # squeeze conv
        self.ec = Conv(c2 // 2, c2, 1, 1, act=act)  # excitation conv
        self.add = shortcut and c1 == c2
        self.cv = MCA(c2)
        
    def forward(self, x):
        """Forward pass of a PPHGNetV2 backbone layer."""
        y = [x]
        y.extend(m(y[-1]) for m in self.m)
        y = self.cv(self.ec(self.sc(torch.cat(y, 1))))
        return y + x if self.add else y
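
A quick shape check for the modules above (a hedged usage sketch; the input sizes are arbitrary and ultralytics must be installed for the LightConv import):

if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    print(MCA(64)(x).shape)                                  # torch.Size([2, 64, 32, 32])
    print(HGBlock_MCA(64, 32, 64, shortcut=True)(x).shape)   # torch.Size([2, 64, 32, 32])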

四、Improvement Approaches

4.1 Improvement 1

Module improvement method 1️⃣: insert the MCA module directly. (Integration steps are covered in Section 5.)
After the MCA module is added, the backbone is configured as shown in the YAML in Section 6.1.


Note❗: the module name that needs to be declared is: MCA

4.2 Improvement 2⭐

Module improvement method 2️⃣: an HGBlock based on the MCA module. (Integration steps are covered in Section 5.)

Compared with method 1, which simply inserts an attention module, using the attention module to improve another building block such as a convolution module is more novel and may deliver higher training accuracy.

The second approach improves the HGBlock module in RT-DETR. The MCA attention module builds an interaction between two parallel axial attentions to make more effective use of multi-scale features and global context; after being added to HGBlock, it can better fit object shapes and capture global information about the targets.

The improved code is as follows:

class HGBlock_MCA(nn.Module):
    """
    HG_Block of PPHGNetV2 with 2 convolutions and LightConv.

    https://github.com/PaddlePaddle/PaddleDetection/blob/develop/ppdet/modeling/backbones/hgnet_v2.py
    """

    def __init__(self, c1, cm, c2, k=3, n=6, lightconv=False, shortcut=False, act=nn.ReLU()):
        """Initializes a CSP Bottleneck with 1 convolution using specified input and output channels."""
        super().__init__()
        block = LightConv if lightconv else Conv
        self.m = nn.ModuleList(block(c1 if i == 0 else cm, cm, k=k, act=act) for i in range(n))
        self.sc = Conv(c1 + n * cm, c2 // 2, 1, 1, act=act)  # squeeze conv
        self.ec = Conv(c2 // 2, c2, 1, 1, act=act)  # excitation conv
        self.add = shortcut and c1 == c2
        self.cv = MCA(c2)
        
    def forward(self, x):
        """Forward pass of a PPHGNetV2 backbone layer."""
        y = [x]
        y.extend(m(y[-1]) for m in self.m)
        y = self.cv(self.ec(self.sc(torch.cat(y, 1))))
        return y + x if self.add else y


Note❗: the module name that needs to be declared is: HGBlock_MCA


五、Integration Steps

5.1 Step 1

① Under the ultralytics/nn/ directory, create an AddModules folder to hold the module code.

② Inside the AddModules folder, create MCA.py and paste the code from Section 3 into it.


5.2 Step 2

AddModules 文件夹下新建 __init__.py (已有则不用新建),在文件内导入模块: from .MCA import *


5.3 Step 3

ultralytics/nn/modules/tasks.py 文件中,需要在两处位置添加各模块类名称。

First: import the modules.


Second: register the MCA and HGBlock_MCA modules in the parse_model function, as sketched below.
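
A sketch of these two tasks.py edits, assuming a recent ultralytics version (the exact parse_model branches in your copy may differ slightly):

# 1) near the other module imports at the top of ultralytics/nn/tasks.py
from ultralytics.nn.AddModules import MCA, HGBlock_MCA

# 2) inside parse_model(), alongside the existing per-module branches
#    (shown as comments because these lines are merged into existing code):
#    elif m is MCA:
#        args = [ch[f], *args]                 # prepend the input channel count
#    elif m is HGBlock_MCA:                    # mirrors the stock HGBlock handling
#        c1, cm, c2 = ch[f], args[0], args[1]
#        args = [c1, cm, c2, *args[2:]]
#        args.insert(4, n)                     # number of repeats
#        n = 1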



六、Model YAML Files

6.1 Improved model version 1

With the code in place, configure the model's YAML file.

Taking ultralytics/cfg/models/rt-detr/rtdetr-l.yaml as an example, create a model file rtdetr-l-MCA.yaml in the same directory for training on your own dataset.

rtdetr-l.yaml 中的内容复制到 rtdetr-l-MCA.yaml 文件下,修改 nc 数量等于自己数据中目标的数量。
在骨干网络中添加 MCA模块 只需要填入一个参数,通道数

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr

# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, HGStem, [32, 48]] # 0-P2/4
  - [-1, 6, HGBlock, [48, 128, 3]] # stage 1

  - [-1, 1, DWConv, [128, 3, 2, 1, False]] # 2-P3/8
  - [-1, 6, HGBlock, [96, 512, 3]] # stage 2

  - [-1, 1, DWConv, [512, 3, 2, 1, False]] # 4-P4/16
  - [-1, 6, HGBlock, [192, 1024, 5, True, False]] # cm, c2, k, light, shortcut
  - [-1, 6, HGBlock, [192, 1024, 5, True, True]]
  - [-1, 6, HGBlock, [192, 1024, 5, True, True]] # stage 3

  - [-1, 1, DWConv, [1024, 3, 2, 1, False]] # 8-P5/32
  - [-1, 1, MCA, [1024]] # 9 MCA attention
  - [-1, 6, HGBlock, [384, 2048, 5, True, False]] # stage 4

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 11 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]] # 13, Y5, lateral_convs.0

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [7, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 15 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepC3, [256]] # 17, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]] # 18, Y4, lateral_convs.1

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [3, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 20 input_proj.0
  - [[-2, -1], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, RepC3, [256]] # X3 (22), fpn_blocks.1

  - [-1, 1, Conv, [256, 3, 2]] # 23, downsample_convs.0
  - [[-1, 18], 1, Concat, [1]] # cat Y4
  - [-1, 3, RepC3, [256]] # F4 (25), pan_blocks.0

  - [-1, 1, Conv, [256, 3, 2]] # 26, downsample_convs.1
  - [[-1, 13], 1, Concat, [1]] # cat Y5
  - [-1, 3, RepC3, [256]] # F5 (28), pan_blocks.1

  - [[22, 25, 28], 1, RTDETRDecoder, [nc]] # Detect(P3, P4, P5)

6.2 Improved model version 2⭐

Again taking ultralytics/cfg/models/rt-detr/rtdetr-l.yaml as an example, create a model file rtdetr-l-HGBlock_MCA.yaml in the same directory for training on your own dataset.

rtdetr-l.yaml 中的内容复制到 rtdetr-l-HGBlock_MCA.yaml 文件下,修改 nc 数量等于自己数据中目标的数量。

📌 The modification replaces some of the HGBlock modules in the backbone with HGBlock_MCA modules.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr

# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, HGStem, [32, 48]] # 0-P2/4
  - [-1, 6, HGBlock, [48, 128, 3]] # stage 1

  - [-1, 1, DWConv, [128, 3, 2, 1, False]] # 2-P3/8
  - [-1, 6, HGBlock, [96, 512, 3]] # stage 2

  - [-1, 1, DWConv, [512, 3, 2, 1, False]] # 4-P4/16
  - [-1, 6, HGBlock_MCA, [192, 1024, 5, True, False]] # cm, c2, k, light, shortcut
  - [-1, 6, HGBlock_MCA, [192, 1024, 5, True, True]]
  - [-1, 6, HGBlock_MCA, [192, 1024, 5, True, True]] # stage 3

  - [-1, 1, DWConv, [1024, 3, 2, 1, False]] # 8-P5/32
  - [-1, 6, HGBlock, [384, 2048, 5, True, False]] # stage 4

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 10 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]] # 12, Y5, lateral_convs.0

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [7, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 14 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepC3, [256]] # 16, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]] # 17, Y4, lateral_convs.1

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [3, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 19 input_proj.0
  - [[-2, -1], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, RepC3, [256]] # X3 (21), fpn_blocks.1

  - [-1, 1, Conv, [256, 3, 2]] # 22, downsample_convs.0
  - [[-1, 17], 1, Concat, [1]] # cat Y4
  - [-1, 3, RepC3, [256]] # F4 (24), pan_blocks.0

  - [-1, 1, Conv, [256, 3, 2]] # 25, downsample_convs.1
  - [[-1, 12], 1, Concat, [1]] # cat Y5
  - [-1, 3, RepC3, [256]] # F5 (27), pan_blocks.1

  - [[21, 24, 27], 1, RTDETRDecoder, [nc]] # Detect(P3, P4, P5)


七、Successful Run Results

Printing each network shows that MCA and HGBlock_MCA have been added to the models, which are now ready for training.
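
For reference, the modified configurations can be built and inspected with the standard ultralytics API (a usage sketch; adjust the path to wherever you saved the YAML file):

from ultralytics import RTDETR

model = RTDETR("ultralytics/cfg/models/rt-detr/rtdetr-l-MCA.yaml")  # builds the model and prints the layer table
model.info()                                                        # prints the parameter/GFLOPs summary
# model.train(data="your_dataset.yaml", epochs=100, imgsz=640)      # start training on your dataset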

rtdetr-l-MCA

rtdetr-l-MCA summary: 694 layers, 32,808,141 parameters, 32,808,141 gradients, 108.0 GFLOPs

                   from  n    params  module                                       arguments                     
  0                  -1  1     25248  ultralytics.nn.modules.block.HGStem          [3, 32, 48]                   
  1                  -1  6    155072  ultralytics.nn.modules.block.HGBlock         [48, 48, 128, 3, 6]           
  2                  -1  1      1408  ultralytics.nn.modules.conv.DWConv           [128, 128, 3, 2, 1, False]    
  3                  -1  6    839296  ultralytics.nn.modules.block.HGBlock         [128, 96, 512, 3, 6]          
  4                  -1  1      5632  ultralytics.nn.modules.conv.DWConv           [512, 512, 3, 2, 1, False]    
  5                  -1  6   1695360  ultralytics.nn.modules.block.HGBlock         [512, 192, 1024, 5, 6, True, False]
  6                  -1  6   2055808  ultralytics.nn.modules.block.HGBlock         [1024, 192, 1024, 5, 6, True, True]
  7                  -1  6   2055808  ultralytics.nn.modules.block.HGBlock         [1024, 192, 1024, 5, 6, True, True]
  8                  -1  1     11264  ultralytics.nn.modules.conv.DWConv           [1024, 1024, 3, 2, 1, False]  
  9                  -1  1        10  ultralytics.nn.AddModules.MCA.MCA            [1024, 1024]                  
 10                  -1  6   6708480  ultralytics.nn.modules.block.HGBlock         [1024, 384, 2048, 5, 6, True, False]
 11                  -1  1    524800  ultralytics.nn.modules.conv.Conv             [2048, 256, 1, 1, None, 1, 1, False]
 12                  -1  1    789760  ultralytics.nn.modules.transformer.AIFI      [256, 1024, 8]                
 13                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]              
 14                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 15                   7  1    262656  ultralytics.nn.modules.conv.Conv             [1024, 256, 1, 1, None, 1, 1, False]
 16            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 17                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 18                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]              
 19                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 20                   3  1    131584  ultralytics.nn.modules.conv.Conv             [512, 256, 1, 1, None, 1, 1, False]
 21            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 22                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 23                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]              
 24            [-1, 18]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 25                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 26                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]              
 27            [-1, 13]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 28                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 29        [22, 25, 28]  1   7303907  ultralytics.nn.modules.head.RTDETRDecoder    [1, [256, 256, 256]]          
rtdetr-l-MCA summary: 694 layers, 32,808,141 parameters, 32,808,141 gradients, 108.0 GFLOPs

rtdetr-l-HGBlock_MCA

rtdetr-l-HGBlock_MCA summary: 739 layers, 32,808,182 parameters, 32,808,182 gradients, 108.0 GFLOPs

                   from  n    params  module                                       arguments                     
  0                  -1  1     25248  ultralytics.nn.modules.block.HGStem          [3, 32, 48]                   
  1                  -1  6    155072  ultralytics.nn.modules.block.HGBlock         [48, 48, 128, 3, 6]           
  2                  -1  1      1408  ultralytics.nn.modules.conv.DWConv           [128, 128, 3, 2, 1, False]    
  3                  -1  6    839296  ultralytics.nn.modules.block.HGBlock         [128, 96, 512, 3, 6]          
  4                  -1  1      5632  ultralytics.nn.modules.conv.DWConv           [512, 512, 3, 2, 1, False]    
  5                  -1  6   1695377  ultralytics.nn.AddModules.MCA.HGBlock_MCA    [512, 192, 1024, 5, 6, True, False]
  6                  -1  6   2055825  ultralytics.nn.AddModules.MCA.HGBlock_MCA    [1024, 192, 1024, 5, 6, True, True]
  7                  -1  6   2055825  ultralytics.nn.AddModules.MCA.HGBlock_MCA    [1024, 192, 1024, 5, 6, True, True]
  8                  -1  1     11264  ultralytics.nn.modules.conv.DWConv           [1024, 1024, 3, 2, 1, False]  
  9                  -1  6   6708480  ultralytics.nn.modules.block.HGBlock         [1024, 384, 2048, 5, 6, True, False]
 10                  -1  1    524800  ultralytics.nn.modules.conv.Conv             [2048, 256, 1, 1, None, 1, 1, False]
 11                  -1  1    789760  ultralytics.nn.modules.transformer.AIFI      [256, 1024, 8]                
 12                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]              
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 14                   7  1    262656  ultralytics.nn.modules.conv.Conv             [1024, 256, 1, 1, None, 1, 1, False]
 15            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 16                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 17                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]              
 18                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 19                   3  1    131584  ultralytics.nn.modules.conv.Conv             [512, 256, 1, 1, None, 1, 1, False]
 20            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 21                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 22                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]              
 23            [-1, 17]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 24                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 25                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]              
 26            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 27                  -1  3   2232320  ultralytics.nn.modules.block.RepC3           [512, 256, 3]                 
 28        [21, 24, 27]  1   7303907  ultralytics.nn.modules.head.RTDETRDecoder    [1, [256, 256, 256]]          
rtdetr-l-HGBlock_MCA summary: 739 layers, 32,808,182 parameters, 32,808,182 gradients, 108.0 GFLOPs