【RT-DETR Multimodal Fusion Improvement】| Using the iRMB Inverted Residual Mobile Block to Further Improve CGAFusion

1. Introduction

This post documents how to improve the multimodal fusion part of RT-DETR with the iRMB module, focusing on how to reuse existing modules for a second-stage improvement of the fusion stage.

The iRMB (Inverted Residual Mobile Block) overcomes a limitation of common modules: it absorbs the efficiency of CNNs for modeling local features while using the dynamic modeling capacity of Transformers to learn long-range interactions between modalities. Compared with methods that introduce complex structures or multiple mixed modules, it strikes a better trade-off between model cost and accuracy.

Here it is applied inside the CGAFusion module as a second-stage innovation, better highlighting the important features of each modality and improving model performance.



2. The iRMB Attention Module

Rethinking Mobile Block for Efficient Attention-based Models

2.1 Design Motivation

  • Unify the strengths of CNNs and Transformers: starting from the efficient Inverted Residual Block (IRB) and the effective components of the Transformer, the goal is to integrate both advantages at the level of basic-block design and build an IRB-like lightweight base structure for attention models.
  • Fix problems of existing models: current methods introduce complex structures or multiple mixed modules, which hinders deployment optimization. Rethinking the IRB and the Transformer components yields a simple yet effective block.

2.2 Principle

  • Based on the Meta Mobile Block (MMB): the MMB is obtained by rethinking and abstracting the IRB from MobileNetv2 together with the core MHSA and FFN modules of the Transformer. Using a parameterized expansion ratio λ and an efficient operator F, it instantiates different modules (IRB, MHSA, FFN), revealing the consistent underlying form these modules share.


  • Follows general criteria for efficient models: the design obeys usability (simple implementation without complex operators, easy to optimize for deployment), uniformity (few core modules, lower model complexity, faster deployment), effectiveness (good performance on classification and dense prediction), and efficiency (few parameters and little computation, balanced against accuracy).

2.3 Structure

2.3.1 Main Components

At the micro level, the iRMB consists of a Depth-Wise Convolution (DW-Conv) and an improved Expanded Window MHSA (EW-MHSA).

2.3.2 Operation Flow

  • First, as in the MMB, an expansion MLP ($MLP_e$) with an output/input ratio equal to λ expands the channel dimension: $X_e = MLP_e(X) \in \mathbb{R}^{\lambda C \times H \times W}$.
  • Then, an intermediate operator $F$ further enhances the image features, producing $X_f$. Here $F$ is modeled as cascaded MHSA and convolution, $F(\cdot) = \mathrm{Conv}(\mathrm{MHSA}(\cdot))$, concretely a combination of DW-Conv and EW-MHSA, where EW-MHSA computes the attention matrix with $Q = K = X \in \mathbb{R}^{C \times H \times W}$, while the expanded $X_e$ serves as $V \in \mathbb{R}^{\lambda C \times H \times W}$.
  • Finally, a shrinkage MLP ($MLP_s$) with the inverted input/output ratio λ shrinks the channel dimension back: $X_s = MLP_s(X_f) \in \mathbb{R}^{C \times H \times W}$, and a residual connection gives the final output $Y = X + X_s \in \mathbb{R}^{C \times H \times W}$.
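The expand → operate → shrink flow above can be traced with a small shape-bookkeeping sketch. This is illustrative only: `irmb_shapes` and its parameter names are made up here, and the function simply mirrors the channel dimensions described in the bullets.

```python
# Illustrative sketch: trace (channels, height, width) through the iRMB flow.
# Nothing here comes from the EMO source; it only mirrors the formulas above.

def irmb_shapes(C, H, W, lam):
    """Return the shapes of X, X_e, Q/K, V, X_f, X_s, Y for expansion ratio lam."""
    x = (C, H, W)                  # input X
    x_e = (int(lam * C), H, W)     # X_e = MLP_e(X): channels expanded by lambda
    q_k = (C, H, W)                # EW-MHSA: Q = K come from the original X
    v = (int(lam * C), H, W)       # ... while the expanded X_e serves as V
    x_f = (int(lam * C), H, W)     # X_f = Conv(MHSA(.)): channels unchanged
    x_s = (C, H, W)                # X_s = MLP_s(X_f): channels shrunk back
    y = (C, H, W)                  # Y = X + X_s: residual keeps the input shape
    return x, x_e, q_k, v, x_f, x_s, y

shapes = irmb_shapes(C=64, H=20, W=20, lam=2.0)
```

Note that only $V$ lives in the expanded $\lambda C$ space; $Q$ and $K$ stay at the original width, which is what keeps the attention cost low.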


2.4 Advantages

  • Absorbs the strengths of CNNs and Transformers: it inherits the efficiency of CNNs for modeling local features while using the dynamic modeling capacity of Transformers to learn long-range interactions.
  • Lower model cost:
    • Using an efficient Window MHSA (W-MHSA) and Depth-Wise Convolution (DW-Conv) with skip connections trades off model cost against accuracy.
    • The design is flexible: different depths can use different settings, meeting performance needs while keeping the structure simple.
  • Performance:
    • In ImageNet-1K classification experiments, replacing standard Transformer blocks with the iRMB improves performance with fewer parameters and less computation under the same training setup.
    • On downstream tasks such as object detection and semantic segmentation, the EMO model built on the iRMB achieves highly competitive results on multiple benchmarks, surpassing current SoTA methods.

Paper: https://arxiv.org/pdf/2301.01146.pdf
Code: https://github.com/zhangzjn/EMO

3. iRMBFusion Implementation

The implementation of iRMBFusion is as follows:

import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from functools import partial
from einops import rearrange
from timm.models._efficientnet_blocks import SqueezeExcite
from timm.models.layers import DropPath

inplace = True

class LayerNorm2d(nn.Module):

    def __init__(self, normalized_shape, eps=1e-6, elementwise_affine=True):
        super().__init__()
        self.norm = nn.LayerNorm(normalized_shape, eps, elementwise_affine)

    def forward(self, x):
        x = rearrange(x, 'b c h w -> b h w c').contiguous()
        x = self.norm(x)
        x = rearrange(x, 'b h w c -> b c h w').contiguous()
        return x

def get_norm(norm_layer='in_1d'):
    eps = 1e-6
    norm_dict = {
        'none': nn.Identity,
        'in_1d': partial(nn.InstanceNorm1d, eps=eps),
        'in_2d': partial(nn.InstanceNorm2d, eps=eps),
        'in_3d': partial(nn.InstanceNorm3d, eps=eps),
        'bn_1d': partial(nn.BatchNorm1d, eps=eps),
        'bn_2d': partial(nn.BatchNorm2d, eps=eps),
        # 'bn_2d': partial(nn.SyncBatchNorm, eps=eps),
        'bn_3d': partial(nn.BatchNorm3d, eps=eps),
        'gn': partial(nn.GroupNorm, eps=eps),
        'ln_1d': partial(nn.LayerNorm, eps=eps),
        'ln_2d': partial(LayerNorm2d, eps=eps),
    }
    return norm_dict[norm_layer]

def get_act(act_layer='relu'):
    act_dict = {
        'none': nn.Identity,
        'relu': nn.ReLU,
        'relu6': nn.ReLU6,
        'silu': nn.SiLU
    }
    return act_dict[act_layer]

class ConvNormAct(nn.Module):

    def __init__(self, dim_in, dim_out, kernel_size, stride=1, dilation=1, groups=1, bias=False,
                 skip=False, norm_layer='bn_2d', act_layer='relu', inplace=True, drop_path_rate=0.):
        super(ConvNormAct, self).__init__()
        self.has_skip = skip and dim_in == dim_out
        padding = math.ceil((kernel_size - stride) / 2)
        self.conv = nn.Conv2d(dim_in, dim_out, kernel_size, stride, padding, dilation, groups, bias)
        self.norm = get_norm(norm_layer)(dim_out)
        self.act = get_act(act_layer)(inplace=inplace)
        self.drop_path = DropPath(drop_path_rate) if drop_path_rate else nn.Identity()

    def forward(self, x):
        shortcut = x
        x = self.conv(x)
        x = self.norm(x)
        x = self.act(x)
        if self.has_skip:
            x = self.drop_path(x) + shortcut
        return x

class iRMB(nn.Module):

    def __init__(self, dim_in,  norm_in=True, has_skip=True, exp_ratio=1.0, norm_layer='bn_2d',
                 act_layer='relu', v_proj=True, dw_ks=3, stride=1, dilation=1, se_ratio=0.0, dim_head=8, window_size=7,
                 attn_s=True, qkv_bias=False, attn_drop=0., drop=0., drop_path=0., v_group=False, attn_pre=False):
        super().__init__()
        dim_out = dim_in
        self.norm = get_norm(norm_layer)(dim_in) if norm_in else nn.Identity()
        dim_mid = int(dim_in * exp_ratio)
        self.has_skip = (dim_in == dim_out and stride == 1) and has_skip
        self.attn_s = attn_s
        if self.attn_s:
            assert dim_in % dim_head == 0, 'dim should be divisible by num_heads'
            self.dim_head = dim_head
            self.window_size = window_size
            self.num_head = dim_in // dim_head
            self.scale = self.dim_head ** -0.5
            self.attn_pre = attn_pre
            self.qk = ConvNormAct(dim_in, int(dim_in * 2), kernel_size=1, bias=qkv_bias, norm_layer='none',
                                  act_layer='none')
            self.v = ConvNormAct(dim_in, dim_mid, kernel_size=1, groups=self.num_head if v_group else 1, bias=qkv_bias,
                                 norm_layer='none', act_layer=act_layer, inplace=inplace)
            self.attn_drop = nn.Dropout(attn_drop)
        else:
            if v_proj:
                self.v = ConvNormAct(dim_in, dim_mid, kernel_size=1, bias=qkv_bias, norm_layer='none',
                                     act_layer=act_layer, inplace=inplace)
            else:
                self.v = nn.Identity()
        self.conv_local = ConvNormAct(dim_mid, dim_mid, kernel_size=dw_ks, stride=stride, dilation=dilation,
                                      groups=dim_mid, norm_layer='bn_2d', act_layer='silu', inplace=inplace)
        self.se = SqueezeExcite(dim_mid, rd_ratio=se_ratio, act_layer=get_act(act_layer)) if se_ratio > 0.0 else nn.Identity()

        self.proj_drop = nn.Dropout(drop)
        self.proj = ConvNormAct(dim_mid, dim_out, kernel_size=1, norm_layer='none', act_layer='none', inplace=inplace)
        self.drop_path = DropPath(drop_path) if drop_path else nn.Identity()

    def forward(self, x):
        shortcut = x
        x = self.norm(x)
        B, C, H, W = x.shape
        if self.attn_s:
            # padding
            if self.window_size <= 0:
                window_size_W, window_size_H = W, H
            else:
                window_size_W, window_size_H = self.window_size, self.window_size
            pad_l, pad_t = 0, 0
            pad_r = (window_size_W - W % window_size_W) % window_size_W
            pad_b = (window_size_H - H % window_size_H) % window_size_H
            x = F.pad(x, (pad_l, pad_r, pad_t, pad_b, 0, 0,))
            n1, n2 = (H + pad_b) // window_size_H, (W + pad_r) // window_size_W
            x = rearrange(x, 'b c (h1 n1) (w1 n2) -> (b n1 n2) c h1 w1', n1=n1, n2=n2).contiguous()
            # attention
            b, c, h, w = x.shape
            qk = self.qk(x)
            qk = rearrange(qk, 'b (qk heads dim_head) h w -> qk b heads (h w) dim_head', qk=2, heads=self.num_head,
                           dim_head=self.dim_head).contiguous()
            q, k = qk[0], qk[1]
            attn_spa = (q @ k.transpose(-2, -1)) * self.scale
            attn_spa = attn_spa.softmax(dim=-1)
            attn_spa = self.attn_drop(attn_spa)
            if self.attn_pre:
                x = rearrange(x, 'b (heads dim_head) h w -> b heads (h w) dim_head', heads=self.num_head).contiguous()
                x_spa = attn_spa @ x
                x_spa = rearrange(x_spa, 'b heads (h w) dim_head -> b (heads dim_head) h w', heads=self.num_head, h=h,
                                  w=w).contiguous()
                x_spa = self.v(x_spa)
            else:
                v = self.v(x)
                v = rearrange(v, 'b (heads dim_head) h w -> b heads (h w) dim_head', heads=self.num_head).contiguous()
                x_spa = attn_spa @ v
                x_spa = rearrange(x_spa, 'b heads (h w) dim_head -> b (heads dim_head) h w', heads=self.num_head, h=h,
                                  w=w).contiguous()
            # unpadding
            x = rearrange(x_spa, '(b n1 n2) c h1 w1 -> b c (h1 n1) (w1 n2)', n1=n1, n2=n2).contiguous()
            if pad_r > 0 or pad_b > 0:
                x = x[:, :, :H, :W].contiguous()
        else:
            x = self.v(x)

        x = x + self.se(self.conv_local(x)) if self.has_skip else self.se(self.conv_local(x))

        x = self.proj_drop(x)
        x = self.proj(x)

        x = (shortcut + self.drop_path(x)) if self.has_skip else x
        return x

class PixelAttention_CGA(nn.Module):
    def __init__(self, dim):
        super(PixelAttention_CGA, self).__init__()
        self.pa2 = nn.Conv2d(2 * dim, dim, 7, padding=3, padding_mode='reflect', groups=dim, bias=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, pattn1):
        B, C, H, W = x.shape
        x = x.unsqueeze(dim=2) # B, C, 1, H, W
        pattn1 = pattn1.unsqueeze(dim=2) # B, C, 1, H, W
        x2 = torch.cat([x, pattn1], dim=2) # B, C, 2, H, W
        x2 = rearrange(x2, 'b c t h w -> b (c t) h w')
        pattn2 = self.pa2(x2)
        pattn2 = self.sigmoid(pattn2)
        return pattn2

class iRMBFusion(nn.Module):
    def __init__(self, dim):
        super(iRMBFusion, self).__init__()
        self.cfam = iRMB(dim)
        self.pa = PixelAttention_CGA(dim)
        self.conv = nn.Conv2d(dim, dim, 1, bias=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, data):
        x, y = data
        initial = x + y
        pattn1 = self.cfam(initial)
        pattn2 = self.sigmoid(self.pa(initial, pattn1))
        result = initial + pattn2 * x + (1 - pattn2) * y
        result = self.conv(result)
        return result
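To make the fusion rule in `iRMBFusion.forward` concrete, here is a minimal scalar sketch of the gating `result = initial + pattn2 * x + (1 - pattn2) * y`. The `fuse` helper and the numeric values are illustrative, not part of the module; in the real module `pattn2` is a per-pixel sigmoid output.

```python
# Illustrative sketch of the pixel-level gating used by iRMBFusion, reduced
# to scalars: pattn2 in (0, 1) weights modality x against modality y on top
# of their sum.

def fuse(x, y, pattn2):
    initial = x + y
    return initial + pattn2 * x + (1.0 - pattn2) * y

# pattn2 -> 1 leans the residual toward modality x; pattn2 -> 0 toward y.
hi = fuse(4.0, 2.0, 1.0)   # 6 + 4     -> 10.0
lo = fuse(4.0, 2.0, 0.0)   # 6 + 2     -> 8.0
mid = fuse(4.0, 2.0, 0.5)  # 6 + 2 + 1 -> 9.0
```

Because the two weights sum to one, the gate redistributes emphasis between the modalities without changing the overall magnitude of the added term.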

4. Integration Steps

4.1 Step 1

① Create a new AddModules folder under ultralytics/nn/ to hold the module code.

② Create iRMBFusion.py inside AddModules and paste in the code from Section 3.

4.2 Step 2

Create __init__.py inside AddModules (skip this if it already exists) and import the module there: from .iRMBFusion import *

4.3 Step 3

In the ultralytics/nn/modules/tasks.py file, the module class name must be added in two places.

First, import the module.

Then, register the iRMBFusion module in the parse_model function:

        elif m in {iRMBFusion}:
            c2 = ch[f[0]]
            args = [c2]
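As a rough sketch of what this parse_model branch does: `iRMBFusion` takes a single `dim` argument, so the branch reads the channel count of the first input layer and passes it through. The `ch` and `f` values below are simplified stand-ins for the real parse_model variables, not actual ultralytics code.

```python
# Illustrative stand-ins: in parse_model, ch holds output channels per layer
# index, and f is the 'from' field of a YAML entry such as
# [[8, 16], 1, iRMBFusion, []].
ch = {8: 128, 16: 128}
f = [8, 16]

c2 = ch[f[0]]   # the fusion layer keeps the channels of its first input
args = [c2]     # iRMBFusion(dim) is then built with dim = c2
```

This is why the YAML entries for iRMBFusion need no explicit arguments: the channel width is inferred from the first fused branch.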

Finally, in the get_flops function in ultralytics/utils/torch_utils.py, set the stride to 640.


5. YAML Model Files

5.1 Mid-level Fusion ⭐

📌 This model replaces the Concat fusion in the original mid-level fusion with iRMBFusion, fusing the multimodal information from the backbone.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet50 object detection model with P3-P5 outputs.

# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, IN, []]  # 0
  - [-1, 1, Multiin, [1]]  # 1
  - [-2, 1, Multiin, [2]]  # 2

  - [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2

  - [-1, 2, Blocks, [64,  BasicBlock, 2, False]] # 7
  - [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 8-P3
  - [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 9-P4
  - [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 10-P5

  - [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 11-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 12
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 13
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 14-P2

  - [-1, 2, Blocks, [64,  BasicBlock, 2, False]] # 15
  - [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
  - [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 17-P4
  - [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 18-P5

  - [[8, 16], 1, iRMBFusion, []]  # 19 cat backbone P3
  - [[9, 17], 1, iRMBFusion, []]  # 20 cat backbone P4
  - [[10, 18], 1, iRMBFusion, []]  # 21 cat backbone P5

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 22 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]]  # 24, Y5, lateral_convs.0

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 25
  - [20, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 26 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepC3, [256, 0.5]]  # 28, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]  # 29, Y4, lateral_convs.1

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 30
  - [19, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 31 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # 32 cat backbone P4
  - [-1, 3, RepC3, [256, 0.5]]  # X3 (33), fpn_blocks.1

  - [-1, 1, Conv, [256, 3, 2]]  # 34, downsample_convs.0
  - [[-1, 29], 1, Concat, [1]]  # 35 cat Y4
  - [-1, 3, RepC3, [256, 0.5]]  # F4 (36), pan_blocks.0

  - [-1, 1, Conv, [256, 3, 2]]  # 37, downsample_convs.1
  - [[-1, 24], 1, Concat, [1]]  # 38 cat Y5
  - [-1, 3, RepC3, [256, 0.5]]  # F5 (39), pan_blocks.1

  - [[33, 36, 39], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]]  # Detect(P3, P4, P5)

5.2 Mid-to-Late Fusion ⭐

📌 This model replaces the Concat fusion in the original mid-to-late fusion with iRMBFusion, fusing the multimodal information from the FPN.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet50 object detection model with P3-P5 outputs.

# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, IN, []]  # 0
  - [-1, 1, Multiin, [1]]  # 1
  - [-2, 1, Multiin, [2]]  # 2

  - [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2

  - [-1, 2, Blocks, [64,  BasicBlock, 2, False]] # 7
  - [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 8-P3
  - [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 9-P4
  - [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 10-P5

  - [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 11-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 12
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 13
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 14-P2

  - [-1, 2, Blocks, [64,  BasicBlock, 2, False]] # 15
  - [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
  - [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 17-P4
  - [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 18-P5

head:
  - [10, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 19 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]]  # 21, Y5, lateral_convs.0

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 22
  - [9, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 23 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepC3, [256, 0.5]]  # 25, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]  # 26, Y4, lateral_convs.1

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 27
  - [8, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 28 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # 29 cat backbone P4
  - [-1, 3, RepC3, [256, 0.5]]  # X3 (30), fpn_blocks.1

  - [18, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 31 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]]  # 33, Y5, lateral_convs.0

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 34
  - [17, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 35 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepC3, [256, 0.5]]  # 37, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]  # 38, Y4, lateral_convs.1

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 39
  - [16, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 40 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # 41 cat backbone P4
  - [-1, 3, RepC3, [256, 0.5]]  # X3 (42), fpn_blocks.1

  - [[21, 33], 1, iRMBFusion, []]  # 43 fuse Y5 (P5)
  - [[26, 38], 1, iRMBFusion, []]  # 44 fuse Y4 (P4)
  - [[30, 42], 1, iRMBFusion, []]  # 45 fuse X3 (P3)

  - [-1, 1, Conv, [256, 3, 2]]  # 46, downsample_convs.0
  - [[-1, 44], 1, Concat, [1]]  # 47 cat Y4
  - [-1, 3, RepC3, [256, 0.5]]  # F4 (48), pan_blocks.0

  - [-1, 1, Conv, [256, 3, 2]]  # 49, downsample_convs.1
  - [[-1, 43], 1, Concat, [1]]  # 50 cat Y5
  - [-1, 3, RepC3, [256, 0.5]]  # F5 (51), pan_blocks.1

  - [[45, 48, 51], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]]  # Detect(P3, P4, P5)

5.3 Late Fusion ⭐

📌 This model replaces the Concat fusion in the original late fusion with iRMBFusion, fusing the multimodal information from the neck.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet50 object detection model with P3-P5 outputs.

# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, IN, []]  # 0
  - [-1, 1, Multiin, [1]]  # 1
  - [-2, 1, Multiin, [2]]  # 2

  - [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2

  - [-1, 2, Blocks, [64,  BasicBlock, 2, False]] # 7
  - [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 8-P3
  - [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 9-P4
  - [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 10-P5

  - [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 11-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 12
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 13
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 14-P2

  - [-1, 2, Blocks, [64,  BasicBlock, 2, False]] # 15
  - [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
  - [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 17-P4
  - [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 18-P5

head:
  - [10, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 19 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]]  # 21, Y5, lateral_convs.0

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 22
  - [9, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 23 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepC3, [256, 0.5]]  # 25, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]  # 26, Y4, lateral_convs.1

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 27
  - [8, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 28 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # 29 cat backbone P4
  - [-1, 3, RepC3, [256, 0.5]]  # X3 (30), fpn_blocks.1

  - [-1, 1, Conv, [256, 3, 2]]  # 31, downsample_convs.0
  - [[-1, 26], 1, Concat, [1]]  # 32 cat Y4
  - [-1, 3, RepC3, [256, 0.5]]  # F4 (33), pan_blocks.0

  - [-1, 1, Conv, [256, 3, 2]]  # 34, downsample_convs.1
  - [[-1, 21], 1, Concat, [1]]  # 35 cat Y5
  - [-1, 3, RepC3, [256, 0.5]]  # F5 (36), pan_blocks.1

  - [18, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 37 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]]  # 39, Y5, lateral_convs.0

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 40
  - [17, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 41 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepC3, [256, 0.5]]  # 43, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]  # 44, Y4, lateral_convs.1

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 45
  - [16, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 46 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # 47 cat backbone P4
  - [-1, 3, RepC3, [256, 0.5]]  # X3 (48), fpn_blocks.1

  - [-1, 1, Conv, [256, 3, 2]]  # 49, downsample_convs.0
  - [[-1, 44], 1, Concat, [1]]  # 50 cat Y4
  - [-1, 3, RepC3, [256, 0.5]]  # F4 (51), pan_blocks.0

  - [-1, 1, Conv, [256, 3, 2]]  # 52, downsample_convs.1
  - [[-1, 39], 1, Concat, [1]]  # 53 cat Y5
  - [-1, 3, RepC3, [256, 0.5]]  # F5 (54), pan_blocks.1

  - [[30, 48], 1, iRMBFusion, []]  # 55 fuse X3 (P3)
  - [[33, 51], 1, iRMBFusion, []]  # 56 fuse F4 (P4)
  - [[36, 54], 1, iRMBFusion, []]  # 57 fuse F5 (P5)

  - [[55, 56, 57], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]]  # Detect(P3, P4, P5)


6. Verifying the Result

Printing the network shows that the new fusion layers have been added to the model, which is now ready for training.

rtdetr-resnet18-mid-iRMBFusion

rtdetr-resnet18-mid-iRMBFusion summary: 574 layers, 33,124,436 parameters, 33,124,436 gradients, 95.9 GFLOPs

                   from  n    params  module                                       arguments
  0                  -1  1         0  ultralytics.nn.AddModules.multimodal.IN      []
  1                  -1  1         0  ultralytics.nn.AddModules.multimodal.Multiin [1]
  2                  -2  1         0  ultralytics.nn.AddModules.multimodal.Multiin [2]
  3                   1  1       960  ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
  4                  -1  1      9312  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
  5                  -1  1     18624  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
  6                  -1  1         0  torch.nn.modules.pooling.MaxPool2d           [3, 2, 1]
  7                  -1  2    152512  ultralytics.nn.AddModules.ResNet.Blocks      [64, 64, 2, 'BasicBlock', 2, False]
  8                  -1  2    526208  ultralytics.nn.AddModules.ResNet.Blocks      [64, 128, 2, 'BasicBlock', 3, False]
  9                  -1  2   2100992  ultralytics.nn.AddModules.ResNet.Blocks      [128, 256, 2, 'BasicBlock', 4, False]
 10                  -1  2   8396288  ultralytics.nn.AddModules.ResNet.Blocks      [256, 512, 2, 'BasicBlock', 5, False]
 11                   2  1       960  ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
 12                  -1  1      9312  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
 13                  -1  1     18624  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
 14                  -1  1         0  torch.nn.modules.pooling.MaxPool2d           [3, 2, 1]
 15                  -1  2    152512  ultralytics.nn.AddModules.ResNet.Blocks      [64, 64, 2, 'BasicBlock', 2, False]
 16                  -1  2    526208  ultralytics.nn.AddModules.ResNet.Blocks      [64, 128, 2, 'BasicBlock', 3, False]
 17                  -1  2   2100992  ultralytics.nn.AddModules.ResNet.Blocks      [128, 256, 2, 'BasicBlock', 4, False]
 18                  -1  2   8396288  ultralytics.nn.AddModules.ResNet.Blocks      [256, 512, 2, 'BasicBlock', 5, False]
 19             [8, 16]  1     96384  ultralytics.nn.AddModules.iRMBFusion.iRMBFusion[128]
 20             [9, 17]  1    356608  ultralytics.nn.AddModules.iRMBFusion.iRMBFusion[256]
 21            [10, 18]  1   1368576  ultralytics.nn.AddModules.iRMBFusion.iRMBFusion[512]
 22                  -1  1    131584  ultralytics.nn.modules.conv.Conv             [512, 256, 1, 1, None, 1, 1, False]
 23                  -1  1    789760  ultralytics.nn.modules.transformer.AIFI      [256, 1024, 8]
 24                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 25                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 26                  20  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1, None, 1, 1, False]
 27            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 28                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 29                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 30                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 31                  19  1     33280  ultralytics.nn.modules.conv.Conv             [128, 256, 1, 1, None, 1, 1, False]
 32            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 33                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 34                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 35            [-1, 29]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 36                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 37                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 38            [-1, 24]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 39                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 40        [33, 36, 39]  1   3927956  ultralytics.nn.modules.head.RTDETRDecoder    [9, [256, 256, 256], 256, 300, 4, 8, 3]
rtdetr-resnet18-mid-iRMBFusion summary: 574 layers, 33,124,436 parameters, 33,124,436 gradients, 95.9 GFLOPs

rtdetr-resnet18-mid-to-late-iRMBFusion

rtdetr-resnet18-mid-to-late-iRMBFusion summary: 682 layers, 34,841,300 parameters, 34,841,300 gradients, 110.6 GFLOPs

                   from  n    params  module                                       arguments
  0                  -1  1         0  ultralytics.nn.AddModules.multimodal.IN      []
  1                  -1  1         0  ultralytics.nn.AddModules.multimodal.Multiin [1]
  2                  -2  1         0  ultralytics.nn.AddModules.multimodal.Multiin [2]
  3                   1  1       960  ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
  4                  -1  1      9312  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
  5                  -1  1     18624  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
  6                  -1  1         0  torch.nn.modules.pooling.MaxPool2d           [3, 2, 1]
  7                  -1  2    152512  ultralytics.nn.AddModules.ResNet.Blocks      [64, 64, 2, 'BasicBlock', 2, False]
  8                  -1  2    526208  ultralytics.nn.AddModules.ResNet.Blocks      [64, 128, 2, 'BasicBlock', 3, False]
  9                  -1  2   2100992  ultralytics.nn.AddModules.ResNet.Blocks      [128, 256, 2, 'BasicBlock', 4, False]
 10                  -1  2   8396288  ultralytics.nn.AddModules.ResNet.Blocks      [256, 512, 2, 'BasicBlock', 5, False]
 11                   2  1       960  ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
 12                  -1  1      9312  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
 13                  -1  1     18624  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
 14                  -1  1         0  torch.nn.modules.pooling.MaxPool2d           [3, 2, 1]
 15                  -1  2    152512  ultralytics.nn.AddModules.ResNet.Blocks      [64, 64, 2, 'BasicBlock', 2, False]
 16                  -1  2    526208  ultralytics.nn.AddModules.ResNet.Blocks      [64, 128, 2, 'BasicBlock', 3, False]
 17                  -1  2   2100992  ultralytics.nn.AddModules.ResNet.Blocks      [128, 256, 2, 'BasicBlock', 4, False]
 18                  -1  2   8396288  ultralytics.nn.AddModules.ResNet.Blocks      [256, 512, 2, 'BasicBlock', 5, False]
 19                  10  1    131584  ultralytics.nn.modules.conv.Conv             [512, 256, 1, 1, None, 1, 1, False]
 20                  -1  1    789760  ultralytics.nn.modules.transformer.AIFI      [256, 1024, 8]
 21                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 22                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 23                   9  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1, None, 1, 1, False]
 24            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 25                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 26                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 27                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 28                   8  1     33280  ultralytics.nn.modules.conv.Conv             [128, 256, 1, 1, None, 1, 1, False]
 29            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 30                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 31                  18  1    131584  ultralytics.nn.modules.conv.Conv             [512, 256, 1, 1, None, 1, 1, False]
 32                  -1  1    789760  ultralytics.nn.modules.transformer.AIFI      [256, 1024, 8]
 33                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 34                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 35                  17  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1, None, 1, 1, False]
 36            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 37                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 38                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 39                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 40                  16  1     33280  ultralytics.nn.modules.conv.Conv             [128, 256, 1, 1, None, 1, 1, False]
 41            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 42                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 43            [21, 33]  1    356608  ultralytics.nn.AddModules.iRMBFusion.iRMBFusion[256]
 44            [26, 38]  1    356608  ultralytics.nn.AddModules.iRMBFusion.iRMBFusion[256]
 45            [30, 42]  1    356608  ultralytics.nn.AddModules.iRMBFusion.iRMBFusion[256]
 46                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 47            [-1, 44]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 48                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 49                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 50            [-1, 43]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 51                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 52        [45, 48, 51]  1   3927956  ultralytics.nn.modules.head.RTDETRDecoder    [9, [256, 256, 256], 256, 300, 4, 8, 3]
rtdetr-resnet18-mid-to-late-iRMBFusion summary: 682 layers, 34,841,300 parameters, 34,841,300 gradients, 110.6 GFLOPs

rtdetr-resnet18-late-iRMBFusion

rtdetr-resnet18-late-iRMBFusion summary: 766 layers, 37,337,812 parameters, 37,337,812 gradients, 115.6 GFLOPs

                   from  n    params  module                                       arguments
  0                  -1  1         0  ultralytics.nn.AddModules.multimodal.IN      []
  1                  -1  1         0  ultralytics.nn.AddModules.multimodal.Multiin [1]
  2                  -2  1         0  ultralytics.nn.AddModules.multimodal.Multiin [2]
  3                   1  1       960  ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
  4                  -1  1      9312  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
  5                  -1  1     18624  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
  6                  -1  1         0  torch.nn.modules.pooling.MaxPool2d           [3, 2, 1]
  7                  -1  2    152512  ultralytics.nn.AddModules.ResNet.Blocks      [64, 64, 2, 'BasicBlock', 2, False]
  8                  -1  2    526208  ultralytics.nn.AddModules.ResNet.Blocks      [64, 128, 2, 'BasicBlock', 3, False]
  9                  -1  2   2100992  ultralytics.nn.AddModules.ResNet.Blocks      [128, 256, 2, 'BasicBlock', 4, False]
 10                  -1  2   8396288  ultralytics.nn.AddModules.ResNet.Blocks      [256, 512, 2, 'BasicBlock', 5, False]
 11                   2  1       960  ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
 12                  -1  1      9312  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
 13                  -1  1     18624  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
 14                  -1  1         0  torch.nn.modules.pooling.MaxPool2d           [3, 2, 1]
 15                  -1  2    152512  ultralytics.nn.AddModules.ResNet.Blocks      [64, 64, 2, 'BasicBlock', 2, False]
 16                  -1  2    526208  ultralytics.nn.AddModules.ResNet.Blocks      [64, 128, 2, 'BasicBlock', 3, False]
 17                  -1  2   2100992  ultralytics.nn.AddModules.ResNet.Blocks      [128, 256, 2, 'BasicBlock', 4, False]
 18                  -1  2   8396288  ultralytics.nn.AddModules.ResNet.Blocks      [256, 512, 2, 'BasicBlock', 5, False]
 19                  10  1    131584  ultralytics.nn.modules.conv.Conv             [512, 256, 1, 1, None, 1, 1, False]
 20                  -1  1    789760  ultralytics.nn.modules.transformer.AIFI      [256, 1024, 8]
 21                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 22                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 23                   9  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1, None, 1, 1, False]
 24            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 25                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 26                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 27                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 28                   8  1     33280  ultralytics.nn.modules.conv.Conv             [128, 256, 1, 1, None, 1, 1, False]
 29            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 30                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 31                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 32            [-1, 26]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 33                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 34                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 35            [-1, 21]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 36                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 37                  18  1    131584  ultralytics.nn.modules.conv.Conv             [512, 256, 1, 1, None, 1, 1, False]
 38                  -1  1    789760  ultralytics.nn.modules.transformer.AIFI      [256, 1024, 8]
 39                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 40                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 41                  17  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1, None, 1, 1, False]
 42            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 43                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 44                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 45                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 46                  16  1     33280  ultralytics.nn.modules.conv.Conv             [128, 256, 1, 1, None, 1, 1, False]
 47            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 48                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 49                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 50            [-1, 44]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 51                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 52                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 53            [-1, 39]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 54                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 55            [30, 48]  1    356608  ultralytics.nn.AddModules.iRMBFusion.iRMBFusion[256]
 56            [33, 51]  1    356608  ultralytics.nn.AddModules.iRMBFusion.iRMBFusion[256]
 57            [36, 54]  1    356608  ultralytics.nn.AddModules.iRMBFusion.iRMBFusion[256]
 58        [55, 56, 57]  1   3927956  ultralytics.nn.modules.head.RTDETRDecoder    [9, [256, 256, 256], 256, 300, 4, 8, 3]
rtdetr-resnet18-late-iRMBFusion summary: 766 layers, 37,337,812 parameters, 37,337,812 gradients, 115.6 GFLOPs
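As a quick sanity check on the printout, summing the `params` column of the late-fusion table reproduces the 37,337,812 total reported in the summary line (the values below are transcribed from the table above):

```python
# Per-layer parameter counts for layers 0-58 of rtdetr-resnet18-late-iRMBFusion,
# transcribed from the printed model table.
layer_params = [
    0, 0, 0, 960, 9312, 18624, 0, 152512, 526208, 2100992, 8396288,   # inputs + RGB backbone
    960, 9312, 18624, 0, 152512, 526208, 2100992, 8396288,            # IR backbone
    131584, 789760, 66048, 0, 66048, 0, 657920, 66048, 0, 33280, 0,   # RGB neck (19-29)
    657920, 590336, 0, 657920, 590336, 0, 657920,                     # RGB neck (30-36)
    131584, 789760, 66048, 0, 66048, 0, 657920, 66048, 0, 33280, 0,   # IR neck (37-47)
    657920, 590336, 0, 657920, 590336, 0, 657920,                     # IR neck (48-54)
    356608, 356608, 356608,                                           # 3x iRMBFusion
    3927956,                                                          # RTDETRDecoder
]
total = sum(layer_params)
print(f"{total:,}")  # → 37,337,812, matching the summary line
```

The per-scale layout is also visible here: the three `iRMBFusion` blocks (layers 55–57) fuse the two necks' P3/P4/P5 outputs, and only the fused pyramid feeds the decoder.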