[RT-DETR Multimodal Fusion Improvement] | CGA Fusion: a content-guided attention fusion module for adaptive, spatial-weight-guided fusion of multimodal features
1. Introduction
This post documents using the CGA Fusion module to improve the multimodal fusion part of RT-DETR.
CGAFusion (Content-Guided Attention Fusion) generates spatial weights through content-guided attention and uses them to guide the adaptive fusion of high- and low-level features. Here, the CGA Fusion module uses those content-guided spatial weights to adaptively fuse the features of two modalities, achieving cross-modal semantic alignment and noise suppression at the feature-fusion stage. This strengthens the use of complementary features from different modalities and improves detection robustness and accuracy in multimodal scenarios.
2. The CGA Fusion Module
Source paper: DEA-Net: Single image dehazing based on detail-enhanced convolution and content-guided attention
2.1 The CGA Module (Content-Guided Attention)
2.1.1 Design Motivation
Conventional attention mechanisms (e.g., FAM, CBAM) have two main shortcomings:
- They cannot handle feature-level haze non-uniformity: existing methods only model the image-level haze distribution (e.g., spatial attention) and ignore haze differences across feature channels. Different channels encode different semantics (edges, textures, etc.), so each needs its own spatial importance map (SIM).
- Channel and spatial attention lack interaction: conventional modules (e.g., FAM) compute channel and spatial attention sequentially without fusing the two, so feature recalibration is incomplete.
Goal: generate channel-specific spatial importance maps (channel-specific SIMs) while fusing channel and spatial attention, to strengthen the feature representation.
2.1.2 Principle and Structure
CGA uses a coarse-to-fine two-stage attention-generation scheme, structured as shown in the figure:
- Stage 1: generate the coarse SIM ($W_{coa}$)
  - Channel attention ($W_c$): global average pooling (GAP) compresses the spatial dimensions, then two 1×1 convolutions produce channel weights that emphasize the key channels.
  - Spatial attention ($W_s$): GAP and global max pooling (GMP) along the channel dimension, followed by a 7×7 convolution, produce a spatial importance map that captures the image-level haze distribution.
  - Coarse-SIM fusion: $W_c$ and $W_s$ are combined by element-wise (broadcast) addition into a preliminary, channel-shared spatial attention map $W_{coa}$.
- Stage 2: refine into the channel-specific SIM ($W$)
  - Channel shuffle: the input feature $X$ and $W_{coa}$ are interleaved channel by channel to promote cross-channel information exchange.
  - Group convolution: a 7×7 group convolution (number of groups equal to the number of channels) uses the input feature content to guide the refinement of each channel's SIM, producing the final channel-specific $W$.
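The coarse stage can be sketched in a few lines of PyTorch. This is a minimal illustration, not the module's actual implementation; `dim=16` and `reduction=8` are example values:

```python
import torch
import torch.nn as nn

# Coarse stage of CGA (illustrative sketch): channel weights W_c from GAP plus
# two 1x1 convolutions, spatial weights W_s from channel-wise mean/max plus a
# 7x7 convolution, combined by broadcast addition into the shared coarse map.
dim, reduction = 16, 8
ca = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),
    nn.Conv2d(dim, dim // reduction, 1),
    nn.ReLU(inplace=True),
    nn.Conv2d(dim // reduction, dim, 1),
)
sa = nn.Conv2d(2, 1, 7, padding=3)

x = torch.randn(1, dim, 32, 32)
w_c = ca(x)                                                  # (1, dim, 1, 1)
w_s = sa(torch.cat([x.mean(1, keepdim=True),
                    x.max(1, keepdim=True).values], dim=1))  # (1, 1, 32, 32)
w_coa = w_c + w_s                        # broadcasts to (1, dim, 32, 32)
assert w_coa.shape == (1, dim, 32, 32)
```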
Formula summary:

$$
\begin{aligned}
W_{coa} &= W_c + W_s, \\
W &= \sigma\left(\mathcal{G}C_{7\times 7}\left(CS\left([X, W_{coa}]\right)\right)\right),
\end{aligned}
$$

where $\sigma$ is the Sigmoid activation, $CS(\cdot)$ denotes channel shuffle, and $\mathcal{G}C$ denotes group convolution.
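The channel shuffle $CS(\cdot)$ interleaves the channels of $X$ and $W_{coa}$ as $[x_0, w_0, x_1, w_1, \dots]$. A minimal torch-only sketch, equivalent to the einops rearrange `'b c t h w -> b (c t) h w'` on the stacked tensor:

```python
import torch

# Interleave two feature maps channel-by-channel: [x0, w0, x1, w1, ...].
def channel_shuffle_pair(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    b, c, h, wd = x.shape
    stacked = torch.stack([x, w], dim=2)     # (B, C, 2, H, W)
    return stacked.reshape(b, 2 * c, h, wd)  # channels interleaved

x = torch.arange(4.0).view(1, 2, 1, 2)       # channels x0=[0,1], x1=[2,3]
w = -torch.arange(4.0).view(1, 2, 1, 2)
out = channel_shuffle_pair(x, w)
assert torch.equal(out[0, 0], x[0, 0])  # x0
assert torch.equal(out[0, 1], w[0, 0])  # w0
assert torch.equal(out[0, 2], x[0, 1])  # x1
assert torch.equal(out[0, 3], w[0, 1])  # w1
```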
2.1.3 Advantages
- Channel specificity: every channel gets its own SIM, precisely capturing per-channel differences in haze distribution (e.g., texture-rich vs. smooth regions) and avoiding the semantic confusion of a single shared SIM.
- Information interaction: channel and spatial attention are fused, and the input content $X$ dynamically guides the attention weights, making the feature representation more flexible.
2.2 The CGA Fusion Module (CGA-Based Mixup Fusion Scheme)
2.2.1 Design Motivation
Conventional feature fusion (e.g., addition, concatenation) suffers from a receptive-field mismatch:
- Shallow features (low level, e.g., edges) and deep features (high level, e.g., semantics) have very different receptive fields, so simple fusion cannot align their information effectively.
- Existing mixup-style fusion relies on fixed or self-learned scalar weights and ignores differences across spatial positions.
Goal: use the spatial weights produced by CGA to adaptively fuse high- and low-level features, resolving the receptive-field mismatch and improving gradient flow.
2.2.2 Principle and Structure
The module is structured as shown in the figure:
- Inputs: the low-level feature from the encoder ($F_{low}$) and the high-level feature from the decoder ($F_{high}$).
- Attention-weight generation: $F_{low}$ and $F_{high}$ are fed into CGA to produce the spatial weight map $W$ (values in 0–1), where each pixel value gives the fusion ratio of the low-level feature.
- Weighted fusion:

$$
F_{\text{fuse}} = \mathcal{C}_{1\times 1}\left(F_{low} \cdot W + F_{high} \cdot (1-W) + F_{low} + F_{high}\right),
$$

where $\mathcal{C}_{1\times 1}$ is a 1×1 convolution for feature projection. $W$ dynamically adjusts the contributions of the high- and low-level features, and the skip connection ($F_{low} + F_{high}$) mitigates vanishing gradients.
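The fusion step is straightforward to express in PyTorch. A minimal sketch, assuming `w` is a precomputed CGA weight map with the same shape as the features:

```python
import torch
import torch.nn as nn

# Weighted mixup of low/high-level features plus a skip connection,
# followed by a 1x1 projection, mirroring the F_fuse formula.
def cga_mixup(f_low, f_high, w, proj):
    return proj(f_low * w + f_high * (1 - w) + f_low + f_high)

dim = 8
proj = nn.Conv2d(dim, dim, kernel_size=1)       # the C_{1x1} projection
f_low = torch.randn(1, dim, 16, 16)
f_high = torch.randn(1, dim, 16, 16)
w = torch.sigmoid(torch.randn(1, dim, 16, 16))  # stand-in for the CGA output
fused = cga_mixup(f_low, f_high, w, proj)
assert fused.shape == (1, dim, 16, 16)
```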
2.2.3 Advantages
- Adaptive fusion: the spatial weight $W$ adjusts dynamically to the feature content, e.g., emphasizing low-level features in edge regions and high-level features in semantic regions, which resolves the receptive-field mismatch.
- Gradient optimization: the skip connection and weighted-sum mechanism improve shallow-to-deep information flow and training stability; in the paper, PSNR improves by 0.23 dB over conventional mixup.
Paper: https://export.arxiv.org/pdf/2301.04805
Code: https://github.com/cecret3350/DEA-Net
3. CGA Fusion Implementation Code
The implementation of CGA Fusion is as follows:
```python
import torch
import torch.nn as nn
from einops import rearrange


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply convolution and activation without batch normalization (used after BN fusion)."""
        return self.act(self.conv(x))


class SpatialAttention_CGA(nn.Module):
    """Spatial attention W_s: channel-wise mean/max pooling followed by a 7x7 conv."""

    def __init__(self):
        super(SpatialAttention_CGA, self).__init__()
        self.sa = nn.Conv2d(2, 1, 7, padding=3, padding_mode='reflect', bias=True)

    def forward(self, x):
        x_avg = torch.mean(x, dim=1, keepdim=True)
        x_max, _ = torch.max(x, dim=1, keepdim=True)
        x2 = torch.concat([x_avg, x_max], dim=1)
        sattn = self.sa(x2)
        return sattn


class ChannelAttention_CGA(nn.Module):
    """Channel attention W_c: GAP followed by a two-layer 1x1 conv bottleneck."""

    def __init__(self, dim, reduction=8):
        super(ChannelAttention_CGA, self).__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.ca = nn.Sequential(
            nn.Conv2d(dim, dim // reduction, 1, padding=0, bias=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, dim, 1, padding=0, bias=True),
        )

    def forward(self, x):
        x_gap = self.gap(x)
        cattn = self.ca(x_gap)
        return cattn


class PixelAttention_CGA(nn.Module):
    """Refine the coarse SIM into a channel-specific SIM via channel shuffle + group conv."""

    def __init__(self, dim):
        super(PixelAttention_CGA, self).__init__()
        self.pa2 = nn.Conv2d(2 * dim, dim, 7, padding=3, padding_mode='reflect', groups=dim, bias=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, pattn1):
        B, C, H, W = x.shape
        x = x.unsqueeze(dim=2)                            # B, C, 1, H, W
        pattn1 = pattn1.unsqueeze(dim=2)                  # B, C, 1, H, W
        x2 = torch.cat([x, pattn1], dim=2)                # B, C, 2, H, W
        x2 = rearrange(x2, 'b c t h w -> b (c t) h w')    # channel shuffle
        pattn2 = self.pa2(x2)
        pattn2 = self.sigmoid(pattn2)
        return pattn2


class CGAFusion(nn.Module):
    """Fuse two same-shaped feature maps with content-guided spatial weights."""

    def __init__(self, dim, reduction=8):
        super(CGAFusion, self).__init__()
        self.sa = SpatialAttention_CGA()
        self.ca = ChannelAttention_CGA(dim, reduction)
        self.pa = PixelAttention_CGA(dim)
        self.conv = nn.Conv2d(dim, dim, 1, bias=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, data):
        x, y = data
        initial = x + y
        cattn = self.ca(initial)
        sattn = self.sa(initial)
        pattn1 = sattn + cattn  # broadcasts to B, C, H, W
        pattn2 = self.sigmoid(self.pa(initial, pattn1))
        result = initial + pattn2 * x + (1 - pattn2) * y
        result = self.conv(result)
        return result
```
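As a quick sanity check of the two-modality interface (the module takes a tuple of two same-shaped feature maps), here is a standalone re-sketch of the forward pass that avoids the einops dependency (`torch.stack` + `flatten` replaces `rearrange`). It is an illustration for shape checking, not the module used above:

```python
import torch
import torch.nn as nn

# Standalone re-sketch of CGAFusion's forward pass without einops,
# for shape checking only (sigmoid applied once, inside the pixel attention).
class TinyCGAFusion(nn.Module):
    def __init__(self, dim, reduction=8):
        super().__init__()
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, dim, 1),
        )
        self.sa = nn.Conv2d(2, 1, 7, padding=3)
        self.pa = nn.Conv2d(2 * dim, dim, 7, padding=3, groups=dim)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, data):
        x, y = data
        s = x + y
        coarse = self.ca(s) + self.sa(
            torch.cat([s.mean(1, keepdim=True), s.max(1, keepdim=True).values], dim=1)
        )  # broadcasts to the full feature shape
        mixed = torch.stack([s, coarse.expand_as(s)], dim=2).flatten(1, 2)  # channel shuffle
        w = torch.sigmoid(self.pa(mixed))
        return self.proj(s + w * x + (1 - w) * y)

fuse = TinyCGAFusion(16)
rgb, ir = torch.randn(1, 16, 32, 32), torch.randn(1, 16, 32, 32)
assert fuse((rgb, ir)).shape == (1, 16, 32, 32)
```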
4. Integration Steps
4.1 Step 1
① Create an `AddModules` folder under `ultralytics/nn/` to hold the module code.
② Create `CGAFusion.py` inside the `AddModules` folder and paste in the code from Section 3.
4.2 Step 2
Create `__init__.py` in the `AddModules` folder (skip if it already exists) and import the module inside it:

```python
from .CGAFusion import *
```
4.3 Step 3
In `ultralytics/nn/modules/tasks.py`, the module class name needs to be added in two places.
First: import the module.
Then: register the CGAFusion module in the `parse_model` function:

```python
elif m in {CGAFusion}:
    c2 = ch[f[0]]
    args = [c2]
```
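This registration makes `parse_model` copy the channel count of the first input branch, since CGAFusion preserves channels. A toy illustration of that bookkeeping (in the real code `ch` is the channel list `parse_model` builds; the table below is hypothetical):

```python
# Hypothetical channel table as parse_model would have built it for the
# mid-fusion yaml, where layers 8 and 16 are the two P3 branches.
ch = {8: 128, 16: 128}
f = [8, 16]        # the 'from' field of the CGAFusion layer
c2 = ch[f[0]]      # output channels = channels of the first input branch
args = [c2]        # -> CGAFusion(128)
assert c2 == 128 and args == [128]
```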
Finally, in `ultralytics/utils/torch_utils.py`, set the stride used by the `get_flops` function to 640.
5. YAML Model Files
5.1 Mid Fusion ⭐
📌 This model replaces the Concat fusion in the original mid-fusion configuration with CGAFusion, fusing the multimodal information of the backbone.
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet18 object detection model with P3-P5 outputs.
# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
# [depth, width, max_channels]
l: [1.00, 1.00, 1024]
backbone:
# [from, repeats, module, args]
- [-1, 1, IN, []] # 0
- [-1, 1, Multiin, [1]] # 1
- [-2, 1, Multiin, [2]] # 2
- [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 7
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 8-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 9-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 10-P5
- [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 11-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 12
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 13
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 14-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 15
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 17-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 18-P5
- [[8, 16], 1, CGAFusion, []] # 19 cat backbone P3
- [[9, 17], 1, CGAFusion, []] # 20 cat backbone P4
- [[10, 18], 1, CGAFusion, []] # 21 cat backbone P5
head:
- [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 22 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 24, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 25
- [20, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 26 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 28, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 29, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 30
- [19, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 31 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 32 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (33), fpn_blocks.1
- [-1, 1, Conv, [256, 3, 2]] # 34, downsample_convs.0
- [[-1, 29], 1, Concat, [1]] # 35 cat Y4
- [-1, 3, RepC3, [256, 0.5]] # F4 (36), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 37, downsample_convs.1
- [[-1, 24], 1, Concat, [1]] # 38 cat Y5
- [-1, 3, RepC3, [256, 0.5]] # F5 (39), pan_blocks.1
- [[33, 36, 39], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]] # Detect(P3, P4, P5)
5.2 Mid-to-Late Fusion ⭐
📌 This model replaces the Concat fusion in the original mid-to-late fusion configuration with CGAFusion, fusing the multimodal information of the FPN.
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet18 object detection model with P3-P5 outputs.
# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
# [depth, width, max_channels]
l: [1.00, 1.00, 1024]
backbone:
# [from, repeats, module, args]
- [-1, 1, IN, []] # 0
- [-1, 1, Multiin, [1]] # 1
- [-2, 1, Multiin, [2]] # 2
- [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 7
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 8-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 9-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 10-P5
- [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 11-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 12
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 13
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 14-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 15
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 17-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 18-P5
head:
- [10, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 19 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 21, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 22
- [9, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 23 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 25, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 26, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 27
- [8, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 28 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 29 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (30), fpn_blocks.1
- [18, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 31 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 33, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 34
- [17, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 35 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 37, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 38, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 39
- [16, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 40 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 41 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (42), fpn_blocks.1
- [[21, 33], 1, CGAFusion, []] # 43 cat backbone P3
- [[26, 38], 1, CGAFusion, []] # 44 cat backbone P4
- [[30, 42], 1, CGAFusion, []] # 45 cat backbone P5
- [-1, 1, Conv, [256, 3, 2]] # 46, downsample_convs.0
- [[-1, 44], 1, Concat, [1]] # 47 cat Y4
- [-1, 3, RepC3, [256, 0.5]] # F4 (48), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 49, downsample_convs.1
- [[-1, 43], 1, Concat, [1]] # 50 cat Y5
- [-1, 3, RepC3, [256, 0.5]] # F5 (51), pan_blocks.1
- [[45, 48, 51], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]] # Detect(P3, P4, P5)
5.3 Late Fusion ⭐
📌 This model replaces the Concat fusion in the original late-fusion configuration with CGAFusion, fusing the multimodal information of the neck.
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet18 object detection model with P3-P5 outputs.
# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
# [depth, width, max_channels]
l: [1.00, 1.00, 1024]
backbone:
# [from, repeats, module, args]
- [-1, 1, IN, []] # 0
- [-1, 1, Multiin, [1]] # 1
- [-2, 1, Multiin, [2]] # 2
- [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 7
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 8-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 9-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 10-P5
- [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 11-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 12
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 13
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 14-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 15
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 17-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 18-P5
head:
- [10, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 19 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 21, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 22
- [9, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 23 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 25, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 26, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 27
- [8, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 28 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 29 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (30), fpn_blocks.1
- [-1, 1, Conv, [256, 3, 2]] # 31, downsample_convs.0
- [[-1, 26], 1, Concat, [1]] # 32 cat Y4
- [-1, 3, RepC3, [256, 0.5]] # F4 (33), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 34, downsample_convs.1
- [[-1, 21], 1, Concat, [1]] # 35 cat Y5
- [-1, 3, RepC3, [256, 0.5]] # F5 (36), pan_blocks.1
- [18, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 37 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 39, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 40
- [17, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 41 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 43, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 44, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 45
- [16, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 46 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 47 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (48), fpn_blocks.1
- [-1, 1, Conv, [256, 3, 2]] # 49, downsample_convs.0
- [[-1, 44], 1, Concat, [1]] # 50 cat Y4
- [-1, 3, RepC3, [256, 0.5]] # F4 (51), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 52, downsample_convs.1
- [[-1, 39], 1, Concat, [1]] # 53 cat Y5
- [-1, 3, RepC3, [256, 0.5]] # F5 (54), pan_blocks.1
- [[30, 48], 1, CGAFusion, []] # 55 cat backbone P3
- [[33, 51], 1, CGAFusion, []] # 56 cat backbone P4
- [[36, 54], 1, CGAFusion, []] # 57 cat backbone P5
- [[55, 56, 57], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]] # Detect(P3, P4, P5)
6. Successful Run Results
Printing the network shows that the fusion layers have been added to the model, and training can proceed.
rtdetr-resnet18-mid-CGAFusion:
rtdetr-resnet18-mid-CGAFusion summary: 520 layers, 31,823,853 parameters, 31,823,853 gradients, 93.2 GFLOPs
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
4 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
5 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
6 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
7 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
8 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
9 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
10 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
11 2 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
12 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
13 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
14 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
15 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
16 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
17 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
18 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
19 [8, 16] 1 33523 ultralytics.nn.AddModules.CGAFusion.CGAFusion[128]
20 [9, 17] 1 107907 ultralytics.nn.AddModules.CGAFusion.CGAFusion[256]
21 [10, 18] 1 379555 ultralytics.nn.AddModules.CGAFusion.CGAFusion[512]
22 -1 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
23 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
24 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
25 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
26 20 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
27 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
28 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
29 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
30 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
31 19 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
32 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
33 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
34 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
35 [-1, 29] 1 0 ultralytics.nn.modules.conv.Concat [1]
36 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
37 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
38 [-1, 24] 1 0 ultralytics.nn.modules.conv.Concat [1]
39 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
40 [33, 36, 39] 1 3927956 ultralytics.nn.modules.head.RTDETRDecoder [9, [256, 256, 256], 256, 300, 4, 8, 3]
rtdetr-resnet18-mid-to-late-CGAFusion:
rtdetr-resnet18-mid-to-late-CGAFusion summary: 628 layers, 34,095,197 parameters, 34,095,197 gradients, 105.8 GFLOPs
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
4 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
5 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
6 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
7 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
8 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
9 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
10 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
11 2 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
12 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
13 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
14 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
15 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
16 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
17 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
18 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
19 10 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
20 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
21 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
22 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
23 9 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
24 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
25 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
26 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
27 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
28 8 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
29 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
30 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
31 18 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
32 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
33 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
34 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
35 17 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
36 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
37 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
38 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
39 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
40 16 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
41 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
42 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
43 [21, 33] 1 107907 ultralytics.nn.AddModules.CGAFusion.CGAFusion[256]
44 [26, 38] 1 107907 ultralytics.nn.AddModules.CGAFusion.CGAFusion[256]
45 [30, 42] 1 107907 ultralytics.nn.AddModules.CGAFusion.CGAFusion[256]
46 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
47 [-1, 44] 1 0 ultralytics.nn.modules.conv.Concat [1]
48 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
49 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
50 [-1, 43] 1 0 ultralytics.nn.modules.conv.Concat [1]
51 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
52 [45, 48, 51] 1 3927956 ultralytics.nn.modules.head.RTDETRDecoder [9, [256, 256, 256], 256, 300, 4, 8, 3]
rtdetr-resnet18-late-CGAFusion:
rtdetr-resnet18-late-CGAFusion summary: 712 layers, 36,591,709 parameters, 36,591,709 gradients, 110.8 GFLOPs
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
4 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
5 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
6 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
7 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
8 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
9 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
10 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
11 2 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
12 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
13 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
14 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
15 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
16 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
17 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
18 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
19 10 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
20 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
21 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
22 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
23 9 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
24 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
25 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
26 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
27 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
28 8 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
29 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
30 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
31 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
32 [-1, 26] 1 0 ultralytics.nn.modules.conv.Concat [1]
33 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
34 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
35 [-1, 21] 1 0 ultralytics.nn.modules.conv.Concat [1]
36 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
37 18 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
38 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
39 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
40 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
41 17 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
42 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
43 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
44 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
45 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
46 16 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
47 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
48 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
49 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
50 [-1, 44] 1 0 ultralytics.nn.modules.conv.Concat [1]
51 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
52 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
53 [-1, 39] 1 0 ultralytics.nn.modules.conv.Concat [1]
54 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
55 [30, 48] 1 107907 ultralytics.nn.AddModules.CGAFusion.CGAFusion[256]
56 [33, 51] 1 107907 ultralytics.nn.AddModules.CGAFusion.CGAFusion[256]
57 [36, 54] 1 107907 ultralytics.nn.AddModules.CGAFusion.CGAFusion[256]
58 [55, 56, 57] 1 3927956 ultralytics.nn.modules.head.RTDETRDecoder [9, [256, 256, 256], 256, 300, 4, 8, 3]