【RT-DETR多模态融合改进】| SDFM 表层细节融合模块,利用通道-空间注意力机制,实现跨模态特征融合,抑制噪声干扰
一、本文介绍
本文记录的是利用 SDFM 模块改进 RT-DETR 的多模态融合部分 。
SDFM模块(Surface Detail Fusion Module,表层细节融合模块) 通过在特征提取网络的浅层引入 通道-空间注意力机制,动态生成跨模态特征融合权重 。该模块可自适应 保留不同模态中的独特信息,抑制背景噪声与光照干扰 , 实现低层细节的精准对齐与互补增强 ,为后续检测提供 高保真度 的底层特征表示,从而提升模型在复杂场景下的目标检测鲁棒性与定位准确性。
二、SDFM模块介绍
Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity
2.1 出发点
- 解决浅层特征融合的局限性 :传统像素级融合(如直接叠加)易导致细节冗余或丢失,而SDFM通过 通道-空间注意力机制 动态调整特征权重,实现“结构保留”与“噪声抑制”的平衡。
- 为高层语义融合奠定基础 :确保浅层特征中的目标位置(如红外热目标)与可见光场景结构(如建筑物轮廓)准确对齐,避免高层语义注入时出现位置偏差。
2.2 结构原理:基于通道-空间注意力的特征调制
2.2.1 核心组件与流程
SDFM模块的架构如图所示,主要包含以下步骤:
-
特征增强 :
- 将红外与可见光的浅层特征在通道维度拼接( C ( F i r i , F v i i ) C(\mathcal{F}_{ir}^{i}, \mathcal{F}_{vi}^{i}) C ( F i r i , F v i i ) ),通过 全局平均池化(GAP) 和 逐点卷积(Pw-Conv) 生成通道注意力权重( δ ( P w − C o n v n ( G A P ( ⋅ ) ) ) \delta(Pw-Conv^{n}(GAP(\cdot))) δ ( Pw − C o n v n ( G A P ( ⋅ ))) )。
-
权重通过元素乘法作用于原始特征,并与另一分支的特征相加,实现跨模态特征增强:
F ^ i r i = F i r i ⊕ ( F v i i ⊗ δ ( ⋅ ) ) , F ^ v i i = F v i i ⊕ ( F i r i ⊗ δ ( ⋅ ) ) \hat{\mathcal{F}}_{ir}^{i} = \mathcal{F}_{ir}^{i} \oplus \left(\mathcal{F}_{vi}^{i} \otimes \delta(\cdot)\right), \quad \hat{\mathcal{F}}_{vi}^{i} = \mathcal{F}_{vi}^{i} \oplus \left(\mathcal{F}_{ir}^{i} \otimes \delta(\cdot)\right) F ^ i r i = F i r i ⊕ ( F v i i ⊗ δ ( ⋅ ) ) , F ^ v i i = F v i i ⊕ ( F i r i ⊗ δ ( ⋅ ) )
(其中 ⊕ \oplus ⊕ 为元素相加, ⊗ \otimes ⊗ 为元素相乘, δ \delta δ 为Sigmoid激活函数)
-
融合权重生成 :
- 将增强后的特征再次拼接,分别输入 通道注意力模块 和 空间注意力模块 ,生成通道权重( A C i \mathcal{A}_{C}^{i} A C i )和空间权重( A S i \mathcal{A}_{S}^{i} A S i )。
-
融合权重为两者的元素乘积,经Sigmoid激活后得到最终权重(
W
i
\mathcal{W}^{i}
W
i
):
W i = δ ( A C i ⊗ A S i ) \mathcal{W}^{i} = \delta\left(\mathcal{A}_{C}^{i} \otimes \mathcal{A}_{S}^{i}\right) W i = δ ( A C i ⊗ A S i )
-
特征融合 :
-
根据融合权重动态分配红外与可见光特征的贡献:
F f u i = ( W i ⊗ F ^ i r i ) ⊕ ( ( 1 − W i ) ⊗ F ^ v i i ) \mathcal{F}_{fu}^{i} = \left(\mathcal{W}^{i} \otimes \hat{\mathcal{F}}_{ir}^{i}\right) \oplus \left(\left(1-\mathcal{W}^{i}\right) \otimes \hat{\mathcal{F}}_{vi}^{i}\right) F f u i = ( W i ⊗ F ^ i r i ) ⊕ ( ( 1 − W i ) ⊗ F ^ v i i )
(在结构清晰区域(如可见光边缘)保留可见光特征,在热目标区域(如红外热力图)增强红外特征)。
-
根据融合权重动态分配红外与可见光特征的贡献:
2.3 关键技术特点
- 轻量化设计 :仅通过逐点卷积(1×1卷积)和池化操作实现注意力机制,计算量小,适用于浅层特征的实时处理。
- 跨模态交互 :通过通道-空间双重注意力,迫使模型关注红外与可见光特征的互补区域(如红外中的目标位置+可见光中的纹理细节),抑制无关噪声(如红外背景噪声或可见光低光照噪点)。
论文: https://www.sciencedirect.com/science/article/abs/pii/S1566253523001860
源码: https://github.com/Linfeng-Tang/PSFusion
三、SDFM的实现代码
SDFM
的实现代码如下:
import math
import torch.nn as nn
import torch
def autopad(k, p=None, d=1): # kernel, padding, dilation
"""Pad to 'same' shape outputs."""
if d > 1:
k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k] # actual kernel-size
if p is None:
p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
return p
class Conv(nn.Module):
"""Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""
default_act = nn.SiLU() # default activation
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
"""Initialize Conv layer with given arguments including activation."""
super().__init__()
self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()
def bn(self, x):
"""Apply layer normalization to the input tensor."""
return nn.LayerNorm(x.shape[1:]).to(x.device)(x)
def forward(self, x):
"""Apply convolution, batch normalization and activation to input tensor."""
return self.act(self.bn(self.conv(x)))
def forward_fuse(self, x):
"""Perform transposed convolution of 2D data."""
return self.act(self.conv(x))
class SDFM(nn.Module):
'''
superficial detail fusion module
'''
def __init__(self, channels=64, r=4):
super(SDFM, self).__init__()
inter_channels = int(channels // r)
self.Recalibrate = nn.Sequential(
nn.AdaptiveAvgPool2d(1),
Conv(2 * channels, 2 * inter_channels),
Conv(2 * inter_channels, 2 * channels, act=nn.Sigmoid()),
)
self.channel_agg = Conv(2 * channels, channels)
self.local_att = nn.Sequential(
Conv(channels, inter_channels, 1),
Conv(inter_channels, channels, 1, act=False),
)
self.global_att = nn.Sequential(
nn.AdaptiveAvgPool2d(1),
Conv(channels, inter_channels, 1),
Conv(inter_channels, channels, 1),
)
self.sigmoid = nn.Sigmoid()
# 确保模型在初始化时就被移到 CUDA 上(如果可用)
if torch.cuda.is_available():
self.to('cuda')
def forward(self, data):
x1, x2 = data
# 将输入数据移动到与模型相同的设备上
device = next(self.parameters()).device
x1 = x1.to(device)
x2 = x2.to(device)
_, c, _, _ = x1.shape
input = torch.cat([x1, x2], dim=1)
recal_w = self.Recalibrate(input)
recal_input = recal_w * input ## 先对特征进行一步自校正
recal_input = recal_input + input
x1, x2 = torch.split(recal_input, c, dim=1)
agg_input = self.channel_agg(recal_input) ## 进行特征压缩 因为只计算一个特征的权重
local_w = self.local_att(agg_input) ## 局部注意力 即spatial attention
global_w = self.global_att(agg_input) ## 全局注意力 即channel attention
w = self.sigmoid(local_w * global_w) ## 计算特征x1的权重
xo = w * x1 + (1 - w) * x2 ## fusion results ## 特征聚合
return xo
四、融合步骤
4.1 修改一
① 在
ultralytics/nn/
目录下新建
AddModules
文件夹用于存放模块代码
② 在
AddModules
文件夹下新建
SDFM.py
,将
第三节
中的代码粘贴到此处
4.2 修改二
在
AddModules
文件夹下新建
__init__.py
(已有则不用新建),在文件内导入模块:
from .SDFM import *
4.3 修改三
在
ultralytics/nn/modules/tasks.py
文件中,需要在两处位置添加各模块类名称。
首先:导入模块
其次:在
parse_model函数
中注册
SDFM
模块
elif m in {SDFM}:
c2 = ch[f[0]]
args = [c2]
在
DetectionModel
类下,添加如下代码
try:
m.stride = torch.tensor([s / x.shape[-2] for x in _forward(torch.zeros(1, ch, s, s))]) # forward on CPU
except RuntimeError:
try:
self.model.to(torch.device('cuda'))
m.stride = torch.tensor([s / x.shape[-2] for x in _forward(
torch.zeros(1, ch, s, s).to(torch.device('cuda')))]) # forward on CUDA
except RuntimeError as error:
raise error
并注释这一行
# m.stride = torch.tensor([s / x.shape[-2] for x in _forward(torch.zeros(1, ch, s, s))]) # forward
五、yaml模型文件
5.1 中期融合⭐
📌 此模型的修方法是将原本的中期融合中的Concat融合部分换成SDFM,融合骨干部分的多模态信息。
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet50 object detection model with P3-P5 outputs.
# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
# [depth, width, max_channels]
l: [1.00, 1.00, 1024]
backbone:
# [from, repeats, module, args]
- [-1, 1, IN, []] # 0
- [-1, 1, Multiin, [1]] # 1
- [-2, 1, Multiin, [2]] # 2
- [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 7
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 8-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 9-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 10-P5
- [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 11-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 12
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 13
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 14-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 15
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 17-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 18-P5
- [[8, 16], 1, SDFM, []] # 19 cat backbone P3
- [[9, 17], 1, SDFM, []] # 20 cat backbone P4
- [[10, 18], 1, SDFM, []] # 21 cat backbone P5
head:
- [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 22 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 24, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 25
- [20, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 26 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 28, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 29, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 30
- [19, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 31 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 32 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (33), fpn_blocks.1
- [-1, 1, Conv, [256, 3, 2]] # 34, downsample_convs.0
- [[-1, 29], 1, Concat, [1]] # 35 cat Y4
- [-1, 3, RepC3, [256, 0.5]] # F4 (36), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 37, downsample_convs.1
- [[-1, 24], 1, Concat, [1]] # 38 cat Y5
- [-1, 3, RepC3, [256, 0.5]] # F5 (39), pan_blocks.1
- [[33, 36, 39], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]] # Detect(P3, P4, P5)
5.2 中-后期融合⭐
📌 此模型的修方法是将原本的中-后期融合中的Concat融合部分换成SDFM,融合FPN部分的多模态信息。
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet50 object detection model with P3-P5 outputs.
# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
# [depth, width, max_channels]
l: [1.00, 1.00, 1024]
backbone:
# [from, repeats, module, args]
- [-1, 1, IN, []] # 0
- [-1, 1, Multiin, [1]] # 1
- [-2, 1, Multiin, [2]] # 2
- [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 7
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 8-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 9-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 10-P5
- [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 11-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 12
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 13
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 14-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 15
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 17-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 18-P5
head:
- [10, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 19 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 21, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 22
- [9, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 23 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 25, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 26, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 27
- [8, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 28 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 29 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (30), fpn_blocks.1
- [18, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 31 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 33, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 34
- [17, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 35 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 37, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 38, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 39
- [16, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 40 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 41 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (42), fpn_blocks.1
- [[21, 33], 1, SDFM, []] # 43 cat backbone P3
- [[26, 38], 1, SDFM, []] # 44 cat backbone P4
- [[30, 42], 1, SDFM, []] # 45 cat backbone P5
- [-1, 1, Conv, [256, 3, 2]] # 46, downsample_convs.0
- [[-1, 44], 1, Concat, [1]] # 47 cat Y4
- [-1, 3, RepC3, [256, 0.5]] # F4 (48), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 49, downsample_convs.1
- [[-1, 43], 1, Concat, [1]] # 50 cat Y5
- [-1, 3, RepC3, [256, 0.5]] # F5 (51), pan_blocks.1
- [[45, 48, 51], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]] # Detect(P3, P4, P5)
5.3 后期融合⭐
📌 此模型的修方法是将原本的后期融合中的Concat融合部分换成SDFM,融合颈部部分的多模态信息。
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet50 object detection model with P3-P5 outputs.
# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
# [depth, width, max_channels]
l: [1.00, 1.00, 1024]
backbone:
# [from, repeats, module, args]
- [-1, 1, IN, []] # 0
- [-1, 1, Multiin, [1]] # 1
- [-2, 1, Multiin, [2]] # 2
- [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 7
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 8-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 9-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 10-P5
- [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 11-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 12
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 13
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 14-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 15
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 17-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 18-P5
head:
- [10, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 19 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 21, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 22
- [9, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 23 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 25, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 26, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 27
- [8, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 28 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 29 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (30), fpn_blocks.1
- [-1, 1, Conv, [256, 3, 2]] # 31, downsample_convs.0
- [[-1, 26], 1, Concat, [1]] # 32 cat Y4
- [-1, 3, RepC3, [256, 0.5]] # F4 (33), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 34, downsample_convs.1
- [[-1, 21], 1, Concat, [1]] # 35 cat Y5
- [-1, 3, RepC3, [256, 0.5]] # F5 (36), pan_blocks.1
- [18, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 37 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 39, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 40
- [17, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 41 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 43, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 44, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 45
- [16, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 46 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 47 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (48), fpn_blocks.1
- [-1, 1, Conv, [256, 3, 2]] # 49, downsample_convs.0
- [[-1, 44], 1, Concat, [1]] # 50 cat Y4
- [-1, 3, RepC3, [256, 0.5]] # F4 (51), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 52, downsample_convs.1
- [[-1, 39], 1, Concat, [1]] # 53 cat Y5
- [-1, 3, RepC3, [256, 0.5]] # F5 (54), pan_blocks.1
- [[30, 48], 1, SDFM, []] # 55 cat backbone P3
- [[33, 51], 1, SDFM, []] # 56 cat backbone P4
- [[36, 54], 1, SDFM, []] # 57 cat backbone P5
- [[55, 56, 57], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]] # Detect(P3, P4, P5)
六、成功运行结果
打印网络模型可以看到不同的融合层已经加入到模型中,并可以进行训练了。
rtdetr-resnet18-mid-SDFM :
rtdetr-resnet18-mid-SDFM summary: 548 layers, 33,023,188 parameters, 33,023,188 gradients
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
4 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
5 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
6 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
7 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
8 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
9 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
10 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
11 2 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
12 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
13 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
14 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
15 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
16 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
17 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
18 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
19 [8, 16] 1 81920 ultralytics.nn.AddModules.SDFM.SDFM [128]
20 [9, 17] 1 327680 ultralytics.nn.AddModules.SDFM.SDFM [256]
21 [10, 18] 1 1310720 ultralytics.nn.AddModules.SDFM.SDFM [512]
22 -1 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
23 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
24 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
25 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
26 20 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
27 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
28 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
29 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
30 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
31 19 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
32 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
33 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
34 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
35 [-1, 29] 1 0 ultralytics.nn.modules.conv.Concat [1]
36 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
37 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
38 [-1, 24] 1 0 ultralytics.nn.modules.conv.Concat [1]
39 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
40 [33, 36, 39] 1 3927956 ultralytics.nn.modules.head.RTDETRDecoder [9, [256, 256, 256], 256, 300, 4, 8, 3]
rtdetr-resnet18-mid-SDFM summary: 548 layers, 33,023,188 parameters, 33,023,188 gradients
rtdetr-resnet18-mid-to-late-SDFM :
rtdetr-resnet18-mid-to-late-SDFM summary: 656 layers, 34,754,516 parameters, 34,754,516 gradients
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
4 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
5 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
6 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
7 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
8 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
9 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
10 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
11 2 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
12 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
13 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
14 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
15 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
16 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
17 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
18 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
19 10 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
20 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
21 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
22 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
23 9 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
24 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
25 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
26 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
27 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
28 8 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
29 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
30 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
31 18 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
32 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
33 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
34 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
35 17 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
36 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
37 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
38 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
39 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
40 16 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
41 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
42 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
43 [21, 33] 1 327680 ultralytics.nn.AddModules.SDFM.SDFM [256]
44 [26, 38] 1 327680 ultralytics.nn.AddModules.SDFM.SDFM [256]
45 [30, 42] 1 327680 ultralytics.nn.AddModules.SDFM.SDFM [256]
46 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
47 [-1, 44] 1 0 ultralytics.nn.modules.conv.Concat [1]
48 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
49 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
50 [-1, 43] 1 0 ultralytics.nn.modules.conv.Concat [1]
51 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
52 [45, 48, 51] 1 3927956 ultralytics.nn.modules.head.RTDETRDecoder [9, [256, 256, 256], 256, 300, 4, 8, 3]
rtdetr-resnet18-mid-to-late-SDFM summary: 656 layers, 34,754,516 parameters, 34,754,516 gradients
rtdetr-resnet18-mid-to-late-SDFM :
rtdetr-resnet18-mid-to-late-SDFM summary: 656 layers, 34,754,516 parameters, 34,754,516 gradients
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
4 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
5 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
6 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
7 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
8 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
9 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
10 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
11 2 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
12 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
13 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
14 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
15 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
16 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
17 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
18 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
19 10 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
20 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
21 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
22 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
23 9 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
24 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
25 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
26 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
27 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
28 8 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
29 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
30 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
31 18 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
32 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
33 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
34 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
35 17 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
36 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
37 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
38 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
39 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
40 16 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
41 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
42 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
43 [21, 33] 1 327680 ultralytics.nn.AddModules.SDFM.SDFM [256]
44 [26, 38] 1 327680 ultralytics.nn.AddModules.SDFM.SDFM [256]
45 [30, 42] 1 327680 ultralytics.nn.AddModules.SDFM.SDFM [256]
46 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
47 [-1, 44] 1 0 ultralytics.nn.modules.conv.Concat [1]
48 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
49 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
50 [-1, 43] 1 0 ultralytics.nn.modules.conv.Concat [1]
51 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
52 [45, 48, 51] 1 3927956 ultralytics.nn.modules.head.RTDETRDecoder [9, [256, 256, 256], 256, 300, 4, 8, 3]
rtdetr-resnet18-mid-to-late-SDFM summary: 656 layers, 34,754,516 parameters, 34,754,516 gradients