【RT-DETR Multimodal Fusion Improvement】| DMFF from ICAFusion (Pattern Recognition 2024): a dual-modal feature fusion module that introduces cross-modal cross-attention to dynamically model the global semantic dependencies between modalities
1. Introduction
This post documents how the DMFF module from ICAFusion is used to improve the multimodal fusion part of RT-DETR. Through an iterative feature-fusion process guided by dual cross-attention, DMFF efficiently aggregates the globally complementary information of the RGB and thermal modalities. Applied to RT-DETR, it targets two weaknesses of multispectral detection, modality misalignment and insufficient modeling of long-range dependencies: spatial feature shrinking, cross-modal enhancement and an iterative learning strategy together mitigate the performance drop caused by the purely local interactions of conventional CNN fusion and improve recognition accuracy in complex scenes.
2. ICAFusion Overview
ICAFusion: Iterative Cross-Attention Guided Feature Fusion for Multispectral Object Detection
2.1 Design Motivation of DMFF (Dual-modal Feature Fusion)
- Limitations of existing methods
  - Conventional CNN-based feature fusion for multispectral object detection has limited local feature interaction and is sensitive to image misalignment, which degrades accuracy.
  - Fusing features directly with a Transformer introduces heavy feature redundancy, leading to excessive computation and memory cost.
- Complementarity of multispectral data
  - RGB images provide color, texture and contour detail under good illumination, while thermal images capture the heat-radiation silhouette of objects in low light or adverse conditions, so the two modalities are strongly complementary.
  - The global complementary information across modalities therefore needs to be aggregated effectively while keeping model complexity low.
2.2 Architecture
The DMFF module consists of three core components, arranged hierarchically to achieve efficient cross-modal feature fusion:
- Spatial Feature Shrinking (SFS)
  - Purpose: reduce the feature-map dimensions, cutting the cost of the subsequent computation while retaining the key information.
  - Implementation:
    - Convolution: a 1×1 convolution compresses spatial information into the channel dimension.
    - Mixed pooling: average pooling (which preserves background context) and max pooling (which preserves texture) are fused adaptively, with a learnable parameter $\lambda$ balancing their weights.
- Cross-modal Feature Enhancement (CFE)
  - Core mechanism: a dual cross-attention Transformer captures the complementary relationship between the RGB and thermal modalities from a global perspective.
  - Workflow:
    - Tokenization: the input feature maps are flattened into tokens and positional embeddings are added.
    - Attention computation: taking the thermal branch as an example, the dot product of $Q_R$ (queries projected from the RGB tokens) with $K_T$ (keys projected from the thermal tokens) builds the cross-modal correlation matrix, which then weights the values $V_T$ (see the equations at the end of this subsection).
    - Feature refinement: residual connections and a feed-forward network (FFN) strengthen the representation, with learnable coefficients adaptively weighting each branch.
- Iterative Cross-modal Feature Enhancement (ICFE)
  - Working principle:
    - Weight-shared iterations progressively refine the cross-modal and intra-modal feature representations.
    - The input of the $n$-th iteration is the output of the previous one, avoiding the parameter blow-up of stacking modules.
    - Key formula: $\{\hat{T}_R^n, \hat{T}_T^n\} = \underbrace{\mathcal{F}_{CFE}(\cdots\mathcal{F}_{CFE}}_{n}(\{T_R, T_T\}))$.
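Writing the CFE attention step above as equations, in the post's notation where $Q_R$, $K_R$, $V_R$ are projected from the RGB tokens, $Q_T$, $K_T$, $V_T$ from the thermal tokens, and $d_k$ is the key dimension, the cross-enhanced token sequences are
$$\tilde{T}_T = \mathrm{softmax}\!\left(\frac{Q_R K_T^{\top}}{\sqrt{d_k}}\right)V_T, \qquad \tilde{T}_R = \mathrm{softmax}\!\left(\frac{Q_T K_R^{\top}}{\sqrt{d_k}}\right)V_R$$
Each branch then passes through the residual connection and FFN described in the feature-refinement step. This pairing of one modality's queries with the other modality's keys and values is exactly what the CrossAttention class in Section 3 implements.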
2.3 Key Advantages
- Clear performance gains
  - On the KAIST, FLIR and VEDAI datasets it substantially lowers the miss rate (MR) and raises mean average precision (mAP) compared with the baselines.
  - For example, on KAIST the MR drops from 8.33% to 7.17%, and on FLIR mAP50 rises from 76.5% to 79.2%.
- Better computational efficiency
  - Compared with stacking Transformer modules, the parameter count and memory footprint stay unchanged while inference is roughly 20% faster.
  - With SFS and the iterative learning strategy, the computational complexity falls from $O(W^2H^2 \times C)$ to $O(W^2H^2/S^2 \times C)$.
- Strong modality adaptability
  - Single-modal and dual-modal inputs are both supported; when one modality is missing or of poor quality, the cross-modal attention can still extract complementary information.
  - For example, with single-modal input (R+R or T+T), detection performance drops only 2.4%-2.76% relative to dual-modal input.
- Generality and flexibility
  - It can be integrated into different detection frameworks such as YOLOv5 and FCOS, and is compatible with backbones such as VGG16, ResNet50 and CSPDarkNet.
  - It remains stable across scenes (daytime, nighttime, aerial imagery) and is suitable for real-time object detection.
Paper: https://linkinghub.elsevier.com/retrieve/pii/S0031320323006118
Code: https://github.com/chanchanchan97/ICAFusion
3. ICAFusion Implementation Code
The implementation of the ICAFusion module is as follows:
import math
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import init
def autopad(k, p=None): # kernel, padding
# Pad to 'same'
if p is None:
p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
return p
class Conv(nn.Module):
# Standard convolution
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups
super(Conv, self).__init__()
self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
self.bn = nn.BatchNorm2d(c2)
self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())
def forward(self, x):
return self.act(self.bn(self.conv(x)))
def fuseforward(self, x):
return self.act(self.conv(x))
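# LearnableWeights: mixes two inputs with two learnable scalar weights; used in TransformerFusionBlock
# to fuse the average- and max-pooled features (the SFS mixed pooling).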
class LearnableWeights(nn.Module):
def __init__(self):
super(LearnableWeights, self).__init__()
self.w1 = nn.Parameter(torch.tensor([0.5]), requires_grad=True)
self.w2 = nn.Parameter(torch.tensor([0.5]), requires_grad=True)
def forward(self, x1, x2):
out = x1 * self.w1 + x2 * self.w2
return out
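# AdaptivePool2d: pools a feature map down to (output_h, output_w) with average or max pooling;
# if the input is already no larger than the target size, it is returned unchanged.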
class AdaptivePool2d(nn.Module):
def __init__(self, output_h, output_w, pool_type='avg'):
super(AdaptivePool2d, self).__init__()
self.output_h = output_h
self.output_w = output_w
self.pool_type = pool_type
def forward(self, x):
bs, c, input_h, input_w = x.shape
if (input_h > self.output_h) or (input_w > self.output_w):
self.stride_h = input_h // self.output_h
self.stride_w = input_w // self.output_w
self.kernel_size = (input_h - (self.output_h - 1) * self.stride_h, input_w - (self.output_w - 1) * self.stride_w)
if self.pool_type == 'avg':
y = nn.AvgPool2d(kernel_size=self.kernel_size, stride=(self.stride_h, self.stride_w), padding=0)(x)
else:
y = nn.MaxPool2d(kernel_size=self.kernel_size, stride=(self.stride_h, self.stride_w), padding=0)(x)
else:
y = x
return y
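# LearnableCoefficient: scales its input by a single learnable scalar (initialized to 1.0);
# used to weight the residual branches inside CrossTransformerBlock.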
class LearnableCoefficient(nn.Module):
def __init__(self):
super(LearnableCoefficient, self).__init__()
self.bias = nn.Parameter(torch.FloatTensor([1.0]), requires_grad=True)
def forward(self, x):
out = x * self.bias
return out
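# CrossAttention: dual cross-modal attention. Thermal queries attend over RGB keys/values to produce
# out_vis, while RGB queries attend over thermal keys/values to produce out_ir.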
class CrossAttention(nn.Module):
def __init__(self, d_model, d_k, d_v, h, attn_pdrop=.1, resid_pdrop=.1):
'''
:param d_model: Output dimensionality of the model
:param d_k: Dimensionality of queries and keys
:param d_v: Dimensionality of values
:param h: Number of heads
'''
super(CrossAttention, self).__init__()
assert d_k % h == 0
self.d_model = d_model
self.d_k = d_model // h
self.d_v = d_model // h
self.h = h
# key, query, value projections for all heads
self.que_proj_vis = nn.Linear(d_model, h * self.d_k) # query projection
self.key_proj_vis = nn.Linear(d_model, h * self.d_k) # key projection
self.val_proj_vis = nn.Linear(d_model, h * self.d_v) # value projection
self.que_proj_ir = nn.Linear(d_model, h * self.d_k) # query projection
self.key_proj_ir = nn.Linear(d_model, h * self.d_k) # key projection
self.val_proj_ir = nn.Linear(d_model, h * self.d_v) # value projection
self.out_proj_vis = nn.Linear(h * self.d_v, d_model) # output projection
self.out_proj_ir = nn.Linear(h * self.d_v, d_model) # output projection
# regularization
self.attn_drop = nn.Dropout(attn_pdrop)
self.resid_drop = nn.Dropout(resid_pdrop)
# layer norm
self.LN1 = nn.LayerNorm(d_model)
self.LN2 = nn.LayerNorm(d_model)
self.init_weights()
def init_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
init.kaiming_normal_(m.weight, mode='fan_out')
if m.bias is not None:
init.constant_(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
init.constant_(m.weight, 1)
init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
init.normal_(m.weight, std=0.001)
if m.bias is not None:
init.constant_(m.bias, 0)
def forward(self, x, attention_mask=None, attention_weights=None):
'''
        Computes dual cross-modal attention between the RGB and thermal token sequences
Args:
x (tensor): input (token) dim:(b_s, nx, c),
b_s means batch size
nx means length, for CNN, equals H*W, i.e. the length of feature maps
c means channel, i.e. the channel of feature maps
attention_mask: Mask over attention values (b_s, h, nq, nk). True indicates masking.
attention_weights: Multiplicative weights for attention values (b_s, h, nq, nk).
Return:
output (tensor): dim:(b_s, nx, c)
'''
rgb_fea_flat = x[0]
ir_fea_flat = x[1]
b_s, nq = rgb_fea_flat.shape[:2]
nk = rgb_fea_flat.shape[1]
# Self-Attention
rgb_fea_flat = self.LN1(rgb_fea_flat)
q_vis = self.que_proj_vis(rgb_fea_flat).contiguous().view(b_s, nq, self.h, self.d_k).permute(0, 2, 1, 3) # (b_s, h, nq, d_k)
k_vis = self.key_proj_vis(rgb_fea_flat).contiguous().view(b_s, nk, self.h, self.d_k).permute(0, 2, 3, 1) # (b_s, h, d_k, nk) K^T
v_vis = self.val_proj_vis(rgb_fea_flat).contiguous().view(b_s, nk, self.h, self.d_v).permute(0, 2, 1, 3) # (b_s, h, nk, d_v)
ir_fea_flat = self.LN2(ir_fea_flat)
q_ir = self.que_proj_ir(ir_fea_flat).contiguous().view(b_s, nq, self.h, self.d_k).permute(0, 2, 1, 3) # (b_s, h, nq, d_k)
k_ir = self.key_proj_ir(ir_fea_flat).contiguous().view(b_s, nk, self.h, self.d_k).permute(0, 2, 3, 1) # (b_s, h, d_k, nk) K^T
v_ir = self.val_proj_ir(ir_fea_flat).contiguous().view(b_s, nk, self.h, self.d_v).permute(0, 2, 1, 3) # (b_s, h, nk, d_v)
att_vis = torch.matmul(q_ir, k_vis) / np.sqrt(self.d_k)
att_ir = torch.matmul(q_vis, k_ir) / np.sqrt(self.d_k)
# att_vis = torch.matmul(k_vis, q_ir) / np.sqrt(self.d_k)
# att_ir = torch.matmul(k_ir, q_vis) / np.sqrt(self.d_k)
# get attention matrix
att_vis = torch.softmax(att_vis, -1)
att_vis = self.attn_drop(att_vis)
att_ir = torch.softmax(att_ir, -1)
att_ir = self.attn_drop(att_ir)
# output
out_vis = torch.matmul(att_vis, v_vis).permute(0, 2, 1, 3).contiguous().view(b_s, nq, self.h * self.d_v) # (b_s, nq, h*d_v)
out_vis = self.resid_drop(self.out_proj_vis(out_vis)) # (b_s, nq, d_model)
out_ir = torch.matmul(att_ir, v_ir).permute(0, 2, 1, 3).contiguous().view(b_s, nq, self.h * self.d_v) # (b_s, nq, h*d_v)
out_ir = self.resid_drop(self.out_proj_ir(out_ir)) # (b_s, nq, d_model)
return [out_vis, out_ir]
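# CrossTransformerBlock: one CFE block, i.e. cross-attention plus per-modality FFNs with learnable
# residual coefficients; loops_num repeats the block with shared weights (the ICFE iteration).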
class CrossTransformerBlock(nn.Module):
def __init__(self, d_model, d_k, d_v, h, block_exp, attn_pdrop, resid_pdrop, loops_num=1):
"""
:param d_model: Output dimensionality of the model
:param d_k: Dimensionality of queries and keys
:param d_v: Dimensionality of values
:param h: Number of heads
        :param block_exp: Expansion factor for MLP (feed-forward network)
"""
super(CrossTransformerBlock, self).__init__()
self.loops = loops_num
self.ln_input = nn.LayerNorm(d_model)
self.ln_output = nn.LayerNorm(d_model)
self.crossatt = CrossAttention(d_model, d_k, d_v, h, attn_pdrop, resid_pdrop)
self.mlp_vis = nn.Sequential(nn.Linear(d_model, block_exp * d_model),
# nn.SiLU(), # changed from GELU
nn.GELU(), # changed from GELU
nn.Linear(block_exp * d_model, d_model),
nn.Dropout(resid_pdrop),
)
self.mlp_ir = nn.Sequential(nn.Linear(d_model, block_exp * d_model),
# nn.SiLU(), # changed from GELU
nn.GELU(), # changed from GELU
nn.Linear(block_exp * d_model, d_model),
nn.Dropout(resid_pdrop),
)
self.mlp = nn.Sequential(nn.Linear(d_model, block_exp * d_model),
# nn.SiLU(), # changed from GELU
nn.GELU(), # changed from GELU
nn.Linear(block_exp * d_model, d_model),
nn.Dropout(resid_pdrop),
)
# Layer norm
self.LN1 = nn.LayerNorm(d_model)
self.LN2 = nn.LayerNorm(d_model)
# Learnable Coefficient
self.coefficient1 = LearnableCoefficient()
self.coefficient2 = LearnableCoefficient()
self.coefficient3 = LearnableCoefficient()
self.coefficient4 = LearnableCoefficient()
self.coefficient5 = LearnableCoefficient()
self.coefficient6 = LearnableCoefficient()
self.coefficient7 = LearnableCoefficient()
self.coefficient8 = LearnableCoefficient()
def forward(self, x):
rgb_fea_flat = x[0]
ir_fea_flat = x[1]
assert rgb_fea_flat.shape[0] == ir_fea_flat.shape[0]
bs, nx, c = rgb_fea_flat.size()
h = w = int(math.sqrt(nx))
for loop in range(self.loops):
# with Learnable Coefficient
rgb_fea_out, ir_fea_out = self.crossatt([rgb_fea_flat, ir_fea_flat])
rgb_att_out = self.coefficient1(rgb_fea_flat) + self.coefficient2(rgb_fea_out)
ir_att_out = self.coefficient3(ir_fea_flat) + self.coefficient4(ir_fea_out)
rgb_fea_flat = self.coefficient5(rgb_att_out) + self.coefficient6(self.mlp_vis(self.LN2(rgb_att_out)))
ir_fea_flat = self.coefficient7(ir_att_out) + self.coefficient8(self.mlp_ir(self.LN2(ir_att_out)))
# without Learnable Coefficient
# rgb_fea_out, ir_fea_out = self.crossatt([rgb_fea_flat, ir_fea_flat])
# rgb_att_out = rgb_fea_flat + rgb_fea_out
# ir_att_out = ir_fea_flat + ir_fea_out
# rgb_fea_flat = rgb_att_out + self.mlp_vis(self.LN2(rgb_att_out))
# ir_fea_flat = ir_att_out + self.mlp_ir(self.LN2(ir_att_out))
return [rgb_fea_flat, ir_fea_flat]
class Concat(nn.Module):
# Concatenate a list of tensors along dimension
def __init__(self, dimension=1):
super(Concat, self).__init__()
self.d = dimension
def forward(self, x):
# print(x.shape)
return torch.cat(x, self.d)
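# TransformerFusionBlock: the fusion module registered in the YAML. It pools both modalities (SFS),
# adds positional embeddings, applies the cross-transformer, upsamples back, adds the originals
# residually, then concatenates the two streams and fuses them with a 1x1 conv.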
class TransformerFusionBlock(nn.Module):
def __init__(self, d_model, vert_anchors=16, horz_anchors=16, h=8, block_exp=4, n_layer=1, embd_pdrop=0.1, attn_pdrop=0.1, resid_pdrop=0.1):
super(TransformerFusionBlock, self).__init__()
self.n_embd = d_model
self.vert_anchors = vert_anchors
self.horz_anchors = horz_anchors
d_k = d_model
d_v = d_model
        # positional embeddings for rgb_fea and ir_fea (registered as buffers and resized at runtime)
        self.register_buffer('pos_emb_vis', torch.zeros(1, vert_anchors * horz_anchors, self.n_embd))
        self.register_buffer('pos_emb_ir', torch.zeros(1, vert_anchors * horz_anchors, self.n_embd))
        # initialize the positional embeddings
        self._init_pos_emb()
# downsampling
self.avgpool = AdaptivePool2d(self.vert_anchors, self.horz_anchors, 'avg')
self.maxpool = AdaptivePool2d(self.vert_anchors, self.horz_anchors, 'max')
# LearnableCoefficient
self.vis_coefficient = LearnableWeights()
self.ir_coefficient = LearnableWeights()
# init weights
self.apply(self._init_weights)
# cross transformer
self.crosstransformer = nn.Sequential(*[CrossTransformerBlock(d_model, d_k, d_v, h, block_exp, attn_pdrop, resid_pdrop) for layer in range(n_layer)])
# Concat
self.concat = Concat(dimension=1)
# conv1x1
self.conv1x1_out = Conv(c1=d_model * 2, c2=d_model, k=1, s=1, p=0, g=1, act=True)
def _init_pos_emb(self):
        # initialize the positional embeddings with truncated normal values
nn.init.trunc_normal_(self.pos_emb_vis, std=.02)
nn.init.trunc_normal_(self.pos_emb_ir, std=.02)
@staticmethod
def _init_weights(module):
if isinstance(module, nn.Linear):
module.weight.data.normal_(mean=0.0, std=0.02)
if module.bias is not None:
module.bias.data.zero_()
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
def forward(self, x):
rgb_fea = x[0]
ir_fea = x[1]
assert rgb_fea.shape[0] == ir_fea.shape[0]
bs, c, h, w = rgb_fea.shape
# ------------------------- cross-modal feature fusion -----------------------#
new_rgb_fea = self.vis_coefficient(self.avgpool(rgb_fea), self.maxpool(rgb_fea))
new_c, new_h, new_w = new_rgb_fea.shape[1], new_rgb_fea.shape[2], new_rgb_fea.shape[3]
        # resize the positional embedding to match the pooled feature map
pos_emb_vis = self._resize_pos_embed(self.pos_emb_vis, new_h, new_w)
rgb_fea_flat = new_rgb_fea.contiguous().view(bs, new_c, -1).permute(0, 2, 1) + pos_emb_vis
new_ir_fea = self.ir_coefficient(self.avgpool(ir_fea), self.maxpool(ir_fea))
pos_emb_ir = self._resize_pos_embed(self.pos_emb_ir, new_h, new_w)
ir_fea_flat = new_ir_fea.contiguous().view(bs, new_c, -1).permute(0, 2, 1) + pos_emb_ir
rgb_fea_flat, ir_fea_flat = self.crosstransformer([rgb_fea_flat, ir_fea_flat])
rgb_fea_CFE = rgb_fea_flat.contiguous().view(bs, new_h, new_w, new_c).permute(0, 3, 1, 2)
        if self.training:
rgb_fea_CFE = F.interpolate(rgb_fea_CFE, size=([h, w]), mode='nearest')
else:
rgb_fea_CFE = F.interpolate(rgb_fea_CFE, size=([h, w]), mode='bilinear')
new_rgb_fea = rgb_fea_CFE + rgb_fea
ir_fea_CFE = ir_fea_flat.contiguous().view(bs, new_h, new_w, new_c).permute(0, 3, 1, 2)
        if self.training:
ir_fea_CFE = F.interpolate(ir_fea_CFE, size=([h, w]), mode='nearest')
else:
ir_fea_CFE = F.interpolate(ir_fea_CFE, size=([h, w]), mode='bilinear')
new_ir_fea = ir_fea_CFE + ir_fea
new_fea = self.concat([new_rgb_fea, new_ir_fea])
new_fea = self.conv1x1_out(new_fea)
return new_fea
def _resize_pos_embed(self, pos_embed, new_h, new_w):
"""
        Resize the positional embedding to match the new feature-map size.
"""
        # original positional-embedding shape
N, L, C = pos_embed.shape
H = W = int(L ** 0.5)
        # reshape the positional embedding to 2D: (N, C, H, W)
pos_embed = pos_embed.permute(0, 2, 1).view(N, C, H, W)
        # resize with bilinear interpolation
pos_embed = F.interpolate(pos_embed, size=(new_h, new_w), mode='bilinear', align_corners=False)
        # flatten back to (N, new_h * new_w, C)
pos_embed = pos_embed.view(N, C, new_h * new_w).permute(0, 2, 1)
return pos_embed
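The block can be smoke-tested on its own before it is wired into RT-DETR. A minimal sketch, not part of the original code, with shapes chosen purely for illustration:
import torch
# build a fusion block for 256-channel features; tokens are pooled to 20x20 internally
block = TransformerFusionBlock(d_model=256, vert_anchors=20, horz_anchors=20)
rgb = torch.randn(2, 256, 40, 40)  # RGB feature map (batch, channels, H, W)
ir = torch.randn(2, 256, 40, 40)   # thermal feature map with the same shape
fused = block([rgb, ir])           # cross-modal fusion
print(fused.shape)                 # torch.Size([2, 256, 40, 40])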
4. Integration Steps
4.1 Step One
① Create an AddModules folder under the ultralytics/nn/ directory to hold the module code.
② Create ICAFusion.py inside the AddModules folder and paste the code from Section 3 into it.
4.2 Step Two
Create __init__.py inside the AddModules folder (skip this if it already exists) and import the module in it:
from .ICAFusion import *
4.3 Step Three
In the ultralytics/nn/tasks.py file, the new module's class name has to be added in two places.
First, import the module; a minimal sketch follows.
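A minimal sketch of the import, assuming the AddModules package from step 4.1 re-exports the module; it goes next to the other module imports at the top of tasks.py:
from ultralytics.nn.AddModules import *  # brings TransformerFusionBlock into scope for parse_model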
Then, add the following branch inside the parse_model function:
elif m is TransformerFusionBlock:
c2 = ch[f[0]]
args = [c2, *args[1:]]
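This branch makes parse_model take the output width c2 from the channels of the first input layer (ch[f[0]]; both branches have the same width) and pass it to TransformerFusionBlock as d_model in place of the first YAML argument, so the fused output keeps the backbone's channel count.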
5. YAML Model File
5.1 Middle Fusion ⭐
📌 This configuration applies the TransformerFusionBlock module at RT-DETR's middle (feature-level) fusion stage.
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet18 object detection model with P3-P5 outputs.
# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
# [depth, width, max_channels]
l: [1.00, 1.00, 1024]
backbone:
# [from, repeats, module, args]
- [-1, 1, IN, []] # 0
- [-1, 1, Multiin, [1]] # 1
- [-2, 1, Multiin, [2]] # 2
- [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 7
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 8-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 9-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 10-P5
- [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 11-P1
- [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 12
- [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 13
- [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 14-P2
- [-1, 2, Blocks, [64, BasicBlock, 2, False]] # 15
- [-1, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
- [-1, 2, Blocks, [256, BasicBlock, 4, False]] # 17-P4
- [-1, 2, Blocks, [512, BasicBlock, 5, False]] # 18-P5
- [[8, 16], 1, TransformerFusionBlock, [128, 20, 20]] # 19 cat backbone P3
- [[9, 17], 1, TransformerFusionBlock, [256, 20, 20]] # 20 cat backbone P4
- [[10, 18], 1, TransformerFusionBlock, [512, 20, 20]] # 21 cat backbone P5
head:
- [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 22 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 24, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 25
- [20, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 26 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256, 0.5]] # 28, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 29, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 30
- [19, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 31 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 32 cat backbone P4
- [-1, 3, RepC3, [256, 0.5]] # X3 (33), fpn_blocks.1
- [-1, 1, Conv, [256, 3, 2]] # 34, downsample_convs.0
- [[-1, 29], 1, Concat, [1]] # 35 cat Y4
- [-1, 3, RepC3, [256, 0.5]] # F4 (36), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 37, downsample_convs.1
- [[-1, 24], 1, Concat, [1]] # 38 cat Y5
- [-1, 3, RepC3, [256, 0.5]] # F5 (39), pan_blocks.1
- [[33, 36, 39], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]] # Detect(P3, P4, P5)
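With the modules and YAML in place, training runs through the standard Ultralytics API. A minimal sketch, assuming the configuration above is saved as rtdetr-resnet18-mid-ICAFusion.yaml and that a dataset config supplying the 6-channel (RGB + thermal) input exists; both file names are placeholders:
from ultralytics import RTDETR
model = RTDETR('rtdetr-resnet18-mid-ICAFusion.yaml')  # build the modified model from the YAML above
model.train(data='multimodal-dataset.yaml', epochs=100, imgsz=640)  # dataset yaml is a placeholder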
6. Verifying the Result
Printing the network shows that the fusion layers have been inserted into the model and that training can proceed.
rtdetr-resnet18-mid-ICAFusion:
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
4 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
5 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
6 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
7 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
8 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
9 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
10 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
11 2 1 960 ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
12 -1 1 9312 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
13 -1 1 18624 ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
14 -1 1 0 torch.nn.modules.pooling.MaxPool2d [3, 2, 1]
15 -1 2 152512 ultralytics.nn.AddModules.ResNet.Blocks [64, 64, 2, 'BasicBlock', 2, False]
16 -1 2 526208 ultralytics.nn.AddModules.ResNet.Blocks [64, 128, 2, 'BasicBlock', 3, False]
17 -1 2 2100992 ultralytics.nn.AddModules.ResNet.Blocks [128, 256, 2, 'BasicBlock', 4, False]
18 -1 2 8396288 ultralytics.nn.AddModules.ResNet.Blocks [256, 512, 2, 'BasicBlock', 5, False]
19 [8, 16] 1 561804 ultralytics.nn.AddModules.ICAFusion.TransformerFusionBlock[128, 20, 20]
20 [9, 17] 1 2237708 ultralytics.nn.AddModules.ICAFusion.TransformerFusionBlock[256, 20, 20]
21 [10, 18] 1 8931852 ultralytics.nn.AddModules.ICAFusion.TransformerFusionBlock[512, 20, 20]
22 -1 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
23 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
24 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
25 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
26 20 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
27 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
28 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
29 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
30 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
31 19 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
32 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
33 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
34 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
35 [-1, 29] 1 0 ultralytics.nn.modules.conv.Concat [1]
36 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
37 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
38 [-1, 24] 1 0 ultralytics.nn.modules.conv.Concat [1]
39 -1 3 657920 ultralytics.nn.modules.block.RepC3 [512, 256, 3, 0.5]
40 [33, 36, 39] 1 3927956 ultralytics.nn.modules.head.RTDETRDecoder [9, [256, 256, 256], 256, 300, 4, 8, 3]
rtdetr-resnet18-mid-ICAFusion summary: 634 layers, 43,034,232 parameters, 43,034,232 gradients, 100.2 GFLOPs