[YOLOv10 Multimodal Fusion Improvement] | CAFM: Channel-Spatial Cross-Attention Mechanism | Dynamically Capturing the Importance of Cross-Modal Features and Suppressing Redundant Information
1. Introduction
This article documents how the CAFM module is used to improve YOLOv10 into a multimodal object detection network. The CAFM (Cross-Attention Fusion Module) introduces a channel-spatial cross-attention mechanism at the feature fusion stage to dynamically generate fusion weights across sub-network features. The module adaptively captures the semantic associations between pixel-level and superpixel-level features, suppresses irrelevant background interference, and enables deep interaction and complementary enhancement between high-level semantics and spatial structure. This provides the detection task with accurate cross-modal feature representations and improves the model's accuracy and robustness across different modality scenarios.
2. CAFM Module Overview
Attention Multihop Graph and Multiscale Convolutional Fusion Network for Hyperspectral Image Classification
2.1 Design Motivation
When combining features from different sub-networks (such as a CNN and a GCN), traditional fusion methods often suffer from insufficient information interaction and cannot dynamically balance the contribution of each branch. For example, direct concatenation or simple weighted fusion struggles to capture the deep correlations between cross-sub-network features, so the fused features lack focus on key semantics and spatial structure.
To address this challenge, the Cross-Attention Fusion Module (CAFM) was proposed. It applies cross-attention along both the channel and spatial dimensions to enable deep interaction between pixel-level and superpixel-level features and to improve the discriminability of the fused features.
2.2 Structure and Principle
CAFM consists of two parts, a channel attention cross module and a spatial attention fusion module, which use a bidirectional attention mechanism to enhance the complementarity of the features from different sub-networks. The structure is as follows:
2.2.1 Channel Attention Cross Module
- Feature description and interaction: For the features $F^c$ (shape $C \times H \times W$) from the two sub-networks (e.g., PMCsN and MGCsN), global average pooling and global max pooling are applied to obtain the channel descriptors $F_{avg}^c$ and $F_{max}^c$.
- The descriptors are passed through a shared two-layer neural network (MLP) to generate the channel weights $M_T^c$ and $M_H^c$:
$$M^c = \sigma\left(MLP(F_{avg}^c) + MLP(F_{max}^c)\right)$$
- A cross-weight matrix $M_{cross} = M_T^c \cdot (M_H^c)^T$ is then computed and normalized with Softmax, and the features of the two sub-networks are weighted by it to highlight the important cross-channel correlations:
$$T^c = \text{Softmax}(M_{cross}) \cdot T_{out}, \quad H^c = \text{Softmax}(M_{cross}^T) \cdot H_{out}$$
This bidirectional interaction across channels strengthens the complementarity of the features at the semantic level.
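As a purely illustrative aid (not the authors' original code; the full module used for YOLOv10 is given in Section 3), the channel cross-attention equations above can be sketched in PyTorch as follows, with hypothetical tensor names `t_out` and `h_out` standing in for the two sub-network features:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two sub-network feature maps of shape (B, C, H, W)
B, C, H, W = 2, 32, 40, 40
t_out = torch.randn(B, C, H, W)  # e.g. pixel-level branch
h_out = torch.randn(B, C, H, W)  # e.g. superpixel-level branch

# Shared two-layer MLP applied to pooled channel descriptors:
# M^c = sigma(MLP(F_avg^c) + MLP(F_max^c))
mlp = nn.Sequential(nn.Linear(C, C // 4), nn.ReLU(), nn.Linear(C // 4, C))

def channel_weights(x):
    avg = x.mean(dim=(2, 3))                   # (B, C) global average pooling
    mx = x.amax(dim=(2, 3))                    # (B, C) global max pooling
    return torch.sigmoid(mlp(avg) + mlp(mx))   # (B, C) channel weight vector

m_t = channel_weights(t_out)                   # M_T^c
m_h = channel_weights(h_out)                   # M_H^c

# Cross-weight matrix M_cross = M_T^c · (M_H^c)^T, shape (B, C, C)
m_cross = torch.bmm(m_t.unsqueeze(2), m_h.unsqueeze(1))

# T^c = Softmax(M_cross) · T_out and H^c = Softmax(M_cross^T) · H_out
t_flat, h_flat = t_out.flatten(2), h_out.flatten(2)        # (B, C, H*W)
t_c = torch.bmm(F.softmax(m_cross, dim=-1), t_flat).view(B, C, H, W)
h_c = torch.bmm(F.softmax(m_cross.transpose(1, 2), dim=-1), h_flat).view(B, C, H, W)
```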
2.2.2 Spatial Attention Fusion Module
- Spatial weight computation: For the channel-crossed features $T^c$ and $H^c$, average pooling and max pooling are applied along the channel dimension to obtain spatial descriptors, which are concatenated and passed through a convolution layer to produce the spatial weights $M_T^s$ and $M_H^s$:
$$M^s = f^{3 \times 3}\left([F_{avg}^s \oplus F_{max}^s]\right)$$
where $f^{3 \times 3}$ denotes a 3×3 convolution used to capture local spatial dependencies.
- Feature fusion and residual connection: The spatial weights are multiplied with the features, and a residual connection preserves the original information:
$$T^s = \text{Softmax}(M_T^s) \cdot T^c + T^c, \quad H^s = \text{Softmax}(M_H^s) \cdot H^c + H^c$$
Finally, $T^s$ and $H^s$ are concatenated and passed through a fully connected layer to produce the fused classification output.
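Again only as a sketch of the equations above (the exact convolution configuration here is an assumption, not taken from the paper), the spatial attention step can be illustrated as:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, C, H, W = 2, 32, 40, 40
t_c = torch.randn(B, C, H, W)  # channel-crossed feature from the previous step
h_c = torch.randn(B, C, H, W)

# Spatial weights: pool over channels, concatenate, 3x3 convolution
conv_spatial = nn.Conv2d(2, 1, kernel_size=3, padding=1)

def spatial_reweight(x):
    avg = x.mean(dim=1, keepdim=True)                    # (B, 1, H, W) channel-wise average
    mx = x.amax(dim=1, keepdim=True)                     # (B, 1, H, W) channel-wise max
    m_s = conv_spatial(torch.cat([avg, mx], dim=1))      # M^s = f^{3x3}([F_avg^s ⊕ F_max^s])
    w = F.softmax(m_s.flatten(2), dim=-1).view_as(m_s)   # softmax over spatial positions
    return x * w + x                                     # T^s = Softmax(M_T^s)·T^c + T^c

t_s = spatial_reweight(t_c)
h_s = spatial_reweight(h_c)
```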
2.3 Advantages
- Deep interaction of cross-sub-network features: Through the channel attention cross module, CAFM models the channel-wise dependencies between the features of the two sub-networks, for example relating the pixel-level detail features extracted by the CNN to the superpixel-level structural features extracted by the GCN.
- Dynamic weight allocation and noise suppression: The spatial attention module adaptively suppresses background noise and illumination interference, enhancing the spatial consistency of the changed regions.
- Lightweight and generalizable: The module is implemented with shared parameters and simple convolution operations, so its computational complexity is low.
Paper: https://ieeexplore.ieee.org/document/10098209
Source code: https://github.com/EdwardHaoz/IEEE_TGRS_AMGCFN
3. CAFM Implementation Code
The implementation of CAFM (together with the auxiliary Add and Add2 fusion modules used by the YAML configuration below) is as follows:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CAFM(nn.Module):
    """Cross-Attention Fusion Module: fuses two feature maps (e.g. RGB and IR) via
    channel cross-attention followed by spatial attention with residual connections."""

    def __init__(self, channels):
        super(CAFM, self).__init__()
        # Spatial attention: 2-channel (avg + max) map -> 1-channel weight map
        self.conv1_spatial = nn.Conv2d(2, 1, 3, stride=1, padding=1, groups=1)
        self.conv2_spatial = nn.Conv2d(1, 1, 3, stride=1, padding=1, groups=1)
        # Channel attention: 1x1 convolution bottlenecks
        # (separate branches for avg/max pooling and for each modality)
        self.avg1 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)
        self.avg2 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)
        self.max1 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)
        self.max2 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)
        self.avg11 = nn.Conv2d(64, channels, 1, stride=1, padding=0)
        self.avg22 = nn.Conv2d(64, channels, 1, stride=1, padding=0)
        self.max11 = nn.Conv2d(64, channels, 1, stride=1, padding=0)
        self.max22 = nn.Conv2d(64, channels, 1, stride=1, padding=0)

    def forward(self, x):
        rgb_fea = x[0]  # rgb_fea (tensor): dim:(B, C, H, W)
        ir_fea = x[1]   # ir_fea (tensor): dim:(B, C, H, W)
        assert rgb_fea.shape[0] == ir_fea.shape[0]
        bs, c, h, w = rgb_fea.shape

        # Flatten the spatial dimensions: (B, C, H*W)
        f1 = rgb_fea.reshape([bs, c, -1])
        f2 = ir_fea.reshape([bs, c, -1])

        # Channel descriptors of the RGB branch (global avg/max pooling + bottleneck)
        avg_1 = torch.mean(f1, dim=-1, keepdim=True).unsqueeze(-1)
        max_1, _ = torch.max(f1, dim=-1, keepdim=True)
        max_1 = max_1.unsqueeze(-1)
        avg_1 = F.relu(self.avg1(avg_1))
        max_1 = F.relu(self.max1(max_1))
        avg_1 = self.avg11(avg_1).squeeze(-1)
        max_1 = self.max11(max_1).squeeze(-1)
        a1 = avg_1 + max_1

        # Channel descriptors of the IR branch
        avg_2 = torch.mean(f2, dim=-1, keepdim=True).unsqueeze(-1)
        max_2, _ = torch.max(f2, dim=-1, keepdim=True)
        max_2 = max_2.unsqueeze(-1)
        avg_2 = F.relu(self.avg2(avg_2))
        max_2 = F.relu(self.max2(max_2))
        avg_2 = self.avg22(avg_2).squeeze(-1)
        max_2 = self.max22(max_2).squeeze(-1)
        a2 = avg_2 + max_2

        # Channel cross-attention: (B, C, C) cross-weight matrix between the two branches
        cross = torch.matmul(a1, a2.transpose(1, 2))
        a1_att = torch.matmul(F.softmax(cross, dim=-1), f1)
        a2_att = torch.matmul(F.softmax(cross.transpose(1, 2), dim=-1), f2)
        a1_att = a1_att.reshape([bs, c, h, w])
        a2_att = a2_att.reshape([bs, c, h, w])

        # Spatial attention for the RGB branch: channel-wise avg/max -> conv -> softmax
        avg_out_1 = torch.mean(a1_att, dim=1, keepdim=True)
        max_out_1, _ = torch.max(a1_att, dim=1, keepdim=True)
        a1_spatial = torch.cat([avg_out_1, max_out_1], dim=1)
        a1_spatial = F.relu(self.conv1_spatial(a1_spatial))
        a1_spatial = self.conv2_spatial(a1_spatial)
        a1_spatial = a1_spatial.reshape([bs, 1, -1])
        a1_spatial = F.softmax(a1_spatial, dim=-1)

        # Spatial attention for the IR branch
        avg_out_2 = torch.mean(a2_att, dim=1, keepdim=True)
        max_out_2, _ = torch.max(a2_att, dim=1, keepdim=True)
        a2_spatial = torch.cat([avg_out_2, max_out_2], dim=1)
        a2_spatial = F.relu(self.conv1_spatial(a2_spatial))
        a2_spatial = self.conv2_spatial(a2_spatial)
        a2_spatial = a2_spatial.reshape([bs, 1, -1])
        a2_spatial = F.softmax(a2_spatial, dim=-1)

        # Re-weight the original features with a residual connection
        f1_att = f1 * a1_spatial + f1
        f2_att = f2 * a2_spatial + f2
        f1_out = f1_att.view(bs, c, h, w)
        f2_out = f2_att.view(bs, c, h, w)
        return f1_out, f2_out


class Add(nn.Module):
    """Element-wise addition of two feature maps, resizing to a common spatial size if needed."""

    def __init__(self, arg):
        super().__init__()
        self.arg = arg

    def forward(self, x):
        assert len(x) == 2, "input must contain exactly two tensors to add"
        tensor_a, tensor_b = x[0], x[1]
        if tensor_a.shape[2:] != tensor_b.shape[2:]:
            target_size = tensor_a.shape[2:] if tensor_a.shape[2] >= tensor_b.shape[2] else tensor_b.shape[2:]
            tensor_a = F.interpolate(tensor_a, size=target_size, mode='bilinear', align_corners=False)
            tensor_b = F.interpolate(tensor_b, size=target_size, mode='bilinear', align_corners=False)
        return torch.add(tensor_a, tensor_b)


class Add2(nn.Module):
    """Add one of the two CAFM outputs (selected by index) to a single-stream feature map."""

    def __init__(self, c1, index):
        super().__init__()
        self.index = index

    def forward(self, x):
        assert len(x) == 2, "input must contain exactly two elements"
        src, trans = x[0], x[1]
        trans_part = trans[0] if self.index == 0 else trans[1]
        if src.shape[2:] != trans_part.shape[2:]:
            trans_part = F.interpolate(trans_part, size=src.shape[2:], mode='bilinear', align_corners=False)
        return torch.add(src, trans_part)
```
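A quick standalone sanity check of these modules can look like the following (illustrative only; the batch size, channel count, and feature size are arbitrary, and the snippet assumes the classes above are available in the same file):

```python
if __name__ == "__main__":
    # Two feature maps of the same shape, e.g. visible and infrared branch outputs
    rgb = torch.randn(2, 64, 80, 80)
    ir = torch.randn(2, 64, 80, 80)

    cafm = CAFM(channels=64)
    rgb_out, ir_out = cafm([rgb, ir])
    print(rgb_out.shape, ir_out.shape)  # torch.Size([2, 64, 80, 80]) each

    # Add2 picks one of the two CAFM outputs and adds it back to a single stream
    add2 = Add2(c1=64, index=0)
    fused = add2([rgb, (rgb_out, ir_out)])
    print(fused.shape)  # torch.Size([2, 64, 80, 80])
```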
4. Integration Steps
4.1 Step 1
① In the ultralytics/nn/ directory, create a new folder named AddModules to hold the module code.
② In the AddModules folder, create CAFM.py and paste the code from Section 3 into it.
4.2 Step 2
In the AddModules folder, create __init__.py (skip this if it already exists) and import the module inside it:
from .CAFM import *
4.3 Step 3
In the ultralytics/nn/modules/tasks.py file, the module class names need to be added in two places.
First, import the modules.
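A minimal sketch of that import is shown below; the exact package path is an assumption based on the folder layout created in steps 4.1 and 4.2:

```python
from ultralytics.nn.AddModules import CAFM, Add, Add2
```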
Then, register the Add, Add2, and CAFM modules in the parse_model function:
```python
        elif m is Add:
            c2 = ch[f[0]]
            args = [c2]
        elif m is Add2:
            c2 = ch[f[0]]
            args = [c2, args[1]]
        elif m is CAFM:
            c2 = ch[f[0]]
            args = [c2]
```
5. YAML Model File
5.1 Improved Model Version 1 ⭐
Here, ultralytics/cfg/models/v10/yolov10n.yaml is taken as the example. In the same directory, create a model file for training on your own dataset, named yolov10n-CAFM-p234.yaml, and copy the content below into it.
📌 This variant performs cross-modal fusion of the P2, P3, and P4 backbone features between the two modalities.
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv10 object detection model. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov10n.yaml' will call yolov10.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, IN, []] # 0
  - [-1, 1, Multiin, [1]] # 1
  - [-2, 1, Multiin, [2]] # 2
  # Visible
  - [1, 1, Conv, [64, 3, 2]] # 3-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 4-P2/4
  - [-1, 3, C2f, [128, True]] # 5
  # Infrared
  - [2, 1, Conv, [64, 3, 2]] # 6-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 7-P2/4
  - [-1, 3, C2f, [128, True]] # 8
  # Cross-modal fusion (P2)
  - [[5, 8], 1, CAFM, [128]] # 9-P2/4
  - [[5, 9], 1, Add2, [128, 0]] # 10-P2/4 stream one: x + trans[0]
  - [[8, 9], 1, Add2, [128, 1]] # 11-P2/4 stream two: x + trans[1]
  # Visible
  - [10, 1, Conv, [256, 3, 2]] # 12-P3/8
  - [-1, 6, C2f, [256, True]] # 13
  # Infrared
  - [11, 1, Conv, [256, 3, 2]] # 14-P3/8
  - [-1, 6, C2f, [256, True]] # 15
  # Cross-modal fusion (P3)
  - [[13, 15], 1, CAFM, [256]] # 16-P3/8
  - [[13, 16], 1, Add2, [256, 0]] # 17-P3/8 stream one: x + trans[0]
  - [[15, 16], 1, Add2, [256, 1]] # 18-P3/8 stream two: x + trans[1]
  # Visible
  - [17, 1, SCDown, [512, 3, 2]] # 19-P4/16
  - [-1, 6, C2f, [512, True]] # 20
  # Infrared
  - [18, 1, SCDown, [512, 3, 2]] # 21-P4/16
  - [-1, 6, C2f, [512, True]] # 22
  # Cross-modal fusion (P4)
  - [[20, 22], 1, CAFM, [512]] # 23-P4/16
  - [[20, 23], 1, Add2, [512, 0]] # 24-P4/16 stream one: x + trans[0]
  - [[22, 23], 1, Add2, [512, 1]] # 25-P4/16 stream two: x + trans[1]
  # Visible
  - [24, 1, SCDown, [1024, 3, 2]] # 26-P5/32
  - [-1, 3, C2f, [1024, True]] # 27
  - [-1, 1, SPPF, [1024, 5]] # 28
  - [-1, 1, PSA, [1024]] # 29
  # Infrared
  - [25, 1, SCDown, [1024, 3, 2]] # 30-P5/32
  - [-1, 3, C2f, [1024, True]] # 31
  - [-1, 1, SPPF, [1024, 5]] # 32
  - [-1, 1, PSA, [1024]] # 33
  # Fused backbone outputs
  - [[17, 18], 1, Add, [1]] # 34-P3/8 fusion backbone P3
  - [[24, 25], 1, Add, [1]] # 35-P4/16 fusion backbone P4
  - [[29, 33], 1, Add, [1]] # 36-P5/32 fusion backbone P5

# YOLOv10.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]] # 37
  - [[-1, 35], 1, Concat, [1]] # 38 cat backbone P4
  - [-1, 3, C2f, [512]] # 39
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]] # 40
  - [[-1, 34], 1, Concat, [1]] # 41 cat backbone P3
  - [-1, 3, C2f, [256]] # 42 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]] # 43
  - [[-1, 39], 1, Concat, [1]] # 44 cat head P4
  - [-1, 3, C2f, [512]] # 45 (P4/16-medium)
  - [-1, 1, SCDown, [512, 3, 2]] # 46
  - [[-1, 36], 1, Concat, [1]] # 47 cat head P5
  - [-1, 3, C2fCIB, [1024, True, True]] # 48 (P5/32-large)
  - [[42, 45, 48], 1, v10Detect, [nc]] # Detect(P3, P4, P5)
```
6. Running Results
Printing the network structure shows that the fusion layers have been added to the model, which can now be trained.
YOLOv10n-CAFM-p234:
```
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
4 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
5 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
6 2 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
7 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
8 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
9 [5, 8] 1 16797 ultralytics.nn.AddModules.CAFM.CAFM [32]
10 [5, 9] 1 0 ultralytics.nn.AddModules.CFT.Add2 [32, 0]
11 [8, 9] 1 0 ultralytics.nn.AddModules.CFT.Add2 [32, 1]
12 10 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
13 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
14 11 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
15 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
16 [13, 15] 1 33309 ultralytics.nn.AddModules.CAFM.CAFM [64]
17 [13, 16] 1 0 ultralytics.nn.AddModules.CFT.Add2 [64, 0]
18 [15, 16] 1 0 ultralytics.nn.AddModules.CFT.Add2 [64, 1]
19 17 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
20 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
21 18 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
22 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
23 [20, 22] 1 66333 ultralytics.nn.AddModules.CAFM.CAFM [128]
24 [20, 23] 1 0 ultralytics.nn.AddModules.CFT.Add2 [128, 0]
25 [22, 23] 1 0 ultralytics.nn.AddModules.CFT.Add2 [128, 1]
26 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
27 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
28 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
29 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
30 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
31 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
32 [17, 18] 1 0 ultralytics.nn.AddModules.CFT.Add [64]
33 [24, 25] 1 0 ultralytics.nn.AddModules.CFT.Add [128]
34 [28, 31] 1 0 ultralytics.nn.AddModules.CFT.Add [256]
35 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
36 [-1, 33] 1 0 ultralytics.nn.modules.conv.Concat [1]
37 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]
38 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
39 [-1, 32] 1 0 ultralytics.nn.modules.conv.Concat [1]
40 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]
41 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
42 [-1, 37] 1 0 ultralytics.nn.modules.conv.Concat [1]
43 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1]
44 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
45 [-1, 34] 1 0 ultralytics.nn.modules.conv.Concat [1]
46 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1]
47 [40, 43, 46] 1 430867 ultralytics.nn.modules.head.Detect [1, [64, 128, 256]]
YOLOv8-CAFM-p234 summary: 391 layers, 4,374,410 parameters, 4,374,394 gradients, 9.7 GFLOPs
```
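For reference, a minimal sketch of building and inspecting the modified model with the standard Ultralytics API is shown below. This is only an assumption of typical usage: the multimodal data pipeline that supplies the 6-channel (visible + infrared) input expected by ch: 6, and the dataset configuration file name, are placeholders.

```python
from ultralytics import YOLO

# Build the model from the modified config and print its structure
model = YOLO("ultralytics/cfg/models/v10/yolov10n-CAFM-p234.yaml")
model.info()  # prints the layer table and the parameter / GFLOPs summary

# Training on a paired visible + infrared dataset (placeholder data config)
# model.train(data="your_multimodal_dataset.yaml", epochs=100, imgsz=640)
```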