

[YOLOv13 Multimodal Fusion Improvement] | CAFM: Channel-Spatial Cross-Attention Mechanism | Dynamically Capturing the Importance of Cross-Modal Features and Suppressing Redundant Information

1. Introduction

This post documents a multimodal object detection network that improves YOLOv13 with the CAFM module.

The CAFM module (Cross-Attention Fusion Module) introduces a channel-spatial cross-attention mechanism at the feature fusion stage to dynamically generate cross-subnetwork fusion weights. The module adaptively captures the semantic correlations between pixel-level and superpixel-level features, suppresses irrelevant background interference, and enables deep interaction and mutual enhancement between high-level semantics and spatial structure. This provides precise cross-modal feature representations for the detection task, improving the model's classification accuracy and robustness across modalities.



2. The CAFM Module

Attention Multihop Graph and Multiscale Convolutional Fusion Network for Hyperspectral Image Classification

2.1 Design Motivation

Traditional feature fusion methods often suffer from insufficient information interaction and an inability to dynamically balance each branch's contribution when combining features from different subnetworks (e.g. a CNN and a GCN). For example, direct concatenation or simple weighted fusion struggles to capture the deeper correlations between cross-subnetwork features, so the fused features fail to focus on key semantics and spatial structure.

To address this, the Cross-Attention Fusion Module (CAFM) was proposed. Through cross-attention over the channel and spatial dimensions, it enables deep interaction between pixel-level and superpixel-level features and improves the discriminative power of the fused features.

2.2 Structure and Principle

The CAFM module consists of two parts, a channel attention cross module and a spatial attention fusion module, which enhance the complementarity of the two subnetworks' features through a bidirectional attention mechanism.


2.2.1 Channel Attention Cross Module

  • Feature description and interaction
    For the features $F^c$ (shape $C \times H \times W$) from the two subnetworks (e.g. PMCsN and MGCsN), apply global average pooling and global max pooling to obtain the channel-wise descriptor vectors $F_{avg}^c$ and $F_{max}^c$.
    • These are processed by a shared two-layer MLP to produce the channel weights $M_T^c$ and $M_H^c$:
      $M^c = \sigma\left(MLP(F_{avg}^c) + MLP(F_{max}^c)\right)$
    • Compute the cross-weight matrix $M_{cross} = M_T^c \cdot (M_H^c)^T$; after Softmax normalization it reweights the two subnetworks' features, highlighting important cross-channel correlations:
      $T^c = \text{Softmax}(M_{cross}) \cdot T_{out}, \quad H^c = \text{Softmax}(M_{cross}^T) \cdot H_{out}$
      This bidirectional channel interaction strengthens the semantic complementarity of the two feature sets.
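The channel cross-attention above can be sketched in a few lines of PyTorch. This is an illustrative, self-contained toy, not the paper's implementation: the tensor sizes, the MLP bottleneck ratio, and the helper name `channel_descriptor` are our own choices. It only shows how the pooled descriptors, the cross-weight matrix $M_{cross}$, and the softmax reweighting fit together:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, C, H, W = 2, 16, 8, 8
T_out = torch.randn(B, C, H, W)   # e.g. CNN-branch features
H_out = torch.randn(B, C, H, W)   # e.g. GCN-branch features

# shared two-layer MLP with a bottleneck (ratio 4 is an illustrative choice)
mlp = nn.Sequential(nn.Linear(C, C // 4), nn.ReLU(), nn.Linear(C // 4, C))

def channel_descriptor(feat):
    # global average + max pooling over the spatial dims, then the shared MLP
    avg = feat.mean(dim=(2, 3))
    mx = feat.amax(dim=(2, 3))
    return torch.sigmoid(mlp(avg) + mlp(mx))          # (B, C) channel weights

M_T, M_H = channel_descriptor(T_out), channel_descriptor(H_out)
M_cross = M_T.unsqueeze(2) @ M_H.unsqueeze(1)          # (B, C, C) cross-weight matrix

# softmax-normalised cross weights reweight each branch's flattened features
T_c = F.softmax(M_cross, dim=-1) @ T_out.flatten(2)    # (B, C, H*W)
H_c = F.softmax(M_cross.transpose(1, 2), dim=-1) @ H_out.flatten(2)
print(T_c.shape, H_c.shape)
```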

2.2.2 Spatial Attention Fusion Module

  • Spatial weight computation
    For the channel-crossed features $T^c$ and $H^c$, perform average pooling and max pooling along the channel dimension, concatenate the results, and pass them through a convolution layer to generate the spatial weights $M_T^s$ and $M_H^s$:
    $M^s = f^{3 \times 3}\left([F_{avg}^s \oplus F_{max}^s]\right)$
    where $f^{3 \times 3}$ is a 3×3 convolution that captures local spatial dependencies.
  • Feature fusion and residual connection
    Multiply the spatial weights with the features and add a residual connection to preserve the original information:
    $T^s = \text{Softmax}(M_T^s) \cdot T^c + T^c, \quad H^s = \text{Softmax}(M_H^s) \cdot H^c + H^c$
    Finally, $T^s$ and $H^s$ are concatenated and passed through a fully connected layer to output the fused classification result.
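The spatial weight computation can be sketched similarly. Again this is an illustrative toy, with tensor sizes of our own choosing, showing how the channel-wise pooled maps, the 3×3 convolution, the spatial softmax, and the residual connection combine:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, C, H, W = 2, 16, 8, 8
T_c = torch.randn(B, C, H, W)          # channel-crossed features from the previous step
conv3x3 = nn.Conv2d(2, 1, kernel_size=3, padding=1)  # plays the role of f^{3x3}

avg_s = T_c.mean(dim=1, keepdim=True)  # (B, 1, H, W) channel-wise average pooling
max_s = T_c.amax(dim=1, keepdim=True)  # (B, 1, H, W) channel-wise max pooling
M_s = conv3x3(torch.cat([avg_s, max_s], dim=1))       # (B, 1, H, W) spatial weights

# softmax over all spatial positions, then reweight with a residual connection
w = F.softmax(M_s.flatten(2), dim=-1).view(B, 1, H, W)
T_s = w * T_c + T_c
print(T_s.shape)
```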

2.3 Advantages

  1. Deep cross-subnetwork feature interaction: through the channel attention cross module, CAFM models the channel-wise dependencies between the two subnetworks' features, e.g. linking the pixel-level detail features extracted by the CNN with the superpixel-level structural features extracted by the GCN.

  2. Dynamic weight allocation and noise suppression: the spatial attention module adaptively suppresses background noise and illumination interference, improving the spatial consistency of salient regions.

  3. Lightweight and generalizable: the module is built from shared parameters and simple convolutions, keeping computational complexity low.

Paper: https://ieeexplore.ieee.org/document/10098209
Source code: https://github.com/EdwardHaoz/IEEE_TGRS_AMGCFN

3. CAFM Implementation Code

The implementation of CAFM is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CAFM(nn.Module):  # Cross Attention Fusion Module
    def __init__(self, channels):
        super(CAFM, self).__init__()

        self.conv1_spatial = nn.Conv2d(2, 1, 3, stride=1, padding=1, groups=1)
        self.conv2_spatial = nn.Conv2d(1, 1, 3, stride=1, padding=1, groups=1)

        self.avg1 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)
        self.avg2 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)
        self.max1 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)
        self.max2 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)

        self.avg11 = nn.Conv2d(64, channels, 1, stride=1, padding=0)
        self.avg22 = nn.Conv2d(64, channels, 1, stride=1, padding=0)
        self.max11 = nn.Conv2d(64, channels, 1, stride=1, padding=0)
        self.max22 = nn.Conv2d(64, channels, 1, stride=1, padding=0)

    def forward(self, x):
        rgb_fea = x[0]  # rgb_fea (tensor): dim:(B, C, H, W)
        ir_fea = x[1]   # ir_fea (tensor): dim:(B, C, H, W)
        assert rgb_fea.shape[0] == ir_fea.shape[0]
        bs, c, h, w = rgb_fea.shape

        # flatten the spatial dimensions: (B, C, H*W)
        f1 = rgb_fea.reshape([bs, c, -1])
        f2 = ir_fea.reshape([bs, c, -1])

        # channel descriptor for the RGB branch: global average/max pooling
        # followed by a two-layer bottleneck MLP built from 1x1 convolutions
        avg_1 = torch.mean(f1, dim=-1, keepdim=True).unsqueeze(-1)
        max_1, _ = torch.max(f1, dim=-1, keepdim=True)
        max_1 = max_1.unsqueeze(-1)

        avg_1 = F.relu(self.avg1(avg_1))
        max_1 = F.relu(self.max1(max_1))
        avg_1 = self.avg11(avg_1).squeeze(-1)
        max_1 = self.max11(max_1).squeeze(-1)
        a1 = avg_1 + max_1

        # channel descriptor for the IR branch (same pipeline as above)
        avg_2 = torch.mean(f2, dim=-1, keepdim=True).unsqueeze(-1)
        max_2, _ = torch.max(f2, dim=-1, keepdim=True)
        max_2 = max_2.unsqueeze(-1)

        avg_2 = F.relu(self.avg2(avg_2))
        max_2 = F.relu(self.max2(max_2))
        avg_2 = self.avg22(avg_2).squeeze(-1)
        max_2 = self.max22(max_2).squeeze(-1)
        a2 = avg_2 + max_2

        # cross-weight matrix between the two branches' channel descriptors: (B, C, C)
        cross = torch.matmul(a1, a2.transpose(1, 2))

        # softmax-normalised cross weights reweight each branch's flattened features
        a1_att = torch.matmul(F.softmax(cross, dim=-1), f1)
        a2_att = torch.matmul(F.softmax(cross.transpose(1, 2), dim=-1), f2)

        a1_att = a1_att.reshape([bs, c, h, w])
        a2_att = a2_att.reshape([bs, c, h, w])

        # spatial attention for branch 1: channel-wise avg/max pooling,
        # two convolutions, then a softmax over all spatial positions
        avg_out_1 = torch.mean(a1_att, dim=1, keepdim=True)
        max_out_1, _ = torch.max(a1_att, dim=1, keepdim=True)
        a1_spatial = torch.cat([avg_out_1, max_out_1], dim=1)
        a1_spatial = F.relu(self.conv1_spatial(a1_spatial))
        a1_spatial = self.conv2_spatial(a1_spatial)
        a1_spatial = a1_spatial.reshape([bs, 1, -1])
        a1_spatial = F.softmax(a1_spatial, dim=-1)

        # spatial attention for branch 2 (the convolution layers are shared)
        avg_out_2 = torch.mean(a2_att, dim=1, keepdim=True)
        max_out_2, _ = torch.max(a2_att, dim=1, keepdim=True)
        a2_spatial = torch.cat([avg_out_2, max_out_2], dim=1)
        a2_spatial = F.relu(self.conv1_spatial(a2_spatial))
        a2_spatial = self.conv2_spatial(a2_spatial)
        a2_spatial = a2_spatial.reshape([bs, 1, -1])
        a2_spatial = F.softmax(a2_spatial, dim=-1)

        # reweight with residual connections to preserve the original features
        f1_att = f1 * a1_spatial + f1
        f2_att = f2 * a2_spatial + f2

        f1_out = f1_att.view(bs, c, h, w)
        f2_out = f2_att.view(bs, c, h, w)

        return f1_out, f2_out

class Add(nn.Module):
    def __init__(self, arg):
        super().__init__()
        self.arg = arg

    def forward(self, x):
        assert len(x) == 2, "input must contain exactly two tensors to add"
        tensor_a, tensor_b = x[0], x[1]

        # align spatial sizes to the larger of the two before adding
        if tensor_a.shape[2:] != tensor_b.shape[2:]:
            target_size = tensor_a.shape[2:] if tensor_a.shape[2] >= tensor_b.shape[2] else tensor_b.shape[2:]
            tensor_a = F.interpolate(tensor_a, size=target_size, mode='bilinear', align_corners=False)
            tensor_b = F.interpolate(tensor_b, size=target_size, mode='bilinear', align_corners=False)

        return torch.add(tensor_a, tensor_b)

class Add2(nn.Module):
    def __init__(self, c1, index):
        super().__init__()
        self.index = index

    def forward(self, x):
        assert len(x) == 2, "input must contain two tensors"
        src, trans = x[0], x[1]

        # pick one of CAFM's two outputs according to the stream index
        trans_part = trans[0] if self.index == 0 else trans[1]

        # align spatial size to the source stream before the residual add
        if src.shape[2:] != trans_part.shape[2:]:
            trans_part = F.interpolate(
                trans_part,
                size=src.shape[2:],
                mode='bilinear',
                align_corners=False
            )

        return torch.add(src, trans_part)
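When the two streams arrive at different resolutions, Add2 bilinearly resamples the fused feature to the source stream's size before the residual add. The alignment step alone can be sketched with dummy tensors (the sizes below are illustrative):

```python
import torch
import torch.nn.functional as F

src = torch.randn(1, 64, 40, 40)    # backbone stream feature
trans = torch.randn(1, 64, 20, 20)  # fused feature at a coarser resolution

# bilinear upsampling to the source resolution, as Add2 does before torch.add
aligned = F.interpolate(trans, size=src.shape[2:], mode='bilinear', align_corners=False)
out = src + aligned
print(out.shape)
```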

4. Integration Steps

4.1 Modification 1

① Under the ultralytics/nn/ directory, create a new AddModules folder to hold the module code.

② Inside the AddModules folder, create CAFM.py and paste the code from Section 3 into it.


4.2 Modification 2

Inside the AddModules folder, create __init__.py (skip this if it already exists) and import the module in that file: from .CAFM import *


4.3 Modification 3

In the ultralytics/nn/tasks.py file, the module class names need to be added in two places.

First, import the modules at the top of the file (e.g. from ultralytics.nn.AddModules import *).


Then, register the Add, Add2, and CAFM modules in the parse_model function:


        elif m is Add:
            c2 = ch[f[0]]          # output channels follow the first input
            args = [c2]
        elif m is Add2:
            c2 = ch[f[0]]
            args = [c2, args[1]]   # args[1] is the stream index (0 or 1)
        elif m is CAFM:
            c2 = ch[f[0]]
            args = [c2]



5. YAML Model File

5.1 Improved Model Version 1 ⭐

Create a model file named yolov13-CAFM-p234.yaml for training on your own dataset.

Copy the content below into yolov13-CAFM-p234.yaml.

📌 This modification performs cross-modal fusion between the P2, P3, and P4 levels of the two modalities' backbones.

ch: 6 # input channels (3 visible + 3 infrared)
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov13n.yaml' will call yolov13.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024]   # Nano
  s: [0.50, 0.50, 1024]   # Small
  l: [1.00, 1.00, 512]    # Large
  x: [1.00, 1.50, 512]    # Extra Large

backbone:
  # [from, repeats, module, args]
  - [-1, 1, IN, []]  # 0
  - [-1, 1, Multiin, [1]]  # 1
  - [-2, 1, Multiin, [2]]  # 2

    # Visible
  - [1, 1, Conv,  [64, 3, 2]] # 3-P1/2
  - [-1, 1, Conv,  [128, 3, 2, 1, 2]] # 4-P2/4
  - [-1, 2, DSC3k2,  [256, False, 0.25]]
    # infrared
  - [2, 1, Conv,  [64, 3, 2]] # 6-P1/2
  - [-1, 1, Conv,  [128, 3, 2, 1, 2]] # 7-P2/4
  - [-1, 2, DSC3k2,  [256, False, 0.25]]

  # transformer fusion
  - [ [ 5,8 ], 1, CAFM, [ 256 ] ] # 9-P2/4
  - [ [ 5,9 ], 1, Add2, [ 256,0 ] ]  # 10-P2/4 stream one:x+trans[0]
  - [ [ 8,9 ], 1, Add2, [ 256,1 ] ]  # 11-P2/4 stream two:x+trans[1]

    # Visible
  - [10, 1, Conv,  [256, 3, 2, 1, 4]] # 12-P3/8
  - [-1, 2, DSC3k2,  [512, False, 0.25]]
    # infrared
  - [11, 1, Conv,  [256, 3, 2, 1, 4]] # 14-P3/8
  - [-1, 2, DSC3k2,  [512, False, 0.25]]

  # transformer fusion
  - [ [ 13,15 ], 1, CAFM, [ 256 ] ]   # 16-P3/8
  - [ [ 13,16 ], 1, Add2, [ 256,0 ] ]    # 17-P3/8 stream one x+trans[0]
  - [ [ 15,16 ], 1, Add2, [ 256,1 ] ]    # 18-P3/8 stream two x+trans[1]

    # Visible
  - [17, 1, DSConv,  [512, 3, 2]] # 19-P4/16
  - [-1, 4, A2C2f, [512, True, 4]]
    # infrared
  - [18, 1, DSConv,  [512, 3, 2]] # 21-P4/16
  - [-1, 4, A2C2f, [512, True, 4]]

  # transformer fusion
  - [ [ 20,23 ], 1, CAFM, [ 512 ] ]   # 23-P4/16
  - [ [ 20,23 ], 1, Add2, [ 512,0 ] ]    # 24-P4/16 stream one x+trans[0]
  - [ [ 22,23 ], 1, Add2, [ 512,1 ] ]    # 25-P4/16 stream two x+trans[1]

    # Visible
  - [24, 1, DSConv,  [1024, 3, 2]] # 26-P5/32
  - [-1, 4, A2C2f, [1024, True, 1]] # 27
    # infrared
  - [25, 1, DSConv,  [1024, 3, 2]] # 28-P5/32
  - [-1, 4, A2C2f, [1024, True, 1]] # 29

  - [ [ 17,18 ], 1, Add, [ 1 ] ]   # 30-P3/8 fusion backbone P3
  - [ [ 24,25 ], 1, Add, [ 1 ] ]   # 31-P4/16 fusion backbone P4
  - [ [ 27,29 ], 1, Add, [ 1 ] ]   # 32-P5/32 fusion backbone P5

head:
  - [[30, 31, 32], 2, HyperACE, [512, 8, True, True, 0.5, 1, "both"]]
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [ 33, 1, DownsampleConv, []]
  - [[31, 33], 1, FullPAD_Tunnel, []]  # 36     
  - [[30, 34], 1, FullPAD_Tunnel, []]  # 37    
  - [[32, 35], 1, FullPAD_Tunnel, []] # 38 

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 36], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, DSC3k2, [512, True]] # 41
  - [[-1, 33], 1, FullPAD_Tunnel, []]  # 42

  - [41, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 37], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, DSC3k2, [256, True]] # 45
  - [34, 1, Conv, [256, 1, 1]]
  - [[45, 46], 1, FullPAD_Tunnel, []]  # 47

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 42], 1, Concat, [1]] # cat head P4
  - [-1, 2, DSC3k2, [512, True]] # 50
  - [[-1, 33], 1, FullPAD_Tunnel, []]

  - [50, 1, Conv, [512, 3, 2]]
  - [[-1, 38], 1, Concat, [1]] # cat head P5
  - [-1, 2, DSC3k2, [1024,True]] # 54 (P5/32-large)
  - [[-1, 35], 1, FullPAD_Tunnel, []]

  - [[47, 51, 55], 1, Detect, [nc]] # Detect(P3, P4, P5)


6. Successful Run Results

Printing the network model shows that the fusion layers have been added, and the model is ready for training.

YOLOv13-CAFM-p234

YOLOv13-CAFM-p234 summary: 915 layers, 3,607,337 parameters, 3,607,321 gradients, 8.5 GFLOPs

                   from  n    params  module                                       arguments
  0                  -1  1         0  ultralytics.nn.AddModules.multimodal.IN      []
  1                  -1  1         0  ultralytics.nn.AddModules.multimodal.Multiin [1]
  2                  -2  1         0  ultralytics.nn.AddModules.multimodal.Multiin [2]
  3                   1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
  4                  -1  1      2368  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2, 1, 2]
  5                  -1  1      5792  ultralytics.nn.modules.block.DSC3k2          [32, 64, 1, False, 0.25]
  6                   2  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
  7                  -1  1      2368  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2, 1, 2]
  8                  -1  1      5792  ultralytics.nn.modules.block.DSC3k2          [32, 64, 1, False, 0.25]
  9              [5, 8]  1     33309  ultralytics.nn.AddModules.CAFM.CAFM          [64]
 10              [5, 9]  1         0  ultralytics.nn.AddModules.CFT.Add2           [64, 0]
 11              [8, 9]  1         0  ultralytics.nn.AddModules.CFT.Add2           [64, 1]
 12                  10  1      9344  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2, 1, 4]
 13                  -1  1     20800  ultralytics.nn.modules.block.DSC3k2          [64, 128, 1, False, 0.25]
 14                  11  1      9344  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2, 1, 4]
 15                  -1  1     20800  ultralytics.nn.modules.block.DSC3k2          [64, 128, 1, False, 0.25]
 16            [13, 15]  1     66333  ultralytics.nn.AddModules.CAFM.CAFM          [128]
 17            [13, 16]  1         0  ultralytics.nn.AddModules.CFT.Add2           [128, 0]
 18            [15, 16]  1         0  ultralytics.nn.AddModules.CFT.Add2           [128, 1]
 19                  17  1     17792  ultralytics.nn.modules.conv.DSConv           [128, 128, 3, 2]
 20                  -1  2    180864  ultralytics.nn.AddModules.A2C2f.A2C2f        [128, 128, 2, True, 4]
 21                  18  1     17792  ultralytics.nn.modules.conv.DSConv           [128, 128, 3, 2]
 22                  -1  2    180864  ultralytics.nn.AddModules.A2C2f.A2C2f        [128, 128, 2, True, 4]
 23            [20, 22]  1     66333  ultralytics.nn.AddModules.CAFM.CAFM          [128]
 24            [20, 23]  1         0  ultralytics.nn.AddModules.CFT.Add2           [128, 0]
 25            [22, 23]  1         0  ultralytics.nn.AddModules.CFT.Add2           [128, 1]
 26                  24  1     34432  ultralytics.nn.modules.conv.DSConv           [128, 256, 3, 2]
 27                  -1  2    689408  ultralytics.nn.AddModules.A2C2f.A2C2f        [256, 256, 2, True, 1]
 28                  25  1     34432  ultralytics.nn.modules.conv.DSConv           [128, 256, 3, 2]
 29                  -1  2    689408  ultralytics.nn.AddModules.A2C2f.A2C2f        [256, 256, 2, True, 1]
 30            [17, 18]  1         0  ultralytics.nn.AddModules.CFT.Add            [128]
 31            [24, 25]  1         0  ultralytics.nn.AddModules.CFT.Add            [128]
 32            [27, 29]  1         0  ultralytics.nn.AddModules.CFT.Add            [256]
 33        [30, 31, 32]  1    273536  ultralytics.nn.modules.block.HyperACE        [128, 128, 1, 4, True, True, 0.5, 1, 'both']
 34                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 35                  33  1     33280  ultralytics.nn.modules.block.DownsampleConv  [128]
 36            [31, 33]  1         1  ultralytics.nn.modules.block.FullPAD_Tunnel  []
 37            [30, 34]  1         1  ultralytics.nn.modules.block.FullPAD_Tunnel  []
 38            [32, 35]  1         1  ultralytics.nn.modules.block.FullPAD_Tunnel  []
 39                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 40            [-1, 36]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 41                  -1  1    115328  ultralytics.nn.modules.block.DSC3k2          [384, 128, 1, True]
 42            [-1, 33]  1         1  ultralytics.nn.modules.block.FullPAD_Tunnel  []
 43                  41  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 44            [-1, 37]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 45                  -1  1     35136  ultralytics.nn.modules.block.DSC3k2          [256, 64, 1, True]
 46                  34  1      8320  ultralytics.nn.modules.conv.Conv             [128, 64, 1, 1]
 47            [45, 46]  1         1  ultralytics.nn.modules.block.FullPAD_Tunnel  []
 48                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
 49            [-1, 42]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 50                  -1  1     90752  ultralytics.nn.modules.block.DSC3k2          [192, 128, 1, True]
 51            [-1, 33]  1         1  ultralytics.nn.modules.block.FullPAD_Tunnel  []
 52                  50  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
 53            [-1, 38]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 54                  -1  1    345344  ultralytics.nn.modules.block.DSC3k2          [384, 256, 1, True]
 55            [-1, 35]  1         1  ultralytics.nn.modules.block.FullPAD_Tunnel  []
 56        [47, 51, 55]  1    432427  ultralytics.nn.modules.head.Detect           [9, [64, 128, 256]]