
[RT-DETR Multimodal Fusion Improvement] | CAFM: Channel-Spatial Cross-Attention Mechanism | Dynamically Capturing the Importance of Cross-Modal Features and Suppressing Redundant Information

1. Introduction

This article documents how to improve the RT-DETR multimodal object detection network with the CAFM module.

The CAFM module (Cross-Attention Fusion Module) introduces a channel-spatial cross-attention mechanism in the feature fusion stage to dynamically generate fusion weights across subnetworks. It adaptively captures the semantic associations between pixel-level and superpixel-level features, suppresses irrelevant background interference, and enables deep interaction and complementary enhancement between high-level semantics and spatial structure. This provides the detection task with precise cross-modal feature representations and improves the model's classification accuracy and robustness across modality scenarios.



2. CAFM Module Overview

Attention Multihop Graph and Multiscale Convolutional Fusion Network for Hyperspectral Image Classification

2.1 Design Motivation

Traditional fusion methods often suffer from insufficient information interaction and cannot dynamically balance the contributions of different subnetworks (e.g., a CNN and a GCN) when combining their features. For example, direct concatenation or simple weighted fusion struggles to capture the deep associations between cross-subnetwork features, so the fused features fail to focus on key semantics and spatial structure.

To address this challenge, the Cross-Attention Fusion Module (CAFM) was proposed. Through cross-attention over the channel and spatial dimensions, it enables deep interaction between pixel-level and superpixel-level features and improves the discriminability of the fused features.

2.2 Structure and Principle

The CAFM module consists of two parts, a channel attention cross module and a spatial attention fusion module, which use a bidirectional attention mechanism to enhance the complementarity of the two subnetworks' features. Its structure is shown below:

(Figure: overall structure of the CAFM module)

2.2.1 Channel Attention Cross Module

  • Feature description and interaction
    For the features $F^c$ (of shape $C \times H \times W$) from the two subnetworks (e.g., PMCsN and MGCsN), apply global average pooling and global max pooling to obtain the channel descriptors $F_{avg}^c$ and $F_{max}^c$.
    • A shared two-layer MLP then produces the channel weights $M_T^c$ and $M_H^c$:
      $M^c = \sigma\left(MLP(F_{avg}^c) + MLP(F_{max}^c)\right)$
    • Compute the cross-weight matrix $M_{cross} = M_T^c \cdot (M_H^c)^T$, normalize it with Softmax, and use it to weight the two subnetworks' features so that important cross-channel associations are highlighted (a code sketch follows this list):
      $T^c = \text{Softmax}(M_{cross}) \cdot T_{out}, \quad H^c = \text{Softmax}(M_{cross}^T) \cdot H_{out}$
      This bidirectional interaction across channels strengthens the semantic complementarity of the two features.
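
A minimal PyTorch sketch of this channel cross-weighting (the tensor shapes, the MLP reduction ratio of 4, and the sigmoid gate are illustrative assumptions, not the paper's exact configuration):

import torch
import torch.nn as nn
import torch.nn.functional as F

B, C, HW = 2, 64, 400                 # batch, channels, flattened spatial positions
T_out = torch.randn(B, C, HW)         # e.g. pixel-level (CNN) branch features
H_out = torch.randn(B, C, HW)         # e.g. superpixel-level (GCN) branch features

# Shared two-layer MLP applied to the channel descriptors (reduction ratio assumed).
mlp = nn.Sequential(nn.Linear(C, C // 4), nn.ReLU(), nn.Linear(C // 4, C))

def channel_weight(feat):
    avg = feat.mean(dim=-1)                                  # F_avg^c, shape (B, C)
    mx = feat.max(dim=-1).values                             # F_max^c, shape (B, C)
    return torch.sigmoid(mlp(avg) + mlp(mx)).unsqueeze(-1)   # M^c, shape (B, C, 1)

M_T, M_H = channel_weight(T_out), channel_weight(H_out)

# Cross-weight matrix M_cross = M_T^c (M_H^c)^T, shape (B, C, C)
M_cross = torch.matmul(M_T, M_H.transpose(1, 2))

# Bidirectional channel re-weighting of the two streams
T_c = torch.matmul(F.softmax(M_cross, dim=-1), T_out)                  # (B, C, HW)
H_c = torch.matmul(F.softmax(M_cross.transpose(1, 2), dim=-1), H_out)  # (B, C, HW)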

2.2.2 Spatial Attention Fusion Module

  • Spatial weight computation
    For the channel-crossed features $T^c$ and $H^c$, apply average pooling and max pooling along the channel dimension, concatenate the results, and pass them through a convolution layer to obtain the spatial weights $M_T^s$ and $M_H^s$:
    $M^s = f^{3 \times 3}\left([F_{avg}^s \oplus F_{max}^s]\right)$
    where $f^{3 \times 3}$ is a 3×3 convolution used to capture local spatial dependencies.
  • Feature fusion and residual connection
    Multiply the spatial weights with the features and add a residual connection to preserve the original information (a code sketch follows this list):
    $T^s = \text{Softmax}(M_T^s) \cdot T^c + T^c, \quad H^s = \text{Softmax}(M_H^s) \cdot H^c + H^c$
    Finally, $T^s$ and $H^s$ are concatenated and passed through a fully connected layer to produce the fused classification result.
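
A matching sketch of the spatial attention step with the residual connection (shapes again illustrative; here the Softmax is taken over all spatial positions):

import torch
import torch.nn as nn
import torch.nn.functional as F

B, C, H, W = 2, 64, 20, 20
T_c = torch.randn(B, C, H, W)   # channel-crossed feature from 2.2.1

# Spatial descriptor: average and max over channels, concatenated, then a 3x3 conv.
conv3x3 = nn.Conv2d(2, 1, kernel_size=3, padding=1)

avg_s = T_c.mean(dim=1, keepdim=True)              # F_avg^s, (B, 1, H, W)
max_s = T_c.max(dim=1, keepdim=True).values        # F_max^s, (B, 1, H, W)
M_s = conv3x3(torch.cat([avg_s, max_s], dim=1))    # M^s, (B, 1, H, W)

# Softmax over all spatial positions, then re-weight with a residual connection.
w = F.softmax(M_s.flatten(2), dim=-1).view(B, 1, H, W)
T_s = w * T_c + T_c                                 # T^s = Softmax(M^s) · T^c + T^c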

2.3 Advantages

  1. Deep interaction between cross-subnetwork features: through the channel attention cross module, CAFM models the channel-wise dependencies between the two subnetworks' features, e.g., building semantic associations between the pixel-level detail features extracted by the CNN and the superpixel-level structural features extracted by the GCN.

  2. Dynamic weight allocation and noise suppression: the spatial attention module adaptively suppresses background noise and illumination interference, enhancing the spatial consistency of the varying regions.

  3. Lightweight and generalizable: the module is implemented with shared parameters and simple convolution operations, so its computational complexity is low.

Paper: https://ieeexplore.ieee.org/document/10098209
Source code: https://github.com/EdwardHaoz/IEEE_TGRS_AMGCFN

3. CAFM Implementation Code

The implementation of CAFM is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CAFM(nn.Module):  # Cross Attention Fusion Module
    def __init__(self, channels):
        super(CAFM, self).__init__()

        self.conv1_spatial = nn.Conv2d(2, 1, 3, stride=1, padding=1, groups=1)
        self.conv2_spatial = nn.Conv2d(1, 1, 3, stride=1, padding=1, groups=1)

        self.avg1 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)
        self.avg2 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)
        self.max1 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)
        self.max2 = nn.Conv2d(channels, 64, 1, stride=1, padding=0)

        self.avg11 = nn.Conv2d(64, channels, 1, stride=1, padding=0)
        self.avg22 = nn.Conv2d(64, channels, 1, stride=1, padding=0)
        self.max11 = nn.Conv2d(64, channels, 1, stride=1, padding=0)
        self.max22 = nn.Conv2d(64, channels, 1, stride=1, padding=0)

    def forward(self, x):
        rgb_fea = x[0]  # rgb_fea (tensor): dim:(B, C, H, W)
        ir_fea = x[1]   # ir_fea (tensor): dim:(B, C, H, W)
        assert rgb_fea.shape[0] == ir_fea.shape[0]
        bs, c, h, w = rgb_fea.shape

        f1 = rgb_fea.reshape([bs, c, -1])
        f2 = ir_fea.reshape([bs, c, -1])

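        # ---- Channel attention cross module: channel descriptors via global avg / max pooling ----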
        avg_1 = torch.mean(f1, dim=-1, keepdim=True).unsqueeze(-1)
        max_1, _ = torch.max(f1, dim=-1, keepdim=True)
        max_1 = max_1.unsqueeze(-1)

        avg_1 = F.relu(self.avg1(avg_1))
        max_1 = F.relu(self.max1(max_1))
        avg_1 = self.avg11(avg_1).squeeze(-1)
        max_1 = self.max11(max_1).squeeze(-1)
        a1 = avg_1 + max_1

        avg_2 = torch.mean(f2, dim=-1, keepdim=True).unsqueeze(-1)
        max_2, _ = torch.max(f2, dim=-1, keepdim=True)
        max_2 = max_2.unsqueeze(-1)

        avg_2 = F.relu(self.avg2(avg_2))
        max_2 = F.relu(self.max2(max_2))
        avg_2 = self.avg22(avg_2).squeeze(-1)
        max_2 = self.max22(max_2).squeeze(-1)
        a2 = avg_2 + max_2

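        # Cross-weight matrix between the two streams' channel descriptors, shape (B, C, C)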
        cross = torch.matmul(a1, a2.transpose(1, 2))

        a1_att = torch.matmul(F.softmax(cross, dim=-1), f1)
        a2_att = torch.matmul(F.softmax(cross.transpose(1, 2), dim=-1), f2)

        a1_att = a1_att.reshape([bs, c, h, w])
        a2_att = a2_att.reshape([bs, c, h, w])

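        # ---- Spatial attention fusion: avg / max over channels -> 3x3 convs -> softmax over positions ----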
        avg_out_1 = torch.mean(a1_att, dim=1, keepdim=True)
        max_out_1, _ = torch.max(a1_att, dim=1, keepdim=True)
        a1_spatial = torch.cat([avg_out_1, max_out_1], dim=1)
        a1_spatial = F.relu(self.conv1_spatial(a1_spatial))
        a1_spatial = self.conv2_spatial(a1_spatial)
        a1_spatial = a1_spatial.reshape([bs, 1, -1])
        a1_spatial = F.softmax(a1_spatial, dim=-1)

        avg_out_2 = torch.mean(a2_att, dim=1, keepdim=True)
        max_out_2, _ = torch.max(a2_att, dim=1, keepdim=True)
        a2_spatial = torch.cat([avg_out_2, max_out_2], dim=1)
        a2_spatial = F.relu(self.conv1_spatial(a2_spatial))
        a2_spatial = self.conv2_spatial(a2_spatial)
        a2_spatial = a2_spatial.reshape([bs, 1, -1])
        a2_spatial = F.softmax(a2_spatial, dim=-1)

        f1_att = f1 * a1_spatial + f1
        f2_att = f2 * a2_spatial + f2

        f1_out = f1_att.view(bs, c, h, w)
        f2_out = f2_att.view(bs, c, h, w)

        return f1_out, f2_out

class Add(nn.Module):
    def __init__(self, arg):
        super().__init__()
        self.arg = arg

    def forward(self, x):
        assert len(x) == 2, "input must contain two tensors to add"
        tensor_a, tensor_b = x[0], x[1]

        # If spatial sizes differ, resize both to the larger one before adding.
        if tensor_a.shape[2:] != tensor_b.shape[2:]:
            target_size = tensor_a.shape[2:] if tensor_a.shape[2] >= tensor_b.shape[2] else tensor_b.shape[2:]
            tensor_a = F.interpolate(tensor_a, size=target_size, mode='bilinear', align_corners=False)
            tensor_b = F.interpolate(tensor_b, size=target_size, mode='bilinear', align_corners=False)

        return torch.add(tensor_a, tensor_b)


class Add2(nn.Module):
    def __init__(self, c1, index):
        super().__init__()
        self.index = index

    def forward(self, x):
        assert len(x) == 2, "input must contain the source feature and the CAFM output pair"
        src, trans = x[0], x[1]

        # Pick the CAFM output branch matching this stream (0: visible, 1: infrared).
        trans_part = trans[0] if self.index == 0 else trans[1]

        # Align the spatial size to the source feature if needed.
        if src.shape[2:] != trans_part.shape[2:]:
            trans_part = F.interpolate(
                trans_part,
                size=src.shape[2:],
                mode='bilinear',
                align_corners=False
            )

        return torch.add(src, trans_part)
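
A quick sanity check of the modules with dummy visible / infrared feature maps (the tensor sizes here are arbitrary):

if __name__ == "__main__":
    rgb = torch.randn(2, 64, 80, 80)
    ir = torch.randn(2, 64, 80, 80)

    cafm = CAFM(channels=64)
    fused_rgb, fused_ir = cafm([rgb, ir])
    print(fused_rgb.shape, fused_ir.shape)   # torch.Size([2, 64, 80, 80]) x 2

    # Add2 adds one branch of the CAFM output back onto its source stream.
    add2 = Add2(64, index=0)
    out = add2([rgb, (fused_rgb, fused_ir)])
    print(out.shape)                         # torch.Size([2, 64, 80, 80])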

4. Integration Steps

4.1 Modification 1

① Create a new folder named AddModules under the ultralytics/nn/ directory to hold the added module code.

② Create a new file CAFM.py inside the AddModules folder and paste in the code from Section 3.


4.2 Modification 2

In the AddModules folder, create __init__.py (skip this if it already exists) and import the module in it: from .CAFM import *
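
That is, ultralytics/nn/AddModules/__init__.py contains just:

# ultralytics/nn/AddModules/__init__.py
from .CAFM import *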


4.3 Modification 3

In the ultralytics/nn/tasks.py file, the class names of the new modules need to be added in two places.

First: import the modules.

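For example, the import can sit alongside the other module imports near the top of tasks.py (the exact style is your choice; a wildcard import works here because __init__.py re-exports everything from CAFM.py):

# ultralytics/nn/tasks.py -- add alongside the existing module imports
from ultralytics.nn.AddModules import *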

Second: register the Add, Add2, and CAFM modules in the parse_model function:


        elif m is Add:
            c2 = ch[f[0]]
            args = [c2]
        elif m is Add2:
            c2 = ch[f[0]]
            args = [c2, args[1]]
        elif m is CAFM:
            c2 = ch[f[0]]
            args = [c2]



5. YAML Model File

5.1 Improved Model Version 1 ⭐

Create a model file named rtdetr-resnet18-CAFM-p234.yaml for training on your own dataset.

Copy the content below into rtdetr-resnet18-CAFM-p234.yaml.

📌 This version fuses the P2, P3, and P4 features of the two modality branches in the backbone across modalities.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet18 dual-stream (visible + infrared) object detection model with P3-P5 outputs.

# Parameters
ch: 6 # input channels: visible (3) + infrared (3)
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, IN, []]  # 0
  - [-1, 1, Multiin, [1]]  # 1
  - [-2, 1, Multiin, [2]]  # 2

    # Visible
  - [1, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 3-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 4
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 5
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 6-P2
  - [-1, 2, Blocks, [64,  BasicBlock, 2, False]] # 7
    # infrared
  - [2, 1, ConvNormLayer, [32, 3, 2, 1, 'relu']] # 8-P1
  - [-1, 1, ConvNormLayer, [32, 3, 1, 1, 'relu']] # 9
  - [-1, 1, ConvNormLayer, [64, 3, 1, 1, 'relu']] # 10
  - [-1, 1, nn.MaxPool2d, [3, 2, 1]] # 11-P2
  - [-1, 2, Blocks, [64,  BasicBlock, 2, False]] # 12
  # transformer fusion
  - [ [ 7,12 ], 1, CAFM, [ 64 ] ] # 13-P2/4
  - [ [ 7,13 ], 1, Add2, [ 64,0 ] ]  # 14-P2/4 stream one:x+trans[0]
  - [ [ 12,13 ], 1, Add2, [ 64,1 ] ]  # 15-P2/4 stream two:x+trans[1]

    # Visible
  - [14, 2, Blocks, [128, BasicBlock, 3, False]] # 16-P3
    # infrared
  - [15, 2, Blocks, [128, BasicBlock, 3, False]] # 17-P3
  # transformer fusion
  - [ [ 16,17 ], 1, CAFM, [ 128 ] ]   # 18-P3/8
  - [ [ 16,18 ], 1, Add2, [ 128,0 ] ]    # 19-P3/8 stream one x+trans[0]
  - [ [ 17,18 ], 1, Add2, [ 128,1 ] ]    # 20-P3/8 stream two x+trans[1]

    # Visible
  - [19, 2, Blocks, [256, BasicBlock, 4, False]] # 21-P4
    # infrared
  - [20, 2, Blocks, [256, BasicBlock, 4, False]] # 22-P4
  # transformer fusion
  - [ [ 21,22 ], 1, CAFM, [ 256 ] ]   # 23-P4/16
  - [ [ 21,23 ], 1, Add2, [ 256,0 ] ]    # 24-P4/16 stream one x+trans[0]
  - [ [ 22,23 ], 1, Add2, [ 256,1 ] ]    # 25-P4/16 stream two x+trans[1]

  - [24, 2, Blocks, [512, BasicBlock, 5, False]] # 26-P5

  - [25, 2, Blocks, [512, BasicBlock, 5, False]] # 27-P5

  - [[19, 20], 1, Concat, [1]]  # 28 cat backbone P3
  - [[24, 25], 1, Concat, [1]]  # 29 cat backbone P4
  - [[26, 27], 1, Concat, [1]]  # 30 cat backbone P5

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 31 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]
  - [-1, 1, Conv, [256, 1, 1]]  # 33, Y5, lateral_convs.0

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 34
  - [29, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 35 input_proj.1
  - [[-2, -1], 1, Concat, [1]]
  - [-1, 3, RepC3, [256, 0.5]]  # 37, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]  # 38, Y4, lateral_convs.1

  - [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 39
  - [28, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 40 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # 41 cat backbone P4
  - [-1, 3, RepC3, [256, 0.5]]  # X3 (42), fpn_blocks.1

  - [-1, 1, Conv, [256, 3, 2]]  # 43, downsample_convs.0
  - [[-1, 38], 1, Concat, [1]]  # 44 cat Y4
  - [-1, 3, RepC3, [256, 0.5]]  # F4 (45), pan_blocks.0

  - [-1, 1, Conv, [256, 3, 2]]  # 46, downsample_convs.1
  - [[-1, 33], 1, Concat, [1]]  # 47 cat Y5
  - [-1, 3, RepC3, [256, 0.5]]  # F5 (48), pan_blocks.1

  - [[42, 45, 48], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]]  # Detect(P3, P4, P5)


6. Successful Run Output

Printing the network structure shows that the fusion layers have been added to the model, which can now be trained.
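
For reference, a minimal way to build the model and print this summary, assuming the standard Ultralytics RTDETR API (the dataset yaml name below is a placeholder):

from ultralytics import RTDETR

# Build the model from the custom yaml and print the layer-by-layer summary.
model = RTDETR("rtdetr-resnet18-CAFM-p234.yaml")
model.info()

# Training then proceeds as usual, e.g.:
# model.train(data="your_multimodal_dataset.yaml", epochs=100, imgsz=640)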

rtdetr-resnet18-CAFM-p234


                   from  n    params  module                                       arguments
  0                  -1  1         0  ultralytics.nn.AddModules.multimodal.IN      []
  1                  -1  1         0  ultralytics.nn.AddModules.multimodal.Multiin [1]
  2                  -2  1         0  ultralytics.nn.AddModules.multimodal.Multiin [2]
  3                   1  1       960  ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
  4                  -1  1      9312  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
  5                  -1  1     18624  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
  6                  -1  1         0  torch.nn.modules.pooling.MaxPool2d           [3, 2, 1]
  7                  -1  2    152512  ultralytics.nn.AddModules.ResNet.Blocks      [64, 64, 2, 'BasicBlock', 2, False]
  8                   2  1       960  ultralytics.nn.AddModules.ResNet.ConvNormLayer[3, 32, 3, 2, 1, 'relu']
  9                  -1  1      9312  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 32, 3, 1, 1, 'relu']
 10                  -1  1     18624  ultralytics.nn.AddModules.ResNet.ConvNormLayer[32, 64, 3, 1, 1, 'relu']
 11                  -1  1         0  torch.nn.modules.pooling.MaxPool2d           [3, 2, 1]
 12                  -1  2    152512  ultralytics.nn.AddModules.ResNet.Blocks      [64, 64, 2, 'BasicBlock', 2, False]
 13             [7, 12]  1     33309  ultralytics.nn.AddModules.CAFM.CAFM          [64]
 14             [7, 13]  1         0  ultralytics.nn.AddModules.CFT.Add2           [64, 0]
 15            [12, 13]  1         0  ultralytics.nn.AddModules.CFT.Add2           [64, 1]
 16                  14  2    526208  ultralytics.nn.AddModules.ResNet.Blocks      [64, 128, 2, 'BasicBlock', 3, False]
 17                  15  2    526208  ultralytics.nn.AddModules.ResNet.Blocks      [64, 128, 2, 'BasicBlock', 3, False]
 18            [16, 17]  1     66333  ultralytics.nn.AddModules.CAFM.CAFM          [128]
 19            [16, 18]  1         0  ultralytics.nn.AddModules.CFT.Add2           [128, 0]
 20            [17, 18]  1         0  ultralytics.nn.AddModules.CFT.Add2           [128, 1]
 21                  19  2   2100992  ultralytics.nn.AddModules.ResNet.Blocks      [128, 256, 2, 'BasicBlock', 4, False]
 22                  20  2   2100992  ultralytics.nn.AddModules.ResNet.Blocks      [128, 256, 2, 'BasicBlock', 4, False]
 23            [21, 22]  1    132381  ultralytics.nn.AddModules.CAFM.CAFM          [256]
 24            [21, 23]  1         0  ultralytics.nn.AddModules.CFT.Add2           [256, 0]
 25            [22, 23]  1         0  ultralytics.nn.AddModules.CFT.Add2           [256, 1]
 26                  24  2   8396288  ultralytics.nn.AddModules.ResNet.Blocks      [256, 512, 2, 'BasicBlock', 5, False]
 27                  25  2   8396288  ultralytics.nn.AddModules.ResNet.Blocks      [256, 512, 2, 'BasicBlock', 5, False]
 28            [19, 20]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 29            [24, 25]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 30            [26, 27]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 31                  -1  1    262656  ultralytics.nn.modules.conv.Conv             [1024, 256, 1, 1, None, 1, 1, False]
 32                  -1  1    789760  ultralytics.nn.modules.transformer.AIFI      [256, 1024, 8]
 33                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 34                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 35                  29  1    131584  ultralytics.nn.modules.conv.Conv             [512, 256, 1, 1, None, 1, 1, False]
 36            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 37                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 38                  -1  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1]
 39                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 40                  28  1     66048  ultralytics.nn.modules.conv.Conv             [256, 256, 1, 1, None, 1, 1, False]
 41            [-2, -1]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 42                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 43                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 44            [-1, 38]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 45                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 46                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 47            [-1, 33]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 48                  -1  3    657920  ultralytics.nn.modules.block.RepC3           [512, 256, 3, 0.5]
 49        [42, 45, 48]  1   3927956  ultralytics.nn.modules.head.RTDETRDecoder    [9, [256, 256, 256], 256, 300, 4, 8, 3]
rtdetr-resnet18-CAFM-p234 summary: 520 layers, 31,764,267 parameters, 31,764,267 gradients, 93.0 GFLOPs