
【YOLOv10 Multimodal Fusion Improvement】| DMFF from ICAFusion (PR 2024): a dual-modal feature fusion module that introduces a cross-modal cross-attention mechanism to dynamically model global semantic dependencies between modality features

一、Introduction

This post documents how the DMFF module from ICAFusion is used to improve the multimodal fusion part of YOLOv10.

The DMFF module performs iterative, dual-cross-attention-guided feature fusion to efficiently aggregate the global complementary information of the RGB and thermal modalities. Applied to YOLOv10, it targets two problems in multispectral detection: modality misalignment and insufficient long-range dependency modeling. Through spatial feature shrinking, cross-modal enhancement, and an iterative learning strategy, it mitigates the performance loss caused by the local-interaction limitation of conventional CNNs and improves recognition accuracy in complex scenes.



二、ICAFusion Overview

ICAFusion: Iterative Cross-Attention Guided Feature Fusion for Multispectral Object Detection

2.1 Design Motivation of DMFF (Dual-Modal Feature Fusion)

  1. Limitations of existing methods
    • Conventional CNN-based feature fusion for multispectral object detection has limited local feature interaction and is sensitive to image misalignment, which degrades performance.
    • Fusing features directly with a Transformer introduces heavy feature redundancy, incurring excessive computational load and memory demand.
  2. The complementarity of multispectral data
    • RGB images provide color, texture, and contour detail under good illumination, while thermal images capture objects' heat-radiation silhouettes in low light or harsh environments; the two are strongly complementary.
    • Global complementary information across modalities must be aggregated effectively while keeping model complexity low.

2.2 Architecture

The DMFF module consists of three core components, organized hierarchically to achieve efficient cross-modal feature fusion:

  1. Spatial Feature Shrinking (SFS)

    • Purpose: reduce feature-map dimensionality to cut the cost of subsequent computation while retaining key information (a minimal sketch follows this list).
    • Implementation
      • Convolution: a 1×1 convolution compresses spatial information into the channel dimension.
      • Mixed pooling: adaptively fuses average pooling (preserves background) with max pooling (preserves texture), balanced by a learnable weight λ.
  2. Cross-modal Feature Enhancement (CFE)

    • Core mechanism: a dual-cross-attention Transformer that captures the complementary relationship between the RGB and thermal modalities from a global perspective.
    • Workflow
      1. Tokenization: flatten the input feature maps into tokens and add positional embeddings.
      2. Attention computation: taking the thermal modality as an example, the cross-modal correlation matrix is built from the dot product of $Q_R$ (queries projected from the RGB tokens) with $K_T$, $V_T$ (keys and values projected from the thermal tokens), i.e. $\mathrm{softmax}(Q_R K_T^{\top}/\sqrt{d_k})\,V_T$.
      3. Feature refinement: residual connections and a feed-forward network (FFN) strengthen the representation, with learnable coefficients that adaptively re-weight each branch.
  3. Iterative Cross-modal Feature Enhancement (ICFE)

    • How it works
      • Parameter-shared iterations progressively refine both the cross-modal and the intra-modal feature representations.
      • The input of the $n$-th iteration is the output of the previous one, which avoids the parameter explosion of stacking separate modules.
    • Key formula: $\{\hat{T}_R^n, \hat{T}_T^n\} = \underbrace{\mathcal{F}_{CFE}(\cdots(\mathcal{F}_{CFE}}_{n}(\{T_R, T_T\}))\cdots)$
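
The mixed pooling in SFS is easy to sketch in isolation. Below is a minimal, hypothetical PyTorch sketch for illustration only (the class name, the single-λ blend, and the 16×16 output grid are assumptions; the full implementation in Section 三 uses the LearnableWeights and AdaptivePool2d classes instead):

import torch
import torch.nn as nn

class MixedPoolSketch(nn.Module):
    # Hypothetical sketch of SFS mixed pooling: a learnable blend of adaptive
    # average pooling (keeps background) and max pooling (keeps texture).
    def __init__(self, out_size=16):
        super().__init__()
        self.avg = nn.AdaptiveAvgPool2d(out_size)
        self.max = nn.AdaptiveMaxPool2d(out_size)
        self.lam = nn.Parameter(torch.tensor(0.5))  # learnable balance weight λ

    def forward(self, x):
        return self.lam * self.avg(x) + (1.0 - self.lam) * self.max(x)

# e.g. MixedPoolSketch()(torch.randn(1, 256, 40, 40)).shape -> (1, 256, 16, 16)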


2.3 Key Advantages

  1. Significant performance gains
    • On the KAIST, FLIR, and VEDAI datasets, it lowers the miss rate (MR) and raises the mean average precision (mAP) substantially over baseline methods.
    • For example, on KAIST the MR drops from 8.33% to 7.17%, and on FLIR the mAP50 rises from 76.5% to 79.2%.
  2. Improved computational efficiency
    • Compared with stacking Transformer modules, the parameter count and memory footprint stay unchanged while inference speed improves by roughly 20%.
    • With SFS and the iterative learning strategy, the computational complexity drops from $O(W^2H^2 \times C)$ to $O(W^2H^2/S^2 \times C)$ (a quick arithmetic check follows this list).
  3. Strong modality adaptability
    • Supports both single-modal and dual-modal input; when one modality is missing or degraded, complementary information can still be extracted through cross-modal attention.
    • For example, with single-modal input (R+R or T+T), detection performance drops only 2.4%–2.76% relative to dual-modal input.
  4. Generality and flexibility
    • Can be integrated into detection frameworks such as YOLOv5 and FCOS, and is compatible with backbones including VGG16, ResNet50, and CSPDarkNet.
    • Performs stably across scenarios (daytime, nighttime, aerial imagery), making it suitable for real-time object detection.
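
As a quick arithmetic check on the efficiency claim (the numbers are illustrative, not from the paper): the attention matrix has as many entries as the square of the token count, so shrinking the spatial grid with SFS pays off quadratically.

# Illustrative numbers only: a 40x40 feature map pooled to a 16x16 grid
tokens_full = 40 * 40      # 1600 tokens without SFS
tokens_sfs = 16 * 16       # 256 tokens after SFS
print(tokens_full ** 2)    # 2560000 attention entries per head
print(tokens_sfs ** 2)     # 65536 entries per head, roughly 39x fewer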

Paper: https://linkinghub.elsevier.com/retrieve/pii/S0031320323006118
Code: https://github.com/chanchanchan97/ICAFusion

三、ICAFusion Implementation Code

The implementation of the ICAFusion module is as follows:

import math
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import init

def autopad(k, p=None):  # kernel, padding
    # Pad to 'same'
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p

class Conv(nn.Module):
    # Standard convolution
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Conv, self).__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity())

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def fuseforward(self, x):
        return self.act(self.conv(x))

class LearnableWeights(nn.Module):
    def __init__(self):
        super(LearnableWeights, self).__init__()
        self.w1 = nn.Parameter(torch.tensor([0.5]), requires_grad=True)
        self.w2 = nn.Parameter(torch.tensor([0.5]), requires_grad=True)

    def forward(self, x1, x2):
        out = x1 * self.w1 + x2 * self.w2
        return out

class AdaptivePool2d(nn.Module):
    def __init__(self, output_h, output_w, pool_type='avg'):
        super(AdaptivePool2d, self).__init__()

        self.output_h = output_h
        self.output_w = output_w
        self.pool_type = pool_type

    def forward(self, x):
        bs, c, input_h, input_w = x.shape

        if (input_h > self.output_h) or (input_w > self.output_w):
            self.stride_h = input_h // self.output_h
            self.stride_w = input_w // self.output_w
            self.kernel_size = (input_h - (self.output_h - 1) * self.stride_h, input_w - (self.output_w - 1) * self.stride_w)

            if self.pool_type == 'avg':
                y = nn.AvgPool2d(kernel_size=self.kernel_size, stride=(self.stride_h, self.stride_w), padding=0)(x)
            else:
                y = nn.MaxPool2d(kernel_size=self.kernel_size, stride=(self.stride_h, self.stride_w), padding=0)(x)
        else:
            y = x

        return y

class LearnableCoefficient(nn.Module):
    def __init__(self):
        super(LearnableCoefficient, self).__init__()
        self.bias = nn.Parameter(torch.FloatTensor([1.0]), requires_grad=True)

    def forward(self, x):
        out = x * self.bias
        return out

class CrossAttention(nn.Module):
    def __init__(self, d_model, d_k, d_v, h, attn_pdrop=.1, resid_pdrop=.1):
        '''
        :param d_model: Output dimensionality of the model
        :param d_k: Dimensionality of queries and keys
        :param d_v: Dimensionality of values
        :param h: Number of heads
        '''
        super(CrossAttention, self).__init__()
        assert d_model % h == 0  # d_model must split evenly across heads
        self.d_model = d_model
        self.d_k = d_model // h  # per-head query/key dim (derived from d_model; the d_k argument is unused)
        self.d_v = d_model // h  # per-head value dim
        self.h = h

        # key, query, value projections for all heads
        self.que_proj_vis = nn.Linear(d_model, h * self.d_k)  # query projection
        self.key_proj_vis = nn.Linear(d_model, h * self.d_k)  # key projection
        self.val_proj_vis = nn.Linear(d_model, h * self.d_v)  # value projection

        self.que_proj_ir = nn.Linear(d_model, h * self.d_k)  # query projection
        self.key_proj_ir = nn.Linear(d_model, h * self.d_k)  # key projection
        self.val_proj_ir = nn.Linear(d_model, h * self.d_v)  # value projection

        self.out_proj_vis = nn.Linear(h * self.d_v, d_model)  # output projection
        self.out_proj_ir = nn.Linear(h * self.d_v, d_model)  # output projection

        # regularization
        self.attn_drop = nn.Dropout(attn_pdrop)
        self.resid_drop = nn.Dropout(resid_pdrop)

        # layer norm
        self.LN1 = nn.LayerNorm(d_model)
        self.LN2 = nn.LayerNorm(d_model)

        self.init_weights()

    def init_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                init.kaiming_normal_(m.weight, mode='fan_out')
                if m.bias is not None:
                    init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                init.constant_(m.weight, 1)
                init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                init.normal_(m.weight, std=0.001)
                if m.bias is not None:
                    init.constant_(m.bias, 0)

    def forward(self, x, attention_mask=None, attention_weights=None):
        '''
        Computes dual cross-attention between the RGB and IR token sequences.
        Args:
            x (list[tensor]): [rgb_fea_flat, ir_fea_flat], each of dim (b_s, nx, c),
                b_s means batch size
                nx means sequence length; for CNN feature maps this equals H*W
                c means channel, i.e. the channel of the feature maps
            attention_mask: Mask over attention values (b_s, h, nq, nk). True indicates masking.
            attention_weights: Multiplicative weights for attention values (b_s, h, nq, nk).
        Return:
            output (list[tensor]): [out_vis, out_ir], each of dim (b_s, nx, c)
        '''
        rgb_fea_flat = x[0]
        ir_fea_flat = x[1]
        b_s, nq = rgb_fea_flat.shape[:2]
        nk = rgb_fea_flat.shape[1]

        # project each modality into multi-head queries, keys and values
        rgb_fea_flat = self.LN1(rgb_fea_flat)
        q_vis = self.que_proj_vis(rgb_fea_flat).contiguous().view(b_s, nq, self.h, self.d_k).permute(0, 2, 1, 3)  # (b_s, h, nq, d_k)
        k_vis = self.key_proj_vis(rgb_fea_flat).contiguous().view(b_s, nk, self.h, self.d_k).permute(0, 2, 3, 1)  # (b_s, h, d_k, nk) K^T
        v_vis = self.val_proj_vis(rgb_fea_flat).contiguous().view(b_s, nk, self.h, self.d_v).permute(0, 2, 1, 3)  # (b_s, h, nk, d_v)

        ir_fea_flat = self.LN2(ir_fea_flat)
        q_ir = self.que_proj_ir(ir_fea_flat).contiguous().view(b_s, nq, self.h, self.d_k).permute(0, 2, 1, 3)  # (b_s, h, nq, d_k)
        k_ir = self.key_proj_ir(ir_fea_flat).contiguous().view(b_s, nk, self.h, self.d_k).permute(0, 2, 3, 1)  # (b_s, h, d_k, nk) K^T
        v_ir = self.val_proj_ir(ir_fea_flat).contiguous().view(b_s, nk, self.h, self.d_v).permute(0, 2, 1, 3)  # (b_s, h, nk, d_v)

        # cross-modal attention: IR queries attend to RGB keys/values (att_vis),
        # and RGB queries attend to IR keys/values (att_ir)
        att_vis = torch.matmul(q_ir, k_vis) / np.sqrt(self.d_k)
        att_ir = torch.matmul(q_vis, k_ir) / np.sqrt(self.d_k)

        # get attention matrix
        att_vis = torch.softmax(att_vis, -1)
        att_vis = self.attn_drop(att_vis)
        att_ir = torch.softmax(att_ir, -1)
        att_ir = self.attn_drop(att_ir)

        # output
        out_vis = torch.matmul(att_vis, v_vis).permute(0, 2, 1, 3).contiguous().view(b_s, nq, self.h * self.d_v)  # (b_s, nq, h*d_v)
        out_vis = self.resid_drop(self.out_proj_vis(out_vis)) # (b_s, nq, d_model)
        out_ir = torch.matmul(att_ir, v_ir).permute(0, 2, 1, 3).contiguous().view(b_s, nq, self.h * self.d_v)  # (b_s, nq, h*d_v)
        out_ir = self.resid_drop(self.out_proj_ir(out_ir)) # (b_s, nq, d_model)

        return [out_vis, out_ir]

class CrossTransformerBlock(nn.Module):
    def __init__(self, d_model, d_k, d_v, h, block_exp, attn_pdrop, resid_pdrop, loops_num=1):
        """
        :param d_model: Output dimensionality of the model
        :param d_k: Dimensionality of queries and keys
        :param d_v: Dimensionality of values
        :param h: Number of heads
        :param block_exp: Expansion factor for MLP (feed-forward network)
        """
        super(CrossTransformerBlock, self).__init__()
        self.loops = loops_num
        self.ln_input = nn.LayerNorm(d_model)
        self.ln_output = nn.LayerNorm(d_model)
        self.crossatt = CrossAttention(d_model, d_k, d_v, h, attn_pdrop, resid_pdrop)
        self.mlp_vis = nn.Sequential(nn.Linear(d_model, block_exp * d_model),
                                     nn.GELU(),
                                     nn.Linear(block_exp * d_model, d_model),
                                     nn.Dropout(resid_pdrop),
                                     )
        self.mlp_ir = nn.Sequential(nn.Linear(d_model, block_exp * d_model),
                                    nn.GELU(),
                                    nn.Linear(block_exp * d_model, d_model),
                                    nn.Dropout(resid_pdrop),
                                    )
        self.mlp = nn.Sequential(nn.Linear(d_model, block_exp * d_model),
                                 nn.GELU(),
                                 nn.Linear(block_exp * d_model, d_model),
                                 nn.Dropout(resid_pdrop),
                                 )

        # Layer norm
        self.LN1 = nn.LayerNorm(d_model)
        self.LN2 = nn.LayerNorm(d_model)

        # Learnable Coefficient
        self.coefficient1 = LearnableCoefficient()
        self.coefficient2 = LearnableCoefficient()
        self.coefficient3 = LearnableCoefficient()
        self.coefficient4 = LearnableCoefficient()
        self.coefficient5 = LearnableCoefficient()
        self.coefficient6 = LearnableCoefficient()
        self.coefficient7 = LearnableCoefficient()
        self.coefficient8 = LearnableCoefficient()

    def forward(self, x):
        rgb_fea_flat = x[0]
        ir_fea_flat = x[1]
        assert rgb_fea_flat.shape[0] == ir_fea_flat.shape[0]
        bs, nx, c = rgb_fea_flat.size()
        h = w = int(math.sqrt(nx))

        for loop in range(self.loops):
            # with Learnable Coefficient
            rgb_fea_out, ir_fea_out = self.crossatt([rgb_fea_flat, ir_fea_flat])
            rgb_att_out = self.coefficient1(rgb_fea_flat) + self.coefficient2(rgb_fea_out)
            ir_att_out = self.coefficient3(ir_fea_flat) + self.coefficient4(ir_fea_out)
            rgb_fea_flat = self.coefficient5(rgb_att_out) + self.coefficient6(self.mlp_vis(self.LN2(rgb_att_out)))
            ir_fea_flat = self.coefficient7(ir_att_out) + self.coefficient8(self.mlp_ir(self.LN2(ir_att_out)))

            # without Learnable Coefficient
            # rgb_fea_out, ir_fea_out = self.crossatt([rgb_fea_flat, ir_fea_flat])
            # rgb_att_out = rgb_fea_flat + rgb_fea_out
            # ir_att_out = ir_fea_flat + ir_fea_out
            # rgb_fea_flat = rgb_att_out + self.mlp_vis(self.LN2(rgb_att_out))
            # ir_fea_flat = ir_att_out + self.mlp_ir(self.LN2(ir_att_out))

        return [rgb_fea_flat, ir_fea_flat]

class Concat(nn.Module):
    # Concatenate a list of tensors along dimension
    def __init__(self, dimension=1):
        super(Concat, self).__init__()
        self.d = dimension

    def forward(self, x):
        return torch.cat(x, self.d)

class TransformerFusionBlock(nn.Module):
    def __init__(self, d_model, vert_anchors=16, horz_anchors=16, h=8, block_exp=4, n_layer=1, embd_pdrop=0.1, attn_pdrop=0.1, resid_pdrop=0.1):
        super(TransformerFusionBlock, self).__init__()

        self.n_embd = d_model
        self.vert_anchors = vert_anchors
        self.horz_anchors = horz_anchors
        d_k = d_model
        d_v = d_model

        # positional embeddings for the rgb and ir token sequences; registered as
        # buffers (not updated by the optimizer) and given a truncated-normal init below
        self.register_buffer('pos_emb_vis', torch.zeros(1, vert_anchors * horz_anchors, self.n_embd))
        self.register_buffer('pos_emb_ir', torch.zeros(1, vert_anchors * horz_anchors, self.n_embd))

        # initialize the positional embeddings
        self._init_pos_emb()

        # downsampling
        self.avgpool = AdaptivePool2d(self.vert_anchors, self.horz_anchors, 'avg')
        self.maxpool = AdaptivePool2d(self.vert_anchors, self.horz_anchors, 'max')

        # LearnableCoefficient
        self.vis_coefficient = LearnableWeights()
        self.ir_coefficient = LearnableWeights()

        # init weights
        self.apply(self._init_weights)

        # cross transformer
        self.crosstransformer = nn.Sequential(*[CrossTransformerBlock(d_model, d_k, d_v, h, block_exp, attn_pdrop, resid_pdrop) for layer in range(n_layer)])

        # Concat
        self.concat = Concat(dimension=1)

        # conv1x1
        self.conv1x1_out = Conv(c1=d_model * 2, c2=d_model, k=1, s=1, p=0, g=1, act=True)

    def _init_pos_emb(self):
        # truncated-normal initialization of the positional embeddings
        nn.init.trunc_normal_(self.pos_emb_vis, std=.02)
        nn.init.trunc_normal_(self.pos_emb_ir, std=.02)

    @staticmethod
    def _init_weights(module):
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=0.02)
            if module.bias is not None:
                module.bias.data.zero_()
        elif isinstance(module, nn.LayerNorm):
            module.bias.data.zero_()
            module.weight.data.fill_(1.0)

    def forward(self, x):
        rgb_fea = x[0]
        ir_fea = x[1]
        assert rgb_fea.shape[0] == ir_fea.shape[0]
        bs, c, h, w = rgb_fea.shape

        # ------------------------- cross-modal feature fusion -----------------------#
        new_rgb_fea = self.vis_coefficient(self.avgpool(rgb_fea), self.maxpool(rgb_fea))
        new_c, new_h, new_w = new_rgb_fea.shape[1], new_rgb_fea.shape[2], new_rgb_fea.shape[3]

        # resize the positional embedding to match the pooled feature map
        pos_emb_vis = self._resize_pos_embed(self.pos_emb_vis, new_h, new_w)
        rgb_fea_flat = new_rgb_fea.contiguous().view(bs, new_c, -1).permute(0, 2, 1) + pos_emb_vis

        new_ir_fea = self.ir_coefficient(self.avgpool(ir_fea), self.maxpool(ir_fea))
        pos_emb_ir = self._resize_pos_embed(self.pos_emb_ir, new_h, new_w)
        ir_fea_flat = new_ir_fea.contiguous().view(bs, new_c, -1).permute(0, 2, 1) + pos_emb_ir

        rgb_fea_flat, ir_fea_flat = self.crosstransformer([rgb_fea_flat, ir_fea_flat])

        rgb_fea_CFE = rgb_fea_flat.contiguous().view(bs, new_h, new_w, new_c).permute(0, 3, 1, 2)
        ir_fea_CFE = ir_fea_flat.contiguous().view(bs, new_h, new_w, new_c).permute(0, 3, 1, 2)
        # upsample back to the input resolution: nearest during training, bilinear at inference
        mode = 'nearest' if self.training else 'bilinear'
        rgb_fea_CFE = F.interpolate(rgb_fea_CFE, size=(h, w), mode=mode)
        ir_fea_CFE = F.interpolate(ir_fea_CFE, size=(h, w), mode=mode)
        new_rgb_fea = rgb_fea_CFE + rgb_fea
        new_ir_fea = ir_fea_CFE + ir_fea

        new_fea = self.concat([new_rgb_fea, new_ir_fea])
        new_fea = self.conv1x1_out(new_fea)

        return new_fea

    def _resize_pos_embed(self, pos_embed, new_h, new_w):
        """
        Resize the positional embedding to match a new feature-map size.
        """
        # original embedding size
        N, L, C = pos_embed.shape
        H = W = int(L ** 0.5)

        # reshape the token sequence into a 2D grid
        pos_embed = pos_embed.permute(0, 2, 1).view(N, C, H, W)

        # resize with bilinear interpolation
        pos_embed = F.interpolate(pos_embed, size=(new_h, new_w), mode='bilinear', align_corners=False)

        # flatten back into a token sequence
        pos_embed = pos_embed.view(N, C, new_h * new_w).permute(0, 2, 1)

        return pos_embed
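
A quick way to verify the module runs is a standalone forward pass appended at the bottom of the file. This is a minimal smoke test with assumed shapes, not part of the original code:

# Minimal smoke test (shapes are illustrative assumptions)
if __name__ == '__main__':
    block = TransformerFusionBlock(d_model=256, vert_anchors=16, horz_anchors=16)
    rgb = torch.randn(2, 256, 40, 40)  # dummy RGB feature map
    ir = torch.randn(2, 256, 40, 40)   # dummy thermal feature map
    fused = block([rgb, ir])
    print(fused.shape)  # expected: torch.Size([2, 256, 40, 40])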

四、Integration Steps

4.1 Modification 1

① Under the ultralytics/nn/ directory, create an AddModules folder to hold the module code

② Under the AddModules folder, create ICAFusion.py and paste the code from Section 三 into it; the resulting layout is sketched below
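
After both steps (the __init__.py is added in step 4.2), the folder should look roughly like this:

ultralytics/nn/AddModules/
├── __init__.py
└── ICAFusion.py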


4.2 Modification 2

AddModules 文件夹下新建 __init__.py (已有则不用新建),在文件内导入模块: from .ICAFusion import *


4.3 Modification 3

ultralytics/nn/modules/tasks.py 文件中,需要在两处位置添加各模块类名称。

First, import the module:

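With the layout above, the import at the top of tasks.py would look something like this (a sketch; adjust to match your package layout):

from ultralytics.nn.AddModules import *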

Then, add the following code inside the parse_model function:

        elif m is TransformerFusionBlock:
            c2 = ch[f[0]]
            args = [c2, *args[1:]]
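
For reference, c2 = ch[f[0]] sets the block's output channels to those of its first input layer, so a yaml entry such as [[7, 18], 1, TransformerFusionBlock, [256, 20, 20]] resolves to arguments of the form [channels, vert_anchors, horz_anchors]; with the n-scale width multiple applied, this becomes the [64, 20, 20] seen in the run log of Section 六.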



五、yaml Model File

5.1 Mid-Level Fusion⭐

📌 This modification applies the TransformerFusionBlock module to mid-level fusion in YOLOv10.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv10 object detection model. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
ch: 6
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov10n.yaml' will call yolov10.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, IN, []]  # 0
  - [-1, 1, Multiin, [1]]  # 1
  - [-2, 1, Multiin, [2]]  # 2

  - [1, 1, Conv, [64, 3, 2]] # 3-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 4-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 6-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, SCDown, [512, 3, 2]] # 8-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, SCDown, [1024, 3, 2]] # 10-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 12
  - [-1, 1, PSA, [1024]] # 13

  - [2, 1, Conv, [64, 3, 2]] # 14-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 15-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 17-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, SCDown, [512, 3, 2]] # 19-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, SCDown, [1024, 3, 2]] # 21-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 23
  - [-1, 1, PSA, [1024]] # 24

  - [[7, 18], 1, TransformerFusionBlock, [256, 20, 20]]  # 25 fuse backbone P3 (RGB + thermal)
  - [[9, 20], 1, TransformerFusionBlock, [512, 16, 16]]  # 26 fuse backbone P4 (RGB + thermal)
  - [[13, 24], 1, TransformerFusionBlock, [1024, 10, 10]]  # 27 fuse backbone P5 (RGB + thermal)

# YOLOv10.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 26], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 30

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 25], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 33 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 30], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 36 (P4/16-medium)

  - [-1, 1, SCDown, [512, 3, 2]]
  - [[-1, 27], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2fCIB, [1024, True, True]] # 39 (P5/32-large)

  - [[33, 36, 39], 1, v10Detect, [nc]] # Detect(P3, P4, P5)
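
Once the yaml is saved and the modules from Section 四 are registered, the model can be built and inspected as below (the filename is a hypothetical example). Note that ch: 6 means the network expects a 6-channel (RGB + thermal) input, so training additionally requires a dual-modal data pipeline:

from ultralytics import YOLO

model = YOLO('yolov10n-mid-ICAFusion.yaml')  # hypothetical filename
model.info()  # prints the per-layer summary shown in Section 六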


六、Verifying the Result

Printing the network model shows that the fusion layers have been added and the model is ready for training.


                   from  n    params  module                                       arguments
  0                  -1  1         0  ultralytics.nn.AddModules.multimodal.IN      []
  1                  -1  1         0  ultralytics.nn.AddModules.multimodal.Multiin [1]
  2                  -2  1         0  ultralytics.nn.AddModules.multimodal.Multiin [2]
  3                   1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
  4                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]
  5                  -1  1      7360  ultralytics.nn.modules.block.C2f             [32, 32, 1, True]
  6                  -1  1     18560  ultralytics.nn.modules.conv.Conv             [32, 64, 3, 2]
  7                  -1  2     49664  ultralytics.nn.modules.block.C2f             [64, 64, 2, True]
  8                  -1  1      9856  ultralytics.nn.modules.block.SCDown          [64, 128, 3, 2]
  9                  -1  2    197632  ultralytics.nn.modules.block.C2f             [128, 128, 2, True]
 10                  -1  1     36096  ultralytics.nn.modules.block.SCDown          [128, 256, 3, 2]
 11                  -1  1    460288  ultralytics.nn.modules.block.C2f             [256, 256, 1, True]
 12                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]
 13                  -1  1    249728  ultralytics.nn.modules.block.PSA             [256, 256]
 14                   2  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
 15                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]
 16                  -1  1      7360  ultralytics.nn.modules.block.C2f             [32, 32, 1, True]
 17                  -1  1     18560  ultralytics.nn.modules.conv.Conv             [32, 64, 3, 2]
 18                  -1  2     49664  ultralytics.nn.modules.block.C2f             [64, 64, 2, True]
 19                  -1  1      9856  ultralytics.nn.modules.block.SCDown          [64, 128, 3, 2]
 20                  -1  2    197632  ultralytics.nn.modules.block.C2f             [128, 128, 2, True]
 21                  -1  1     36096  ultralytics.nn.modules.block.SCDown          [128, 256, 3, 2]
 22                  -1  1    460288  ultralytics.nn.modules.block.C2f             [256, 256, 1, True]
 23                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]
 24                  -1  1    249728  ultralytics.nn.modules.block.PSA             [256, 256]
 25             [7, 18]  1    141644  ultralytics.nn.AddModules.ICAFusion.TransformerFusionBlock[64, 20, 20]
 26             [9, 20]  1    561804  ultralytics.nn.AddModules.ICAFusion.TransformerFusionBlock[128, 16, 16]
 27            [13, 24]  1   2237708  ultralytics.nn.AddModules.ICAFusion.TransformerFusionBlock[256, 10, 10]
 28                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 29            [-1, 26]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 30                  -1  1    148224  ultralytics.nn.modules.block.C2f             [384, 128, 1]
 31                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 32            [-1, 25]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 33                  -1  1     37248  ultralytics.nn.modules.block.C2f             [192, 64, 1]
 34                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
 35            [-1, 30]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 36                  -1  1    123648  ultralytics.nn.modules.block.C2f             [192, 128, 1]
 37                  -1  1     18048  ultralytics.nn.modules.block.SCDown          [128, 128, 3, 2]
 38            [-1, 27]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 39                  -1  1    282624  ultralytics.nn.modules.block.C2fCIB          [384, 256, 1, True, True]
 40        [33, 36, 39]  1    864838  ultralytics.nn.modules.head.v10Detect        [9, [64, 128, 256]]
YOLOv10n-mid-ICAFusion summary: 679 layers, 6,850,634 parameters, 6,850,618 gradients, 12.5 GFLOPs