[YOLOv8 Multimodal Fusion Improvement] | TFAM: Temporal Fusion Attention Module | A dual-branch channel-spatial attention mechanism that addresses insufficient temporal correlation in dual-modality feature fusion
1. Introduction
This post documents how to improve the multimodal fusion part of YOLOv8 with the TFAM module.
The TFAM module (Temporal Fusion Attention Module) introduces a dual-branch channel-spatial attention mechanism in the deep layers of the feature extraction network and dynamically generates dual-modality fusion weights from temporal information. The module adaptively captures important cross-modal features, suppresses noise and irrelevant information in the two modalities, and models the temporal correlation between high-level semantics and low-level spatial features for complementary fusion, providing accurate cross-modal feature representations for change detection.
2. The TFAM Module
Paper: Exchanging Dual-Encoder–Decoder: A New Strategy for Change Detection With Semantic Guidance and Spatial Localization
2.1 Motivation
Current deep-learning change-detection models suffer from the following problems when fusing bi-temporal features:
- Simple fusion: element-wise addition/subtraction or concatenation is easily disturbed by noise and struggles to fuse features effectively.
- Convolution-based enhancement: multi-scale convolutions reduce noise but ignore the temporal information between the bi-temporal features.
- Attention-based enhancement: attention is usually applied after channel-wise concatenation, which likewise fails to fully exploit temporal information.
To address these issues, the TFAM (Temporal Fusion Attention Module) was proposed. Its core goal is to exploit temporal information for effective bi-temporal feature fusion: by comparing the importance of the bi-temporal features along the temporal dimension, it improves the accuracy and robustness of feature fusion.
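As a rough sketch of the first baseline above (hypothetical tensors, not code from the paper), element-wise fusion and channel concatenation behave as follows:

```python
import torch

# Two hypothetical bi-temporal feature maps: batch 2, 64 channels, 16x16.
t1 = torch.randn(2, 64, 16, 16)
t2 = torch.randn(2, 64, 16, 16)

# Simple fusion: element-wise subtraction keeps the channel count, but any
# sensor noise in either epoch propagates straight into the result.
diff = t1 - t2                    # (2, 64, 16, 16)

# Concatenation doubles the channels and leaves the question of "which epoch
# matters where" entirely to the later convolutions.
cat = torch.cat([t1, t2], dim=1)  # (2, 128, 16, 16)
```

TFAM replaces these fixed rules with learned, temporally contrastive weights.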
2.2 Structure
TFAM computes fusion weights for the bi-temporal features in the channel and spatial dimensions via a channel-attention branch and a spatial-attention branch. Its structure and workflow are as follows:
2.2.1 Channel-attention branch
- Feature aggregation: apply global average pooling (Avg) and global max pooling (Max) to the input bi-temporal features $T_1$ and $T_2$ to aggregate spatial information: $S_c = \text{Concat}(\text{Avg}(T_1), \text{Max}(T_1), \text{Avg}(T_2), \text{Max}(T_2))$.
- Weight computation: two 1D convolution layers (as in the ECA module) generate the channel weights $W_{c1}$ and $W_{c2}$ for the two epochs, which are then normalized by a Softmax so that they sum to 1:
$$W'_{c1} = \frac{e^{W_{c1}}}{e^{W_{c1}} + e^{W_{c2}}}, \qquad W'_{c2} = \frac{e^{W_{c2}}}{e^{W_{c1}} + e^{W_{c2}}}$$
By contrasting the channel weights, this step identifies which parts of each epoch matter in the channel dimension.
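The competitive normalization can be sketched as follows (shapes match the Section 3 implementation's comments; the logits here are random stand-ins):

```python
import torch

# Hypothetical channel logits from the two 1D convolutions, shape (b, 1, c).
w_c1 = torch.randn(2, 1, 64)
w_c2 = torch.randn(2, 1, 64)

# Stack along a new "epoch" axis and softmax over it: this is exactly the
# pairwise formula above, so each channel's two weights sum to 1.
stack = torch.softmax(torch.stack([w_c1, w_c2], dim=0), dim=0)  # (2, b, 1, c)
```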
2.2.2 Spatial-attention branch
- Weight computation: analogous to the channel branch, pooling (over the channel dimension) and convolution are applied to the bi-temporal features in the spatial dimension to produce the spatial weights $W'_{s1}$ and $W'_{s2}$, which measure the importance of each spatial location.
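Mirroring the reference implementation in Section 3, the pooling that feeds the spatial branch collapses the channel axis (a minimal sketch with a random stand-in tensor):

```python
import torch

t1 = torch.randn(2, 64, 16, 16)  # hypothetical epoch-1 features

# Collapse the channel axis two ways; the four resulting single-channel maps
# (avg/max for each epoch) are concatenated and fed to a 7x7 2D convolution.
avg_map = torch.mean(t1, dim=1, keepdim=True)    # (2, 1, 16, 16)
max_map = torch.max(t1, dim=1, keepdim=True)[0]  # (2, 1, 16, 16)
```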
2.2.3 Feature fusion
- The channel and spatial weights are summed to form a combined weight for each epoch, which is multiplied with the original features and accumulated:
$$\text{Output} = (W'_{c1} + W'_{s1}) \cdot T_1 + (W'_{c2} + W'_{s2}) \cdot T_2$$
(The reference implementation in Section 3 additionally adds 1 to each combined weight as a residual term.) Through this weight allocation, the final fused feature retains the useful information of both epochs and discards the redundant parts.
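A scalar sanity check of the fusion formula (made-up weight values; each channel/spatial pair sums to 1 thanks to the Softmax):

```python
# Made-up normalized weights for one position: the channel pair and the
# spatial pair each sum to 1.
w_c1, w_c2 = 0.7, 0.3
w_s1, w_s2 = 0.4, 0.6
t1, t2 = 10.0, 2.0

out = (w_c1 + w_s1) * t1 + (w_c2 + w_s2) * t2
# The combined weights (1.1 and 0.9) sum to 2, so the fusion rescales the
# inputs rather than averaging them.
```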
2.3 Advantages
- Effective use of temporal information: by comparing the channel-wise and spatial-wise weights of the bi-temporal features, TFAM captures the important features across epochs, avoiding the temporal-correlation blind spot of conventional fusion methods.
- More accurate feature fusion:
  - Joint channel and spatial optimization: attention is computed in both dimensions, so the model can localize change regions from both "where it changed" (spatial) and "what kind of change" (channel semantics).
  - Dynamic weight allocation: Softmax normalization keeps the two epochs' weights summing to 1, avoiding the noise amplification that plain addition or concatenation can introduce and yielding cleaner fused features.
- Lightweight and efficient:
  - The module is built from 1D convolutions and pooling operations, so its computational cost is low, and it can be embedded in mainstream architectures (e.g., the EDED backbone), making it suitable for real-time or resource-constrained scenarios.
  - Experiments show that adding TFAM improves the F1 score on the LEVIR-CD dataset by about 0.3%, validating its effectiveness.
Summary
Through its temporally driven attention mechanism, the TFAM module addresses the lack of temporal correlation in bi-temporal feature fusion, producing more accurate change localization and feature representations. Its lightweight, generalizable design makes it an efficient feature-fusion solution for change detection tasks.
3. TFAM Implementation Code
The implementation of TFAM is as follows:
```python
import math

import torch
import torch.nn as nn


def kernel_size(in_channel):
    """Compute the kernel size for the 1D convolution (ECA-Net rule)."""
    k = int((math.log2(in_channel) + 1) // 2)  # parameters from ECA-net
    return k + 1 if k % 2 == 0 else k  # force an odd kernel size


class TFAM(nn.Module):
    """Fuse two bi-temporal features into one feature."""

    def __init__(self, in_channel):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        self.k = kernel_size(in_channel)
        self.channel_conv1 = nn.Conv1d(4, 1, kernel_size=self.k, padding=self.k // 2)
        self.channel_conv2 = nn.Conv1d(4, 1, kernel_size=self.k, padding=self.k // 2)
        self.spatial_conv1 = nn.Conv2d(4, 1, kernel_size=7, padding=3)
        self.spatial_conv2 = nn.Conv2d(4, 1, kernel_size=7, padding=3)
        self.softmax = nn.Softmax(0)  # softmax over the stacked "epoch" axis

    def forward(self, x, log=None, module_name=None, img_name=None):
        t1, t2 = x[0], x[1]  # unpack the input tuple into the two epochs

        # channel part
        t1_channel_avg_pool = self.avg_pool(t1)  # b,c,1,1
        t1_channel_max_pool = self.max_pool(t1)  # b,c,1,1
        t2_channel_avg_pool = self.avg_pool(t2)  # b,c,1,1
        t2_channel_max_pool = self.max_pool(t2)  # b,c,1,1
        channel_pool = torch.cat([t1_channel_avg_pool, t1_channel_max_pool,
                                  t2_channel_avg_pool, t2_channel_max_pool],
                                 dim=2).squeeze(-1).transpose(1, 2)  # b,4,c
        t1_channel_attention = self.channel_conv1(channel_pool)  # b,1,c
        t2_channel_attention = self.channel_conv2(channel_pool)  # b,1,c
        channel_stack = torch.stack([t1_channel_attention, t2_channel_attention],
                                    dim=0)  # 2,b,1,c
        channel_stack = self.softmax(channel_stack).transpose(-1, -2).unsqueeze(-1)  # 2,b,c,1,1

        # spatial part
        t1_spatial_avg_pool = torch.mean(t1, dim=1, keepdim=True)  # b,1,h,w
        t1_spatial_max_pool = torch.max(t1, dim=1, keepdim=True)[0]  # b,1,h,w
        t2_spatial_avg_pool = torch.mean(t2, dim=1, keepdim=True)  # b,1,h,w
        t2_spatial_max_pool = torch.max(t2, dim=1, keepdim=True)[0]  # b,1,h,w
        spatial_pool = torch.cat([t1_spatial_avg_pool, t1_spatial_max_pool,
                                  t2_spatial_avg_pool, t2_spatial_max_pool], dim=1)  # b,4,h,w
        t1_spatial_attention = self.spatial_conv1(spatial_pool)  # b,1,h,w
        t2_spatial_attention = self.spatial_conv2(spatial_pool)  # b,1,h,w
        spatial_stack = torch.stack([t1_spatial_attention, t2_spatial_attention], dim=0)  # 2,b,1,h,w
        spatial_stack = self.softmax(spatial_stack)  # 2,b,1,h,w

        # fusion part, adding 1 acts as a residual term
        stack_attention = channel_stack + spatial_stack + 1  # 2,b,c,h,w
        fuse = stack_attention[0] * t1 + stack_attention[1] * t2  # b,c,h,w
        return fuse
```
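As a quick sanity check of the ECA-style kernel-size rule used above, the helper is restated here so it runs standalone (same logic as `kernel_size`); for typical YOLOv8 channel widths it yields small odd kernels:

```python
import math

def eca_kernel_size(in_channel):
    # Same rule as kernel_size() in the TFAM code: half of (log2(C) + 1), forced odd.
    k = int((math.log2(in_channel) + 1) // 2)
    return k + 1 if k % 2 == 0 else k

sizes = {c: eca_kernel_size(c) for c in (64, 128, 256, 512, 1024)}
print(sizes)  # {64: 3, 128: 5, 256: 5, 512: 5, 1024: 5}
```

These small kernels are why TFAM adds only a few hundred parameters per fusion point, consistent with the printouts in Section 6.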
4. Integration Steps
4.1 Modification 1
① Create an `AddModules` folder under the `ultralytics/nn/` directory to hold the module code.
② Create `TFAM.py` inside the `AddModules` folder and paste the code from Section 3 into it.
4.2 Modification 2
Create `__init__.py` in the `AddModules` folder (skip this if it already exists) and import the module in that file:

```python
from .TFAM import *
```
4.3 Modification 3
In the `ultralytics/nn/modules/tasks.py` file, the module's class name must be added in two places.
First, import the module at the top of the file.
Then, register the `TFAM` module in the `parse_model` function:

```python
elif m in {TFAM}:
    c2 = ch[f[0]]
    args = [c2]
```
In the `DetectionModel` class, add the following code:

```python
try:
    m.stride = torch.tensor([s / x.shape[-2] for x in _forward(torch.zeros(1, ch, s, s))])  # forward on CPU
except RuntimeError:
    try:
        self.model.to(torch.device('cuda'))
        m.stride = torch.tensor([s / x.shape[-2] for x in _forward(
            torch.zeros(1, ch, s, s).to(torch.device('cuda')))])  # forward on CUDA
    except RuntimeError as error:
        raise error
```

and comment out this line:

```python
# m.stride = torch.tensor([s / x.shape[-2] for x in _forward(torch.zeros(1, ch, s, s))])  # forward
```
5. YAML Model Files
5.1 Mid-term fusion ⭐
📌 This variant replaces the Concat fusion of the original mid-term fusion model with TFAM, fusing the multimodal information in the backbone.
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
ch: 6
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, IN, []] # 0
  - [-1, 1, Multiin, [1]] # 1
  - [-2, 1, Multiin, [2]] # 2
  - [1, 1, Conv, [64, 3, 2]] # 3-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 4-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 6-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 8-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 10-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [2, 1, Conv, [64, 3, 2]] # 12-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 13-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 15-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 17-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 19-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [[7, 16], 1, TFAM, [256]] # 21 TFAM fuse backbone P3
  - [[9, 18], 1, TFAM, [512]] # 22 TFAM fuse backbone P4
  - [[11, 20], 1, TFAM, [1024]] # 23 TFAM fuse backbone P5
  - [-1, 1, SPPF, [1024, 5]] # 24

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 22], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 27
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 21], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 30 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 27], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 33 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 24], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 36 (P5/32-large)
  - [[30, 33, 36], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
5.2 Mid-to-late fusion ⭐
📌 This variant replaces the Concat fusion of the original mid-to-late fusion model with TFAM, fusing the multimodal information in the FPN.
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
ch: 6
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, IN, []] # 0
  - [-1, 1, Multiin, [1]] # 1
  - [-2, 1, Multiin, [2]] # 2
  - [1, 1, Conv, [64, 3, 2]] # 3-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 4-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 6-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 8-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 10-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 12
  - [2, 1, Conv, [64, 3, 2]] # 13-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 14-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 16-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 18-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 20-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 22

# YOLOv8.0n head
head:
  - [12, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 9], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 25
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 7], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 28 (P3/8-small)
  - [22, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 19], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 31
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 17], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 34 (P3/8-small)
  - [[12, 22], 1, TFAM, [1024]] # 35 TFAM fuse head P5
  - [[25, 31], 1, TFAM, [512]] # 36 TFAM fuse head P4
  - [[28, 34], 1, TFAM, [256]] # 37 TFAM fuse head P3
  - [37, 1, Conv, [256, 3, 2]]
  - [[-1, 36], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 40 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 35], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 43 (P5/32-large)
  - [[37, 40, 43], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
5.3 Late fusion ⭐
📌 This variant replaces the Concat fusion of the original late fusion model with TFAM, fusing the multimodal information in the neck.
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
ch: 6
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, IN, []] # 0
  - [-1, 1, Multiin, [1]] # 1
  - [-2, 1, Multiin, [2]] # 2
  - [1, 1, Conv, [64, 3, 2]] # 3-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 4-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 6-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 8-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 10-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 12
  - [2, 1, Conv, [64, 3, 2]] # 13-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 14-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 16-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 18-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 20-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 22

# YOLOv8.0n head
head:
  - [12, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 9], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 25
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 7], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 28 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 25], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 31 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 12], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 34 (P5/32-large)
  - [22, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 19], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 37
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 17], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 40 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 37], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 43 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 22], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 46 (P5/32-large)
  - [[28, 40], 1, TFAM, [256]] # 47 TFAM fuse head P3
  - [[31, 43], 1, TFAM, [512]] # 48 TFAM fuse head P4
  - [[34, 46], 1, TFAM, [1024]] # 49 TFAM fuse head P5
  - [[47, 48, 49], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
6. Verifying the Result
Printing the network shows that the different fusion layers have been added to the model, which can now be trained.
YOLOv8-mid-TFAM:
YOLOv8-mid-TFAM summary: 365 layers, 3,799,743 parameters, 3,799,727 gradients, 10.0 GFLOPs
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
4 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
5 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
6 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
7 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
8 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
9 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
10 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
11 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
12 2 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
13 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
14 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
15 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
16 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
17 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
18 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
19 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
20 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
21 [7, 16] 1 420 ultralytics.nn.AddModules.TFAM.TFAM [64]
22 [9, 18] 1 436 ultralytics.nn.AddModules.TFAM.TFAM [128]
23 [11, 20] 1 436 ultralytics.nn.AddModules.TFAM.TFAM [256]
24 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
25 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
26 [-1, 22] 1 0 ultralytics.nn.modules.conv.Concat [1]
27 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]
28 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
29 [-1, 21] 1 0 ultralytics.nn.modules.conv.Concat [1]
30 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]
31 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
32 [-1, 27] 1 0 ultralytics.nn.modules.conv.Concat [1]
33 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1]
34 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
35 [-1, 24] 1 0 ultralytics.nn.modules.conv.Concat [1]
36 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1]
37 [30, 33, 36] 1 430867 ultralytics.nn.modules.head.Detect [1, [64, 128, 256]]
YOLOv8-mid-to-late-TFAM:
YOLOv8-mid-to-late-TFAM summary: 407 layers, 4,149,823 parameters, 4,149,807 gradients, 11.1 GFLOPs
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
4 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
5 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
6 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
7 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
8 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
9 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
10 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
11 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
12 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
13 2 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
14 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
15 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
16 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
17 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
18 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
19 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
20 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
21 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
22 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
23 12 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
24 [-1, 9] 1 0 ultralytics.nn.modules.conv.Concat [1]
25 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]
26 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
27 [-1, 7] 1 0 ultralytics.nn.modules.conv.Concat [1]
28 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]
29 22 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
30 [-1, 19] 1 0 ultralytics.nn.modules.conv.Concat [1]
31 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]
32 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
33 [-1, 17] 1 0 ultralytics.nn.modules.conv.Concat [1]
34 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]
35 [12, 22] 1 436 ultralytics.nn.AddModules.TFAM.TFAM [256]
36 [25, 31] 1 436 ultralytics.nn.AddModules.TFAM.TFAM [128]
37 [28, 34] 1 420 ultralytics.nn.AddModules.TFAM.TFAM [64]
38 37 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
39 [-1, 36] 1 0 ultralytics.nn.modules.conv.Concat [1]
40 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1]
41 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
42 [-1, 35] 1 0 ultralytics.nn.modules.conv.Concat [1]
43 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1]
44 [37, 40, 43] 1 430867 ultralytics.nn.modules.head.Detect [1, [64, 128, 256]]
YOLOv8-late-TFAM:
YOLOv8-late-TFAM summary: 445 layers, 4,951,231 parameters, 4,951,215 gradients, 12.2 GFLOPs
from n params module arguments
0 -1 1 0 ultralytics.nn.AddModules.multimodal.IN []
1 -1 1 0 ultralytics.nn.AddModules.multimodal.Multiin [1]
2 -2 1 0 ultralytics.nn.AddModules.multimodal.Multiin [2]
3 1 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
4 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
5 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
6 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
7 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
8 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
9 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
10 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
11 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
12 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
13 2 1 464 ultralytics.nn.modules.conv.Conv [3, 16, 3, 2]
14 -1 1 4672 ultralytics.nn.modules.conv.Conv [16, 32, 3, 2]
15 -1 1 7360 ultralytics.nn.modules.block.C2f [32, 32, 1, True]
16 -1 1 18560 ultralytics.nn.modules.conv.Conv [32, 64, 3, 2]
17 -1 2 49664 ultralytics.nn.modules.block.C2f [64, 64, 2, True]
18 -1 1 73984 ultralytics.nn.modules.conv.Conv [64, 128, 3, 2]
19 -1 2 197632 ultralytics.nn.modules.block.C2f [128, 128, 2, True]
20 -1 1 295424 ultralytics.nn.modules.conv.Conv [128, 256, 3, 2]
21 -1 1 460288 ultralytics.nn.modules.block.C2f [256, 256, 1, True]
22 -1 1 164608 ultralytics.nn.modules.block.SPPF [256, 256, 5]
23 12 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
24 [-1, 9] 1 0 ultralytics.nn.modules.conv.Concat [1]
25 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]
26 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
27 [-1, 7] 1 0 ultralytics.nn.modules.conv.Concat [1]
28 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]
29 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
30 [-1, 25] 1 0 ultralytics.nn.modules.conv.Concat [1]
31 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1]
32 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
33 [-1, 12] 1 0 ultralytics.nn.modules.conv.Concat [1]
34 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1]
35 22 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
36 [-1, 19] 1 0 ultralytics.nn.modules.conv.Concat [1]
37 -1 1 148224 ultralytics.nn.modules.block.C2f [384, 128, 1]
38 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
39 [-1, 17] 1 0 ultralytics.nn.modules.conv.Concat [1]
40 -1 1 37248 ultralytics.nn.modules.block.C2f [192, 64, 1]
41 -1 1 36992 ultralytics.nn.modules.conv.Conv [64, 64, 3, 2]
42 [-1, 37] 1 0 ultralytics.nn.modules.conv.Concat [1]
43 -1 1 123648 ultralytics.nn.modules.block.C2f [192, 128, 1]
44 -1 1 147712 ultralytics.nn.modules.conv.Conv [128, 128, 3, 2]
45 [-1, 22] 1 0 ultralytics.nn.modules.conv.Concat [1]
46 -1 1 493056 ultralytics.nn.modules.block.C2f [384, 256, 1]
47 [28, 40] 1 420 ultralytics.nn.AddModules.TFAM.TFAM [64]
48 [31, 43] 1 436 ultralytics.nn.AddModules.TFAM.TFAM [128]
49 [34, 46] 1 436 ultralytics.nn.AddModules.TFAM.TFAM [256]
50 [47, 48, 49] 1 430867 ultralytics.nn.modules.head.Detect [1, [64, 128, 256]]