1. Introduction
In this article we explain how to apply the LSKAttention large-kernel attention mechanism to YOLOv11 for a measurable performance gain. We first cover the basic principle of LSKAttention: it decomposes the 2D kernel of a depthwise convolution layer into a horizontal and a vertical 1D kernel, which reduces computational complexity and memory footprint. We then show how to integrate this mechanism into YOLOv11, and how it helps improve efficiency and accuracy on large datasets and complex vision tasks. The article also provides the implementation code and usage instructions, illustrating the positive effect of this change on object detection, semantic segmentation, and related tasks. In our experiments, YOLOv11 achieved higher detection accuracy after integrating LSKAttention (a comparison chart between the LSKAttention-modified model and the baseline is attached below).
2. How the LSKAttention Mechanism Works
Paper: official paper link
Code: official code link
The LSKAttention mechanism proposed in the paper *Large Separable Kernel Attention* is an improvement addressing a problem with the traditional Large Kernel Attention (LKA) module used in Visual Attention Networks (VAN): when the kernel size grows large, LKA faces high computational and memory demands. LSKAttention solves this through the following key steps and principles:
- **Kernel decomposition**: The core innovation of LSKAttention is to break the traditional 2D convolution kernel into two 1D kernels, one horizontal and one vertical. This decomposition sharply reduces both the parameter count and the computational complexity.
- **Cascaded convolution**: At inference time, LSKAttention first convolves the input horizontally with one 1D kernel, then vertically with the other. Executed back to back, the two convolutions approximate the effect of the original large 2D kernel.
- **Higher computational efficiency**: Because the decomposed 1D kernels carry far fewer parameters, LSKAttention executes markedly more efficiently. The approach is especially attractive for large kernels, where it cuts both memory usage and compute cost (see the sketch right after this list).
- **Preserved effectiveness**: Despite the decompose-and-cascade strategy, LSKAttention retains performance comparable to the original LKA, still capturing the key image features such as edges, textures, and shapes.
- **Broad applicability**: LSKAttention performs well not only on image classification but also on object detection, semantic segmentation, and other computer vision tasks, demonstrating wide applicability.
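To make the savings from kernel decomposition concrete, here is a minimal PyTorch sketch (the channel count and kernel size below are arbitrary illustrative values, not anything prescribed by the paper) comparing the parameter count of a single k×k depthwise convolution against its separable 1×k + k×1 counterpart:

```python
import torch.nn as nn

def count_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

dim, k = 64, 35  # illustrative channel count and kernel size

# (a) a single 2D depthwise convolution with a k x k kernel
conv2d = nn.Conv2d(dim, dim, kernel_size=k, padding=k // 2, groups=dim)

# (b) the separable equivalent: a 1 x k followed by a k x 1 depthwise convolution
separable = nn.Sequential(
    nn.Conv2d(dim, dim, kernel_size=(1, k), padding=(0, k // 2), groups=dim),
    nn.Conv2d(dim, dim, kernel_size=(k, 1), padding=(k // 2, 0), groups=dim),
)

print(count_params(conv2d))     # dim * (k*k + 1)   -> 78464
print(count_params(separable))  # dim * 2 * (k + 1) -> 4608
```

The 2D kernel's parameters grow quadratically with k while the separable pair grows only linearly, which is exactly where the memory and compute savings come from.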
Summary: through its kernel-decomposition and cascaded-convolution strategy, LSKAttention lowers computational and memory cost while retaining efficient image-processing capability, which is especially valuable when working with large kernels and complex image data.
The figure above shows the speed-accuracy trade-off under different large-kernel decomposition methods and kernel sizes. Different markers represent different kernel sizes, with VAN-Tiny as the reference model. As the figure shows, the naive LKA design (LKA-trivial) and the actual LKA design used in VAN incur increasingly high GFLOPs (billions of floating-point operations) as the kernel size grows. In contrast, the paper's LSKA (Large Separable Kernel Attention)-trivial design and the LSKA design in VAN significantly reduce GFLOPs as the kernel grows, without degrading performance. A rough back-of-envelope version of this comparison follows.
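To build intuition for the GFLOPs trend in that figure, the arithmetic below counts only the depthwise multiply-accumulates of a naive k×k kernel versus a separable 1×k + k×1 pair at a fixed feature-map size (the sizes are illustrative, and the dilated branches and 1×1 convolutions of the full modules are ignored):

```python
# Rough depthwise-conv cost for one layer on an H x W feature map with `dim` channels.
H, W, dim = 56, 56, 64  # illustrative values
for k in (7, 23, 35, 53):
    macs_2d = H * W * dim * k * k   # naive 2D kernel: quadratic in k
    macs_sep = H * W * dim * 2 * k  # separable 1D pair: linear in k
    print(f"k={k:2d}: 2D {macs_2d / 1e9:.3f} GMACs  separable {macs_sep / 1e9:.3f} GMACs")
```

The quadratic-versus-linear gap matches the figure: the separable designs keep cost nearly flat as the kernel grows.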
The figure above compares different designs of the large-kernel attention module:
- **LKA-trivial**: a naive 2D large-kernel depthwise convolution (DW-Conv) combined with a 1×1 convolution (figure a).
- **LSKA-trivial**: cascaded horizontal and vertical 1D large-kernel depthwise convolutions combined with a 1×1 convolution (figure b).
- **Original LKA design**: as used in VAN, a standard depthwise convolution (DW-Conv), a dilated depthwise convolution (DW-D-Conv), and a 1×1 convolution (figure c).
- **Proposed LSKA design**: decomposes the first two layers of LKA into four layers, replacing each 2D layer with a pair of 1D convolution layers (figure d). Here ⊗ denotes the Hadamard product, k the maximum receptive field, and d the dilation rate. Minimal sketches of designs (a) and (b) follow this list.
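As a rough illustration of panels (a) and (b) only (the class names and default kernel size here are my own, and the dilated designs of panels (c) and (d) correspond to the full code in Section 3), the two trivial designs can be sketched as:

```python
import torch
import torch.nn as nn

class LKATrivial(nn.Module):
    """Panel (a): one large k x k depthwise conv, then a 1x1 conv."""
    def __init__(self, dim, k=23):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim)
        self.pw = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        return x * self.pw(self.dw(x))  # attention applied as a Hadamard product

class LSKATrivial(nn.Module):
    """Panel (b): cascaded 1 x k and k x 1 depthwise convs, then a 1x1 conv."""
    def __init__(self, dim, k=23):
        super().__init__()
        self.dw_h = nn.Conv2d(dim, dim, (1, k), padding=(0, k // 2), groups=dim)
        self.dw_v = nn.Conv2d(dim, dim, (k, 1), padding=(k // 2, 0), groups=dim)
        self.pw = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        return x * self.pw(self.dw_v(self.dw_h(x)))
```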
Personal summary: the paper proposes a novel Large Separable Kernel Attention (LSKA) module for improving convolutional neural networks (CNNs). By decomposing the 2D convolution kernel into cascaded 1D kernels, the module effectively reduces computational complexity and memory requirements. LSKA matches the performance of the standard Large Kernel Attention (LKA) module while offering higher computational efficiency and a smaller memory footprint.
3. The LSKAttention Code
See Section 4 for how to use it!
```python
import torch
import torch.nn as nn

__all__ = ['LSKA', 'C2PSA_LSKA']


class LSKA(nn.Module):
    """Large Separable Kernel Attention: cascaded horizontal and vertical 1D depthwise
    convolutions approximate a large 2D kernel, and a final 1x1 convolution produces an
    attention map that is multiplied element-wise (Hadamard product) with the input."""

    def __init__(self, dim, k_size=11):
        super().__init__()
        self.k_size = k_size
        # Each branch pairs small dense 1D kernels (conv0h/conv0v) with larger dilated
        # 1D kernels (conv_spatial_h/conv_spatial_v) to reach an effective receptive
        # field of k_size while keeping the spatial resolution unchanged.
        if k_size == 7:
            self.conv0h = nn.Conv2d(dim, dim, kernel_size=(1, 3), stride=(1, 1), padding=(0, (3 - 1) // 2), groups=dim)
            self.conv0v = nn.Conv2d(dim, dim, kernel_size=(3, 1), stride=(1, 1), padding=((3 - 1) // 2, 0), groups=dim)
            self.conv_spatial_h = nn.Conv2d(dim, dim, kernel_size=(1, 3), stride=(1, 1), padding=(0, 2), groups=dim, dilation=2)
            self.conv_spatial_v = nn.Conv2d(dim, dim, kernel_size=(3, 1), stride=(1, 1), padding=(2, 0), groups=dim, dilation=2)
        elif k_size == 11:
            self.conv0h = nn.Conv2d(dim, dim, kernel_size=(1, 3), stride=(1, 1), padding=(0, (3 - 1) // 2), groups=dim)
            self.conv0v = nn.Conv2d(dim, dim, kernel_size=(3, 1), stride=(1, 1), padding=((3 - 1) // 2, 0), groups=dim)
            self.conv_spatial_h = nn.Conv2d(dim, dim, kernel_size=(1, 5), stride=(1, 1), padding=(0, 4), groups=dim, dilation=2)
            self.conv_spatial_v = nn.Conv2d(dim, dim, kernel_size=(5, 1), stride=(1, 1), padding=(4, 0), groups=dim, dilation=2)
        elif k_size == 23:
            self.conv0h = nn.Conv2d(dim, dim, kernel_size=(1, 5), stride=(1, 1), padding=(0, (5 - 1) // 2), groups=dim)
            self.conv0v = nn.Conv2d(dim, dim, kernel_size=(5, 1), stride=(1, 1), padding=((5 - 1) // 2, 0), groups=dim)
            self.conv_spatial_h = nn.Conv2d(dim, dim, kernel_size=(1, 7), stride=(1, 1), padding=(0, 9), groups=dim, dilation=3)
            self.conv_spatial_v = nn.Conv2d(dim, dim, kernel_size=(7, 1), stride=(1, 1), padding=(9, 0), groups=dim, dilation=3)
        elif k_size == 35:
            self.conv0h = nn.Conv2d(dim, dim, kernel_size=(1, 5), stride=(1, 1), padding=(0, (5 - 1) // 2), groups=dim)
            self.conv0v = nn.Conv2d(dim, dim, kernel_size=(5, 1), stride=(1, 1), padding=((5 - 1) // 2, 0), groups=dim)
            self.conv_spatial_h = nn.Conv2d(dim, dim, kernel_size=(1, 11), stride=(1, 1), padding=(0, 15), groups=dim, dilation=3)
            self.conv_spatial_v = nn.Conv2d(dim, dim, kernel_size=(11, 1), stride=(1, 1), padding=(15, 0), groups=dim, dilation=3)
        elif k_size == 41:
            self.conv0h = nn.Conv2d(dim, dim, kernel_size=(1, 5), stride=(1, 1), padding=(0, (5 - 1) // 2), groups=dim)
            self.conv0v = nn.Conv2d(dim, dim, kernel_size=(5, 1), stride=(1, 1), padding=((5 - 1) // 2, 0), groups=dim)
            self.conv_spatial_h = nn.Conv2d(dim, dim, kernel_size=(1, 13), stride=(1, 1), padding=(0, 18), groups=dim, dilation=3)
            self.conv_spatial_v = nn.Conv2d(dim, dim, kernel_size=(13, 1), stride=(1, 1), padding=(18, 0), groups=dim, dilation=3)
        elif k_size == 53:
            self.conv0h = nn.Conv2d(dim, dim, kernel_size=(1, 5), stride=(1, 1), padding=(0, (5 - 1) // 2), groups=dim)
            self.conv0v = nn.Conv2d(dim, dim, kernel_size=(5, 1), stride=(1, 1), padding=((5 - 1) // 2, 0), groups=dim)
            self.conv_spatial_h = nn.Conv2d(dim, dim, kernel_size=(1, 17), stride=(1, 1), padding=(0, 24), groups=dim, dilation=3)
            self.conv_spatial_v = nn.Conv2d(dim, dim, kernel_size=(17, 1), stride=(1, 1), padding=(24, 0), groups=dim, dilation=3)
        else:
            raise ValueError(f"Unsupported k_size: {k_size} (expected one of 7, 11, 23, 35, 41, 53)")
        self.conv1 = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        u = x.clone()
        attn = self.conv0h(x)
        attn = self.conv0v(attn)
        attn = self.conv_spatial_h(attn)
        attn = self.conv_spatial_v(attn)
        attn = self.conv1(attn)
        return u * attn  # apply the attention map via a Hadamard product


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply fused convolution and activation without batch normalization."""
        return self.act(self.conv(x))


class PSABlock(nn.Module):
    """
    PSABlock class implementing a Position-Sensitive Attention block for neural networks.

    This variant swaps the original multi-head attention for LSKA, followed by a feed-forward
    layer, with optional shortcut connections.

    Attributes:
        attn (LSKA): LSKA attention module (replaces the original multi-head attention).
        ffn (nn.Sequential): Feed-forward neural network module.
        add (bool): Flag indicating whether to add shortcut connections.

    Methods:
        forward: Performs a forward pass through the PSABlock, applying attention and feed-forward layers.

    Examples:
        Create a PSABlock and perform a forward pass
        >>> psablock = PSABlock(c=128, attn_ratio=0.5, num_heads=4, shortcut=True)
        >>> input_tensor = torch.randn(1, 128, 32, 32)
        >>> output_tensor = psablock(input_tensor)
    """

    def __init__(self, c, attn_ratio=0.5, num_heads=4, shortcut=True) -> None:
        """Initializes the PSABlock with attention and feed-forward layers for enhanced feature extraction."""
        super().__init__()
        self.attn = LSKA(c)  # attn_ratio and num_heads are kept for interface compatibility; LSKA does not use them
        self.ffn = nn.Sequential(Conv(c, c * 2, 1), Conv(c * 2, c, 1, act=False))
        self.add = shortcut

    def forward(self, x):
        """Executes a forward pass through PSABlock, applying attention and feed-forward layers to the input tensor."""
        x = x + self.attn(x) if self.add else self.attn(x)
        x = x + self.ffn(x) if self.add else self.ffn(x)
        return x


class C2PSA_LSKA(nn.Module):
    """
    C2PSA module with LSKA attention for enhanced feature extraction and processing.

    This module implements a convolutional block with attention mechanisms to enhance feature extraction and processing
    capabilities. It includes a series of PSABlock modules for attention and feed-forward operations.

    Attributes:
        c (int): Number of hidden channels.
        cv1 (Conv): 1x1 convolution layer to reduce the number of input channels to 2*c.
        cv2 (Conv): 1x1 convolution layer to reduce the number of output channels to c.
        m (nn.Sequential): Sequential container of PSABlock modules for attention and feed-forward operations.

    Methods:
        forward: Performs a forward pass through the C2PSA module, applying attention and feed-forward operations.

    Notes:
        This module is essentially the same as the PSA module, but refactored to allow stacking more PSABlock modules.

    Examples:
        >>> c2psa = C2PSA_LSKA(c1=256, c2=256, n=3, e=0.5)
        >>> input_tensor = torch.randn(1, 256, 64, 64)
        >>> output_tensor = c2psa(input_tensor)
    """

    def __init__(self, c1, c2, n=1, e=0.5):
        """Initializes the C2PSA module with specified input/output channels, number of layers, and expansion ratio."""
        super().__init__()
        assert c1 == c2
        self.c = int(c1 * e)
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv(2 * self.c, c1, 1)
        self.m = nn.Sequential(*(PSABlock(self.c, attn_ratio=0.5, num_heads=self.c // 64) for _ in range(n)))

    def forward(self, x):
        """Processes the input tensor 'x' through a series of PSA blocks and returns the transformed tensor."""
        a, b = self.cv1(x).split((self.c, self.c), dim=1)
        b = self.m(b)
        return self.cv2(torch.cat((a, b), 1))


if __name__ == "__main__":
    # Generate a sample image and run a forward pass as a smoke test
    image_size = (1, 64, 240, 240)
    image = torch.rand(*image_size)

    model = C2PSA_LSKA(64, 64)
    out = model(image)
    print(out.size())
```
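As a quick sanity check (this assumes it runs in the same file or session as the code above), you can confirm that LSKA preserves the spatial shape and that its parameter count grows only linearly with k_size:

```python
x = torch.rand(1, 64, 40, 40)
for k in (7, 11, 23, 35, 41, 53):
    m = LSKA(64, k_size=k)
    n_params = sum(p.numel() for p in m.parameters())
    print(f"k_size={k:2d}  out={tuple(m(x).shape)}  params={n_params}")
```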
4. Step by Step: Adding LSKAttention to Your Network
4.1 Modification 1
The first step, as always, is to create the files. Go to the ultralytics/nn folder and create a directory named 'Addmodules' (if you are using the files from the group, it already exists and you don't need to create it)! Then create a new .py file inside it and paste the core code from Section 3 into it.
4.2 Modification 2
In the second step, create a new file named '__init__.py' in that directory (if you are using the files from the group, it already exists), and import our module inside it, as shown in the figure below.
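Since the screenshot is not reproduced here, the sketch below shows what that import looks like, assuming the core code from Section 3 was saved as 'LSKA.py' (that file name is my assumption; match whatever you named yours):

```python
# ultralytics/nn/Addmodules/__init__.py
# 'LSKA' below is the assumed module file name from step 4.1; adjust if yours differs.
from .LSKA import LSKA, C2PSA_LSKA
```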
4.3 Modification 3
In the third step, go to 'ultralytics/nn/tasks.py' and import and register our module there (if you are using the files from the group, it is already imported; skip straight to step four)!
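Again the screenshot is omitted, so as a sketch (the exact placement varies between ultralytics versions), the import near the top of tasks.py looks like:

```python
# Near the other module imports at the top of ultralytics/nn/tasks.py
from ultralytics.nn.Addmodules import LSKA, C2PSA_LSKA
```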
From today onward, all tutorials will follow this unified format, since I assume everyone is modifying the files from my group!!
4.4 Modification 4
Following my additions, register the module in parse_model; a sketch of the relevant branches follows.
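Since the screenshot is not included here, the fragment below sketches the kind of parse_model additions involved, assuming your tasks.py follows the standard ultralytics layout (adapt the details to your version):

```python
# Sketch of the additions inside parse_model in ultralytics/nn/tasks.py
# (a fragment, not a standalone script).

# 1) C2PSA_LSKA shares the (c1, c2, n, e) signature of C2PSA, so list it next to
#    C2PSA in the existing branch that scales channels and inserts the repeat count:
#        if m in {..., C2PSA, C2PSA_LSKA}:

# 2) LSKA keeps the channel count unchanged and only needs the incoming channels,
#    so a small dedicated branch is enough:
#        elif m is LSKA:
#            c2 = ch[f]           # output channels equal the input channels
#            args = [c2, *args]   # LSKA(dim, ...) takes the channel count first
```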
That completes the modifications. You can copy the yaml files below and run them.
5. LSKA yaml Files and Training Records
5.1 LSKA yaml File 1
Training info for this version: YOLO11-C2PSA-LSKA summary: 313 layers, 2,562,459 parameters, 2,562,443 gradients, 6.4 GFLOPs
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA_LSKA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
5.2 LSKA yaml File 2
Training info for this version: YOLO11-LSKA summary: 337 layers, 2,690,139 parameters, 2,690,123 gradients, 6.6 GFLOPs
```yaml
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, LSKA, []] # 17 (P3/8-small) add attention at the small-object detection output
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, LSKA, []] # 21 (P4/16-medium) add attention at the medium-object detection output
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 24 (P5/32-large)
  - [-1, 1, LSKA, []] # 25 (P5/32-large) add attention at the large-object detection output
  # Which layers get the attention module can be chosen to suit your own dataset and scenario.
  # If you place the attention yourself, make sure the indices [17, 21, 25] still point at the corresponding detection layers!
  - [[17, 21, 25], 1, Detect, [nc]] # Detect(P3, P4, P5)
```
5.3 Training Code
Create a .py file, paste in the code below, configure your own file paths, and it is ready to run.
```python
import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    # Point this at the yaml from Section 5.1 or 5.2; the path below is an example name,
    # so adjust it to wherever you saved the file.
    model = YOLO('ultralytics/cfg/models/11/yolo11-C2PSA-LSKA.yaml')
    # model.load('yolo11n.pt')  # loading pretrained weights
    model.train(data=r'replace with the path to your dataset yaml file',
                # For other tasks, find 'task' in 'ultralytics/cfg/default.yaml' and change it to detect, segment, classify, or pose.
                cache=False,
                imgsz=640,
                epochs=150,
                single_cls=False,  # whether this is single-class detection
                batch=4,
                close_mosaic=10,
                workers=0,
                device='0',
                optimizer='SGD',  # using SGD
                # resume='',  # to resume training, set this to the path of your last.pt
                amp=False,  # if the training loss becomes NaN, you can disable amp
                project='runs/train',
                name='exp',
                )
```
5.4 LSKA Training Screenshots
6. Conclusion
That concludes the main content of this article. Let me recommend my column on effective YOLOv11 improvements. It is newly opened with an average quality score of 98; going forward I will reproduce papers from the latest top conferences and also supplement some older improvement mechanisms. For now the column is free to read (temporarily, so follow early to not lose track of it~). If this article helped you, subscribe to the column and follow for more updates~