
YOLOv11 Improvement Series - Attention Mechanisms - Adding DAttention (DAT) and Using It to Re-innovate C2PSA (CVPR 2022)

1. Introduction

This article is a tutorial on improving YOLOv11 with DAT (Vision Transformer with Deformable Attention), a mechanism published at CVPR 2022, which speaks to how well validated it is as an improvement. Its core idea is to introduce deformable attention with dynamically selected sampling points (which, as you may notice, sounds quite similar to deformable convolution, DCN). The article covers three things: the ideas behind the DAT architecture, a reproduction of the DAttention code, and how to add DAttention to your own model to gain accuracy. First, here is the comparison chart from my own tests.



2. The Core Ideas of the DAT Architecture

Paper: DAT paper link

Official code: official code repository link


2.1 Main Ideas and Improvements of DAT

DAT (Vision Transformer with Deformable Attention) is a vision Transformer that introduces a deformable attention mechanism. Its core ideas are the following:

  1. Deformable attention: a standard Transformer applies self-attention over every position in the image, which is computationally expensive. DAT instead introduces deformable attention, which attends only to a small set of key regions. This significantly reduces computation while maintaining good performance.

  2. Dynamic sampling points: within the deformable attention mechanism, DAT selects its sampling points dynamically rather than processing the whole image at fixed locations. This lets the model concentrate on the regions that matter most for the task at hand.

  3. Plug-and-play: DAT is designed to adapt to different image sizes and contents, so it works well across a range of vision tasks such as image classification and object detection.

Summary: by introducing deformable attention, DAT improves the efficiency and performance of vision Transformers, making them more effective and accurate on complex vision tasks.
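
To make this concrete, here is a minimal, self-contained sketch of the sampling idea: a uniform grid of reference points is shifted by offsets predicted from the features, and the shifted positions are sampled with F.grid_sample. This is my own illustration, not the DAT module itself (names such as make_reference_grid and TinyDeformableSampler are made up for this example); the full implementation is in Section 3.

import torch
import torch.nn as nn
import torch.nn.functional as F

def make_reference_grid(h, w, device):
    # Uniform reference points in normalized [-1, 1] coordinates, stored in (x, y) order for grid_sample.
    ys = torch.linspace(-1.0, 1.0, h, device=device)
    xs = torch.linspace(-1.0, 1.0, w, device=device)
    gy, gx = torch.meshgrid(ys, xs, indexing='ij')
    return torch.stack((gx, gy), dim=-1)  # (h, w, 2)

class TinyDeformableSampler(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # A tiny "offset network": predicts a 2-D offset for every reference point.
        self.offset_net = nn.Conv2d(channels, 2, kernel_size=3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        ref = make_reference_grid(h, w, x.device).unsqueeze(0).expand(b, -1, -1, -1)
        offsets = self.offset_net(x).permute(0, 2, 3, 1).tanh() * 0.1  # keep the learned shifts small
        pos = (ref + offsets).clamp(-1.0, 1.0)
        # Sample "deformed" features at the shifted locations; keys/values would then be projected from these.
        return F.grid_sample(x, pos, mode='bilinear', align_corners=True)

feat = torch.rand(1, 16, 20, 20)
print(TinyDeformableSampler(16)(feat).shape)  # torch.Size([1, 16, 20, 20])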


2.2 DAT Network Structure Diagram

(a) shows the information flow of deformable attention. On the left, a set of reference points is placed uniformly over the feature map, and the offsets of these points are learned from the queries by an offset network. On the right, the deformed keys and values are projected from the features sampled at the deformed points. A relative position bias is also computed from the deformed points, enhancing the multi-head attention that produces the transformed output features. For clarity only 4 reference points are drawn, but in practice there are many more.

(b) shows the detailed structure of the offset-generation network, with the input and output feature-map sizes annotated for each layer (whether this offset network is active is controlled by a flag in the code).
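
As a rough sketch of what this offset sub-network looks like, the block below mirrors the conv_offset module in the Section 3 code: a depthwise conv, a normalization, a GELU, and a 1x1 conv down to two offset channels. The class name OffsetNetwork is mine, and the GroupNorm is only a stand-in for the channel LayerNorm proxy used in the real code.

import torch
import torch.nn as nn

class OffsetNetwork(nn.Module):
    def __init__(self, group_channels, ksize=9, stride=1):
        super().__init__()
        pad = ksize // 2 if ksize != stride else 0
        self.net = nn.Sequential(
            # Depthwise conv aggregates local context for each query group.
            nn.Conv2d(group_channels, group_channels, ksize, stride, pad, groups=group_channels),
            nn.GroupNorm(1, group_channels),  # stand-in for the LayerNormProxy used in Section 3
            nn.GELU(),
            # 1x1 conv maps to two channels: a (dy, dx) offset for every reference point.
            nn.Conv2d(group_channels, 2, 1, 1, 0, bias=False),
        )

    def forward(self, q_group):
        return self.net(q_group)  # (B * groups, 2, Hg, Wg)

offsets = OffsetNetwork(64)(torch.rand(2, 64, 40, 40))
print(offsets.shape)  # torch.Size([2, 2, 40, 40])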

Generating multiple reference points distributed over the image in this way makes detection more efficient. The final visualization is shown below ->


2.3 Comparison of DAT with Other Mechanisms

The figure below compares DAT with other vision Transformer models and with DCN (deformable convolution networks) from the CNN world, highlighting how differently they handle queries (the figure is quite self-explanatory, so I will not walk through it).


3. Plug-and-Play DAT Code

The code below implements the DAT network structure.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import einops
from timm.models.layers import trunc_normal_

__all__ = ['DAttentionBaseline', 'C2PSA_DAT']


class LayerNormProxy(nn.Module):

    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        x = einops.rearrange(x, 'b c h w -> b h w c')
        x = self.norm(x)
        return einops.rearrange(x, 'b h w c -> b c h w')


class DAttentionBaseline(nn.Module):

    def __init__(
            self, q_size=224, kv_size=(224, 224), n_heads=8, n_head_channels=32, n_groups=1,
            attn_drop=0.0, proj_drop=0.0, stride=1,
            offset_range_factor=-1, use_pe=True, dwc_pe=True,
            no_off=False, fixed_pe=False, ksize=9, log_cpb=False
    ):
        super().__init__()
        # In this YOLO adaptation q_size is the number of input channels,
        # so each of the n_heads heads gets q_size / 8 channels.
        n_head_channels = int(q_size / 8)
        q_size = (q_size, q_size)
        self.dwc_pe = dwc_pe
        self.n_head_channels = n_head_channels
        self.scale = self.n_head_channels ** -0.5
        self.n_heads = n_heads
        self.q_h, self.q_w = q_size
        # self.kv_h, self.kv_w = kv_size
        self.kv_h, self.kv_w = self.q_h // stride, self.q_w // stride
        self.nc = n_head_channels * n_heads
        self.n_groups = n_groups
        self.n_group_channels = self.nc // self.n_groups
        self.n_group_heads = self.n_heads // self.n_groups
        self.use_pe = use_pe
        self.fixed_pe = fixed_pe
        self.no_off = no_off
        self.offset_range_factor = offset_range_factor
        self.ksize = ksize
        self.log_cpb = log_cpb
        self.stride = stride
        kk = self.ksize
        pad_size = kk // 2 if kk != stride else 0

        # Offset-generation network: depthwise conv -> LayerNorm -> GELU -> 1x1 conv to 2 offset channels.
        self.conv_offset = nn.Sequential(
            nn.Conv2d(self.n_group_channels, self.n_group_channels, kk, stride, pad_size, groups=self.n_group_channels),
            LayerNormProxy(self.n_group_channels),
            nn.GELU(),
            nn.Conv2d(self.n_group_channels, 2, 1, 1, 0, bias=False)
        )
        if self.no_off:
            for m in self.conv_offset.parameters():
                m.requires_grad_(False)

        self.proj_q = nn.Conv2d(
            self.nc, self.nc,
            kernel_size=1, stride=1, padding=0
        )
        self.proj_k = nn.Conv2d(
            self.nc, self.nc,
            kernel_size=1, stride=1, padding=0
        )
        self.proj_v = nn.Conv2d(
            self.nc, self.nc,
            kernel_size=1, stride=1, padding=0
        )
        self.proj_out = nn.Conv2d(
            self.nc, self.nc,
            kernel_size=1, stride=1, padding=0
        )
        self.proj_drop = nn.Dropout(proj_drop, inplace=True)
        self.attn_drop = nn.Dropout(attn_drop, inplace=True)

        if self.use_pe and not self.no_off:
            if self.dwc_pe:
                self.rpe_table = nn.Conv2d(
                    self.nc, self.nc, kernel_size=3, stride=1, padding=1, groups=self.nc)
            elif self.fixed_pe:
                self.rpe_table = nn.Parameter(
                    torch.zeros(self.n_heads, self.q_h * self.q_w, self.kv_h * self.kv_w)
                )
                trunc_normal_(self.rpe_table, std=0.01)
            elif self.log_cpb:
                # Borrowed from Swin-V2
                self.rpe_table = nn.Sequential(
                    nn.Linear(2, 32, bias=True),
                    nn.ReLU(inplace=True),
                    nn.Linear(32, self.n_group_heads, bias=False)
                )
            else:
                self.rpe_table = nn.Parameter(
                    torch.zeros(self.n_heads, self.q_h * 2 - 1, self.q_w * 2 - 1)
                )
                trunc_normal_(self.rpe_table, std=0.01)
        else:
            self.rpe_table = None

    @torch.no_grad()
    def _get_ref_points(self, H_key, W_key, B, dtype, device):
        ref_y, ref_x = torch.meshgrid(
            torch.linspace(0.5, H_key - 0.5, H_key, dtype=dtype, device=device),
            torch.linspace(0.5, W_key - 0.5, W_key, dtype=dtype, device=device),
            indexing='ij'
        )
        ref = torch.stack((ref_y, ref_x), -1)
        ref[..., 1].div_(W_key - 1.0).mul_(2.0).sub_(1.0)
        ref[..., 0].div_(H_key - 1.0).mul_(2.0).sub_(1.0)
        ref = ref[None, ...].expand(B * self.n_groups, -1, -1, -1)  # B * g H W 2
        return ref

    @torch.no_grad()
    def _get_q_grid(self, H, W, B, dtype, device):
        ref_y, ref_x = torch.meshgrid(
            torch.arange(0, H, dtype=dtype, device=device),
            torch.arange(0, W, dtype=dtype, device=device),
            indexing='ij'
        )
        ref = torch.stack((ref_y, ref_x), -1)
        ref[..., 1].div_(W - 1.0).mul_(2.0).sub_(1.0)
        ref[..., 0].div_(H - 1.0).mul_(2.0).sub_(1.0)
        ref = ref[None, ...].expand(B * self.n_groups, -1, -1, -1)  # B * g H W 2
        return ref

    def forward(self, x):
        B, C, H, W = x.size()
        dtype, device = x.dtype, x.device

        # Predict per-group offsets from the queries.
        q = self.proj_q(x)
        q_off = einops.rearrange(q, 'b (g c) h w -> (b g) c h w', g=self.n_groups, c=self.n_group_channels)
        offset = self.conv_offset(q_off).contiguous()  # B * g 2 Hg Wg
        Hk, Wk = offset.size(2), offset.size(3)
        n_sample = Hk * Wk

        if self.offset_range_factor >= 0 and not self.no_off:
            offset_range = torch.tensor([1.0 / (Hk - 1.0), 1.0 / (Wk - 1.0)], device=device).reshape(1, 2, 1, 1)
            offset = offset.tanh().mul(offset_range).mul(self.offset_range_factor)

        offset = einops.rearrange(offset, 'b p h w -> b h w p')
        reference = self._get_ref_points(Hk, Wk, B, dtype, device)

        if self.no_off:
            offset = offset.fill_(0.0)

        if self.offset_range_factor >= 0:
            pos = offset + reference
        else:
            pos = (offset + reference).clamp(-1., +1.)

        # Sample the deformed features at reference + offset positions.
        if self.no_off:
            x_sampled = F.avg_pool2d(x, kernel_size=self.stride, stride=self.stride)
            assert x_sampled.size(2) == Hk and x_sampled.size(3) == Wk, f"Size is {x_sampled.size()}"
        else:
            x_sampled = F.grid_sample(
                input=x.reshape(B * self.n_groups, self.n_group_channels, H, W),
                grid=pos[..., (1, 0)],  # y, x -> x, y
                mode='bilinear', align_corners=True)  # B * g, Cg, Hg, Wg

        x_sampled = x_sampled.reshape(B, C, 1, n_sample)

        q = q.reshape(B * self.n_heads, self.n_head_channels, H * W)
        k = self.proj_k(x_sampled).reshape(B * self.n_heads, self.n_head_channels, n_sample)
        v = self.proj_v(x_sampled).reshape(B * self.n_heads, self.n_head_channels, n_sample)

        attn = torch.einsum('b c m, b c n -> b m n', q, k)  # B * h, HW, Ns
        attn = attn.mul(self.scale)

        if self.use_pe and (not self.no_off):
            if self.dwc_pe:
                residual_lepe = self.rpe_table(q.reshape(B, C, H, W)).reshape(B * self.n_heads, self.n_head_channels, H * W)
            elif self.fixed_pe:
                rpe_table = self.rpe_table
                attn_bias = rpe_table[None, ...].expand(B, -1, -1, -1)
                attn = attn + attn_bias.reshape(B * self.n_heads, H * W, n_sample)
            elif self.log_cpb:
                q_grid = self._get_q_grid(H, W, B, dtype, device)
                displacement = (q_grid.reshape(B * self.n_groups, H * W, 2).unsqueeze(2) -
                                pos.reshape(B * self.n_groups, n_sample, 2).unsqueeze(1)).mul(4.0)  # d_y, d_x [-8, +8]
                displacement = torch.sign(displacement) * torch.log2(torch.abs(displacement) + 1.0) / np.log2(8.0)
                attn_bias = self.rpe_table(displacement)  # B * g, H * W, n_sample, h_g
                attn = attn + einops.rearrange(attn_bias, 'b m n h -> (b h) m n', h=self.n_group_heads)
            else:
                rpe_table = self.rpe_table
                rpe_bias = rpe_table[None, ...].expand(B, -1, -1, -1)
                q_grid = self._get_q_grid(H, W, B, dtype, device)
                displacement = (q_grid.reshape(B * self.n_groups, H * W, 2).unsqueeze(2) -
                                pos.reshape(B * self.n_groups, n_sample, 2).unsqueeze(1)).mul(0.5)
                attn_bias = F.grid_sample(
                    input=einops.rearrange(rpe_bias, 'b (g c) h w -> (b g) c h w', c=self.n_group_heads, g=self.n_groups),
                    grid=displacement[..., (1, 0)],
                    mode='bilinear', align_corners=True)  # B * g, h_g, HW, Ns
                attn_bias = attn_bias.reshape(B * self.n_heads, H * W, n_sample)
                attn = attn + attn_bias

        attn = F.softmax(attn, dim=2)
        attn = self.attn_drop(attn)

        out = torch.einsum('b m n, b c n -> b c m', attn, v)

        if self.use_pe and self.dwc_pe:
            out = out + residual_lepe
        out = out.reshape(B, C, H, W)

        y = self.proj_drop(self.proj_out(out))

        return y


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply fused convolution and activation without batch normalization."""
        return self.act(self.conv(x))


class PSABlock(nn.Module):
    """
    PSABlock class implementing a Position-Sensitive Attention block for neural networks.

    This class encapsulates the functionality for applying multi-head attention and feed-forward neural network layers
    with optional shortcut connections.

    Attributes:
        attn (DAttentionBaseline): Deformable attention module.
        ffn (nn.Sequential): Feed-forward neural network module.
        add (bool): Flag indicating whether to add shortcut connections.

    Methods:
        forward: Performs a forward pass through the PSABlock, applying attention and feed-forward layers.

    Examples:
        Create a PSABlock and perform a forward pass
        >>> psablock = PSABlock(c=128, attn_ratio=0.5, num_heads=4, shortcut=True)
        >>> input_tensor = torch.randn(1, 128, 32, 32)
        >>> output_tensor = psablock(input_tensor)
    """

    def __init__(self, c, attn_ratio=0.5, num_heads=4, shortcut=True) -> None:
        """Initializes the PSABlock with attention and feed-forward layers for enhanced feature extraction."""
        super().__init__()
        self.attn = DAttentionBaseline(c)  # deformable attention replaces the original PSA attention
        self.ffn = nn.Sequential(Conv(c, c * 2, 1), Conv(c * 2, c, 1, act=False))
        self.add = shortcut

    def forward(self, x):
        """Executes a forward pass through PSABlock, applying attention and feed-forward layers to the input tensor."""
        x = x + self.attn(x) if self.add else self.attn(x)
        x = x + self.ffn(x) if self.add else self.ffn(x)
        return x


class C2PSA_DAT(nn.Module):
    """
    C2PSA module with attention mechanism for enhanced feature extraction and processing.

    This module implements a convolutional block with attention mechanisms to enhance feature extraction and processing
    capabilities. It includes a series of PSABlock modules for self-attention and feed-forward operations.

    Attributes:
        c (int): Number of hidden channels.
        cv1 (Conv): 1x1 convolution layer to reduce the number of input channels to 2*c.
        cv2 (Conv): 1x1 convolution layer to reduce the number of output channels to c.
        m (nn.Sequential): Sequential container of PSABlock modules for attention and feed-forward operations.

    Methods:
        forward: Performs a forward pass through the C2PSA module, applying attention and feed-forward operations.

    Notes:
        This module essentially is the same as PSA module, but refactored to allow stacking more PSABlock modules.

    Examples:
        >>> c2psa = C2PSA_DAT(c1=256, c2=256, n=3, e=0.5)
        >>> input_tensor = torch.randn(1, 256, 64, 64)
        >>> output_tensor = c2psa(input_tensor)
    """

    def __init__(self, c1, c2, n=1, e=0.5):
        """Initializes the C2PSA module with specified input/output channels, number of layers, and expansion ratio."""
        super().__init__()
        assert c1 == c2
        self.c = int(c1 * e)
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv(2 * self.c, c1, 1)

        self.m = nn.Sequential(*(PSABlock(self.c, attn_ratio=0.5, num_heads=self.c // 64) for _ in range(n)))

    def forward(self, x):
        """Processes the input tensor 'x' through a series of PSA blocks and returns the transformed tensor."""
        a, b = self.cv1(x).split((self.c, self.c), dim=1)
        b = self.m(b)
        return self.cv2(torch.cat((a, b), 1))


if __name__ == "__main__":
    # Generate a sample image
    image_size = (1, 64, 224, 224)
    image = torch.rand(*image_size)

    # Model
    model = C2PSA_DAT(64, 64)

    out = model(image)
    print(out.size())
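
Besides the C2PSA_DAT self-test above, you can also smoke-test DAttentionBaseline on its own, since the second yaml in Section 5.2 uses it as a standalone layer. A quick check (run it under the same if __name__ == "__main__": block as the test above, or in a separate script that imports the class):

    x = torch.rand(1, 64, 32, 32)   # (batch, channels, H, W) feature map
    attn = DAttentionBaseline(64)   # the single argument is the channel count
    print(attn(x).shape)            # torch.Size([1, 64, 32, 32]) - shape is preserved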


4. Adding DAT to Your Network

4.1 Modification 1

The first step, as usual, is to create the files. Go to the ultralytics/nn folder and create a directory named 'Addmodules' (if you are using the project files from my group, it already exists and you can skip this). Then create a new .py file inside it and paste the core code from Section 3 into it. A possible layout is sketched below.
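
If you are setting this up from scratch, the layout could look like the following (the file name DAT.py is just my choice for illustration; any name works as long as the import in 4.2 matches it):

ultralytics/
└── nn/
    └── Addmodules/
        ├── __init__.py
        └── DAT.py        # paste the Section 3 code into this file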


4.2 Modification 2

Second, create a new file named '__init__.py' in that directory (again, it already exists if you are using the files from my group) and import our modules in it, as shown below.
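
Assuming the file from 4.1 is named DAT.py, the '__init__.py' would contain roughly this one line (adjust the module name if you picked a different file name):

    # ultralytics/nn/Addmodules/__init__.py
    from .DAT import DAttentionBaseline, C2PSA_DAT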


4.3 Modification 3

Third, go to 'ultralytics/nn/tasks.py' to import and register our modules (if you are using the files from my group, they are already imported and you can go straight to step four). A sketch of the import follows.
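
The import near the top of tasks.py would look roughly like this (a sketch; the exact form depends on how your Addmodules package re-exports things):

    # near the other module imports at the top of ultralytics/nn/tasks.py
    from ultralytics.nn.Addmodules import DAttentionBaseline, C2PSA_DAT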

From today on, all of my tutorials will follow this unified format, because I assume you are modifying the project files from my group!


4.4 Modification 4

Register the modules in parse_model the same way I do; a sketch of the relevant spots is given below.
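
For reference, here is a sketch of the two places in parse_model that need to know about the new modules. The variable names m, f, n, args and ch are the ones parse_model already uses, but the exact sets differ between Ultralytics versions, so treat this as a guide rather than a drop-in patch:

    # Inside parse_model() in ultralytics/nn/tasks.py (sketch, adapt to your version):
    #
    # (1) C2PSA_DAT behaves like the built-in C2PSA, so add it to the set that resolves
    #     the channels and to the set that inserts the repeat count n, e.g.
    #         if m in {Conv, ..., C2PSA, C2PSA_DAT}:
    #             c1, c2 = ch[f], args[0]
    #             ...
    #             if m in {C2f, C3k2, C2PSA, C2PSA_DAT}:
    #                 args.insert(2, n)
    #
    # (2) DAttentionBaseline only needs the number of incoming channels, so a small
    #     branch in the same if/elif chain is enough:
    #         elif m is DAttentionBaseline:
    #             c2 = ch[f]           # output channels = input channels
    #             args = [c2, *args]   # channel count becomes the first constructor argument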

That completes the modifications. You can now copy the yaml files below and run them.


5. DAT yaml Files and Training Logs

5.1 DAT yaml File 1

Training info for this version: YOLO11-C2PSA-DAT summary: 321 layers, 2,621,723 parameters, 2,621,707 gradients, 6.5 GFLOPs

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA_DAT, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)

  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)

5.2 DAT yaml File 2

I actually add the attention module in three places here, but in practice adding it in just one place is usually enough, so feel free to comment lines out and experiment.

Training info for this version: YOLO11-DAT summary: 361 layers, 2,983,579 parameters, 2,983,563 gradients, 7.2 GFLOPs

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, DAttentionBaseline, []] # 17 (P3/8-small) add attention at the output of the small-object detection branch

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
  - [-1, 1, DAttentionBaseline, []] # 21 (P4/16-medium) add attention at the output of the medium-object detection branch

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 24 (P5/32-large)
  - [-1, 1, DAttentionBaseline, []] # 25 (P5/32-large) add attention at the output of the large-object detection branch

  # I add attention in three places here, but usually one is enough in practice; comment lines out to experiment.
  # Which layer gets the attention can be chosen based on your own dataset and scenario.
  # If you place the attention yourself, make sure the 'from' indices [17, 21, 25] still point at the corresponding detection layers!
  - [[17, 21, 25], 1, Detect, [nc]] # Detect(P3, P4, P5)


5.3 Training Script

Create a .py file, paste in the code below, set your own file paths, and run it.

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('path/to/yolo11-C2PSA-DAT.yaml')  # replace with the path to the yaml you saved from Section 5.1 or 5.2
    # model.load('yolo11n.pt') # load pretrained weights
    model.train(data=r'replace with the path to your dataset yaml file',
                # For other tasks, open 'ultralytics/cfg/default.yaml' and change task to detect, segment, classify or pose
                cache=False,
                imgsz=640,
                epochs=150,
                single_cls=False,  # whether this is single-class detection
                batch=4,
                close_mosaic=10,
                workers=0,
                device='0',
                optimizer='SGD',  # using SGD
                # resume='', # set to the path of last.pt if you want to resume training
                amp=False,  # disable AMP if the training loss becomes NaN
                project='runs/train',
                name='exp',
                )


5.4 Screenshot of the DAT Training Run


6. Summary

That concludes the main content of this article. I would like to recommend my YOLOv11 effective-improvement column, which is newly opened with an average quality score of 98. I will keep reproducing papers from the latest top conferences and will also supplement some older improvement mechanisms. The column is currently free to read (for now, so follow early so you don't lose track of it). If this article helped you, please subscribe to the column and follow for future updates.