
YOLOv11 Improvements - Detection Head Series - FASFFHead: An Auxiliary Feature Fusion Detection Head with an Added Small-Object Detection Head (Leaving Small Objects Nowhere to Hide; an Exclusive Original Design)

1. Introduction

This article presents my own original improvement, the FASFFHead detection head. It is a secondary innovation built on the ASFFHead detection head (which only supports three detection heads) and addresses the feature loss caused by cross-scale fusion. You will not find a second copy of this content anywhere else, which makes it well suited for writing papers. This head is a four-head version: you add a small-object (or large-object) detection layer, and together with this head the features for small or large objects are extracted a second time, with very good results. Readers who purchased the column asked for a four-head version of the ASFF head, so this detection head is my secondary innovation in response to that request.

You are welcome to subscribe to my column and learn YOLO together!



2. How ASFF Works

Official paper address: click the link to open the official paper

Official code address: click the link to open the official code repository


ASFF (Adaptively Spatial Feature Fusion) was proposed for single-shot object detection to address the inconsistency between feature maps at different scales. Its main innovation is an adaptive spatial feature fusion scheme that effectively filters out conflicting information and thereby improves scale invariance. The authors show that applying ASFF to YOLOv3 significantly improves detection performance on the MS COCO dataset while keeping a good balance between speed and accuracy. ASFF can be trained with ordinary back-propagation, is model-agnostic, and introduces very little computational overhead, making it a practical enhancement to existing object detection frameworks.

The main innovations of ASFF are:

1. Adaptive spatial feature fusion: a new pyramid feature fusion strategy that spatially filters conflicting information and suppresses the inconsistency between features at different scales.

2. Improved scale invariance: the ASFF strategy markedly improves the scale invariance of the features, which helps raise detection accuracy.

3. Low inference overhead: detection performance improves while almost no extra inference cost is added.

These innovations make ASFF an important advance in single-shot object detection, especially for handling objects at different scales. Note, however, that it is not applicable to necks that only detect at a single scale; please keep this in mind.

The figure from the ASFF paper illustrates how the adaptive spatial feature fusion (ASFF) mechanism works for single-shot object detection. In this structure, features from different levels (drawn as layers of different colors) are first down-sampled or up-sampled according to their strides so that all features share the same spatial size.

- Level 1, Level 2 and Level 3 are the features at different levels of the feature pyramid, each with its own spatial resolution.
- ASFF-1, ASFF-2 and ASFF-3 denote the feature fusion at each level after the ASFF mechanism is applied.
- In the enlarged view of ASFF-3, features from the other levels (x1→3, x2→3) are resized to the same size as the third level (x3→3) and then fused with learned weight maps to produce the final fused feature used for prediction (y^3).

In this way, ASFF adaptively selects the most useful features at every spatial position, improving detection accuracy. The model can decide, for each position and scale, which feature levels matter most for the final prediction.
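To make the weighting idea concrete, here is a minimal sketch of ASFF-style fusion (my own illustration, not the module used in this article; the full FASFF implementation is given in Section 3). Three feature maps that have already been resized to the same resolution are compressed by 1x1 convolutions, a softmax produces per-pixel weights that sum to one, and the weighted sum is the fused output.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyASFF(nn.Module):
    """Fuse three same-sized feature maps with per-pixel learned weights (illustration only)."""
    def __init__(self, c, compress_c=16):
        super().__init__()
        # 1x1 convs compress each input into a small weight embedding
        self.w0 = nn.Conv2d(c, compress_c, 1)
        self.w1 = nn.Conv2d(c, compress_c, 1)
        self.w2 = nn.Conv2d(c, compress_c, 1)
        # produce one weight channel per input level
        self.weight_levels = nn.Conv2d(compress_c * 3, 3, 1)

    def forward(self, x0, x1, x2):
        # x0, x1, x2 are assumed to be already resized to the same H x W
        w = torch.cat((self.w0(x0), self.w1(x1), self.w2(x2)), dim=1)
        w = F.softmax(self.weight_levels(w), dim=1)  # weights sum to 1 at every pixel
        return x0 * w[:, 0:1] + x1 * w[:, 1:2] + x2 * w[:, 2:3]

# Usage: three 64-channel maps at the same resolution
f = [torch.rand(1, 64, 40, 40) for _ in range(3)]
print(TinyASFF(64)(*f).shape)  # torch.Size([1, 64, 40, 40])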


3. Core Code of FASFFHead

See Section 4 for how to use it!

import copy
import torch
import torch.nn as nn
from ultralytics.utils.tal import dist2bbox, make_anchors
import math
import torch.nn.functional as F

__all__ = ['FASFFHead']


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply convolution and activation without batch normalization (fused inference)."""
        return self.act(self.conv(x))


class DFL(nn.Module):
    """
    Integral module of Distribution Focal Loss (DFL).

    Proposed in Generalized Focal Loss https://ieeexplore.ieee.org/document/9792391
    """

    def __init__(self, c1=16):
        """Initialize a convolutional layer with a given number of input channels."""
        super().__init__()
        self.conv = nn.Conv2d(c1, 1, 1, bias=False).requires_grad_(False)
        x = torch.arange(c1, dtype=torch.float)
        self.conv.weight.data[:] = nn.Parameter(x.view(1, c1, 1, 1))
        self.c1 = c1

    def forward(self, x):
        """Apply the integral (expectation) over the predicted distribution for each box side."""
        b, c, a = x.shape  # batch, channels, anchors
        return self.conv(x.view(b, 4, self.c1, a).transpose(2, 1).softmax(1)).view(b, 4, a)
        # return self.conv(x.view(b, self.c1, 4, a).softmax(1)).view(b, 4, a)


class FASFF(nn.Module):
    """Adaptive spatial feature fusion over four input levels; one FASFF instance produces one fused level."""

    def __init__(self, level, ch, multiplier=1, rfb=False, vis=False):
        super(FASFF, self).__init__()
        self.level = level
        self.dim = [int(ch[3] * multiplier), int(ch[2] * multiplier), int(ch[1] * multiplier),
                    int(ch[0] * multiplier)]
        # print(self.dim)

        self.inter_dim = self.dim[self.level]
        if level == 0:
            self.stride_level_1 = Conv(int(ch[2] * multiplier), self.inter_dim, 3, 2)
            self.stride_level_2 = Conv(int(ch[1] * multiplier), self.inter_dim, 3, 2)
            self.expand = Conv(self.inter_dim, int(ch[3] * multiplier), 3, 1)
        elif level == 1:
            self.compress_level_0 = Conv(int(ch[3] * multiplier), self.inter_dim, 1, 1)
            self.stride_level_2 = Conv(int(ch[1] * multiplier), self.inter_dim, 3, 2)
            self.expand = Conv(self.inter_dim, int(ch[2] * multiplier), 3, 1)
        elif level == 2:
            self.compress_level_0 = Conv(int(ch[2] * multiplier), self.inter_dim, 1, 1)
            self.stride_level_2 = Conv(int(ch[0] * multiplier), self.inter_dim, 3, 2)
            self.expand = Conv(self.inter_dim, int(ch[1] * multiplier), 3, 1)
        elif level == 3:
            self.compress_level_0 = Conv(int(ch[2] * multiplier), self.inter_dim, 1, 1)
            self.compress_level_1 = Conv(int(ch[1] * multiplier), self.inter_dim, 1, 1)
            self.expand = Conv(self.inter_dim, int(ch[0] * multiplier), 3, 1)

        # when adding rfb, we use half number of channels to save memory
        compress_c = 8 if rfb else 16
        self.weight_level_0 = Conv(self.inter_dim, compress_c, 1, 1)
        self.weight_level_1 = Conv(self.inter_dim, compress_c, 1, 1)
        self.weight_level_2 = Conv(self.inter_dim, compress_c, 1, 1)
        self.weight_levels = Conv(compress_c * 3, 3, 1, 1)
        self.vis = vis

    def forward(self, x):  # x is the list of four input features, ordered from small stride to large stride
        x_level_add = x[2]
        x_level_0 = x[3]  # l
        x_level_1 = x[1]  # m
        x_level_2 = x[0]  # s
        # print('x_level_0: ', x_level_0.shape)
        # print('x_level_1: ', x_level_1.shape)
        # print('x_level_2: ', x_level_2.shape)
        if self.level == 0:
            level_0_resized = x_level_0
            level_1_resized = self.stride_level_1(x_level_add)
            level_2_downsampled_inter = F.max_pool2d(x_level_1, 3, stride=2, padding=1)
            level_2_resized = self.stride_level_2(level_2_downsampled_inter)
        elif self.level == 1:
            level_0_compressed = self.compress_level_0(x_level_0)
            level_0_resized = F.interpolate(level_0_compressed, scale_factor=2, mode='nearest')
            level_1_resized = x_level_add
            level_2_resized = self.stride_level_2(x_level_1)
        elif self.level == 2:
            level_0_compressed = self.compress_level_0(x_level_add)
            level_0_resized = F.interpolate(level_0_compressed, scale_factor=2, mode='nearest')
            level_1_resized = x_level_1
            level_2_resized = self.stride_level_2(x_level_2)
        elif self.level == 3:
            level_0_compressed = self.compress_level_0(x_level_add)
            level_0_resized = F.interpolate(level_0_compressed, scale_factor=4, mode='nearest')
            x_level_1_compressed = self.compress_level_1(x_level_1)
            level_1_resized = F.interpolate(x_level_1_compressed, scale_factor=2, mode='nearest')
            level_2_resized = x_level_2
        # print('level: {}, l1_resized: {}, l2_resized: {}'.format(self.level,
        #       level_1_resized.shape, level_2_resized.shape))

        level_0_weight_v = self.weight_level_0(level_0_resized)
        level_1_weight_v = self.weight_level_1(level_1_resized)
        level_2_weight_v = self.weight_level_2(level_2_resized)
        # print('level_0_weight_v: ', level_0_weight_v.shape)
        # print('level_1_weight_v: ', level_1_weight_v.shape)
        # print('level_2_weight_v: ', level_2_weight_v.shape)

        levels_weight_v = torch.cat((level_0_weight_v, level_1_weight_v, level_2_weight_v), 1)
        levels_weight = self.weight_levels(levels_weight_v)
        levels_weight = F.softmax(levels_weight, dim=1)

        fused_out_reduced = level_0_resized * levels_weight[:, 0:1, :, :] + \
                            level_1_resized * levels_weight[:, 1:2, :, :] + \
                            level_2_resized * levels_weight[:, 2:, :, :]

        out = self.expand(fused_out_reduced)

        if self.vis:
            return out, levels_weight, fused_out_reduced.sum(dim=1)
        else:
            return out


class DWConv(Conv):
    """Depth-wise convolution."""

    def __init__(self, c1, c2, k=1, s=1, d=1, act=True):  # ch_in, ch_out, kernel, stride, dilation, activation
        """Initialize Depth-wise convolution with given parameters."""
        super().__init__(c1, c2, k, s, g=math.gcd(c1, c2), d=d, act=act)


class FASFFHead(nn.Module):
    """YOLO Detect head with FASFF fusion on four input levels. CSDNSnu77"""

    dynamic = False  # force grid reconstruction
    export = False  # export mode
    end2end = False  # end2end
    max_det = 300  # max_det
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, nc=80, ch=(), multiplier=1, rfb=False):
        """Initializes the detection layer with the specified number of classes and input channels."""
        super().__init__()
        self.nc = nc  # number of classes
        self.nl = len(ch)  # number of detection layers
        self.reg_max = 16  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = nc + self.reg_max * 4  # number of outputs per anchor
        self.stride = torch.zeros(self.nl)  # strides computed during build
        c2, c3 = max((16, ch[0] // 4, self.reg_max * 4)), max(ch[0], min(self.nc, 100))  # channels
        self.cv2 = nn.ModuleList(
            nn.Sequential(Conv(x, c2, 3), Conv(c2, c2, 3), nn.Conv2d(c2, 4 * self.reg_max, 1)) for x in ch
        )
        self.cv3 = nn.ModuleList(
            nn.Sequential(
                nn.Sequential(DWConv(x, x, 3), Conv(x, c3, 1)),
                nn.Sequential(DWConv(c3, c3, 3), Conv(c3, c3, 1)),
                nn.Conv2d(c3, self.nc, 1),
            )
            for x in ch
        )
        self.dfl = DFL(self.reg_max) if self.reg_max > 1 else nn.Identity()

        # four FASFF fusion modules, one per output level
        self.l0_fusion = FASFF(level=0, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l1_fusion = FASFF(level=1, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l2_fusion = FASFF(level=2, ch=ch, multiplier=multiplier, rfb=rfb)
        self.l3_fusion = FASFF(level=3, ch=ch, multiplier=multiplier, rfb=rfb)

        if self.end2end:
            self.one2one_cv2 = copy.deepcopy(self.cv2)
            self.one2one_cv3 = copy.deepcopy(self.cv3)

    def forward(self, x):
        """Fuse the four input levels with FASFF, then concatenate and return predicted boxes and class probabilities."""
        x1 = self.l0_fusion(x)
        x2 = self.l1_fusion(x)
        x3 = self.l2_fusion(x)
        x4 = self.l3_fusion(x)
        x = [x4, x3, x2, x1]
        if self.end2end:
            return self.forward_end2end(x)

        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:  # Training path
            return x
        y = self._inference(x)
        return y if self.export else (y, x)

    def forward_end2end(self, x):
        """
        Performs the end-to-end (v10-style) forward pass.

        Args:
            x (list[torch.Tensor]): Fused feature maps.

        Returns:
            (dict, tensor): If not in training mode, returns post-processed one2one predictions together with the raw
                one2many and one2one outputs. If in training mode, returns a dictionary with the one2many and one2one
                outputs separately.
        """
        x_detach = [xi.detach() for xi in x]
        one2one = [
            torch.cat((self.one2one_cv2[i](x_detach[i]), self.one2one_cv3[i](x_detach[i])), 1) for i in range(self.nl)
        ]
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:  # Training path
            return {"one2many": x, "one2one": one2one}

        y = self._inference(one2one)
        y = self.postprocess(y.permute(0, 2, 1), self.max_det, self.nc)
        return y if self.export else (y, {"one2many": x, "one2one": one2one})

    def _inference(self, x):
        """Decode predicted bounding boxes and class probabilities based on multiple-level feature maps."""
        # Inference path
        shape = x[0].shape  # BCHW
        x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        if self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        if self.export and self.format in {"saved_model", "pb", "tflite", "edgetpu", "tfjs"}:  # avoid TF FlexSplitV ops
            box = x_cat[:, : self.reg_max * 4]
            cls = x_cat[:, self.reg_max * 4:]
        else:
            box, cls = x_cat.split((self.reg_max * 4, self.nc), 1)

        if self.export and self.format in {"tflite", "edgetpu"}:
            # Precompute normalization factor to increase numerical stability
            # See https://github.com/ultralytics/ultralytics/issues/7371
            grid_h = shape[2]
            grid_w = shape[3]
            grid_size = torch.tensor([grid_w, grid_h, grid_w, grid_h], device=box.device).reshape(1, 4, 1)
            norm = self.strides / (self.stride[0] * grid_size)
            dbox = self.decode_bboxes(self.dfl(box) * norm, self.anchors.unsqueeze(0) * norm[:, :2])
        else:
            dbox = self.decode_bboxes(self.dfl(box), self.anchors.unsqueeze(0)) * self.strides

        return torch.cat((dbox, cls.sigmoid()), 1)

    def bias_init(self):
        """Initialize Detect() biases, WARNING: requires stride availability."""
        m = self  # self.model[-1]  # Detect() module
        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1
        # ncf = math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum())  # nominal class frequency
        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)
        if self.end2end:
            for a, b, s in zip(m.one2one_cv2, m.one2one_cv3, m.stride):  # from
                a[-1].bias.data[:] = 1.0  # box
                b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)

    def decode_bboxes(self, bboxes, anchors):
        """Decode bounding boxes."""
        return dist2bbox(bboxes, anchors, xywh=not self.end2end, dim=1)

    @staticmethod
    def postprocess(preds: torch.Tensor, max_det: int, nc: int = 80):
        """
        Post-processes YOLO model predictions.

        Args:
            preds (torch.Tensor): Raw predictions with shape (batch_size, num_anchors, 4 + nc) with last dimension
                format [x, y, w, h, class_probs].
            max_det (int): Maximum detections per image.
            nc (int, optional): Number of classes. Default: 80.

        Returns:
            (torch.Tensor): Processed predictions with shape (batch_size, min(max_det, num_anchors), 6) and last
                dimension format [x, y, w, h, max_class_prob, class_index].
        """
        batch_size, anchors, _ = preds.shape  # i.e. shape(16,8400,84)
        boxes, scores = preds.split([4, nc], dim=-1)
        index = scores.amax(dim=-1).topk(min(max_det, anchors))[1].unsqueeze(-1)
        boxes = boxes.gather(dim=1, index=index.repeat(1, 1, 4))
        scores = scores.gather(dim=1, index=index.repeat(1, 1, nc))
        scores, index = scores.flatten(1).topk(min(max_det, anchors))
        i = torch.arange(batch_size)[..., None]  # batch indices
        return torch.cat([boxes[i, index // nc], scores[..., None], (index % nc)[..., None].float()], dim=-1)


# if __name__ == "__main__":
#     # Generate four sample feature maps (the four-head FASFF fusion expects four input levels)
#     image1 = (1, 64, 64, 64)
#     image2 = (1, 128, 32, 32)
#     image3 = (1, 256, 16, 16)
#     image4 = (1, 512, 8, 8)
#
#     image1 = torch.rand(image1)
#     image2 = torch.rand(image2)
#     image3 = torch.rand(image3)
#     image4 = torch.rand(image4)
#     image = [image1, image2, image3, image4]
#     channel = (64, 128, 256, 512)
#     # Model
#     head = FASFFHead(nc=80, ch=channel)
#
#     out = head(image)
#     print(out)


4. Step-by-Step Guide to Adding FASFFHead

4.1 Modification 1

First, create a new .py file under the 'ultralytics/nn' directory and copy the code above into it; you can name the file whatever you like.


4.2 Modification 2

Second, create a new file named '__init__.py' in the same directory (if you are using the files from the reader group it already exists, so there is no need to create it), and import our detection head inside it as shown below.
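Assuming the file from step 4.1 was saved as 'FASFFHead.py' (a placeholder name; use whichever name you actually chose), the import would look roughly like this:

# ultralytics/nn/__init__.py  -- sketch; 'FASFFHead' is the assumed file name from step 4.1
from .FASFFHead import FASFFHead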


4.3 Modification 3

Third, open the file 'ultralytics/nn/tasks.py' and import and register our module there (if you are using the files from the reader group this has already been done, so you can skip straight to step 4.4).
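Assuming the same file name as above, the import added near the top of tasks.py would look roughly like this (adjust the module path to wherever you actually saved the file):

# near the top of ultralytics/nn/tasks.py  -- sketch; module path assumed from step 4.1
from ultralytics.nn.FASFFHead import FASFFHead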



4.4 Modification 4

Fourth, still in 'ultralytics/nn/tasks.py', find the code shown below and add the detection head to it. A quick way to locate it is to press Ctrl+F and search for 'Detect'.
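In my copy of tasks.py the relevant branch inside parse_model() looks roughly like the sketch below (the exact set of head classes differs between ultralytics versions); the only change is adding FASFFHead to the set so that parse_model passes the list of input channels to the head:

# inside parse_model() in ultralytics/nn/tasks.py  -- sketch; class list varies by version
elif m in {Detect, Segment, Pose, OBB, FASFFHead}:  # FASFFHead added here
    args.append([ch[x] for x in f])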



4.5 Modification 5

Same as above.


4.6 Modification 6

Same as above.


4.7 Modification 7

This step is slightly different; we need to add one line of code:

else:
    return 'detect'

Why is it different? Because here the variable m is converted to lowercase while the code runs, so a plain else branch is the simplest fix; when tutorials for segmentation or other tasks appear later, I will provide the corresponding modifications then.
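For reference, the helper in tasks.py that guesses the task from the yaml head name looks roughly like the sketch below in my copy (the exact structure may differ between versions); the added else simply makes any unrecognized head name, including the lower-cased 'fasffhead', fall back to 'detect':

# sketch of the task-guessing helper in ultralytics/nn/tasks.py  -- structure may differ by version
def cfg2task(cfg):
    """Guess the task from a YAML dictionary."""
    m = cfg["head"][-1][-2].lower()  # output module name, lower-cased ('FASFFHead' -> 'fasffhead')
    if m in {"classify", "classifier", "cls", "fc"}:
        return "classify"
    if "detect" in m:
        return "detect"
    if m == "segment":
        return "segment"
    if m == "pose":
        return "pose"
    else:
        return "detect"  # the line added in this step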



4.8 Modification 8

Same as above.



That completes the modifications. You can now copy one of the yaml files below and run it.


5. YAML Files for the FASFFHead Detection Head

5.1 Network Structure Fused with P2

Training info for this version: YOLO11-P2-FASFFHead summary: 468 layers, 4,058,716 parameters, 4,058,700 gradients, 13.4 GFLOPs

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P2-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLOv11.0-p2 head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 2], 1, Concat, [1]] # cat backbone P2
  - [-1, 2, C3k2, [128, False]] # 19 (P2/4-xsmall)  # For small objects you can try changing this False to True.

  - [-1, 1, Conv, [128, 3, 2]]
  - [[-1, 16], 1, Concat, [1]] # cat head P3
  - [-1, 2, C3k2, [256, False]] # 22 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 25 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 28 (P5/32-large)

  - [[19, 22, 25, 28], 1, FASFFHead, [nc]] # Detect(P2, P3, P4, P5)


5.2 Network Structure Fused with P6

Training info: YOLO11-P6-FASFFHead summary: 474 layers, 5,952,772 parameters, 5,952,756 gradients, 7.1 GFLOPs

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P6 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [128, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [768, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [768, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 9-P6/64
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 11

# YOLOv11.0x6 head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 8], 1, Concat, [1]] # cat backbone P5
  - [-1, 2, C3k2, [768, False]] # 14

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 17

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 20 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 17], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 23 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [768, True]] # 26 (P5/32-large)

  - [-1, 1, Conv, [768, 3, 2]]
  - [[-1, 11], 1, Concat, [1]] # cat head P6
  - [-1, 2, C3k2, [1024, True]] # 29 (P6/64-xlarge)  # You can also try changing this True to False.

  - [[20, 23, 26, 29], 1, FASFFHead, [nc]] # Detect(P3, P4, P5, P6)


6. Successful Run Record

Finally, here are screenshots from a run that completed successfully.




7. Summary

This concludes the main content of this article. I would like to recommend my YOLOv11 effective-improvement column; it is newly opened with an average quality score of 98, and I will keep reproducing papers from the latest top conferences and supplementing older improvement mechanisms. If this article helped you, please subscribe to the column and follow it for more updates.