
YOLOv11 Improvements (Neck): Secondary-Innovation BiFPN with the Latest 2024 TPAMI Mechanism FreqFusion (Exclusive)

1. Introduction

The improvement introduced in this post combines FreqFusion, the mechanism from the 2024 TPAMI paper 《Frequency-aware Feature Fusion for Dense Image Prediction》, with BiFPN as a secondary innovation. The paper's main contribution is a new feature-fusion method (FreqFusion) that addresses intra-class inconsistency and boundary displacement in dense image prediction tasks. Compared with the original YOLOv11, the resulting neck is somewhat lighter, and the modification improved accuracy on the author's multi-class dataset.



2. How FreqFusion Works

Official paper address: click here to open

Official code address: click here to open


The main contribution of 《Frequency-aware Feature Fusion for Dense Image Prediction》 is a new feature-fusion method aimed at the intra-class inconsistency and boundary-displacement problems in dense image prediction. The paper introduces several core concepts; a brief summary follows:

Problem definition:
Dense image prediction tasks (e.g. semantic segmentation, object detection and instance segmentation) need accurate category information and precise spatial boundaries. Conventional feature-fusion methods preserve neither well: features belonging to the same category can vary widely within an object (intra-class inconsistency), and object boundaries become blurred.

The solution - FreqFusion:
The paper proposes frequency-aware feature fusion (FreqFusion), which improves the fusion result with three components (a minimal sketch of the low-pass filtering idea follows this list):
1. Adaptive Low-Pass Filter (ALPF) generator: predicts spatially variant low-pass filters that smooth the high-level features and reduce intra-class inconsistency.
2. Offset generator: resamples features so that inconsistent ones are replaced by nearby, more category-consistent features, further sharpening boundaries.
3. Adaptive High-Pass Filter (AHPF) generator: restores high-frequency information lost during downsampling, enhancing boundary detail.
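To make the ALPF idea concrete, below is a minimal sketch of my own (not the paper's exact implementation) of spatially variant low-pass filtering: a small convolution predicts a KxK kernel per location, softmax turns each kernel into a local weighted average, and unfold applies it. FreqFusion's real version (see the core code in Section 3) additionally applies a Hamming window to the kernels and uses the CARAFE op for efficiency.

# Sketch only: spatially variant low-pass filtering with predicted, softmax-normalized kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F

def spatially_variant_lowpass(feat, kernel_pred, k=5):
    # feat: (B, C, H, W); kernel_pred: (B, k*k, H, W), e.g. produced by a small conv
    B, C, H, W = feat.shape
    kernels = F.softmax(kernel_pred.view(B, k * k, H * W), dim=1)      # each location's kernel sums to 1
    patches = F.unfold(feat, k, padding=k // 2).view(B, C, k * k, H * W)
    smoothed = (patches * kernels.unsqueeze(1)).sum(dim=2)             # per-location weighted average
    return smoothed.view(B, C, H, W)

x = torch.randn(1, 16, 32, 32)
kernel_generator = nn.Conv2d(16, 5 * 5, 3, padding=1)                  # plays the "ALPF generator" role
y = spatially_variant_lowpass(x, kernel_generator(x), k=5)
print(y.shape)  # torch.Size([1, 16, 32, 32])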

Advantages:
Better intra-class consistency: the ALPF component reduces feature variation inside an object, increasing intra-class similarity.
Boundary refinement: the offset generator and the AHPF component correct object boundaries, making them crisper.
Broad applicability: the method is validated on several tasks, including semantic segmentation, object detection and instance segmentation.

Experimental results:
For semantic segmentation, FreqFusion gives clear gains over existing methods on several datasets (e.g. Cityscapes and ADE20K), for example +2.8 mIoU over the previous best on ADE20K.
For object detection, a Faster R-CNN equipped with FreqFusion gains 1.8 AP on MS COCO.
Significant performance improvements are also reported for instance segmentation and panoptic segmentation.

Summary:
By combining adaptive low-pass and high-pass filters, FreqFusion resolves the intra-class inconsistency and boundary blurring of standard feature fusion and improves prediction performance across multiple computer vision tasks.


3. Core Code

See Section 4 for how to use the core code!

# TPAMI 2024: Frequency-aware Feature Fusion for Dense Image Prediction
import torch
import torch.nn as nn
import torch.nn.functional as F
from mmcv.ops.carafe import normal_init, xavier_init, carafe
import warnings
import numpy as np

__all__ = ['FreqFusion']


def normal_init(module, mean=0, std=1, bias=0):
    if hasattr(module, 'weight') and module.weight is not None:
        nn.init.normal_(module.weight, mean, std)
    if hasattr(module, 'bias') and module.bias is not None:
        nn.init.constant_(module.bias, bias)


def constant_init(module, val, bias=0):
    if hasattr(module, 'weight') and module.weight is not None:
        nn.init.constant_(module.weight, val)
    if hasattr(module, 'bias') and module.bias is not None:
        nn.init.constant_(module.bias, bias)


def resize(input,
           size=None,
           scale_factor=None,
           mode='nearest',
           align_corners=None,
           warning=True):
    if warning:
        if size is not None and align_corners:
            input_h, input_w = tuple(int(x) for x in input.shape[2:])
            output_h, output_w = tuple(int(x) for x in size)
            if output_h > input_h or output_w > input_w:
                if ((output_h > 1 and output_w > 1 and input_h > 1
                     and input_w > 1) and (output_h - 1) % (input_h - 1)
                        and (output_w - 1) % (input_w - 1)):
                    warnings.warn(
                        f'When align_corners={align_corners}, '
                        'the output would more aligned if '
                        f'input size {(input_h, input_w)} is `x+1` and '
                        f'out size {(output_h, output_w)} is `nx+1`')
    return F.interpolate(input, size, scale_factor, mode, align_corners)


def hamming2D(M, N):
    """
    Generate a 2D Hamming window.

    Args:
        M: number of rows of the window.
        N: number of columns of the window.

    Returns:
        A 2D Hamming window (outer product of two 1D Hamming windows).
    """
    hamming_x = np.hamming(M)
    hamming_y = np.hamming(N)
    hamming_2d = np.outer(hamming_x, hamming_y)
    return hamming_2d


class FreqFusion(nn.Module):
    def __init__(self,
                 channels,
                 scale_factor=1,
                 lowpass_kernel=5,
                 highpass_kernel=3,
                 up_group=1,
                 encoder_kernel=3,
                 encoder_dilation=1,
                 compressed_channels=64,
                 align_corners=False,
                 upsample_mode='nearest',
                 feature_resample=False,  # use offset generator or not
                 feature_resample_group=4,
                 comp_feat_upsample=True,  # use ALPF & AHPF for init upsampling
                 use_high_pass=True,
                 use_low_pass=True,
                 hr_residual=True,
                 semi_conv=True,
                 hamming_window=True,  # for regularization, does not matter much
                 feature_resample_norm=True,
                 **kwargs):
        super().__init__()
        hr_channels, lr_channels = channels
        self.scale_factor = scale_factor
        self.lowpass_kernel = lowpass_kernel
        self.highpass_kernel = highpass_kernel
        self.up_group = up_group
        self.encoder_kernel = encoder_kernel
        self.encoder_dilation = encoder_dilation
        self.compressed_channels = compressed_channels
        self.hr_channel_compressor = nn.Conv2d(hr_channels, self.compressed_channels, 1)
        self.lr_channel_compressor = nn.Conv2d(lr_channels, self.compressed_channels, 1)
        self.content_encoder = nn.Conv2d(  # ALPF generator
            self.compressed_channels,
            lowpass_kernel ** 2 * self.up_group * self.scale_factor * self.scale_factor,
            self.encoder_kernel,
            padding=int((self.encoder_kernel - 1) * self.encoder_dilation / 2),
            dilation=self.encoder_dilation,
            groups=1)
        self.align_corners = align_corners
        self.upsample_mode = upsample_mode
        self.hr_residual = hr_residual
        self.use_high_pass = use_high_pass
        self.use_low_pass = use_low_pass
        self.semi_conv = semi_conv
        self.feature_resample = feature_resample
        self.comp_feat_upsample = comp_feat_upsample
        if self.feature_resample:
            self.dysampler = LocalSimGuidedSampler(in_channels=compressed_channels, scale=2, style='lp',
                                                   groups=feature_resample_group, use_direct_scale=True,
                                                   kernel_size=encoder_kernel, norm=feature_resample_norm)
        if self.use_high_pass:
            self.content_encoder2 = nn.Conv2d(  # AHPF generator
                self.compressed_channels,
                highpass_kernel ** 2 * self.up_group * self.scale_factor * self.scale_factor,
                self.encoder_kernel,
                padding=int((self.encoder_kernel - 1) * self.encoder_dilation / 2),
                dilation=self.encoder_dilation,
                groups=1)
        self.hamming_window = hamming_window
        lowpass_pad = 0
        highpass_pad = 0
        if self.hamming_window:
            self.register_buffer('hamming_lowpass', torch.FloatTensor(
                hamming2D(lowpass_kernel + 2 * lowpass_pad, lowpass_kernel + 2 * lowpass_pad))[None, None,])
            self.register_buffer('hamming_highpass', torch.FloatTensor(
                hamming2D(highpass_kernel + 2 * highpass_pad, highpass_kernel + 2 * highpass_pad))[None, None,])
        else:
            self.register_buffer('hamming_lowpass', torch.FloatTensor([1.0]))
            self.register_buffer('hamming_highpass', torch.FloatTensor([1.0]))
        self.init_weights()

    def init_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                xavier_init(m, distribution='uniform')
        normal_init(self.content_encoder, std=0.001)
        if self.use_high_pass:
            normal_init(self.content_encoder2, std=0.001)

    def kernel_normalizer(self, mask, kernel, scale_factor=None, hamming=1):
        if scale_factor is not None:
            mask = F.pixel_shuffle(mask, self.scale_factor)
        n, mask_c, h, w = mask.size()
        mask_channel = int(mask_c / float(kernel**2))
        mask = mask.view(n, mask_channel, -1, h, w)
        mask = F.softmax(mask, dim=2, dtype=mask.dtype)
        mask = mask.view(n, mask_channel, kernel, kernel, h, w)
        mask = mask.permute(0, 1, 4, 5, 2, 3).view(n, -1, kernel, kernel)
        mask = mask * hamming
        mask /= mask.sum(dim=(-1, -2), keepdim=True)
        mask = mask.view(n, mask_channel, h, w, -1)
        mask = mask.permute(0, 1, 4, 2, 3).view(n, -1, h, w).contiguous()
        return mask

    def forward(self, x):
        hr_feat, lr_feat = x
        compressed_hr_feat = self.hr_channel_compressor(hr_feat)
        compressed_lr_feat = self.lr_channel_compressor(lr_feat)
        if self.semi_conv:
            if self.comp_feat_upsample:
                if self.use_high_pass:
                    mask_hr_hr_feat = self.content_encoder2(compressed_hr_feat)
                    mask_hr_init = self.kernel_normalizer(mask_hr_hr_feat, self.highpass_kernel, hamming=self.hamming_highpass)
                    compressed_hr_feat = compressed_hr_feat + compressed_hr_feat - carafe(compressed_hr_feat, mask_hr_init, self.highpass_kernel, self.up_group, 1)
                    mask_lr_hr_feat = self.content_encoder(compressed_hr_feat)
                    mask_lr_init = self.kernel_normalizer(mask_lr_hr_feat, self.lowpass_kernel, hamming=self.hamming_lowpass)
                    mask_lr_lr_feat_lr = self.content_encoder(compressed_lr_feat)
                    mask_lr_lr_feat = F.interpolate(
                        carafe(mask_lr_lr_feat_lr, mask_lr_init, self.lowpass_kernel, self.up_group, 2),
                        size=compressed_hr_feat.shape[-2:], mode='nearest')
                    mask_lr = mask_lr_hr_feat + mask_lr_lr_feat
                    mask_lr_init = self.kernel_normalizer(mask_lr, self.lowpass_kernel, hamming=self.hamming_lowpass)
                    mask_hr_lr_feat = F.interpolate(
                        carafe(self.content_encoder2(compressed_lr_feat), mask_lr_init, self.lowpass_kernel, self.up_group, 2),
                        size=compressed_hr_feat.shape[-2:], mode='nearest')
                    mask_hr = mask_hr_hr_feat + mask_hr_lr_feat
                else:
                    raise NotImplementedError
            else:
                mask_lr = self.content_encoder(compressed_hr_feat) + F.interpolate(self.content_encoder(compressed_lr_feat), size=compressed_hr_feat.shape[-2:], mode='nearest')
                if self.use_high_pass:
                    mask_hr = self.content_encoder2(compressed_hr_feat) + F.interpolate(self.content_encoder2(compressed_lr_feat), size=compressed_hr_feat.shape[-2:], mode='nearest')
        else:
            compressed_x = F.interpolate(compressed_lr_feat, size=compressed_hr_feat.shape[-2:], mode='nearest') + compressed_hr_feat
            mask_lr = self.content_encoder(compressed_x)
            if self.use_high_pass:
                mask_hr = self.content_encoder2(compressed_x)
        mask_lr = self.kernel_normalizer(mask_lr, self.lowpass_kernel, hamming=self.hamming_lowpass)
        if self.semi_conv:
            lr_feat = carafe(lr_feat, mask_lr, self.lowpass_kernel, self.up_group, 2)
        else:
            lr_feat = resize(
                input=lr_feat,
                size=hr_feat.shape[2:],
                mode=self.upsample_mode,
                align_corners=None if self.upsample_mode == 'nearest' else self.align_corners)
            lr_feat = carafe(lr_feat, mask_lr, self.lowpass_kernel, self.up_group, 1)
        if self.use_high_pass:
            mask_hr = self.kernel_normalizer(mask_hr, self.highpass_kernel, hamming=self.hamming_highpass)
            hr_feat_hf = hr_feat - carafe(hr_feat, mask_hr, self.highpass_kernel, self.up_group, 1)
            if self.hr_residual:
                hr_feat = hr_feat_hf + hr_feat
            else:
                hr_feat = hr_feat_hf
        if self.feature_resample:
            lr_feat = self.dysampler(hr_x=compressed_hr_feat,
                                     lr_x=compressed_lr_feat, feat2sample=lr_feat)
        return hr_feat + lr_feat


class LocalSimGuidedSampler(nn.Module):
    """
    Offset generator in FreqFusion.
    """
    def __init__(self, in_channels, scale=2, style='lp', groups=4, use_direct_scale=True, kernel_size=1,
                 local_window=3, sim_type='cos', norm=True, direction_feat='sim_concat'):
        super().__init__()
        assert scale == 2
        assert style == 'lp'
        self.scale = scale
        self.style = style
        self.groups = groups
        self.local_window = local_window
        self.sim_type = sim_type
        self.direction_feat = direction_feat
        if style == 'pl':
            assert in_channels >= scale ** 2 and in_channels % scale ** 2 == 0
        assert in_channels >= groups and in_channels % groups == 0
        if style == 'pl':
            in_channels = in_channels // scale ** 2
            out_channels = 2 * groups
        else:
            out_channels = 2 * groups * scale ** 2
        if self.direction_feat == 'sim':
            self.offset = nn.Conv2d(local_window**2 - 1, out_channels, kernel_size=kernel_size, padding=kernel_size//2)
        elif self.direction_feat == 'sim_concat':
            self.offset = nn.Conv2d(in_channels + local_window**2 - 1, out_channels, kernel_size=kernel_size, padding=kernel_size//2)
        else:
            raise NotImplementedError
        normal_init(self.offset, std=0.001)
        if use_direct_scale:
            if self.direction_feat == 'sim':
                self.direct_scale = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, padding=kernel_size//2)
            elif self.direction_feat == 'sim_concat':
                self.direct_scale = nn.Conv2d(in_channels + local_window**2 - 1, out_channels, kernel_size=kernel_size, padding=kernel_size//2)
            else:
                raise NotImplementedError
            constant_init(self.direct_scale, val=0.)
        out_channels = 2 * groups
        if self.direction_feat == 'sim':
            self.hr_offset = nn.Conv2d(local_window**2 - 1, out_channels, kernel_size=kernel_size, padding=kernel_size//2)
        elif self.direction_feat == 'sim_concat':
            self.hr_offset = nn.Conv2d(in_channels + local_window**2 - 1, out_channels, kernel_size=kernel_size, padding=kernel_size//2)
        else:
            raise NotImplementedError
        normal_init(self.hr_offset, std=0.001)
        if use_direct_scale:
            if self.direction_feat == 'sim':
                self.hr_direct_scale = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, padding=kernel_size//2)
            elif self.direction_feat == 'sim_concat':
                self.hr_direct_scale = nn.Conv2d(in_channels + local_window**2 - 1, out_channels, kernel_size=kernel_size, padding=kernel_size//2)
            else:
                raise NotImplementedError
            constant_init(self.hr_direct_scale, val=0.)
        self.norm = norm
        if self.norm:
            self.norm_hr = nn.GroupNorm(in_channels // 8, in_channels)
            self.norm_lr = nn.GroupNorm(in_channels // 8, in_channels)
        else:
            self.norm_hr = nn.Identity()
            self.norm_lr = nn.Identity()
        self.register_buffer('init_pos', self._init_pos())

    def _init_pos(self):
        h = torch.arange((-self.scale + 1) / 2, (self.scale - 1) / 2 + 1) / self.scale
        return torch.stack(torch.meshgrid([h, h])).transpose(1, 2).repeat(1, self.groups, 1).reshape(1, -1, 1, 1)

    def sample(self, x, offset, scale=None):
        if scale is None:
            scale = self.scale
        B, _, H, W = offset.shape
        offset = offset.view(B, 2, -1, H, W)
        coords_h = torch.arange(H) + 0.5
        coords_w = torch.arange(W) + 0.5
        coords = torch.stack(torch.meshgrid([coords_w, coords_h])
                             ).transpose(1, 2).unsqueeze(1).unsqueeze(0).type(x.dtype).to(x.device)
        normalizer = torch.tensor([W, H], dtype=x.dtype, device=x.device).view(1, 2, 1, 1, 1)
        coords = 2 * (coords + offset) / normalizer - 1
        coords = F.pixel_shuffle(coords.view(B, -1, H, W), scale).view(
            B, 2, -1, scale * H, scale * W).permute(0, 2, 3, 4, 1).contiguous().flatten(0, 1)
        return F.grid_sample(x.reshape(B * self.groups, -1, x.size(-2), x.size(-1)), coords, mode='bilinear',
                             align_corners=False, padding_mode="border").view(B, -1, scale * H, scale * W)

    def forward(self, hr_x, lr_x, feat2sample):
        hr_x = self.norm_hr(hr_x)
        lr_x = self.norm_lr(lr_x)
        if self.direction_feat == 'sim':
            hr_sim = compute_similarity(hr_x, self.local_window, dilation=2, sim='cos')
            lr_sim = compute_similarity(lr_x, self.local_window, dilation=2, sim='cos')
        elif self.direction_feat == 'sim_concat':
            hr_sim = torch.cat([hr_x, compute_similarity(hr_x, self.local_window, dilation=2, sim='cos')], dim=1)
            lr_sim = torch.cat([lr_x, compute_similarity(lr_x, self.local_window, dilation=2, sim='cos')], dim=1)
            hr_x, lr_x = hr_sim, lr_sim
        offset = self.get_offset_lp(hr_x, lr_x, hr_sim, lr_sim)
        return self.sample(feat2sample, offset)

    def get_offset_lp(self, hr_x, lr_x, hr_sim, lr_sim):
        if hasattr(self, 'direct_scale'):
            offset = (self.offset(lr_sim) + F.pixel_unshuffle(self.hr_offset(hr_sim), self.scale)) * \
                     (self.direct_scale(lr_x) + F.pixel_unshuffle(self.hr_direct_scale(hr_x), self.scale)).sigmoid() + self.init_pos
        else:
            offset = (self.offset(lr_x) + F.pixel_unshuffle(self.hr_offset(hr_x), self.scale)) * 0.25 + self.init_pos
        return offset

    def get_offset(self, hr_x, lr_x):
        if self.style == 'pl':
            raise NotImplementedError
        return self.get_offset_lp(hr_x, lr_x)


def compute_similarity(input_tensor, k=3, dilation=1, sim='cos'):
    """
    Compute the cosine similarity between every point and the points in its
    KxK neighbourhood.

    Args:
        input_tensor: input tensor of shape [B, C, H, W].
        k: neighbourhood size (KxK window around each point).

    Returns:
        A tensor of shape [B, KxK-1, H, W].
    """
    B, C, H, W = input_tensor.shape
    # Unfold each point together with its KxK neighbourhood
    unfold_tensor = F.unfold(input_tensor, k, padding=(k // 2) * dilation, dilation=dilation)  # B, CxKxK, HW
    unfold_tensor = unfold_tensor.reshape(B, C, k**2, H, W)
    # Similarity between the centre point and every neighbour
    if sim == 'cos':
        similarity = F.cosine_similarity(unfold_tensor[:, :, k * k // 2:k * k // 2 + 1], unfold_tensor[:, :, :], dim=1)
    elif sim == 'dot':
        similarity = unfold_tensor[:, :, k * k // 2:k * k // 2 + 1] * unfold_tensor[:, :, :]
        similarity = similarity.sum(dim=1)
    else:
        raise NotImplementedError
    # Drop the centre point's similarity with itself, leaving KxK-1 values
    similarity = torch.cat((similarity[:, :k * k // 2], similarity[:, k * k // 2 + 1:]), dim=1)
    # Reshape back to [B, KxK-1, H, W]
    similarity = similarity.view(B, k * k - 1, H, W)
    return similarity
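
As a quick sanity check of the input/output contract, the snippet below (my own sketch; it requires mmcv with the CARAFE op installed) feeds FreqFusion a high-resolution/low-resolution feature pair. Note that in the yaml in Section 5.1 both inputs are first projected to 256 channels: the module returns hr_feat plus the upsampled lr_feat, so the two branches must share a channel count, and the low-resolution map must be exactly half the spatial size of the high-resolution one.

if __name__ == '__main__':
    hr = torch.randn(1, 64, 40, 40)        # higher-resolution branch
    lr = torch.randn(1, 64, 20, 20)        # lower-resolution branch, half the spatial size
    model = FreqFusion(channels=(64, 64))  # (hr_channels, lr_channels)
    out = model([hr, lr])
    print(out.shape)                       # expected: torch.Size([1, 64, 40, 40])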


4. How to Add the Module

4.1 Step One

First, create the file. Go to the 'ultralytics/nn' folder and create a directory named 'Addmodules' (if you are using the files from the group, it already exists and does not need to be created). Inside it, create a new .py file and paste the core code from Section 3 into it. The resulting layout is sketched below.
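A sketch of the resulting layout, assuming the new file is named FreqFusion.py (any name works as long as the import in step 4.2 matches):

ultralytics/
    nn/
        Addmodules/
            __init__.py
            FreqFusion.py   # paste the core code from Section 3 here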


4.2 Step Two

Second, create a new file named '__init__.py' in that directory (again, if you are using the group files it already exists), and import the module we just added inside it, as in the example below.
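For instance, assuming the file created in step 4.1 is called FreqFusion.py, the import would look like this:

# ultralytics/nn/Addmodules/__init__.py
from .FreqFusion import *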


4.3 Step Three

Third, open 'ultralytics/nn/tasks.py' and import and register our module there (if you are using the group files this is already done, and you can go straight to step four).
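A typical way to do this, assuming the Addmodules package from the previous steps, is a wildcard import next to the other module imports near the top of tasks.py:

# top of ultralytics/nn/tasks.py
from ultralytics.nn.Addmodules import *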


4.4 Step Four

Add the branch below inside parse_model (also in 'ultralytics/nn/tasks.py'), alongside the other module branches. It treats FreqFusion as a multi-input layer: the channel counts of all input layers are passed as the module's first argument, and the layer's output channels (c2) are taken from the first input. For example, for the first FreqFusion layer in the yaml in Section 5.1, where both inputs have 256 channels, this builds FreqFusion(channels=[256, 256]) and records c2 = 256.

elif m in {FreqFusion}:
    c2 = ch[f[0]]
    args = [[ch[x] for x in f], *args]


4.5 Step Five

Fifth, still in 'ultralytics/nn/tasks.py', locate the stride-computation block (the part marked with a red box in the original screenshots) and replace it with the code below. The CARAFE op used by FreqFusion generally needs a CUDA device, so if the dry-run forward pass on the CPU fails, this version retries it on the GPU.

try:
    m.stride = torch.tensor([s / x.shape[-2] for x in _forward(torch.zeros(1, ch, s, s))])  # forward on CPU
except RuntimeError:
    try:
        self.model.to(torch.device('cuda'))
        m.stride = torch.tensor([s / x.shape[-2] for x in _forward(
            torch.zeros(1, ch, s, s).to(torch.device('cuda')))])  # forward on CUDA
    except RuntimeError as error:
        raise error


That completes the modifications; you can now copy the yaml file below and run it.


5. Training


5.1 The yaml File

Training info: YOLO11n-FreqFusion-BiFPN summary: 356 layers, 2,441,503 parameters, 2,441,487 gradients, 6.9 GFLOPs

Note: this mechanism requires AMP to be disabled during training, otherwise it will raise an error.

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [4, 1, Conv, [256]] # 11-P3/8
  - [6, 1, Conv, [256]] # 12-P4/16
  - [10, 1, Conv, [256]] # 13-P5/32

  - [[12, -1], 1, FreqFusion, []] # 14
  - [-1, 2, C3k2, [256, False]] # 15-P4/16

  - [[11, -1], 1, FreqFusion, []] # 16
  - [-1, 2, C3k2, [256, False]] # 17-P3/8

  - [1, 1, Conv, [256, 3, 2]] # 18 P2->P3
  - [[-1, 11, 17], 1, Bi_FPN, []] # 19
  - [-1, 2, C3k2, [256, False]] # 20-P3/8

  - [-1, 1, Conv, [256, 3, 2]] # 21 P3->P4
  - [[-1, 12, 15], 1, Bi_FPN, []] # 22
  - [-1, 2, C3k2, [512, False]] # 23-P4/16

  - [-1, 1, Conv, [256, 3, 2]] # 24 P4->P5
  - [[-1, 13], 1, Bi_FPN, []] # 25
  - [-1, 2, C3k2, [1024, True]] # 26-P5/32

  - [[20, 23, 26], 1, Detect, [nc]] # Detect(P3, P4, P5)
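
The head above also uses a Bi_FPN fusion layer. That module is not part of the FreqFusion file in Section 3; it belongs to the BiFPN side of this secondary innovation and is included in the group/project files. For readers without those files, here is a minimal sketch of BiFPN-style fast normalized fusion (learnable non-negative weights over equally shaped inputs); the module actually shipped with the column may differ in detail, and it must also be registered in parse_model so it receives the right number of inputs.

import torch
import torch.nn as nn

class Bi_FPN(nn.Module):
    """Minimal BiFPN-style weighted fusion: out = sum_i w_i * x_i, with w_i >= 0 normalized to (roughly) sum to 1."""
    def __init__(self, num_inputs=2, eps=1e-4):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_inputs, dtype=torch.float32))
        self.eps = eps

    def forward(self, x):
        # x: list/tuple of feature maps with identical shapes
        w = torch.relu(self.weight)
        w = w / (w.sum() + self.eps)  # "fast normalized fusion" from the BiFPN paper
        return sum(w[i] * x[i] for i in range(len(x)))

# e.g. fusing the three P3-level branches from the yaml above (n-scale channels)
fuse = Bi_FPN(num_inputs=3)
p3 = fuse([torch.randn(1, 64, 80, 80) for _ in range(3)])
print(p3.shape)  # torch.Size([1, 64, 80, 80])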


5.2 Training Script

Create a .py file, paste in the code below, set your own file paths, and run it.

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('yolo11-FreqFusion-BiFPN.yaml')  # point this at the yaml you saved from section 5.1
    # To switch model scales, change the name passed to YOLO, e.g. yolo11s-FreqFusion-BiFPN.yaml for the s scale
    # (you change the name here, not the config file on disk).
    # model.load('yolo11n.pt')  # whether to load pretrained weights; for research it is usually better not to, otherwise gains are hard to demonstrate
    model.train(data=r"C:\Users\Administrator\PycharmProjects\yolov5-master\yolov5-master\Construction Site Safety.v30-raw-images_latestversion.yolov8\data.yaml",
                # For other tasks, open 'ultralytics/cfg/default.yaml' and change task to detect, segment, classify or pose
                cache=False,
                imgsz=640,
                epochs=150,
                single_cls=False,  # single-class detection or not
                batch=16,
                close_mosaic=0,
                workers=0,
                device='0',
                optimizer='SGD',  # using SGD
                # resume='runs/train/exp21/weights/last.pt',  # set to your last.pt path to resume training
                amp=False,  # AMP must be disabled for this mechanism; also disable it if the training loss becomes NaN
                project='runs/train',
                name='exp',
                )


5.3 Training Screenshots


6. Summary

That concludes this post. I recommend my YOLOv11 effective-improvement column, which is newly opened with an average quality score of 98. I will keep reproducing papers from the latest top venues and also supplement some older improvement mechanisms. If this post helped you, please subscribe to the column and follow for future updates.