
YOLOv11 Improvements - Special Scenarios - MB-TaylorFormer for Better High-Resolution and Hazy-Image Detection with YOLOv11 (ICCV, exclusive first release)

1. Introduction

The improvement introduced in this article is MB-TaylorFormer, an image-dehazing network published at the 2023 International Conference on Computer Vision (ICCV), which makes it a fairly authoritative dehazing work. MB-TaylorFormer is a multi-branch, efficient Transformer designed for image dehazing: it approximates the softmax-attention mechanism with a Taylor-series expansion, bringing the computational complexity down to linear. The original network is fairly heavy in both computation and parameters, so I made some adjustments that reduce both substantially; it also offers a new solution for high-resolution image processing.

Everyone is welcome to subscribe to my column and learn YOLO together!



2. How MB-TaylorFormer Works

Official paper: click here to jump to the official paper

Official code: click here to jump to the official code repository


2.1 MB-TaylorFormer Basics

MB-TaylorFormer is a multi-branch, efficient Transformer network designed for image dehazing. It approximates the softmax-attention mechanism with a Taylor-series expansion and thereby achieves linear computational complexity. Its main features and innovations are:

1. Self-attention via Taylor expansion: by approximating softmax-attention with a Taylor expansion, MB-TaylorFormer processes images at linear computational cost, overcoming the high cost of conventional Transformers on high-resolution images.

2. Multi-scale attention refinement module: to compensate for the error the Taylor expansion may introduce, a multi-scale attention refinement module is proposed; it learns local information from the queries and keys to correct the output, improving the accuracy and naturalness of the dehazing result.

3. Multi-branch architecture with multi-scale patch embedding: the network adopts a multi-branch architecture combined with a multi-scale patch-embedding strategy, which flexibly handles features at different scales and captures long-range pixel interactions, helping the dehazing task recover more detail and higher-quality images.

In short: through its innovative design and efficient implementation, MB-TaylorFormer delivers efficient, high-performance dehazing and provides a new solution for high-resolution image processing.


2.2 Self-Attention via Taylor Expansion

MB-TaylorFormer approximates softmax-attention with a Taylor expansion and thus achieves linear computational complexity. By replacing the exponential inside the softmax with its Taylor series, the mechanism substantially lowers the compute burden of applying a Transformer to images. Especially on high-resolution inputs, it preserves the Transformer's ability to capture global information while avoiding the resource consumption caused by the quadratic growth of conventional softmax-attention.
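To make this concrete, here is a minimal sketch of my own (simplified and self-contained; the actual Attention module in Section 3 additionally scales Q and K by a learnable norm and works per attention head) showing how the first-order approximation exp(q·k) ≈ 1 + q·k turns quadratic attention into a linear-cost computation:

import torch

def softmax_attention(q, k, v):
    # standard attention: the N x N score matrix costs O(N^2) time and memory
    return (q @ k.transpose(-2, -1)).softmax(dim=-1) @ v

def taylor_linear_attention(q, k, v, eps=1e-6):
    # first-order Taylor expansion of the softmax kernel: exp(q·k) ≈ 1 + q·k
    # (q and k are L2-normalized so the approximation stays non-negative);
    # the numerator becomes sum(v) + q @ (k^T v), the denominator N + q @ sum(k)
    q = torch.nn.functional.normalize(q, dim=-1)
    k = torch.nn.functional.normalize(k, dim=-1)
    kv = k.transpose(-2, -1) @ v                          # [B, C, C], independent of N
    numerator = v.sum(dim=-2, keepdim=True) + q @ kv      # [B, N, C]
    denominator = k.shape[-2] + q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1) + eps
    return numerator / denominator

if __name__ == "__main__":
    q, k, v = (torch.randn(1, 4096, 32) for _ in range(3))   # N = 4096 tokens, C = 32 channels
    print(taylor_linear_attention(q, k, v).shape)             # torch.Size([1, 4096, 32])

The key point is that k.transpose(-2, -1) @ v is only a C x C matrix, so the cost grows linearly with the number of pixels N instead of quadratically, which is exactly what makes high-resolution inputs affordable.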


2.3 Multi-Scale Attention Refinement Module

To deal with the approximation error that the Taylor expansion may introduce, MB-TaylorFormer designs a multi-scale attention refinement module (MSAR). The module learns local information from the queries (Q) and keys (K) and produces a gating tensor that corrects and refines the output of the self-attention mechanism. MSAR not only compensates for the error introduced by the Taylor expansion but also improves the accuracy and visual naturalness of the dehazing result, so the restored image looks clearer and more realistic.
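As a rough illustration of the gating idea (a toy version of my own, not the exact MSAR design from the paper, which splits several window sizes across the attention heads), the refinement can be read as: learn a local gate from Q and K with a small convolution, then use it to modulate and correct the approximate attention output:

import torch
import torch.nn as nn

class SimpleGate(nn.Module):
    # toy refinement: a depthwise 3x3 conv looks at local Q/K structure and
    # produces a sigmoid gate that rescales (corrects) the attention output
    def __init__(self, channels):
        super().__init__()
        self.local = nn.Conv2d(channels * 2, channels * 2, 3, padding=1, groups=channels * 2)
        self.to_gate = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, q_map, k_map, attn_out):           # all tensors are [B, C, H, W]
        gate = torch.sigmoid(self.to_gate(self.local(torch.cat([q_map, k_map], dim=1))))
        return attn_out * gate

if __name__ == "__main__":
    m = SimpleGate(16)
    x = torch.randn(2, 16, 32, 32)
    print(m(x, x, x).shape)                               # torch.Size([2, 16, 32, 32])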


2.4 Multi-Branch Architecture and Multi-Scale Patch Embedding

MB-TaylorFormer introduces a multi-branch architecture paired with a multi-scale patch-embedding strategy, strengthening the network's ability to handle features at different scales. The architecture lets the network flexibly process and fuse features of different scales at the patch-embedding stage, capturing multi-level semantics from coarse to fine. The multi-scale patch embedding is implemented with overlapping deformable convolutions of different scales, so the Transformer adapts better to objects and details of different sizes and recovers higher-quality, more detailed images in the dehazing task.
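Here is a small sketch of my own (simplified; the real DWConv2d_BN in Section 3 is depthwise, adds a pointwise projection and clamps the offsets) of how overlapping deformable convolutions with different kernel sizes give parallel patch-embedding branches at several scales:

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformPatchEmbed(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k * k, 3, padding=1)    # one (x, y) offset per kernel position
        self.dcn = DeformConv2d(in_ch, out_ch, k, padding=k // 2)  # overlapping: stride 1, padded

    def forward(self, x):
        return self.dcn(x, self.offset(x).clamp(-1, 1))            # sample near the regular grid

if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)
    branches = nn.ModuleList([DeformPatchEmbed(3, 24, k) for k in (3, 5, 7)])  # coarse-to-fine branches
    print([tuple(b(x).shape) for b in branches])                    # all (1, 24, 64, 64)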


3. Core Code

How to use the core code is covered in Section 4 below. Note that the officially released version runs at well over 100 GFLOPs; I reduced it to 18.1 GFLOPs with only a slight increase in the parameter count, to make it practical to use.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops.deform_conv import DeformConv2d
import numbers
import math
from einops import rearrange
import numpy as np

__all__ = ['MB_TaylorFormer']

freqs_dict = dict()


##########################################################################
def to_3d(x):
    return rearrange(x, 'b c h w -> b (h w) c')


def to_4d(x, h, w):
    return rearrange(x, 'b (h w) c -> b c h w', h=h, w=w)


class BiasFree_LayerNorm(nn.Module):
    def __init__(self, normalized_shape):
        super(BiasFree_LayerNorm, self).__init__()
        if isinstance(normalized_shape, numbers.Integral):
            normalized_shape = (normalized_shape,)
        normalized_shape = torch.Size(normalized_shape)

        assert len(normalized_shape) == 1

        self.weight = nn.Parameter(torch.ones(normalized_shape))
        self.normalized_shape = normalized_shape

    def forward(self, x):
        sigma = x.var(-1, keepdim=True, unbiased=False)
        return x / torch.sqrt(sigma + 1e-5) * self.weight


class WithBias_LayerNorm(nn.Module):
    def __init__(self, normalized_shape):
        super(WithBias_LayerNorm, self).__init__()
        if isinstance(normalized_shape, numbers.Integral):
            normalized_shape = (normalized_shape,)
        normalized_shape = torch.Size(normalized_shape)

        assert len(normalized_shape) == 1

        self.weight = nn.Parameter(torch.ones(normalized_shape))
        self.bias = nn.Parameter(torch.zeros(normalized_shape))
        self.normalized_shape = normalized_shape

    def forward(self, x):
        mu = x.mean(-1, keepdim=True)
        sigma = x.var(-1, keepdim=True, unbiased=False)
        return (x - mu) / torch.sqrt(sigma + 1e-5) * self.weight + self.bias


class LayerNorm(nn.Module):
    def __init__(self, dim, LayerNorm_type):
        super(LayerNorm, self).__init__()
        if LayerNorm_type == 'BiasFree':
            self.body = BiasFree_LayerNorm(dim)
        else:
            self.body = WithBias_LayerNorm(dim)

    def forward(self, x):
        h, w = x.shape[-2:]
        return to_4d(self.body(to_3d(x)), h, w)


##########################################################################
## Gated-Dconv Feed-Forward Network (GDFN)
class FeedForward(nn.Module):
    def __init__(self, dim, ffn_expansion_factor, bias):
        super(FeedForward, self).__init__()

        hidden_features = int(dim * ffn_expansion_factor)

        self.project_in = nn.Conv2d(dim, hidden_features * 2, kernel_size=1, bias=bias)
        self.dwconv = nn.Conv2d(hidden_features * 2, hidden_features * 2, kernel_size=3, stride=1, padding=1,
                                groups=hidden_features * 2, bias=bias)
        self.project_out = nn.Conv2d(hidden_features, dim, kernel_size=1, bias=bias)

    def forward(self, x):
        x = self.project_in(x)
        x1, x2 = self.dwconv(x).chunk(2, dim=1)
        x = F.gelu(x1) * x2
        x = self.project_out(x)
        return x


class refine_att(nn.Module):
    """Convolutional relative position encoding."""

    def __init__(self, Ch, h, window):
        super().__init__()

        if isinstance(window, int):
            # Set the same window size for all attention heads.
            window = {window: h}
            self.window = window
        elif isinstance(window, dict):
            self.window = window
        else:
            raise ValueError()

        self.conv_list = nn.ModuleList()
        self.head_splits = []
        for cur_window, cur_head_split in window.items():
            dilation = 1  # Use dilation=1 at default.
            padding_size = (cur_window + (cur_window - 1) *
                            (dilation - 1)) // 2
            cur_conv = nn.Conv2d(
                cur_head_split * Ch * 2,
                cur_head_split,
                kernel_size=(cur_window, cur_window),
                padding=(padding_size, padding_size),
                dilation=(dilation, dilation),
                groups=cur_head_split,
            )

            self.conv_list.append(cur_conv)
            self.head_splits.append(cur_head_split)
        self.channel_splits = [x * Ch * 2 for x in self.head_splits]

    def forward(self, q, k, v, size):
        """forward function"""
        B, h, N, Ch = q.shape
        H, W = size

        # We don't use CLS_TOKEN
        q_img = q
        k_img = k
        v_img = v

        # Shape: [B, h, H*W, Ch] -> [B, h*Ch, H, W].
        q_img = rearrange(q_img, "B h (H W) Ch -> B h Ch H W", H=H, W=W)
        k_img = rearrange(k_img, "B h Ch (H W) -> B h Ch H W", H=H, W=W)
        qk_concat = torch.cat((q_img, k_img), 2)
        qk_concat = rearrange(qk_concat, "B h Ch H W -> B (h Ch) H W", H=H, W=W)
        # Split according to channels.
        qk_concat_list = torch.split(qk_concat, self.channel_splits, dim=1)
        qk_att_list = [
            conv(x) for conv, x in zip(self.conv_list, qk_concat_list)
        ]

        qk_att = torch.cat(qk_att_list, dim=1)
        # Shape: [B, h*Ch, H, W] -> [B, h, H*W, Ch].
        qk_att = rearrange(qk_att, "B (h Ch) H W -> B h (H W) Ch", h=h)

        return qk_att


##########################################################################
## Multi-DConv Head Transposed Self-Attention (MDTA)
class Attention(nn.Module):
    def __init__(self, dim, num_heads, bias, shared_refine_att=None, qk_norm=1):
        super(Attention, self).__init__()
        self.norm = qk_norm
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        # self.Leakyrelu=nn.LeakyReLU(negative_slope=0.01,inplace=True)
        self.sigmoid = nn.Sigmoid()
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=bias)
        self.qkv_dwconv = nn.Conv2d(dim * 3, dim * 3, kernel_size=3, stride=1, padding=1, groups=dim * 3, bias=bias)
        self.project_out = nn.Conv2d(dim, dim, kernel_size=1, bias=bias)
        if num_heads == 8:
            crpe_window = {
                3: 2,
                5: 3,
                7: 3
            }
        elif num_heads == 1:
            crpe_window = {
                3: 1,
            }
        elif num_heads == 2:
            crpe_window = {
                3: 2,
            }
        elif num_heads == 4:
            crpe_window = {
                3: 2,
                5: 2,
            }
        self.refine_att = refine_att(Ch=dim // num_heads,
                                     h=num_heads,
                                     window=crpe_window)

    def forward(self, x):
        b, c, h, w = x.shape

        qkv = self.qkv_dwconv(self.qkv(x))
        q, k, v = qkv.chunk(3, dim=1)

        q = rearrange(q, 'b (head c) h w -> b head (h w) c', head=self.num_heads)
        k = rearrange(k, 'b (head c) h w -> b head c (h w)', head=self.num_heads)
        v = rearrange(v, 'b (head c) h w -> b head (h w) c', head=self.num_heads)
        # q = torch.nn.functional.normalize(q, dim=-1)
        q_norm = torch.norm(q, p=2, dim=-1, keepdim=True) / self.norm + 1e-6
        q = torch.div(q, q_norm)
        k_norm = torch.norm(k, p=2, dim=-2, keepdim=True) / self.norm + 1e-6
        k = torch.div(k, k_norm)
        # k = torch.nn.functional.normalize(k, dim=-2)
        refine_weight = self.refine_att(q, k, v, size=(h, w))
        # refine_weight=self.Leakyrelu(refine_weight)
        refine_weight = self.sigmoid(refine_weight)
        attn = k @ v
        # attn = attn.softmax(dim=-1)

        # print(torch.sum(k, dim=-1).unsqueeze(3).shape)
        out_numerator = torch.sum(v, dim=-2).unsqueeze(2) + (q @ attn)
        out_denominator = torch.full((h * w, c // self.num_heads), h * w).to(q.device) \
                          + q @ torch.sum(k, dim=-1).unsqueeze(3).repeat(1, 1, 1, c // self.num_heads) + 1e-6

        # out=torch.div(out_numerator,out_denominator)*self.temperature*refine_weight
        out = torch.div(out_numerator, out_denominator) * self.temperature
        out = out * refine_weight
        out = rearrange(out, 'b head (h w) c-> b (head c) h w', head=self.num_heads, h=h, w=w)
        out = self.project_out(out)
        return out


##########################################################################
class TransformerBlock(nn.Module):
    def __init__(self, dim, num_heads, ffn_expansion_factor, bias, LayerNorm_type, shared_refine_att=None, qk_norm=1):
        super(TransformerBlock, self).__init__()

        self.norm1 = LayerNorm(dim, LayerNorm_type)
        self.attn = Attention(dim, num_heads, bias, shared_refine_att=shared_refine_att, qk_norm=qk_norm)
        self.norm2 = LayerNorm(dim, LayerNorm_type)
        self.ffn = FeedForward(dim, ffn_expansion_factor, bias)

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        x = x + self.ffn(self.norm2(x))
        return x


class MHCAEncoder(nn.Module):
    """Multi-Head Convolutional self-Attention Encoder comprised of `MHCA`
    blocks."""

    def __init__(
            self,
            dim,
            num_layers=1,
            num_heads=8,
            ffn_expansion_factor=2.66,
            bias=False,
            LayerNorm_type='BiasFree',
            qk_norm=1
    ):
        super().__init__()

        self.num_layers = num_layers
        self.MHCA_layers = nn.ModuleList([
            TransformerBlock(
                dim,
                num_heads=num_heads,
                ffn_expansion_factor=ffn_expansion_factor,
                bias=bias,
                LayerNorm_type=LayerNorm_type,
                qk_norm=qk_norm
            ) for idx in range(self.num_layers)
        ])

    def forward(self, x, size):
        """forward function"""
        H, W = size
        B = x.shape[0]
        # return x's shape : [B, N, C] -> [B, C, H, W]
        x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous()
        for layer in self.MHCA_layers:
            x = layer(x)
        return x


class ResBlock(nn.Module):
    """Residual block for convolutional local feature."""

    def __init__(
            self,
            in_features,
            hidden_features=None,
            out_features=None,
            act_layer=nn.Hardswish,
            norm_layer=nn.BatchNorm2d,
    ):
        super().__init__()

        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        # self.act0 = act_layer()
        self.conv1 = Conv2d_BN(in_features,
                               hidden_features,
                               act_layer=act_layer)
        self.dwconv = nn.Conv2d(
            hidden_features,
            hidden_features,
            3,
            1,
            1,
            bias=False,
            groups=hidden_features,
        )
        # self.norm = norm_layer(hidden_features)
        self.act = act_layer()
        self.conv2 = Conv2d_BN(hidden_features, out_features)
        self.apply(self._init_weights)

    def _init_weights(self, m):
        """
        initialization
        """
        if isinstance(m, nn.Conv2d):
            fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
            fan_out //= m.groups
            m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
            if m.bias is not None:
                m.bias.data.zero_()

    def forward(self, x):
        """forward function"""
        identity = x
        # x=self.act0(x)
        feat = self.conv1(x)
        feat = self.dwconv(feat)
        # feat = self.norm(feat)
        feat = self.act(feat)
        feat = self.conv2(feat)
        return identity + feat


class MHCA_stage(nn.Module):
    """Multi-Head Convolutional self-Attention stage comprised of `MHCAEncoder`
    layers."""

    def __init__(
            self,
            embed_dim,
            out_embed_dim,
            num_layers=1,
            num_heads=8,
            ffn_expansion_factor=2.66,
            num_path=4,
            bias=False,
            LayerNorm_type='BiasFree',
            qk_norm=1
    ):
        super().__init__()

        self.mhca_blks = nn.ModuleList([
            MHCAEncoder(
                embed_dim,
                num_layers,
                num_heads,
                ffn_expansion_factor=ffn_expansion_factor,
                bias=bias,
                LayerNorm_type=LayerNorm_type,
                qk_norm=qk_norm
            ) for _ in range(num_path)
        ])

        self.aggregate = SKFF(embed_dim, height=num_path)
        # self.InvRes = ResBlock(in_features=embed_dim, out_features=embed_dim)
        # self.aggregate = Conv2d_aggregate(embed_dim * (num_path + 1),
        #                                   out_embed_dim,
        #                                   act_layer=nn.Hardswish)

    def forward(self, inputs):
        """forward function"""
        # att_outputs = [self.InvRes(inputs[0])]
        att_outputs = []
        for x, encoder in zip(inputs, self.mhca_blks):
            # [B, C, H, W] -> [B, N, C]
            _, _, H, W = x.shape
            x = x.flatten(2).transpose(1, 2).contiguous()
            att_outputs.append(encoder(x, size=(H, W)))

        # out_concat = torch.cat(att_outputs, dim=1)
        out = self.aggregate(att_outputs)
        return out


##########################################################################
## Overlapped image patch embedding with 3x3 Conv
class Conv2d_BN(nn.Module):
    def __init__(
            self,
            in_ch,
            out_ch,
            kernel_size=1,
            stride=1,
            pad=0,
            dilation=1,
            groups=1,
            bn_weight_init=1,
            norm_layer=nn.BatchNorm2d,
            act_layer=None,
    ):
        super().__init__()

        self.conv = torch.nn.Conv2d(in_ch,
                                    out_ch,
                                    kernel_size,
                                    stride,
                                    pad,
                                    dilation,
                                    groups,
                                    bias=False)
        # self.bn = norm_layer(out_ch)
        # torch.nn.init.constant_(self.bn.weight, bn_weight_init)
        # torch.nn.init.constant_(self.bn.bias, 0)
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # Note that there is no bias due to BN
                fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(mean=0.0, std=np.sqrt(2.0 / fan_out))

        self.act_layer = act_layer() if act_layer is not None else nn.Identity()

    def forward(self, x):
        x = self.conv(x)
        # x = self.bn(x)
        x = self.act_layer(x)
        return x


class SKFF(nn.Module):
    def __init__(self, in_channels, height=2, reduction=8, bias=False):
        super(SKFF, self).__init__()

        self.height = height
        d = max(int(in_channels / reduction), 4)

        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv_du = nn.Sequential(nn.Conv2d(in_channels, d, 1, padding=0, bias=bias), nn.PReLU())

        self.fcs = nn.ModuleList([])
        for i in range(self.height):
            self.fcs.append(nn.Conv2d(d, in_channels, kernel_size=1, stride=1, bias=bias))

        self.softmax = nn.Softmax(dim=1)

    def forward(self, inp_feats):
        batch_size = inp_feats[0].shape[0]
        n_feats = inp_feats[0].shape[1]

        inp_feats = torch.cat(inp_feats, dim=1)
        inp_feats = inp_feats.view(batch_size, self.height, n_feats, inp_feats.shape[2], inp_feats.shape[3])

        feats_U = torch.sum(inp_feats, dim=1)
        feats_S = self.avg_pool(feats_U)
        feats_Z = self.conv_du(feats_S)

        attention_vectors = [fc(feats_Z) for fc in self.fcs]
        attention_vectors = torch.cat(attention_vectors, dim=1)
        attention_vectors = attention_vectors.view(batch_size, self.height, n_feats, 1, 1)
        # stx()
        attention_vectors = self.softmax(attention_vectors)

        feats_V = torch.sum(inp_feats * attention_vectors, dim=1)
        return feats_V


class DWConv2d_BN(nn.Module):
    def __init__(
            self,
            in_ch,
            out_ch,
            kernel_size=1,
            stride=1,
            norm_layer=nn.BatchNorm2d,
            act_layer=nn.Hardswish,
            bn_weight_init=1,
            offset_clamp=(-1, 1)
    ):
        super().__init__()
        # dw
        # self.conv=torch.nn.Conv2d(in_ch,out_ch,kernel_size,stride,(kernel_size - 1) // 2,bias=False,)
        # self.mask_generator = nn.Sequential(nn.Conv2d(in_channels=in_ch, out_channels=in_ch, kernel_size=3,
        #                                               stride=1, padding=1, bias=False, groups=in_ch),
        #                                     nn.Conv2d(in_channels=in_ch, out_channels=9,
        #                                               kernel_size=1,
        #                                               stride=1, padding=0, bias=False)
        #                                     )
        self.offset_clamp = offset_clamp
        self.offset_generator = nn.Sequential(nn.Conv2d(in_channels=in_ch, out_channels=in_ch, kernel_size=3,
                                                        stride=1, padding=1, bias=False, groups=in_ch),
                                              nn.Conv2d(in_channels=in_ch, out_channels=18,
                                                        kernel_size=1,
                                                        stride=1, padding=0, bias=False)
                                              )
        self.dcn = DeformConv2d(
            in_channels=in_ch,
            out_channels=in_ch,
            kernel_size=3,
            stride=1,
            padding=1,
            bias=False,
            groups=in_ch
        )  # .cuda(7)
        self.pwconv = nn.Conv2d(in_ch, out_ch, 1, 1, 0, bias=False)
        # self.bn = norm_layer(out_ch)
        self.act = act_layer() if act_layer is not None else nn.Identity()
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2.0 / n))
                if m.bias is not None:
                    m.bias.data.zero_()
            # print(m)
            # elif isinstance(m, nn.BatchNorm2d):
            #     m.weight.data.fill_(bn_weight_init)
            #     m.bias.data.zero_()

    def forward(self, x):
        # x=self.conv(x)
        # x = self.bn(x)
        # x = self.act(x)
        # mask= torch.sigmoid(self.mask_generator(x))
        # print('1')
        offset = self.offset_generator(x)
        # print('2')
        if self.offset_clamp:
            offset = torch.clamp(offset, min=self.offset_clamp[0], max=self.offset_clamp[1])  # .cuda(7)1
        # print(offset)
        # print('3')
        # x=x.cuda(7)
        x = self.dcn(x, offset)
        # x=x.cpu()
        # print('4')
        x = self.pwconv(x)
        # print('5')
        # x = self.bn(x)
        x = self.act(x)
        return x


class DWCPatchEmbed(nn.Module):
    """Depthwise Convolutional Patch Embedding layer Image to Patch
    Embedding."""

    def __init__(self,
                 in_chans=3,
                 embed_dim=768,
                 patch_size=16,
                 stride=1,
                 idx=0,
                 act_layer=nn.Hardswish,
                 offset_clamp=(-1, 1)):
        super().__init__()

        self.patch_conv = DWConv2d_BN(
            in_chans,
            embed_dim,
            kernel_size=patch_size,
            stride=stride,
            act_layer=act_layer,
            offset_clamp=offset_clamp
        )
        """
        self.patch_conv = DWConv2d_BN(
            in_chans,
            embed_dim,
            kernel_size=patch_size,
            stride=stride,
            act_layer=act_layer,
        )
        """

    def forward(self, x):
        """forward function"""
        x = self.patch_conv(x)
        return x


class Patch_Embed_stage(nn.Module):
    """Depthwise Convolutional Patch Embedding stage comprised of
    `DWCPatchEmbed` layers."""

    def __init__(self, in_chans, embed_dim, num_path=4, isPool=False, offset_clamp=(-1, 1)):
        super(Patch_Embed_stage, self).__init__()

        self.patch_embeds = nn.ModuleList([
            DWCPatchEmbed(
                in_chans=in_chans if idx == 0 else embed_dim,
                embed_dim=embed_dim,
                patch_size=3,
                stride=1,
                idx=idx,
                offset_clamp=offset_clamp
            ) for idx in range(num_path)
        ])

    def forward(self, x):
        """forward function"""
        att_inputs = []
        for pe in self.patch_embeds:
            x = pe(x)
            att_inputs.append(x)
        return att_inputs


class OverlapPatchEmbed(nn.Module):
    def __init__(self, in_c=3, embed_dim=48, bias=False):
        super(OverlapPatchEmbed, self).__init__()

        self.proj = nn.Conv2d(in_c, embed_dim, kernel_size=3, stride=1, padding=1, bias=bias)
        # self.proj_dw = nn.Conv2d(in_c, in_c, kernel_size=3, stride=1, padding=1, groups=in_c, bias=bias)
        # self.proj_pw = nn.Conv2d(in_c, embed_dim, kernel_size=1, stride=1, padding=0, bias=bias)
        # self.bn = nn.BatchNorm2d(embed_dim)
        # self.act = nn.Hardswish()

    def forward(self, x):
        x = self.proj(x)
        # x = self.proj_dw(x)
        # x = self.proj_pw(x)
        # x = self.bn(x)
        # x = self.act(x)
        return x


##########################################################################
## Resizing modules
class Downsample(nn.Module):
    def __init__(self, input_feat, out_feat):
        super(Downsample, self).__init__()

        self.body = nn.Sequential(  # nn.Conv2d(n_feat, n_feat // 2, kernel_size=3, stride=1, padding=1, bias=False),
            # dw
            nn.Conv2d(input_feat, input_feat, kernel_size=3, stride=1, padding=1, groups=input_feat, bias=False, ),
            # pw-linear
            nn.Conv2d(input_feat, out_feat // 4, 1, 1, 0, bias=False),
            # nn.BatchNorm2d(n_feat // 2),
            # nn.Hardswish(),
            nn.PixelUnshuffle(2))

    def forward(self, x):
        return self.body(x)


class Upsample(nn.Module):
    def __init__(self, input_feat, out_feat):
        super(Upsample, self).__init__()

        self.body = nn.Sequential(  # nn.Conv2d(n_feat, n_feat*2, kernel_size=3, stride=1, padding=1, bias=False),
            # dw
            nn.Conv2d(input_feat, input_feat, kernel_size=3, stride=1, padding=1, groups=input_feat, bias=False, ),
            # pw-linear
            nn.Conv2d(input_feat, out_feat * 4, 1, 1, 0, bias=False),
            # nn.BatchNorm2d(n_feat*2),
            # nn.Hardswish(),
            nn.PixelShuffle(2))

    def forward(self, x):
        return self.body(x)


##########################################################################
##---------- Restormer -----------------------
class MB_TaylorFormer(nn.Module):
    def __init__(self,
                 inp_channels=3,
                 dim=[6, 12, 24, 36],
                 num_blocks=[1, 1, 1, 1],
                 heads=[1, 1, 1, 1],
                 bias=False,
                 dual_pixel_task=True,  ## True for dual-pixel defocus deblurring only. Also set inp_channels=6
                 num_path=[1, 1, 1, 1],
                 qk_norm=1,
                 offset_clamp=(-1, 1)
                 ):
        super(MB_TaylorFormer, self).__init__()

        self.patch_embed = OverlapPatchEmbed(inp_channels, dim[0])

        self.patch_embed_encoder_level1 = Patch_Embed_stage(dim[0], dim[0], num_path=num_path[0], isPool=False,
                                                            offset_clamp=offset_clamp)
        self.encoder_level1 = MHCA_stage(dim[0], dim[0], num_layers=num_blocks[0], num_heads=heads[0],
                                         ffn_expansion_factor=2.66, num_path=num_path[0],
                                         bias=False, LayerNorm_type='BiasFree', qk_norm=qk_norm)

        self.down1_2 = Downsample(dim[0], dim[1])  ## From Level 1 to Level 2

        self.patch_embed_encoder_level2 = Patch_Embed_stage(dim[1], dim[1], num_path=num_path[1], isPool=False,
                                                            offset_clamp=offset_clamp)
        self.encoder_level2 = MHCA_stage(dim[1], dim[1], num_layers=num_blocks[1], num_heads=heads[1],
                                         ffn_expansion_factor=2.66,
                                         num_path=num_path[1], bias=False, LayerNorm_type='BiasFree', qk_norm=qk_norm)

        self.down2_3 = Downsample(dim[1], dim[2])  ## From Level 2 to Level 3

        self.patch_embed_encoder_level3 = Patch_Embed_stage(dim[2], dim[2], num_path=num_path[2],
                                                            isPool=False, offset_clamp=offset_clamp)
        self.encoder_level3 = MHCA_stage(dim[2], dim[2], num_layers=num_blocks[2], num_heads=heads[2],
                                         ffn_expansion_factor=2.66,
                                         num_path=num_path[2], bias=False, LayerNorm_type='BiasFree', qk_norm=qk_norm)

        self.down3_4 = Downsample(dim[2], dim[3])  ## From Level 3 to Level 4

        self.patch_embed_latent = Patch_Embed_stage(dim[3], dim[3], num_path=num_path[3],
                                                    isPool=False, offset_clamp=offset_clamp)
        self.latent = MHCA_stage(dim[3], dim[3], num_layers=num_blocks[3], num_heads=heads[3],
                                 ffn_expansion_factor=2.66, num_path=num_path[3], bias=False,
                                 LayerNorm_type='BiasFree', qk_norm=qk_norm)

        self.up4_3 = Upsample(int(dim[3]), dim[2])  ## From Level 4 to Level 3
        self.reduce_chan_level3 = nn.Sequential(
            nn.Conv2d(dim[2] * 2, dim[2], 1, 1, 0, bias=bias),
            # nn.BatchNorm2d(dim * 2**2),
            # nn.Hardswish(),
        )
        self.patch_embed_decoder_level3 = Patch_Embed_stage(dim[2], dim[2], num_path=num_path[2],
                                                            isPool=False, offset_clamp=offset_clamp)
        self.decoder_level3 = MHCA_stage(dim[2], dim[2], num_layers=num_blocks[2], num_heads=heads[2],
                                         ffn_expansion_factor=2.66, num_path=num_path[2], bias=False,
                                         LayerNorm_type='BiasFree', qk_norm=qk_norm)

        self.up3_2 = Upsample(int(dim[2]), dim[1])  ## From Level 3 to Level 2
        self.reduce_chan_level2 = nn.Sequential(
            nn.Conv2d(dim[1] * 2, dim[1], 1, 1, 0, bias=bias),
            # nn.BatchNorm2d(dim * 2),
            # nn.Hardswish(),
        )
        self.patch_embed_decoder_level2 = Patch_Embed_stage(dim[1], dim[1], num_path=num_path[1],
                                                            isPool=False, offset_clamp=offset_clamp)
        self.decoder_level2 = MHCA_stage(dim[1], dim[1], num_layers=num_blocks[1], num_heads=heads[1],
                                         ffn_expansion_factor=2.66, num_path=num_path[1], bias=False,
                                         LayerNorm_type='BiasFree', qk_norm=qk_norm)

        self.up2_1 = Upsample(int(dim[1]), dim[0])  ## From Level 2 to Level 1 (NO 1x1 conv to reduce channels)

        self.patch_embed_decoder_level1 = Patch_Embed_stage(dim[1], dim[1], num_path=num_path[0],
                                                            isPool=False, offset_clamp=offset_clamp)
        self.decoder_level1 = MHCA_stage(dim[1], dim[1], num_layers=num_blocks[0], num_heads=heads[0],
                                         ffn_expansion_factor=2.66, num_path=num_path[0], bias=False,
                                         LayerNorm_type='BiasFree', qk_norm=qk_norm)

        self.patch_embed_refinement = Patch_Embed_stage(dim[1], dim[1], num_path=num_path[0],
                                                        isPool=False, offset_clamp=offset_clamp)
        self.refinement = MHCA_stage(dim[1], dim[1], num_layers=num_blocks[0], num_heads=heads[0],
                                     ffn_expansion_factor=2.66, num_path=num_path[0], bias=False,
                                     LayerNorm_type='BiasFree', qk_norm=qk_norm)

        #### For Dual-Pixel Defocus Deblurring Task ####
        self.dual_pixel_task = dual_pixel_task
        if self.dual_pixel_task:
            self.skip_conv = nn.Conv2d(dim[0], dim[1], kernel_size=1, bias=bias)
        ###########################

        # self.output = nn.Conv2d(dim*2**1, 3, kernel_size=3, stride=1, padding=1, bias=False)
        self.output = nn.Sequential(  # nn.Conv2d(n_feat, n_feat*2, kernel_size=3, stride=1, padding=1, bias=False),
            # nn.BatchNorm2d(dim*2),
            # nn.Hardswish(),
            nn.Conv2d(dim[1], 3, kernel_size=3, stride=1, padding=1, bias=False, ),
        )

    def forward(self, inp_img):
        inp_enc_level1 = self.patch_embed(inp_img)
        inp_enc_level1_list = self.patch_embed_encoder_level1(inp_enc_level1)
        out_enc_level1 = self.encoder_level1(inp_enc_level1_list) + inp_enc_level1
        # out_enc_level1 = self.encoder_level1(inp_enc_level1_list)

        inp_enc_level2 = self.down1_2(out_enc_level1)
        inp_enc_level2_list = self.patch_embed_encoder_level2(inp_enc_level2)
        out_enc_level2 = self.encoder_level2(inp_enc_level2_list) + inp_enc_level2

        inp_enc_level3 = self.down2_3(out_enc_level2)
        inp_enc_level3_list = self.patch_embed_encoder_level3(inp_enc_level3)
        out_enc_level3 = self.encoder_level3(inp_enc_level3_list) + inp_enc_level3

        inp_enc_level4 = self.down3_4(out_enc_level3)
        inp_latent = self.patch_embed_latent(inp_enc_level4)
        latent = self.latent(inp_latent) + inp_enc_level4

        inp_dec_level3 = self.up4_3(latent)
        inp_dec_level3 = torch.cat([inp_dec_level3, out_enc_level3], 1)
        inp_dec_level3 = self.reduce_chan_level3(inp_dec_level3)
        inp_dec_level3_list = self.patch_embed_decoder_level3(inp_dec_level3)
        out_dec_level3 = self.decoder_level3(inp_dec_level3_list) + inp_dec_level3

        inp_dec_level2 = self.up3_2(out_dec_level3)
        inp_dec_level2 = torch.cat([inp_dec_level2, out_enc_level2], 1)
        inp_dec_level2 = self.reduce_chan_level2(inp_dec_level2)
        inp_dec_level2_list = self.patch_embed_decoder_level2(inp_dec_level2)
        out_dec_level2 = self.decoder_level2(inp_dec_level2_list) + inp_dec_level2

        inp_dec_level1 = self.up2_1(out_dec_level2)
        inp_dec_level1 = torch.cat([inp_dec_level1, out_enc_level1], 1)
        inp_dec_level1_list = self.patch_embed_decoder_level1(inp_dec_level1)
        out_dec_level1 = self.decoder_level1(inp_dec_level1_list) + inp_dec_level1

        inp_latent_list = self.patch_embed_refinement(out_dec_level1)
        out_dec_level1 = self.refinement(inp_latent_list) + out_dec_level1
        # nn.Hardswish()

        #### For Dual-Pixel Defocus Deblurring Task ####
        if self.dual_pixel_task:
            out_dec_level1 = out_dec_level1 + self.skip_conv(inp_enc_level1)
            out_dec_level1 = self.output(out_dec_level1)
        ###########################
        else:
            out_dec_level1 = self.output(out_dec_level1) + inp_img

        return out_dec_level1


def count_param(model):
    param_count = 0
    for param in model.parameters():
        param_count += param.view(-1).size()[0]
    return param_count


if __name__ == "__main__":
    from thop import profile

    model = MB_TaylorFormer()
    model.eval()
    print("params", count_param(model))
    inputs = torch.randn(1, 3, 640, 640)
    output = model(inputs)
    print(output.size())


4. How to Use the Code

4.1 Modification 1

The first step, as always, is to create the files: locate the ultralytics/nn/modules folder and create a directory named 'Addmodules' under it (if you are using the files from my group, it already exists and there is no need to create it). Then create a new .py file inside it and paste the core code from Section 3 into it.
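After this step the new package looks roughly like the layout below (the file name 'MBTaylorFormer.py' is only an example; any name works as long as the import in 4.2 matches):

Addmodules/
├── __init__.py          # created in step 4.2
└── MBTaylorFormer.py    # paste the core code from Section 3 here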


4.2 Modification 2

Step 2: in that directory, create a new .py file named '__init__.py' (if you are using the files from my group, it already exists and there is no need to create it), then import our module inside it as shown below.
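Since the screenshot is not reproduced here, the relevant content of '__init__.py' boils down to a single import line (the file name 'MBTaylorFormer.py' is my example from step 4.1; use whatever name you chose):

# Addmodules/__init__.py
from .MBTaylorFormer import MB_TaylorFormer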


4.3 Modification 3

Step 3: open the file 'ultralytics/nn/tasks.py' and import and register our module there (if you are using the files from my group, it is already imported, so you can skip straight to step 4).
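For reference, the import added to 'ultralytics/nn/tasks.py' is typically one line next to the existing module imports (adjust the package path to wherever your Addmodules folder actually sits):

# near the top of ultralytics/nn/tasks.py
from .Addmodules import MB_TaylorFormer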

From today on, all future tutorials will follow this same format, because I assume everyone is modifying the files from my group!



4.4 Modification 4

Register the module inside parse_model following my additions, as sketched below.
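Because the group files are not shown here, the snippet below is only an illustrative sketch of the usual pattern; MB_TaylorFormer keeps the input resolution and returns a 3-channel image, so the only thing parse_model needs to record is that the output channel count is 3:

# inside parse_model() in ultralytics/nn/tasks.py (sketch; your registration may differ)
        elif m is MB_TaylorFormer:
            c2 = 3  # the dehazing module outputs a 3-channel, full-resolution image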


Disable mixed-precision validation (may be required)!

Open 'ultralytics/engine/validator.py', find 'class BaseValidator:', and inside its '__call__' method, directly below the line "self.args.half = self.device.type != 'cpu'  # force FP16 val during training", add "self.args.half = False".
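Concretely, the change looks like this:

# inside BaseValidator.__call__ in ultralytics/engine/validator.py
self.args.half = self.device.type != 'cpu'  # force FP16 val during training
self.args.half = False  # added line: always validate in FP32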


Printing the computational cost (GFLOPs)!

The GFLOPs computation fails and nothing is printed, so one extra change is needed: open 'ultralytics/utils/torch_utils.py', find the get_flops function, and replace it entirely with the code I give below.

def get_flops(model, imgsz=640):
    """Return a YOLO model's FLOPs."""
    if not thop:
        return 0.0  # if not installed return 0.0 GFLOPs

    try:
        model = de_parallel(model)
        p = next(model.parameters())
        if not isinstance(imgsz, list):
            imgsz = [imgsz, imgsz]  # expand if int/float
        try:
            # Use stride size for input tensor
            stride = 640
            im = torch.empty((1, 3, stride, stride), device=p.device)  # input image in BCHW format
            flops = thop.profile(deepcopy(model), inputs=[im], verbose=False)[0] / 1e9 * 2  # stride GFLOPs
            return flops * imgsz[0] / stride * imgsz[1] / stride  # imgsz GFLOPs
        except Exception:
            # Use actual image size for input tensor (i.e. required for RTDETR models)
            im = torch.empty((1, p.shape[1], *imgsz), device=p.device)  # input image in BCHW format
            return thop.profile(deepcopy(model), inputs=[im], verbose=False)[0] / 1e9 * 2  # imgsz GFLOPs
    except Exception:
        return 0.0

That completes the modifications; you can now copy the yaml file below and run it.


5. YAML File and Training Log

5.1 YAML file

Training info: YOLO11-MBTaylorFormer summary: 671 layers, 2,657,469 parameters, 2,657,453 gradients, 16.3 GFLOPs

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, MB_TaylorFormer, []] # 0-P1/2
  - [-1, 1, Conv, [64, 3, 2]] # 1-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 2-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 4-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 6-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 8-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 10
  - [-1, 2, C2PSA, [1024]] # 11

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 7], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 14

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 5], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 11], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)

  - [[17, 20, 23], 1, Detect, [nc]] # Detect(P3, P4, P5)


5.2 Training Code

Create a .py file, paste in the code below, set your own file paths, and run it.

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('path/to/yolo11-MBTaylorFormer.yaml')  # point this at the yaml from Section 5.1 (file name is up to you)
    # model.load('yolov8n.pt')  # loading pretrain weights
    model.train(data=r'path/to/your/dataset.yaml',  # replace with the path to your dataset yaml
                # for other tasks, open 'ultralytics/cfg/default.yaml' and change task to detect, segment, classify or pose
                cache=False,
                imgsz=640,
                epochs=150,
                single_cls=False,  # whether this is single-class detection
                batch=4,
                close_mosaic=10,
                workers=0,
                device='0',
                optimizer='SGD',  # using SGD
                # resume='',  # to resume training, set this to the path of last.pt
                amp=False,  # disable AMP if the training loss becomes NaN
                project='runs/train',
                name='exp',
                )


5.3 Training Screenshots


6. Summary

That wraps up the main content of this article. Here I recommend my YOLOv11 effective-improvement column: it is newly opened with an average quality score of 98. Going forward I will reproduce papers from the latest top conferences and also keep adding to some of the older improvement mechanisms. If this article helped you, subscribe to the column and follow the upcoming updates.