
YOLOv5 Improvement Series (19): Replacing the Backbone with Swin Transformer V1 (a ViT Model with Fewer Parameters)

🚀 1. Introduction to Swin Transformer V1

This paper won the ICCV 2021 best paper award and swept the leaderboards of the major CV tasks, so it is well worth reading.



1.1 Overview

Swin Transformer V1 is, after ViT, a landmark of Transformers in computer vision. It outperforms backbones such as DeiT, ViT, and EfficientNet, and has taken over from classical CNN architectures as a general-purpose backbone for computer vision.

It builds on the ideas of ViT but introduces a shifted-window mechanism, which lets the model learn information across windows, and adds downsampling layers, which allow it to process high-resolution images with less computation while attending to both local and global information.

Main improvements:

(1) Self-attention is computed within local windows.

(2) Priors such as hierarchy, locality, and translation invariance are built into the Transformer architecture.

(3) The key contribution is the shifted-window scheme (W-MSA and SW-MSA), which fixes the problem that purely window-based attention ignores correlations between neighbouring windows.

(4) Cyclic shift together with an attention mask keeps the amount of computation unchanged after shifting, while masking out attention between unrelated regions.

(5) A relative position bias B is added inside the window attention (see the formula right after this list).
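For reference, the bias enters the window attention exactly as in Eq. (4) of the paper, where $Q, K, V \in \mathbb{R}^{M^2 \times d}$ are the query, key, and value of one window ($M^2$ tokens, head dimension $d$) and $B$ is looked up from a small learnable table of size $(2M-1) \times (2M-1)$:

$$
\mathrm{Attention}(Q, K, V) = \mathrm{SoftMax}\!\left(QK^{T}/\sqrt{d} + B\right)V
$$

In the code added in Section 2, this table is the relative_position_bias_table parameter of the WindowAttention class.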


1.2 Network Architecture

Overall architecture

  • (a) The architecture of Swin Transformer (Swin-T)
  • (b) Two successive Swin Transformer blocks
  • W-MSA: multi-head self-attention with the regular window configuration
  • SW-MSA: multi-head self-attention with the shifted window configuration

Swin Transformer Block

Swin Transformer is built by replacing the standard multi-head self-attention (MSA) module in a Transformer block with a shifted-window-based module, while keeping the other layers unchanged.

As shown above, a Swin Transformer block consists of a shifted-window-based MSA module followed by a two-layer MLP with a GELU nonlinearity in between. A LayerNorm (LN) layer is applied before each MSA module and each MLP, and a residual connection is applied after each module.

Two consecutive Swin Transformer blocks are computed as follows.
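Reproduced from Eq. (3) of the paper, where $\hat{z}^{l}$ and $z^{l}$ denote the output of the (S)W-MSA module and of the MLP of block $l$, respectively:

$$
\begin{aligned}
\hat{z}^{l} &= \text{W-MSA}\big(\text{LN}(z^{l-1})\big) + z^{l-1},\\
z^{l} &= \text{MLP}\big(\text{LN}(\hat{z}^{l})\big) + \hat{z}^{l},\\
\hat{z}^{l+1} &= \text{SW-MSA}\big(\text{LN}(z^{l})\big) + z^{l},\\
z^{l+1} &= \text{MLP}\big(\text{LN}(\hat{z}^{l+1})\big) + \hat{z}^{l+1}.
\end{aligned}
$$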

Here W-MSA denotes window-based multi-head self-attention and SW-MSA denotes shifted-window multi-head self-attention.

The former keeps the computational cost manageable by restricting attention to fixed local windows, while the latter restores the information flow between neighbouring windows.
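The cost saving is quantified in Eqs. (1)-(2) of the paper for an $h \times w$ feature map with $C$ channels and window size $M$: global MSA is quadratic in the number of tokens $hw$, whereas W-MSA is linear in it.

$$
\Omega(\text{MSA}) = 4hwC^{2} + 2(hw)^{2}C, \qquad
\Omega(\text{W-MSA}) = 4hwC^{2} + 2M^{2}hwC
$$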


1.3 Experiments

(1) Image classification on ImageNet-1K

(2) Object detection on COCO

(3) Semantic segmentation on ADE20K

(4) Ablation studies

Table 4 shows the ablation experiments on the classification, detection, and segmentation tasks.

Table 5 compares different attention computation methods and their real-world speed.

Table 6 compares backbones built with different self-attention implementations.


🚀 2. How to Make the Replacement

Step ①: Add the Swin Transformer modules to common.py

Copy and paste the following code at the end of the common.py file:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.checkpoint as checkpoint
import numpy as np
from typing import Optional


def drop_path_f(x, drop_prob: float = 0., training: bool = False):
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output


class DropPath(nn.Module):
    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path_f(x, self.drop_prob, self.training)


def window_partition(x, window_size: int):
    """
    Partition the feature map into non-overlapping windows of size window_size.
    Args:
        x: (B, H, W, C)
        window_size (int): window size (M)
    Returns:
        windows: (num_windows*B, window_size, window_size, C)
    """
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    # permute: [B, H//Mh, Mh, W//Mw, Mw, C] -> [B, H//Mh, W//Mw, Mh, Mw, C]
    # view: [B, H//Mh, W//Mw, Mh, Mw, C] -> [B*num_windows, Mh, Mw, C]
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
    return windows


def window_reverse(windows, window_size: int, H: int, W: int):
    """
    Merge the windows back into a feature map.
    Args:
        windows: (num_windows*B, window_size, window_size, C)
        window_size (int): window size (M)
        H (int): Height of image
        W (int): Width of image
    Returns:
        x: (B, H, W, C)
    """
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    # view: [B*num_windows, Mh, Mw, C] -> [B, H//Mh, W//Mw, Mh, Mw, C]
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
    # permute: [B, H//Mh, W//Mw, Mh, Mw, C] -> [B, H//Mh, Mh, W//Mw, Mw, C]
    # view: [B, H//Mh, Mh, W//Mw, Mw, C] -> [B, H, W, C]
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
    return x


class Mlp(nn.Module):
    """ MLP as used in Vision Transformer, MLP-Mixer and related networks
    """
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.drop1 = nn.Dropout(drop)
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop2 = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop1(x)
        x = self.fc2(x)
        x = self.drop2(x)
        return x


class WindowAttention(nn.Module):
    r""" Window based multi-head self attention (W-MSA) module with relative position bias.
    It supports both shifted and non-shifted windows.
    Args:
        dim (int): Number of input channels.
        window_size (tuple[int]): The height and width of the window.
        num_heads (int): Number of attention heads.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
        proj_drop (float, optional): Dropout ratio of output. Default: 0.0
    """
    def __init__(self, dim, window_size, num_heads, qkv_bias=True, attn_drop=0., proj_drop=0.):
        super().__init__()
        self.dim = dim
        self.window_size = window_size  # [Mh, Mw]
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = head_dim ** -0.5

        # define a parameter table of relative position bias
        self.relative_position_bias_table = nn.Parameter(
            torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads))  # [2*Mh-1 * 2*Mw-1, nH]

        # get pair-wise relative position index for each token inside the window
        coords_h = torch.arange(self.window_size[0])
        coords_w = torch.arange(self.window_size[1])
        coords = torch.stack(torch.meshgrid([coords_h, coords_w], indexing="ij"))  # [2, Mh, Mw]
        coords_flatten = torch.flatten(coords, 1)  # [2, Mh*Mw]
        # [2, Mh*Mw, 1] - [2, 1, Mh*Mw]
        relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :]  # [2, Mh*Mw, Mh*Mw]
        relative_coords = relative_coords.permute(1, 2, 0).contiguous()  # [Mh*Mw, Mh*Mw, 2]
        relative_coords[:, :, 0] += self.window_size[0] - 1  # shift to start from 0
        relative_coords[:, :, 1] += self.window_size[1] - 1
        relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
        relative_position_index = relative_coords.sum(-1)  # [Mh*Mw, Mh*Mw]
        self.register_buffer("relative_position_index", relative_position_index)

        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

        nn.init.trunc_normal_(self.relative_position_bias_table, std=.02)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x, mask: Optional[torch.Tensor] = None):
        """
        Args:
            x: input features with shape of (num_windows*B, Mh*Mw, C)
            mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
        """
        # [batch_size*num_windows, Mh*Mw, total_embed_dim]
        B_, N, C = x.shape
        # qkv(): -> [batch_size*num_windows, Mh*Mw, 3 * total_embed_dim]
        # reshape: -> [batch_size*num_windows, Mh*Mw, 3, num_heads, embed_dim_per_head]
        # permute: -> [3, batch_size*num_windows, num_heads, Mh*Mw, embed_dim_per_head]
        qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4).contiguous()
        # [batch_size*num_windows, num_heads, Mh*Mw, embed_dim_per_head]
        q, k, v = qkv.unbind(0)  # make torchscript happy (cannot use tensor as tuple)

        # transpose: -> [batch_size*num_windows, num_heads, embed_dim_per_head, Mh*Mw]
        # @: multiply -> [batch_size*num_windows, num_heads, Mh*Mw, Mh*Mw]
        q = q * self.scale
        attn = (q @ k.transpose(-2, -1))
        # relative_position_bias_table.view: [Mh*Mw*Mh*Mw,nH] -> [Mh*Mw,Mh*Mw,nH]
        relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
            self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1)
        relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous()  # [nH, Mh*Mw, Mh*Mw]
        attn = attn + relative_position_bias.unsqueeze(0)

        if mask is not None:
            # mask: [nW, Mh*Mw, Mh*Mw]
            nW = mask.shape[0]  # num_windows
            # attn.view: [batch_size, num_windows, num_heads, Mh*Mw, Mh*Mw]
            # mask.unsqueeze: [1, nW, 1, Mh*Mw, Mh*Mw]
            attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
            attn = attn.view(-1, self.num_heads, N, N)
            attn = self.softmax(attn)
        else:
            attn = self.softmax(attn)

        attn = self.attn_drop(attn)

        # @: multiply -> [batch_size*num_windows, num_heads, Mh*Mw, embed_dim_per_head]
        # transpose: -> [batch_size*num_windows, Mh*Mw, num_heads, embed_dim_per_head]
        # reshape: -> [batch_size*num_windows, Mh*Mw, total_embed_dim]
        # x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
        x = (attn.to(v.dtype) @ v).transpose(1, 2).reshape(B_, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x


class SwinTransformerBlock(nn.Module):
    r""" Swin Transformer Block.
    Args:
        dim (int): Number of input channels.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        shift_size (int): Shift size for SW-MSA.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
    """
    def __init__(self, dim, num_heads, window_size=7, shift_size=0,
                 mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., drop_path=0.,
                 act_layer=nn.GELU, norm_layer=nn.LayerNorm):
        super().__init__()
        self.dim = dim
        self.num_heads = num_heads
        self.window_size = window_size
        self.shift_size = shift_size
        self.mlp_ratio = mlp_ratio
        assert 0 <= self.shift_size < self.window_size, "shift_size must be in 0-window_size"

        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim, window_size=(self.window_size, self.window_size), num_heads=num_heads, qkv_bias=qkv_bias,
            attn_drop=attn_drop, proj_drop=drop)
        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)

    def forward(self, x, attn_mask):
        H, W = self.H, self.W
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"

        shortcut = x
        x = self.norm1(x)
        x = x.view(B, H, W, C)

        # pad the feature map to an integer multiple of the window size
        pad_l = pad_t = 0
        pad_r = (self.window_size - W % self.window_size) % self.window_size
        pad_b = (self.window_size - H % self.window_size) % self.window_size
        x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
        _, Hp, Wp, _ = x.shape

        # cyclic shift
        if self.shift_size > 0:
            shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
        else:
            shifted_x = x
            attn_mask = None

        # partition windows
        x_windows = window_partition(shifted_x, self.window_size)  # [nW*B, Mh, Mw, C]
        x_windows = x_windows.view(-1, self.window_size * self.window_size, C)  # [nW*B, Mh*Mw, C]

        # W-MSA/SW-MSA
        attn_windows = self.attn(x_windows, mask=attn_mask)  # [nW*B, Mh*Mw, C]

        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)  # [nW*B, Mh, Mw, C]
        shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp)  # [B, H', W', C]

        # reverse cyclic shift
        if self.shift_size > 0:
            x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
        else:
            x = shifted_x

        if pad_r > 0 or pad_b > 0:
            # remove the padding added earlier
            x = x[:, :H, :W, :].contiguous()

        x = x.view(B, H * W, C)

        # FFN
        x = shortcut + self.drop_path(x)
        x = x + self.drop_path(self.mlp(self.norm2(x)))
        return x


class SwinStage(nn.Module):
    """
    A basic Swin Transformer layer for one stage.
    Args:
        dim (int): Number of input channels.
        depth (int): Number of blocks.
        num_heads (int): Number of attention heads.
        window_size (int): Local window size.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
    """
    def __init__(self, dim, c2, depth, num_heads, window_size,
                 mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0.,
                 drop_path=0., norm_layer=nn.LayerNorm, use_checkpoint=False):
        super().__init__()
        assert dim == c2, r"no. in/out channel should be same"
        self.dim = dim
        self.depth = depth
        self.window_size = window_size
        self.use_checkpoint = use_checkpoint
        self.shift_size = window_size // 2

        # build blocks
        self.blocks = nn.ModuleList([
            SwinTransformerBlock(
                dim=dim,
                num_heads=num_heads,
                window_size=window_size,
                shift_size=0 if (i % 2 == 0) else self.shift_size,
                mlp_ratio=mlp_ratio,
                qkv_bias=qkv_bias,
                drop=drop,
                attn_drop=attn_drop,
                drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
                norm_layer=norm_layer)
            for i in range(depth)])

    def create_mask(self, x, H, W):
        # calculate attention mask for SW-MSA
        # make sure Hp and Wp are integer multiples of window_size
        Hp = int(np.ceil(H / self.window_size)) * self.window_size
        Wp = int(np.ceil(W / self.window_size)) * self.window_size
        # same channel layout as the feature map, which makes the later window_partition easier
        img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device)  # [1, Hp, Wp, 1]
        h_slices = (slice(0, -self.window_size),
                    slice(-self.window_size, -self.shift_size),
                    slice(-self.shift_size, None))
        w_slices = (slice(0, -self.window_size),
                    slice(-self.window_size, -self.shift_size),
                    slice(-self.shift_size, None))
        cnt = 0
        for h in h_slices:
            for w in w_slices:
                img_mask[:, h, w, :] = cnt
                cnt += 1

        mask_windows = window_partition(img_mask, self.window_size)  # [nW, Mh, Mw, 1]
        mask_windows = mask_windows.view(-1, self.window_size * self.window_size)  # [nW, Mh*Mw]
        attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)  # [nW, 1, Mh*Mw] - [nW, Mh*Mw, 1]
        # [nW, Mh*Mw, Mh*Mw]
        attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
        return attn_mask

    def forward(self, x):
        B, C, H, W = x.shape
        x = x.permute(0, 2, 3, 1).contiguous().view(B, H * W, C)
        attn_mask = self.create_mask(x, H, W)  # [nW, Mh*Mw, Mh*Mw]
        for blk in self.blocks:
            blk.H, blk.W = H, W
            if not torch.jit.is_scripting() and self.use_checkpoint:
                x = checkpoint.checkpoint(blk, x, attn_mask)
            else:
                x = blk(x, attn_mask)
        x = x.view(B, H, W, C)
        x = x.permute(0, 3, 1, 2).contiguous()
        return x


class PatchEmbed(nn.Module):
    """
    2D Image to Patch Embedding
    """
    def __init__(self, in_c=3, embed_dim=96, patch_size=4, norm_layer=None):
        super().__init__()
        patch_size = (patch_size, patch_size)
        self.patch_size = patch_size
        self.in_chans = in_c
        self.embed_dim = embed_dim
        self.proj = nn.Conv2d(in_c, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()

    def forward(self, x):
        _, _, H, W = x.shape
        # padding
        # pad if H and W of the input image are not integer multiples of patch_size
        pad_input = (H % self.patch_size[0] != 0) or (W % self.patch_size[1] != 0)
        if pad_input:
            # to pad the last 3 dimensions,
            # (W_left, W_right, H_top, H_bottom, C_front, C_back)
            x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1],
                          0, self.patch_size[0] - H % self.patch_size[0],
                          0, 0))

        # downsample by a factor of patch_size
        x = self.proj(x)
        B, C, H, W = x.shape
        # flatten: [B, C, H, W] -> [B, C, HW]
        # transpose: [B, C, HW] -> [B, HW, C]
        x = x.flatten(2).transpose(1, 2)
        x = self.norm(x)
        # view: [B, HW, C] -> [B, H, W, C]
        # permute: [B, H, W, C] -> [B, C, H, W]
        x = x.view(B, H, W, C)
        x = x.permute(0, 3, 1, 2).contiguous()
        return x


class PatchMerging(nn.Module):
    r""" Patch Merging Layer.
    Args:
        dim (int): Number of input channels.
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
    """
    def __init__(self, dim, c2, norm_layer=nn.LayerNorm):
        super().__init__()
        assert c2 == (2 * dim), r"no. out channel should be 2 * no. in channel "
        self.dim = dim
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
        self.norm = norm_layer(4 * dim)

    def forward(self, x):
        """
        x: B, C, H, W
        """
        B, C, H, W = x.shape
        # assert L == H * W, "input feature has wrong size"
        x = x.permute(0, 2, 3, 1).contiguous()
        # x = x.view(B, H*W, C)

        # padding
        # pad if H and W of the input feature map are not multiples of 2
        pad_input = (H % 2 == 1) or (W % 2 == 1)
        if pad_input:
            # to pad the last 3 dimensions, starting from the last dimension and moving forward.
            # (C_front, C_back, W_left, W_right, H_top, H_bottom)
            # note the tensor layout here is [B, H, W, C], so this differs slightly from the official docs
            x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))

        x0 = x[:, 0::2, 0::2, :]  # [B, H/2, W/2, C]
        x1 = x[:, 1::2, 0::2, :]  # [B, H/2, W/2, C]
        x2 = x[:, 0::2, 1::2, :]  # [B, H/2, W/2, C]
        x3 = x[:, 1::2, 1::2, :]  # [B, H/2, W/2, C]
        x = torch.cat([x0, x1, x2, x3], -1)  # [B, H/2, W/2, 4*C]
        x = x.view(B, -1, 4 * C)  # [B, H/2*W/2, 4*C]

        x = self.norm(x)
        x = self.reduction(x)  # [B, H/2*W/2, 2*C]
        x = x.view(B, int(H / 2), int(W / 2), C * 2)
        x = x.permute(0, 3, 1, 2).contiguous()
        return x
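Before wiring the modules into YOLOv5, you can optionally sanity-check them in isolation. The shapes below are arbitrary but consistent with the assertions in the code (dim == c2 for SwinStage, c2 == 2*dim for PatchMerging); run this, for example, from a Python shell after importing common.py, or temporarily at the bottom of the file.

# optional standalone check of the new modules on dummy tensors
import torch

stage = SwinStage(dim=256, c2=256, depth=2, num_heads=8, window_size=4)
y = stage(torch.zeros(1, 256, 20, 20))
print(y.shape)  # SwinStage keeps spatial size and channels: torch.Size([1, 256, 20, 20])

merge = PatchMerging(dim=256, c2=512)
z = merge(torch.zeros(1, 256, 20, 20))
print(z.shape)  # PatchMerging halves H/W and doubles C: torch.Size([1, 512, 10, 10])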

Step ②: Register the new classes in the parse_model function of yolo.py

First, find the line in yolo.py's parse_model function that lists the supported modules,

and add the three modules PatchMerging, PatchEmbed, and SwinStage to it, as in the sketch below.
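For reference, in a v6.x-style yolo.py that check looks roughly like the following. This is only a sketch: the exact tuple of module names differs between YOLOv5 releases, so simply append the three new classes to whatever tuple your version contains.

# models/yolo.py, inside parse_model() -- sketch only; the module tuple varies by YOLOv5 version
if m in (Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d,
         Focus, CrossConv, BottleneckCSP, C3, C3TR, C3SPP, C3Ghost,
         PatchEmbed, PatchMerging, SwinStage):  # <-- the three Swin modules appended here
    c1, c2 = ch[f], args[0]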


Step ③: Create a custom yaml file

Full yaml file:

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 1  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.25  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  # input [b, 1, 640, 640]
  [[-1, 1, Conv, [64, 6, 2, 2]],   # 0-P1/2   [b, 64, 320, 320]
   [-1, 1, Conv, [128, 3, 2]],     # 1-P2/4   [b, 128, 160, 160]
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],     # 3-P3/8   [b, 256, 80, 80]
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],     # 5-P4/16  [b, 512, 40, 40]
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],    # 7-P5/32  [b, 1024, 20, 20]
   [-1, 3, C3, [1024]],
   [-1, 1, SwinStage, [1024, 2, 8, 4]],  # [outputChannel, blockDepth, numHeads, windowSize]
   [-1, 1, SPPF, [1024, 5]],       # 10
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 14
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 18 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 15], 1, Concat, [1]],  # cat head P4
   [-1, 3, C3, [512, False]],  # 21 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 11], 1, Concat, [1]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 24 (P5/32-large)
   [[18, 21, 24], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]

Step ④: Verify that the modules were added successfully

Run yolo.py.
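If you prefer a scripted check, the snippet below builds the model directly from the new yaml and runs a dummy forward pass. The path models/yolov5s-swin.yaml is only a placeholder for whatever you named the config from Step ③.

# quick sanity check that the modified network builds and runs
import torch
from models.yolo import Model

model = Model('models/yolov5s-swin.yaml', ch=3, nc=1)  # hypothetical yaml path from Step ③
model.eval()
with torch.no_grad():
    preds = model(torch.zeros(1, 3, 640, 640))  # dummy 640x640 image
print('forward pass OK')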

If the model builds without errors, that's it!


Problems encountered during setup

When running on a server I hit the following error:

TypeError: meshgrid() got an unexpected keyword argument 'indexing'

A quick search showed this is caused by the PyTorch version: the indexing keyword argument only exists in newer PyTorch releases, so older versions do not accept it, and without the argument the default behaviour already matches indexing='ij'.

Solution:

Find the WindowAttention(nn.Module) class in common.py

and simply delete indexing="ij".
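If you would rather keep the file compatible with both old and new PyTorch, a version-tolerant variant of that single line is sketched below; older PyTorch has no indexing keyword and already behaves like indexing="ij", so falling back is safe.

# inside WindowAttention.__init__, replacing the original torch.meshgrid line
try:
    coords = torch.stack(torch.meshgrid([coords_h, coords_w], indexing="ij"))  # [2, Mh, Mw]
except TypeError:  # older PyTorch without the `indexing` keyword
    coords = torch.stack(torch.meshgrid([coords_h, coords_w]))  # [2, Mh, Mw]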


PS:

On my own dataset, Swin Transformer V1 does indeed use far fewer parameters, but the accuracy actually dropped by about 0.2.

My model is already finalized, so I did not try to improve on this. If you are interested, feel free to experiment further!