
YOLOv11 Improvements - Conv Series - SAConv (Switchable Atrous Convolution) with a Secondary-Innovation C3k2, Replacing the Traditional Downsampling Conv

1. Introduction

The improvement this post brings you is Switchable Atrous Convolution (SAC), an innovative convolutional-network mechanism designed specifically to enhance feature extraction in object detection and segmentation tasks. The core idea of SAC is to convolve the same input features with different atrous (dilation) rates and fuse the results of these convolutions through a specially designed switch function. This lets the network adapt more flexibly to features at different scales and thus identify and segment objects in an image more accurately. From this post you will learn the basic principle and framework of switchable atrous convolution and how to add it to your own network structure (including the secondary-innovation C3k2 mechanism).

Welcome to subscribe to my column and learn YOLO together!



2. How the SAConv Mechanism Works

Paper (DetectoRS: Detecting Objects with Recursive Feature Pyramid and Switchable Atrous Convolution): official paper link

Code: official code repository


Switchable Atrous Convolution (SAC) is an advanced convolutional mechanism for enhancing feature extraction in object detection and segmentation tasks. The main principles and mechanisms of SAC are as follows:

1. Applying different atrous rates: The core idea of SAC is to convolve the same input features with different atrous rates. Atrous (dilated) convolution enlarges the receptive field by inserting extra spacing (i.e. holes) into the convolution kernel, without increasing the parameter count or the amount of computation. SAC exploits this to capture features at different scales (see the sketch after this list).

2. Use of switch functions: Another key feature of SAC is the switch function used to combine the results of the different atrous-rate convolutions. These switches are spatially dependent, meaning each location of the feature map may have a different switch controlling the SAC output, which makes the network more flexible with respect to feature size and scale.

3. Conversion mechanism: SAC can convert a conventional convolutional layer into an SAC layer. This is achieved by using the same weights for the convolutions at different atrous rates, apart from one trainable difference. The switch itself is implemented with an average pooling layer followed by a 1x1 convolution layer.

4. Structural design: The SAC architecture consists of three main parts: two global context modules placed before and after the SAC component. These modules help the network understand the image content more holistically, allowing the SAC component to work effectively within a broader context.

Summary: Through these innovative designs and mechanisms, SAC improves the network's adaptability and accuracy when handling features of varying scale and complexity, yielding notable performance gains in object detection and segmentation.
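To make point 1 concrete, here is a minimal PyTorch sketch (mine, not from the paper) showing that raising the dilation rate enlarges the receptive field of a 3x3 convolution without changing its parameter count or output size:

import torch
import torch.nn as nn

# Two 3x3 convolutions with the same 9 taps per filter: dilation=3 spreads
# those taps over a 7x7 window, enlarging the receptive field "for free".
conv_r1 = nn.Conv2d(16, 16, kernel_size=3, padding=1, dilation=1)
conv_r3 = nn.Conv2d(16, 16, kernel_size=3, padding=3, dilation=3)

x = torch.rand(1, 16, 64, 64)
print(conv_r1(x).shape, conv_r3(x).shape)  # identical spatial sizes
print(sum(p.numel() for p in conv_r1.parameters()) ==
      sum(p.numel() for p in conv_r3.parameters()))  # True: same parameter count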

From the figure above, the key points are as follows ->

  • Double-observation mechanism: SAC is specifically designed to look at the input features twice, each time with a different atrous rate. That is, the same input features are processed by two differently configured convolution kernels, each configuration corresponding to a specific atrous rate. This captures feature information at different scales and gives a more complete understanding of the input data.

  • Application of the switch function: The outputs obtained at the different atrous rates are then combined through a switch function (see the sketch below). The switches decide how to select or fuse information from the two convolutions to produce the final output features. How the switches behave may depend on properties of the features themselves, such as their spatial position.

Summary: With this "look twice and combine" strategy, SAC handles complex feature patterns effectively, especially under large scale variation. This improves not only the flexibility and adaptability of feature extraction but also the accuracy and efficiency of object detection and segmentation.
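The "look twice and combine" idea fits in a few lines. Below is a simplified functional sketch of the SAC core written by me for illustration (the full module in Section 3 additionally applies weight standardization and the context modules); the name weight_diff mirrors the code there:

import torch
import torch.nn as nn
import torch.nn.functional as F

c = 16
x = torch.rand(1, c, 32, 32)
weight = torch.randn(c, c, 3, 3)              # shared 3x3 weight for both "looks"
weight_diff = torch.zeros(c, c, 3, 3)         # trainable delta for the dilated look
switch_conv = nn.Conv2d(c, 1, kernel_size=1)  # produces one gate per location

# Spatially dependent switch: smooth the input, then predict a gate S(x).
s = switch_conv(F.avg_pool2d(x, kernel_size=5, stride=1, padding=2))
out_s = F.conv2d(x, weight, padding=1, dilation=1)                # first look, r=1
out_l = F.conv2d(x, weight + weight_diff, padding=3, dilation=3)  # second look, r=3
out = s * out_s + (1 - s) * out_l  # per-location fusion of the two looks
print(out.shape)  # torch.Size([1, 16, 32, 32])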

The figure above shows the concrete implementation of Switchable Atrous Convolution (SAC). The key points are:

  1. Converting conventional conv layers to SAC: The authors convert every 3x3 convolutional layer in the ResNet backbone into SAC, which lets the convolution softly switch between different atrous rates.

  2. Weight sharing with a trainable difference: Importantly, although SAC switches between different atrous rates, all of these operations share the same weights, apart from one trainable difference. This design reduces model complexity while preserving flexibility.

  3. Global context modules: The SAC structure also includes two global context modules, which add image-level information to the features. They help the network better understand and process the overall image content, improving the quality and accuracy of feature extraction.

Summary: Through these mechanisms, SAC lets the network switch flexibly between atrous rates, while the global context modules and the weight-sharing strategy effectively strengthen feature extraction and processing. These properties make SAC perform well in object detection and segmentation tasks.
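Because the two branches share one weight tensor and weight_diff starts at zero, a pretrained plain convolution can be dropped into SAC without changing its initial behavior much. A hedged sketch, assuming the SAConv2d class from Section 3 below is in scope:

import torch.nn as nn

pretrained = nn.Conv2d(64, 64, 3, padding=1)   # stands in for a trained layer
sac = SAConv2d(64, 64, 3)                      # class defined in Section 3
sac.weight.data.copy_(pretrained.weight.data)  # one shared weight for both rates
# sac.weight_diff is zero-initialized and the switch bias starts at 1 (gate ~1),
# so at initialization the dilation-1 branch dominates. When loading a plain-conv
# checkpoint via load_state_dict instead, the _load_from_state_dict hook in
# Section 3 recovers weight_gamma/weight_beta so the standardized weight
# reproduces the original convolution.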

Below are some example detection results ->


3. SAConv Code Implementation

import torch
import torch.nn as nn

# autopad and Conv are defined below so this file is self-contained
# (they mirror ultralytics.nn.modules.conv).

__all__ = ['SAConv2d', 'C3k2_SAConv']


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply convolution and activation without batch normalization (fused inference path)."""
        return self.act(self.conv(x))


class ConvAWS2d(nn.Conv2d):
    """Conv2d with Adaptive Weight Standardization (AWS)."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0,
                 dilation=1, groups=1, bias=True):
        super().__init__(in_channels, out_channels, kernel_size, stride=stride,
                         padding=padding, dilation=dilation, groups=groups, bias=bias)
        self.register_buffer('weight_gamma', torch.ones(self.out_channels, 1, 1, 1))
        self.register_buffer('weight_beta', torch.zeros(self.out_channels, 1, 1, 1))

    def _get_weight(self, weight):
        # Standardize the weight per output channel, then rescale with gamma/beta.
        weight_mean = weight.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        weight = weight - weight_mean
        std = torch.sqrt(weight.view(weight.size(0), -1).var(dim=1) + 1e-5).view(-1, 1, 1, 1)
        weight = weight / std
        weight = self.weight_gamma * weight + self.weight_beta
        return weight

    def forward(self, x):
        weight = self._get_weight(self.weight)
        return super()._conv_forward(x, weight, None)

    def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
                              missing_keys, unexpected_keys, error_msgs):
        # When loading a plain-conv checkpoint, recover gamma/beta from the loaded
        # weight's statistics so the standardized weight reproduces the original.
        self.weight_gamma.data.fill_(-1)
        super()._load_from_state_dict(state_dict, prefix, local_metadata, strict,
                                      missing_keys, unexpected_keys, error_msgs)
        if self.weight_gamma.data.mean() > 0:
            return
        weight = self.weight.data
        weight_mean = weight.data.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        self.weight_beta.data.copy_(weight_mean)
        std = torch.sqrt(weight.view(weight.size(0), -1).var(dim=1) + 1e-5).view(-1, 1, 1, 1)
        self.weight_gamma.data.copy_(std)


class SAConv2d(ConvAWS2d):
    """Switchable Atrous Convolution: one shared weight used at dilation d and 3d, fused by a spatial switch."""

    def __init__(self, in_channels, out_channels, kernel_size, s=1, p=None, g=1, d=1, act=True, bias=True):
        super().__init__(in_channels, out_channels, kernel_size, stride=s,
                         padding=autopad(kernel_size, p, d), dilation=d, groups=g, bias=bias)
        # Spatially dependent switch: one gate per location, initialized to 1.
        self.switch = torch.nn.Conv2d(self.in_channels, 1, kernel_size=1, stride=s, bias=True)
        self.switch.weight.data.fill_(0)
        self.switch.bias.data.fill_(1)
        # Trainable difference added to the shared weight for the dilated branch.
        self.weight_diff = torch.nn.Parameter(torch.Tensor(self.weight.size()))
        self.weight_diff.data.zero_()
        # Global context modules placed before and after the SAC component.
        self.pre_context = torch.nn.Conv2d(self.in_channels, self.in_channels, kernel_size=1, bias=True)
        self.pre_context.weight.data.fill_(0)
        self.pre_context.bias.data.fill_(0)
        self.post_context = torch.nn.Conv2d(self.out_channels, self.out_channels, kernel_size=1, bias=True)
        self.post_context.weight.data.fill_(0)
        self.post_context.bias.data.fill_(0)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = Conv.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        # pre-context: add image-level information to the input
        avg_x = torch.nn.functional.adaptive_avg_pool2d(x, output_size=1)
        avg_x = self.pre_context(avg_x)
        avg_x = avg_x.expand_as(x)
        x = x + avg_x
        # switch: 5x5 average pooling followed by the 1x1 switch conv
        avg_x = torch.nn.functional.pad(x, pad=(2, 2, 2, 2), mode="reflect")
        avg_x = torch.nn.functional.avg_pool2d(avg_x, kernel_size=5, stride=1, padding=0)
        switch = self.switch(avg_x)
        # sac: convolve twice with the shared weight, at dilation d and 3d
        weight = self._get_weight(self.weight)
        out_s = super()._conv_forward(x, weight, None)
        ori_p = self.padding
        ori_d = self.dilation
        self.padding = tuple(3 * p for p in self.padding)
        self.dilation = tuple(3 * d for d in self.dilation)
        weight = weight + self.weight_diff
        out_l = super()._conv_forward(x, weight, None)
        out = switch * out_s + (1 - switch) * out_l
        self.padding = ori_p
        self.dilation = ori_d
        # post-context: add image-level information to the output
        avg_x = torch.nn.functional.adaptive_avg_pool2d(out, output_size=1)
        avg_x = self.post_context(avg_x)
        avg_x = avg_x.expand_as(out)
        out = out + avg_x
        return self.act(self.bn(out))


class Bottleneck(nn.Module):
    """Standard bottleneck with its second convolution replaced by SAConv2d."""

    def __init__(self, c1, c2, shortcut=True, g=1, k=(3, 3), e=0.5):
        """Initializes a bottleneck module with given input/output channels, shortcut option, group, kernels, and expansion."""
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, k[0], 1)
        self.cv2 = SAConv2d(c_, c2, k[1], 1, g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        """Apply the bottleneck, with a residual connection when input and output channels match."""
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))


class C2f(nn.Module):
    """Faster Implementation of CSP Bottleneck with 2 convolutions."""

    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):
        """Initializes a CSP bottleneck with 2 convolutions and n Bottleneck blocks for faster processing."""
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))

    def forward(self, x):
        """Forward pass through C2f layer."""
        y = list(self.cv1(x).chunk(2, 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))

    def forward_split(self, x):
        """Forward pass using split() instead of chunk()."""
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))


class C3(nn.Module):
    """CSP Bottleneck with 3 convolutions."""

    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        """Initialize the CSP Bottleneck with given channels, number, shortcut, groups, and expansion values."""
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(2 * c_, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, k=((1, 1), (3, 3)), e=1.0) for _ in range(n)))

    def forward(self, x):
        """Forward pass through the CSP bottleneck with 3 convolutions."""
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))


class C3k(C3):
    """C3k is a CSP bottleneck module with customizable kernel sizes for feature extraction in neural networks."""

    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, k=3):
        """Initializes the C3k module with specified channels, number of layers, and configurations."""
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)  # hidden channels
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, k=(k, k), e=1.0) for _ in range(n)))


class C3k2_SAConv(C2f):
    """Faster Implementation of CSP Bottleneck with 2 convolutions, using SAConv inside its bottlenecks."""

    def __init__(self, c1, c2, n=1, c3k=False, e=0.5, g=1, shortcut=True):
        """Initializes the C3k2 module, a faster CSP Bottleneck with 2 convolutions and optional C3k blocks."""
        super().__init__(c1, c2, n, shortcut, g, e)
        self.m = nn.ModuleList(
            C3k(self.c, self.c, 2, shortcut, g) if c3k else Bottleneck(self.c, self.c, shortcut, g) for _ in range(n)
        )


if __name__ == "__main__":
    # Generate a sample image tensor
    image_size = (1, 64, 240, 240)
    image = torch.rand(*image_size)
    # Model
    model = C3k2_SAConv(64, 64)
    out = model(image)
    print(out.size())


4. Step-by-Step Guide to Adding SAConv

4.1 How to Add SAConv

4.1.1 Modification 1

Step one, as usual, is to create the files. Go to the ultralytics/nn folder and create a directory named 'Addmodules' (if you are using the files from the group, it already exists and does not need to be created). Then create a new .py file inside it and paste the core code above into it.


4.1.2 Modification 2

Step two: create a new .py file in that directory named '__init__.py' (if you are using the files from the group, it already exists and does not need to be created), then import our modules in it, as sketched below.
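For reference, assuming you saved the core code above as SAConv.py (the file name is your choice), the '__init__.py' might contain just:

# ultralytics/nn/Addmodules/__init__.py
from .SAConv import SAConv2d, C3k2_SAConv  # assumes the Section 3 code lives in SAConv.py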


4.1.3 Modification 3

Step three: open the file 'ultralytics/nn/tasks.py' and import and register our modules there (if you are using the files from the group, they are already imported; skip straight to step four).

From today onward, all tutorials will follow this format, since I assume everyone is working from the files shared in the group!!
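For reference, the import added in 'ultralytics/nn/tasks.py' would look something like this (exact placement varies by version):

# Near the other module imports at the top of ultralytics/nn/tasks.py
from ultralytics.nn.Addmodules import SAConv2d, C3k2_SAConv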



4.1.4 Modification 4

Register the modules in parse_model the same way I do, as sketched below.
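A hedged sketch of the parse_model registration (the exact set contents differ across ultralytics versions; the idea is to treat SAConv2d like Conv and C3k2_SAConv like C3k2):

# Inside parse_model in ultralytics/nn/tasks.py: add the new modules to the
# branch that resolves input/output channels, and to the set that receives
# the repeat count n. Set contents abbreviated here for clarity.
if m in {Conv, C3k2, SAConv2d, C3k2_SAConv}:  # ...plus the other built-in modules
    c1, c2 = ch[f], args[0]
    if c2 != nc:
        c2 = make_divisible(min(c2, max_channels) * width, 8)
    args = [c1, c2, *args[1:]]
    if m in {C3k2, C3k2_SAConv}:  # ...plus the other repeat-style modules
        args.insert(2, n)  # number of repeats
        n = 1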


That completes the modifications. You can now copy the yaml files below and run them.


4.2 SAConv yaml Files and Training Screenshots

4.2.1 SAConv yaml File 1

Training info for this version: YOLO11-C3k2-SAConv summary: 342 layers, 2,859,734 parameters, 2,859,718 gradients, 6.0 GFLOPs

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2_SAConv, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2_SAConv, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2_SAConv, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2_SAConv, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2_SAConv, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2_SAConv, [256, False]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2_SAConv, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2_SAConv, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)
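To sanity-check a yaml before training, you can build the model and print its summary; the file name below is whatever you saved the yaml above as:

from ultralytics import YOLO

model = YOLO('yolo11-C3k2-SAConv.yaml')  # hypothetical name for the yaml above
model.info()  # should report the layer/parameter counts quoted above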


4.2.2 SAConv yaml File 2

Training info for this version: YOLO11-SAConv summary: 332 layers, 3,430,401 parameters, 3,430,385 gradients, 4.8 GFLOPs

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, SAConv2d, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, SAConv2d, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, SAConv2d, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, SAConv2d, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, SAConv2d, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, SAConv2d, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)


4.3 SAConv Training Process Screenshots


5. Training Code

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('path/to/yolo11-C3k2-SAConv.yaml')  # point this at the yaml you saved from Section 4.2
    # model.load('yolo11n.pt') # loading pretrain weights
    model.train(data=r'replace with the path to your dataset yaml file',
                # For other tasks, find 'task' in 'ultralytics/cfg/default.yaml' and change it to detect, segment, classify or pose
                cache=False,
                imgsz=640,
                epochs=150,
                single_cls=False,  # whether this is single-class detection
                batch=4,
                close_mosaic=10,
                workers=0,
                device='0',
                optimizer='SGD',  # using SGD
                # resume='', # set to the last.pt path if you want to resume training
                amp=False,  # disable AMP if the training loss becomes NaN
                project='runs/train',
                name='exp',
                )

6. Conclusion

That concludes the main content of this post. I recommend my YOLOv11 effective-improvement column, which is newly opened with an average quality score of 98. Going forward I will reproduce papers from the latest top conferences and also supplement some older improvement mechanisms. The column is currently free to read (for now, so follow early to avoid missing out~). If this post helped you, please subscribe to the column and follow for more updates~