
YOLOv11 Improvement (Conv Series): A Secondary Innovation on C3k2 with the MLLABlock from 2024's Latest Mamba Work (Exclusive First Release)

I. Introduction

The improvement in this post uses the MLLABlock from the Mamba line of work as a secondary innovation on C3k2 to improve the YOLOv11 model. The principle of MLLA (Mamba-Like Linear Attention) is to fold some of Mamba's core designs into a linear attention mechanism to lift model performance. Specifically, MLLA integrates the two key factors behind Mamba's success, the forget gate and the block design, and then replaces the forget gate with rotary positional encoding (RoPE), which supplies the needed positional information while preserving parallel computation and fast inference. This makes MLLA more effective on non-autoregressive vision tasks. This article is my own exclusive write-up.


II. The Principle

Official paper: Demystify Mamba in Vision: A Linear Attention Perspective (linked in the original post)

Official code: the authors' open-source release accompanying the paper (linked in the original post)


In this paper, MLLA (Mamba-Like Linear Attention) works by folding some of Mamba's core designs into a linear attention mechanism to improve model performance. Specifically, MLLA integrates two key factors from Mamba, the forget gate and the block design, which the authors identify as the main reasons for Mamba's success.

A closer look at the MLLA design:

  1. Forget gate

    • The forget gate provides local bias and positional information. Every forget-gate element is strictly between 0 and 1, so the model keeps decaying earlier hidden states as each new input arrives; this makes the model sensitive to the order of the input sequence.
    • This local bias and positional information matter for image tasks, but the forget gate forces computation into a recurrent form, which reduces parallel efficiency.
  2. Block design

    • Mamba's block design improves performance at similar FLOPs when the attention sub-module is swapped for linear attention; the experiments show that this block design alone yields a clear gain.
  3. Improved linear attention

    • Linear attention is redesigned to incorporate the forget gate and the block design, and the resulting model is called MLLA. Experiments show MLLA outperforms various vision Mamba models on both image classification and high-resolution dense prediction tasks.
  4. Parallel computation and fast inference

    • MLLA replaces the forget gate with rotary positional encoding (RoPE), which supplies the needed positional information while keeping parallel computation and fast inference. This makes MLLA more effective for non-autoregressive vision tasks.

Through these changes, MLLA inherits the strengths of the Mamba model while removing its parallel-computation bottleneck, making it better suited to vision tasks. MLLA demonstrates that, properly designed, a linear attention mechanism can outperform strong conventional models; a minimal sketch of the underlying linear-attention trick follows.
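To make the efficiency claim concrete, here is a minimal, self-contained sketch of kernelized linear attention using the same elu(x) + 1 feature map as the MLLA code in Section III. It illustrates the general O(N) trick only; the full MLLA module below adds RoPE, LePE and multi-head handling:

import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """q, k, v: (B, N, D). Cost is O(N * D^2) versus softmax attention's O(N^2 * D)."""
    q = F.elu(q) + 1.0  # positive feature map, so attention weights stay non-negative
    k = F.elu(k) + 1.0
    kv = k.transpose(-2, -1) @ v  # (B, D, D): keys and values aggregated once for all queries
    z = 1.0 / (q @ k.sum(dim=1, keepdim=True).transpose(-2, -1) + 1e-6)  # (B, N, 1) normalizer
    return (q @ kv) * z  # (B, N, D)

q = k = v = torch.rand(2, 196, 64)  # 196 tokens = a 14x14 feature map
print(linear_attention(q, k, v).shape)  # torch.Size([2, 196, 64])

Because the keys and values are aggregated into a single (D, D) matrix before any query touches them, the cost grows linearly with the number of tokens, which is exactly why the forget gate's recurrence (and its loss of parallelism) can be traded away for RoPE.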


III. Core Code

The code below includes the RoPE mentioned above, but I redesigned this module: the original implementation needed the image width and height at construction time, whereas this version computes them on the fly from the input. If you are interested, compare it against the open-source code!

# --------------------------------------------------------
# Swin Transformer
# Copyright (c) 2021 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ze Liu
# --------------------------------------------------------
# Demystify Mamba in Vision: A Linear Attention Perspective
# Modified by Dongchen Han
# -----------------------------------------------------------------------
import torch
import torch.nn as nn

__all__ = ['C3k2_MLLABlock1', 'C3k2_MLLABlock2']


def drop_path(x, drop_prob: float = 0., training: bool = False, scale_by_keep: bool = True):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

    This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
    the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
    changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
    'survival rate' as the argument.
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = x.new_empty(shape).bernoulli_(keep_prob)
    if keep_prob > 0.0 and scale_by_keep:
        random_tensor.div_(keep_prob)
    return x * random_tensor


class DropPath(nn.Module):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks)."""

    def __init__(self, drop_prob: float = 0., scale_by_keep: bool = True):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob
        self.scale_by_keep = scale_by_keep

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training, self.scale_by_keep)

    def extra_repr(self):
        return f'drop_prob={round(self.drop_prob, 3):0.3f}'


class Mlp(nn.Module):
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


class ConvLayer(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=0, dilation=1, groups=1,
                 bias=True, dropout=0, norm=nn.BatchNorm2d, act_func=nn.ReLU):
        super(ConvLayer, self).__init__()
        self.dropout = nn.Dropout2d(dropout, inplace=False) if dropout > 0 else None
        self.conv = nn.Conv2d(
            in_channels,
            out_channels,
            kernel_size=(kernel_size, kernel_size),
            stride=(stride, stride),
            padding=(padding, padding),
            dilation=(dilation, dilation),
            groups=groups,
            bias=bias,
        )
        self.norm = norm(num_features=out_channels) if norm else None
        self.act = act_func() if act_func else None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.dropout is not None:
            x = self.dropout(x)
        x = self.conv(x)
        if self.norm:
            x = self.norm(x)
        if self.act:
            x = self.act(x)
        return x


class RoPE(torch.nn.Module):
    r"""Rotary Positional Embedding, computed on the fly from the input's spatial shape."""

    def __init__(self, base=10000):
        super(RoPE, self).__init__()
        self.base = base

    def generate_rotations(self, x):
        # Read the spatial dims and feature dim from the input tensor (B, H, W, C)
        channel_dims, feature_dim = x.shape[1:-1], x.shape[-1]
        k_max = feature_dim // (2 * len(channel_dims))
        assert feature_dim % (2 * len(channel_dims)) == 0, "feature dim must be divisible by 2 * number of spatial dims"
        # Generate the rotation angles for every spatial position
        theta_ks = 1 / (self.base ** (torch.arange(k_max, dtype=x.dtype, device=x.device) / k_max))
        angles = torch.cat([t.unsqueeze(-1) * theta_ks for t in
                            torch.meshgrid([torch.arange(d, dtype=x.dtype, device=x.device) for d in channel_dims],
                                           indexing='ij')], dim=-1)
        # Real and imaginary parts of the rotation matrix
        rotations_re = torch.cos(angles).unsqueeze(dim=-1)
        rotations_im = torch.sin(angles).unsqueeze(dim=-1)
        rotations = torch.cat([rotations_re, rotations_im], dim=-1)
        return rotations

    def forward(self, x):
        # Build the rotations for this input's resolution
        rotations = self.generate_rotations(x)
        # View x as complex numbers (pairs of channels)
        x_complex = torch.view_as_complex(x.reshape(*x.shape[:-1], -1, 2))
        # Apply the rotations
        pe_x = torch.view_as_complex(rotations) * x_complex
        # Back to real values, flattening the last two dims
        return torch.view_as_real(pe_x).flatten(-2)


class LinearAttention(nn.Module):
    r"""Linear Attention with LePE and RoPE.

    Args:
        dim (int): Number of input channels.
        num_heads (int): Number of attention heads.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
    """

    def __init__(self, dim, num_heads=4, qkv_bias=True, **kwargs):
        super().__init__()
        self.dim = dim
        self.num_heads = num_heads
        self.qk = nn.Linear(dim, dim * 2, bias=qkv_bias)
        self.elu = nn.ELU()
        self.lepe = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.rope = RoPE()

    def forward(self, x):
        """
        Args:
            x: input features with shape of (B, N, C); N must be a perfect square (square feature map)
        """
        b, n, c = x.shape
        h = int(n ** 0.5)
        w = int(n ** 0.5)
        num_heads = self.num_heads
        head_dim = c // num_heads

        qk = self.qk(x).reshape(b, n, 2, c).permute(2, 0, 1, 3)
        q, k, v = qk[0], qk[1], x
        # q, k, v: (b, n, c)

        q = self.elu(q) + 1.0  # positive kernel feature map
        k = self.elu(k) + 1.0
        q_rope = self.rope(q.reshape(b, h, w, c)).reshape(b, n, num_heads, head_dim).permute(0, 2, 1, 3)
        k_rope = self.rope(k.reshape(b, h, w, c)).reshape(b, n, num_heads, head_dim).permute(0, 2, 1, 3)
        q = q.reshape(b, n, num_heads, head_dim).permute(0, 2, 1, 3)
        k = k.reshape(b, n, num_heads, head_dim).permute(0, 2, 1, 3)
        v = v.reshape(b, n, num_heads, head_dim).permute(0, 2, 1, 3)

        z = 1 / (q @ k.mean(dim=-2, keepdim=True).transpose(-2, -1) + 1e-6)
        kv = (k_rope.transpose(-2, -1) * (n ** -0.5)) @ (v * (n ** -0.5))
        x = q_rope @ kv * z
        x = x.transpose(1, 2).reshape(b, n, c)
        v = v.transpose(1, 2).reshape(b, h, w, c).permute(0, 3, 1, 2)
        x = x + self.lepe(v).permute(0, 2, 3, 1).reshape(b, n, c)
        return x

    def extra_repr(self) -> str:
        return f'dim={self.dim}, num_heads={self.num_heads}'


class MLLABlock(nn.Module):
    r"""MLLA Block.

    Args:
        dim (int): Number of input channels.
        num_heads (int): Number of attention heads.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        drop (float, optional): Dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
    """

    def __init__(self, dim, num_heads=4, mlp_ratio=4., qkv_bias=True, drop=0., drop_path=0.,
                 act_layer=nn.GELU, norm_layer=nn.LayerNorm, **kwargs):
        super().__init__()
        self.dim = dim
        num_heads = max(1, dim // 64)  # scale the head count with the channel width
        self.num_heads = num_heads
        self.mlp_ratio = mlp_ratio
        self.cpe1 = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.norm1 = norm_layer(dim)
        self.in_proj = nn.Linear(dim, dim)
        self.act_proj = nn.Linear(dim, dim)
        self.dwc = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.act = nn.SiLU()
        self.attn = LinearAttention(dim=dim, num_heads=num_heads, qkv_bias=qkv_bias)
        self.out_proj = nn.Linear(dim, dim)
        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.cpe2 = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.norm2 = norm_layer(dim)
        self.mlp = Mlp(in_features=dim, hidden_features=int(dim * mlp_ratio), act_layer=act_layer, drop=drop)

    def forward(self, x):
        # (B, C, H, W) feature map -> (B, N, C) token sequence
        B, C, H, W = x.shape
        x = x.flatten(2).transpose(1, 2)
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"

        x = x + self.cpe1(x.reshape(B, H, W, C).permute(0, 3, 1, 2)).flatten(2).permute(0, 2, 1)
        shortcut = x

        x = self.norm1(x)
        act_res = self.act(self.act_proj(x))
        x = self.in_proj(x).view(B, H, W, C)
        x = self.act(self.dwc(x.permute(0, 3, 1, 2))).permute(0, 2, 3, 1).view(B, L, C)

        # Linear Attention
        x = self.attn(x)

        x = self.out_proj(x * act_res)
        x = shortcut + self.drop_path(x)
        x = x + self.cpe2(x.reshape(B, H, W, C).permute(0, 3, 1, 2)).flatten(2).permute(0, 2, 1)

        # FFN
        x = x + self.drop_path(self.mlp(self.norm2(x)))

        # (B, N, C) token sequence -> (B, C, H, W) feature map
        x = x.transpose(2, 1).reshape(B, C, H, W)
        return x

    def extra_repr(self) -> str:
        return f"dim={self.dim}, num_heads={self.num_heads}, mlp_ratio={self.mlp_ratio}"


def autopad(k, p=None, d=1):  # kernel, padding, dilation
    """Pad to 'same' shape outputs."""
    if d > 1:
        k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k]  # actual kernel-size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p


class Conv(nn.Module):
    """Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""

    default_act = nn.SiLU()  # default activation

    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
        """Initialize Conv layer with given arguments including activation."""
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()

    def forward(self, x):
        """Apply convolution, batch normalization and activation to input tensor."""
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        """Apply fused convolution and activation without batch normalization."""
        return self.act(self.conv(x))


class Bottleneck(nn.Module):
    """Standard bottleneck."""

    def __init__(self, c1, c2, shortcut=True, g=1, k=(3, 3), e=0.5):
        """Initializes a standard bottleneck module with optional shortcut connection and configurable parameters."""
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, k[0], 1)
        self.cv2 = Conv(c_, c2, k[1], 1, g=g)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        """Applies the bottleneck, with an identity shortcut when in/out channels match."""
        return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x))


class C2f(nn.Module):
    """Faster Implementation of CSP Bottleneck with 2 convolutions."""

    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):
        """Initializes a CSP bottleneck with 2 convolutions and n Bottleneck blocks for faster processing."""
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))

    def forward(self, x):
        """Forward pass through C2f layer."""
        y = list(self.cv1(x).chunk(2, 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))

    def forward_split(self, x):
        """Forward pass using split() instead of chunk()."""
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))


class C3(nn.Module):
    """CSP Bottleneck with 3 convolutions."""

    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):
        """Initialize the CSP Bottleneck with given channels, number, shortcut, groups, and expansion values."""
        super().__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(2 * c_, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, k=((1, 1), (3, 3)), e=1.0) for _ in range(n)))

    def forward(self, x):
        """Forward pass through the CSP bottleneck with 3 convolutions."""
        return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))


class C3k(C3):
    """C3k is a CSP bottleneck module with customizable kernel sizes for feature extraction in neural networks."""

    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, k=3):
        """Initializes the C3k module with specified channels, number of layers, and configurations."""
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)  # hidden channels
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, g, k=(k, k), e=1.0) for _ in range(n)))


class C3kMLLABlock(C3):
    """C3k variant whose Bottlenecks are replaced by MLLABlocks."""

    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5, k=3):
        """Initializes the module with specified channels, number of layers, and configurations."""
        super().__init__(c1, c2, n, shortcut, g, e)
        c_ = int(c2 * e)  # hidden channels
        self.m = nn.Sequential(*(MLLABlock(c_) for _ in range(n)))


class C3k2_MLLABlock1(C2f):
    """C3k2 variant where MLLABlock directly replaces the Bottleneck when c3k=False."""

    def __init__(self, c1, c2, n=1, c3k=False, e=0.5, g=1, shortcut=True):
        """Initializes the C3k2 module, a faster CSP Bottleneck with 2 convolutions and optional C3k blocks."""
        super().__init__(c1, c2, n, shortcut, g, e)
        self.m = nn.ModuleList(
            C3k(self.c, self.c, 2, shortcut, g) if c3k else MLLABlock(self.c) for _ in range(n)
        )
# Variant 1: MLLABlock replaces the Bottleneck of C3k2 itself.


class C3k2_MLLABlock2(C2f):
    """C3k2 variant where MLLABlock replaces the Bottleneck inside the C3k sub-blocks when c3k=True."""

    def __init__(self, c1, c2, n=1, c3k=False, e=0.5, g=1, shortcut=True):
        """Initializes the C3k2 module, a faster CSP Bottleneck with 2 convolutions and optional C3k blocks."""
        super().__init__(c1, c2, n, shortcut, g, e)
        self.m = nn.ModuleList(
            C3kMLLABlock(self.c, self.c, 2, shortcut, g) if c3k else Bottleneck(self.c, self.c, shortcut, g) for _ in range(n)
        )
# Variant 2: MLLABlock replaces the Bottleneck inside C3k.
# The two variants differ in where the MLLABlock sits; pick between them in the yaml file.


if __name__ == "__main__":
    # Generate a sample image
    image_size = (1, 64, 224, 224)
    image = torch.rand(*image_size)

    # Model
    model = C3k2_MLLABlock2(64, 64, 3, c3k=True)

    out = model(image)
    print(out.size())
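As a quick sanity check of the on-the-fly shape handling described above, the same RoPE instance can be applied to feature maps of different resolutions without being rebuilt. This snippet assumes it runs in the same file as (or imports) the RoPE class above:

rope = RoPE()
for hw in (8, 16):
    t = torch.rand(1, hw, hw, 32)  # (B, H, W, C)
    print(rope(t).shape)           # output shape matches the input at every resolution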


IV. Step-by-Step Guide to Adding C3k2MLLABlock

4.1 Step 1

First, create the files. Go to the ultralytics/nn folder and create a directory named 'Addmodules' (if you are using the files from my group, it already exists and you can skip this). Then create a new .py file inside it and paste in the core code above.
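For reference, assuming the new file is named MLLABlock.py (any name works as long as the import in step 4.2 matches), the layout is:

ultralytics/nn/Addmodules/
    __init__.py     # created in step 4.2
    MLLABlock.py    # paste the core code from Section III here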


4.2 Step 2

Second, create a new file named '__init__.py' in that directory (already present if you use the group files), and import our module in it, as sketched below.
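A minimal __init__.py, assuming the file name MLLABlock.py from step 4.1 (adjust the module name if you chose a different file name):

from .MLLABlock import C3k2_MLLABlock1, C3k2_MLLABlock2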


4.3 Step 3

Third, open the file 'ultralytics/nn/tasks.py' to import and register our module (already done if you use the group files; skip straight to step 4.4).
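The import itself is a single line near the top of tasks.py, assuming the package layout from steps 4.1 and 4.2:

from ultralytics.nn.Addmodules import C3k2_MLLABlock1, C3k2_MLLABlock2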

From today on, all my tutorials follow this layout, because I assume you are modifying the files from my group!!


4.4 Step 4

Register the new modules in parse_model, following the pattern sketched below.
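The pattern is to add the two class names wherever parse_model already lists C3k2, so that channel widths and the repeat count n are computed the same way. The exact membership of these sets varies between ultralytics versions, so treat this as a sketch of the relevant fragment rather than a verbatim copy:

# inside parse_model's layer loop in ultralytics/nn/tasks.py
if m in {Conv, C2f, C3k2, C3k2_MLLABlock1, C3k2_MLLABlock2}:  # ...plus the other built-in modules
    c1, c2 = ch[f], args[0]
    if c2 != nc:
        c2 = make_divisible(min(c2, max_channels) * width, 8)
    args = [c1, c2, *args[1:]]
    if m in {C2f, C3k2, C3k2_MLLABlock1, C3k2_MLLABlock2}:  # modules that take a repeat count
        args.insert(2, n)  # number of repeats
        n = 1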


4.5 Step 5

Find the build_dataset function in the DetectionTrainer class in ultralytics/models/yolo/detect/train.py and change rect=mode == 'val' to rect=False.
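For orientation, the call sits at the end of DetectionTrainer.build_dataset and only the rect argument changes; the surrounding code is paraphrased here and may differ slightly between ultralytics versions. The change matters because MLLABlock recovers H and W as the square root of the token count, so it needs square feature maps, and rectangular validation batches (rect=True) would violate that assumption:

# ultralytics/models/yolo/detect/train.py, DetectionTrainer.build_dataset (paraphrased)
return build_yolo_dataset(
    self.args, img_path, batch, self.data, mode=mode,
    rect=False,  # was: rect=mode == 'val'
    stride=gs,
)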

That completes the modifications; you can now copy one of the yaml files below and run.


V. yaml Files and Training Records for C3k2MLLABlock

5.1 C3k2MLLABlock yaml file 1

Model info for this version: YOLO11-C3k2-MLLABlock-1 summary: 390 layers, 2,647,307 parameters, 2,647,291 gradients, 6.8 GFLOPs

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024]

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2_MLLABlock1, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2_MLLABlock1, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2_MLLABlock1, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2_MLLABlock1, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2_MLLABlock1, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2_MLLABlock1, [256, False]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2_MLLABlock1, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2_MLLABlock1, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)


5.2 C3k2MLLABlock yaml file 2

Training info for this version: YOLO11-C3k2-MLLABlock-2 summary: 404 layers, 2,518,555 parameters, 2,518,539 gradients, 6.4 GFLOPs

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2_MLLABlock2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2_MLLABlock2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2_MLLABlock2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2_MLLABlock2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2_MLLABlock2, [512, False]] # 13
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2_MLLABlock2, [256, False]] # 16 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2_MLLABlock2, [512, False]] # 19 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2_MLLABlock2, [1024, True]] # 22 (P5/32-large)
  - [[16, 19, 22], 1, Detect, [nc]] # Detect(P3, P4, P5)


5.3 Training code

Create a .py file, paste in the code below, set your own file paths, and run.

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('yolo11-C3k2-MLLABlock.yaml')  # path to the yaml you saved from Section V
    # How to switch model scales: if an improved config is named yolo11-XXX.yaml, then e.g.
    # yolo11s-XXX.yaml selects the s scale (change the name passed to YOLO above,
    # not the config file itself)!
    # model.load('yolo11n.pt') # whether to load pretrained weights; for research I don't recommend it, otherwise gains are hard to demonstrate
    model.train(data=r"C:\Users\Administrator\PycharmProjects\yolov5-master\yolov5-master\Construction Site Safety.v30-raw-images_latestversion.yolov8\data.yaml",
                # for other tasks, open 'ultralytics/cfg/default.yaml' and change task to detect, segment, classify or pose
                cache=False,
                imgsz=640,
                epochs=150,
                single_cls=False,  # whether this is single-class detection
                batch=16,
                close_mosaic=0,
                workers=0,
                device='0',
                optimizer='SGD',  # using SGD
                # resume='runs/train/exp21/weights/last.pt',  # to resume training, point this at your last.pt
                amp=True,  # if the training loss becomes NaN, try disabling amp
                project='runs/train',
                name='exp',
                )


5.4 Training screenshots for C3k2MLLABlock


VI. Conclusion

That wraps up this article's formal content. Let me recommend my column of effective YOLOv11 improvements: it is newly opened with an average quality score of 98, and going forward I will reproduce papers from the latest top conferences and backfill some older improvement mechanisms. If this article helped you, subscribe to the column and follow along for future updates~