1. Introduction
The improvement this article brings is the UniRepLKNet feature-extraction network. Its design centers on the Dilated Reparam Block and a set of large-kernel design guidelines: use efficient structures for inter-channel communication and spatial aggregation, and reparameterize the non-dilated large kernel with small dilated kernels. UniRepLKNet is essentially an upgraded version of RepLKNet (we already published a tutorial on RepLKNet). It delivers excellent performance across a wide range of vision tasks, including image classification, object detection, and semantic segmentation.
Welcome to subscribe to my column and learn YOLO together!
(The content of this article can be further rescaled with YOLOv11's N, S, M, L, and X settings, taking the lightweight design one step further.)
2. The UniRepLKNet Framework
Official paper: https://arxiv.org/abs/2311.15599
Official code: https://github.com/AILab-CVC/UniRepLKNet
The UniRepLKNet paper proposes a new large-kernel convolutional neural network architecture. It strengthens a non-dilated large-kernel conv layer with parallel non-dilated and dilated small-kernel layers, enriching the spatial pattern hierarchy and the representational power. The paper stresses choosing the kernel size according to the downstream task, and shows that the architecture generalizes beyond image recognition to domains such as audio, video, and time-series data. It also reports leading results across a variety of tasks, demonstrating both versatility and efficiency.
The main innovations of UniRepLKNet include:
Architecturally, the innovation lies in the design of the large-kernel ConvNet itself: the efficient use of large kernels, and a construction distinct from both conventional ConvNets and Transformers. The architecture strengthens the large-kernel layer by fusing in non-dilated and dilated small-kernel layers, improving the spatial pattern hierarchy and the network's representational power. The paper also proposes four architectural design principles for large-kernel ConvNets, aimed at fully exploiting the distinctive advantage of large kernels: observing a wide field of view with a shallow structure, without having to stack many layers.
In the UniRepLKNet architecture shown in the figure, a notable structural innovation is the LarK (Large Kernel) block, which consists of the proposed Dilated Reparam Block, an SE (Squeeze-and-Excitation) block, a feed-forward network (FFN), and batch normalization (BN) layers. The main difference between a LarK block and a SmaK (Small Kernel) block is that the SmaK block uses a depthwise 3x3 conv layer in place of the Dilated Reparam Block. Stages are connected by downsampling blocks implemented as stride-2 dense 3x3 conv layers, and the blocks can be arranged flexibly across stages. The design emphasizes modularity and flexibility, using large kernels to improve the model's performance and efficiency.
In Figure 2, the Dilated Reparam Block strengthens a non-dilated large-kernel conv layer with dilated small-kernel conv layers. From a parameter point of view, each dilated layer is equivalent to a non-dilated conv layer with a larger sparse kernel, so the whole block can be equivalently converted into a single large-kernel conv. Through this reparameterization, the small-kernel conv layers with different dilation rates are merged into one equivalent large-kernel conv layer, which strengthens the network's ability to capture spatial information while keeping the number of learnable parameters and the inference cost unchanged. This design gives the ConvNet a wider receptive field without making the model deeper.
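To make the equivalence concrete (this is my summary of the relation used by the merging code in Section 3, not a formula quoted from the paper): a depthwise conv with kernel size k and dilation rate r covers the same taps as a non-dilated conv with a sparse kernel of size k_eq = r * (k - 1) + 1. For the default 13x13 configuration (kernel sizes [5, 7, 3, 3, 3] with dilation rates [1, 2, 3, 4, 5]), the five branches therefore behave as sparse kernels of sizes 5, 13, 7, 9, and 11, each of which can be zero-padded to 13x13 and added onto the large kernel at merge time.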
3. Core Code of UniRepLKNet
See Section 4 for how to use the core code below.
# UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition
# Github source: https://github.com/AILab-CVC/UniRepLKNet
# Licensed under The Apache License 2.0 License [see LICENSE for details]
# Based on RepLKNet, ConvNeXt, timm, DINO and DeiT code bases
# https://github.com/DingXiaoH/RepLKNet-pytorch
# https://github.com/facebookresearch/ConvNeXt
# https://github.com/rwightman/pytorch-image-models/tree/master/timm
# https://github.com/facebookresearch/deit/
# https://github.com/facebookresearch/dino
# --------------------------------------------------------
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.layers import trunc_normal_, DropPath, to_2tuple
from functools import partial
import torch.utils.checkpoint as checkpoint
import numpy as np

__all__ = ['unireplknet_a', 'unireplknet_f', 'unireplknet_p', 'unireplknet_n', 'unireplknet_t', 'unireplknet_s',
           'unireplknet_b', 'unireplknet_l', 'unireplknet_xl']


class GRNwithNHWC(nn.Module):
    """ GRN (Global Response Normalization) layer
    Originally proposed in ConvNeXt V2 (https://arxiv.org/abs/2301.00808)
    This implementation is more efficient than the original (https://github.com/facebookresearch/ConvNeXt-V2)
    We assume the inputs to this layer are (N, H, W, C)
    """
    def __init__(self, dim, use_bias=True):
        super().__init__()
        self.use_bias = use_bias
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        if self.use_bias:
            self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))

    def forward(self, x):
        Gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)
        Nx = Gx / (Gx.mean(dim=-1, keepdim=True) + 1e-6)
        if self.use_bias:
            return (self.gamma * Nx + 1) * x + self.beta
        else:
            return (self.gamma * Nx + 1) * x


class NCHWtoNHWC(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x.permute(0, 2, 3, 1)


class NHWCtoNCHW(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x.permute(0, 3, 1, 2)


# ================== This function decides which conv implementation (the native or iGEMM) to use
# Note that iGEMM large-kernel conv impl will be used if
# - you attempt to do so (attempt_to_use_large_impl=True), and
# - it has been installed (follow https://github.com/AILab-CVC/UniRepLKNet), and
# - the conv layer is depth-wise, stride = 1, non-dilated, kernel_size > 5, and padding == kernel_size // 2
def get_conv2d(in_channels, out_channels, kernel_size, stride, padding, dilation, groups, bias,
               attempt_use_lk_impl=True):
    kernel_size = to_2tuple(kernel_size)
    if padding is None:
        padding = (kernel_size[0] // 2, kernel_size[1] // 2)
    else:
        padding = to_2tuple(padding)
    need_large_impl = kernel_size[0] == kernel_size[1] and kernel_size[0] > 5 \
                      and padding == (kernel_size[0] // 2, kernel_size[1] // 2)
    # if attempt_use_lk_impl and need_large_impl:
    #     print('---------------- trying to import iGEMM implementation for large-kernel conv')
    #     try:
    #         from depthwise_conv2d_implicit_gemm import DepthWiseConv2dImplicitGEMM
    #         print('---------------- found iGEMM implementation ')
    #     except:
    #         DepthWiseConv2dImplicitGEMM = None
    #         print('---------------- found no iGEMM. use original conv. follow https://github.com/AILab-CVC/UniRepLKNet to install it.')
    #     if DepthWiseConv2dImplicitGEMM is not None and need_large_impl and in_channels == out_channels \
    #             and out_channels == groups and stride == 1 and dilation == 1:
    #         print(f'===== iGEMM Efficient Conv Impl, channels {in_channels}, kernel size {kernel_size} =====')
    #         return DepthWiseConv2dImplicitGEMM(in_channels, kernel_size, bias=bias)
    return nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride,
                     padding=padding, dilation=dilation, groups=groups, bias=bias)


def get_bn(dim, use_sync_bn=False):
    if use_sync_bn:
        return nn.SyncBatchNorm(dim)
    else:
        return nn.BatchNorm2d(dim)


class SEBlock(nn.Module):
    """
    Squeeze-and-Excitation Block proposed in SENet (https://arxiv.org/abs/1709.01507)
    We assume the inputs to this layer are (N, C, H, W)
    """
    def __init__(self, input_channels, internal_neurons):
        super(SEBlock, self).__init__()
        self.down = nn.Conv2d(in_channels=input_channels, out_channels=internal_neurons,
                              kernel_size=1, stride=1, bias=True)
        self.up = nn.Conv2d(in_channels=internal_neurons, out_channels=input_channels,
                            kernel_size=1, stride=1, bias=True)
        self.input_channels = input_channels
        self.nonlinear = nn.ReLU(inplace=True)

    def forward(self, inputs):
        x = F.adaptive_avg_pool2d(inputs, output_size=(1, 1))
        x = self.down(x)
        x = self.nonlinear(x)
        x = self.up(x)
        x = F.sigmoid(x)
        return inputs * x.view(-1, self.input_channels, 1, 1)


def fuse_bn(conv, bn):
    conv_bias = 0 if conv.bias is None else conv.bias
    std = (bn.running_var + bn.eps).sqrt()
    return conv.weight * (bn.weight / std).reshape(-1, 1, 1, 1), bn.bias + (conv_bias - bn.running_mean) * bn.weight / std


def convert_dilated_to_nondilated(kernel, dilate_rate):
    identity_kernel = torch.ones((1, 1, 1, 1)).to(kernel.device)
    if kernel.size(1) == 1:
        # This is a DW kernel
        dilated = F.conv_transpose2d(kernel, identity_kernel, stride=dilate_rate)
        return dilated
    else:
        # This is a dense or group-wise (but not DW) kernel
        slices = []
        for i in range(kernel.size(1)):
            dilated = F.conv_transpose2d(kernel[:, i:i + 1, :, :], identity_kernel, stride=dilate_rate)
            slices.append(dilated)
        return torch.cat(slices, dim=1)


def merge_dilated_into_large_kernel(large_kernel, dilated_kernel, dilated_r):
    large_k = large_kernel.size(2)
    dilated_k = dilated_kernel.size(2)
    equivalent_kernel_size = dilated_r * (dilated_k - 1) + 1
    equivalent_kernel = convert_dilated_to_nondilated(dilated_kernel, dilated_r)
    rows_to_pad = large_k // 2 - equivalent_kernel_size // 2
    merged_kernel = large_kernel + F.pad(equivalent_kernel, [rows_to_pad] * 4)
    return merged_kernel


class DilatedReparamBlock(nn.Module):
    """
    Dilated Reparam Block proposed in UniRepLKNet (https://github.com/AILab-CVC/UniRepLKNet)
    We assume the inputs to this block are (N, C, H, W)
    """
    def __init__(self, channels, kernel_size, deploy, use_sync_bn=False, attempt_use_lk_impl=True):
        super().__init__()
        self.lk_origin = get_conv2d(channels, channels, kernel_size, stride=1,
                                    padding=kernel_size // 2, dilation=1, groups=channels, bias=deploy,
                                    attempt_use_lk_impl=attempt_use_lk_impl)
        self.attempt_use_lk_impl = attempt_use_lk_impl

        # Default settings. We did not tune them carefully. Different settings may work better.
        if kernel_size == 17:
            self.kernel_sizes = [5, 9, 3, 3, 3]
            self.dilates = [1, 2, 4, 5, 7]
        elif kernel_size == 15:
            self.kernel_sizes = [5, 7, 3, 3, 3]
            self.dilates = [1, 2, 3, 5, 7]
        elif kernel_size == 13:
            self.kernel_sizes = [5, 7, 3, 3, 3]
            self.dilates = [1, 2, 3, 4, 5]
        elif kernel_size == 11:
            self.kernel_sizes = [5, 5, 3, 3, 3]
            self.dilates = [1, 2, 3, 4, 5]
        elif kernel_size == 9:
            self.kernel_sizes = [5, 5, 3, 3]
            self.dilates = [1, 2, 3, 4]
        elif kernel_size == 7:
            self.kernel_sizes = [5, 3, 3]
            self.dilates = [1, 2, 3]
        elif kernel_size == 5:
            self.kernel_sizes = [3, 3]
            self.dilates = [1, 2]
        else:
            raise ValueError('Dilated Reparam Block requires kernel_size >= 5')

        if not deploy:
            self.origin_bn = get_bn(channels, use_sync_bn)
            for k, r in zip(self.kernel_sizes, self.dilates):
                self.__setattr__('dil_conv_k{}_{}'.format(k, r),
                                 nn.Conv2d(in_channels=channels, out_channels=channels, kernel_size=k, stride=1,
                                           padding=(r * (k - 1) + 1) // 2, dilation=r, groups=channels,
                                           bias=False))
                self.__setattr__('dil_bn_k{}_{}'.format(k, r), get_bn(channels, use_sync_bn=use_sync_bn))

    def forward(self, x):
        if not hasattr(self, 'origin_bn'):  # deploy mode
            return self.lk_origin(x)
        out = self.origin_bn(self.lk_origin(x))
        for k, r in zip(self.kernel_sizes, self.dilates):
            conv = self.__getattr__('dil_conv_k{}_{}'.format(k, r))
            bn = self.__getattr__('dil_bn_k{}_{}'.format(k, r))
            out = out + bn(conv(x))
        return out

    def merge_dilated_branches(self):
        if hasattr(self, 'origin_bn'):
            origin_k, origin_b = fuse_bn(self.lk_origin, self.origin_bn)
            for k, r in zip(self.kernel_sizes, self.dilates):
                conv = self.__getattr__('dil_conv_k{}_{}'.format(k, r))
                bn = self.__getattr__('dil_bn_k{}_{}'.format(k, r))
                branch_k, branch_b = fuse_bn(conv, bn)
                origin_k = merge_dilated_into_large_kernel(origin_k, branch_k, r)
                origin_b += branch_b
            merged_conv = get_conv2d(origin_k.size(0), origin_k.size(0), origin_k.size(2), stride=1,
                                     padding=origin_k.size(2) // 2, dilation=1, groups=origin_k.size(0), bias=True,
                                     attempt_use_lk_impl=self.attempt_use_lk_impl)
            merged_conv.weight.data = origin_k
            merged_conv.bias.data = origin_b
            self.lk_origin = merged_conv
            self.__delattr__('origin_bn')
            for k, r in zip(self.kernel_sizes, self.dilates):
                self.__delattr__('dil_conv_k{}_{}'.format(k, r))
                self.__delattr__('dil_bn_k{}_{}'.format(k, r))


class UniRepLKNetBlock(nn.Module):
    def __init__(self,
                 dim,
                 kernel_size,
                 drop_path=0.,
                 layer_scale_init_value=1e-6,
                 deploy=False,
                 attempt_use_lk_impl=True,
                 with_cp=False,
                 use_sync_bn=False,
                 ffn_factor=4):
        super().__init__()
        self.with_cp = with_cp
        # if deploy:
        #     print('------------------------------- Note: deploy mode')
        # if self.with_cp:
        #     print('****** note with_cp = True, reduce memory consumption but may slow down training ******')
        self.need_contiguous = (not deploy) or kernel_size >= 7

        if kernel_size == 0:
            self.dwconv = nn.Identity()
            self.norm = nn.Identity()
        elif deploy:
            self.dwconv = get_conv2d(dim, dim, kernel_size=kernel_size, stride=1, padding=kernel_size // 2,
                                     dilation=1, groups=dim, bias=True,
                                     attempt_use_lk_impl=attempt_use_lk_impl)
            self.norm = nn.Identity()
        elif kernel_size >= 7:
            self.dwconv = DilatedReparamBlock(dim, kernel_size, deploy=deploy,
                                              use_sync_bn=use_sync_bn,
                                              attempt_use_lk_impl=attempt_use_lk_impl)
            self.norm = get_bn(dim, use_sync_bn=use_sync_bn)
        elif kernel_size == 1:
            self.dwconv = nn.Conv2d(dim, dim, kernel_size=kernel_size, stride=1, padding=kernel_size // 2,
                                    dilation=1, groups=1, bias=deploy)
            self.norm = get_bn(dim, use_sync_bn=use_sync_bn)
        else:
            assert kernel_size in [3, 5]
            self.dwconv = nn.Conv2d(dim, dim, kernel_size=kernel_size, stride=1, padding=kernel_size // 2,
                                    dilation=1, groups=dim, bias=deploy)
            self.norm = get_bn(dim, use_sync_bn=use_sync_bn)

        self.se = SEBlock(dim, dim // 4)

        ffn_dim = int(ffn_factor * dim)
        self.pwconv1 = nn.Sequential(
            NCHWtoNHWC(),
            nn.Linear(dim, ffn_dim))
        self.act = nn.Sequential(
            nn.GELU(),
            GRNwithNHWC(ffn_dim, use_bias=not deploy))
        if deploy:
            self.pwconv2 = nn.Sequential(
                nn.Linear(ffn_dim, dim),
                NHWCtoNCHW())
        else:
            self.pwconv2 = nn.Sequential(
                nn.Linear(ffn_dim, dim, bias=False),
                NHWCtoNCHW(),
                get_bn(dim, use_sync_bn=use_sync_bn))

        self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(dim),
                                  requires_grad=True) if (not deploy) and layer_scale_init_value is not None \
                                                         and layer_scale_init_value > 0 else None
        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()

    def forward(self, inputs):

        def _f(x):
            if self.need_contiguous:
                x = x.contiguous()
            y = self.se(self.norm(self.dwconv(x)))
            y = self.pwconv2(self.act(self.pwconv1(y)))
            if self.gamma is not None:
                y = self.gamma.view(1, -1, 1, 1) * y
            return self.drop_path(y) + x

        if self.with_cp and inputs.requires_grad:
            return checkpoint.checkpoint(_f, inputs)
        else:
            return _f(inputs)

    def reparameterize(self):
        if hasattr(self.dwconv, 'merge_dilated_branches'):
            self.dwconv.merge_dilated_branches()
        if hasattr(self.norm, 'running_var') and hasattr(self.dwconv, 'lk_origin'):
            std = (self.norm.running_var + self.norm.eps).sqrt()
            self.dwconv.lk_origin.weight.data *= (self.norm.weight / std).view(-1, 1, 1, 1)
            self.dwconv.lk_origin.bias.data = self.norm.bias + (self.dwconv.lk_origin.bias - self.norm.running_mean) * self.norm.weight / std
            self.norm = nn.Identity()
        if self.gamma is not None:
            final_scale = self.gamma.data
            self.gamma = None
        else:
            final_scale = 1
        if self.act[1].use_bias and len(self.pwconv2) == 3:
            grn_bias = self.act[1].beta.data
            self.act[1].__delattr__('beta')
            self.act[1].use_bias = False
            linear = self.pwconv2[0]
            grn_bias_projected_bias = (linear.weight.data @ grn_bias.view(-1, 1)).squeeze()
            bn = self.pwconv2[2]
            std = (bn.running_var + bn.eps).sqrt()
            new_linear = nn.Linear(linear.in_features, linear.out_features, bias=True)
            new_linear.weight.data = linear.weight * (bn.weight / std * final_scale).view(-1, 1)
            linear_bias = 0 if linear.bias is None else linear.bias.data
            linear_bias += grn_bias_projected_bias
            new_linear.bias.data = (bn.bias + (linear_bias - bn.running_mean) * bn.weight / std) * final_scale
            self.pwconv2 = nn.Sequential(new_linear, self.pwconv2[1])


default_UniRepLKNet_A_F_P_kernel_sizes = ((3, 3),
                                          (13, 13),
                                          (13, 13, 13, 13, 13, 13),
                                          (13, 13))
default_UniRepLKNet_N_kernel_sizes = ((3, 3),
                                      (13, 13),
                                      (13, 13, 13, 13, 13, 13, 13, 13),
                                      (13, 13))
default_UniRepLKNet_T_kernel_sizes = ((3, 3, 3),
                                      (13, 13, 13),
                                      (13, 3, 13, 3, 13, 3, 13, 3, 13, 3, 13, 3, 13, 3, 13, 3, 13, 3),
                                      (13, 13, 13))
default_UniRepLKNet_S_B_L_XL_kernel_sizes = ((3, 3, 3),
                                             (13, 13, 13),
                                             (13, 3, 3, 13, 3, 3, 13, 3, 3, 13, 3, 3, 13, 3, 3, 13, 3, 3, 13, 3, 3, 13, 3, 3, 13, 3, 3),
                                             (13, 13, 13))
UniRepLKNet_A_F_P_depths = (2, 2, 6, 2)
UniRepLKNet_N_depths = (2, 2, 8, 2)
UniRepLKNet_T_depths = (3, 3, 18, 3)
UniRepLKNet_S_B_L_XL_depths = (3, 3, 27, 3)

default_depths_to_kernel_sizes = {
    UniRepLKNet_A_F_P_depths: default_UniRepLKNet_A_F_P_kernel_sizes,
    UniRepLKNet_N_depths: default_UniRepLKNet_N_kernel_sizes,
    UniRepLKNet_T_depths: default_UniRepLKNet_T_kernel_sizes,
    UniRepLKNet_S_B_L_XL_depths: default_UniRepLKNet_S_B_L_XL_kernel_sizes
}


class UniRepLKNet(nn.Module):
    r""" UniRepLKNet
        A PyTorch impl of UniRepLKNet

    Args:
        in_chans (int): Number of input image channels. Default: 3
        num_classes (int): Number of classes for classification head. Default: 1000
        depths (tuple(int)): Number of blocks at each stage. Default: (3, 3, 27, 3)
        dims (int): Feature dimension at each stage. Default: (96, 192, 384, 768)
        drop_path_rate (float): Stochastic depth rate. Default: 0.
        layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6.
        head_init_scale (float): Init scaling value for classifier weights and biases. Default: 1.
        kernel_sizes (tuple(tuple(int))): Kernel size for each block. None means using the default settings. Default: None.
        deploy (bool): deploy = True means using the inference structure. Default: False
        with_cp (bool): with_cp = True means using torch.utils.checkpoint to save GPU memory. Default: False
        init_cfg (dict): weights to load. The easiest way to use UniRepLKNet with for OpenMMLab family. Default: None
        attempt_use_lk_impl (bool): try to load the efficient iGEMM large-kernel impl. Setting it to False disabling the iGEMM impl. Default: True
        use_sync_bn (bool): use_sync_bn = True means using sync BN. Use it if your batch size is small. Default: False
    """

    def __init__(self,
                 in_chans=3,
                 num_classes=1000,
                 depths=(3, 3, 27, 3),
                 dims=(96, 192, 384, 768),
                 drop_path_rate=0.,
                 layer_scale_init_value=1e-6,
                 head_init_scale=1.,
                 kernel_sizes=None,
                 deploy=False,
                 with_cp=False,
                 init_cfg=None,
                 attempt_use_lk_impl=True,
                 use_sync_bn=False,
                 **kwargs
                 ):
        super().__init__()

        depths = tuple(depths)
        if kernel_sizes is None:
            if depths in default_depths_to_kernel_sizes:
                # print('=========== use default kernel size ')
                kernel_sizes = default_depths_to_kernel_sizes[depths]
            else:
                raise ValueError('no default kernel size settings for the given depths, '
                                 'please specify kernel sizes for each block, e.g., '
                                 '((3, 3), (13, 13), (13, 13, 13, 13, 13, 13), (13, 13))')
        # print(kernel_sizes)
        for i in range(4):
            assert len(kernel_sizes[i]) == depths[i], 'kernel sizes do not match the depths'

        self.with_cp = with_cp

        dp_rates = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))]
        # print('=========== drop path rates: ', dp_rates)

        self.downsample_layers = nn.ModuleList()
        self.downsample_layers.append(nn.Sequential(
            nn.Conv2d(in_chans, dims[0] // 2, kernel_size=3, stride=2, padding=1),
            LayerNorm(dims[0] // 2, eps=1e-6, data_format="channels_first"),
            nn.GELU(),
            nn.Conv2d(dims[0] // 2, dims[0], kernel_size=3, stride=2, padding=1),
            LayerNorm(dims[0], eps=1e-6, data_format="channels_first")))

        for i in range(3):
            self.downsample_layers.append(nn.Sequential(
                nn.Conv2d(dims[i], dims[i + 1], kernel_size=3, stride=2, padding=1),
                LayerNorm(dims[i + 1], eps=1e-6, data_format="channels_first")))

        self.stages = nn.ModuleList()
        cur = 0
        for i in range(4):
            main_stage = nn.Sequential(
                *[UniRepLKNetBlock(dim=dims[i], kernel_size=kernel_sizes[i][j], drop_path=dp_rates[cur + j],
                                   layer_scale_init_value=layer_scale_init_value, deploy=deploy,
                                   attempt_use_lk_impl=attempt_use_lk_impl,
                                   with_cp=with_cp, use_sync_bn=use_sync_bn) for j in
                  range(depths[i])])
            self.stages.append(main_stage)
            cur += depths[i]

        self.output_mode = 'features'
        norm_layer = partial(LayerNorm, eps=1e-6, data_format="channels_first")
        for i_layer in range(4):
            layer = norm_layer(dims[i_layer])
            layer_name = f'norm{i_layer}'
            self.add_module(layer_name, layer)
        self.width_list = [i.size(1) for i in self.forward(torch.randn(1, 3, 640, 640))]
        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            trunc_normal_(m.weight, std=.02)
            if hasattr(m, 'bias') and m.bias is not None:
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        if self.output_mode == 'logits':
            for stage_idx in range(4):
                x = self.downsample_layers[stage_idx](x)
                x = self.stages[stage_idx](x)
            x = self.norm(x.mean([-2, -1]))
            x = self.head(x)
            return x
        elif self.output_mode == 'features':
            outs = []
            for stage_idx in range(4):
                x = self.downsample_layers[stage_idx](x)
                x = self.stages[stage_idx](x)
                outs.append(self.__getattr__(f'norm{stage_idx}')(x))
            return outs
        else:
            raise ValueError('Defined new output mode?')

    def switch_to_deploy(self):
        for m in self.modules():
            if hasattr(m, 'reparameterize'):
                m.reparameterize()


class LayerNorm(nn.Module):
    r""" LayerNorm implementation used in ConvNeXt
    LayerNorm that supports two data formats: channels_last (default) or channels_first.
    The ordering of the dimensions in the inputs. channels_last corresponds to inputs with
    shape (batch_size, height, width, channels) while channels_first corresponds to inputs
    with shape (batch_size, channels, height, width).
    """

    def __init__(self, normalized_shape, eps=1e-6, data_format="channels_last", reshape_last_to_first=False):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(normalized_shape))
        self.bias = nn.Parameter(torch.zeros(normalized_shape))
        self.eps = eps
        self.data_format = data_format
        if self.data_format not in ["channels_last", "channels_first"]:
            raise NotImplementedError
        self.normalized_shape = (normalized_shape,)
        self.reshape_last_to_first = reshape_last_to_first

    def forward(self, x):
        if self.data_format == "channels_last":
            return F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
        elif self.data_format == "channels_first":
            u = x.mean(1, keepdim=True)
            s = (x - u).pow(2).mean(1, keepdim=True)
            x = (x - u) / torch.sqrt(s + self.eps)
            x = self.weight[:, None, None] * x + self.bias[:, None, None]
            return x


def update_weight(model_dict, weight_dict):
    idx, temp_dict = 0, {}
    for k, v in weight_dict.items():
        if k in model_dict.keys() and np.shape(model_dict[k]) == np.shape(v):
            temp_dict[k] = v
            idx += 1
    model_dict.update(temp_dict)
    print(f'loading weights... {idx}/{len(model_dict)} items')
    return model_dict
def unireplknet_a(ratio=1.0, pretrained='', **kwargs):
    # ratio is the channel-width scaling factor passed as the first yaml arg
    # (0.25 for YOLOv11n, 0.5 for s, 1.0 for m/l, 1.5 for x; see Section 5)
    model = UniRepLKNet(depths=UniRepLKNet_A_F_P_depths,
                        dims=tuple(int(dim * ratio) for dim in (40, 80, 160, 320)), **kwargs)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)))
    return model


def unireplknet_f(ratio=1.0, pretrained='', **kwargs):
    model = UniRepLKNet(depths=UniRepLKNet_A_F_P_depths,
                        dims=tuple(int(dim * ratio) for dim in (48, 96, 192, 384)), **kwargs)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)))
    return model


def unireplknet_p(ratio=1.0, pretrained='', **kwargs):
    model = UniRepLKNet(depths=UniRepLKNet_A_F_P_depths,
                        dims=tuple(int(dim * ratio) for dim in (64, 128, 256, 512)), **kwargs)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)))
    return model


def unireplknet_n(ratio=1.0, pretrained='', **kwargs):
    model = UniRepLKNet(depths=UniRepLKNet_N_depths,
                        dims=tuple(int(dim * ratio) for dim in (80, 160, 320, 640)), **kwargs)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)))
    return model


def unireplknet_t(ratio=1.0, pretrained='', **kwargs):
    model = UniRepLKNet(depths=UniRepLKNet_T_depths,
                        dims=tuple(int(dim * ratio) for dim in (80, 160, 320, 640)), **kwargs)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)))
    return model


def unireplknet_s(ratio=1.0, pretrained='', **kwargs):
    model = UniRepLKNet(depths=UniRepLKNet_S_B_L_XL_depths,
                        dims=tuple(int(dim * ratio) for dim in (96, 192, 384, 768)), **kwargs)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)))
    return model


def unireplknet_b(ratio=1.0, pretrained='', **kwargs):
    model = UniRepLKNet(depths=UniRepLKNet_S_B_L_XL_depths,
                        dims=tuple(int(dim * ratio) for dim in (128, 256, 512, 1024)), **kwargs)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)))
    return model


def unireplknet_l(ratio=1.0, pretrained='', **kwargs):
    model = UniRepLKNet(depths=UniRepLKNet_S_B_L_XL_depths,
                        dims=tuple(int(dim * ratio) for dim in (192, 384, 768, 1536)), **kwargs)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)))
    return model


def unireplknet_xl(ratio=1.0, pretrained='', **kwargs):
    model = UniRepLKNet(depths=UniRepLKNet_S_B_L_XL_depths,
                        dims=tuple(int(dim * ratio) for dim in (256, 512, 1024, 2048)), **kwargs)
    if pretrained:
        model.load_state_dict(update_weight(model.state_dict(), torch.load(pretrained)))
    return model
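A minimal smoke test (my own addition, not part of the original post) to confirm that the structural reparameterization is lossless: build a variant in its training-time multi-branch form, run it in eval mode, merge the branches with switch_to_deploy(), and check the outputs are unchanged up to floating-point error.

import torch

model = unireplknet_a().eval()   # eval mode so BN uses its running statistics
x = torch.randn(1, 3, 640, 640)
with torch.no_grad():
    feats_before = model(x)      # 'features' mode returns a list of 4 feature maps
    model.switch_to_deploy()     # merge dilated branches, BN, GRN bias and layer scale
    feats_after = model(x)
for a, b in zip(feats_before, feats_after):
    print(a.shape, (a - b).abs().max().item())  # max error should be tiny (~1e-5 or below)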
4. Step-by-Step Guide to Adding the UniRepLKNet Mechanism
4.1 Modification 1
The first step, as usual, is to create the files. Go to the ultralytics/nn folder and create a directory named 'Addmodules' (if you are using the files from my group, it already exists, so there is no need to create it)! Then create a new .py file inside it and simply copy-paste the core code into it.
4.2 Modification 2
In the second step, create a new .py file in that directory named '__init__.py' (again, it already exists if you use the group files), and import our module inside it, as shown in the figure below.
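Assuming the core code was saved as Addmodules/UniRepLKNet.py (that file name is my choice; match whatever you named your file), the __init__.py only needs to re-export it:

from .UniRepLKNet import *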
4.3 Modification 3
In the third step, open the file 'ultralytics/nn/tasks.py' to import and register our module (if you use the group files it is already imported, so you can skip straight to step 4)!
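The registration amounts to one import near the top of tasks.py; a sketch (the exact form depends on how your Addmodules package exports things):

from ultralytics.nn.Addmodules import (unireplknet_a, unireplknet_f, unireplknet_p,
                                       unireplknet_n, unireplknet_t, unireplknet_s,
                                       unireplknet_b, unireplknet_l, unireplknet_xl)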
From today on, all tutorials will follow this unified format, since I assume everyone is modifying the files from my group!!
4.4 Modification 4
Add the two lines of code shown in the figure!!!
4.5 Modification 5
Find the place around line 700 or so (see the figure for the exact location) and modify it as shown, adding the part in the red box. Note that there are no parentheses here, only the function names.
elif m in {...}:  # fill in the corresponding model names; the lines below are the same for every such module
    m = m(*args)
    c2 = m.width_list  # return the channel list
    backbone = True
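For this article, the placeholder set would be filled with the nine variants imported above, e.g.:

elif m in {unireplknet_a, unireplknet_f, unireplknet_p, unireplknet_n, unireplknet_t,
           unireplknet_s, unireplknet_b, unireplknet_l, unireplknet_xl}:
    m = m(*args)        # instantiate the backbone; args come from the yaml (the width factor)
    c2 = m.width_list   # list of output channel counts, one entry per feature scale
    backbone = True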
4.6 Modification 6
Both parts inside the red boxes below need to be changed.
if isinstance(c2, list):  # a backbone that returns multiple feature maps
    m_ = m
    m_.backbone = True
else:
    m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args)  # module
t = str(m)[8:-2].replace('__main__.', '')  # module type
m.np = sum(x.numel() for x in m_.parameters())  # number params
m_.i, m_.f, m_.type = i + 4 if backbone else i, f, t  # attach index, 'from' index, type; +4 because the backbone occupies five index slots
4.7 Modification 7
The following also needs to be modified; follow my version exactly.
Replace the original code with the code below.
if verbose:
    LOGGER.info(f'{i:>3}{str(f):>20}{n_:>3}{m.np:10.0f}  {t:<45}{str(args):<30}')  # print
save.extend(x % (i + 4 if backbone else i) for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist
layers.append(m_)
if i == 0:
    ch = []
if isinstance(c2, list):
    ch.extend(c2)
    if len(c2) != 5:
        ch.insert(0, 0)  # pad the channel list so later layer indices stay aligned with the five slots the backbone occupies
else:
    ch.append(c2)
4.8 Modification 8
Modification 8 is different from the previous ones: it changes part of the forward pass, and we have now left the parse_model method. You can check the line numbers in the figure; we are still in the same tasks.py file. There are several very similar forward methods in this area, so be careful not to pick the wrong one: it is the one around line 70!!! The code is provided below so you can copy-paste it directly; I will record a video for this part when I have time.
The code is as follows ->
def _predict_once(self, x, profile=False, visualize=False, embed=None):
    """
    Perform a forward pass through the network.

    Args:
        x (torch.Tensor): The input tensor to the model.
        profile (bool): Print the computation time of each layer if True, defaults to False.
        visualize (bool): Save the feature maps of the model if True, defaults to False.
        embed (list, optional): A list of feature vectors/embeddings to return.

    Returns:
        (torch.Tensor): The last output of the model.
    """
    y, dt, embeddings = [], [], []  # outputs
    for m in self.model:
        if m.f != -1:  # if not from previous layer
            x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers
        if profile:
            self._profile_one_layer(m, x, dt)
        if hasattr(m, 'backbone'):
            x = m(x)
            if len(x) != 5:  # pad the feature list to 5 entries (indices 0-4)
                x.insert(0, None)
            for index, i in enumerate(x):
                if index in self.save:
                    y.append(i)
                else:
                    y.append(None)
            x = x[-1]  # pass the last output on to the next layer
        else:
            x = m(x)  # run
            y.append(x if m.i in self.save else None)  # save output
        if visualize:
            feature_visualization(x, m.type, m.i, save_dir=visualize)
        if embed and m.i in embed:
            embeddings.append(nn.functional.adaptive_avg_pool2d(x, (1, 1)).squeeze(-1).squeeze(-1))  # flatten
            if m.i == max(embed):
                return torch.unbind(torch.cat(embeddings, 1), dim=0)
    return x
That completes the modifications, but there are a lot of details here. Be very careful not to replace more code than necessary and not to skip any step; either mistake will make the run fail, and the resulting errors are very hard to trace!!! The key detail to keep in mind is that the backbone module returns four feature maps at once, which is why the list is padded to five entries here and the layer indices are shifted by 4 in parse_model.
Attention!!! An extra modification!
Those who follow my work know that most of my modifications are identical across articles. This network needs one extra step: there is a parameter s, and you need to change the s shown below to 640!!! After that it runs perfectly!!
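The screenshot is not reproduced here; the parameter in question is the dummy-input size s that DetectionModel.__init__ in tasks.py uses to probe the strides. A sketch of the change (the surrounding line is paraphrased from the ultralytics source and varies slightly between versions; the only edit is the value of s). Since the backbone builds its width_list with a 640x640 dummy input, the stride probe should use 640 as well:

s = 640  # was: s = 256  # 2x min stride
m.stride = torch.tensor([s / x.shape[-2] for x in forward(torch.zeros(1, ch, s, s))])  # stride probe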
Solution for the FLOPs printout problem
Open the file 'ultralytics/utils/torch_utils.py' and modify it according to the figure below; otherwise the computation cost (GFLOPs) may fail to print.
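The figure is not reproduced here, so treat this as my assumption of what it shows: the usual fix is to have get_flops() in ultralytics/utils/torch_utils.py profile the model at the full image size instead of the small stride-sized probe it tries first, because this backbone expects 640x640 inputs:

# inside get_flops(), paraphrased from the ultralytics source (versions differ slightly):
im = torch.empty((1, p.shape[1], *imgsz), device=p.device)  # probe at the real image size
return thop.profile(deepcopy(model), inputs=[im], verbose=False)[0] / 1e9 * 2  # GFLOPs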
Note!!!
If you hit a shape-mismatch error during validation, you can fix the image size of the validation set. The method is as follows ->
Open the file ultralytics/models/yolo/detect/train.py. In the DetectionTrainer class, inside the build_dataset function, change the parameter rect=mode == 'val' to rect=False.
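A sketch of that one-argument change, based on the build_dataset method in the ultralytics source (only the rect argument changes):

def build_dataset(self, img_path, mode="train", batch=None):
    gs = max(int(de_parallel(self.model).stride.max() if self.model else 0), 32)
    return build_yolo_dataset(self.args, img_path, batch, self.data, mode=mode,
                              rect=False, stride=gs)  # was: rect=mode == "val"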
5. The UniRepLKNet yaml Files
Copy the following yaml file to run the model!!!
5.1 UniRepLKNet yaml file, version 1
Training info for this version: YOLO11-UniRepLKNet summary: 570 layers, 1,876,231 parameters, 1,876,215 gradients, 4.5 GFLOPs
Usage note: in [-1, 1, unireplknet_a, [0.25]] below, the 0.25 in the args position is the channel scaling factor: 0.25 for YOLOv11n, 0.5 for YOLOv11s, 1.0 for YOLOv11m and YOLOv11l, and 1.5 for YOLOv11x. Set it according to the YOLO scale you are training.
Supported variants: 'unireplknet_a', 'unireplknet_f', 'unireplknet_p', 'unireplknet_n', 'unireplknet_t', 'unireplknet_s', 'unireplknet_b', 'unireplknet_l', 'unireplknet_xl'
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# The 0.25 in the args of [-1, 1, unireplknet_a, [0.25]] below is the channel scaling factor:
# 0.25 for YOLOv11n, 0.5 for YOLOv11s, 1.0 for YOLOv11m/YOLOv11l, 1.5 for YOLOv11x. Set it to match the scale you train.
# Supported variants: __all__ = ['unireplknet_a', 'unireplknet_f', 'unireplknet_p', 'unireplknet_n', 'unireplknet_t', 'unireplknet_s', 'unireplknet_b', 'unireplknet_l', 'unireplknet_xl']

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, unireplknet_a, [0.25]] # 0-4 P1/2
  - [-1, 1, SPPF, [1024, 5]] # 5
  - [-1, 2, C2PSA, [1024]] # 6

# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 3], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 9

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 2], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 12 (P3/8-small)

  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 9], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 15 (P4/16-medium)

  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 6], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 18 (P5/32-large)

  - [[12, 15, 18], 1, Detect, [nc]] # Detect(P3, P4, P5)
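A quick sanity check (my own addition) that the yaml parses and the model builds before launching a full training run; adjust the path to wherever you saved the file:

from ultralytics import YOLO

model = YOLO('yolo11-UniRepLKNet.yaml')  # hypothetical file name for the yaml above
model.info()  # should roughly match the summary quoted in Section 5.1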
5.2 Training file
import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('ultralytics/cfg/models/11/yolo11-UniRepLKNet.yaml')  # the yaml from Section 5; adjust the path to where you saved it
    # model.load('yolov8n.pt')  # loading pretrain weights
    model.train(data=r'path/to/your/dataset.yaml',  # replace with the path to your dataset yaml file
                # for other tasks, open 'ultralytics/cfg/default.yaml' and change 'task' to detect, segment, classify or pose
                cache=False,
                imgsz=640,
                epochs=150,
                single_cls=False,  # whether this is single-class detection
                batch=4,
                close_mosaic=10,
                workers=0,
                device='0',
                optimizer='SGD',  # using SGD
                # resume='',  # set this to the path of last.pt if you want to resume training
                amp=False,  # turn AMP off if the training loss becomes NaN
                project='runs/train',
                name='exp',
                )
6. Successful Run Record
Below is a screenshot of a successful run: one epoch of training has completed (the image is too large to also capture the second epoch). After the changes there is a small glitch in the printout, but it does not affect any functionality; I will find time to fix it later.
7. Summary
This concludes the main content of this article. Here I would like to recommend my YOLOv11 column of improvements that actually raise accuracy. The column is newly opened with an average quality score of 98; going forward I will reproduce papers from the latest top conferences and also supplement some older improvement mechanisms. For now the column is free to read (temporarily, so follow early and don't miss out~). If this article helped you, subscribe to the column and follow for more updates~