RT-DETR改进策略【模型轻量化】| 替换骨干网络为MoblieNetV1,用于移动视觉应用的高效卷积神经网络
一、本文介绍
本文记录的是
基于MobileNet V1的RT-DETR轻量化改进方法研究
。
MobileNet V1
基于
深度可分离卷积
构建,其设计旨在满足移动和嵌入式视觉应用对
小型
、
低延迟
模型的需求,具有独特的模型收缩超参数来灵活调整模型大小与性能。本文将
MobileNet V1应
用到
RT-DETR
中,有望借助其高效的结构和特性,提升
RT-DETR
在计算资源有限环境下的性能表现,同时保持一定的精度水平。
| 模型 | 参数量 | 计算量 |
|---|---|---|
| rtdetr-l | 32.8M | 108.0GFLOPs |
| Improved | 22.0M | 71.1GFLOPs |
二、MoblieNet V1设计原理
2.1 出发点
在许多实际应用如机器人、自动驾驶和增强现实中,识别任务需要在计算资源有限的平台上及时完成。但以往为提高准确性而构建的更深更复杂的网络,在 尺寸和速度方面并不高效 。因此,需要构 建小型、低延迟 的模型来满足移动和嵌入式视觉应用的设计要求。
2.2 结构原理
-
深度可分离卷积(Depthwise Separable Convolution)
:这是
MobileNet模型的核心构建模块。它将标准卷积分解为深度卷积(depthwise convolution)和1×1卷积(pointwise convolution)。-
对于
MobileNet,深度卷积对每个输入通道应用单个滤波器,然后点卷积通过1×1卷积组合深度卷积的输出。标准卷积在一步中同时过滤和组合输入以生成新的输出,而深度可分离卷积将此过程分为两步, 从而大幅降低了计算量和模型尺寸 。
例如,一个标准卷积层输入为 D F × D F × M D_{F}×D_{F}×M D F × D F × M 特征图 F F F ,输出为 D F × D F × N D_{F}×D_{F}×N D F × D F × N 特征图 G G G ,其计算成本为 D K ⋅ D K ⋅ M ⋅ N ⋅ D F ⋅ D F D_{K}·D_{K}·M·N·D_{F}·D_{F} D K ⋅ D K ⋅ M ⋅ N ⋅ D F ⋅ D F ,而深度可分离卷积的计算成本为 D K ⋅ D K ⋅ M ⋅ D F ⋅ D F + M ⋅ N ⋅ D F ⋅ D F D_{K}·D_{K}·M·D_{F}·D_{F}+M·N·D_{F}·D_{F} D K ⋅ D K ⋅ M ⋅ D F ⋅ D F + M ⋅ N ⋅ D F ⋅ D F ,相比之下计算量大幅减少,如在实际应用中 MobileNet 使用 3×3 深度可分离卷积比标准卷积节省 8 到 9 倍的计算量且精度损失较小。
-
对于
-
网络结构
:除了第一层是全卷积外,MobileNet 结构基于
深度可分离卷积构建。所有层(除最终全连接层)后面都跟着批量归一化(batchnorm)和ReLU 非线性激活函数。下采样通过深度卷积中的步长卷积以及第一层来处理,最后在全连接层之前使用平均池化将空间分辨率降为 1。- 将深度卷积和点卷积视为单独的层,MobileNet 共有 28 层。在计算资源分配上,95%的计算时间花费在 1x1 卷积上,且 75%的参数也在 1x1 卷积中,几乎所有额外参数都在全连接层。
- 模型收缩超参数 :包括宽度乘数(width multiplier)和分辨率乘数(resolution multiplier)。宽度乘数 α \alpha α 用于均匀地使网络每层变窄,对于给定层和宽度乘数 α \alpha α ,输入通道数 M M M 变为 α M \alpha M α M ,输出通道数 N N N 变为 α N \alpha N α N ,其计算成本为 D K ⋅ D K ⋅ α M ⋅ D F ⋅ D F + α M ⋅ α N ⋅ D F ⋅ D F D_{K}·D_{K}·\alpha M·D_{F}·D_{F}+\alpha M·\alpha N·D_{F}·D_{F} D K ⋅ D K ⋅ α M ⋅ D F ⋅ D F + α M ⋅ α N ⋅ D F ⋅ D F ,能以大致 α 2 \alpha^{2} α 2 的比例二次减少计算成本和参数数量。分辨率乘数 ρ \rho ρ 应用于输入图像和每一层的内部表示,通过隐式设置输入分辨率来降低计算成本,计算成本为 D K ⋅ D K ⋅ α M ⋅ ρ D F ⋅ ρ D F + α M ⋅ α N ⋅ ρ D F ⋅ ρ D F D_{K}\cdot D_{K}\cdot \alpha M\cdot \rho D_{F}\cdot \rho D_{F}+\alpha M\cdot \alpha N\cdot \rho D_{F}\cdot \rho D_{F} D K ⋅ D K ⋅ α M ⋅ ρ D F ⋅ ρ D F + α M ⋅ α N ⋅ ρ D F ⋅ ρ D F ,能使计算成本降低 ρ 2 \rho^{2} ρ 2 。
2.3 优势
- 计算效率高 :通过深度可分离卷积以及模型收缩超参数的应用,在保证一定精度的前提下,大幅减少了计算量和模型参数。
- 灵活性强 :宽度乘数和分辨率乘数可以根据不同的应用需求和资源限制,灵活地调整模型的大小、计算成本和精度,以实现合理的权衡。
论文: https://arxiv.org/pdf/1704.04861
源码: https://github.com/Zehaos/MobileNet
三、MoblieNet V1的实现代码
MoblieNet V1
的实现代码如下:
"""A from-scratch implementation of original MobileNet paper ( for educational purposes ).
Paper
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications - https://arxiv.org/abs/1704.04861
author : shubham.aiengineer@gmail.com
"""
import torch
from torch import nn
__all__ = ['MobileNetV1']
class DepthwiseSepConvBlock(nn.Module):
def __init__(
self,
in_channels: int,
out_channels: int,
stride: int = 1,
use_relu6: bool = True,
):
"""Constructs Depthwise seperable with pointwise convolution with relu and batchnorm respectively.
Args:
in_channels (int): input channels for depthwise convolution
out_channels (int): output channels for pointwise convolution
stride (int, optional): stride paramemeter for depthwise convolution. Defaults to 1.
use_relu6 (bool, optional): whether to use standard ReLU or ReLU6 for depthwise separable convolution block. Defaults to True.
"""
super().__init__()
# Depthwise conv
self.depthwise_conv = nn.Conv2d(
in_channels,
in_channels,
(3, 3),
stride=stride,
padding=1,
groups=in_channels,
)
self.bn1 = nn.BatchNorm2d(in_channels)
self.relu1 = nn.ReLU6() if use_relu6 else nn.ReLU()
# Pointwise conv
self.pointwise_conv = nn.Conv2d(in_channels, out_channels, (1, 1))
self.bn2 = nn.BatchNorm2d(out_channels)
self.relu2 = nn.ReLU6() if use_relu6 else nn.ReLU()
def forward(self, x):
"""Perform forward pass."""
x = self.depthwise_conv(x)
x = self.bn1(x)
x = self.relu1(x)
x = self.pointwise_conv(x)
x = self.bn2(x)
x = self.relu2(x)
return x
class MobileNetV1(nn.Module):
def __init__(
self,
input_channel: int = 3,
depth_multiplier: float = 1.0,
use_relu6: bool = True,
):
"""Constructs MobileNetV1 architecture
Args:
n_classes (int, optional): count of output neuron in last layer. Defaults to 1000.
input_channel (int, optional): input channels in first conv layer. Defaults to 3.
depth_multiplier (float, optional): network width multiplier ( width scaling ). Suggested Values - 0.25, 0.5, 0.75, 1.. Defaults to 1.0.
use_relu6 (bool, optional): whether to use standard ReLU or ReLU6 for depthwise separable convolution block. Defaults to True.
"""
super().__init__()
# The configuration of MobileNetV1
# input channels, output channels, stride
config = (
(32, 64, 1),
(64, 128, 2),
(128, 128, 1),
(128, 256, 2),
(256, 256, 1),
(256, 512, 2),
(512, 512, 1),
(512, 512, 1),
(512, 512, 1),
(512, 512, 1),
(512, 512, 1),
(512, 1024, 2),
(1024, 1024, 1),
)
self.model = nn.Sequential(
nn.Conv2d(
input_channel, int(32 * depth_multiplier), (3, 3), stride=2, padding=1
)
)
# Adding depthwise block in the model from the config
for in_channels, out_channels, stride in config:
self.model.append(
DepthwiseSepConvBlock(
int(in_channels * depth_multiplier), # input channels
int(out_channels * depth_multiplier), # output channels
stride,
use_relu6=use_relu6,
)
)
self.index = [128, 256, 512, 1024]
self.width_list = [i.size(1) for i in self.forward(torch.randn(1, 3, 640, 640))]
def forward(self, x):
"""Perform forward pass."""
results = [None, None, None, None]
for model in self.model:
x = model(x)
if x.size(1) in self.index:
position = self.index.index(x.size(1)) # Find the position in the index list
results[position] = x
return results
if __name__ == "__main__":
# Generating Sample image
image_size = (1, 3, 224, 224)
image = torch.rand(*image_size)
# Model
mobilenet_v1 = MobileNetV1(depth_multiplier=1)
out = mobilenet_v1(image)
print(out)
四、修改步骤
4.1 修改一
① 在
ultralytics/nn/
目录下新建
AddModules
文件夹用于存放模块代码
② 在
AddModules
文件夹下新建
MobileNetV1.py
,将
第三节
中的代码粘贴到此处
4.2 修改二
在
AddModules
文件夹下新建
__init__.py
(已有则不用新建),在文件内导入模块:
from .MobileNetV1 import *
4.3 修改三
在
ultralytics/nn/modules/tasks.py
文件中,需要在两处位置添加各模块类名称。
① 首先:导入模块
② 其次:在
parse_model函数
的如下位置添加两行代码:
backbone = False
t=m
③ 接着,在此函数下添加如下代码:
elif m in {MobileNetV1}:
m = m(*args)
c2 = m.width_list
backbone = True
④ 然后,将下方红框内的代码全部替换:
if isinstance(c2, list):
is_backbone = True
m_ = m
m_.backbone = True
else:
m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args) # module
t = str(m)[8:-2].replace('__main__.', '') # module type
m.np = sum(x.numel() for x in m_.parameters()) # number params
m_.i, m_.f, m_.type = i + 4 if is_backbone else i, f, t # attach index, 'from' index, type
if verbose:
LOGGER.info(f'{i:>3}{str(f):>20}{n_:>3}{m.np:10.0f} {t:<45}{str(args):<30}') # print
save.extend(x % (i + 4 if is_backbone else i) for x in ([f] if isinstance(f, int) else f) if
x != -1) # append to savelist
layers.append(m_)
if i == 0:
ch = []
if isinstance(c2, list):
ch.extend(c2)
for _ in range(5 - len(ch)):
ch.insert(0, 0)
else:
ch.append(c2)
替换后如下:
⑤ 在此文件下找到
base_model
的
_predict_once
,并将其替换成如下代码。
def _predict_once(self, x, profile=False, visualize=False, embed=None):
y, dt, embeddings = [], [], [] # outputs
for m in self.model:
if m.f != -1: # if not from previous layer
x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
if profile:
self._profile_one_layer(m, x, dt)
if hasattr(m, 'backbone'):
x = m(x)
if len(x) != 5: # 0 - 5
x.insert(0, None)
for index, i in enumerate(x):
if index in self.save:
y.append(i)
else:
y.append(None)
x = x[-1] # 最后一个输出传给下一层
else:
x = m(x) # run
y.append(x if m.i in self.save else None) # save output
if visualize:
feature_visualization(x, m.type, m.i, save_dir=visualize)
if embed and m.i in embed:
embeddings.append(nn.functional.adaptive_avg_pool2d(x, (1, 1)).squeeze(-1).squeeze(-1)) # flatten
if m.i == max(embed):
return torch.unbind(torch.cat(embeddings, 1), dim=0)
return x
至此就修改完成了,可以配置模型开始训练了
五、yaml模型文件
5.1 模型改进⭐
在代码配置完成后,配置模型的YAML文件。
此处以
ultralytics/cfg/models/rt-detr/rtdetr-l.yaml
为例,在同目录下创建一个用于自己数据集训练的模型文件
rtdetr-MobileNetV1.yaml
。
将
rtdetr-l.yaml
中的内容复制到
rtdetr-MobileNetV1.yaml
文件下,修改
nc
数量等于自己数据中目标的数量。
📌 模型的修改方法是将
骨干网络
替换成
MobileNetV1
。
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr
# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
# [depth, width, max_channels]
l: [1.00, 1.00, 1024]
backbone:
# [from, repeats, module, args]
- [-1, 1, MobileNetV1, []] # 4
head:
- [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 5 input_proj.2
- [-1, 1, AIFI, [1024, 8]] # 6
- [-1, 1, Conv, [256, 1, 1]] # 7, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 8
- [3, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 9 input_proj.1
- [[-2, -1], 1, Concat, [1]] # 10
- [-1, 3, RepC3, [256]] # 11, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 12, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, 'nearest']] # 13
- [2, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 14 input_proj.0
- [[-2, -1], 1, Concat, [1]] # 15 cat backbone P4
- [-1, 3, RepC3, [256]] # X3 (16), fpn_blocks.1
- [-1, 1, Conv, [256, 3, 2]] # 17, downsample_convs.0
- [[-1, 12], 1, Concat, [1]] # 18 cat Y4
- [-1, 3, RepC3, [256]] # F4 (19), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 20, downsample_convs.1
- [[-1, 7], 1, Concat, [1]] # 21 cat Y5
- [-1, 3, RepC3, [256]] # F5 (22), pan_blocks.1
- [[16, 19, 22], 1, RTDETRDecoder, [nc]] # Detect(P3, P4, P5)
六、成功运行结果
分别打印网络模型可以看到
MobileNetV1模块
已经加入到模型中,并可以进行训练了。
rtdetr-MobileNetV1 :
rtdetr-MobileNetV1 summary: 464 layers, 22,013,859 parameters, 22,013,859 gradients, 71.1 GFLOPs
from n params module arguments
0 -1 1 3217856 MobileNetV1 []
1 -1 1 262656 ultralytics.nn.modules.conv.Conv [1024, 256, 1, 1, None, 1, 1, False]
2 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
3 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
4 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
5 3 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
6 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
7 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
8 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
9 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
10 2 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1, None, 1, 1, False]
11 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
12 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
13 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
14 [-1, 12] 1 0 ultralytics.nn.modules.conv.Concat [1]
15 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
16 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
17 [-1, 7] 1 0 ultralytics.nn.modules.conv.Concat [1]
18 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
19 [16, 19, 22] 1 7303907 ultralytics.nn.modules.head.RTDETRDecoder [1, [256, 256, 256]]
rtdetr-MobileNetV1 summary: 464 layers, 22,013,859 parameters, 22,013,859 gradients, 71.1 GFLOPs