RT-DETR Improvement Strategy [Convolution Layers] | Introducing Dynamic Snake Convolution (ICCV 2023) to Improve ResNetLayer
1. Introduction
This post documents a study on improving RT-DETR object detection with DSConv. In some specialized detection tasks, thin tubular structures occupy only a small fraction of the image and are easily disturbed by complex backgrounds, so the model struggles to distinguish subtle target variations. Ordinary deformable convolution can adapt to the geometric deformation of targets, but on thin tubular structures the model learns geometric changes completely freely, so the receptive region easily drifts away from the target and the network has trouble focusing efficiently on such structures.
The dynamic snake convolution introduced here adaptively focuses on the slender, curved local features of tubular structures and strengthens the perception of geometric structure, so the improved model can better capture key features.
2. How DSConv Works
Dynamic Snake Convolution based on Topological Geometric Constraints for Tubular Structure Segmentation
The DSConv (Dynamic Snake Convolution) module is designed mainly to handle tubular-structure segmentation better, addressing the shortcomings of standard convolution on thin tubular structures.
2.1 Principle
- Given the standard 2D convolution coordinates $K$ with central coordinate $K_i = (x_i, y_i)$, a $3 \times 3$ kernel $K$ (dilation 1) is expressed as $K = \{(x - 1, y - 1), (x - 1, y), \cdots, (x + 1, y + 1)\}$.
- To make the kernel focus better on the target's complex geometric features, a deformation offset $\Delta$ is introduced. However, to keep the receptive field from drifting off thin tubular structures, an iterative strategy is used: the observation position for each target is selected sequentially, which keeps attention continuous and prevents the receptive field from spreading too far when deformation offsets are large.
- In DSConv, the standard kernel is straightened along the x-axis or the y-axis. Taking a kernel of size 9 as an example, in the x-axis direction each grid position is written as $K_{i \pm c} = (x_{i \pm c}, y_{i \pm c})$, where $c = \{0, 1, 2, 3, 4\}$ denotes the horizontal distance to the central grid. Selecting each grid position $K_{i \pm c}$ in kernel $K$ is a cumulative process: starting from the center $K_i$, each position farther from the center depends on the position of the previous grid. $K_{i+1}$ adds an offset $\Delta = \{\delta \mid \delta \in [-1, 1]\}$ relative to $K_i$, and the offsets are accumulated so that the kernel conforms to a linear morphological structure. Along the x-axis this gives:

$$K_{i \pm c} = \begin{cases} (x_{i+c}, y_{i+c}) = \left(x_i + c,\; y_i + \sum_{i}^{i+c} \Delta y\right) \\ (x_{i-c}, y_{i-c}) = \left(x_i - c,\; y_i + \sum_{i-c}^{i} \Delta y\right) \end{cases}$$
- The formula along the y-axis is analogous. Because the offset $\Delta$ is usually fractional, bilinear interpolation is applied: $K = \sum_{K'} B(K', K) \cdot K'$, where $K$ denotes a fractional position, $K'$ enumerates all integer spatial positions, and $B$ is the bilinear interpolation kernel, separable into two one-dimensional kernels: $B(K, K') = b(K_x, K_x') \cdot b(K_y, K_y')$.
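To make the accumulation concrete, here is a minimal, self-contained sketch (illustrative only, not code from the paper's repository) of how the x-axis sampling positions of a size-9 snake kernel follow from per-position offsets:

import torch

# Minimal sketch of the x-axis formula above: a size-9 kernel (center index 4)
# with one offset per position; delta_y stands in for the learned offsets.
k, center = 9, 4
x_i, y_i = 0.0, 0.0                       # center coordinate K_i
delta_y = torch.empty(k).uniform_(-1, 1)  # offsets constrained to [-1, 1]

x = x_i + torch.arange(k, dtype=torch.float32) - center  # x_i - 4 ... x_i + 4
y = torch.zeros(k)                        # accumulated deviation from the center row
for c in range(1, center + 1):            # accumulate outward from the center
    y[center + c] = y[center + c - 1] + delta_y[center + c]
    y[center - c] = y[center - c + 1] + delta_y[center - c]
print(torch.stack([x, y_i + y], dim=1))   # the 9 sampling positions K_{i±c}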
2.2 Advantages
- Better adaptation to tubular structures: built on a dynamic structure, DSConv adapts to slender tubular structures and therefore perceives key features better.
- Stronger perception of geometric structure: by adaptively focusing on the slender, curved local features of tubular structures, DSConv strengthens geometric perception and helps the model capture tubular features more accurately.
- No drift of the receptive region: unlike deformable convolution, DSConv introduces constraints that keep the receptive region from drifting off thin tubular structures, so attention stays concentrated on the target.
Paper: https://arxiv.org/abs/2307.08388
Code: https://github.com/YaoleiQi/DSCNet
3. Implementation of DySnakeConv
The implementation of the DySnakeConv module is as follows:
import torch
import torch.nn as nn
import torch.nn.functional as F
def autopad(k, p=None, d=1): # kernel, padding, dilation
"""Pad to 'same' shape outputs."""
if d > 1:
k = d * (k - 1) + 1 if isinstance(k, int) else [d * (x - 1) + 1 for x in k] # actual kernel-size
if p is None:
p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad
return p
class Conv(nn.Module):
"""Standard convolution with args(ch_in, ch_out, kernel, stride, padding, groups, dilation, activation)."""
default_act = nn.SiLU() # default activation
def __init__(self, c1, c2, k=1, s=1, p=None, g=1, d=1, act=True):
"""Initialize Conv layer with given arguments including activation."""
super().__init__()
self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p, d), groups=g, dilation=d, bias=False)
self.bn = nn.BatchNorm2d(c2)
self.act = self.default_act if act is True else act if isinstance(act, nn.Module) else nn.Identity()
def forward(self, x):
"""Apply convolution, batch normalization and activation to input tensor."""
return self.act(self.bn(self.conv(x)))
def forward_fuse(self, x):
"""Perform transposed convolution of 2D data."""
return self.act(self.conv(x))
class DySnakeConv(nn.Module):
def __init__(self, inc, ouc, k=3) -> None:
super().__init__()
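        # split the output channels among the three branches; rounding c_ down to a
        # multiple of 16 keeps GroupNorm(out_ch // 4, out_ch) valid inside DSConv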
c_ = ouc // 3 // 16 * 16
        self.conv_0 = Conv(inc, ouc - 2 * c_, k)
self.conv_x = DSConv(inc, c_, 0, k)
self.conv_y = DSConv(inc, c_, 1, k)
def forward(self, x):
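        # three parallel branches: a standard conv plus x-axis and y-axis snake
        # convolutions, concatenated along the channel dimension (ouc channels total)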
return torch.cat([self.conv_0(x), self.conv_x(x), self.conv_y(x)], dim=1)
class DSConv(nn.Module):
def __init__(self, in_ch, out_ch, morph, kernel_size=3, if_offset=True, extend_scope=1):
"""
The Dynamic Snake Convolution
:param in_ch: input channel
:param out_ch: output channel
:param kernel_size: the size of kernel
:param extend_scope: the range to expand (default 1 for this method)
:param morph: the morphology of the convolution kernel is mainly divided into two types
along the x-axis (0) and the y-axis (1) (see the paper for details)
:param if_offset: whether deformation is required, if it is False, it is the standard convolution kernel
"""
super(DSConv, self).__init__()
# use the <offset_conv> to learn the deformable offset
self.offset_conv = nn.Conv2d(in_ch, 2 * kernel_size, 3, padding=1)
self.bn = nn.BatchNorm2d(2 * kernel_size)
self.kernel_size = kernel_size
# two types of the DSConv (along x-axis and y-axis)
self.dsc_conv_x = nn.Conv2d(
in_ch,
out_ch,
kernel_size=(kernel_size, 1),
stride=(kernel_size, 1),
padding=0,
)
self.dsc_conv_y = nn.Conv2d(
in_ch,
out_ch,
kernel_size=(1, kernel_size),
stride=(1, kernel_size),
padding=0,
)
self.gn = nn.GroupNorm(out_ch // 4, out_ch)
self.act = Conv.default_act
self.extend_scope = extend_scope
self.morph = morph
self.if_offset = if_offset
def forward(self, f):
offset = self.offset_conv(f)
offset = self.bn(offset)
# We need a range of deformation between -1 and 1 to mimic the snake's swing
offset = torch.tanh(offset)
input_shape = f.shape
dsc = DSC(input_shape, self.kernel_size, self.extend_scope, self.morph)
deformed_feature = dsc.deform_conv(f, offset, self.if_offset)
if self.morph == 0:
x = self.dsc_conv_x(deformed_feature.type(f.dtype))
x = self.gn(x)
x = self.act(x)
return x
else:
x = self.dsc_conv_y(deformed_feature.type(f.dtype))
x = self.gn(x)
x = self.act(x)
return x
# Core code, for ease of understanding, we mark the dimensions of input and output next to the code
class DSC(object):
def __init__(self, input_shape, kernel_size, extend_scope, morph):
self.num_points = kernel_size
self.width = input_shape[2]
self.height = input_shape[3]
self.morph = morph
self.extend_scope = extend_scope # offset (-1 ~ 1) * extend_scope
# define feature map shape
"""
B: Batch size C: Channel W: Width H: Height
"""
self.num_batch = input_shape[0]
self.num_channels = input_shape[1]
"""
input: offset [B,2*K,W,H] K: Kernel size (2*K: 2D image, deformation contains <x_offset> and <y_offset>)
output_x: [B,1,W,K*H] coordinate map
output_y: [B,1,K*W,H] coordinate map
"""
def _coordinate_map_3D(self, offset, if_offset):
device = offset.device
        # split the 2*K offset channels into K y-offsets and K x-offsets
y_offset, x_offset = torch.split(offset, self.num_points, dim=1)
y_center = torch.arange(0, self.width).repeat([self.height])
y_center = y_center.reshape(self.height, self.width)
y_center = y_center.permute(1, 0)
y_center = y_center.reshape([-1, self.width, self.height])
y_center = y_center.repeat([self.num_points, 1, 1]).float()
y_center = y_center.unsqueeze(0)
x_center = torch.arange(0, self.height).repeat([self.width])
x_center = x_center.reshape(self.width, self.height)
x_center = x_center.permute(0, 1)
x_center = x_center.reshape([-1, self.width, self.height])
x_center = x_center.repeat([self.num_points, 1, 1]).float()
x_center = x_center.unsqueeze(0)
if self.morph == 0:
"""
Initialize the kernel and flatten the kernel
y: only need 0
x: -num_points//2 ~ num_points//2 (Determined by the kernel size)
"""
y = torch.linspace(0, 0, 1)
x = torch.linspace(
-int(self.num_points // 2),
int(self.num_points // 2),
int(self.num_points),
)
y, x = torch.meshgrid(y, x, indexing = 'ij')
y_spread = y.reshape(-1, 1)
x_spread = x.reshape(-1, 1)
y_grid = y_spread.repeat([1, self.width * self.height])
y_grid = y_grid.reshape([self.num_points, self.width, self.height])
            y_grid = y_grid.unsqueeze(0)  # [1, K, W, H]
x_grid = x_spread.repeat([1, self.width * self.height])
x_grid = x_grid.reshape([self.num_points, self.width, self.height])
            x_grid = x_grid.unsqueeze(0)  # [1, K, W, H]
y_new = y_center + y_grid
x_new = x_center + x_grid
y_new = y_new.repeat(self.num_batch, 1, 1, 1).to(device)
x_new = x_new.repeat(self.num_batch, 1, 1, 1).to(device)
y_offset_new = y_offset.detach().clone()
if if_offset:
y_offset = y_offset.permute(1, 0, 2, 3)
y_offset_new = y_offset_new.permute(1, 0, 2, 3)
center = int(self.num_points // 2)
# The center position remains unchanged and the rest of the positions begin to swing
# This part is quite simple. The main idea is that "offset is an iterative process"
y_offset_new[center] = 0
for index in range(1, center):
y_offset_new[center + index] = (y_offset_new[center + index - 1] + y_offset[center + index])
y_offset_new[center - index] = (y_offset_new[center - index + 1] + y_offset[center - index])
y_offset_new = y_offset_new.permute(1, 0, 2, 3).to(device)
y_new = y_new.add(y_offset_new.mul(self.extend_scope))
y_new = y_new.reshape(
[self.num_batch, self.num_points, 1, self.width, self.height])
y_new = y_new.permute(0, 3, 1, 4, 2)
y_new = y_new.reshape([
self.num_batch, self.num_points * self.width, 1 * self.height
])
x_new = x_new.reshape(
[self.num_batch, self.num_points, 1, self.width, self.height])
x_new = x_new.permute(0, 3, 1, 4, 2)
x_new = x_new.reshape([
self.num_batch, self.num_points * self.width, 1 * self.height
])
return y_new, x_new
else:
"""
Initialize the kernel and flatten the kernel
y: -num_points//2 ~ num_points//2 (Determined by the kernel size)
x: only need 0
"""
y = torch.linspace(
-int(self.num_points // 2),
int(self.num_points // 2),
int(self.num_points),
)
x = torch.linspace(0, 0, 1)
y, x = torch.meshgrid(y, x, indexing = 'ij')
y_spread = y.reshape(-1, 1)
x_spread = x.reshape(-1, 1)
y_grid = y_spread.repeat([1, self.width * self.height])
y_grid = y_grid.reshape([self.num_points, self.width, self.height])
y_grid = y_grid.unsqueeze(0)
x_grid = x_spread.repeat([1, self.width * self.height])
x_grid = x_grid.reshape([self.num_points, self.width, self.height])
x_grid = x_grid.unsqueeze(0)
y_new = y_center + y_grid
x_new = x_center + x_grid
y_new = y_new.repeat(self.num_batch, 1, 1, 1)
x_new = x_new.repeat(self.num_batch, 1, 1, 1)
y_new = y_new.to(device)
x_new = x_new.to(device)
x_offset_new = x_offset.detach().clone()
if if_offset:
x_offset = x_offset.permute(1, 0, 2, 3)
x_offset_new = x_offset_new.permute(1, 0, 2, 3)
center = int(self.num_points // 2)
x_offset_new[center] = 0
for index in range(1, center):
x_offset_new[center + index] = (x_offset_new[center + index - 1] + x_offset[center + index])
x_offset_new[center - index] = (x_offset_new[center - index + 1] + x_offset[center - index])
x_offset_new = x_offset_new.permute(1, 0, 2, 3).to(device)
x_new = x_new.add(x_offset_new.mul(self.extend_scope))
y_new = y_new.reshape(
[self.num_batch, 1, self.num_points, self.width, self.height])
y_new = y_new.permute(0, 3, 1, 4, 2)
y_new = y_new.reshape([
self.num_batch, 1 * self.width, self.num_points * self.height
])
x_new = x_new.reshape(
[self.num_batch, 1, self.num_points, self.width, self.height])
x_new = x_new.permute(0, 3, 1, 4, 2)
x_new = x_new.reshape([
self.num_batch, 1 * self.width, self.num_points * self.height
])
return y_new, x_new
"""
input: input feature map [N,C,D,W,H];coordinate map [N,K*D,K*W,K*H]
output: [N,1,K*D,K*W,K*H] deformed feature map
"""
def _bilinear_interpolate_3D(self, input_feature, y, x):
device = input_feature.device
y = y.reshape([-1]).float()
x = x.reshape([-1]).float()
zero = torch.zeros([]).int()
max_y = self.width - 1
max_x = self.height - 1
        # find the 4 neighbouring integer grid locations
y0 = torch.floor(y).int()
y1 = y0 + 1
x0 = torch.floor(x).int()
x1 = x0 + 1
# clip out coordinates exceeding feature map volume
y0 = torch.clamp(y0, zero, max_y)
y1 = torch.clamp(y1, zero, max_y)
x0 = torch.clamp(x0, zero, max_x)
x1 = torch.clamp(x1, zero, max_x)
input_feature_flat = input_feature.flatten()
input_feature_flat = input_feature_flat.reshape(
self.num_batch, self.num_channels, self.width, self.height)
input_feature_flat = input_feature_flat.permute(0, 2, 3, 1)
input_feature_flat = input_feature_flat.reshape(-1, self.num_channels)
dimension = self.height * self.width
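        # 'base' shifts each batch element's indices into its own W*H slice of the
        # flattened feature map, so the gathered values come from the correct sample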
base = torch.arange(self.num_batch) * dimension
base = base.reshape([-1, 1]).float()
repeat = torch.ones([self.num_points * self.width * self.height
]).unsqueeze(0)
repeat = repeat.float()
base = torch.matmul(base, repeat)
base = base.reshape([-1])
base = base.to(device)
base_y0 = base + y0 * self.height
base_y1 = base + y1 * self.height
# top rectangle of the neighbourhood volume
index_a0 = base_y0 - base + x0
index_c0 = base_y0 - base + x1
# bottom rectangle of the neighbourhood volume
index_a1 = base_y1 - base + x0
index_c1 = base_y1 - base + x1
        # gather the 4 neighbouring grid values
value_a0 = input_feature_flat[index_a0.type(torch.int64)].to(device)
value_c0 = input_feature_flat[index_c0.type(torch.int64)].to(device)
value_a1 = input_feature_flat[index_a1.type(torch.int64)].to(device)
value_c1 = input_feature_flat[index_c1.type(torch.int64)].to(device)
        # recompute the neighbour coordinates for the interpolation weights
y0 = torch.floor(y).int()
y1 = y0 + 1
x0 = torch.floor(x).int()
x1 = x0 + 1
# clip out coordinates exceeding feature map volume
y0 = torch.clamp(y0, zero, max_y + 1)
y1 = torch.clamp(y1, zero, max_y + 1)
x0 = torch.clamp(x0, zero, max_x + 1)
x1 = torch.clamp(x1, zero, max_x + 1)
x0_float = x0.float()
x1_float = x1.float()
y0_float = y0.float()
y1_float = y1.float()
vol_a0 = ((y1_float - y) * (x1_float - x)).unsqueeze(-1).to(device)
vol_c0 = ((y1_float - y) * (x - x0_float)).unsqueeze(-1).to(device)
vol_a1 = ((y - y0_float) * (x1_float - x)).unsqueeze(-1).to(device)
vol_c1 = ((y - y0_float) * (x - x0_float)).unsqueeze(-1).to(device)
outputs = (value_a0 * vol_a0 + value_c0 * vol_c0 + value_a1 * vol_a1 +
value_c1 * vol_c1)
if self.morph == 0:
outputs = outputs.reshape([
self.num_batch,
self.num_points * self.width,
1 * self.height,
self.num_channels,
])
outputs = outputs.permute(0, 3, 1, 2)
else:
outputs = outputs.reshape([
self.num_batch,
1 * self.width,
self.num_points * self.height,
self.num_channels,
])
outputs = outputs.permute(0, 3, 1, 2)
return outputs
def deform_conv(self, input, offset, if_offset):
y, x = self._coordinate_map_3D(offset, if_offset)
deformed_feature = self._bilinear_interpolate_3D(input, y, x)
return deformed_feature
class ResNetBlock(nn.Module):
"""ResNet block with standard convolution layers."""
def __init__(self, c1, c2, s=1, e=4):
"""Initialize convolution with given parameters."""
super().__init__()
c3 = e * c2
self.cv1 = Conv(c1, c2, k=1, s=1, act=True)
self.cv2 = Conv(c2, c2, k=3, s=s, p=1, act=True)
self.cv3 = Conv(c2, c3, k=1, act=False)
self.cv4 = DySnakeConv(c2, c2)
self.shortcut = nn.Sequential(Conv(c1, c3, k=1, s=s, act=False)) if s != 1 or c1 != c3 else nn.Identity()
def forward(self, x):
"""Forward pass through the ResNet block."""
return F.relu(self.cv3(self.cv4(self.cv2(self.cv1(x)))) + self.shortcut(x))
class ResNetLayer_DySnakeConv(nn.Module):
"""ResNet layer with multiple ResNet blocks."""
def __init__(self, c1, c2, s=1, is_first=False, n=1, e=4):
"""Initializes the ResNetLayer given arguments."""
super().__init__()
self.is_first = is_first
if self.is_first:
self.layer = nn.Sequential(
Conv(c1, c2, k=7, s=2, p=3, act=True), nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
)
else:
blocks = [ResNetBlock(c1, c2, s, e=e)]
blocks.extend([ResNetBlock(e * c2, c2, 1, e=e) for _ in range(n - 1)])
self.layer = nn.Sequential(*blocks)
def forward(self, x):
"""Forward pass through the ResNet layer."""
return self.layer(x)
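A quick shape check (appended only for local testing; the sizes are arbitrary) confirms that the modules run:

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    m = DySnakeConv(64, 128)
    print(m(x).shape)      # expected: torch.Size([1, 128, 32, 32])
    layer = ResNetLayer_DySnakeConv(64, 64, s=1, is_first=False, n=3)
    print(layer(x).shape)  # expected: torch.Size([1, 256, 32, 32])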
4. Improvement Points
4.1 Improvement 1
Module improvement method 1️⃣: add the DySnakeConv module directly, using the implementation from Section 3 as-is.
Note ❗: the module name to declare in Sections 5.2 and 5.3 is DySnakeConv.
4.2 Improvement 2 ⭐
Module improvement method 2️⃣: a ResNetLayer based on the DySnakeConv module.
The second improvement modifies the ResNetLayer module in RT-DETR. The modified code is given below.
First, use the DySnakeConv module to improve the ResNetBlock module:
class ResNetBlock(nn.Module):
"""ResNet block with standard convolution layers."""
def __init__(self, c1, c2, s=1, e=4):
"""Initialize convolution with given parameters."""
super().__init__()
c3 = e * c2
self.cv1 = Conv(c1, c2, k=1, s=1, act=True)
self.cv2 = Conv(c2, c2, k=3, s=s, p=1, act=True)
self.cv3 = Conv(c2, c3, k=1, act=False)
self.cv4 = DySnakeConv(c2, c2)
self.shortcut = nn.Sequential(Conv(c1, c3, k=1, s=s, act=False)) if s != 1 or c1 != c3 else nn.Identity()
def forward(self, x):
"""Forward pass through the ResNet block."""
return F.relu(self.cv3(self.cv4(self.cv2(self.cv1(x)))) + self.shortcut(x))
Then add the following code, which renames ResNetLayer to ResNetLayer_DySnakeConv:
class ResNetLayer_DySnakeConv(nn.Module):
"""ResNet layer with multiple ResNet blocks."""
def __init__(self, c1, c2, s=1, is_first=False, n=1, e=4):
"""Initializes the ResNetLayer given arguments."""
super().__init__()
self.is_first = is_first
if self.is_first:
self.layer = nn.Sequential(
Conv(c1, c2, k=7, s=2, p=3, act=True), nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
)
else:
blocks = [ResNetBlock(c1, c2, s, e=e)]
blocks.extend([ResNetBlock(e * c2, c2, 1, e=e) for _ in range(n - 1)])
self.layer = nn.Sequential(*blocks)
def forward(self, x):
"""Forward pass through the ResNet layer."""
return self.layer(x)
Note ❗: the module name to declare in Sections 5.2 and 5.3 is ResNetLayer_DySnakeConv.
5. Integration Steps
5.1 Step 1
① Create an AddModules folder under the ultralytics/nn/ directory to hold the module code.
② Create DySnakeConv.py inside the AddModules folder and paste the code from Section 3 into it.
5.2 Step 2
Create __init__.py inside the AddModules folder (skip this if it already exists) and import the module in it:
from .DySnakeConv import *
5.3 Step 3
In the ultralytics/nn/tasks.py file, add each module's class name at the designated places.
First, import the modules.
Then register the DySnakeConv and ResNetLayer_DySnakeConv modules in the parse_model function.
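As a sketch of the channel bookkeeping parse_model needs, the hypothetical helper below mirrors the two branches you would add to its if/elif chain (resolve_channels is illustrative only; the exact structure of parse_model varies across ultralytics versions):

def resolve_channels(m, f, args, ch):
    """Sketch of the parse_model channel logic for the two new modules."""
    if m is DySnakeConv:                    # handled like Conv: YAML args = [c2, k, ...]
        c1, c2 = ch[f], args[0]
        args = [c1, c2, *args[1:]]
    elif m is ResNetLayer_DySnakeConv:      # YAML args = [c1, c2, s, is_first, n]
        c2 = args[1] if args[3] else args[1] * 4  # e=4 expansion except the stem
    else:
        c2 = ch[f]                          # fall through for other modules
    return c2, args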
6. YAML Model Files
6.1 Improved Model, Version 1
With the code configured, set up the model's YAML file.
Taking ultralytics/cfg/models/rt-detr/rtdetr-l.yaml as an example, create a model file rtdetr-l-DySnakeConv.yaml in the same directory for training on your own dataset.
Copy the contents of rtdetr-l.yaml into rtdetr-l-DySnakeConv.yaml and set nc to the number of target classes in your dataset.
In the backbone, replace the HGBlock modules with DySnakeConv modules.
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr
# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
# [depth, width, max_channels]
l: [1.00, 1.00, 1024]
backbone:
# [from, repeats, module, args]
- [-1, 1, HGStem, [32, 48]] # 0-P2/4
- [-1, 6, DySnakeConv, [48]] # stage 1
- [-1, 1, DWConv, [128, 3, 2, 1, False]] # 2-P3/8
- [-1, 6, DySnakeConv, [128]] # stage 2
- [-1, 1, DWConv, [512, 3, 2, 1, False]] # 4-P4/16
  - [-1, 6, DySnakeConv, [512]]
- [-1, 6, DySnakeConv, [512]]
- [-1, 6, DySnakeConv, [512]] # stage 3
- [-1, 1, DWConv, [1024, 3, 2, 1, False]] # 8-P5/32
- [-1, 6, DySnakeConv, [1024]] # stage 4
head:
- [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 10 input_proj.2
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 12, Y5, lateral_convs.0
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [7, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 14 input_proj.1
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256]] # 16, fpn_blocks.0
- [-1, 1, Conv, [256, 1, 1]] # 17, Y4, lateral_convs.1
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [3, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 19 input_proj.0
- [[-2, -1], 1, Concat, [1]] # cat backbone P4
- [-1, 3, RepC3, [256]] # X3 (21), fpn_blocks.1
- [-1, 1, Conv, [256, 3, 2]] # 22, downsample_convs.0
- [[-1, 17], 1, Concat, [1]] # cat Y4
- [-1, 3, RepC3, [256]] # F4 (24), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 25, downsample_convs.1
- [[-1, 12], 1, Concat, [1]] # cat Y5
- [-1, 3, RepC3, [256]] # F5 (27), pan_blocks.1
- [[21, 24, 27], 1, RTDETRDecoder, [nc]] # Detect(P3, P4, P5)
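With the YAML in place, the model can be built to confirm that it parses correctly. A minimal check, assuming the standard ultralytics RTDETR API is available in this fork:

from ultralytics import RTDETR

# Build the improved model from the new YAML and print its layer table
model = RTDETR("ultralytics/cfg/models/rt-detr/rtdetr-l-DySnakeConv.yaml")
model.info()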
6.2 Improved Model, Version 2 ⭐
Similarly, taking ultralytics/cfg/models/rt-detr/rtdetr-resnet50.yaml as an example, create a model file rtdetr-ResNetLayer_DySnakeConv.yaml in the same directory for training on your own dataset.
Copy the contents of rtdetr-resnet50.yaml into rtdetr-ResNetLayer_DySnakeConv.yaml and set nc to the number of target classes in your dataset.
📌 The modification replaces the ResNetLayer modules in the backbone with ResNetLayer_DySnakeConv modules.
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-ResNet50 object detection model with P3-P5 outputs.
# Parameters
nc: 1 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
# [depth, width, max_channels]
l: [1.00, 1.00, 1024]
backbone:
# [from, repeats, module, args]
- [-1, 1, ResNetLayer_DySnakeConv, [3, 64, 1, True, 1]] # 0
- [-1, 1, ResNetLayer_DySnakeConv, [64, 64, 1, False, 3]] # 1
- [-1, 1, ResNetLayer_DySnakeConv, [256, 128, 2, False, 4]] # 2
- [-1, 1, ResNetLayer_DySnakeConv, [512, 256, 2, False, 6]] # 3
- [-1, 1, ResNetLayer_DySnakeConv, [1024, 512, 2, False, 3]] # 4
head:
- [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 5
- [-1, 1, AIFI, [1024, 8]]
- [-1, 1, Conv, [256, 1, 1]] # 7
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [3, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 9
- [[-2, -1], 1, Concat, [1]]
- [-1, 3, RepC3, [256]] # 11
- [-1, 1, Conv, [256, 1, 1]] # 12
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [2, 1, Conv, [256, 1, 1, None, 1, 1, False]] # 14
- [[-2, -1], 1, Concat, [1]] # cat backbone P4
- [-1, 3, RepC3, [256]] # X3 (16), fpn_blocks.1
- [-1, 1, Conv, [256, 3, 2]] # 17, downsample_convs.0
- [[-1, 12], 1, Concat, [1]] # cat Y4
- [-1, 3, RepC3, [256]] # F4 (19), pan_blocks.0
- [-1, 1, Conv, [256, 3, 2]] # 20, downsample_convs.1
- [[-1, 7], 1, Concat, [1]] # cat Y5
- [-1, 3, RepC3, [256]] # F5 (22), pan_blocks.1
- [[16, 19, 22], 1, RTDETRDecoder, [nc]] # Detect(P3, P4, P5)
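Training the second variant works the same way. A minimal sketch, where "your_data.yaml" is a placeholder for your own dataset config:

from ultralytics import RTDETR

# Train the ResNet-based variant; "your_data.yaml" is a placeholder dataset config
model = RTDETR("ultralytics/cfg/models/rt-detr/rtdetr-ResNetLayer_DySnakeConv.yaml")
model.train(data="your_data.yaml", epochs=100, imgsz=640)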
7. Results of a Successful Run
Printing the two network models shows that DySnakeConv and ResNetLayer_DySnakeConv have been integrated and the models are ready for training.
rtdetr-l-DySnakeConv:
from n params module arguments
0 -1 1 25248 ultralytics.nn.modules.block.HGStem [3, 32, 48]
1 -1 6 129048 ultralytics.nn.AddModules.DySnakeConv.DySnakeConv[48, 48]
2 -1 1 3712 ultralytics.nn.modules.conv.DWConv [48, 128, 3, 2, 1, False]
3 -1 6 822744 ultralytics.nn.AddModules.DySnakeConv.DySnakeConv[128, 128]
4 -1 1 5632 ultralytics.nn.modules.conv.DWConv [128, 512, 3, 2, 1, False]
5 -1 6 11548632 ultralytics.nn.AddModules.DySnakeConv.DySnakeConv[512, 512]
6 -1 6 11548632 ultralytics.nn.AddModules.DySnakeConv.DySnakeConv[512, 512]
7 -1 6 11548632 ultralytics.nn.AddModules.DySnakeConv.DySnakeConv[512, 512]
8 -1 1 11264 ultralytics.nn.modules.conv.DWConv [512, 1024, 3, 2, 1, False]
9 -1 6 44920920 ultralytics.nn.AddModules.DySnakeConv.DySnakeConv[1024, 1024]
10 -1 1 262656 ultralytics.nn.modules.conv.Conv [1024, 256, 1, 1, None, 1, 1, False]
11 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
12 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
14 7 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
15 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
16 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
17 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
18 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
19 3 1 33280 ultralytics.nn.modules.conv.Conv [128, 256, 1, 1, None, 1, 1, False]
20 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
21 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
22 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
23 [-1, 17] 1 0 ultralytics.nn.modules.conv.Concat [1]
24 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
25 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
26 [-1, 12] 1 0 ultralytics.nn.modules.conv.Concat [1]
27 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
28 [21, 24, 27] 1 7303907 ultralytics.nn.modules.head.RTDETRDecoder [1, [256, 256, 256]]
rtdetr-l-DySnakeConv summary: 987 layers, 99,327,699 parameters, 99,327,699 gradients, 185.7 GFLOPs
rtdetr-ResNetLayer_DySnakeConv:
from n params module arguments
0 -1 1 9536 ultralytics.nn.AddModules.DySnakeConv.ResNetLayer_DySnakeConv[3, 64, 1, True, 1]
1 -1 1 329388 ultralytics.nn.AddModules.DySnakeConv.ResNetLayer_DySnakeConv[64, 64, 1, False, 3]
2 -1 1 1768080 ultralytics.nn.AddModules.DySnakeConv.ResNetLayer_DySnakeConv[256, 128, 2, False, 4]
3 -1 1 10071128 ultralytics.nn.AddModules.DySnakeConv.ResNetLayer_DySnakeConv[512, 256, 2, False, 6]
4 -1 1 20739052 ultralytics.nn.AddModules.DySnakeConv.ResNetLayer_DySnakeConv[1024, 512, 2, False, 3]
5 -1 1 524800 ultralytics.nn.modules.conv.Conv [2048, 256, 1, 1, None, 1, 1, False]
6 -1 1 789760 ultralytics.nn.modules.transformer.AIFI [256, 1024, 8]
7 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
8 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
9 3 1 262656 ultralytics.nn.modules.conv.Conv [1024, 256, 1, 1, None, 1, 1, False]
10 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
11 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
12 -1 1 66048 ultralytics.nn.modules.conv.Conv [256, 256, 1, 1]
13 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
14 2 1 131584 ultralytics.nn.modules.conv.Conv [512, 256, 1, 1, None, 1, 1, False]
15 [-2, -1] 1 0 ultralytics.nn.modules.conv.Concat [1]
16 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
17 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
18 [-1, 12] 1 0 ultralytics.nn.modules.conv.Concat [1]
19 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
20 -1 1 590336 ultralytics.nn.modules.conv.Conv [256, 256, 3, 2]
21 [-1, 7] 1 0 ultralytics.nn.modules.conv.Concat [1]
22 -1 3 2232320 ultralytics.nn.modules.block.RepC3 [512, 256, 3]
23 [16, 19, 22] 1 7303907 ultralytics.nn.modules.head.RTDETRDecoder [1, [256, 256, 256]]
rtdetr-ResNetLayer_DySnakeConv summary: 849 layers, 52,171,939 parameters, 52,171,939 gradients, 151.7 GFLOPs