
YOLOv5 Improvement Series (17): Replacing the IoU with MPDIoU (ELSEVIER 2023 | Outperforms WIoU, EIoU, etc. | Verified Gains in Practice)

🚀 I. Introduction to MPDIoU

1.1 Overview

In object detection, the loss function measures the distance between the network's predictions and the expected labels: the closer the prediction is to the target, the smaller the loss. The YOLO series uses CIoU, but CIoU only reflects the difference in aspect ratio rather than the real discrepancies in width, height, and confidence, and this causes certain problems during detection.

As shown in Fig. 8(a), the outer box is the ground-truth box and the inner box is the predicted box; when the two center points coincide and the aspect ratios are identical, CIoU breaks down. Likewise, as shown in Fig. 8(b), when several predicted boxes overlap one another over a large area, CIoU cannot reflect the actual situation.
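As a quick worked check of this claim (my own restatement of the standard CIoU definition, not taken from the paper): the CIoU loss is

$$\mathcal{L}_{CIoU} = 1 - IoU + \frac{\rho^{2}(\mathbf{b},\mathbf{b}_{gt})}{c^{2}} + \alpha v, \qquad v = \frac{4}{\pi^{2}}\Bigl(\arctan\frac{w_{gt}}{h_{gt}} - \arctan\frac{w}{h}\Bigr)^{2}.$$

If the predicted box shares the ground-truth center, the distance term vanishes; if it also has the same aspect ratio, v vanishes as well, and the loss collapses to 1 - IoU. A prediction scaled up by a factor k and one scaled down by 1/k (both centered on the ground truth) then have the same IoU of 1/k^2 and receive exactly the same CIoU loss even though they are clearly different boxes, whereas the corner-point distances used by MPDIoU do tell them apart.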

To address these issues, the authors fully exploit the geometric properties of horizontal rectangles and propose MPDIoU, a bounding-box similarity metric based on minimum point distance. It incorporates all the relevant factors considered by existing loss functions, such as overlapping or non-overlapping area, center-point distance, and deviations in width and height, while simplifying the computation. On top of this, they propose an MPDIoU-based bounding-box regression loss, denoted \mathcal{L}_{MPDIoU}.

Experimental results show that applying the MPDIoU loss to state-of-the-art instance segmentation (e.g. YOLACT) and object detection (e.g. YOLOv7) models outperforms existing loss functions on the PASCAL VOC, MS COCO, and IIIT5K datasets.


1.2 Minimum Point Distance IoU

The authors design a novel IoU-based metric, named MPDIoU, that directly minimizes the distances between the top-left and bottom-right corner points of the predicted box and the ground-truth box.

During training, each bounding box B^{prd}=[x^{prd},y^{prd},w^{prd},h^{prd}]^{T} predicted by the model is forced to approach its ground-truth box B_{gt}=[x_{gt},y_{gt},w_{gt},h_{gt}]^{T} by solving

$$\min_{\Theta}\sum_{B_{gt}\in\mathbb{B}_{gt}}\mathcal{L}\bigl(B_{gt},B^{prd}\mid\Theta\bigr)$$

where \mathbb{B}_{gt} is the set of ground-truth bounding boxes and \Theta is the parameter set of the deep regression model. A typical form of \mathcal{L} is the \ell_{n}-norm.

The computation of MPDIoU is summarized in Algorithm 1:

(Algorithm 1: pseudocode for computing MPDIoU; see the original paper.)
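A minimal standalone sketch of that computation for axis-aligned boxes, written in PyTorch purely for illustration (my own code, not the authors'; boxes are given as (x1, y1, x2, y2) and img_w, img_h are the width and height of the input image, as in the paper):

import torch


def mpdiou(box_a: torch.Tensor, box_b: torch.Tensor, img_w: float, img_h: float, eps: float = 1e-7) -> torch.Tensor:
    """Illustrative MPDIoU for (N, 4) boxes in (x1, y1, x2, y2) format."""
    ax1, ay1, ax2, ay2 = box_a.unbind(-1)
    bx1, by1, bx2, by2 = box_b.unbind(-1)

    # Plain IoU of the two boxes
    inter_w = (torch.minimum(ax2, bx2) - torch.maximum(ax1, bx1)).clamp(0)
    inter_h = (torch.minimum(ay2, by2) - torch.maximum(ay1, by1)).clamp(0)
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter + eps
    iou = inter / union

    # Squared distances between the top-left and the bottom-right corner pairs
    d1_sq = (bx1 - ax1) ** 2 + (by1 - ay1) ** 2
    d2_sq = (bx2 - ax2) ** 2 + (by2 - ay2) ** 2

    # Both penalties are normalized by the squared diagonal of the input image
    return iou - d1_sq / (img_w ** 2 + img_h ** 2) - d2_sq / (img_w ** 2 + img_h ** 2)

Note that even when the two boxes do not overlap at all (IoU = 0), the two corner-distance terms still change smoothly with the prediction, which is why MPDIoU remains informative for non-overlapping boxes.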

To summarize:

  1. The proposed MPDIoU simplifies the similarity comparison between two bounding boxes.
  2. It is applicable to both overlapping and non-overlapping bounding-box regression.

Therefore, in 2D/3D computer-vision tasks, MPDIoU can serve as a solid replacement for the plain IoU in all performance metrics.


1.3 The MPDIoU Loss Function

The MPDIoU-based loss function is defined as follows:
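Spelling it out in the notation of Section 1.2 (MPDIoU computed as in Algorithm 1 above, with w and h the width and height of the input image):

$$\mathcal{L}_{MPDIoU} = 1 - \text{MPDIoU} = 1 - \frac{A\cap B}{A\cup B} + \frac{d_{1}^{2}}{w^{2}+h^{2}} + \frac{d_{2}^{2}}{w^{2}+h^{2}},$$

where d_1 and d_2 are the distances between the corresponding top-left and bottom-right corner points of the two boxes.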

Consequently, all the factors used by existing bounding-box regression losses can be determined from the coordinates of the four corner points.

The conversion formulas are as follows:

$$|C| = \bigl(\max(x_{2}^{gt},x_{2}^{prd}) - \min(x_{1}^{gt},x_{1}^{prd})\bigr)\times\bigl(\max(y_{2}^{gt},y_{2}^{prd}) - \min(y_{1}^{gt},y_{1}^{prd})\bigr)$$

$$x_{c}^{gt}=\frac{x_{1}^{gt}+x_{2}^{gt}}{2},\quad y_{c}^{gt}=\frac{y_{1}^{gt}+y_{2}^{gt}}{2},\quad x_{c}^{prd}=\frac{x_{1}^{prd}+x_{2}^{prd}}{2},\quad y_{c}^{prd}=\frac{y_{1}^{prd}+y_{2}^{prd}}{2}$$

$$w_{gt}=x_{2}^{gt}-x_{1}^{gt},\quad h_{gt}=y_{2}^{gt}-y_{1}^{gt},\quad w_{prd}=x_{2}^{prd}-x_{1}^{prd},\quad h_{prd}=y_{2}^{prd}-y_{1}^{prd}$$

(For a detailed interpretation of the formulas, please read the original paper or other people's write-ups; this math dummy will bow out here... orz)


1.4 Experiments

(1) Object detection experiments

(2) Character-level scene text spotting experiments

(3) Instance segmentation results


🚀 II. How to Switch to MPDIoU

Step ① Modify the metrics.py file

First, find the bbox_iou function in the file utils/metrics.py and replace the entire function with the code below (note that it also adds a WIoU_Scale helper class, which the WIoU branch relies on):

# Note: this code relies on `import math` and `import torch`, both of which are
# already imported at the top of utils/metrics.py.

def bbox_iou(box1, box2, xywh=True, GIoU=False, DIoU=False, CIoU=False, SIoU=False, EIoU=False, WIoU=False,
             Focal=False, MPDIoU=False, alpha=1, gamma=0.5, scale=False, eps=1e-7):
    # Returns Intersection over Union (IoU) of box1(1,4) to box2(n,4)

    # Get the coordinates of bounding boxes
    if xywh:  # transform from xywh to xyxy
        (x1, y1, w1, h1), (x2, y2, w2, h2) = box1.chunk(4, -1), box2.chunk(4, -1)
        w1_, h1_, w2_, h2_ = w1 / 2, h1 / 2, w2 / 2, h2 / 2
        b1_x1, b1_x2, b1_y1, b1_y2 = x1 - w1_, x1 + w1_, y1 - h1_, y1 + h1_
        b2_x1, b2_x2, b2_y1, b2_y2 = x2 - w2_, x2 + w2_, y2 - h2_, y2 + h2_
    else:  # x1, y1, x2, y2 = box1
        b1_x1, b1_y1, b1_x2, b1_y2 = box1.chunk(4, -1)
        b2_x1, b2_y1, b2_x2, b2_y2 = box2.chunk(4, -1)
        w1, h1 = b1_x2 - b1_x1, (b1_y2 - b1_y1).clamp(eps)
        w2, h2 = b2_x2 - b2_x1, (b2_y2 - b2_y1).clamp(eps)

    # Intersection area
    inter = (b1_x2.minimum(b2_x2) - b1_x1.maximum(b2_x1)).clamp(0) * \
            (b1_y2.minimum(b2_y2) - b1_y1.maximum(b2_y1)).clamp(0)

    # Union Area
    union = w1 * h1 + w2 * h2 - inter + eps
    if scale:
        self = WIoU_Scale(1 - (inter / union))

    # IoU
    # iou = inter / union  # ori iou
    iou = torch.pow(inter / (union + eps), alpha)  # alpha iou
    if CIoU or DIoU or GIoU or EIoU or SIoU or WIoU or MPDIoU:
        cw = b1_x2.maximum(b2_x2) - b1_x1.minimum(b2_x1)  # convex (smallest enclosing box) width
        ch = b1_y2.maximum(b2_y2) - b1_y1.minimum(b2_y1)  # convex height
        if CIoU or DIoU or EIoU or SIoU or WIoU or MPDIoU:  # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
            c2 = (cw ** 2 + ch ** 2) ** alpha + eps  # convex diagonal squared
            rho2 = (((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 + (
                    b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4) ** alpha  # center dist ** 2
            if CIoU:  # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
                v = (4 / math.pi ** 2) * (torch.atan(w2 / h2) - torch.atan(w1 / h1)).pow(2)
                with torch.no_grad():
                    alpha_ciou = v / (v - iou + (1 + eps))
                if Focal:
                    return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha)), \
                        torch.pow(inter / (union + eps), gamma)  # Focal_CIoU
                else:
                    return iou - (rho2 / c2 + torch.pow(v * alpha_ciou + eps, alpha))  # CIoU
            elif EIoU:
                rho_w2 = ((b2_x2 - b2_x1) - (b1_x2 - b1_x1)) ** 2
                rho_h2 = ((b2_y2 - b2_y1) - (b1_y2 - b1_y1)) ** 2
                cw2 = torch.pow(cw ** 2 + eps, alpha)
                ch2 = torch.pow(ch ** 2 + eps, alpha)
                if Focal:
                    return iou - (rho2 / c2 + rho_w2 / cw2 + rho_h2 / ch2), \
                        torch.pow(inter / (union + eps), gamma)  # Focal_EIoU
                else:
                    return iou - (rho2 / c2 + rho_w2 / cw2 + rho_h2 / ch2)  # EIoU
            elif MPDIoU:
                # MPDIoU: penalize the squared distances between the two top-left corners and
                # the two bottom-right corners. Here they are normalized by the enclosing-box
                # width/height; the paper normalizes by the input image's width and height.
                cw2 = torch.pow(cw ** 2 + eps, alpha)
                ch2 = torch.pow(ch ** 2 + eps, alpha)
                d12 = (b2_x1 - b1_x1) ** 2 + (b2_y1 - b1_y1) ** 2  # top-left corner distance squared
                d22 = (b2_x2 - b1_x2) ** 2 + (b2_y2 - b1_y2) ** 2  # bottom-right corner distance squared
                return iou - (d12 + d22) / (cw2 + ch2)  # MPDIoU
            elif SIoU:
                # SIoU Loss https://arxiv.org/pdf/2205.12740.pdf
                s_cw = (b2_x1 + b2_x2 - b1_x1 - b1_x2) * 0.5 + eps
                s_ch = (b2_y1 + b2_y2 - b1_y1 - b1_y2) * 0.5 + eps
                sigma = torch.pow(s_cw ** 2 + s_ch ** 2, 0.5)
                sin_alpha_1 = torch.abs(s_cw) / sigma
                sin_alpha_2 = torch.abs(s_ch) / sigma
                threshold = pow(2, 0.5) / 2
                sin_alpha = torch.where(sin_alpha_1 > threshold, sin_alpha_2, sin_alpha_1)
                angle_cost = torch.cos(torch.arcsin(sin_alpha) * 2 - math.pi / 2)
                rho_x = (s_cw / cw) ** 2
                rho_y = (s_ch / ch) ** 2
                gamma = angle_cost - 2
                distance_cost = 2 - torch.exp(gamma * rho_x) - torch.exp(gamma * rho_y)
                omiga_w = torch.abs(w1 - w2) / torch.max(w1, w2)
                omiga_h = torch.abs(h1 - h2) / torch.max(h1, h2)
                shape_cost = torch.pow(1 - torch.exp(-1 * omiga_w), 4) + torch.pow(1 - torch.exp(-1 * omiga_h), 4)
                if Focal:
                    return iou - torch.pow(0.5 * (distance_cost + shape_cost) + eps, alpha), \
                        torch.pow(inter / (union + eps), gamma)  # Focal_SIoU
                else:
                    return iou - torch.pow(0.5 * (distance_cost + shape_cost) + eps, alpha)  # SIoU
            elif WIoU:
                if Focal:
                    raise RuntimeError("WIoU does not support Focal.")
                elif scale:
                    return getattr(WIoU_Scale, '_scaled_loss')(self), (1 - iou) * torch.exp(
                        (rho2 / c2)), iou  # WIoU https://arxiv.org/abs/2301.10051
                else:
                    return iou, torch.exp((rho2 / c2))  # WIoU v1
            if Focal:
                return iou - rho2 / c2, torch.pow(inter / (union + eps), gamma)  # Focal_DIoU
            else:
                return iou - rho2 / c2  # DIoU
        c_area = cw * ch + eps  # convex area
        if Focal:
            return iou - torch.pow((c_area - union) / c_area + eps, alpha), \
                torch.pow(inter / (union + eps), gamma)  # Focal_GIoU https://arxiv.org/pdf/1902.09630.pdf
        else:
            return iou - torch.pow((c_area - union) / c_area + eps, alpha)  # GIoU https://arxiv.org/pdf/1902.09630.pdf
    if Focal:
        return iou, torch.pow(inter / (union + eps), gamma)  # Focal_IoU
    else:
        return iou  # IoU


class WIoU_Scale:
    ''' monotonous: {
            None: origin v1
            True: monotonic FM v2
            False: non-monotonic FM v3
        }
        momentum: The momentum of running mean
    '''
    iou_mean = 1.
    monotonous = False
    _momentum = 1 - 0.5 ** (1 / 7000)
    _is_train = True

    def __init__(self, iou):
        self.iou = iou
        self._update(self)

    @classmethod
    def _update(cls, self):
        if cls._is_train:
            cls.iou_mean = (1 - cls._momentum) * cls.iou_mean + \
                           cls._momentum * self.iou.detach().mean().item()

    @classmethod
    def _scaled_loss(cls, self, gamma=1.9, delta=3):
        if isinstance(self.monotonous, bool):
            if self.monotonous:
                return (self.iou.detach() / self.iou_mean).sqrt()
            else:
                beta = self.iou.detach() / self.iou_mean
                alpha = delta * torch.pow(gamma, beta - delta)
                return beta / alpha
        return 1
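Before retraining, a quick standalone sanity check of the modified function can save time. The following is a hypothetical throwaway script (not part of YOLOv5), assuming it is run from the repository root so that utils.metrics is importable:

# check_mpdiou.py -- hypothetical quick test, run from the YOLOv5 repo root
import torch

from utils.metrics import bbox_iou

# Two ground-truth boxes in (x1, y1, x2, y2) format: the first is identical to
# the prediction, the second only partially overlaps it.
pred = torch.tensor([[0., 0., 100., 100.]])
gt = torch.tensor([[0., 0., 100., 100.],
                   [50., 50., 150., 150.]])

print(bbox_iou(pred, gt, xywh=False, MPDIoU=True))
# Expected: roughly 1.0 for the identical pair and a clearly smaller
# (possibly negative) value for the shifted pair.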

Step ② Modify the loss.py file

Next, find the __call__ function in the file utils/loss.py (class ComputeLoss) and replace the IoU computation in the Regression-loss section with the following snippet:

# MPDIoU=True selects the MPDIoU branch of bbox_iou; scale=True only matters
# for WIoU, which returns a tuple, hence the type check below.
iou = bbox_iou(pbox, tbox[i], MPDIoU=True, scale=True)
if type(iou) is tuple:
    if len(iou) == 2:
        lbox += (iou[1].detach().squeeze() * (1 - iou[0].squeeze())).mean()
        iou = iou[0].squeeze()
    else:
        lbox += (iou[0] * iou[1]).mean()
        iou = iou[2].squeeze()
else:
    lbox += (1.0 - iou.squeeze()).mean()  # iou loss
    iou = iou.squeeze()

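For orientation, in recent YOLOv5 releases (v7.0-style; the exact lines vary slightly between versions) the code being replaced sits in the Regression block of ComputeLoss.__call__ and looks roughly like this:

# utils/loss.py, ComputeLoss.__call__, "Regression" block (approximate, version-dependent)
pxy = pxy.sigmoid() * 2 - 0.5
pwh = (pwh.sigmoid() * 2) ** 2 * anchors[i]
pbox = torch.cat((pxy, pwh), 1)  # predicted box
iou = bbox_iou(pbox, tbox[i], CIoU=True).squeeze()  # <-- replace this line ...
lbox += (1.0 - iou).mean()  # iou loss              # <-- ... and this one with the snippet above

The replacement snippet still leaves a tensor named iou defined at the end, so the objectness-target code that follows it (which detaches and clamps iou) keeps working unchanged.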

And that's it~


PS:

On my own dataset, MPDIoU came out 0.9 points higher than WIoU, so the gain is quite decent~

WIoU:

MPDIoU: