
YOLOv11 Improvements - Advanced Practice - Using YOLOv11 to Count Objects in a User-Defined Region of a Video

1. Introduction

Hello, readers. I will be posting a series of advanced, hands-on tutorials on how to use the functionality YOLOv11 already provides for practical tasks, so that we not only know how to improve YOLOv11 but can also use it for simple real-world jobs. Later on I will also wrap these features in small GUIs built with PyQt or PySide2 for you to use.

Before we begin, a quick recommendation of my column. It is updated with 3-10 of the latest cutting-edge mechanisms every week, including secondary innovations that are not duplicated anywhere else online and fused improvements (after you get them, add another improvement mechanism, achieve a performance gain on your dataset, and you can write a paper), as well as improvement mechanisms from recent top conferences. It also comes with all my bonus files (every improvement mechanism is already registered in them and can be run directly), a discussion group, and video explanations.

You are welcome to subscribe to my column and learn YOLO together!

Before we start, here is a preview of the video output. The GIF is a bit blurry; the actual result is much sharper than this.



2. Complete Project Code

Copy and paste the code below into your YOLOv11 repository: create a new .py file there and put the code into it.

import argparse
from collections import defaultdict
from pathlib import Path

import cv2
import numpy as np
from shapely.geometry import Polygon
from shapely.geometry.point import Point

from ultralytics import YOLO
from ultralytics.utils.files import increment_path
from ultralytics.utils.plotting import Annotator, colors

track_history = defaultdict(list)

current_region = None
counting_regions = [
    {
        'name': 'YOLOv8 Polygon Region',
        'polygon': Polygon([(50, 80), (250, 20), (450, 80), (400, 350), (100, 350)]),  # Polygon points
        'counts': 0,
        'dragging': False,
        'region_color': (255, 42, 4),  # BGR Value
        'text_color': (255, 255, 255)  # Region Text Color
    },
    {
        'name': 'YOLOv8 Rectangle Region',
        'polygon': Polygon([(200, 250), (440, 250), (440, 550), (200, 550)]),  # Polygon points
        'counts': 0,
        'dragging': False,
        'region_color': (37, 255, 225),  # BGR Value
        'text_color': (0, 0, 0),  # Region Text Color
    },
]


def mouse_callback(event, x, y, flags, param):
    """Mouse call back event."""
    global current_region

    # Mouse left button down event
    if event == cv2.EVENT_LBUTTONDOWN:
        for region in counting_regions:
            if region['polygon'].contains(Point((x, y))):
                current_region = region
                current_region['dragging'] = True
                current_region['offset_x'] = x
                current_region['offset_y'] = y

    # Mouse move event
    elif event == cv2.EVENT_MOUSEMOVE:
        if current_region is not None and current_region['dragging']:
            dx = x - current_region['offset_x']
            dy = y - current_region['offset_y']
            current_region['polygon'] = Polygon([
                (p[0] + dx, p[1] + dy) for p in current_region['polygon'].exterior.coords])
            current_region['offset_x'] = x
            current_region['offset_y'] = y

    # Mouse left button up event
    elif event == cv2.EVENT_LBUTTONUP:
        if current_region is not None and current_region['dragging']:
            current_region['dragging'] = False


def run(
    weights='yolov8n.pt',
    source=None,
    device='cpu',
    view_img=False,
    save_img=False,
    exist_ok=False,
    classes=None,
    line_thickness=2,
    track_thickness=2,
    region_thickness=2,
):
    """
    Run Region counting on a video using YOLOv8 and ByteTrack.

    Supports movable region for real time counting inside specific area.
    Supports multiple regions counting.
    Regions can be Polygons or rectangle in shape

    Args:
        weights (str): Model weights path.
        source (str): Video file path.
        device (str): processing device cpu, 0, 1
        view_img (bool): Show results.
        save_img (bool): Save results.
        exist_ok (bool): Overwrite existing files.
        classes (list): classes to detect and track
        line_thickness (int): Bounding box thickness.
        track_thickness (int): Tracking line thickness
        region_thickness (int): Region thickness.
    """
    vid_frame_count = 0

    # Check source path
    if not Path(source).exists():
        raise FileNotFoundError(f"Source path '{source}' does not exist.")

    # Setup Model
    model = YOLO(f'{weights}')
    model.to('cuda') if device == '0' else model.to('cpu')

    # Extract classes names
    names = model.model.names

    # Video setup
    videocapture = cv2.VideoCapture(source)
    frame_width, frame_height = int(videocapture.get(3)), int(videocapture.get(4))
    fps, fourcc = int(videocapture.get(5)), cv2.VideoWriter_fourcc(*'mp4v')

    # Output setup
    save_dir = increment_path(Path('ultralytics_rc_output') / 'exp', exist_ok)
    save_dir.mkdir(parents=True, exist_ok=True)
    video_writer = cv2.VideoWriter(str(save_dir / f'{Path(source).stem}.mp4'), fourcc, fps, (frame_width, frame_height))

    # Iterate over video frames
    while videocapture.isOpened():
        success, frame = videocapture.read()
        if not success:
            break
        vid_frame_count += 1

        # Extract the results
        results = model.track(frame, persist=True, classes=classes)

        if results[0].boxes.id is not None:
            boxes = results[0].boxes.xyxy.cpu()
            track_ids = results[0].boxes.id.int().cpu().tolist()
            clss = results[0].boxes.cls.cpu().tolist()

            annotator = Annotator(frame, line_width=line_thickness, example=str(names))

            for box, track_id, cls in zip(boxes, track_ids, clss):
                annotator.box_label(box, str(names[cls]), color=colors(cls, True))
                bbox_center = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2  # Bbox center

                track = track_history[track_id]  # Tracking Lines plot
                track.append((float(bbox_center[0]), float(bbox_center[1])))
                if len(track) > 30:
                    track.pop(0)
                points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
                cv2.polylines(frame, [points], isClosed=False, color=colors(cls, True), thickness=track_thickness)

                # Check if detection inside region
                for region in counting_regions:
                    if region['polygon'].contains(Point((bbox_center[0], bbox_center[1]))):
                        region['counts'] += 1

        # Draw regions (Polygons/Rectangles)
        for region in counting_regions:
            region_label = str(region['counts'])
            region_color = region['region_color']
            region_text_color = region['text_color']

            polygon_coords = np.array(region['polygon'].exterior.coords, dtype=np.int32)
            centroid_x, centroid_y = int(region['polygon'].centroid.x), int(region['polygon'].centroid.y)

            text_size, _ = cv2.getTextSize(region_label,
                                           cv2.FONT_HERSHEY_SIMPLEX,
                                           fontScale=0.7,
                                           thickness=line_thickness)
            text_x = centroid_x - text_size[0] // 2
            text_y = centroid_y + text_size[1] // 2
            cv2.rectangle(frame, (text_x - 5, text_y - text_size[1] - 5), (text_x + text_size[0] + 5, text_y + 5),
                          region_color, -1)
            cv2.putText(frame, region_label, (text_x, text_y), cv2.FONT_HERSHEY_SIMPLEX, 0.7, region_text_color,
                        line_thickness)
            cv2.polylines(frame, [polygon_coords], isClosed=True, color=region_color, thickness=region_thickness)

        if view_img:
            if vid_frame_count == 1:
                cv2.namedWindow('Ultralytics YOLOv8 Region Counter Movable')
                cv2.setMouseCallback('Ultralytics YOLOv8 Region Counter Movable', mouse_callback)
            cv2.imshow('Ultralytics YOLOv8 Region Counter Movable', frame)

        if save_img:
            video_writer.write(frame)

        for region in counting_regions:  # Reinitialize count for each region
            region['counts'] = 0

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    del vid_frame_count
    video_writer.release()
    videocapture.release()
    cv2.destroyAllWindows()


def parse_opt():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='yolo11n.pt', help='initial weights path')
    parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--source', type=str, default='video.mp4', help='video file path')
    parser.add_argument('--view-img', action='store_true', default=True, help='show results')
    parser.add_argument('--save-img', action='store_true', default=True, help='save results')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --classes 0, or --classes 0 2 3')
    parser.add_argument('--line-thickness', type=int, default=2, help='bounding box thickness')
    parser.add_argument('--track-thickness', type=int, default=2, help='Tracking line thickness')
    parser.add_argument('--region-thickness', type=int, default=4, help='Region thickness')

    return parser.parse_args()


def main(opt):
    """Main function."""
    run(**vars(opt))


if __name__ == '__main__':
    opt = parse_opt()
    main(opt)

3. Parameter Reference

Here is a breakdown of the parameters of the core script above. There are ten in total, and only a few of them actually need to be changed in practice. (A minimal usage sketch follows the table.)

Parameter (type): Description

weights (str): Path to the weights file used for detection; it can be a model you trained yourself or one of the official pretrained models.
device (str): The device to run inference on, either the GPU or the CPU.
source (str): Path to the video file. Since the script is built for video, you could also adapt it to read from a webcam for real-time detection if you are interested.
view-img (bool): Whether to display the results while running; if set to True, the annotated frames are shown in a window.
save-img (bool): Whether to save the annotated results; the output video is written to a new folder under the same directory.
exist-ok (bool): Only affects whether an existing output folder name is reused instead of being incremented; you can ignore this parameter.
classes (int): A fairly important parameter. If your weights were trained on multiple classes and you only want the results for a specific class, pass its numeric class index here.
line-thickness (int): Thickness of the bounding boxes drawn around detected objects.
track-thickness (int): Thickness of the tracking lines.
region-thickness (int): Thickness of the region outlines.
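To make the parameters concrete, here is a minimal sketch of calling the script's run() function directly instead of going through the command line. The module name region_counter and the paths yolo11n.pt and video.mp4 are assumptions for illustration only; substitute your own file names.

# Minimal usage sketch: call run() directly instead of parsing command-line arguments.
# 'region_counter' is a placeholder module name (the .py file you created above);
# 'yolo11n.pt' and 'video.mp4' are placeholder paths -- replace them with your own files.
from region_counter import run

run(
    weights='yolo11n.pt',   # official or self-trained weights file
    source='video.mp4',     # video to run region counting on
    device='0',             # '0' moves the model to CUDA in this script; anything else uses the CPU
    view_img=True,          # pop up the window with the draggable regions
    save_img=True,          # write the annotated video to ultralytics_rc_output/exp*
    classes=[0],            # optional: only detect/track class index 0 (for COCO weights, 0 is 'person')
)

Running the file directly with python instead uses the defaults set in parse_opt().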

4. How to Use the Project

4.1 Step 1

Create a .py file in the YOLO repository directory and put the code into it, as shown in the figure below.


4.2 Step 2

Fill in the parameters as described in the parameter reference above. Only two of them normally need to be configured: the path to the weights file and the path to the video. A hedged example is shown below.
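As a rough sketch (both paths below are placeholders, not paths from this project), the only defaults in parse_opt() that usually need editing are the weights path and the video path:

# Only these two defaults in parse_opt() normally need to be changed.
# Both paths are placeholders -- point them at your own weights file and video.
parser.add_argument('--weights', type=str, default='runs/train/exp/weights/best.pt', help='initial weights path')
parser.add_argument('--source', type=str, default='my_video.mp4', help='video file path')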


4.3 Step 3

Once the parameters are filled in, just run the file. A video window will pop up containing two region boxes that can be dragged around to set the area in which you want to count objects. The idea is essentially the same as masking in OpenCV: each region acts as a mask, and an object is counted when the centre of its bounding box falls inside the region (in this script the point-in-polygon test is done with Shapely). The GIF is a bit blurry, but it is only meant as a demonstration.
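If you would rather position the regions in code instead of dragging them with the mouse, you can edit the counting_regions list at the top of the script. The coordinates below are placeholder pixel values; choose values that fit your video resolution:

# Replace (or extend) counting_regions with your own area of interest.
# Coordinates are (x, y) pixel positions in the video frame; the values here are placeholders.
counting_regions = [
    {
        'name': 'My Counting Region',
        'polygon': Polygon([(100, 100), (500, 100), (500, 400), (100, 400)]),  # rectangle given as a 4-point polygon
        'counts': 0,
        'dragging': False,
        'region_color': (0, 255, 0),  # BGR colour of the region outline and label background
        'text_color': (0, 0, 0),      # BGR colour of the count text
    },
]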


5. Summary

That wraps up the main content of this article. Here I would like to recommend my YOLOv8 effective-improvement column. The column is newly opened with an average quality score of 98; going forward I will reproduce papers from the latest top conferences and also supplement some of the older improvement mechanisms. If this article helped you, please subscribe to the column and follow along for more updates.