An Enhanced YOLOv8 Approach for Trapped Individual Detection Based on Deep Learning
Abstract
The Philippines suffers significant economic losses from natural disasters every year, affecting the lives of tens of millions of people. Deep learning-based object detection can improve the efficiency of rescue operations by enabling the rapid identification of trapped individuals, thereby mitigating economic and human losses. Developing lightweight, accurate models suitable for deployment on unmanned aerial vehicles (UAVs) has therefore emerged as a critical research problem. This study introduces an enhanced You Only Look Once version 8 (YOLOv8) model aimed at improving the detection of trapped individuals in UAV-captured images. The C2f layers in the neck and head networks are replaced with the Cross Stage Partial with Focus-Faster (C2f-Faster) module, which significantly reduces the model's size and facilitates its integration into UAV systems. Furthermore, a 160×160 detection head is added to improve the accuracy of small-object detection. A novel loss function combining Scaled Intersection over Union (SIOU) and Complete Intersection over Union (CIOU) is proposed to address aspect-ratio ambiguity and sample imbalance in bounding-box regression. The proposed model achieves a 25.94% improvement in detection accuracy and a 29.01% increase in inference speed over the baseline YOLOv8 model. Applied to UAV imagery, the model enables faster identification of trapped individuals. The findings indicate that this approach can substantially reduce government expenditures on disaster relief, improve the efficiency of rescue operations, and save more lives.
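The abstract does not include implementation details of the C2f-Faster module. As a rough illustration only, the following is a minimal PyTorch sketch of the FasterNet-style building block (a partial convolution followed by a pointwise MLP with a residual connection) that C2f-Faster modules are typically assembled from; all class names and hyperparameters (`ratio`, `expansion`) are assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch of a FasterNet-style block; not the paper's implementation.
import torch
import torch.nn as nn


class PConv(nn.Module):
    """Partial convolution: a 3x3 conv is applied to only the first `ratio`
    fraction of channels, cutting FLOPs and parameters; the remaining
    channels pass through unchanged."""

    def __init__(self, channels: int, ratio: float = 0.25):
        super().__init__()
        self.conv_channels = max(1, int(channels * ratio))
        self.conv = nn.Conv2d(self.conv_channels, self.conv_channels,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        head, tail = x[:, :self.conv_channels], x[:, self.conv_channels:]
        return torch.cat([self.conv(head), tail], dim=1)


class FasterBlock(nn.Module):
    """PConv followed by a two-layer 1x1 MLP with a residual connection,
    as in FasterNet; a C2f-Faster module swaps such a block in for the
    standard C2f bottleneck."""

    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.pconv = PConv(channels)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.pconv(x))


if __name__ == "__main__":
    block = FasterBlock(64)
    out = block(torch.randn(1, 64, 160, 160))
    print(out.shape)  # torch.Size([1, 64, 160, 160])
```

Because the 3x3 convolution touches only a quarter of the channels here, the block's parameter count drops sharply relative to a full-channel bottleneck, which is consistent with the model-size reduction the abstract reports.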
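The combined SIOU/CIOU regression loss can likewise be sketched concretely. The snippet below is a minimal illustration, assuming boxes in (x1, y1, x2, y2) format and a simple weighted blend controlled by a hypothetical `siou_weight` parameter; the paper's exact combination rule and weighting may differ.

```python
# Hypothetical sketch of a blended SIoU/CIoU box loss; not the paper's exact formulation.
import math
import torch


def _iou_terms(pred, target, eps=1e-7):
    """Shared geometry for (N, 4) boxes in (x1, y1, x2, y2) format:
    IoU plus enclosing-box extents and center offsets."""
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(0)
    inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(0)
    inter = inter_w * inter_h
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union
    # Smallest enclosing box and center-to-center offsets
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    dx = (target[:, 0] + target[:, 2] - pred[:, 0] - pred[:, 2]) / 2
    dy = (target[:, 1] + target[:, 3] - pred[:, 1] - pred[:, 3]) / 2
    return iou, w1, h1, w2, h2, cw, ch, dx, dy


def ciou_loss(pred, target, eps=1e-7):
    """Complete IoU: 1 - IoU + normalized center distance + aspect-ratio penalty."""
    iou, w1, h1, w2, h2, cw, ch, dx, dy = _iou_terms(pred, target, eps)
    rho2 = dx ** 2 + dy ** 2                      # squared center distance
    c2 = cw ** 2 + ch ** 2 + eps                  # squared enclosing-box diagonal
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    with torch.no_grad():
        alpha = v / (1 - iou + v + eps)           # aspect-ratio trade-off term
    return 1 - iou + rho2 / c2 + alpha * v


def siou_loss(pred, target, theta=4.0, eps=1e-7):
    """SIoU: 1 - IoU + (angle-aware distance cost + shape cost) / 2."""
    iou, w1, h1, w2, h2, cw, ch, dx, dy = _iou_terms(pred, target, eps)
    sigma = torch.sqrt(dx ** 2 + dy ** 2) + eps   # center-to-center distance
    sin_alpha = torch.abs(dy) / sigma             # sine of the center-line angle
    angle = 1 - 2 * torch.sin(torch.arcsin(sin_alpha.clamp(-1, 1)) - math.pi / 4) ** 2
    gamma = 2 - angle                             # angle cost modulates distance cost
    dist = (1 - torch.exp(-gamma * (dx / (cw + eps)) ** 2)) + \
           (1 - torch.exp(-gamma * (dy / (ch + eps)) ** 2))
    omega_w = torch.abs(w1 - w2) / torch.max(w1, w2).clamp(min=eps)
    omega_h = torch.abs(h1 - h2) / torch.max(h1, h2).clamp(min=eps)
    shape = (1 - torch.exp(-omega_w)) ** theta + (1 - torch.exp(-omega_h)) ** theta
    return 1 - iou + (dist + shape) / 2


def combined_box_loss(pred, target, siou_weight=0.5):
    """Hypothetical blend of SIoU and CIoU; the paper's exact mix may differ."""
    return (siou_weight * siou_loss(pred, target)
            + (1 - siou_weight) * ciou_loss(pred, target)).mean()


if __name__ == "__main__":
    pred = torch.tensor([[10.0, 10.0, 50.0, 60.0]], requires_grad=True)
    target = torch.tensor([[12.0, 14.0, 48.0, 58.0]])
    loss = combined_box_loss(pred, target)
    loss.backward()  # gradients flow back to the predicted box coordinates
    print(float(loss))
```

Blending the two terms lets the SIoU angle and shape costs sharpen localization when aspect ratios are ambiguous, while the CIoU term retains its well-conditioned center-distance gradient, which matches the motivation the abstract gives for the combined loss.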