TY - GEN
T1 - Specialized Indoor and Outdoor Scene-specific Object Detection Models
AU - Jamali, Mahtab
AU - Davidsson, Paul
AU - Khoshkangini, Reza
AU - Ljungqvist, Martin Georg
AU - Mihailescu, Radu-Casian
N1 - Publisher Copyright:
© 2024 SPIE. All rights reserved.
PY - 2024/4/3
Y1 - 2024/4/3
N2 - Object detection is a critical task in computer vision, with applications across domains ranging from autonomous driving to surveillance systems. Despite extensive research on improving the performance of object detection systems, identifying all objects in different places remains a challenge. Traditional object detection approaches focus primarily on extracting and analyzing visual features without considering contextual information about the places in which objects appear. However, in many real-world scenarios, entities are closely related to their surrounding environment, which provides crucial contextual cues for accurate detection. This study investigates the importance and impact of the place depicted in an image (indoor or outdoor) on object detection accuracy. To this end, we propose an approach that first categorizes images into two distinct categories, indoor and outdoor. We then train and evaluate three object detection models (indoor, outdoor, and general) based on YOLOv5, using 19 classes of the PASCAL VOC dataset and 79 classes of the COCO dataset grouped by place. The experimental evaluations show that the specialized indoor and outdoor models achieve higher mAP (mean Average Precision) when detecting objects in their respective environments than the general model, which detects objects found both indoors and outdoors. Indeed, the network detects objects more accurately in similar places with common characteristics, owing to the semantic relationships between objects and their surroundings, and misdetections are reduced. All results were analyzed statistically using t-tests.
AB - Object detection is a critical task in computer vision, with applications across domains ranging from autonomous driving to surveillance systems. Despite extensive research on improving the performance of object detection systems, identifying all objects in different places remains a challenge. Traditional object detection approaches focus primarily on extracting and analyzing visual features without considering contextual information about the places in which objects appear. However, in many real-world scenarios, entities are closely related to their surrounding environment, which provides crucial contextual cues for accurate detection. This study investigates the importance and impact of the place depicted in an image (indoor or outdoor) on object detection accuracy. To this end, we propose an approach that first categorizes images into two distinct categories, indoor and outdoor. We then train and evaluate three object detection models (indoor, outdoor, and general) based on YOLOv5, using 19 classes of the PASCAL VOC dataset and 79 classes of the COCO dataset grouped by place. The experimental evaluations show that the specialized indoor and outdoor models achieve higher mAP (mean Average Precision) when detecting objects in their respective environments than the general model, which detects objects found both indoors and outdoors. Indeed, the network detects objects more accurately in similar places with common characteristics, owing to the semantic relationships between objects and their surroundings, and misdetections are reduced. All results were analyzed statistically using t-tests.
KW - indoor object detection
KW - object detection
KW - outdoor object detection
KW - scene classification
KW - YOLOv5
UR - http://www.scopus.com/inward/record.url?scp=85191658757&partnerID=8YFLogxK
U2 - 10.1117/12.3023479
DO - 10.1117/12.3023479
M3 - Conference contribution
AN - SCOPUS:85191658757
SN - 9781510674622
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Sixteenth International Conference on Machine Vision, ICMV 2023
A2 - Osten, Wolfgang
PB - SPIE
T2 - 16th International Conference on Machine Vision 2023
Y2 - 15 November 2023 through 18 November 2023
ER -