罗喜伶
Papers
UAV Localization in Urban Area Mobility Environment Based on Monocular VSLAM with Deep Learning
Posted: 2025-04-29
Journal: DRONES
Abstract: Unmanned Aerial Vehicles (UAVs) play a major role in applications including surveillance, mapping, and disaster relief, particularly in urban environments. This paper presents a comprehensive framework for UAV localization in outdoor environments using monocular ORB-SLAM3 integrated with optical flow and YOLOv5. The proposed system addresses the challenge of accurate localization in dynamic outdoor environments where traditional GPS methods may falter. By leveraging ORB-SLAM3, the UAV can map its environment while simultaneously tracking its position using visual information from a single camera. Optical flow provides motion estimation between consecutive frames, which is critical for maintaining accurate localization amid dynamic changes in the environment. YOLOv5, an efficient real-time object detector, enables the system to identify and classify dynamic objects within the UAV's field of view. This dual use of optical flow and deep learning strengthens the localization process by filtering out dynamic features that would otherwise cause mapping errors. Experimental results show that combining monocular ORB-SLAM3, optical flow, and YOLOv5 significantly improves localization accuracy and reduces trajectory errors compared with traditional methods. In terms of absolute trajectory error and average tracking time, the proposed approach outperforms ORB-SLAM3 and DynaSLAM, and its lower latency and higher accuracy make it well suited to real-time SLAM in dynamic scenes. These improvements increase overall efficiency and yield more dependable performance across a variety of scenarios. The framework effectively distinguishes static from dynamic elements, allowing more reliable map construction and navigation. The results show that the proposed method (U-SLAM) reduces APE by up to 43.47% and RPE by 26.47% on the S000 sequence, with the largest gains on sequences containing moving objects and greater image motion.
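The abstract describes rejecting dynamic features by combining YOLOv5 detections with an optical-flow consistency check before tracking. The sketch below is a minimal illustration of that idea, not the authors' U-SLAM implementation: it assumes OpenCV, PyTorch, and the public ultralytics/yolov5 hub model are available, and the helper names (dynamic_boxes, filter_keypoints), the label set, and the thresholds are purely illustrative.

```python
# Minimal sketch (not the authors' U-SLAM code): reject ORB keypoints that fall
# inside YOLOv5 detections of movable objects, or that disagree with the dominant
# optical flow, before the remaining features are passed to the SLAM front end.
# Assumptions (not from the paper): OpenCV, PyTorch, and the public
# ultralytics/yolov5 hub model; the label set and thresholds are illustrative.
import cv2
import numpy as np
import torch

DYNAMIC_LABELS = {"person", "car", "bus", "truck", "bicycle", "motorcycle"}

yolo = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
orb = cv2.ORB_create(nfeatures=2000)

def dynamic_boxes(frame_bgr, conf_thresh=0.4):
    """Return [x1, y1, x2, y2] boxes for detections of potentially moving objects."""
    results = yolo(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    boxes = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        if results.names[int(cls)] in DYNAMIC_LABELS and conf >= conf_thresh:
            boxes.append((x1, y1, x2, y2))
    return boxes

def filter_keypoints(prev_gray, curr_gray, curr_bgr, flow_thresh=3.0):
    """Keep ORB keypoints that lie outside dynamic boxes and whose optical-flow
    residual against the median (camera-induced) flow is small."""
    kps, desc = orb.detectAndCompute(curr_gray, None)
    if not kps:
        return [], None
    pts = np.float32([kp.pt for kp in kps]).reshape(-1, 1, 2)
    prev_pts, status, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, pts, None)
    flow = (prev_pts - pts).reshape(-1, 2)
    tracked = status.ravel() == 1
    median_flow = np.median(flow[tracked], axis=0) if tracked.any() else np.zeros(2)
    boxes = dynamic_boxes(curr_bgr)

    keep = []
    for i, kp in enumerate(kps):
        x, y = kp.pt
        in_box = any(x1 <= x <= x2 and y1 <= y <= y2 for x1, y1, x2, y2 in boxes)
        # A large residual flags an independently moving point the detector missed.
        inconsistent = np.linalg.norm(flow[i] - median_flow) > flow_thresh
        if tracked[i] and not in_box and not inconsistent:
            keep.append(i)
    return [kps[i] for i in keep], (desc[keep] if desc is not None else None)
```

In a full pipeline, the surviving keypoints and descriptors would replace the raw feature set handed to ORB-SLAM3's tracking thread; the median-flow residual here is only one simple stand-in for the paper's optical-flow check.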
Paper type: Journal article
Translated work:
Publication date: 2025-02-26
Indexed by: SCI