News Release

Monocular visual estimation for autonomous aircraft landing guidance in unknown structured scenes

Peer-Reviewed Publication

Tsinghua University Press

The monocular visual measurement method for autonomous landing guidance in unknown structured scenes involves scene analysis, landing-region selection, and relative pose estimation.

Credit: Chinese Journal of Aeronautics

In recent years, with the rapid expansion of the ‘low-altitude economy’, aircraft have been deployed across a wide range of scenarios. Landing is a crucial phase of flight, and autonomous landing guidance has become one of the core technologies for enhancing aircraft safety and intelligence. Existing research on autonomous landing typically focuses on known, fixed regions such as runways, where the aircraft relies on prior information about the landing region to guide its descent autonomously. However, in emergencies—such as mechanical failures, adverse weather, or strong interference—an aircraft may need to land immediately to protect both the vehicle and its occupants while minimizing economic losses. In these scenarios, the aircraft must autonomously choose a suitable landing region from the reachable area, yet such regions may lack geographic coordinates or other prior information, so existing autonomous landing guidance systems based on satellite navigation are not applicable. Effective technical solutions for autonomous landing guidance in unknown environments are therefore still lacking.

In response to the demand for autonomous landing guidance under emergency conditions, a research team from the Image Measurement and Visual Navigation Lab at the College of Aerospace Science and Engineering, National University of Defense Technology, has proposed a new monocular vision-based measurement method for autonomous aircraft landing guidance in unknown structured environments. The method uses an onboard monocular camera to perceive the environment and a multi-task neural network model that jointly considers factors such as flatness, width, and length. The system autonomously detects suitable landing regions within the field of view and accurately measures the relative 6D pose between the aircraft and the selected region, providing reliable measurement data for autonomous landing guidance. Because the approach is based entirely on onboard equipment and requires no external data-link support, it is robust to electromagnetic interference, significantly enhancing aircraft intelligence and ensuring safety in emergency situations.
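To give a rough sense of how landing-region selection by flatness, width, and length might work, the toy sketch below scores hypothetical candidate regions and picks the best one. The candidate names, weights, and thresholds are invented for illustration and are not the paper's actual metric:

```python
# Hypothetical candidate regions: each described by flatness (0-1, higher is
# flatter), width (m), and length (m) -- the factors the release mentions.
candidates = [
    {"name": "road A",  "flatness": 0.92, "width": 8.0,  "length": 120.0},
    {"name": "field B", "flatness": 0.60, "width": 30.0, "length": 80.0},
    {"name": "road C",  "flatness": 0.88, "width": 5.0,  "length": 200.0},
]

def landing_score(c, min_width=6.0, min_length=100.0):
    """Toy integration metric: reject regions below geometric minima,
    then combine normalized factors into a single weighted score."""
    if c["width"] < min_width or c["length"] < min_length:
        return 0.0
    return (0.5 * c["flatness"]
            + 0.25 * min(c["width"] / 15.0, 1.0)
            + 0.25 * min(c["length"] / 300.0, 1.0))

best = max(candidates, key=landing_score)  # region with the highest score
```

In this example, ‘field B’ is too short and ‘road C’ is too narrow, so ‘road A’ wins despite its modest width; a real system would derive these quantities from the network's scene analysis rather than from hand-typed values.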

This research was published in the Chinese Journal of Aeronautics on 11 March 2025.

The corresponding author of this study, Xiaoliang Sun, Associate Researcher at the College of Aerospace Science and Engineering, National University of Defense Technology, has long been engaged in research related to image measurement and visual navigation. He stated, ‘With the advancement of technology, the application and scale of aircraft have greatly expanded. The safety and intelligence levels of aircraft are fundamental to ensuring their safe operation. Our focus on autonomous landing guidance under emergency conditions aims to develop monocular vision-based measurement methods, including autonomous landing region selection and high-precision relative pose measurement. This research will significantly improve the intelligence of aircraft and ensure the safety of both the aircraft and its occupants in emergency situations.’

Dr. Zhuo Zhang, the first author of the study, further elaborated: ‘Deep neural networks possess powerful feature extraction and expression capabilities. We designed a multi-task network model that uses an onboard monocular camera to analyze the visual field, considering factors like scene category, depth, and slope. By utilizing structured edge features, we innovatively proposed a three-dimensional information integration metric to autonomously and efficiently select the optimal landing region. Furthermore, we used sparse keypoint parametrization to accurately measure the relative 6D pose between the landing region and the aircraft, enabling autonomous landing guidance under emergency conditions.’
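The release does not detail the pose algorithm, but the general idea of recovering a relative 6D pose from sparse keypoints on a roughly planar landing region can be sketched with a standard planar homography decomposition. Everything below — the helper names, the DLT fit, and the decomposition — is a generic textbook sketch, not the study's actual implementation:

```python
import numpy as np

def fit_homography(plane_pts, img_pts):
    """DLT estimate of the 3x3 homography mapping plane (X, Y) coords to pixels."""
    rows = []
    for (X, Y), (u, v) in zip(plane_pts, img_pts):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)          # null-space vector, reshaped to H

def pose_from_plane(plane_pts, img_pts, K):
    """Recover rotation R and translation t of a planar region from >=4
    keypoint correspondences, given the camera intrinsic matrix K."""
    H = fit_homography(plane_pts, img_pts)
    A = np.linalg.inv(K) @ H             # A ~ [r1 r2 t] up to scale
    lam = 0.5 * (np.linalg.norm(A[:, 0]) + np.linalg.norm(A[:, 1]))
    A = A / lam
    if A[2, 2] < 0:                      # region must lie in front of the camera
        A = -A
    r1, r2, t = A[:, 0], A[:, 1], A[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)          # re-project the estimate onto SO(3)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R, t
```

With noise-free correspondences this recovers the exact pose; in practice the keypoints would come from the network's detections, and a robust estimator such as RANSAC would typically wrap the homography fit.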

This study introduces a new monocular vision-based measurement method for autonomous aircraft landing guidance in unknown structured environments. It primarily uses structured information from natural scenes, such as roads, to efficiently select the optimal landing area and perform high-precision relative pose measurement. In scenes with insufficient structured information, however, the robustness of the method may suffer. Sun added: ‘For future work, we plan to explore more generalized methods for analyzing unknown scenes and measuring poses, removing the dependence on structured information. This will enable autonomous landing guidance in arbitrary unknown environments, further advancing aircraft safety under emergency conditions.’


About Chinese Journal of Aeronautics 

Chinese Journal of Aeronautics (CJA) is an open-access, peer-reviewed international journal covering all aspects of aerospace engineering, published monthly by Elsevier. The journal reports scientific and technological achievements and frontiers in aeronautical and astronautical engineering, in both theory and practice. CJA is indexed in SCI (IF = 5.3, ranked 4/52, Q1), EI, IAA, AJ, CSA, and Scopus.


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.