
Visual-Based Lunar Positioning Using a Multi-Stage Multi-Head Neural Network

Abstract

The Moon has emerged as a central focus in contemporary space exploration, as evidenced by the growing deployment of satellites for both orbital and landing missions. Autonomous navigation poses a significant challenge for these missions, and while various methods have been developed, optical navigation stands out as a promising solution due to its efficacy and suitability for hardware miniaturization, reducing the required sensors to a single camera. Research spanning traditional algorithms, machine learning, and deep learning has explored applications from surface proximity to distant approaches. Despite notable advancements, a comprehensive absolute navigation system remains elusive: existing solutions are often restricted to specific applications such as landings or fly-bys, and frequently estimate the spacecraft state only in relative terms. This paper introduces an end-to-end deep learning solution for absolute lunar positioning that extracts geospatial information and extends the admissible altitude range. Covering distances from tens to hundreds of kilometers, the proposed approach addresses a wide spectrum of scenarios, providing spacecraft localization in diverse conditions. The algorithm is built on a deep neural network architecture for localization. Taking as input a grayscale image of the Moon's surface, it is trained to output a probability map of the satellite's position through a multi-stage feature extraction process. First, the network classifies the input image to determine its corresponding region on the Moon's surface. The image is then aligned with a comprehensive representation of the observed site; this alignment is enabled by a multi-head architecture, consisting of multiple segmentation autoencoders that together cover the whole surface. The resulting output is finally transformed into position information. What sets this approach apart is its flexibility: localization can be executed from any location around the Moon, provided a surface image is available, and position estimation occurs in an absolute reference frame. The network is trained in a fully simulated environment that produces photorealistic images akin to those captured by an orbiting satellite. Real surface features are replicated using textures from NASA's LROC mission dataset, while lighting and shadows are simulated in an ephemeris-based scenario to achieve a real-world appearance. This positioning scheme is compared against the current state of the art in optical navigation, and potential applications for miniaturized Moon missions are explored, showcasing the versatility and advancements in absolute navigation methodologies.
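To make the multi-stage, multi-head design concrete, the sketch below shows one plausible way to wire the three stages together in PyTorch. It is a minimal illustration under stated assumptions: the number of surface regions, the layer sizes, and the names RegionClassifier, SegmentationHead, MultiStageLocalizer, and probability_map_to_position are all hypothetical and do not reproduce the paper's actual architecture.

```python
# A minimal sketch of the multi-stage, multi-head pipeline described above,
# assuming a PyTorch implementation. The region count, layer sizes, and all
# class/function names are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

NUM_REGIONS = 16  # assumed partition of the lunar surface into regions


class RegionClassifier(nn.Module):
    """Stage 1: classify which surface region the grayscale image shows."""

    def __init__(self, num_regions: int = NUM_REGIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_regions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


class SegmentationHead(nn.Module):
    """Stage 2: one segmentation autoencoder per region, aligning the image
    with the stored representation of that site."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


class MultiStageLocalizer(nn.Module):
    """Chains the stages: region classification selects one head, whose
    output is normalized into a position probability map."""

    def __init__(self, num_regions: int = NUM_REGIONS):
        super().__init__()
        self.classifier = RegionClassifier(num_regions)
        self.heads = nn.ModuleList([SegmentationHead() for _ in range(num_regions)])

    def forward(self, image: torch.Tensor):
        logits = self.classifier(image)      # which region is observed
        region = int(logits.argmax(dim=1))   # assumes batch size 1 for clarity
        heat = self.heads[region](image)
        b, c, h, w = heat.shape
        prob_map = heat.flatten(1).softmax(dim=1).view(b, c, h, w)
        return logits, prob_map


def probability_map_to_position(prob_map: torch.Tensor) -> tuple[int, int]:
    """Stage 3 stand-in: reduce the probability map to a (row, col) pixel
    estimate via its spatial argmax; mapping pixels to selenographic
    coordinates would use the known footprint of the selected region."""
    flat_idx = int(prob_map.flatten(1).argmax(dim=1))
    return divmod(flat_idx, prob_map.shape[-1])


if __name__ == "__main__":
    net = MultiStageLocalizer()
    image = torch.rand(1, 1, 128, 128)  # stand-in for a rendered lunar image
    logits, prob_map = net(image)
    print("predicted region:", int(logits.argmax(dim=1)))
    print("pixel position:", probability_map_to_position(prob_map))
```

One reading of this layout: routing each image through a single classifier-selected head keeps per-inference cost roughly constant as surface coverage grows, which would suit the miniaturized hardware the abstract targets.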

Publication
75th International Astronautical Congress (IAC2024), Milano, Italy
Paolo Lunghi
Assistant Professor of Aerospace Systems