Using Machine Learning to Improve Spacecraft Navigation


About

To become more self-aware, a spacecraft must first know its state vector. Autonomy gives spacecraft situational awareness by allowing them to make decisions locally, in reaction to their surroundings. The same technology that lets self-driving cars avoid collisions can inspire autonomous guidance, navigation, and control (GN&C) in spaceflight, and NASA has flight-tested autonomous functions on several missions since the late 1990s, such as the AutoNav optical-navigation experiment on Deep Space 1.

Spacecraft can gather a great deal of information about their surroundings using various sensors, and advances in computational technology have greatly reduced the processing-power limitations that once constrained interplanetary missions. The Cassini-Huygens mission's optical navigation (OpNav) system, for example, processed images with a resolution comparable to that of an early smart-phone camera. The observable information a spacecraft collects about its surroundings can therefore support state estimation directly.

Fully autonomous navigation systems for space must be robust and reliable: they need to reason for themselves and learn from the situations and environments they encounter. Current state-of-the-art technology is nowhere near this level. Imagine, for example, a spacecraft orbiting the Earth. It can use the wide variety of observable features on the Earth's surface to determine its state vector. By generating and continually updating a catalog, or map, of previously unknown or unspecified features on the Earth's surface, machine-learning and computer-vision algorithms can track the spacecraft's state relative to that surface.
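To make the Earth-orbiting example concrete, here is a minimal sketch of the classical geometry that such map-based navigation rests on: given a catalog of mapped surface landmarks and their detected pixel locations in a navigation-camera image, a perspective-n-point (PnP) solve recovers the camera pose. Everything below (landmark coordinates, camera intrinsics, the true pose) is a synthetic placeholder, and the feature detections that a matching front end would normally supply are simulated by projection.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Synthetic catalog of mapped surface landmarks (meters, planet-fixed frame).
landmarks = rng.uniform(-500.0, 500.0, size=(20, 3))
landmarks[:, 2] = 0.0  # roughly planar terrain patch

# Pinhole camera intrinsics (assumed known from calibration).
K = np.array([[800.0,   0.0, 512.0],
              [  0.0, 800.0, 512.0],
              [  0.0,   0.0,   1.0]])

# "True" spacecraft pose, used here only to synthesize the detections that
# a feature-matching front end would normally provide.
rvec_true = np.array([0.05, -0.02, 0.01])
tvec_true = np.array([10.0, -20.0, 2000.0])  # camera ~2 km above the patch
pixels, _ = cv2.projectPoints(landmarks, rvec_true, tvec_true, K, None)

# Recover the camera pose from the 2D-3D correspondences (PnP solve).
ok, rvec_est, tvec_est = cv2.solvePnP(landmarks, pixels, K, None)
assert ok
print("position error [m]:", np.linalg.norm(tvec_est.ravel() - tvec_true))
```

The research described below explores what this sketch takes for granted: building the map and the features themselves, using learned neural scene representations instead of hand-built catalogs.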

Research

We are conducting low-TRL research on general methods that use geometric deep learning and neural scene representations to understand 3D scenes in the space environment:

Spaceborne Multi-Scale Remote-Sensing Super-Resolution via Deep Conditional Normalizing Flows

Many spaceflight-based vision tasks require high-quality remote-sensing images with clearly decipherable features, but image quality can vary with design, operational, and environmental constraints. Enhancing images through post-processing is a cost-efficient solution. Current deep-learning methods that enhance low-resolution images through super-resolution do not quantify the uncertainty of their predictions and are trained at a single scale, which hinders practical integration into image-acquisition pipelines. This work proposes a deep normalizing-flow network for uncertainty-quantified, multi-scale super-resolution, making higher-resolution image estimation more robust and trustworthy. The proposed architecture outperforms state-of-the-art super-resolution models on in-orbit lunar imagery and demonstrates viability in task-based evaluations for landmark identification.
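To illustrate the core mechanism, the following is a minimal PyTorch sketch of one conditional affine-coupling step, the invertible building block that conditional normalizing flows stack: half of the high-resolution latent is rescaled and shifted by amounts predicted from the other half together with low-resolution image features. The layer widths and names here are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One affine-coupling step of a conditional normalizing flow: half of
    the high-resolution latent is scaled and shifted by amounts predicted
    from the other half plus low-resolution (conditioning) features."""

    def __init__(self, channels: int, cond_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2 + cond_channels, 64, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),  # predicts scale and shift
        )

    def forward(self, x, cond):
        x_a, x_b = x.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([x_a, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)           # keep the scales well-behaved
        y_b = x_b * torch.exp(log_s) + t    # invertible affine transform
        log_det = log_s.flatten(1).sum(1)   # Jacobian term for the likelihood
        return torch.cat([x_a, y_b], dim=1), log_det

    def inverse(self, y, cond):
        y_a, y_b = y.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([y_a, cond], dim=1)).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x_b = (y_b - t) * torch.exp(-log_s)
        return torch.cat([y_a, x_b], dim=1)

# Round-trip check on random tensors standing in for latents and upsampled
# low-resolution features.
layer = ConditionalAffineCoupling(channels=8, cond_channels=4)
x = torch.randn(2, 8, 32, 32)
cond = torch.randn(2, 4, 32, 32)
y, log_det = layer(x, cond)
assert torch.allclose(layer.inverse(y, cond), x, atol=1e-5)
```

Because every step is invertible with a tractable Jacobian, a trained model of this kind can both score a candidate high-resolution image exactly and draw many samples for one low-resolution input; the spread of those samples is one route to the uncertainty quantification described above.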

Online Shape Modeling of Resident Space Objects through Implicit Scene Understanding 

Neural networks have become state-of-the-art computer-vision tools for learning implicit representations of geometric scenes. This work proposes a two-part network architecture that exploits a view-synthesis network to understand a context scene and a graph convolutional network to generate a shape model of a body within the field of view of a spacecraft's optical navigation sensors. Once the first part of the architecture understands the spacecraft's environment, it can render images from novel viewpoints; the second part then uses this multi-view set of images to construct a 3D graph-based representation of the object. The proposed pipeline produces shape models with accuracies that compete with the state-of-the-art methods currently used for missions to small bodies, can be trained for multi-environment missions, and may be more cost-effective to implement onboard than the current state of the art.
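The mesh-generation stage can be pictured with the toy graph-convolution step below: each vertex of a template mesh is updated from its own features and an aggregate of its neighbors', and a linear head regresses per-vertex position offsets. This is a generic sketch of the kind of layer such a pipeline stacks; the adjacency, feature sizes, and vertex count are placeholders rather than the published model.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One graph convolution over a mesh: each vertex is updated from its
    own features and the mean of its neighbors' features."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, vert_feat, adj):
        # adj: (V, V) row-normalized adjacency of the mesh graph
        neigh = adj @ vert_feat
        return torch.relu(self.w_self(vert_feat) + self.w_neigh(neigh))

# Toy usage: refine a coarse template mesh. The adjacency below is a random
# placeholder; a real pipeline would build it from the template's edges
# (e.g., a subdivided icosahedron) and pool image features per vertex.
V = 162
adj = torch.rand(V, V).round()
adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1.0)
vert_feat = torch.cat([torch.randn(V, 3), torch.randn(V, 64)], dim=1)  # xyz + image features
hidden = GraphConvLayer(3 + 64, 128)(vert_feat, adj)
offsets = nn.Linear(128, 3)(hidden)      # per-vertex position updates
new_verts = vert_feat[:, :3] + offsets   # refined vertex positions
print(new_verts.shape)                   # torch.Size([162, 3])
```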

Spacecraft Relative-Kinematics State Estimation using Conditional Normalizing Flows 

Neural networks have proven effective at understanding spatial environments from limited observations. This work applies deep learning to develop a general state-estimation method for relative kinematics, applicable to spacecraft missions that involve multiple environments. It draws on the abilities of scene-representation and normalizing-flow networks to learn conditional, implicit domain representations in an unsupervised, feed-forward manner. Given a known context scene, inverting the trained network yields state measurements (i.e., position and attitude) of the spacecraft relative to that scene. The network is evaluated on simulated mission scenarios of spacecraft operating in close proximity to small bodies, such as asteroids.
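The measurement-generation idea can be shown in its simplest form: given a trained conditional network that scores log p(image | pose, scene), a relative-pose measurement is the pose that maximizes that likelihood for the current sensor image. In the sketch below, `flow_log_prob` is a hypothetical stand-in for such a trained network, and the toy quadratic in the usage exists only to exercise the loop.

```python
import torch

def estimate_pose(flow_log_prob, image, pose_init, steps=200, lr=0.05):
    """Gradient-based pose recovery against a trained conditional flow.

    flow_log_prob(image, pose) -> scalar log-likelihood; a hypothetical
    handle to a trained network. pose is, e.g., a 6-vector of position
    plus rotation parameters.
    """
    pose = pose_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -flow_log_prob(image, pose)  # minimize negative log-likelihood
        loss.backward()
        opt.step()
    return pose.detach()

# Toy stand-in: a "likelihood" peaked at a known pose, to exercise the loop.
true_pose = torch.tensor([1.0, -2.0, 0.5, 0.0, 0.1, 0.0])
dummy = lambda image, pose: -((pose - true_pose) ** 2).sum()
estimate = estimate_pose(dummy, image=None, pose_init=torch.zeros(6))
print(estimate)  # converges toward true_pose
```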

Journal Articles

  • Heintz, A., Peck, M.A., “Using Neural Radiance Fields for Spacecraft State Estimation in Complex Operational Environments,” in AIAA Journal of Aerospace Information Systems. (in preparation)

  • Heintz, A., Peck, M.A., “Spacecraft Relative-State Ego-Motion for Navigation in Close-Proximity to Resident Space Objects,” in AIAA Journal of Guidance, Control, & Dynamics. (in preparation)

  • Heintz, A., Peck, M.A., Mackey, I., “Multi-Scale, Super-Resolution Remote Imaging via Deep Conditional Normalizing Flows,” in AIAA Journal of Aerospace Information Systems. (under review)

  • Heintz, A., Peck, M.A., Sun, F., Mackey, I., “Online Resident Space-Object Shape Modeling through Implicit Scene Understanding,” in AIAA Journal of Aerospace Information Systems. doi: 10.2514/1.I011014

Conference Proceedings

  • Heintz, A., Peck, M.A., Mackey, I., “Multi-Scale, Super-Resolution Remote Imaging via Deep Conditional Normalizing Flows,” in AIAA Scitech 2022 Forum, San Diego, CA. doi: 10.2514/6.2022-2499

  • Heintz, A., Peck, M.A., Sun, F., Mackey, I., Dilip, P., Yallala, S., “Online Resident Space-Object Shape Modeling through Implicit Scene Understanding,” in AIAA Scitech 2021 Forum, Virtual. doi: 10.2514/6.2021-0272

  • Heintz, A., Peck, M.A., Pellumbi, R., “Spacecraft Relative-Kinematics State Estimation using Conditional Normalizing Flows,” in AIAA Scitech 2021 Forum, Virtual. doi: 10.2514/6.2021-1954

  • Heintz, A., Peck, M.A., “Autonomous Optical Navigation for Resident Space Object Exploration,” in AIAA Scitech 2020 Forum, Orlando, FL. doi: 10.2514/6.2020-1347