Abstract: Autonomous navigation and path-planning around non-cooperative space objects is an enabling technology for on-orbit servicing and space debris removal systems. The navigation task includes the determination of target object motion, the identification of target object features suitable for grasping, and the identification of collision hazards and other keep-out zones. Given this knowledge, chaser spacecraft can be guided toward capture locations without damaging the target object and without unduly impeding the operations of a servicing target by covering up solar arrays or communication antennas. One way to autonomously achieve target identification, characterization, and feature recognition is through artificial intelligence algorithms. This paper discusses how the combination of cameras and machine learning algorithms can accomplish the relative navigation task. The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Network (Faster R-CNN) and You Only Look Once (YOLOv5), is tested using experimental data obtained in formation flight simulations in the ORION Lab at Florida Institute of Technology. The simulation scenarios vary the yaw motion of the target object, the chaser approach trajectory, and the lighting conditions in order to test the algorithms in a wide range of realistic and performance-limiting situations. The analysis uses mean average precision (mAP) metrics to compare the performance of the object detectors. The paper discusses the path to implementing the feature recognition algorithms and integrating them into the spacecraft Guidance, Navigation, and Control system.
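The detectors above are compared by mean average precision (mAP). As an illustration of that metric, the following is a minimal sketch of single-class average precision at an IoU threshold of 0.5 with COCO-style 101-point interpolation; the function names, data layout, and thresholds are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(detections, ground_truth, iou_thresh=0.5):
    """AP for one class: detections = [(confidence, box)], ground_truth = [box]."""
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    matched = [False] * len(ground_truth)
    tp = np.zeros(len(detections)); fp = np.zeros(len(detections))
    for i, (_, box) in enumerate(detections):
        ious = [iou(box, gt) for gt in ground_truth]
        j = int(np.argmax(ious)) if ious else -1
        if j >= 0 and ious[j] >= iou_thresh and not matched[j]:
            tp[i] = 1; matched[j] = True      # first sufficient match is a true positive
        else:
            fp[i] = 1                         # duplicate or low-IoU detection
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(len(ground_truth), 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # Average the precision envelope over 101 evenly spaced recall points.
    ap = 0.0
    for t in np.linspace(0, 1, 101):
        p = precision[recall >= t].max() if np.any(recall >= t) else 0.0
        ap += p / 101
    return ap
```

The mAP reported for a detector is then the mean of this quantity over all object classes (e.g., solar array, antenna, body panel).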
Abstract: Space debris is on the rise due to the increasing demand for spacecraft for communication, navigation, and other applications. The Space Surveillance Network (SSN) tracks over 27,000 large pieces of debris and estimates the number of small, untrackable fragments at over 100,000. To control the growth of debris, the formation of further debris must be reduced. Some solutions include deorbiting larger non-cooperative resident space objects (RSOs) or servicing satellites in orbit. Both require rendezvous with RSOs, and the scale of the problem calls for autonomous missions. This paper introduces the Multipurpose Autonomous Rendezvous Vision-Integrated Navigation system (MARVIN), developed and tested at the ORION Facility at Florida Institute of Technology. MARVIN consists of two subsystems: a machine vision-aided navigation system and an artificial potential field (APF) guidance algorithm, which work together to command a swarm of chasers to safely rendezvous with the RSO. We present the MARVIN architecture and hardware-in-the-loop experiments demonstrating autonomous, collaborative swarm satellite operations, successfully guiding three drones to rendezvous with a physical mockup of a non-cooperative satellite in motion.
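The abstract names an artificial potential field (APF) guidance law as one of MARVIN's two subsystems. Below is a minimal sketch of a textbook APF update (a quadratic attractive well toward the goal plus an inverse-distance repulsive barrier inside a keep-out radius); the function `apf_step`, the gains, and the keep-out radius are illustrative assumptions, not MARVIN's actual implementation.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, d0=5.0, step=0.05):
    """One guidance step: move the chaser along the negative potential gradient.

    pos, goal: 3-vectors; obstacles: list of 3-vectors (keep-out centers).
    k_att, k_rep, d0, and step are illustrative values, not mission parameters.
    """
    force = -k_att * (pos - goal)                  # attractive pull toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                          # repulsion only inside radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return pos + step * force                      # gradient-descent position update

# Example: iterate until the chaser is within 0.1 m of the goal.
pos, goal = np.array([10.0, 0.0, 0.0]), np.zeros(3)
keep_out = [np.array([5.0, 0.5, 0.0])]             # hypothetical RSO keep-out center
while np.linalg.norm(pos - goal) > 0.1:
    pos = apf_step(pos, goal, keep_out)
```

In a swarm setting, each chaser can additionally treat the other chasers' positions as moving obstacles, which is one common way such a field is extended to collaborative rendezvous.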
Abstract: The effective use of computer vision and machine learning for on-orbit applications has been hampered by limited computing capabilities, and therefore limited performance. While embedded systems utilizing ARM processors have been shown to meet acceptable but low performance standards, the recent availability of larger space-grade field programmable gate arrays (FPGAs) shows potential to exceed the performance of microcomputer systems. This work proposes the use of a neural network-based object detection algorithm that can be deployed on a comparably resource-constrained FPGA to automatically detect components of non-cooperative satellites on orbit. Hardware-in-the-loop experiments were performed on the ORION Maneuver Kinematics Simulator at Florida Tech to compare the performance of the new model deployed on a small, resource-constrained FPGA against an equivalent algorithm on a microcomputer system. Results show the FPGA implementation increases throughput and decreases latency while maintaining comparable accuracy. These findings suggest that future missions should consider deploying computer vision algorithms on space-grade FPGAs.
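The reported comparison rests on throughput and latency measurements of the same detector on two platforms. A minimal sketch of such a benchmarking harness is given below; the callable `infer` stands in for either the FPGA-accelerated or the microcomputer model runner, and the warm-up count and reported statistics are assumptions rather than the paper's actual test setup.

```python
import time
import statistics

def benchmark(infer, frames, warmup=10):
    """Measure per-frame latency (ms) and throughput (fps) of an inference callable.

    `infer` is a placeholder for a platform-specific model runner; `frames` is
    any sequence of preprocessed input images.
    """
    for f in frames[:warmup]:
        infer(f)                           # warm up caches and pipelines
    latencies = []
    t0 = time.perf_counter()
    for f in frames:
        t = time.perf_counter()
        infer(f)
        latencies.append((time.perf_counter() - t) * 1e3)
    wall = time.perf_counter() - t0
    return {
        "median_latency_ms": statistics.median(latencies),
        "throughput_fps": len(frames) / wall,
    }
```

Running the same harness against both runners on identical frame sequences yields the paired latency/throughput figures needed for the kind of comparison the abstract describes, with accuracy evaluated separately (e.g., via mAP) on the same detections.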