Abstract: The rapid proliferation of non-cooperative spacecraft and space debris in orbit has precipitated a surge in demand for on-orbit servicing and space debris removal at a scale that only autonomous missions can address, but the prerequisite autonomous navigation and flightpath planning needed to safely capture an unknown, non-cooperative, tumbling space object remains an open problem. This requires algorithms for real-time, automated spacecraft feature recognition to pinpoint the locations of collision hazards (e.g. solar panels or antennas) and safe docking features (e.g. satellite bodies or thrusters) so that safe, effective flightpaths can be planned. Prior work in this area shows that the performance of computer vision models is highly dependent on the training dataset and its coverage of scenarios visually similar to those encountered in deployment. Hence, an algorithm may exhibit degraded performance under certain lighting conditions even when the chaser's rendezvous maneuver relative to the target spacecraft is unchanged. This work examines how humans perform these tasks through a survey in which aerospace engineering students familiar with spacecraft shapes and components identified features of four spacecraft: Landsat, Envisat, Anik, and Mir. The survey reveals that the most common pattern in the human detection process was to consider the shape and texture of the features: antennas, solar panels, thrusters, and satellite bodies. This work introduces a novel algorithm, SpaceYOLO, which fuses the state-of-the-art object detector YOLOv5 with a separate neural network based on these human-inspired decision processes exploiting shape and texture. SpaceYOLO's performance in autonomous spacecraft detection is compared to that of ordinary YOLOv5 in hardware-in-the-loop experiments under different lighting and chaser maneuver conditions at the ORION Laboratory at Florida Tech.
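To make the fusion idea concrete, the sketch below shows one plausible two-stage pipeline in the spirit of the abstract: YOLOv5 proposes component regions, and a separate network re-scores each crop from shape/texture cues. The secondary network, class list, and fusion step are illustrative assumptions, not the published SpaceYOLO architecture.

```python
# Hypothetical two-stage sketch: YOLOv5 proposals + a shape/texture re-classifier.
import torch
import torch.nn as nn

# Pretrained YOLOv5 detector from the ultralytics hub (stand-in for the trained spacecraft model).
detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Placeholder shape/texture network; a real system would be trained on
# antenna / solar panel / thruster / satellite-body crops.
shape_texture_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),  # antenna, solar panel, thruster, body (assumed classes)
)

def detect_components(image):
    """image: RGB numpy array (H x W x 3). Returns boxes with re-scored class probabilities."""
    results = detector(image)
    fused = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        crop = image[int(y1):int(y2), int(x1):int(x2)]
        tensor = torch.from_numpy(crop).permute(2, 0, 1)[None].float() / 255.0
        probs = shape_texture_net(tensor).softmax(-1)  # shape/texture-based class scores
        fused.append(((x1, y1, x2, y2), conf, probs))
    return fused
```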
Abstract: Autonomous navigation and path-planning around non-cooperative space objects is an enabling technology for on-orbit servicing and space debris removal systems. The navigation task includes the determination of target object motion, the identification of target object features suitable for grasping, and the identification of collision hazards and other keep-out zones. Given this knowledge, chaser spacecraft can be guided towards capture locations without damaging the target object and without unduly disrupting the operations of a target being serviced by covering up its solar arrays or communication antennas. One way to autonomously achieve target identification, characterization, and feature recognition is through artificial intelligence algorithms. This paper discusses how the combination of cameras and machine learning algorithms can accomplish the relative navigation task. The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Network (Faster R-CNN) and You Only Look Once (YOLOv5), is tested using experimental data obtained in formation flight simulations in the ORION Lab at Florida Institute of Technology. The simulation scenarios vary the yaw motion of the target object, the chaser approach trajectory, and the lighting conditions in order to test the algorithms in a wide range of realistic and performance-limiting situations. The analysis includes mean average precision (mAP) metrics to compare the performance of the object detectors. The paper discusses the path to implementing the feature recognition algorithms and integrating them into the spacecraft Guidance, Navigation, and Control system.
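As a minimal sketch of the comparison metric, the snippet below evaluates two sets of detections against the same ground truth using mean average precision; using torchmetrics for the metric is an assumption, and loading the predictions and labels from the experiment footage is left abstract.

```python
# Minimal mAP evaluation sketch for comparing two detectors on identical ground truth.
from torchmetrics.detection.mean_ap import MeanAveragePrecision

def evaluate(predictions, ground_truth):
    """predictions / ground_truth: one dict per image with 'boxes' (N x 4, xyxy tensors),
    'labels', and (for predictions) 'scores'."""
    metric = MeanAveragePrecision(iou_type="bbox")
    metric.update(predictions, ground_truth)
    return metric.compute()  # dict with 'map', 'map_50', 'map_75', ...

# e.g. compare evaluate(yolo_preds, labels)["map_50"] against
#      evaluate(faster_rcnn_preds, labels)["map_50"] for each scenario.
```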
Abstract: Space debris is on the rise due to the increasing demand for spacecraft for communication, navigation, and other applications. The Space Surveillance Network (SSN) tracks over 27,000 large pieces of debris and estimates the number of small, untrackable fragments at over 100,000. To control the growth of debris, the formation of further debris must be reduced. Some solutions include deorbiting larger non-cooperative resident space objects (RSOs) or servicing satellites in orbit. Both require rendezvous with RSOs, and the scale of the problem calls for autonomous missions. This paper introduces the Multipurpose Autonomous Rendezvous Vision-Integrated Navigation system (MARVIN), developed and tested at the ORION Facility at Florida Institute of Technology. MARVIN consists of two subsystems: a machine vision-aided navigation system and an artificial potential field (APF) guidance algorithm, which work together to command a swarm of chasers to safely rendezvous with the RSO. We present the MARVIN architecture and hardware-in-the-loop experiments demonstrating autonomous, collaborative swarm satellite operations, successfully guiding three drones to rendezvous with a physical mockup of a non-cooperative satellite in motion.
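The guidance idea can be illustrated with a textbook artificial potential field: an attractive term pulls each chaser toward its assigned capture point while repulsive terms push it away from keep-out zones. The sketch below is generic APF guidance with assumed gains and distances, not the MARVIN flight parameters.

```python
# Minimal artificial potential field (APF) guidance sketch.
import numpy as np

K_ATT = 1.0   # attractive gain (assumed)
K_REP = 5.0   # repulsive gain (assumed)
RHO_0 = 2.0   # repulsion influence distance in meters (assumed)

def apf_command(chaser_pos, goal_pos, obstacle_positions):
    """Return a commanded velocity direction from the negative gradient of the potential."""
    force = K_ATT * (goal_pos - chaser_pos)                      # attraction toward capture point
    for obs in obstacle_positions:
        diff = chaser_pos - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < RHO_0:                                   # repulsion only inside influence radius
            force += K_REP * (1.0 / rho - 1.0 / RHO_0) / rho**2 * (diff / rho)
    return force

# e.g. apf_command(np.array([0., 0., 0.]), np.array([5., 0., 0.]),
#                  [np.array([2.5, 0.3, 0.])])
```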
Abstract: The effective use of computer vision and machine learning for on-orbit applications has been hampered by limited computing capabilities and therefore limited performance. While embedded systems utilizing ARM processors have been shown to meet acceptable but low performance standards, the recent availability of larger space-grade field programmable gate arrays (FPGAs) shows potential to exceed the performance of microcomputer systems. This work proposes the use of a neural network-based object detection algorithm that can be deployed on a comparatively resource-constrained FPGA to automatically detect components of non-cooperative satellites on orbit. Hardware-in-the-loop experiments were performed on the ORION Maneuver Kinematics Simulator at Florida Tech to compare the performance of the new model deployed on a small, resource-constrained FPGA to that of an equivalent algorithm on a microcomputer system. Results show the FPGA implementation increases throughput and decreases latency while maintaining comparable accuracy. These findings suggest future missions should consider deploying computer vision algorithms on space-grade FPGAs.
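For clarity on the two figures of merit being compared, the sketch below times the same detector callable on a batch of frames and reports per-frame latency and throughput; the `run_inference` callable (FPGA or microcomputer backend) is an assumption, not part of the described system.

```python
# Minimal latency / throughput measurement sketch for a detector backend.
import time

def benchmark(run_inference, frames):
    latencies = []
    for frame in frames:
        start = time.perf_counter()
        run_inference(frame)                      # one forward pass on the device under test
        latencies.append(time.perf_counter() - start)
    mean_latency = sum(latencies) / len(latencies)   # seconds per frame
    throughput = len(frames) / sum(latencies)        # frames per second
    return mean_latency, throughput
```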