Abstract: Pose estimation of an uncooperative space resident object is a key asset towards autonomy in close-proximity operations. In this context, monocular cameras are a valuable solution because of their low system requirements. However, the associated image processing algorithms are either too computationally expensive for real-time on-board implementation or not accurate enough. In this paper we propose pose estimation software exploiting neural network architectures that can be scaled to different accuracy-latency trade-offs. We designed our pipeline to be compatible with Edge Tensor Processing Units to show how low-power machine learning accelerators could enable Artificial Intelligence exploitation in space. The neural networks were tested both on the benchmark Spacecraft Pose Estimation Dataset and on the purposely developed Cosmo Photorealistic Dataset, which depicts a COSMO-SkyMed satellite in a variety of random poses and steerable solar panel orientations. The lightest version of our architecture achieves state-of-the-art accuracy on both datasets at a fraction of the network complexity, running at 7.7 frames per second on a Coral Dev Board Mini while consuming just 2.2 W.