Abstract: Characterizing the interior structure of exoplanets is essential for understanding their diversity, formation, and evolution. Because the interiors of exoplanets are inaccessible to observation, an inverse problem must be solved in which numerical structure models are matched to observable parameters such as mass and radius. This is a highly degenerate problem whose solution often relies on computationally expensive and time-consuming inference methods such as Markov chain Monte Carlo. We present ExoMDN, a machine-learning model for the interior characterization of exoplanets based on Mixture Density Networks (MDNs). The model is trained on a large dataset of more than 5.6 million synthetic planets below 25 Earth masses, each consisting of an iron core, a silicate mantle, a water and high-pressure ice layer, and a H/He atmosphere. We employ log-ratio transformations to convert the interior structure data into a form that the MDN can easily handle. Given mass, radius, and equilibrium temperature, we show that ExoMDN can deliver a full posterior distribution of the mass fractions and thicknesses of each planetary layer in under a second on a standard Intel i5 CPU. Observational uncertainties can readily be accounted for through repeated predictions drawn from within the uncertainties. We use ExoMDN to characterize the interiors of 22 confirmed exoplanets with mass and radius uncertainties below 10% and 5%, respectively, including the well-studied GJ 1214 b, GJ 486 b, and the TRAPPIST-1 planets. We discuss the inclusion of the fluid Love number $k_2$ as an additional (potential) observable, showing how it can significantly reduce the degeneracy of interior structures. Utilizing the fast predictions of ExoMDN, we show that measuring $k_2$ with an accuracy of 10% can constrain the thicknesses of the core and mantle of an Earth analog to within $\approx 13\%$ of the true values.
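The log-ratio transformation mentioned in the abstract maps compositional data (layer mass fractions that are positive and sum to one) onto unconstrained real coordinates, which are easier for a Mixture Density Network to model with Gaussian mixtures. The sketch below illustrates a generic additive log-ratio transform and its inverse; the exact transform, layer ordering, and reference component used by ExoMDN may differ, and the function names are purely illustrative.

```python
import numpy as np

def alr(fractions, eps=1e-12):
    """Additive log-ratio transform of layer mass fractions.

    `fractions` is an (n_samples, n_layers) array whose rows sum to 1
    (e.g. core, mantle, water/ice, H/He atmosphere). The last layer is
    taken as the reference component; the result has n_layers - 1 columns
    and is unbounded, which is the form a neural network can handle easily.
    """
    f = np.clip(np.asarray(fractions, dtype=float), eps, None)
    return np.log(f[:, :-1] / f[:, -1:])

def inverse_alr(z):
    """Map log-ratio coordinates back to mass fractions summing to 1."""
    expz = np.exp(np.concatenate([z, np.zeros((z.shape[0], 1))], axis=1))
    return expz / expz.sum(axis=1, keepdims=True)

# Example: a planet with 30% core, 60% mantle, 9% water/ice, 1% atmosphere
x = np.array([[0.30, 0.60, 0.09, 0.01]])
z = alr(x)                # unbounded coordinates for the MDN to predict
print(inverse_alr(z))     # recovers the original mass fractions
```

In the same spirit, observational uncertainties can be propagated by repeatedly drawing mass, radius, and equilibrium temperature from their error distributions and collecting the MDN posteriors for each draw, as described in the abstract.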