Abstract: Light emission from galaxies exhibits diverse brightness profiles, influenced by factors such as galaxy type, structural features, and interactions with other galaxies. Elliptical galaxies feature more uniform light distributions, while spiral and irregular galaxies have complex, varied light profiles due to their structural heterogeneity and star-forming activity. In addition, galaxies with an active galactic nucleus (AGN) feature intense, concentrated emission from gas accretion around supermassive black holes, superimposed on the regular galactic light, while quasi-stellar objects (QSOs) are the extreme case in which the AGN emission dominates the galaxy. The challenge of identifying AGN and QSOs has been discussed many times in the literature and often requires multi-wavelength observations. This paper introduces a novel approach to identifying AGN and QSOs from a single image. Diffusion models have recently been developed in the machine-learning literature to generate realistic-looking images of everyday objects. Utilising the spatial resolving power of the Euclid VIS images, we created a diffusion model trained on one million sources, without using any source pre-selection or labels. The model learns to reconstruct the light distributions of normal galaxies, since they dominate the population. We condition the prediction of the central light distribution by masking the central few pixels of each source and reconstructing the light according to the diffusion model. We then use this prediction to identify sources that deviate from the normal profile by examining the reconstruction error of the few central pixels regenerated in each source's core. Our approach, using VIS imaging alone, achieves high completeness compared to traditional methods of AGN and QSO selection based on optical, near-infrared, mid-infrared, and X-ray data. [abridged]
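The core-masking and reconstruction-error idea can be illustrated with a minimal anomaly-scoring sketch. The code below is not the paper's pipeline: `inpaint_fn` merely stands in for the trained diffusion model's inpainting step, and the mask size, zero-fill, and squared-error metric are assumptions chosen for illustration.

```python
import numpy as np

def central_mask(shape, half_width=3):
    """Boolean mask selecting the central (2*half_width)^2 pixels of a cutout."""
    mask = np.zeros(shape, dtype=bool)
    cy, cx = shape[0] // 2, shape[1] // 2
    mask[cy - half_width:cy + half_width, cx - half_width:cx + half_width] = True
    return mask

def anomaly_score(cutout, inpaint_fn, half_width=3):
    """
    Score how strongly a source's core deviates from the model's reconstruction.

    `inpaint_fn(image, mask)` is a placeholder for the trained diffusion model:
    it should return an image whose masked pixels are regenerated conditioned
    on the unmasked surroundings.
    """
    mask = central_mask(cutout.shape, half_width)
    masked = np.where(mask, 0.0, cutout)       # hide the central pixels
    reconstruction = inpaint_fn(masked, mask)  # diffusion-model inpainting
    # Reconstruction error restricted to the regenerated central pixels
    return np.mean((cutout[mask] - reconstruction[mask]) ** 2)

# Hypothetical usage: sources whose cores deviate most from a "normal" galaxy
# profile (e.g. AGN, QSOs) should receive the highest scores.
# scores = [anomaly_score(img, diffusion_inpaint) for img in vis_cutouts]
# candidates = np.argsort(scores)[::-1]
```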
Abstract: Sonification is the transformation of data into acoustic signals, achievable through different techniques. Sonification can be defined as a way to represent data values and relations as perceivable sounds, with the aim of facilitating their communication and interpretation. Just as data visualization conveys meaning via images, sonification conveys meaning via sound. Sonification approaches are useful in a number of scenarios. A first case is the possibility of receiving information while keeping other sensory channels free, for example in medical environments or while driving. Another scenario concerns the easier recognition of patterns when data have high dimensionality and cardinality. Finally, sonification can be applied to presentation and dissemination initiatives, including those with artistic goals. The zCOSMOS dataset contains detailed data on almost 20000 galaxies, describing the evolution of a relatively small portion of the universe over the last 10 billion years in terms of galaxy mass, absolute luminosity, redshift, distance, age, and star formation rate. The present paper proposes a sonification of this dataset with the following goals: i) providing a general description of the dataset, accessible via sound, which could also make unnoticed patterns emerge; ii) realizing an artistic yet scientifically accurate sonic portrait of a portion of the universe, thus bridging the gap between art and science in the context of scientific dissemination and so-called "edutainment"; iii) adding value to the dataset, since scientific data and achievements must also be considered a cultural heritage that needs to be preserved and enhanced. Both scientific and technological aspects of the sonification are addressed.
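To make the idea of representing data values as perceivable sounds concrete, here is a minimal parameter-mapping sketch. The `galaxy_to_tone` helper, the chosen properties, and the numeric ranges are hypothetical illustrations and are not the mapping adopted in the paper.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def galaxy_to_tone(redshift, log_mass, sfr, duration=0.3):
    """
    Map one galaxy's properties to a short tone (parameter-mapping sonification):
      redshift -> pitch    (more distant galaxies sound lower here)
      log_mass -> loudness
      sfr      -> depth of a 5 Hz vibrato (phase modulation, in radians)
    All ranges below are illustrative assumptions.
    """
    freq = np.interp(redshift, [0.0, 1.2], [880.0, 220.0])   # Hz
    amp = np.interp(log_mass, [8.0, 12.0], [0.1, 1.0])
    vibrato = np.interp(sfr, [0.0, 100.0], [0.0, 3.0])        # radians
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    phase = 2 * np.pi * freq * t + vibrato * np.sin(2 * np.pi * 5.0 * t)
    return amp * np.sin(phase)

# Concatenating the tones of all galaxies (e.g. ordered by redshift) gives a
# rough sonic sweep through the dataset; an actual sonification would be richer.
# audio = np.concatenate([galaxy_to_tone(z, m, s) for z, m, s in catalogue])
```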