Abstract: Time domain astronomy is advancing towards the analysis of multiple massive datasets in real time, prompting the development of multi-stream machine learning models. In this work, we study Domain Adaptation (DA) for real/bogus classification of astronomical alerts using four different datasets: HiTS, DES, ATLAS, and ZTF. We study the domain shift between these datasets, and improve a naive deep learning classification model by using a fine-tuning approach and semi-supervised deep DA via Minimax Entropy (MME). We compare the balanced accuracy of these models for different source-target scenarios. We find that both the fine-tuning and MME models significantly improve on the base model with as few as one labeled item per class from the target dataset, and that, unlike fine-tuning, MME does not compromise its performance on the source dataset.
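The MME objective described above can be sketched numerically. This is a minimal, illustrative sketch (function names and the temperature value are assumptions, not taken from the paper's code): a cosine-similarity classifier scores features against per-class prototypes, and the mean Shannon entropy of the softmax predictions on unlabeled target alerts is the quantity the classifier maximizes while the feature extractor minimizes it (typically via a gradient-reversal layer, omitted here).

```python
import numpy as np

def cosine_logits(features, prototypes, temperature=0.05):
    """Cosine-similarity classifier: L2-normalized features scored
    against L2-normalized per-class prototype vectors."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return f @ w.T / temperature

def entropy(logits):
    """Mean Shannon entropy of softmax predictions; in MME this is
    maximized w.r.t. the classifier and minimized w.r.t. the features
    on unlabeled target-domain alerts."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())
```

Uniform logits give the maximum entropy log(K) for K classes; confident predictions drive the entropy toward zero, which is why the two players pull this quantity in opposite directions.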
Abstract: We present a real-time stamp classifier of astronomical events for the ALeRCE (Automatic Learning for the Rapid Classification of Events) broker. The classifier is based on a convolutional neural network with an architecture designed to exploit rotational invariance of the images, and trained on alerts ingested from the Zwicky Transient Facility (ZTF). Using only the \textit{science}, \textit{reference}, and \textit{difference} images of the first detection as inputs, along with the metadata of the alert as features, the classifier is able to correctly classify alerts from active galactic nuclei, supernovae (SNe), variable stars, asteroids and bogus classes, with high accuracy ($\sim$94\%) in a balanced test set. In order to find and analyze SN candidates selected by our classifier from the ZTF alert stream, we designed and deployed a visualization tool called SN Hunter, where relevant information about each possible SN is displayed for the experts to choose among candidates to report to the Transient Name Server database. We have reported 3060 SN candidates to date (9.2 candidates per day on average), of which 394 have been confirmed spectroscopically. Our ability to report objects using only a single detection means that 92\% of the reported SNe occurred within one day after the first detection. ALeRCE has only reported candidates not otherwise detected or selected by other groups, therefore adding new early transients to the bulk of objects available for early follow-up. Our work represents an important milestone toward rapid alert classification with the next generation of large-etendue telescopes, such as the Vera C. Rubin Observatory's Legacy Survey of Space and Time.
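One common way to exploit rotational invariance, consistent with the architecture described above, is to feed the network all four 90° rotations of a stamp and average the per-rotation outputs, so the prediction cannot depend on orientation. This is a hedged sketch of that idea (the `backbone` callable is a stand-in for the CNN, not the actual ALeRCE model):

```python
import numpy as np

def rotation_averaged_prediction(stamp, backbone):
    """Evaluate `backbone` on the stamp rotated by 0, 90, 180 and 270
    degrees and average the outputs; the result is invariant to 90-degree
    rotations of the input by construction."""
    outputs = [backbone(np.rot90(stamp, k)) for k in range(4)]
    return np.mean(outputs, axis=0)
```

Because rotating the input merely permutes which of the four rotated copies the backbone sees, the averaged output is identical for a stamp and its 90°-rotated version, even when the backbone itself is not rotation invariant.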
Abstract: In this work, we propose several enhancements to a geometric transformation based model for anomaly detection in images (GeoTransform). The model assumes that the anomaly class is unknown and that only inlier samples are available for training. We introduce new filter-based transformations useful for detecting anomalies in astronomical images that highlight artifact properties, making them more easily distinguishable from real objects. In addition, we propose a transformation selection strategy that allows us to find and discard pairs of transformations that are indistinguishable from each other. This results in an improvement of the area under the Receiver Operating Characteristic curve (AUROC) and accuracy, as well as a dimensionality reduction. The models were tested on astronomical images from the High Cadence Transient Survey (HiTS) and Zwicky Transient Facility (ZTF) datasets. The best models obtained an average AUROC of 99.20% for HiTS and 91.39% for ZTF. The improvement over the original GeoTransform algorithm and baseline methods, such as the One-Class Support Vector Machine and deep-learning-based methods, is significant both statistically and in practice.
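The scoring idea behind GeoTransform-style models can be sketched as follows (illustrative, not the authors' implementation): a classifier is trained on inliers to predict which of T transformations was applied to an image, and at test time an image whose transformed copies are classified with low confidence receives a high anomaly score.

```python
import numpy as np

def anomaly_score(probs_correct):
    """probs_correct[t] = probability the trained classifier assigns to the
    transformation actually applied in copy t. Low confidence across the
    transformed copies (typical for outliers) yields a high score."""
    return float(-np.mean(np.log(np.asarray(probs_correct) + 1e-12)))
```

Under this score, an inlier whose transformations are all recognized with near-certainty scores near zero, while poorly recognized (anomalous) images score higher; discarding indistinguishable transformation pairs, as proposed above, removes copies whose `probs_correct` entries are uninformative.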
Abstract: The main goal of the paper is to provide Pepper with a near real-time object recognition system based on deep neural networks. The proposed system is based on YOLO (You Only Look Once), a deep neural network that is able to detect and recognize objects robustly and at high speed. In addition, considering that YOLO cannot be run on Pepper's internal computer in near real-time, we propose to use a Backpack for Pepper, which holds a Jetson TK1 card and a battery. By using this card, Pepper is able to robustly detect and recognize objects in images of 320x320 pixels at about 5 frames per second.
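The near-real-time pipeline described above can be outlined as a simple capture-resize-detect loop with throughput measurement. This is only a skeleton under stated assumptions: `get_frame` and `detector` are placeholders (the actual system runs YOLO on the Jetson TK1 in Pepper's backpack), and `np.resize` stands in for proper image rescaling.

```python
import time
import numpy as np

def run_detection_loop(get_frame, detector, n_frames=10, size=(320, 320)):
    """Grab n_frames frames, coerce each to the detector's input size,
    run detection, and report the achieved frames per second."""
    start = time.perf_counter()
    # np.resize is a data-shape placeholder, not true image interpolation.
    results = [detector(np.resize(get_frame(), size)) for _ in range(n_frames)]
    fps = n_frames / (time.perf_counter() - start)
    return results, fps
```

Measuring fps over a batch of frames, as here, is how a figure like the reported ~5 frames per second at 320x320 would be obtained in practice.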