Abstract: Reproducible image preprocessing is important in computer vision, both for comparing algorithms efficiently and for preparing new image corpora. In this paper, we propose a method to obtain an explicit and ordered sequence of transformations that improves a given image: the computation is performed by a nature-inspired optimization algorithm guided by image quality assessment techniques. Preliminary tests show the impact of the approach on several state-of-the-art data sets.
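The abstract does not specify the optimizer, the transformation set, or the quality measure, but the overall loop can be sketched as follows. This is a minimal, hypothetical Python sketch assuming Pillow transformations, a gradient-energy proxy for no-reference quality, and a simple (1+1) evolutionary search standing in for the nature-inspired algorithm; none of these specific choices come from the paper.

```python
# Hedged sketch: search for an ordered sequence of image transformations
# that maximizes a no-reference quality score. Transform set, quality proxy
# and optimizer are illustrative assumptions, not the paper's choices.
import random
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

TRANSFORMS = {
    "contrast+": lambda im: ImageEnhance.Contrast(im).enhance(1.2),
    "contrast-": lambda im: ImageEnhance.Contrast(im).enhance(0.8),
    "sharpen":   lambda im: im.filter(ImageFilter.SHARPEN),
    "denoise":   lambda im: im.filter(ImageFilter.MedianFilter(3)),
    "brighten":  lambda im: ImageEnhance.Brightness(im).enhance(1.1),
}

def quality(im):
    """Crude no-reference quality proxy: mean gradient energy (sharpness)."""
    g = np.asarray(im.convert("L"), dtype=float)
    gy, gx = np.gradient(g)
    return float(np.mean(gx ** 2 + gy ** 2))

def mutate(seq):
    """Randomly add, drop, or swap one transform in the sequence."""
    seq = list(seq)
    op = random.choice(["add", "drop", "swap"])
    if op == "add" or not seq:
        seq.insert(random.randrange(len(seq) + 1), random.choice(list(TRANSFORMS)))
    elif op == "drop":
        seq.pop(random.randrange(len(seq)))
    else:
        i, j = random.randrange(len(seq)), random.randrange(len(seq))
        seq[i], seq[j] = seq[j], seq[i]
    return seq

def apply_sequence(im, seq):
    for name in seq:
        im = TRANSFORMS[name](im)
    return im

def optimize(im, iterations=200):
    """(1+1) evolutionary loop: keep a mutated sequence if it scores better."""
    best_seq, best_score = [], quality(im)
    for _ in range(iterations):
        cand = mutate(best_seq)
        score = quality(apply_sequence(im, cand))
        if score > best_score:
            best_seq, best_score = cand, score
    return best_seq  # explicit, ordered, reproducible preprocessing recipe
```

The returned list of transform names is what makes the preprocessing reproducible: it can be stored alongside the corpus and replayed on demand.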
Abstract: We present novel active learning strategies dedicated to the cold-start stage, i.e., initializing the classification of a large set of data with no attached labels. The proposed strategies are moreover designed to handle an imbalanced context in which random selection is highly inefficient. Specifically, our active learning iterations address label scarcity and imbalance using per-element scores that combine information extracted from a clustering structure with the output of a label propagation model. The strategy is illustrated by a case study on annotating Twitter content with respect to testimonies of a real flood event. We show that our method effectively copes with class imbalance by boosting the recall of samples from the minority class.
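For illustration only, one round of such a cold-start selection could be sketched with scikit-learn as below, combining a representativeness term derived from a k-means clustering with an uncertainty term from a label propagation model, the two ingredients named in the abstract. The combination rule, the weighting parameter alpha, and the specific estimators are assumptions made for this example, not the paper's strategy.

```python
# Hedged sketch of one cold-start active learning round under the
# assumptions stated above (k-means + LabelSpreading, linear score mix).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.semi_supervised import LabelSpreading

def select_batch(X, y_partial, n_clusters=20, batch_size=50, alpha=0.5):
    """Return indices of unlabeled samples to annotate next.

    X         : (n_samples, n_features) feature matrix
    y_partial : labels with -1 for unlabeled samples (LabelSpreading convention)
    alpha     : assumed trade-off between representativeness and uncertainty
    """
    unlabeled = np.flatnonzero(y_partial == -1)

    # (a) representativeness: closeness to the nearest cluster centroid
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    dist = np.min(km.transform(X), axis=1)
    representativeness = 1.0 / (1.0 + dist)

    # (b) uncertainty from label propagation (entropy of class posteriors);
    # skipped at the very first round, when no label exists yet
    if np.any(y_partial != -1):
        lp = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
        p = np.clip(lp.label_distributions_, 1e-12, 1.0)
        uncertainty = -np.sum(p * np.log(p), axis=1)
    else:
        uncertainty = np.zeros(len(X))

    score = alpha * representativeness + (1 - alpha) * uncertainty
    # pick the highest-scoring unlabeled samples for manual annotation
    return unlabeled[np.argsort(-score[unlabeled])[:batch_size]]
```

Selected indices would be sent to annotators, the returned labels merged into `y_partial`, and the round repeated; how the real scores favor the minority class is specific to the paper and not reproduced here.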
Abstract: In this paper, we investigate the conversion of a Twitter corpus into geo-referenced raster cells holding the probability that the associated geographical areas are flooded. We describe a baseline approach that combines a density ratio function, aggregation using a spatio-temporal Gaussian kernel, and TF-IDF textual features. These features are transformed into probabilities using a logistic regression model. The method is evaluated on a corpus collected after the floods that followed Hurricane Harvey in the Houston urban area in August-September 2017. The baseline reaches an F1 score of 68%. We highlight research directions likely to improve these initial results.
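A rough sketch of such a baseline pipeline is given below: tweets are grouped into raster cells, a per-cell density ratio and a spatio-temporal Gaussian weight are computed, TF-IDF features are added, and a logistic regression outputs the flood probability. The grid resolution, the density-ratio definition, the kernel bandwidths, and the keyword filter are illustrative assumptions, not the values used in the paper.

```python
# Hedged sketch of the baseline pipeline under the assumptions stated above.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def cell_index(lat, lon, cell_deg=0.01):
    """Map a coordinate to a raster-cell identifier (assumed 0.01 degree grid)."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def build_cell_features(tweets, keyword="flood", sigma_km=2.0, sigma_h=6.0):
    """tweets: dicts with 'lat', 'lon', 'hours' (offset to a reference time), 'text'."""
    cells = {}
    for t in tweets:
        cells.setdefault(cell_index(t["lat"], t["lon"]), []).append(t)

    cell_ids, texts, numeric = [], [], []
    for cid, group in cells.items():
        related = [t for t in group if keyword in t["text"].lower()]
        # density ratio: share of flood-related tweets in the cell (assumed form)
        ratio = len(related) / len(group)
        # spatio-temporal Gaussian aggregation: weight related tweets by their
        # distance to the cell centre (~111 km per degree) and to a reference time
        c_lat = np.mean([t["lat"] for t in group])
        c_lon = np.mean([t["lon"] for t in group])
        weight = sum(
            np.exp(-(((t["lat"] - c_lat) * 111) ** 2
                     + ((t["lon"] - c_lon) * 111) ** 2) / (2 * sigma_km ** 2)
                   - (t["hours"] ** 2) / (2 * sigma_h ** 2))
            for t in related
        )
        cell_ids.append(cid)
        numeric.append([ratio, weight, len(group)])
        texts.append(" ".join(t["text"] for t in group))

    tfidf = TfidfVectorizer(max_features=2000).fit_transform(texts)
    X = hstack([csr_matrix(np.asarray(numeric)), tfidf])
    return cell_ids, X

# Training with reference labels (1 if the cell was actually flooded):
# cell_ids, X = build_cell_features(tweets)
# clf = LogisticRegression(max_iter=1000).fit(X, y)
# flood_probability = clf.predict_proba(X)[:, 1]
```

The per-cell probabilities can then be written back to a raster aligned with the grid used in `cell_index`, giving the geo-referenced output described in the abstract.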