IRL CNRS CROSSING
Abstract: When training data is scarce, it is common to make use of a feature extractor that has been pre-trained on a large base dataset, either by fine-tuning its parameters on the "target" dataset or by directly adopting its representation as features for a simple classifier. Fine-tuning is ineffective for few-shot learning, since the target dataset contains only a handful of examples. However, directly adopting the features without fine-tuning relies on the base and target distributions being similar enough that these features achieve separability and generalization. This paper investigates whether better features for the target dataset can be obtained by training on fewer base classes, seeking to identify a more useful base dataset for a given task. We consider cross-domain few-shot image classification in eight different domains from Meta-Dataset and study multiple real-world settings (domain-informed, task-informed and uninformed) in which progressively less detail is known about the target task. To our knowledge, this is the first demonstration that fine-tuning on a subset of carefully selected base classes can significantly improve few-shot learning. Our contributions are simple and intuitive methods that can be implemented in any few-shot solution. We also give insights into the conditions under which these solutions are likely to provide a boost in accuracy. We release the code to reproduce all experiments from this paper on GitHub: https://github.com/RafLaf/Few-and-Fewer.git
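The abstract does not spell out how base classes are chosen or which simple classifier is used, so the following is only an illustrative sketch of the general idea, assuming frozen pre-trained features, cosine-similarity-based class selection, and a nearest-centroid classifier; all function names here are hypothetical.

```python
# Sketch (not the paper's exact method): rank base classes by how close their
# feature centroid is to the target support set, keep the k closest for
# fine-tuning, and classify target queries with a nearest-centroid rule on
# the frozen features.
import numpy as np

def select_base_classes(base_feats, base_labels, support_feats, k=50):
    """Return the k base classes whose centroids are most cosine-similar
    to the mean feature of the target support set."""
    support_centroid = support_feats.mean(axis=0)
    support_centroid /= np.linalg.norm(support_centroid) + 1e-8
    scores = {}
    for c in np.unique(base_labels):
        centroid = base_feats[base_labels == c].mean(axis=0)
        centroid /= np.linalg.norm(centroid) + 1e-8
        scores[c] = float(centroid @ support_centroid)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]  # candidate subset of base classes for fine-tuning

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    """Simple classifier on frozen features: assign each query to the class
    of the nearest support centroid (Euclidean distance)."""
    classes = np.unique(support_labels)
    centroids = np.stack([support_feats[support_labels == c].mean(axis=0)
                          for c in classes])
    dists = np.linalg.norm(query_feats[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]
```

In a domain-informed setting, the support features could come from held-out examples of the target domain; in the uninformed setting, a proxy support set would have to stand in for them.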
Abstract: Interest in remote monitoring has grown thanks to recent advancements in Internet-of-Things (IoT) paradigms. New applications have emerged that use small devices, called sensor nodes, capable of collecting data from the environment and processing it. However, ever more data are processed and transmitted over longer operational periods, while battery technologies have not improved fast enough to cope with these increasing needs. This makes energy consumption an increasingly challenging issue, and miniaturized energy-harvesting devices have therefore emerged to complement traditional energy sources. Nevertheless, the harvested energy fluctuates significantly during node operation, increasing the uncertainty about the energy actually available. Recently, energy-management approaches have been developed, in particular using reinforcement learning. In reinforcement learning, however, the algorithm's performance depends greatly on the reward function. In this paper, we present two contributions. First, we explore five different reward functions to identify the variables most suitable for obtaining the desired behaviour. Experiments were conducted using the Q-learning algorithm to adjust the energy consumption to the energy harvested. Results with the five reward functions illustrate how this choice impacts the energy consumption of the node. Second, we propose two additional reward functions able to find a compromise between energy consumption and node performance using a non-fixed balancing parameter. Our simulation results show that the proposed reward functions adjust the node's performance depending on the battery level and reduce the learning time.
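The abstract gives neither the state/action discretisation nor the exact form of the reward functions, so the snippet below is only a minimal sketch of the idea of a battery-dependent balancing parameter in tabular Q-learning; the reward shape, the duty-cycle action space, and all constants are assumptions made for illustration.

```python
# Sketch (not the paper's exact formulation): tabular Q-learning for an
# energy-harvesting sensor node. The reward trades off node performance
# against energy use through a weight beta that grows with the battery level,
# so a fuller battery favours performance and a low battery favours saving energy.
import numpy as np

N_BATTERY_LEVELS = 10          # discretised battery state (assumed)
N_DUTY_CYCLES = 5              # actions: increasing levels of node activity (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q_table = np.zeros((N_BATTERY_LEVELS, N_DUTY_CYCLES))

def reward(battery_level, duty_cycle):
    """Non-fixed balancing parameter: weight performance by the normalised
    battery level and energy cost by its complement."""
    beta = battery_level / (N_BATTERY_LEVELS - 1)       # in [0, 1]
    performance = duty_cycle / (N_DUTY_CYCLES - 1)       # normalised activity
    energy_cost = duty_cycle / (N_DUTY_CYCLES - 1)       # more activity costs more energy
    return beta * performance - (1.0 - beta) * energy_cost

def q_step(state, action, next_state, rng=np.random.default_rng()):
    """One epsilon-greedy Q-learning update; returns the next action."""
    r = reward(state, action)
    best_next = q_table[next_state].max()
    q_table[state, action] += ALPHA * (r + GAMMA * best_next - q_table[state, action])
    if rng.random() < EPSILON:
        return int(rng.integers(N_DUTY_CYCLES))          # explore
    return int(q_table[next_state].argmax())             # exploit
```

With this kind of reward, the learned policy tends to raise the duty cycle when harvested energy keeps the battery high and to throttle activity as the battery drains, which matches the behaviour the abstract describes.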