Abstract: Test-time adaptation (TTA) allows a model to be adapted to an unseen domain without accessing the source data. Because practical environments provide data only in limited quantities, TTA must adapt with few samples. Recent TTA methods restrict this further by filtering input data for reliability, shrinking the effective data size and limiting the adaptation potential. To address this issue, we propose Feature Augmentation based Test-time Adaptation (FATA), a simple method that fully utilizes the limited input data through feature augmentation. FATA employs Normalization Perturbation to augment features and adapts the model using the FATA loss, which encourages the outputs of the augmented and original features to be similar. FATA is model-agnostic and can be seamlessly integrated into existing models without altering the model architecture. We demonstrate the effectiveness of FATA on various models and scenarios on ImageNet-C and Office-Home, validating its superiority in diverse real-world conditions.
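As a rough illustration of the idea, here is a minimal sketch of such a consistency objective, assuming Normalization Perturbation multiplies per-channel feature statistics by random noise and the FATA loss is a KL-style consistency term; the function names, noise form, and choice of divergence are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a FATA-style consistency objective (illustrative,
# not the authors' code). Assumes Normalization Perturbation scales
# each feature channel by Gaussian noise and the loss pulls the
# predictions for original and augmented features together.
import torch
import torch.nn.functional as F

def normalization_perturbation(feat, alpha=0.1):
    """Perturb per-channel feature scale; feat has shape (B, C, H, W)."""
    noise = 1.0 + alpha * torch.randn(
        feat.size(0), feat.size(1), 1, 1, device=feat.device)
    return feat * noise

def fata_consistency_loss(classifier_head, feat):
    """KL consistency between predictions on original and augmented features."""
    q = F.softmax(classifier_head(feat), dim=1).detach()  # original output
    p = F.log_softmax(classifier_head(normalization_perturbation(feat)), dim=1)
    return F.kl_div(p, q, reduction="batchmean")
```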
Abstract: Continual test-time adaptation (cTTA) methods are designed to facilitate the continual adaptation of models to dynamically changing real-world environments where computational resources are limited. Due to this inherent limitation, existing approaches fail to achieve accuracy and efficiency simultaneously. In particular, when only a single image is available, the instability of batch normalization layers and entropy loss severely destabilizes many existing methods in real-world cTTA scenarios. To overcome these challenges, we present BESTTA, a novel single-image continual test-time adaptation method guided by style transfer, which enables stable and efficient adaptation to the target environment by transferring the style of the input image to the source style. To implement the proposed method, we devise BeIN, a simple yet powerful normalization method, along with style-guided losses. We demonstrate that BESTTA effectively adapts to the continually changing target environment using only a single image, on both semantic segmentation and image classification tasks. Remarkably, BESTTA outperforms existing state-of-the-art methods while training only two parameters in a BeIN layer and consuming the least memory.
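To make the "two trainable parameters" idea concrete, the following is a minimal sketch of a BeIN-like layer, assuming it blends the instance statistics of a single test image with stored source statistics via two learned scalars; the class name, buffer layout, and mixing rule are assumptions for illustration only.

```python
# Minimal sketch of a BeIN-style normalization layer (illustrative,
# not the authors' code). Only two scalars (alpha, beta) are trained;
# they interpolate between the test image's instance statistics and
# precomputed source statistics.
import torch
import torch.nn as nn

class BeINSketch(nn.Module):
    def __init__(self, num_channels, src_mean, src_std):
        super().__init__()
        self.register_buffer("src_mean", src_mean.view(1, num_channels, 1, 1))
        self.register_buffer("src_std", src_std.view(1, num_channels, 1, 1))
        # The only two trainable parameters.
        self.alpha = nn.Parameter(torch.zeros(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-5
        # Interpolate input statistics toward the source style.
        mix_mean = self.src_mean + self.beta * (mu - self.src_mean)
        mix_std = self.src_std + self.alpha * (sigma - self.src_std)
        return (x - mu) / sigma * mix_std + mix_mean
```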
Abstract: Real-world vision tasks frequently suffer from adverse weather conditions, including rain, fog, snow, and raindrops, in captured images. Recently, several generic methods for restoring weather-degraded images have been proposed, aiming to remove multiple types of adverse weather effects from images. However, these methods treat weather conditions as discrete and mutually exclusive variables, and therefore fail to generalize to unforeseen conditions beyond the scope of the training data, such as the co-occurrence of rain, fog, and raindrops. Weather-degraded image restoration models should therefore adapt flexibly to the current, unknown weather condition to ensure reliable and optimal performance, and the adaptation method should cope with the data scarcity typical of real-world deployment. This paper proposes MetaWeather, a few-shot weather-degraded image restoration method for arbitrary weather conditions. At its core is the Degradation Pattern Matching Module (DPMM), which leverages representations from a few-shot support set by matching features between the input and sample images under new weather conditions. In addition, we build meta-knowledge with episodic meta-learning on top of the MetaWeather architecture to provide flexible adaptability. In the meta-testing phase, we adopt a parameter-efficient fine-tuning method to preserve the prebuilt knowledge and avoid overfitting. Experiments on the BID Task II.A dataset show that our method achieves the best PSNR and SSIM compared with state-of-the-art image restoration methods. Code is available at (TBA).
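For intuition only, here is a minimal sketch of query-to-support feature matching in the spirit of DPMM, under the assumption that the matching is realized as cross-attention from input-image tokens to support-set tokens; the module name, shapes, and attention formulation are illustrative guesses, not the paper's architecture.

```python
# Minimal sketch of few-shot feature matching (illustrative assumption:
# cross-attention from query tokens to support tokens, with the matched
# degradation pattern added back to the query features).
import torch
import torch.nn as nn

class DegradationMatcherSketch(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, query_feat, support_feat):
        # query_feat: (B, N, D) tokens from the degraded input image.
        # support_feat: (B, M, D) tokens from the few-shot support set.
        attn = torch.softmax(
            self.q(query_feat) @ self.k(support_feat).transpose(-2, -1)
            / query_feat.size(-1) ** 0.5, dim=-1)
        # Inject the matched degradation representation into the query.
        return query_feat + attn @ self.v(support_feat)
```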
Abstract: Unmanned ground vehicles (UGVs) in unstructured environments mostly operate through teleoperation. To enable stable teleoperated driving in unstructured environments, prior work has proposed driver assistance and evaluation methods that rely on user studies, which are costly and demand considerable time and effort. Simulation model-based approaches have been proposed to complement user studies; however, existing models of teleoperated driving do not account for unstructured environments. We propose simulation models of teleoperated driving that model drivers with a deep generative model. First, we build a teleoperated driving simulator that imitates unstructured environments based on previous research and collect driving data from drivers. We then design and implement the simulation models based on a conditional variational autoencoder (CVAE). Our evaluation results demonstrate that the proposed teleoperated driving model generates data that appropriately simulates drivers in unstructured canyon terrains.
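For readers unfamiliar with CVAEs in this setting, the following is a minimal sketch of a conditional VAE for driver modeling, assuming the condition is an encoded vehicle/terrain state and the output is the driver's control command; all dimensions and names are illustrative assumptions.

```python
# Minimal sketch of a CVAE for driver modeling (illustrative, not the
# paper's model). The encoder infers a latent driver state from the
# control command and the driving context; the decoder reconstructs
# the command from the latent state and the context.
import torch
import torch.nn as nn

class DriverCVAESketch(nn.Module):
    def __init__(self, ctrl_dim=2, cond_dim=16, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(ctrl_dim + cond_dim, 64), nn.ReLU(),
            nn.Linear(64, 2 * latent_dim))  # outputs mean and log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 64), nn.ReLU(),
            nn.Linear(64, ctrl_dim))

    def forward(self, ctrl, cond):
        mu, logvar = self.encoder(torch.cat([ctrl, cond], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.decoder(torch.cat([z, cond], -1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon, kl  # train with reconstruction loss + kl
```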
Abstract: Because of the limitations of autonomous driving technologies, teleoperation is widely used in dangerous environments such as military operations. However, teleoperated driving performance depends considerably on the driver's skill level, and unskilled drivers need extensive training time for teleoperation in unusual and harsh environments. To address this problem, we propose a novel denoising-based driver assistance method, GoonDAE, for real-time teleoperated off-road driving. We assume that an unskilled driver's control input is a skilled driver's control input corrupted by noise. We designed a skip-connected long short-term memory (LSTM)-based denoising autoencoder (DAE) model that assists the unskilled driver by denoising the control input. The proposed GoonDAE was trained with skilled-driver control inputs and sensor data collected from our simulated off-road driving environment. To evaluate GoonDAE, we conducted an experiment with unskilled drivers in the simulated environment. The results revealed that the proposed system considerably improved driving stability.
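To illustrate the denoising framing, here is a minimal sketch of a skip-connected LSTM denoising autoencoder for control sequences, assuming noisy (unskilled) control inputs are mapped to clean (skilled) ones; layer sizes, the single-skip design, and names are illustrative assumptions.

```python
# Minimal sketch of a skip-connected LSTM denoising autoencoder
# (illustrative, not the paper's model). The skip connection lets the
# network predict a correction to the noisy control input rather than
# regenerating the clean signal from scratch.
import torch
import torch.nn as nn

class GoonDAESketch(nn.Module):
    def __init__(self, input_dim=2, hidden_dim=64):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, input_dim)

    def forward(self, noisy_ctrl):
        # noisy_ctrl: (B, T, input_dim) sequence of control commands.
        h, _ = self.encoder(noisy_ctrl)
        d, _ = self.decoder(h)
        # Skip connection from input to output.
        return noisy_ctrl + self.out(d)
```

Training would then minimize a reconstruction loss (e.g., MSE) between the denoised output and the skilled driver's recorded commands.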