Abstract: Recent advances in large language models (LLMs) have showcased their impressive capabilities. On mobile devices, the wealth of valuable, non-public data generated daily holds great promise for fine-tuning personalized LLMs locally, while preserving privacy through on-device processing. However, the resource constraints of mobile devices make direct on-device LLM fine-tuning challenging, mainly because derivative-based optimization is memory-intensive: it must store gradients and optimizer states. To tackle this, we propose employing derivative-free optimization techniques, which require only forward passes, to enable on-device fine-tuning of LLMs even on memory-limited mobile devices. Empirical results demonstrate that RoBERTa-large and OPT-1.3B can be fine-tuned locally on an OPPO Reno 6 smartphone with derivative-free optimization, using around 4 GB and 6.5 GB of memory, respectively. This demonstrates the feasibility of on-device LLM fine-tuning on mobile devices, paving the way for personalized LLMs on resource-constrained devices while safeguarding data privacy.
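To make the memory argument concrete, below is a minimal sketch of one well-known family of derivative-free updates: a two-point SPSA-style zeroth-order step (in the spirit of MeZO, which fine-tunes with forward passes only). The `loss_fn` interface, hyperparameters, and the seed-replay trick are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def zo_sgd_step(model, loss_fn, batch, lr=1e-6, eps=1e-3, seed=0):
    """One zeroth-order SGD step (two-point SPSA estimate, sketch).

    Only forward passes are needed, so no gradients or optimizer states
    are stored -- the property that lets fine-tuning fit in a few GB.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    def perturb(scale):
        # Re-seeding regenerates the same random direction z on demand,
        # so z is never materialized alongside the weights.
        torch.manual_seed(seed)
        for p in params:
            z = torch.randn_like(p)
            p.data.add_(scale * eps * z)

    with torch.no_grad():
        perturb(+1)                      # theta + eps * z
        loss_plus = loss_fn(model, batch)
        perturb(-2)                      # theta - eps * z
        loss_minus = loss_fn(model, batch)
        perturb(+1)                      # back to theta

        grad_est = (loss_plus - loss_minus) / (2 * eps)  # scalar projection

        torch.manual_seed(seed)          # replay the same z for the update
        for p in params:
            z = torch.randn_like(p)
            p.data.add_(-lr * grad_est * z)

    return loss_plus.item()
```

Peak memory here stays near inference level: the only extra state per step is the scalar loss difference, which is what makes 4 GB / 6.5 GB footprints plausible for billion-parameter models.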
Abstract: Preserving the original noise residuals in images is critical to image fraud identification. Since the resizing operation common in deep learning pipelines damages the microstructure of image noise residuals, we propose a framework for training directly on images at their original input scales, without resizing. Our arbitrary-sized image training method relies mainly on pseudo-batch gradient descent (PBGD), which bridges the gap between the input batch and the update batch so that model updates proceed normally for arbitrary-sized images. In addition, a three-phase alternate training strategy is designed to learn optimal residual kernels for image fraud identification. With the learned residual kernels and PBGD, the proposed framework achieves state-of-the-art results in image fraud identification, especially for images with small tampered regions and for unseen images with different tampering distributions.
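The abstract does not spell out PBGD's update rule. One natural reading, sketched below under that assumption, is gradient accumulation: an input batch of one (so each arbitrary-sized image is fed unresized) decoupled from an update batch of `pseudo_batch` images. All names and the label interface are illustrative.

```python
import torch

def pbgd_epoch(model, loss_fn, images, labels, optimizer, pseudo_batch=32):
    """Pseudo-batch gradient descent (sketch under stated assumptions).

    Each image is processed alone (input batch = 1), so it is never
    resized; gradients are accumulated over `pseudo_batch` images
    (the update batch) before a single optimizer step.
    """
    optimizer.zero_grad()
    for i, (img, y) in enumerate(zip(images, labels), start=1):
        loss = loss_fn(model(img.unsqueeze(0)), y.unsqueeze(0))
        # Scale so the accumulated sum matches a mean over the pseudo-batch.
        (loss / pseudo_batch).backward()
        if i % pseudo_batch == 0:
            optimizer.step()
            optimizer.zero_grad()
    if len(images) % pseudo_batch:       # flush any remainder
        optimizer.step()
        optimizer.zero_grad()
```

The accumulated update is statistically equivalent to a mini-batch step, which is why training "can normally run" even though no two inputs share a tensor.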
Abstract: Recognizing apparel attributes has recently drawn great interest in the computer vision community. Methods based on various deep neural networks have been proposed for image classification and can be applied to apparel attribute recognition. An interesting open problem is how to ensemble these methods to further improve accuracy. In this paper, we propose a two-layer mixture framework for ensembling different networks. In the first layer, two types of ensemble learning methods, bagging and boosting, are applied separately. Unlike traditional bagging, our bagging process trains each model in the ensemble on the whole training set rather than on random subsets, using several differentiated deep networks to promote model variance. To avoid the bias of small-scale samples, the second layer adopts only bagging to mix the results obtained from bagging and boosting in the first layer. Experimental results demonstrate that the proposed mixture framework outperforms any individual network model and either independent ensemble method on apparel attribute classification.
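A minimal sketch of how the second-layer bagging could mix the first-layer outputs. The uniform averaging and the `*_probs` interfaces are simplifying assumptions (the paper's boosting stage may combine its models with learned weights rather than a plain mean).

```python
import numpy as np

def two_layer_mixture(bagging_probs, boosting_probs):
    """Two-layer mixture framework (sketch).

    Layer 1: `bagging_probs` holds class-probability outputs of several
    differentiated networks, each trained on the *whole* training set;
    `boosting_probs` holds outputs of a boosted model sequence.
    Layer 2: plain bagging (averaging) mixes the two layer-1 results,
    avoiding a learned combiner that small samples could bias.
    """
    bag = np.mean(bagging_probs, axis=0)     # combine the bagged networks
    boost = np.mean(boosting_probs, axis=0)  # combine the boosted models
    return np.argmax((bag + boost) / 2, axis=-1)
```

Keeping the second layer parameter-free is the design point: with few labeled samples left for meta-learning, a fixed average adds no extra variance of its own.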