Abstract: Parameter-Efficient Fine-Tuning (PEFT) methods have gained significant popularity for adapting pre-trained Large Language Models (LLMs) to downstream tasks, primarily because they can greatly reduce memory and computational overhead. However, a common limitation of most PEFT approaches is that they apply a uniform architectural design across all layers: identical trainable modules are attached everywhere, ignoring the varying importance of individual layers and leading to sub-optimal fine-tuning results. To overcome this limitation and obtain better performance, we develop a novel approach, Importance-aware Sparse Tuning (IST), which exploits the inherent sparsity of fine-tuning and selects the most important subset of layers using an effective layer-wise importance score. IST is a versatile, plug-and-play technique compatible with any PEFT method that operates on a per-layer basis. Leveraging the estimated importance scores, IST dynamically updates only the PEFT modules of the selected layers, reducing memory demands. We further provide a theoretical proof of convergence and empirical evidence of superior performance over uniform updating strategies. Extensive experiments across a range of LLMs, PEFT methods, and downstream tasks substantiate the effectiveness of the proposed method and showcase IST's capacity to enhance existing layer-based PEFT methods. Our code is available at https://github.com/Kaiseem/IST.
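The following is a minimal sketch of what importance-aware selection of a layer subset might look like, assuming PyTorch and LoRA-style adapters; the importance score used here (an exponential moving average of adapter gradient norms) and the re-selection interval are illustrative assumptions, not the scoring rule or schedule of the IST paper.

```python
# Hedged sketch: importance-aware sparse selection of per-layer PEFT (LoRA-style)
# modules. The importance score (EMA of adapter gradient norms) and the top-k
# re-selection interval are illustrative stand-ins for IST's actual mechanism.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank adapter."""
    def __init__(self, d_in, d_out, rank=8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        return self.base(x) + x @ self.lora_A.t() @ self.lora_B.t()

def update_importance(scores, layers, beta=0.9):
    """EMA of per-layer adapter gradient norms as a stand-in importance score."""
    for i, layer in enumerate(layers):
        g = sum(p.grad.norm().item() for p in (layer.lora_A, layer.lora_B)
                if p.grad is not None)
        scores[i] = beta * scores[i] + (1 - beta) * g
    return scores

def select_layers(scores, layers, k):
    """Enable gradients only for the adapters of the k most important layers."""
    topk = set(torch.topk(torch.tensor(scores), k).indices.tolist())
    for i, layer in enumerate(layers):
        layer.lora_A.requires_grad_(i in topk)
        layer.lora_B.requires_grad_(i in topk)

# Toy training loop over a stack of adapted layers.
layers = nn.ModuleList([LoRALinear(64, 64) for _ in range(8)])
scores = [0.0] * len(layers)
opt = torch.optim.AdamW([p for l in layers for p in (l.lora_A, l.lora_B)], lr=1e-3)
for step in range(100):
    h = torch.randn(16, 64)
    for layer in layers:
        h = torch.relu(layer(h))
    loss = h.pow(2).mean()
    opt.zero_grad(set_to_none=True)
    loss.backward()
    scores = update_importance(scores, layers)
    if step % 10 == 0:            # periodically re-select the important layers
        select_layers(scores, layers, k=3)
    opt.step()
```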
Abstract: Privacy-preserving machine learning has become a popular area of research due to increasing concern over data privacy. One way to achieve privacy-preserving machine learning is secure multi-party computation (MPC), in which multiple mutually distrusting parties perform computations on data without revealing the data itself. We present Secure-TF, a privacy-preserving machine learning framework based on MPC. Our framework supports widely used machine learning models such as logistic regression, fully-connected neural networks, and convolutional neural networks. We propose novel cryptographic protocols with lower round complexity and less communication for computing sigmoid, ReLU, conv2D, and their derivatives, all of which are central building blocks of modern machine learning models. With these more efficient protocols, our system outperforms previous state-of-the-art privacy-preserving machine learning frameworks in the WAN setting.
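As a rough illustration of the MPC setting such a framework builds on (not Secure-TF's actual protocols), the sketch below uses 2-party additive secret sharing over a prime field to show why linear operations can be evaluated locally on shares with no communication, while non-linear functions such as sigmoid or ReLU need dedicated interactive protocols; the field size is an arbitrary assumption.

```python
# Hedged sketch: 2-party additive secret sharing over a prime field.
# Linear layers are "free" (each party computes on its own shares); non-linear
# functions are exactly what frameworks like Secure-TF design protocols for.
import secrets

P = 2**61 - 1  # prime modulus for the secret-sharing field (assumption)

def share(x):
    """Split an integer into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

def to_signed(v):
    """Map a field element back to a signed integer."""
    return v - P if v > P // 2 else v

# A data owner secret-shares a small input vector between two servers.
x = [5, -3, 7]
shares = [share(v % P) for v in x]
p0 = [s[0] for s in shares]   # party 0's shares
p1 = [s[1] for s in shares]   # party 1's shares

# With a public weight vector, each party applies the linear layer to its own
# shares locally, so the linear step costs no communication.
w = [2, 4, -1]
y0 = sum(wi * si for wi, si in zip(w, p0)) % P
y1 = sum(wi * si for wi, si in zip(w, p1)) % P

y = to_signed(reconstruct(y0, y1))
assert y == sum(wi * xi for wi, xi in zip(w, x))   # 2*5 + 4*(-3) - 7 = -9
print("reconstructed <w, x> =", y)

# ReLU(y) or sigmoid(y) cannot be computed share-by-share like this; evaluating
# them without revealing y is what the non-linear protocols are for.
```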
Abstract: Secure multi-party computation (MPC) is a subfield of cryptography whose aim is to create methods for multiple parties to jointly compute a function over their inputs while keeping those inputs private. The Secure Compare problem, introduced by Yao as the millionaires' problem, is an important problem in MPC. Privacy-Preserving Machine Learning (PPML), in turn, lies at the intersection of cryptography and machine learning: it allows a group of independent data owners to collaboratively learn a model over their data sets without exposing their private data. MPC is a cryptographic technique commonly used in PPML. In deep learning, ReLU is an important layer; to train a neural network with MPC, we need MPC protocols for ReLU and DReLU (the derivative of ReLU) in the forward and backward propagation of the network, respectively. In this paper, we give two new tools for MPC protocols, "G-module action" and "G-module recover", and use them to build protocols for Secure Compare, DReLU, and ReLU. The total online and offline communication of our protocols is much less than that of the state of the art.
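A minimal sketch of the role DReLU plays follows, assuming 2-party additive secret sharing, a trusted dealer for Beaver triples, and dealer-supplied DReLU shares as a stand-in for the paper's Secure Compare and G-module protocols (which are not reproduced here): given shares of the DReLU bit, ReLU(x) = DReLU(x) · x is obtained with a single Beaver multiplication.

```python
# Hedged sketch: NOT the paper's protocols. It only illustrates the identity
# ReLU(x) = DReLU(x) * x on additive shares, with a Beaver triple and the
# DReLU bit shares handed out by a trusted dealer standing in for the
# secure-comparison protocol the paper constructs.
import secrets

P = 2**61 - 1  # prime field for additive sharing (assumption)

def share(x):
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

def beaver_triple():
    """Dealer samples a, b and shares a, b, and c = a*b."""
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    return share(a), share(b), share((a * b) % P)

def beaver_mul(x_sh, y_sh, triple):
    """Multiply two shared values after opening e = x - a and f = y - b."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    e = reconstruct((x_sh[0] - a0) % P, (x_sh[1] - a1) % P)
    f = reconstruct((y_sh[0] - b0) % P, (y_sh[1] - b1) % P)
    z0 = (e * f + e * b0 + f * a0 + c0) % P   # party 0 adds the public e*f term
    z1 = (e * b1 + f * a1 + c1) % P
    return z0, z1

def to_signed(v):
    return v - P if v > P // 2 else v

x = -4                                  # private input to ReLU
x_sh = share(x % P)
drelu = 1 if x > 0 else 0               # dealer stand-in for the DReLU protocol
b_sh = share(drelu)

relu_sh = beaver_mul(b_sh, x_sh, beaver_triple())
print("ReLU(%d) =" % x, to_signed(reconstruct(*relu_sh)))   # -> 0
```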