Abstract: Privacy-preserving machine learning (PPML) aims to enable machine learning (ML) algorithms to be used on sensitive data. We contribute to this line of research by proposing a framework that allows efficient and secure evaluation of full-fledged state-of-the-art ML algorithms via secure multi-party computation (MPC). This is in contrast to most prior works, which substitute ML algorithms with approximated "MPC-friendly" variants. A drawback of the latter approach is that fine-tuning of the combined ML and MPC algorithms is required, which might lead to less efficient algorithms or inferior-quality ML. This is an issue for secure deep neural network (DNN) training in particular, as it involves arithmetic operations thought to be "MPC-unfriendly", namely, integer division, exponentiation, inversion, and square root. In this work, we propose secure and efficient protocols for these seemingly MPC-unfriendly computations. Our protocols are three-party protocols in the honest-majority setting, and we propose both passively secure and actively secure (with abort) variants. A notable feature of our protocols is that they simultaneously provide high accuracy and efficiency. This framework enables us to efficiently and securely compute modern ML algorithms such as Adam and the softmax function "as is", without resorting to approximations. As a result, we obtain secure DNN training that outperforms state-of-the-art three-party systems; our full training is up to 6.7 times faster than just the online phase of the recently proposed FALCON@PETS'21 on a standard benchmark network. We further perform measurements on real-world DNNs, AlexNet and VGG16. Compared to FALCON, our framework is up to about 12-14 times faster for AlexNet and 46-48 times faster for VGG16 in reaching accuracies of 70% and 75%, respectively.
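For intuition only, the following is a minimal plaintext NumPy sketch (not the paper's secure protocol) of the two computations the abstract evaluates "as is": the softmax function and a single Adam update. Both rely on exponentiation, division/inversion, and square root, exactly the operations flagged as "MPC-unfriendly". Function names and hyperparameter defaults here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax; in MPC this needs secure exponentiation and division.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update; in MPC this needs secure square root and division (inversion).
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)          # bias correction (division)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```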
Abstract: With the widespread use of LBSs (Location-Based Services), synthesizing location traces plays an increasingly important role in analyzing spatial big data while protecting users' privacy. Although location synthesizers have been widely studied, existing synthesizers do not provide sufficient utility, privacy, or scalability, and hence are not practical for large-scale location traces. To overcome this issue, we propose a novel location synthesizer called PPMTF (Privacy-Preserving Multiple Tensor Factorization). We model various statistical features of the original traces by a transition-count tensor and a visit-count tensor. We simultaneously factorize these two tensors via multiple tensor factorization, and train factor matrices via posterior sampling. Then we synthesize traces from the reconstructed tensors using the MH (Metropolis-Hastings) algorithm. We comprehensively evaluate the proposed method using two datasets. Our experimental results show that the proposed method preserves various statistical features, provides plausible deniability, and synthesizes large-scale location traces in practical time. The proposed method also significantly outperforms the state of the art with regard to utility, privacy, and scalability.
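As a rough illustration only, the hypothetical NumPy sketch below shows one way a trace could be sampled with the Metropolis-Hastings algorithm from a reconstructed visit distribution (used as the target) and a reconstructed transition matrix (used as the proposal). The exact targets, proposal, and acceptance rule in PPMTF may differ; all names and choices here are assumptions.

```python
import numpy as np

def synthesize_trace(pi, Q, length, rng=None):
    # Hypothetical sketch: MH sampling of a location trace whose stationary
    # distribution follows the reconstructed visit distribution pi, using the
    # reconstructed (row-stochastic) transition matrix Q as the proposal.
    rng = rng or np.random.default_rng()
    n = len(pi)
    trace = [int(rng.choice(n, p=pi))]
    for _ in range(length - 1):
        cur = trace[-1]
        cand = int(rng.choice(n, p=Q[cur]))
        # Standard MH acceptance ratio with target pi and proposal Q
        alpha = min(1.0, (pi[cand] * Q[cand, cur]) / (pi[cur] * Q[cur, cand] + 1e-12))
        trace.append(cand if rng.random() < alpha else cur)
    return trace
```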