Abstract:Deep-learning methods have shown promising performance for low-dose computed tomography (LDCT) reconstruction. However, supervised methods suffer from the scarcity of labeled data in clinical scenarios, and CNN-based unsupervised denoising methods tend to over-smooth the reconstructed image. Recently, methods based on normalizing flows (NFs) have shown advantages in producing detail-rich images and avoiding over-smoothing; however, two issues remain: (1) although alternating optimization in the data and latent spaces can exploit both the regularization and generation capabilities of NFs, the current two-way transformation strategy between noisy images and latent variables causes detail loss and secondary artifacts; and (2) training NFs on high-resolution CT images is computationally expensive. Although conditional normalizing flows (CNFs) can reduce this burden by learning conditional probabilities, current conditionalization methods require labeled data, so unsupervised CNF-based LDCT reconstruction remains an open problem. To address these issues, we propose a novel CNF-based unsupervised LDCT iterative reconstruction algorithm. It employs a strictly one-way transformation when performing alternating optimization in the dual spaces, thereby avoiding detail loss and secondary artifacts. With a novel unsupervised conditionalization strategy, we train CNFs on high-resolution CT images, achieving fast and high-quality unsupervised reconstruction. Experiments on different datasets show that the proposed algorithm can surpass state-of-the-art unsupervised and even some supervised methods.
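Since this abstract hinges on the two properties of normalizing flows it exploits, an invertible image-to-latent mapping and an exact likelihood, a minimal sketch may help. The single affine coupling layer, the toy weights W1/W2, and the 4-dimensional "images" below are illustrative assumptions, not the paper's architecture or its one-way optimization scheme.

```python
import numpy as np

# Toy affine coupling layer: the building block of normalizing flows.
# It illustrates an exactly invertible map between image space x and
# latent space z, plus a tractable log-likelihood via the
# change-of-variables formula.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 2)) * 0.1   # toy "network" weights for log-scale
W2 = rng.normal(size=(2, 2)) * 0.1   # toy "network" weights for shift

def forward(x):
    """x -> z (the flow used one-way: encode only)."""
    x1, x2 = x[:, :2], x[:, 2:]
    s = np.tanh(x1 @ W1)             # log-scale predicted from x1
    t = x1 @ W2                      # shift predicted from x1
    z2 = x2 * np.exp(s) + t
    log_det = s.sum(axis=1)          # log|det Jacobian| of the coupling
    return np.concatenate([x1, z2], axis=1), log_det

def inverse(z):
    """z -> x (exact inverse, available when needed)."""
    z1, z2 = z[:, :2], z[:, 2:]
    s = np.tanh(z1 @ W1)
    t = z1 @ W2
    x2 = (z2 - t) * np.exp(-s)
    return np.concatenate([z1, x2], axis=1)

x = rng.normal(size=(4, 4))
z, log_det = forward(x)
# Exact log-likelihood under a standard-normal latent prior:
log_pz = -0.5 * (z ** 2).sum(axis=1) - 0.5 * z.shape[1] * np.log(2 * np.pi)
log_px = log_pz + log_det
print(np.allclose(inverse(z), x))    # True: the mapping is invertible
```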
Abstract:Low-dose computed tomography (LDCT) offers significant advantages in reducing potential harm to the human body. However, reducing the X-ray dose in CT scanning often introduces severe noise and artifacts into the reconstructed images, which may adversely affect diagnosis. By utilizing the expectation-maximization (EM) algorithm, statistical priors can be combined with artificial priors to improve LDCT reconstruction quality. However, conventional EM-based regularization methods adopt an alternating solving strategy, i.e., full reconstruction followed by image regularization, which results in over-smoothing and slow convergence. In this paper, we propose integrating total-variation (TV) regularization into the ``M''-step of the EM algorithm, achieving effective and efficient regularization. In addition, by employing the Chambolle-Pock (CP) algorithm and the ordered-subset (OS) strategy, we propose the OSEM-CP algorithm for LDCT reconstruction, in which both reconstruction and regularization are conducted view by view. Furthermore, by unrolling OSEM-CP, we propose an end-to-end reconstruction neural network (NN), named OSEM-CPNN, with remarkable performance and efficiency: it achieves high-quality reconstructions in just one full-view iteration. Experiments on different models and datasets demonstrate the outstanding performance of our methods compared with traditional and state-of-the-art deep-learning methods.
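For readers unfamiliar with the baseline this paper builds on, the following is a minimal sketch of the classical OSEM update for Poisson data, which the proposed OSEM-CP extends with a TV-regularized "M"-step solved by Chambolle-Pock. The toy system matrix, problem sizes, and subset split are assumptions for illustration; the CP/TV step itself is only marked by a comment.

```python
import numpy as np

# Classical OSEM: a multiplicative ML-EM update applied subset-by-subset.

rng = np.random.default_rng(1)
n_pix, n_views = 16, 32
A = rng.uniform(0.0, 1.0, size=(n_views, n_pix))   # toy projection matrix
x_true = rng.uniform(0.5, 2.0, size=n_pix)
y = rng.poisson(A @ x_true).astype(float)          # noisy measurements

subsets = np.array_split(np.arange(n_views), 4)    # ordered subsets of views
x = np.ones(n_pix)                                 # positive initialization
for _ in range(20):                                # full-view iterations
    for idx in subsets:                            # one sub-iteration per subset
        As, ys = A[idx], y[idx]
        ratio = ys / np.maximum(As @ x, 1e-12)     # y / (A x) on this subset
        x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(idx)), 1e-12)
        # OSEM-CP would insert a TV-regularized CP step here, view by view.

print(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```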
Abstract:Conventional machine learning (ML) and deep learning approaches require sharing customers' sensitive information with an external credit bureau to build a prediction model, which opens the door to privacy leakage. This leakage risk poses an enormous challenge to cooperation among financial companies. Federated learning is a machine learning setting that can protect data privacy, but high communication cost is often the bottleneck of federated systems, especially for large neural networks, so limiting the number and size of communications is essential for the practical training of large neural architectures. Gradient sparsification, which uploads only significant gradients and accumulates insignificant gradients locally, has received increasing attention as a way to reduce communication cost. However, gradient sparsification cannot be used directly within the secure aggregation framework. This article proposes two sparsification methods to reduce communication cost in federated learning. The first is a time-varying hierarchical sparsification method for model-parameter updates, which maintains model accuracy even at high sparsity ratios and significantly reduces the cost of a single communication. The second applies sparsification to the secure aggregation framework: we sparsify the encryption mask matrix to reduce communication cost while preserving privacy. Experiments show that, under different non-IID settings, our method reduces the upload communication cost to about 2.9% to 18.9% of that of the conventional federated learning algorithm at a sparsity rate of 0.01.
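The core mechanism the abstract describes, uploading only significant gradients while accumulating the rest locally, is the classic top-k sparsification with error feedback. The sketch below illustrates that generic mechanism, not the article's time-varying hierarchical scheme or its masked secure aggregation; the sparsify helper, tensor shapes, and ratio values are assumptions for illustration.

```python
import numpy as np

# Top-k gradient sparsification with local residual accumulation:
# only the largest-magnitude entries are uploaded; the remainder is
# kept on the device and folded into the next round's gradient.

def sparsify(grad, residual, ratio=0.01):
    """Return the sparse update to upload and the new local residual."""
    acc = grad + residual                        # fold in accumulated error
    k = max(1, int(ratio * acc.size))
    thresh = np.partition(np.abs(acc).ravel(), -k)[-k]
    mask = np.abs(acc) >= thresh                 # keep top-k magnitudes
    upload = np.where(mask, acc, 0.0)            # sparse message to server
    return upload, acc - upload                  # residual stays on device

rng = np.random.default_rng(2)
residual = np.zeros((4, 8))
for step in range(3):                            # a few local rounds
    grad = rng.normal(size=(4, 8))
    upload, residual = sparsify(grad, residual, ratio=0.1)
    print(step, int((upload != 0).sum()), "of", upload.size, "entries sent")
```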
Abstract:Large language models (LLMs) have been shown to perform new tasks from a few demonstrations or natural-language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built through a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
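Since the models are openly released, a minimal usage sketch with Hugging Face transformers follows. The full 176B checkpoint requires multi-GPU infrastructure, so this example assumes the smaller publicly released "bigscience/bloom-560m" variant; the prompt is illustrative.

```python
# Load a BLOOM checkpoint and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```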