
Rajarshi Saha

Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks
Jun 06, 2024

Compressing Large Language Models using Low Rank and Low Precision Decomposition
May 29, 2024

Matrix Compression via Randomized Low Rank and Low Precision Factorization
Oct 17, 2023
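
The two compression papers above combine low-rank approximation with low-precision storage. As a rough illustration of that generic recipe (not the papers' specific algorithm), the sketch below computes a randomized low-rank factorization in the style of Halko-Martinsson-Tropp and then uniformly quantizes the two factors; all parameters are illustrative.

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=8, rng=None):
    """Randomized range finder + SVD (Halko et al. style sketch)."""
    rng = rng or np.random.default_rng()
    Y = A @ rng.normal(size=(A.shape[1], rank + oversample))  # sketch the range
    Q, _ = np.linalg.qr(Y)                                    # orthonormal basis
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U)[:, :rank] * s[:rank], Vt[:rank]            # factors L, R

def quantize(M, bits=4):
    """Uniform symmetric quantization (returned in dequantized form)."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(M).max() / levels
    return np.round(M / scale) * scale

rng = np.random.default_rng(1)
A = rng.normal(size=(256, 256))                 # stand-in for a weight matrix
L, R = randomized_lowrank(A, rank=32, rng=rng)
A_hat = quantize(L) @ quantize(R)               # low-rank, low-precision approx.

print("relative error:", np.linalg.norm(A - A_hat) / np.linalg.norm(A))
```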

Collaborative Mean Estimation over Intermittently Connected Networks with Peer-To-Peer Privacy
Feb 28, 2023
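
This paper and the first entry above both concern estimating a mean over an unreliable network while protecting each node's data. Below is a minimal sketch of the generic local-privacy pattern involved (norm-clip, add Gaussian noise, average across nodes); it is not the papers' protocol, and the noise level here is arbitrary rather than calibrated to a privacy budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_report(x, clip=1.0, noise_std=0.3):
    """Norm-clip a node's vector, then add Gaussian noise before it
    leaves the node. noise_std is arbitrary here; a real protocol
    calibrates it to a target privacy level."""
    x = x * min(1.0, clip / np.linalg.norm(x))
    return x + rng.normal(0.0, noise_std, size=x.shape)

local = rng.normal(scale=0.2, size=(50, 4))   # 50 nodes, small-norm vectors
estimate = np.mean([private_report(x) for x in local], axis=0)

print("true mean:", local.mean(axis=0))
print("estimate :", estimate)                 # noise averages out across nodes
```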

Semi-Decentralized Federated Learning with Collaborative Relaying
May 23, 2022

Robust Federated Learning with Connectivity Failures: A Semi-Decentralized Framework with Collaborative Relaying
Feb 24, 2022
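
The two federated-learning papers above share the collaborative-relaying idea: when a client's link to the server fails intermittently, neighbors forward weighted combinations of local updates so that no client's contribution is lost. The toy simulation below illustrates only the topology and the failure model, with uniform relay weights standing in for the optimized weights the papers study.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 6, 3
updates = rng.normal(size=(n, d))                              # local model updates
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring topology
p_up = 0.6                                                     # uplink success prob.

# Phase 1: each client forms a relayed message from itself and its
# neighbors (uniform weights here; the papers optimize these weights).
relayed = np.array([updates[[i] + neighbors[i]].mean(axis=0) for i in range(n)])

# Phase 2: intermittent uplinks -- only some relayed messages reach the server.
arrived = rng.random(n) < p_up
server_est = relayed[arrived].mean(axis=0) if arrived.any() else np.zeros(d)

print("true mean  :", updates.mean(axis=0))
print("server est.:", server_est, f"({arrived.sum()}/{n} uplinks up)")
```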

Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms
Feb 23, 2022
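
For quantizing a linear model, the practical question is how much prediction accuracy a b-bit representation of the weights costs. The snippet below measures that for plain uniform scalar quantization as a baseline; the minimax-optimal schemes the paper analyzes are more refined, so treat this purely as an illustration of the setting.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_test = 64, 1000
w = rng.normal(size=d)                  # true linear model weights
X = rng.normal(size=(n_test, d))        # test covariates

def quantize_uniform(w, bits):
    """Uniform scalar quantizer over [-max|w|, max|w|] (a baseline,
    not the paper's minimax-optimal scheme)."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

for bits in (2, 4, 8):
    w_q = quantize_uniform(w, bits)
    excess = np.mean((X @ w - X @ w_q) ** 2)   # excess prediction risk
    print(f"{bits} bits: excess risk {excess:.4f}")
```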

Partner-Aware Algorithms in Decentralized Cooperative Bandit Teams
Oct 02, 2021
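
In a cooperative bandit team, agents pull arms in parallel and benefit from pooling observations. The sketch below runs UCB1 on fully pooled statistics, the idealized baseline that decentralized, partner-aware algorithms approximate under communication and coordination constraints; it does not implement the paper's partner-aware strategies.

```python
import numpy as np

rng = np.random.default_rng(5)
K, T, n_agents = 5, 2000, 3
mu = rng.uniform(0.2, 0.8, size=K)       # true Bernoulli arm means

counts = np.zeros(K)                     # pooled pull counts per arm
sums = np.zeros(K)                       # pooled reward sums per arm

def ucb_arm(t):
    """UCB1 index computed over the team's pooled statistics."""
    unexplored = np.where(counts == 0)[0]
    if unexplored.size:
        return unexplored[0]
    return int(np.argmax(sums / counts + np.sqrt(2 * np.log(t) / counts)))

regret = 0.0
for t in range(1, T + 1):
    for _ in range(n_agents):            # each agent pulls once per round
        a = ucb_arm(t)
        r = float(rng.random() < mu[a])
        counts[a] += 1
        sums[a] += r
        regret += mu.max() - mu[a]

print(f"avg per-pull regret after {T} rounds: {regret / (T * n_agents):.4f}")
```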

Distributed Learning and Democratic Embeddings: Polynomial-Time Source Coding Schemes Can Achieve Minimax Lower Bounds for Distributed Gradient Descent under Communication Constraints
Mar 13, 2021
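
Democratic (Kashin-type) embeddings spread a vector's energy evenly across coordinates, which makes per-coordinate quantization of a gradient far less sensitive to outliers. A cheap stand-in for that effect is a random rotation before uniform quantization, sketched below; this is the folklore rotation trick, not the polynomial-time source-coding scheme the paper constructs.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 256
g = rng.normal(scale=0.1, size=d)   # mostly small coordinates...
g[0] = 10.0                         # ...plus one outlier (a "spiky" gradient)

def quantize(v, bits=4):
    """Uniform symmetric quantization (dequantized form)."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(v).max() / levels
    return np.round(v / scale) * scale

# Random orthogonal matrix: an energy-spreading stand-in for a
# democratic/Kashin embedding (the paper uses efficient structured codes).
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

naive = quantize(g)                # quantize in the original basis
spread = Q.T @ quantize(Q @ g)     # rotate, quantize, rotate back

print("error, naive basis :", np.linalg.norm(g - naive))
print("error, after spread:", np.linalg.norm(g - spread))
```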