
Qirong Ho

Reducing Hyperparameter Tuning Costs in ML, Vision and Language Model Training Pipelines via Memoization-Awareness

Nov 06, 2024

Continual Learning of Nonlinear Independent Representations

Aug 11, 2024

A Factuality and Diversity Reconciled Decoding Method for Knowledge-Grounded Dialogue Generation

Jul 08, 2024

Multi-level Adaptive Contrastive Learning for Knowledge Internalization in Dialogue Generation

Oct 17, 2023

FedNAR: Federated Optimization with Normalized Annealing Regularization

Oct 04, 2023

On Optimizing the Communication of Model Parallelism

Nov 10, 2022

Pollux: Co-adaptive Cluster Scheduling for Goodput-Optimized Deep Learning

Aug 27, 2020

Cavs: A Vertex-centric Programming Interface for Dynamic Neural Networks

Dec 11, 2017

Distributed Multi-Task Relationship Learning

Jun 20, 2017

Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters

Jun 11, 2017