
Cole Hawkins

Efficient Long-Range Transformers: You Need to Attend More, but Not Necessarily at Every Layer

Oct 19, 2023

Vcc: Scaling Transformers to 128K Tokens or More by Prioritizing Important Tokens

May 07, 2023

Online, Informative MCMC Thinning with Kernelized Stein Discrepancy

Jan 18, 2022

Low-Rank+Sparse Tensor Compression for Neural Networks

Nov 02, 2021

Scalable Consistency Training for Graph Neural Networks via Self-Ensemble Self-Distillation

Oct 12, 2021

3U-EdgeAI: Ultra-Low Memory Training, Ultra-Low Bitwidth Quantization, and Ultra-Low Latency Acceleration

May 11, 2021

End-to-End Variational Bayesian Training of Tensorized Neural Networks with Automatic Rank Determination

Oct 17, 2020

Bayesian Tensorized Neural Networks with Automatic Rank Selection

May 24, 2019

Variational Bayesian Inference for Robust Streaming Tensor Factorization and Completion

Sep 06, 2018