Jordan Dotzel

ShadowLLM: Predictor-based Contextual Sparsity for Large Language Models

Jun 24, 2024

Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs

May 06, 2024

Radial Networks: Dynamic Layer Routing for High-Performance Large Language Models

Apr 07, 2024

Exploring the Limits of Semantic Image Compression at Micro-bits per Pixel

Feb 21, 2024

FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search

Aug 07, 2023

Enabling Design Methodologies and Future Trends for Edge AI: Specialization and Co-design

Mar 30, 2021

Logic Synthesis Meets Machine Learning: Trading Exactness for Generalization

Dec 15, 2020

Improving Neural Network Quantization without Retraining using Outlier Channel Splitting

Jan 30, 2019

Building Efficient Deep Neural Networks with Unitary Group Convolutions

Nov 19, 2018