Benjamin Hawks

Surrogate Neural Architecture Codesign Package (SNAC-Pack)

Dec 17, 2025

AI Benchmark Democratization and Carpentry

Dec 12, 2025

wa-hls4ml: A Benchmark and Surrogate Models for hls4ml Resource and Latency Estimation

Nov 06, 2025

Applications and Techniques for Fast Machine Learning in Science

Oct 25, 2021

hls4ml: An Open-Source Codesign Workflow to Empower Scientific Low-Power Machine Learning Devices

Mar 23, 2021

Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference

Feb 22, 2021