Hyesoon Kim

Hydro: Adaptive Query Processing of ML Queries

Mar 22, 2024

VEGETA: Vertically-Integrated Extensions for Sparse/Dense GEMM Tile Acceleration on CPUs

Feb 23, 2023

RASA: Efficient Register-Aware Systolic Array Matrix Engine for CPU

Oct 05, 2021

Context-Aware Task Handling in Resource-Constrained Robots with Virtualization

Apr 09, 2021

Reducing Inference Latency with Concurrent Architectures for Image Recognition

Nov 13, 2020

Edge-Tailored Perception: Fast Inferencing in-the-Edge with Efficient Model Distribution

Mar 13, 2020

A Case Study: Exploiting Neural Machine Translation to Translate CUDA to OpenCL

May 18, 2019

Collaborative Execution of Deep Neural Networks on Internet of Things Devices

Jan 08, 2019

Musical Chair: Efficient Real-Time Recognition Using Collaborative IoT Devices

Mar 21, 2018