Seongmin Hong

Gradient-free Decoder Inversion in Latent Diffusion Models (Sep 27, 2024)

Adaptive Selection of Sampling-Reconstruction in Fourier Compressed Sensing (Sep 19, 2024)

On Exact Inversion of DPM-Solvers (Nov 30, 2023)

Neural Diffeomorphic Non-uniform B-spline Flows (Apr 11, 2023)

On the Robustness of Normalizing Flows for Inverse Problems in Imaging (Dec 08, 2022)

Efficient Single-Image Depth Estimation on Mobile Devices, Mobile AI & AIM 2022 Challenge: Report (Nov 07, 2022)

Efficient and Accurate Quantized Image Super-Resolution on Mobile NPUs, Mobile AI & AIM 2022 Challenge: Report (Nov 07, 2022)

DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation (Sep 22, 2022)

FIXAR: A Fixed-Point Deep Reinforcement Learning Platform with Quantization-Aware Training and Adaptive Parallelism (Feb 24, 2021)