Seungeun Oh

Energy-Efficient Wireless LLM Inference via Uncertainty and Importance-Aware Speculative Decoding
Aug 18, 2025

Communication-Efficient Hybrid Language Model via Uncertainty-Aware Opportunistic and Compressed Transmission
May 17, 2025

Uncertainty-Aware Hybrid Inference with On-Device Small and Remote Large Language Models
Dec 17, 2024

Privacy-Preserving Split Learning with Vision Transformers using Patch-Wise Random and Noisy CutMix
Aug 02, 2024

SplitAMC: Split Learning for Robust Automatic Modulation Classification
Apr 17, 2023

Differentially Private CutMix for Split Learning with Vision Transformer
Oct 28, 2022

Federated Knowledge Distillation
Nov 04, 2020

Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup
Jun 17, 2020

Distilling On-Device Intelligence at the Network Edge
Aug 16, 2019

Multi-hop Federated Private Data Augmentation with Sample Compression
Jul 15, 2019