Jeonghoon Kim

Laboratory for Natural and Artificial Kinästhese, Convergence Research Center for Artificial Intelligence, Department of Artificial Intelligence, Dongguk University, Seoul, South Korea

OmniACBench: A Benchmark for Evaluating Context-Grounded Acoustic Control in Omni-Modal Models

Mar 25, 2026

SNAP: Speaker Nulling for Artifact Projection in Speech Deepfake Detection

Mar 21, 2026

Benchmarks Are Not That Out of Distribution: Word Overlap Predicts Performance

Feb 11, 2026

Enhancing Hallucination Detection via Future Context

Jul 28, 2025

Cross-lingual Collapse: How Language-Centric Foundation Models Shape Reasoning in Large Language Models

Jun 06, 2025

ReGUIDE: Data Efficient GUI Grounding via Spatial Reasoning and Search

May 21, 2025

SAGE-Amine: Generative Amine Design with Multi-Property Optimization for Efficient CO₂ Capture

Mar 04, 2025

Peri-LN: Revisiting Layer Normalization in the Transformer Architecture

Feb 04, 2025

LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices

Jul 16, 2024

Improving Multi-hop Logical Reasoning in Knowledge Graphs with Context-Aware Query Representation Learning

Jun 11, 2024