Ningyu Zhang

CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners

Mar 20, 2025

BiasEdit: Debiasing Stereotyped Language Models via Model Editing

Mar 11, 2025

LightThinker: Thinking Step-by-Step Compression

Feb 21, 2025

AnyEdit: Edit Any Knowledge Encoded in Language Models

Feb 08, 2025

OmniThink: Expanding Knowledge Boundaries in Machine Writing through Thinking

Jan 16, 2025

A Multi-Modal AI Copilot for Single-Cell Analysis with Instruction Following

Jan 15, 2025

OneKE: A Dockerized Schema-Guided LLM Agent-based Knowledge Extraction System

Dec 28, 2024

Exploring Model Kinship for Merging Large Language Models

Oct 16, 2024

MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation

Oct 15, 2024

Locking Down the Finetuned LLMs Safety

Oct 14, 2024