Tao Ji

Multi-Programming Language Sandbox for LLMs
Oct 30, 2024

Have the VLMs Lost Confidence? A Study of Sycophancy in VLMs
Oct 15, 2024

Generation with Dynamic Vocabulary
Oct 11, 2024

Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models
Oct 04, 2024

Is Large Language Model Good at Database Knob Tuning? A Comprehensive Experimental Evaluation
Aug 05, 2024

Length Generalization of Causal Transformers without Position Encoding
Apr 18, 2024

LongHeads: Multi-Head Attention is Secretly a Long Context Processor
Feb 16, 2024

StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback
Feb 05, 2024

MouSi: Poly-Visual-Expert Vision-Language Models
Jan 30, 2024

Secrets of RLHF in Large Language Models Part II: Reward Modeling
Jan 12, 2024