Tiezheng Yu

Subtle Errors Matter: Preference Learning via Error-injected Self-editing

Oct 09, 2024

Towards Mitigating Hallucination in Large Language Models via Self-Reflection

Oct 10, 2023

Improving Query-Focused Meeting Summarization with Query-Relevant Knowledge

Sep 05, 2023

Instruct-Align: Teaching Novel Languages to LLMs through Alignment-based Cross-Lingual Instruction

May 23, 2023

A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity

Feb 28, 2023

NusaCrowd: Open Source Initiative for Indonesian NLP Resources

Dec 20, 2022

RHO (ρ): Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding

Dec 03, 2022

Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness

Nov 10, 2022

Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values

Oct 14, 2022

Kaggle Competition: Cantonese Audio-Visual Speech Recognition for In-car Commands

Jul 06, 2022