Zhuochun Li

Learning from Committee: Reasoning Distillation from a Mixture of Teachers with Peer-Review

Oct 16, 2024

Mitigating the Risk of Health Inequity Exacerbated by Large Language Models

Oct 14, 2024

ReasoningRank: Teaching Student Models to Rank through Reasoning-Based Knowledge Distillation

Oct 07, 2024

Enhancing Equity in Large Language Models for Medical Applications

Oct 07, 2024

Enhance Reasoning by Learning from Mistakes: Peer-Review Knowledge Distillation from Multiple Large Language Models

Oct 04, 2024

RAG-RLRC-LaySum at BioLaySumm: Integrating Retrieval-Augmented Generation and Readability Control for Layman Summarization of Biomedical Texts

May 21, 2024

Effects of Different Prompts on the Quality of GPT-4 Responses to Dementia Care Questions

Apr 05, 2024