Mo Yu

Large Language Models Can Self-Improve in Long-context Reasoning

Nov 12, 2024

On the token distance modeling ability of higher RoPE attention dimension

Oct 11, 2024

A Survey on the Honesty of Large Language Models

Sep 27, 2024

Are LLM-based Recommenders Already the Best? Simple Scaled Cross-entropy Unleashes the Potential of Traditional Sequential Recommenders

Aug 26, 2024

An Energy-based Model for Word-level AutoCompletion in Computer-aided Translation

Jul 29, 2024

Think out Loud: Emotion Deducing Explanation in Dialogues

Jun 07, 2024

MANGO: A Benchmark for Evaluating Mapping and Navigation Abilities of Large Language Models

Mar 29, 2024

On Large Language Models' Hallucination with Regard to Known Facts

Mar 29, 2024

Unsupervised Information Refinement Training of Large Language Models for Retrieval-Augmented Generation

Feb 28, 2024

Graph Representation of Narrative Context: Coherence Dependency via Retrospective Questions

Feb 21, 2024