Jianyong Wang

Are LLM-based Recommenders Already the Best? Simple Scaled Cross-entropy Unleashes the Potential of Traditional Sequential Recommenders

Aug 26, 2024

FocusLLM: Scaling LLM's Context by Parallel Decoding

Aug 21, 2024

You Only Cache Once: Decoder-Decoder Architectures for Language Models

May 08, 2024

Understanding the Role of Cross-Entropy Loss in Fairly Evaluating Large Language Model-based Recommendation

Feb 22, 2024

Learning Interpretable Rules for Scalable Data Representation and Classification

Oct 30, 2023

FlexKBQA: A Flexible LLM-Powered Framework for Few-Shot Knowledge Base Question Answering

Aug 23, 2023

Retentive Network: A Successor to Transformer for Large Language Models

Aug 09, 2023

Knowledge-aware Collaborative Filtering with Pre-trained Language Model for Personalized Review-based Rating Prediction

Aug 02, 2023

Can LLMs like GPT-4 outperform traditional AI tools in dementia diagnosis? Maybe, but not today

Jun 02, 2023

Bridging the Language Gap: Knowledge Injected Multilingual Question Answering

Apr 06, 2023