
Alexandros Karatzoglou

Efficient and Effective Adaptation of Multimodal Foundation Models in Sequential Recommendation

Nov 05, 2024

Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models

Oct 02, 2024

PERSOMA: PERsonalized SOft ProMpt Adapter Architecture for Personalized Language Prompting

Aug 02, 2024

IISAN: Efficiently Adapting Multimodal Representation for Sequential Recommendation with Decoupled PEFT

Apr 11, 2024

Reinforcement Learning-based Recommender Systems with Large Language Models for State Reward and Action Modeling

Mar 25, 2024

Latent User Intent Modeling for Sequential Recommenders

Nov 17, 2022

Rethinking Reinforcement Learning for Recommendation: A Prompt Perspective

Jun 15, 2022

Enhancing Top-N Item Recommendations by Peer Collaboration

Dec 02, 2021

Supervised Advantage Actor-Critic for Recommender Systems

Nov 05, 2021

Choosing the Best of Both Worlds: Diverse and Novel Recommendations through Multi-Objective Reinforcement Learning

Oct 28, 2021