
Qi Cao

The 1st Workshop on Human-Centered Recommender Systems

Nov 22, 2024

Which Programming Language and What Features at Pre-training Stage Affect Downstream Logical Inference Performance?

Oct 09, 2024

Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning

Oct 01, 2024

Improving the Shortest Plank: Vulnerability-Aware Adversarial Training for Robust Recommender System

Sep 26, 2024

Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems

Aug 20, 2024

When to Trust LLMs: Aligning Confidence with Response Quality

Apr 26, 2024

LoRec: Large Language Model for Robust Sequential Recommendation against Poisoning Attacks

Jan 31, 2024

Blinded by Generated Contexts: How Language Models Merge Generated and Retrieved Contexts for Open-Domain QA?

Jan 22, 2024

FedRKG: A Privacy-preserving Federated Recommendation Framework via Knowledge Graph Enhancement

Jan 20, 2024

Fine-Tuning InstructPix2Pix for Advanced Image Colorization

Dec 08, 2023