
Jiawen Deng

Bridging the User-side Knowledge Gap in Knowledge-aware Recommendations with Large Language Models

Dec 18, 2024

SSP: A Simple and Safe automatic Prompt engineering method towards realistic image synthesis on LVM

Jan 02, 2024

COKE: A Cognitive Knowledge Graph for Machine Theory of Mind

May 09, 2023

Automated Paper Screening for Clinical Reviews Using Large Language Models

May 01, 2023

Safety Assessment of Chinese Large Language Models

Apr 20, 2023

Recent Advances towards Safe, Responsible, and Moral Dialogue Systems: A Survey

Feb 18, 2023

Constructing Highly Inductive Contexts for Dialogue Safety through Controllable Reverse Generation

Dec 04, 2022

Perplexity from PLM Is Unreliable for Evaluating Text Quality

Oct 12, 2022

A Roadmap for Big Model

Apr 02, 2022

Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks

Feb 16, 2022