Junlin Wu

Verified Safe Reinforcement Learning for Neural Network Dynamic Models

May 25, 2024

Axioms for AI Alignment from Human Feedback

May 23, 2024

Preference Poisoning Attacks on Reward Model Learning

Feb 02, 2024

On the Exploitability of Reinforcement Learning with Human Feedback for Large Language Models

Nov 16, 2023

Exact Verification of ReLU Neural Control Barrier Functions

Oct 13, 2023

Neural Lyapunov Control for Discrete-Time Systems

May 11, 2023

Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks

Dec 28, 2022

Robust Deep Reinforcement Learning through Bootstrapped Opportunistic Curriculum

Jun 21, 2022

Learning Generative Deception Strategies in Combinatorial Masking Games

Sep 23, 2021