Bei Jiang

Evaluation of OpenAI o1: Opportunities and Challenges of AGI
Sep 27, 2024

Oblivious subspace embeddings for compressed Tucker decompositions
Jun 13, 2024

Gaussian Differential Privacy on Riemannian Manifolds
Nov 09, 2023

Class Interference of Deep Neural Networks
Oct 31, 2022

Conformalized Fairness via Quantile Regression
Oct 05, 2022

How Does Value Distribution in Distributional Reinforcement Learning Help Optimization?
Sep 29, 2022

Sigmoidally Preconditioned Off-Policy Learning: A New Exploration Method for Reinforcement Learning
May 20, 2022

Distributional Reinforcement Learning via Sinkhorn Iterations
Feb 16, 2022

Word Embeddings via Causal Inference: Gender Bias Reducing and Semantic Information Preserving
Dec 09, 2021

Damped Anderson Mixing for Deep Reinforcement Learning: Acceleration, Convergence, and Stabilization
Oct 20, 2021