Dharmashankar Subramanian

Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers

Feb 07, 2024

Adaptive Primal-Dual Method for Safe Reinforcement Learning

Feb 01, 2024

Self-Supervised Contrastive Pre-Training for Multivariate Point Processes

Feb 01, 2024

Matching Table Metadata with Business Glossaries Using Large Language Models

Sep 08, 2023

Probabilistic Constraint for Safety-Critical Reinforcement Learning

Jun 29, 2023

The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles

Jun 02, 2023

AutoDOViz: Human-Centered Automation for Decision Optimization

Feb 19, 2023

Policy Gradients for Probabilistic Constrained Reinforcement Learning

Oct 02, 2022

Learning Temporal Rules from Noisy Timeseries Data

Feb 11, 2022

Edge-augmented Graph Transformers: Global Self-attention is Enough for Graphs

Aug 07, 2021