Masashi Sugiyama

Tokyo Institute of Technology

Parallel simulation for sampling under isoperimetry and score-based diffusion models (Dec 10, 2024)

Beyond Simple Sum of Delayed Rewards: Non-Markovian Reward Modeling for Reinforcement Learning (Oct 26, 2024)

Sharpness-Aware Black-Box Optimization (Oct 16, 2024)

On Unsupervised Prompt Learning for Classification with Black-box Language Models (Oct 04, 2024)

Vision-Language Model Fine-Tuning via Simple Parameter-Efficient Modification (Sep 25, 2024)

Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning (Jul 26, 2024)

Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning (Jun 13, 2024)

Decoupling the Class Label and the Target Concept in Machine Unlearning (Jun 12, 2024)

Slight Corruption in Pre-training Data Makes Better Diffusion Models (May 30, 2024)

Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization (May 29, 2024)