
Rio Yokota

Variational Low-Rank Adaptation Using IVON

Nov 07, 2024

NeurIPS 2023 Competition: Privacy Preserving Federated Learning Document VQA

Nov 06, 2024

Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation

Nov 04, 2024

Rethinking Image Super-Resolution from Training Data Perspectives

Sep 01, 2024

Scaling Backwards: Minimal Synthetic Pre-training?

Aug 03, 2024

LLM-jp: A Cross-organizational Project for the Research and Development of Fully Open Japanese LLMs

Jul 04, 2024

Building a Large Japanese Web Corpus for Large Language Models

Apr 27, 2024

Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities

Apr 27, 2024

Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order

Mar 30, 2024

Variational Learning is Effective for Large Deep Networks

Feb 27, 2024