
Atsuki Yamaguchi

Enhancing Reasoning Capabilities of LLMs via Principled Synthetic Logic Corpus

Nov 19, 2024

Vocabulary Expansion for Low-resource Cross-lingual Transfer

Jun 17, 2024

An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative LLM Inference

Feb 16, 2024

appjsonify: An Academic Paper PDF-to-JSON Conversion Toolkit

Oct 03, 2023

Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic

Aug 11, 2023

How do different tokenizers perform on downstream tasks in scriptio continua languages?: A case study in Japanese

Jun 16, 2023

How does the task complexity of masked pretraining objectives affect downstream performance?

May 18, 2023

Team Hitachi at SemEval-2023 Task 3: Exploring Cross-lingual Multi-task Strategies for Genre and Framing Detection in Online News

Mar 03, 2023

Team Hitachi @ AutoMin 2021: Reference-free Automatic Minuting Pipeline with Argument Structure Construction over Topic-based Summarization

Dec 06, 2021

Frustratingly Simple Pretraining Alternatives to Masked Language Modeling

Sep 04, 2021