Takeshi Kojima

Which Programming Language and What Features at Pre-training Stage Affect Downstream Logical Inference Performance?

Oct 09, 2024

Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning

Oct 01, 2024

On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons

Apr 03, 2024

Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text

Nov 30, 2023

Robustifying Vision Transformer without Retraining from Scratch by Test-Time Class-Conditional Feature Alignment

Jun 28, 2022

Large Language Models are Zero-Shot Reasoners

May 24, 2022