Rahul Khanna

Can BERT Reason? Logically Equivalent Probes for Evaluating the Inference Capabilities of Language Models

May 02, 2020

Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models

May 02, 2020

LEAN-LIFE: A Label-Efficient Annotation Framework Towards Learning from Explanation

Apr 16, 2020