Matthew Matero

SOCIALITE-LLAMA: An Instruction-Tuned Model for Social Scientific Tasks

Feb 03, 2024

Human Language Modeling

May 10, 2022

Understanding RoBERTa's Mood: The Role of Contextual-Embeddings as User-Representations for Depression Prediction

Dec 27, 2021

MeLT: Message-Level Transformer with Masked Document Representations as Pre-Training for Stance Detection

Sep 16, 2021

Empirical Evaluation of Pre-trained Transformers for Human-Level NLP: The Role of Sample Size and Dimensionality

May 07, 2021