Mark Lee

University of Sheffield

A Novel Interpretability Metric for Explaining Bias in Language Models: Applications on Multilingual Models from Southeast Asia

Oct 20, 2024

Apple Intelligence Foundation Language Models

Jul 29, 2024

Revisiting MoE and Dense Speed-Accuracy Comparisons for LLM Training

May 23, 2024

MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training

Mar 22, 2024

Code-Mixed Probes Show How Pre-Trained Models Generalise On Code-Switched Text

Mar 07, 2024

The Channel-Spatial Attention-Based Vision Transformer Network for Automated, Accurate Prediction of Crop Nitrogen Status from UAV Imagery

Nov 12, 2021

Can vectors read minds better than experts? Comparing data augmentation strategies for the automated scoring of children's mindreading ability

Jun 03, 2021

"What is on your mind?" Automated Scoring of Mindreading in Childhood and Early Adolescence

Nov 16, 2020

On Physical Adversarial Patches for Object Detection

Jun 20, 2019

An Ascription-Based Approach to Speech Acts

Apr 15, 1999