Siegfried Handschuh

University of St.Gallen

Efficient Neural Network Training via Subset Pretraining
Oct 21, 2024

Reducing the Transformer Architecture to a Minimum
Oct 17, 2024

Make Deep Networks Shallow Again
Sep 15, 2023

Discourse-Aware Text Simplification: From Complex Sentences to Linked Propositions
Aug 01, 2023

Analyzing FOMC Minutes: Accuracy and Constraints of Language Models
Apr 20, 2023

Number of Attention Heads vs Number of Transformer-Encoders in Computer Vision
Sep 15, 2022

Training Neural Networks in Single vs Double Precision
Sep 15, 2022

Uncovering More Shallow Heuristics: Probing the Natural Language Inference Capacities of Transformer-Based Pre-Trained Language Models Using Syllogistic Patterns
Jan 19, 2022

Exploring the Promises of Transformer-Based LMs for the Representation of Normative Claims in the Legal Domain
Aug 25, 2021

Supporting Cognitive and Emotional Empathic Writing of Students
May 31, 2021