Arlindo L. Oliveira

INESC-ID, Rua Alves Redol 9, 1000-029 Lisboa; Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa

Explicitly Modeling Pre-Cortical Vision with a Neuro-Inspired Front-End Improves CNN Robustness (Sep 25, 2024)

LumberChunker: Long-Form Narrative Document Segmentation (Jun 25, 2024)

Finding Regions of Interest in Whole Slide Images Using Multiple Instance Learning (Apr 11, 2024)

DE-COP: Detecting Copyrighted Content in Language Models Training Data (Feb 15, 2024)

DeepThought: An Architecture for Autonomous Self-motivated Systems (Nov 14, 2023)

Matching the Neuronal Representations of V1 is Necessary to Improve Robustness in CNNs with V1-like Front-ends (Oct 16, 2023)

Improving Address Matching using Siamese Transformer Networks (Jul 05, 2023)

Connecting metrics for shape-texture knowledge in computer vision (Jan 25, 2023)

Pretraining the Vision Transformer using self-supervised methods for vision based Deep Reinforcement Learning (Sep 22, 2022)

Assessing Policy, Loss and Planning Combinations in Reinforcement Learning using a New Modular Architecture (Jan 08, 2022)