Timothy Doster

STARS: Sensor-agnostic Transformer Architecture for Remote Sensing

Nov 08, 2024

Data-Driven Invertible Neural Surrogates of Atmospheric Transmission

Apr 30, 2024

Reproducing Kernel Hilbert Space Pruning for Sparse Hyperspectral Abundance Prediction

Aug 16, 2023

In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?

Oct 07, 2022

Reward-Free Attacks in Multi-Agent Reinforcement Learning

Dec 02, 2021

Argumentative Topology: Finding Loop(holes) in Logic

Nov 17, 2020

Gradual DropIn of Layers to Train Very Deep Neural Networks

Nov 22, 2015