Samar Khanna

TEOChat: A Large Vision-Language Assistant for Temporal Earth Observation Data
Oct 08, 2024

ExPLoRA: Parameter-Efficient Extended Pre-Training to Adapt Vision Transformers under Domain Shifts
Jun 16, 2024

SpotNet: An Image Centric, Lidar Anchored Approach To Long Range Perception
May 24, 2024

Large Language Models are Geographically Biased
Feb 05, 2024

DiffusionSat: A Generative Foundation Model for Satellite Imagery
Dec 06, 2023

GeoLLM: Extracting Geospatial Knowledge from Large Language Models
Oct 10, 2023

Denoising Diffusion Bridge Models
Sep 29, 2023

Differentiable Weight Masks for Domain Transfer
Aug 26, 2023

Invalid Logic, Equivalent Gains: The Bizarreness of Reasoning in Language Model Prompting
Jul 23, 2023

SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery
Jul 17, 2022