Merey Ramazanova

Combating Missing Modalities in Egocentric Videos at Test Time

Apr 23, 2024

Exploring Missing Modality in Multimodal Egocentric Datasets

Jan 21, 2024

Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives

Nov 30, 2023

Just a Glimpse: Rethinking Temporal Information for Video Continual Learning

May 28, 2023

Revisiting Test Time Adaptation under Online Evaluation

Apr 10, 2023

SegTAD: Precise Temporal Action Detection via Semantic Segmentation

Mar 03, 2022

OWL (Observe, Watch, Listen): Localizing Actions in Egocentric Video via Audiovisual Temporal Context

Feb 14, 2022

Ego4D: Around the World in 3,000 Hours of Egocentric Video

Oct 13, 2021