
Rakshith Shetty

Seeking Similarities over Differences: Similarity-based Domain Alignment for Adaptive Object Detection

Oct 04, 2021

Towards Causal VQA: Revealing and Reducing Spurious Correlations by Invariant and Covariant Semantic Editing

Dec 22, 2019

Not Using the Car to See the Sidewalk: Quantifying and Controlling the Effects of Context in Classification and Segmentation

Dec 17, 2018

Adversarial Scene Editing: Automatic Object Removal from Weak Supervision

Jun 05, 2018

$A^{4}NT$: Author Attribute Anonymity by Adversarial Training of Neural Machine Translation

Feb 19, 2018

Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training

Nov 06, 2017

Paying Attention to Descriptions Generated by Image Captioning Models

Aug 04, 2017

Frame- and Segment-Level Features and Candidate Pool Evaluation for Video Caption Generation

Aug 17, 2016

Video captioning with recurrent networks based on frame- and video-level features and visual content classification

Dec 09, 2015