
Shengsheng Wang

Cogito, ergo sum: A Neurobiologically-Inspired Cognition-Memory-Growth System for Code Generation
Jan 30, 2025

Data-Efficient CLIP-Powered Dual-Branch Networks for Source-Free Unsupervised Domain Adaptation
Oct 21, 2024

Why Misinformation is Created? Detecting them by Integrating Intent Features
Jul 27, 2024

Harmfully Manipulated Images Matter in Multimodal Misinformation Detection
Jul 27, 2024

Training-Free Unsupervised Prompt for Vision-Language Models
Apr 25, 2024

Unsupervised Sentence Representation Learning with Frequency-induced Adversarial Tuning and Incomplete Sentence Filtering
May 15, 2023

Task-Oriented Multi-Modal Mutual Learning for Vision-Language Models
Mar 30, 2023

Beyond Attentive Tokens: Incorporating Token Importance and Diversity for Efficient Vision Transformers
Nov 21, 2022

Next-item Recommendations in Short Sessions
Jul 20, 2021

Hyperbolic Node Embedding for Signed Networks
Oct 29, 2019