
Jiaqi Zhu

Blissful (A)Ignorance: People Form Overly Positive Impressions of Others Based on Their Written Messages, Despite Wide-Scale Adoption of Generative AI

Jan 26, 2025

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent

Nov 05, 2024

Flexible Diffusion Scopes with Parameterized Laplacian for Heterophilic Graph Learning

Sep 15, 2024

Are Heterophily-Specific GNNs and Homophily Metrics Really Effective? Evaluation Pitfalls and New Benchmarks

Sep 09, 2024

HMoE: Heterogeneous Mixture of Experts for Language Modeling

Aug 20, 2024

Do LLMs Understand Visual Anomalies? Uncovering LLM Capabilities in Zero-shot Anomaly Detection

Apr 15, 2024

RulePrompt: Weakly Supervised Text Classification with Prompting PLMs and Self-Iterative Logical Rules

Mar 05, 2024

Representation Learning on Heterophilic Graph with Directional Neighborhood Attention

Mar 03, 2024

Balanced Multi-modal Federated Learning via Cross-Modal Infiltration

Dec 31, 2023

METER: A Dynamic Concept Adaptation Framework for Online Anomaly Detection

Dec 28, 2023