Shangguang Wang

PhoneLM: An Efficient and Capable Small Language Model Family through Principled Pre-training

Nov 07, 2024

FedMoE: Personalized Federated Learning via Heterogeneous Mixture of Experts

Aug 21, 2024

Urban Traffic Accident Risk Prediction Revisited: Regionality, Proximity, Similarity and Sparsity

Jul 29, 2024

Variational Multi-Modal Hypergraph Attention Network for Multi-Modal Relation Extraction

Apr 18, 2024

FOOL: Addressing the Downlink Bottleneck in Satellite Computing with Neural Feature Compression

Mar 25, 2024

Towards Effective Next POI Prediction: Spatial and Semantic Augmentation with Remote Sensing Data

Mar 22, 2024

Context-based Fast Recommendation Strategy for Long User Behavior Sequence in Meituan Waimai

Mar 19, 2024

FedRDMA: Communication-Efficient Cross-Silo Federated LLM via Chunked RDMA Transmission

Mar 01, 2024

Lightweight Protection for Privacy in Offloaded Speech Understanding

Jan 22, 2024

A Survey of Resource-efficient LLM and Multimodal Foundation Models

Jan 16, 2024