Ge Jin

Topological GCN for Improving Detection of Hip Landmarks from B-Mode Ultrasound Images

Aug 24, 2024

pTSE: A Multi-model Ensemble Method for Probabilistic Time Series Forecasting

May 16, 2023

MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data

Oct 27, 2021

FANDA: A Novel Approach to Perform Follow-up Query Analysis

Jan 24, 2019

Highly Efficient 8-bit Low Precision Inference of Convolutional Neural Networks with IntelCaffe

May 04, 2018