Xunzhao Yin

Trustworthy Tree-based Machine Learning by $MoS_2$ Flash-based Analog CAM with Inherent Soft Boundaries

Jul 16, 2025

FactorHD: A Hyperdimensional Computing Model for Multi-Object Multi-Class Representation and Factorization

Jul 16, 2025

FeBiM: Efficient and Compact Bayesian Inference Engine Empowered with Ferroelectric In-Memory Computing

Oct 25, 2024

A Remedy to Compute-in-Memory with Dynamic Random Access Memory: 1FeFET-1C Technology for Neuro-Symbolic AI

Oct 20, 2024

BasisN: Reprogramming-Free RRAM-Based In-Memory-Computing by Basis Combination for Deep Neural Networks

Jul 04, 2024

LiveMind: Low-latency Large Language Models with Simultaneous Inference

Jun 20, 2024

EncodingNet: A Novel Encoding-based MAC Design for Efficient Neural Network Acceleration

Feb 25, 2024

Reconfigurable Frequency Multipliers Based on Complementary Ferroelectric Transistors

Dec 29, 2023

Class-Aware Pruning for Efficient Neural Networks

Dec 10, 2023

Expressivity Enhancement with Efficient Quadratic Neurons for Convolutional Neural Networks

Jun 10, 2023