
Fangming Liu

Impromptu Cybercrime Euphemism Detection

Dec 03, 2024

Proactive Agent: Shifting LLM Agents from Reactive Responses to Active Assistance

Oct 16, 2024

HE-Nav: A High-Performance and Efficient Navigation System for Aerial-Ground Robots in Cluttered Environments

Oct 07, 2024

Small Language Models: Survey, Measurements, and Insights

Sep 24, 2024

OMEGA: Efficient Occlusion-Aware Navigation for Air-Ground Robot in Dynamic Environments via State Space Model

Aug 20, 2024

Hybrid-Parallel: Achieving High Performance and Energy Efficient Distributed Inference on Robots

May 29, 2024

TrimCaching: Parameter-sharing AI Model Caching in Wireless Edge Networks

May 07, 2024

Opara: Exploiting Operator Parallelism for Expediting DNN Inference on GPUs

Dec 16, 2023

On-edge Multi-task Transfer Learning: Model and Practice with Data-driven Task Allocation

Jul 06, 2021