Shaojun Wang

Dynamic Attention-Guided Context Decoding for Mitigating Context Faithfulness Hallucinations in Large Language Models

Jan 02, 2025

Rethinking Layer Removal: Preserving Critical Components with Task-Aware Singular Value Decomposition

Dec 31, 2024

Learning to Adapt to Low-Resource Paraphrase Generation

Dec 22, 2024

Planning with Large Language Models for Conversational Agents

Jul 04, 2024

A Single-Step Non-Autoregressive Automatic Speech Recognition Architecture with High Accuracy and Inference Speed

Jun 13, 2024

Enhancing Dual-Encoders with Question and Answer Cross-Embeddings for Answer Retrieval

Jun 07, 2022

Adding Connectionist Temporal Summarization into Conformer to Improve Its Decoder Efficiency For Speech Recognition

Apr 08, 2022

A Study of Different Ways to Use The Conformer Model For Spoken Language Understanding

Apr 08, 2022

BS-NAS: Broadening-and-Shrinking One-Shot NAS with Searchable Numbers of Channels

Mar 22, 2020

An Iterative Polishing Framework based on Quality Aware Masked Language Model for Chinese Poetry Generation

Nov 29, 2019