Pinzheng Wang

OpenBA-V2: Reaching 77.3% High Compression Ratio with Fast Multi-Stage Pruning
May 09, 2024

Rethinking Negative Instances for Generative Named Entity Recognition
Feb 26, 2024

OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model Pre-trained from Scratch
Oct 01, 2023

Detoxify Language Model Step-by-Step
Aug 16, 2023

Can Diffusion Model Achieve Better Performance in Text Generation? Bridging the Gap between Training and Inference!
May 08, 2023

UFNRec: Utilizing False Negative Samples for Sequential Recommendation
Aug 08, 2022