Tianqiao Liu

Advancing Math Reasoning in Language Models: The Impact of Problem-Solving Data, Data Synthesis Methods, and Training Stages

Jan 23, 2025

What Are Step-Level Reward Models Rewarding? Counterintuitive Findings from MCTS-Boosted Mathematical Reasoning

Dec 20, 2024

Expediting and Elevating Large Language Model Reasoning via Hidden Chain-of-Thought Decoding

Sep 13, 2024

Hypertext Entity Extraction in Webpage

Mar 04, 2024

Optimal Transport for Treatment Effect Estimation

Oct 27, 2023

Self-Supervised Audio-and-Text Pre-training with Extremely Low-Resource Parallel Data

Apr 10, 2022

ESCM$^2$: Entire Space Counterfactual Multi-Task Model for Post-Click Conversion Rate Estimation

Apr 03, 2022

CTAL: Pre-training Cross-modal Transformer for Audio-and-Language Representations

Sep 01, 2021

Solving ESL Sentence Completion Questions via Pre-trained Neural Language Models

Jul 15, 2021

Mathematical Word Problem Generation from Commonsense Knowledge Graph and Equations

Oct 13, 2020