Hongkun Yu

Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA

Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning

Jul 22, 2024

ResNCT: A Deep Learning Model for the Synthesis of Nephrographic Phase Images in CT Urography

May 07, 2024

Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

Mar 08, 2024

Multitask Multilingual Model Adaptation with Featurized Low-Rank Mixtures

Feb 27, 2024

Multi-step Problem Solving Through a Verifier: An Empirical Analysis on Model-induced Process Supervision

Feb 05, 2024

Gemini: A Family of Highly Capable Multimodal Models

Dec 19, 2023

Enable Language Models to Implicitly Learn Self-Improvement From Data

Oct 05, 2023

Flan-MoE: Scaling Instruction-Finetuned Language Models with Sparse Mixture of Experts

May 24, 2023

Scaling Instruction-Finetuned Language Models

Oct 20, 2022

EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks

Oct 16, 2021