
Yuiko Sakuma

Mixed-precision Supernet Training from Vision Foundation Models using Low Rank Adapter

Mar 29, 2024

Instruct 3D-to-3D: Text Instruction Guided 3D-to-3D conversion

Mar 28, 2023

DetOFA: Efficient Training of Once-for-All Networks for Object Detection by Using Pre-trained Supernet and Path Filter

Mar 23, 2023

n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization

Mar 22, 2021