Manabu Okumura

DyG-Mamba: Continuous State Space Modeling on Dynamic Graphs

Aug 13, 2024

Reconsidering Token Embeddings with the Definitions for Pre-trained Language Models

Aug 02, 2024

Advancing Cross-domain Discriminability in Continual Learning of Vision-Language Models

Jun 27, 2024

Unveiling the Power of Source: Source-based Minimum Bayes Risk Decoding for Neural Machine Translation

Jun 17, 2024

InstructCMP: Length Control in Sentence Compression through Instruction-based Large Language Models

Jun 16, 2024

Community-Invariant Graph Contrastive Learning

May 02, 2024

A Survey on Deep Active Learning: Recent Advances and New Frontiers

May 01, 2024

Active Learning with Task Adaptation Pre-training for Speech Emotion Recognition

May 01, 2024

DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation

Mar 30, 2024

Can we obtain significant success in RST discourse parsing by using Large Language Models?

Mar 08, 2024