
Zi Yin

The BabyView dataset: High-resolution egocentric videos of infants' and young children's everyday experiences

Jun 14, 2024

Alignment is not sufficient to prevent large language models from generating harmful information: A psychoanalytic perspective

Nov 14, 2023

Emotional Intelligence of Large Language Models

Jul 28, 2023

End-to-End Face Parsing via Interlinked Convolutional Neural Networks

Feb 12, 2020

The Global Anchor Method for Quantifying Linguistic Shifts and Domain Adaptation

Dec 12, 2018

On the Dimensionality of Word Embedding

Dec 11, 2018

Understand Functionality and Dimensionality of Vector Embeddings: the Distributional Hypothesis, the Pairwise Inner Product Loss and Its Bias-Variance Trade-off

May 21, 2018

DeepProbe: Information Directed Sequence Understanding and Chatbot Design via Recurrent Neural Networks

Mar 01, 2018