Christos Garoufis

Pre-training Music Classification Models via Music Source Separation

Oct 24, 2023

Multi-Source Contrastive Learning from Musical Audio

Feb 14, 2023

Enhancing Affective Representations of Music-Induced EEG through Multimodal Supervision and Latent Domain Adaptation

Feb 20, 2022

HTMD-Net: A Hybrid Masking-Denoising Approach to Time-Domain Monaural Singing Voice Separation

Mar 07, 2021

Deep Convolutional and Recurrent Networks for Polyphonic Instrument Classification from Monophonic Raw Audio Waveforms

Feb 13, 2021

Multiscale Fractal Analysis of Stimulated EEG Signals with Application to Emotion Classification

Oct 30, 2020

Augmentation Methods on Monophonic Audio for Instrument Classification in Polyphonic Music

Nov 28, 2019