Abstract: Kansei models have been used to study the connotative meaning of music. In multimedia and mixed reality, automatically generated melodies are increasingly being used, so it is important to consider whether, and what, feelings are communicated by this music. Evaluating computer-generated melodies is not a trivial task. Given the difficulty of defining useful quantitative metrics for the quality of a generated musical piece, researchers often resort to human evaluation. In these evaluations, judges are typically asked to rate a set of generated pieces alongside some benchmark pieces, the latter often composed by humans. While this kind of evaluation is relatively common, it is known that care must be taken when designing the experiment, as human judges can be influenced by a variety of factors. In this paper, we examine the impact of the presence of harmony in the audio files that judges evaluate, to see whether an accompaniment can change the evaluation of generated melodies. To do so, we generate melodies with two different algorithms, harmonize them with an automatic tool that we designed for this experiment, and ask more than sixty participants to evaluate the melodies. Through statistical analyses, we show that harmonization does impact the evaluation process by emphasizing the differences among judgements.
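The abstract does not name the statistical tests used. As a purely illustrative sketch, the snippet below probes the kind of effect the abstract reports, harmonization "emphasizing the differences among judgements", by comparing the dispersion of hypothetical listener ratings in the two conditions with Levene's test from SciPy. The ratings, variable names, and choice of test are all assumptions for illustration, not the authors' analysis.

    # Illustrative only: the data and the choice of test are assumptions,
    # not the analysis reported in the paper.
    from scipy.stats import levene

    # Hypothetical 1-5 listener ratings of generated melodies,
    # presented melody-only vs. with automatic harmonization.
    melody_only = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]
    harmonized = [5, 1, 5, 4, 1, 5, 5, 2, 4, 1]

    # Levene's test asks whether the two rating distributions have
    # different variances, i.e. whether judgements are more spread out
    # in one condition than in the other.
    stat, p = levene(melody_only, harmonized)
    print(f"W = {stat:.3f}, p = {p:.3f}")

A comparison of central tendency (e.g., a Mann-Whitney U test) would instead ask whether ratings shift up or down overall; the sketch merely shows the general workflow of such an analysis.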
Abstract: Computational models of music, while providing good descriptions of melodic development, still cannot fully capture the overall structure formed by repetitions, transpositions, and the reuse of melodic material. We present a corpus of strongly structured baroque allemandes and describe a top-down approach that abstracts the shared structure of their musical content using tree representations derived from pairwise differences between the Schenkerian-inspired analyses of each piece, thereby providing a rich hierarchical description of the corpus.
Abstract: Music is a form of expression that often requires interaction between players. If one wishes to interact musically with a computer, the machine must be able to interpret the input given by the human and extract its musical meaning. In this work, we propose a system capable of detecting basic rhythmic features, allowing an application to synchronize its output with the rhythm given by the user without any prior agreement about, or constraints on, the possible input. The system is described in detail and evaluated through simulation using quantitative metrics. The evaluation shows that the system detects tempo and meter consistently under certain settings and could be a solid basis for further developments toward a system robust to rhythmically changing input.
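The abstract leaves the detection algorithm to the paper itself. As a minimal illustrative sketch, the snippet below shows one common baseline for the tempo half of the task, inter-onset-interval (IOI) clustering; the function name, the quantization resolution, and the histogram approach are assumptions for illustration, not the proposed system (which additionally handles meter).

    # Illustrative baseline only, not the system proposed in the paper.
    from collections import Counter

    def estimate_tempo(onsets, resolution=0.05):
        """Estimate tempo in BPM from a list of onset times in seconds."""
        # Inter-onset intervals between consecutive events.
        iois = [b - a for a, b in zip(onsets, onsets[1:])]
        if not iois:
            raise ValueError("need at least two onsets")
        # Quantize IOIs so near-equal intervals fall into the same bin,
        # then take the most frequent bin as the beat period.
        bins = Counter(round(ioi / resolution) for ioi in iois)
        beat_period = bins.most_common(1)[0][0] * resolution
        return 60.0 / beat_period

    # Onsets tapped roughly every 0.5 s -> about 120 BPM.
    print(estimate_tempo([0.0, 0.49, 1.01, 1.50, 2.02, 2.51]))

A full interactive system would also need to track beat phase and infer meter, and to adapt as the input changes over time, which is precisely the robustness the abstract identifies as a direction for further development.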