BinaryMuseGAN

Hao-Wen Dong, Yi-Hsuan Yang

Music and AI Lab,
Research Center for IT Innovation,
Academia Sinica

Abstract

It has been shown recently that convolutional generative adversarial networks (GANs) can capture the temporal and pitch patterns in music using the piano-roll representation, which encodes music as binary-valued time-pitch matrices. However, existing models generate only real-valued piano-rolls and require further post-processing (e.g., hard thresholding or Bernoulli sampling) at test time to obtain the final binary-valued results. In this work, we first investigate how the real-valued predictions of the generator can make it difficult to train the discriminator. To overcome this binarization issue, we propose appending to the generator an additional refiner network that uses binary neurons at its output layer. The whole network is trained in two stages: in the first stage, the generator and the discriminator are pretrained; in the second stage, the refiner network is trained along with the discriminator to refine the real-valued piano-rolls produced by the pretrained generator into binary-valued ones. The proposed model can therefore directly generate binary-valued piano-rolls at test time. Experimental results show improvements over existing models in most of the evaluation metrics.
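To make the key idea concrete, below is a minimal PyTorch sketch of a binary neuron whose hard thresholding is made trainable with a sigmoid-adjusted straight-through gradient estimator, one common way to backpropagate through binarization. The abstract does not specify the exact refiner architecture or estimator, so the `Refiner` module and its single convolutional layer here are purely illustrative assumptions.

```python
import torch

class BinaryNeuron(torch.autograd.Function):
    """Hard-threshold binarization with a straight-through gradient.

    Forward:  x -> 1 if sigmoid(x) > 0.5 else 0  (binary-valued output).
    Backward: use the gradient of sigmoid(x) as a surrogate for the
    non-differentiable threshold (sigmoid-adjusted straight-through
    estimator; an assumption, not necessarily the paper's exact choice).
    """

    @staticmethod
    def forward(ctx, x):
        p = torch.sigmoid(x)
        ctx.save_for_backward(p)
        return (p > 0.5).float()

    @staticmethod
    def backward(ctx, grad_output):
        (p,) = ctx.saved_tensors
        # Surrogate gradient: d(sigmoid(x))/dx = p * (1 - p)
        return grad_output * p * (1 - p)


class Refiner(torch.nn.Module):
    """Hypothetical refiner: maps real-valued piano-rolls to binary ones."""

    def __init__(self, channels=1):
        super().__init__()
        self.conv = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, real_valued_pianoroll):
        logits = self.conv(real_valued_pianoroll)
        # Binary-valued output; gradients still flow to self.conv,
        # so the refiner can be trained jointly with the discriminator.
        return BinaryNeuron.apply(logits)
```

Because the straight-through estimator keeps the refiner differentiable, it can be appended to a pretrained generator and trained against the discriminator in the second stage, while the final output is already binary and needs no test-time post-processing.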