# Metrics

Pypianoroll provides several objective metrics proposed in the literature. These metrics can be used to evaluate a music generation system by measuring the statistical difference between the training data and the generated samples.

## Functions

`pypianoroll.empty_beat_rate(pianoroll: numpy.ndarray, resolution: int) → float`

Return the ratio of empty beats.

The empty-beat rate is defined as the ratio of the number of empty beats (where no note is played) to the total number of beats. Return NaN if song length is zero.

$empty\_beat\_rate = \frac{\#(empty\_beats)}{\#(beats)}$

Parameters

• pianoroll (ndarray) – Piano roll to evaluate.

• resolution (int) – Time steps per beat.

Returns

Empty-beat rate.

Return type

float
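To make the definition concrete, here is a minimal NumPy sketch of the formula above. It is an illustration of the definition, not the library's implementation; prefer `pypianoroll.empty_beat_rate` in practice. A piano roll is assumed to be a 2-D array of shape (time steps, 128).

```python
import numpy as np

def empty_beat_rate(pianoroll: np.ndarray, resolution: int) -> float:
    """Ratio of beats in which no note is active (NaN if there are no beats)."""
    n_beats = len(pianoroll) // resolution
    if n_beats < 1:
        return float("nan")
    # Group time steps into beats of `resolution` steps each
    beats = pianoroll[: n_beats * resolution].reshape(n_beats, -1)
    n_empty = int(np.sum(~beats.any(axis=1)))
    return n_empty / n_beats

# A 4-beat roll (resolution = 4) with notes only in beats 0 and 1
roll = np.zeros((16, 128), dtype=np.uint8)
roll[0, 60] = 100  # note in beat 0
roll[5, 64] = 100  # note in beat 1
print(empty_beat_rate(roll, 4))  # → 0.5
```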

`pypianoroll.n_pitches_used(pianoroll: numpy.ndarray) → int`

Return the number of unique pitches used.

Parameters

pianoroll (ndarray) – Piano roll to evaluate.

Returns

Number of unique pitches used.

Return type

int

See also

pypianoroll.n_pitch_classes_used() – Compute the number of unique pitch classes used.
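A minimal NumPy sketch of this metric (an illustration of the definition, not the library's implementation): a pitch counts as used if any time step activates it.

```python
import numpy as np

def n_pitches_used(pianoroll: np.ndarray) -> int:
    """Number of distinct pitches with at least one active time step."""
    return int(np.count_nonzero(pianoroll.any(axis=0)))

roll = np.zeros((8, 128), dtype=np.uint8)
roll[0, 60] = 100  # C4
roll[2, 60] = 100  # C4 again (same pitch, counted once)
roll[4, 72] = 100  # C5
print(n_pitches_used(roll))  # → 2
```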

`pypianoroll.n_pitch_classes_used(pianoroll: numpy.ndarray) → int`

Return the number of unique pitch classes used.

Parameters

pianoroll (ndarray) – Piano roll to evaluate.

Returns

Number of unique pitch classes used.

Return type

int

See also

pypianoroll.n_pitches_used() – Compute the number of unique pitches used.
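This differs from `n_pitches_used` in that pitches are folded to their pitch class (pitch number mod 12) before counting. A sketch of the definition in plain NumPy, for illustration only:

```python
import numpy as np

def n_pitch_classes_used(pianoroll: np.ndarray) -> int:
    """Number of distinct pitch classes (pitch number mod 12) used."""
    active_pitches = np.flatnonzero(pianoroll.any(axis=0))
    return len(np.unique(active_pitches % 12))

roll = np.zeros((8, 128), dtype=np.uint8)
roll[0, 60] = 100  # C4 -> pitch class 0
roll[2, 72] = 100  # C5 -> pitch class 0 (same class as C4)
roll[4, 64] = 100  # E4 -> pitch class 4
print(n_pitch_classes_used(roll))  # → 2
```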

`pypianoroll.pitch_range_tuple(pianoroll) → Tuple[float, float]`

Return the pitch range as a tuple (lowest, highest).

Returns

• int or nan – Lowest active pitch.

• int or nan – Highest active pitch.

See also

pypianoroll.pitch_range() – Compute the pitch range.

`pypianoroll.pitch_range(pianoroll) → float`

Return the pitch range.

Returns

Pitch range (in semitones), i.e., difference between the highest and the lowest active pitches.

Return type

int or nan

See also

pypianoroll.pitch_range_tuple() – Return the pitch range as a tuple.
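Both pitch-range functions can be sketched from their definitions (for illustration; not the library's implementation). Returning NaN for an empty roll lets the difference propagate as NaN too:

```python
import numpy as np

def pitch_range_tuple(pianoroll: np.ndarray):
    """Lowest and highest active pitches, or (nan, nan) if no note is found."""
    active = np.flatnonzero(pianoroll.any(axis=0))
    if len(active) == 0:
        return float("nan"), float("nan")
    return float(active[0]), float(active[-1])

def pitch_range(pianoroll: np.ndarray) -> float:
    """Difference (in semitones) between the highest and lowest active pitches."""
    lowest, highest = pitch_range_tuple(pianoroll)
    return highest - lowest  # NaN propagates when no note is found

roll = np.zeros((8, 128), dtype=np.uint8)
roll[0, 60] = 100  # C4
roll[4, 72] = 100  # C5
print(pitch_range_tuple(roll))  # → (60.0, 72.0)
print(pitch_range(roll))        # → 12.0
```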

`pypianoroll.qualified_note_rate(pianoroll: numpy.ndarray, threshold: float = 2) → float`

Return the ratio of qualified notes.

The qualified note rate is defined as the ratio of the number of qualified notes (notes longer than threshold, in time steps) to the total number of notes. Return NaN if no note is found.

$qualified\_note\_rate = \frac{\#(notes\_longer\_than\_the\_threshold)}{\#(notes)}$

Parameters

• pianoroll (ndarray) – Piano roll to evaluate.

• threshold (float, default: 2) – Threshold of note length (in time steps) to count toward the numerator.

Returns

Qualified note rate.

Return type

float

References

1. Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
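The definition above can be sketched with run-length detection over each pitch lane. This is an illustration of the formula, not the library's implementation; in particular it treats every maximal run of active time steps on a pitch as one note, so notes reattacked without a gap are merged:

```python
import numpy as np

def qualified_note_rate(pianoroll: np.ndarray, threshold: float = 2) -> float:
    """Ratio of notes lasting more than `threshold` time steps (NaN if no note)."""
    n_notes = 0
    n_qualified = 0
    for pitch in range(pianoroll.shape[1]):
        active = pianoroll[:, pitch] > 0
        # Pad with False and diff to find note onsets (+1) and offsets (-1)
        padded = np.concatenate(([False], active, [False]))
        boundaries = np.flatnonzero(np.diff(padded.astype(int)))
        onsets, offsets = boundaries[::2], boundaries[1::2]
        lengths = offsets - onsets
        n_notes += len(lengths)
        n_qualified += int(np.sum(lengths > threshold))
    if n_notes == 0:
        return float("nan")
    return n_qualified / n_notes

roll = np.zeros((8, 128), dtype=np.uint8)
roll[0:4, 60] = 100  # 4-step note: qualified (4 > 2)
roll[0, 64] = 100    # 1-step note: not qualified
print(qualified_note_rate(roll))  # → 0.5
```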

`pypianoroll.polyphonic_rate(pianoroll: numpy.ndarray, threshold: float = 2) → float`

Return the ratio of time steps where multiple pitches are on.

The polyphony rate is defined as the ratio of the number of time steps where multiple pitches are on to the total number of time steps. Drum tracks are ignored. Return NaN if song length is zero. This metric is used in [1], where it is called polyphonicity.

$polyphony\_rate = \frac{\#(time\_steps\_where\_multiple\_pitches\_are\_on)}{\#(time\_steps)}$

Parameters

• pianoroll (ndarray) – Piano roll to evaluate.

• threshold (float, default: 2) – Threshold on the number of concurrent pitches to count toward the numerator.

Returns

Polyphony rate.

Return type

float

References

1. Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
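A minimal sketch of the definition (for illustration; not the library's implementation). Note that counting steps with *at least* `threshold` active pitches is an assumption here; check the library source if the exact boundary behavior matters:

```python
import numpy as np

def polyphonic_rate(pianoroll: np.ndarray, threshold: float = 2) -> float:
    """Ratio of time steps with at least `threshold` concurrent pitches."""
    if len(pianoroll) == 0:
        return float("nan")
    n_active = np.count_nonzero(pianoroll, axis=1)  # concurrent pitches per step
    return float(np.count_nonzero(n_active >= threshold) / len(pianoroll))

roll = np.zeros((4, 128), dtype=np.uint8)
roll[0, [60, 64]] = 100      # dyad  -> polyphonic
roll[1, [60, 64, 67]] = 100  # triad -> polyphonic
roll[2, 60] = 100            # single note
print(polyphonic_rate(roll))  # → 0.5
```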

`pypianoroll.drum_in_pattern_rate(pianoroll: numpy.ndarray, resolution: int, tolerance: float = 0.1) → float`

Return the ratio of drum notes in a certain drum pattern.

The drum-in-pattern rate is defined as the ratio of the number of drum notes that fall in a certain drum pattern to the total number of drum notes. Only drum tracks are considered. Return NaN if no drum note is found. This metric is used in [1].

$drum\_in\_pattern\_rate = \frac{\#(drum\_notes\_in\_pattern)}{\#(drum\_notes)}$

Parameters

• pianoroll (ndarray) – Piano roll to evaluate.

• resolution (int) – Time steps per beat.

• tolerance (float, default: 0.1) – Tolerance.

Returns

Drum-in-pattern rate.

Return type

float

References

1. Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.

`pypianoroll.in_scale_rate(pianoroll: numpy.ndarray, root: int = 3, mode: str = 'major') → float`

Return the ratio of pitches in a certain musical scale.

The pitch-in-scale rate is defined as the ratio of the number of notes in a certain scale to the total number of notes. Drum tracks are ignored. Return NaN if no note is found. This metric is used in [1].

$pitch\_in\_scale\_rate = \frac{\#(notes\_in\_scale)}{\#(notes)}$

Parameters

• pianoroll (ndarray) – Piano roll to evaluate.

• root (int) – Root of the scale.

• mode (str, {'major', 'minor'}) – Mode of the scale.

Returns

Pitch-in-scale rate.

Return type

float

See also

muspy.scale_consistency() – Compute the largest pitch-in-scale rate.

References

1. Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, and Yi-Hsuan Yang, “MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in Proceedings of the 32nd AAAI Conference on Artificial Intelligence (AAAI), 2018.
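A sketch of the definition in plain NumPy, for illustration only. It treats `root` as a pitch-class number (0 = C), counts every active piano-roll cell as a note, and uses the natural minor scale; the library's exact conventions may differ:

```python
import numpy as np

# Pitch-class offsets of the diatonic scales, relative to the root
SCALES = {
    "major": (0, 2, 4, 5, 7, 9, 11),
    "minor": (0, 2, 3, 5, 7, 8, 10),  # natural minor
}

def in_scale_rate(pianoroll: np.ndarray, root: int = 0, mode: str = "major") -> float:
    """Ratio of active cells whose pitch class lies in the given scale."""
    _, pitches = np.nonzero(pianoroll)
    if len(pitches) == 0:
        return float("nan")
    in_scale = np.isin((pitches - root) % 12, SCALES[mode])
    return float(in_scale.mean())

roll = np.zeros((4, 128), dtype=np.uint8)
roll[0, 60] = 100  # C  -> in C major
roll[1, 61] = 100  # C# -> not in C major
roll[2, 64] = 100  # E  -> in C major
roll[3, 67] = 100  # G  -> in C major
print(in_scale_rate(roll, root=0, mode="major"))  # → 0.75
```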

`pypianoroll.tonal_distance(pianoroll_1: numpy.ndarray, pianoroll_2: numpy.ndarray, resolution: int, radii: Sequence[float] = (1.0, 1.0, 0.5)) → float`

Return the tonal distance [1] between the two input piano rolls.

Parameters
• pianoroll_1 (ndarray) – First piano roll to evaluate.

• pianoroll_2 (ndarray) – Second piano roll to evaluate.

• resolution (int) – Time steps per beat.

• radii (tuple of float) – Radii of the three tonal circles (see Equation 3 in [1]).

References

1. Christopher Harte, Mark Sandler, and Martin Gasser, “Detecting harmonic change in musical audio,” in Proceedings of the 1st ACM workshop on Audio and music computing multimedia, 2006.
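The core of the metric is the 6-dimensional tonal centroid transform of Harte et al., which maps a 12-dimensional chroma vector onto three circles (fifths, minor thirds, major thirds) with the given radii. The sketch below illustrates that transform and the distance between two single chroma vectors; it is not the library's implementation, which works on piano rolls and aggregates over time:

```python
import numpy as np

def tonal_centroid(chroma: np.ndarray, radii=(1.0, 1.0, 0.5)) -> np.ndarray:
    """Map a 12-dim chroma vector to the 6-dim tonal centroid of Harte et al."""
    k = np.arange(12)
    r1, r2, r3 = radii
    transform = np.stack([
        r1 * np.sin(k * 7 * np.pi / 6),  # circle of fifths
        r1 * np.cos(k * 7 * np.pi / 6),
        r2 * np.sin(k * 3 * np.pi / 2),  # circle of minor thirds
        r2 * np.cos(k * 3 * np.pi / 2),
        r3 * np.sin(k * 2 * np.pi / 3),  # circle of major thirds
        r3 * np.cos(k * 2 * np.pi / 3),
    ])
    # Normalize by total chroma energy so the centroid is scale-invariant
    return transform @ chroma / max(float(chroma.sum()), 1e-12)

def tonal_distance(chroma_1: np.ndarray, chroma_2: np.ndarray,
                   radii=(1.0, 1.0, 0.5)) -> float:
    """Euclidean distance between the tonal centroids of two chroma vectors."""
    return float(np.linalg.norm(
        tonal_centroid(chroma_1, radii) - tonal_centroid(chroma_2, radii)))

c_major = np.zeros(12)
c_major[[0, 4, 7]] = 1   # C-E-G triad
g_major = np.zeros(12)
g_major[[7, 11, 2]] = 1  # G-B-D triad
print(tonal_distance(c_major, c_major))  # → 0.0 (identical chroma)
print(tonal_distance(c_major, g_major) > 0)  # → True
```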