Meter

MetricalSalience

MetricalSalience(
    symbolic_pulses: Optional[ndarray] = None,
    quarter_bpm: Optional[float] = None,
    mu: float = 0.6,
    sig: float = 0.3,
)

Methods for storing array representations of metrical structure and derived salience values.

Parameters:

  • symbolic_pulses (Optional[ndarray], default: None ) –

    A NumPy array representing the symbolic pulse lengths by level.

  • quarter_bpm (Optional[float], default: None ) –

    The beats per minute corresponding to a pulse of symbolic length 1.0 (the quarter-note reference value). The user sets this value if/when calculating absolute lengths and salience values.

  • mu (float, default: 0.6 ) –

    The mean of the Gaussian.

  • sig (float, default: 0.3 ) –

    The standard deviation of the Gaussian.

Attributes:

  • symbolic_pulses

    As above.

  • absolute_pulses

    An adaptation of the symbolic pulse lengths array that maps each value from symbolic to seconds.

  • salience_values

    An adaptation of the absolute pulse lengths to the equivalent salience values (see notes on log_gaussian).

  • cumulative_salience_values

    A 1D array summation of the absolute salience values by column (one value per metrical position).

  • indicator

    An indicator array for the (non-)presence of values at each position of the symbolic pulse lengths array. This can serve, for example, as the symbolic equivalent of the (absolute) salience_values array.

Examples:

>>> from amads.time.meter.representations import PulseLengths
>>> pl = [4, 2, 1, 0.5]
>>> pls = PulseLengths(pulse_lengths=pl, cycle_length=4)
>>> arr = pls.to_array()
>>> arr
array([[4. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
       [2. , 0. , 0. , 0. , 2. , 0. , 0. , 0. ],
       [1. , 0. , 1. , 0. , 1. , 0. , 1. , 0. ],
       [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]])
>>> ms = MetricalSalience(symbolic_pulses=arr, quarter_bpm=120)
>>> ms.absolute_pulses
array([[2.  , 0.  , 0.  , 0.  , 0.  , 0.  , 0.  , 0.  ],
       [1.  , 0.  , 0.  , 0.  , 1.  , 0.  , 0.  , 0.  ],
       [0.5 , 0.  , 0.5 , 0.  , 0.5 , 0.  , 0.5 , 0.  ],
       [0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25]])
>>> ms.cumulative_salience_values
array([2.39342011, 0.44793176, 1.4136999 , 0.44793176, 2.17446773,
       0.44793176, 1.4136999 , 0.44793176])
Source code in amads/time/meter/attractor_tempos.py
def __init__(
    self,
    symbolic_pulses: Optional[np.ndarray] = None,
    quarter_bpm: Optional[float] = None,
    mu: float = 0.6,
    sig: float = 0.3,
):
    self.symbolic_pulses = symbolic_pulses
    self.quarter_bpm = quarter_bpm
    self.mu = mu
    self.sig = sig
    self.absolute_pulses = self.calculate_absolute_pulse_lengths()
    self.salience_values = self.calculate_salience_values()
    self.cumulative_salience_values = (
        self.calculate_cumulative_salience_values()
    )
    self.indicator = self.make_indicator()

Functions

calculate_absolute_pulse_lengths

calculate_absolute_pulse_lengths()

Calculate absolute pulse lengths from the symbolic lengths (symbolic_pulses) and the BPM provided here for the 'quarter note' as reference value.

Source code in amads/time/meter/attractor_tempos.py
def calculate_absolute_pulse_lengths(self):
    """
    Calculate absolute pulse lengths from
    the symbolic lengths (`symbolic_pulses`) and
    the BPM provided here for the 'quarter note' as reference value.
    """
    return self.symbolic_pulses * (60 / self.quarter_bpm)

calculate_salience_values

calculate_salience_values()

Calculate salience values for items in the symbolic_pulses using log_gaussian (see notes on that function).

Source code in amads/time/meter/attractor_tempos.py
def calculate_salience_values(self):
    """
    Calculate salience values for items in the `symbolic_pulses`
    using `log_gaussian` (see notes on that function).
    """
    return log_gaussian(self.absolute_pulses, self.mu, self.sig)

calculate_cumulative_salience_values

calculate_cumulative_salience_values()

Calculate cumulative salience values by summing over columns.

Source code in amads/time/meter/attractor_tempos.py
def calculate_cumulative_salience_values(self):
    """
    Calculate cumulative salience values by summing over columns.
    """
    return np.sum(self.salience_values, axis=0)

make_indicator

make_indicator()

Make a 2D indicator array for the presence/absence of a pulse value at each position.

Source code in amads/time/meter/attractor_tempos.py
def make_indicator(self):
    """
    Make a 2D indicator array for the presence/absence of a pulse value at each position.
    """
    return (self.symbolic_pulses > 0).astype(int)

plot

plot(
    symbolic_not_absolute: bool = False,
    reverse_to_plot: bool = True,
    show: bool = True,
)

Plot the salience values as stacked bars showing each level's contribution.

Parameters:

  • symbolic_not_absolute (bool, default: False ) –

    If True, plot only the indicator values (one per level). If False (default), plot the tempo- and meter-sensitive, weighted salience values.

  • reverse_to_plot (bool, default: True ) –

    If True (default), plot the fastest values at the bottom.

Returns:

  • Figure

    A matplotlib.figure.Figure of the plotted salience values.

Source code in amads/time/meter/attractor_tempos.py
def plot(
    self,
    symbolic_not_absolute: bool = False,
    reverse_to_plot: bool = True,
    show: bool = True,
):
    """
    Plot the salience values as stacked bars showing each level's contribution.

    Parameters
    ----------
    symbolic_not_absolute: If True, plot only the indicator values (one per level).
        If False (default), plot the tempo- and meter-sensitive, weighted salience values.
    reverse_to_plot: If True (default), plot the fastest values at the bottom.

    Returns
    -------
    Figure
        A matplotlib.figure.Figure of the plotted salience values.
    """
    if symbolic_not_absolute:
        data = self.indicator
    else:
        data = self.salience_values

    pulse_values_for_labels = self.symbolic_pulses[:, 0]

    if reverse_to_plot:
        data = data[::-1]  # TODO maybe revisit for elegance, checks
        pulse_values_for_labels = pulse_values_for_labels[::-1]

    num_layers = data.shape[0]
    num_cols = data.shape[1]
    fig, ax = plt.subplots()
    bottom = np.zeros(num_cols)

    for i in range(num_layers):
        ax.bar(
            np.arange(num_cols),
            data[i],
            bottom=bottom,
            label=f"Pulse={pulse_values_for_labels[i]}; IOI={pulse_values_for_labels[i] * 60 / self.quarter_bpm}",
        )
        bottom += data[i]

    ax.set_xlabel("Cycle-relative position")
    ax.set_ylabel("Weighting")
    ax.legend()
    ax.grid(True)
    if show:
        plt.show()
    return fig

log_gaussian

log_gaussian(arr: ndarray, mu: float = 0.6, sig: float = 0.3)

Compute a log-linear Gaussian which is the basis of individual pulse salience values. To avoid log(0) issues, values are clipped (np.clip) to be strictly greater than 0. See also MetricalSalience.calculate_salience_values.

Parameters:

  • arr (ndarray) –

    The array of (absolute) pulse lengths to map to salience values.

  • mu (float, default: 0.6 ) –

    The mean of the Gaussian.

  • sig (float, default: 0.3 ) –

    The standard deviation of the Gaussian.

Examples:

>>> log_gaussian(np.array([0.06, 0.6, 6.0])) # demo log-lin symmetry
array([0.00386592, 1.        , 0.00386592])
>>> log_gaussian(np.array([0.5, 1., 2.])) # 2x between levels
array([0.96576814, 0.76076784, 0.21895238])
Source code in amads/time/meter/attractor_tempos.py
def log_gaussian(arr: np.ndarray, mu: float = 0.6, sig: float = 0.3):
    """
    Compute a log-linear Gaussian which is the basis of individual pulse salience values.
    To avoid log(0) issues, `np.clip` values to be always greater than 0.
    See also [MetricalSalience.calculate_salience_values]
    [amads.time.meter.attractor_tempos.MetricalSalience.calculate_salience_values].


    Parameters
    ----------
    mu: float
        The mean of the Gaussian.
    sig: float
        The standard deviation of the Gaussian.

    Examples
    --------

    >>> log_gaussian(np.array([0.06, 0.6, 6.0])) # demo log-lin symmetry
    array([0.00386592, 1.        , 0.00386592])

    >>> log_gaussian(np.array([0.5, 1., 2.])) # 2x between levels
    array([0.96576814, 0.76076784, 0.21895238])

    """
    if sig <= 0:
        raise ValueError("Standard deviation (`sig`) must be positive.")
    if mu <= 0:
        raise ValueError("Mean (`mu`) must be positive.")
    x = np.clip(arr, 1e-9, None)
    return np.exp(-(np.log10(x / mu) ** 2 / (2 * sig**2)))

MetricalSplitter

MetricalSplitter(
    note_start: float,
    note_length: float,
    start_hierarchy: list[list],
    split_same_level: bool = True,
)

Split up notes and/or rests to reflect a specified metrical hierarchy.

This class takes in a representation of a note in terms of the start position and duration, along with a metrical context and returns a list of start-duration pairs for the constituent parts of the broken-up note.

The metrical context should be expressed in the form of a start_hierarchy (effectively a list of lists for the hierarchy). This can be provided directly or made via various classes in the meter module (see notes there).

The basic premise here is that a single note can only traverse metrical boundaries for levels lower than the one it starts on. If it traverses the metrical boundary of a higher level, then it is split at that position into two note-heads. This split registers as a case of syncopation for those algorithms and as a case for two note-heads to be connected by a tie in notation.

There are many variants on this basic setup. This class aims to support almost any such variant, while providing easy defaults for simple, standard practice.

The flexibility comes from the definition of a metrical structure (for which see the MetricalHierarchy class).

Each split of the note duration serves to move up one metrical level. For instance, in the 4/4 example, a note of duration 2.0 starting at position 0.25 connects to 0.5 in level 3 (duration = 0.25), then 0.5 connects to 1.0 in level 2 (duration = 0.5), then 1.0 connects to 2.0 in level 1 (duration = 1.0), and this leaves a duration of 0.25 starting on 2.0. The data is returned as a list of (position, duration) tuples. The values for this example would be: [(0.25, 0.25), (0.5, 0.5), (1.0, 1.0), (2.0, 0.25)] as demonstrated below.

If the note runs past the end of the metrical span, the start_duration_pairs attribute records the within-measure pairs and the remaining_length attribute stores the remainder.

If the note_start is not in the hierarchy, then the first step is to map to the next nearest value in the lowest level.

Parameters:

  • note_start (float) –

    The starting position of the note (or rest).

  • note_length (float) –

    The length (duration) of the note (or rest).

  • split_same_level (bool, default: True ) –

    Whether to split elements at the same level, e.g., on the 1/8 and 2/8 positions in 6/8. In metrical structures with a 3-grouping (two "weak" events between "strong" ones, as in compound signatures like 6/8), some conventions split notes within-level as well as between levels. For instance, with a quarter note starting on the second eighth note (start 0.5) of 6/8, some will want to split that into two 1/8th notes, divided on the third eighth-note position, while others will want to leave it intact. The split_same_level option accommodates this: when True (the default), the within-level split is made; when False, it is not.

Examples:

>>> from amads.time.meter.representations import TimeSignature, PulseLengths
>>> m = TimeSignature(as_string="4/4")
>>> start_hierarchy = m.to_start_hierarchy()
>>> start_hierarchy
[[0.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]]
>>> split = MetricalSplitter(0.25, 2.0, start_hierarchy=start_hierarchy, split_same_level=False)
>>> split.start_duration_pairs
[(0.25, 0.75), (1.0, 1.25)]
>>> split = MetricalSplitter(0.25, 2.0, start_hierarchy=start_hierarchy, split_same_level=True)
>>> split.start_duration_pairs
[(0.25, 0.75), (1.0, 1.0), (2.0, 0.25)]
>>> m.fill_2s_3s()
>>> start_hierarchy = m.to_start_hierarchy()
>>> start_hierarchy
[[0.0, 4.0], [0.0, 2.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]]
>>> meter_from_pulses = PulseLengths([4, 2, 1, 0.5, 0.25], cycle_length=4)
>>> start_hierarchy = meter_from_pulses.to_start_hierarchy()
>>> start_hierarchy[-1]
[0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75, 4.0]
>>> split = MetricalSplitter(0.25, 4.0, start_hierarchy=start_hierarchy)
>>> split.start_duration_pairs
[(0.25, 0.25), (0.5, 0.5), (1.0, 1.0), (2.0, 2.0)]
>>> split.remaining_length
0.25
>>> split = MetricalSplitter(0.05, 2.0, start_hierarchy=start_hierarchy)
>>> split.start_duration_pairs
[(0.05, 0.2), (0.25, 0.25), (0.5, 0.5), (1.0, 1.0), (2.0, 0.05)]
Source code in amads/time/meter/break_it_up.py
def __init__(
    self,
    note_start: float,
    note_length: float,
    start_hierarchy: list[list],
    split_same_level: bool = True,
):

    self.note_length = note_length
    self.note_start = note_start
    self.start_hierarchy = start_hierarchy
    self.split_same_level = split_same_level

    # Initialise
    self.start_duration_pairs = []
    self.updated_start = note_start
    self.remaining_length = note_length
    self.level_pass()

Functions

level_pass

level_pass()

Given a start_hierarchy, this method iterates across the levels of that hierarchy to find the current start position, and (through advance_step) the start position to map to.

This method runs once for each such mapping, typically advancing up one (or more) layer of the metrical hierarchy with each call. “Typically” because split_same_level is supported where relevant.

Each iteration creates a new start-duration pair stored in the start_duration_pairs list that records the constituent parts of the split note.

Source code in amads/time/meter/break_it_up.py
def level_pass(self):
    """
    Given a `start_hierarchy`,
    this method iterates across the levels of that hierarchy to find the
    current start position, and (through `advance_step`) the start
    position to map to.

    This method runs once for each such mapping, typically advancing
    up one (or more) layer of the metrical hierarchy with each call.
    “Typically” because `split_same_level` is supported where relevant.

    Each iteration creates a new start-duration pair
    stored in the start_duration_pairs list
    that records the constituent parts of the split note.
    """

    for level_index in range(len(self.start_hierarchy)):

        if (
            self.remaining_length <= 0
        ):  # sic, here due to the various routes through
            return

        if (
            self.updated_start == self.start_hierarchy[0][-1]
        ):  # finished metrical span
            return

        this_level = self.start_hierarchy[level_index]

        if self.updated_start in this_level:
            if level_index == 0:  # i.e., updated_start == 0
                self.start_duration_pairs.append(
                    (self.updated_start, round(self.remaining_length, 4))
                )
                return
            else:  # level up. NB: duplicates in nested hierarchy help here
                if self.split_same_level:  # relevant option for e.g., 6/8
                    self.advance_step(this_level)
                else:  # usually
                    self.advance_step(self.start_hierarchy[level_index - 1])

    if self.remaining_length > 0:  # start not in the hierarchy at all
        self.advance_step(
            self.start_hierarchy[-1]
        )  # get to the lowest level
        # Now start the process with the metrical structure:
        self.level_pass()

advance_step

advance_step(positions_list: list)

For a start position, and a metrical level expressed as a list of starts, find the next higher value from those levels. Used for determining iterative divisions.

Source code in amads/time/meter/break_it_up.py
def advance_step(self, positions_list: list):
    """
    For a start position, and a metrical level expressed as a list of starts,
    find the next higher value from those levels.
    Used for determining iterative divisions.
    """
    for p in positions_list:
        if p > self.updated_start:
            duration_to_next_position = p - self.updated_start
            if self.remaining_length <= duration_to_next_position:
                self.start_duration_pairs.append(
                    (self.updated_start, round(self.remaining_length, 4))
                )
                # done but still reduce `remaining_length` to end the whole process in level_pass
                self.remaining_length -= duration_to_next_position
                return
            else:  # self.remaining_length > duration_to_next_position:
                self.start_duration_pairs.append(
                    (
                        self.updated_start,
                        round(duration_to_next_position, 4),
                    )
                )
                # Updated start and position; run again
                self.updated_start = p
                self.remaining_length -= duration_to_next_position
                self.level_pass()  # NB: to re-start from top as may have jumped a level
                return

grid

This module seeks to find the smallest metrical pulse level (broadly, the “tatum”) for a given source, subject to user tolerance settings.

In the simplest case, a source records its metrical positions exactly, including fractional values as needed. For these cases, we provide standard, general algorithms (greatest common divisor and fraction estimation) which are battle-tested and computationally efficient.
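
As a sketch of that standard approach for exactly-encoded positions (illustrative only, not this module's implementation), the tatum can be taken as the reciprocal of the least common multiple of the positions' denominators:

```python
# Illustrative sketch (not the grid module's code): for exactly encoded
# positions, the tatum is 1 / lcm of the fractional denominators.
from fractions import Fraction
from math import lcm

starts = [Fraction(0), Fraction(1, 2), Fraction(3, 4), Fraction(5, 3)]
tatum = Fraction(1, lcm(*(s.denominator for s in starts)))
print(tatum)  # 1/12: the finest pulse covering halves, quarters and triplets here
```

This only works when the input fractions are exact; the rest of the module addresses the float-based cases where it breaks down.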

In metrically simple and regular cases like chorales, this value might be the eighth note, for instance. In other cases, it gets more complex. For example, Beethoven's Opus 10 Nr.2 Movement 1 includes a triplet 16th turn figure in measure 1 (tatum = 1/6 division of the quarter note) and also dotted rhythms that pair a dotted 16th with a 32nd note from measure 5 (tatum = 1/8 division of the quarter). So to catch these cases in the first 5 measures, we need the lowest common multiple of 6 and 8, i.e., 24 per quarter (or 48 bins per 2/4 measure).
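
The arithmetic in that Beethoven example can be checked directly (a hypothetical sketch, not part of the module's API):

```python
# Hypothetical check of the arithmetic above: combining triplet 16ths
# (1/6 of a quarter) with dotted-16th + 32nd pairs (1/8 of a quarter).
from fractions import Fraction
from math import lcm

bins_per_quarter = lcm(6, 8)             # 24
tatum = Fraction(1, bins_per_quarter)    # 1/24 of a quarter note
bins_per_measure = bins_per_quarter * 2  # 48 for a 2/4 measure
print(tatum, bins_per_measure)  # 1/24 48
```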

In cases of extreme complexity, there may be a “need” for a considerably shorter tatum pulse (and, equivalently, a greater number of bins). This is relevant for some modern music, as well as cases where grace notes are assigned a specific metrical position/duration (though in many encoded standards, grace notes are not assigned separate metrical positions).

Moreover, there are musical sources that do not encode fractional time values, but rather approximation with floats. These include any:

  • frame-wise representations of time (including MIDI and any attempted transcription from audio),
  • processing via code libraries that likewise convert fractions to floats,
  • secondary representations like most CSVs.

As division by 3 leads to rounding, approximation, and floating point errors, and as much music involves those divisions, this is widely relevant.

The standard algorithms often fail in these contexts, largely because symbolic music tends to prioritise certain metrical divisions over others. For example, 15/16 is a commonly used metrical position (largely because 16 is a power of 2), but 14/15 is not. That being the case, while 14/15 might be a better mathematical fit for approximating a value, it is typically incorrect as the musical solution. We use the term “incorrect” advisedly here because the floats are secondary representations of a known fractional ground truth. Doctests demonstrate some of these cases.
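
To make the floating-point problem concrete, here is a minimal, hypothetical illustration (not part of the module) of how a triplet position loses exactness once exported as a rounded float:

```python
# Hypothetical illustration: a triplet position exported as a rounded
# float (as in a CSV) no longer equals its fractional ground truth.
from fractions import Fraction

true_position = Fraction(7, 3)           # e.g., a triplet-eighth position
stored = round(float(true_position), 5)  # what a CSV export might hold
print(stored)                            # 2.33333
print(stored == float(true_position))    # False: exactness already lost
# Tolerance-based estimation can recover it here, but (as discussed above)
# a purely mathematical best fit does not always match musical conventions.
print(Fraction(stored).limit_denominator(100))  # Fraction(7, 3)
```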

Author: Mark Gotham


starts_to_int_relative_counter

starts_to_int_relative_counter(
    starts: Iterable[float], decimal_places: int = 5
)

Find and count all fractional parts of an iterable.

Simple wrapper function to create a Counter (dict) that maps the fractional parts of starts ($start - int(start)$, e.g., 1.5 becomes 0.5) to the number of occurrences of that fraction (e.g., starts 1.5 and 2.5 produce the mapping 0.5: 2 in the result).

Fractional parts are rounded to decimal_places decimal points (default 5), which gives a tolerance down to 0.00001 and accommodates common musical fractions such as thirds (0.33333) and sixths (0.16667).

Examples:

>>> test_list = [0.0, 0.0, 0.5, 1.0, 1.5, 1.75, 2.0, 2.3333333333, 2.666667, 3.00000000000000001]
>>> starts_to_int_relative_counter(test_list)
Counter({0.0: 5, 0.5: 2, 0.75: 1, 0.33333: 1, 0.66667: 1})
Source code in amads/time/meter/grid.py
def starts_to_int_relative_counter(
    starts: Iterable[float], decimal_places: int = 5
):
    """
    Find and count all fractional parts of an iterable.

    Simple wrapper function to create a Counter (dict) that
    maps the fractional parts of starts ($start - int(start)$, e.g.,
    1.5 becomes 0.5) to the number of occurrences of that fraction
    (e.g., starts 1.5 and 2.5 produce the mapping 0.5: 2 in the result).

    Fractional parts are rounded to `decimal_places` decimal points (default 5),
    which gives a tolerance down to 0.00001 and accommodates common musical
    fractions such as thirds (0.33333) and sixths (0.16667).

    Examples
    --------
    >>> test_list = [0.0, 0.0, 0.5, 1.0, 1.5, 1.75, 2.0, 2.3333333333, 2.666667, 3.00000000000000001]
    >>> starts_to_int_relative_counter(test_list)
    Counter({0.0: 5, 0.5: 2, 0.75: 1, 0.33333: 1, 0.66667: 1})
    """
    for item in starts:
        if not isinstance(item, Number):
            raise TypeError(
                f"All items in `starts` must be numeric (int or float). Found: {type(item)}"
            )

    return Counter([round(x - int(x), decimal_places) for x in starts])

approximate_pulse_match_with_priority_list

approximate_pulse_match_with_priority_list(
    x: float,
    distance_threshold: float = 0.001,
    pulse_priority_list: Optional[list] = None,
) -> Optional[Fraction]

Takes a float and an ordered list of possible pulses, returning the first pulse in the list to approximate the input float.

This is a new function by MG as reported in [1].

Parameters:

  • x (float) –

    Input value to be approximated as a fraction.

  • distance_threshold (float, default: 0.001 ) –

    The distance threshold.

  • pulse_priority_list (list[Fraction], default: None ) –

    Ordered list of pulse values to try. If unspecified, this defaults to 4, 3, 2, and 3/2, followed by the default (inverted) output of generate_n_smooth_numbers: 1, 1/2, 1/3, 1/4, …

Returns:

  • Optional[Fraction]

    None for no match, or the matching pulse as a Fraction(numerator, denominator).

Raises:

  • ValueError

    If pulse_priority_list contains None or an entry that is not a Fraction or int (entries equal to 0 are skipped).

References

[1] Gotham, Mark R. H. (2025). Keeping Score: Computational Methods for the Analysis of Encoded ("Symbolic") Musical Scores (v0.3+). Zenodo. https://doi.org/10.5281/zenodo.14938027

Examples:

>>> approximate_pulse_match_with_priority_list(5/6)
Fraction(1, 6)
>>> test_case = round(float(11/12), 5)
>>> test_case
0.91667
>>> approximate_pulse_match_with_priority_list(test_case)
Fraction(1, 12)

Note that Fraction(1, 12) is included in the default list, while Fraction(11, 12) is not, as that would be an extremely unusual tatum value.

If the distance_threshold is very coarse, expect errors:

>>> approximate_pulse_match_with_priority_list(29 + 1/12, distance_threshold=0.1)
Fraction(1, 1)
>>> approximate_pulse_match_with_priority_list(29 + 1/12, distance_threshold=0.01)
Fraction(1, 12)
Source code in amads/time/meter/grid.py
def approximate_pulse_match_with_priority_list(
    x: float,
    distance_threshold: float = 0.001,
    pulse_priority_list: Optional[list] = None,
) -> Optional[Fraction]:
    """
    Takes a float and an ordered list of possible pulses,
    returning the first pulse in the list to approximate the input float.

    This is a new function by MG as reported in [1].

    Parameters
    ----------
    x : float
        Input value to be approximated as a fraction.
    distance_threshold : float
        The distance threshold.
    pulse_priority_list : list[Fraction]
        Ordered list of pulse values to try.
        If unspecified, this defaults to 4, 3, 2, 1.5, 1, and the
        default output of `generate_n_smooth_numbers`.

    Returns
    -------
    Union(None, Fraction)
        None for no match, or a Fraction(numerator, denominator).

    Raises
    ------
    ValueError
        If `pulse_priority_list` contains None or an entry that is not a
        Fraction or int (entries equal to 0 are skipped).

    References
    ----------
    [1] Gotham, Mark R. H. (2025). Keeping Score: Computational Methods for the
    Analysis of Encoded ("Symbolic") Musical Scores (v0.3+) Zenodo.
    https://doi.org/10.5281/zenodo.14938027

    Examples
    --------
    >>> approximate_pulse_match_with_priority_list(5/6)
    Fraction(1, 6)

    >>> test_case = round(float(11/12), 5)
    >>> test_case
    0.91667

    >>> approximate_pulse_match_with_priority_list(test_case)
    Fraction(1, 12)

    Note that `Fraction(1, 12)` is included in the default list,
    while `Fraction(11, 12)` is not as that would be an extremely unusual tatum value.

    If the `distance_threshold` is very coarse, expect errors:
    >>> approximate_pulse_match_with_priority_list(29 + 1/12, distance_threshold=0.1)
    Fraction(1, 1)

    >>> approximate_pulse_match_with_priority_list(29 + 1/12, distance_threshold=0.01)
    Fraction(1, 12)

    """
    if pulse_priority_list is None:
        pulse_priority_list = [
            Fraction(4, 1),  # 4
            Fraction(3, 1),  # 3
            Fraction(2, 1),  # 2
            Fraction(3, 2),  # 1.5
        ]
        pulse_priority_list += generate_n_smooth_numbers(
            invert=True
        )  # 1, 1/2, 1/3, ...

    for p in pulse_priority_list:
        if p == 0:  # Ignore 0s
            continue
        elif p is None:
            raise ValueError("`pulse_priority_list` must not contain None.")
        elif not isinstance(p, (Fraction, int)):
            raise ValueError(
                f"All entries in `pulse_priority_list` must be Fraction or int. Found: {type(p)}"
            )
        test_case = x / p
        diff = abs(round(test_case) - test_case)
        if diff < distance_threshold:
            return p

    return None

generate_n_smooth_numbers

generate_n_smooth_numbers(
    bases: list[int] = [2, 3], max_value: int = 100, invert: bool = True
) -> list

Generates a sorted list of "N-smooth" numbers up to a specified maximum value.

An N-smooth number is a positive integer whose prime factors are all less than or equal to the largest number in the bases list.

Parameters:

  • max_value (int, default: 100 ) –

    The maximum value to generate numbers up to. Defaults to 100.

  • bases (list, default: [2, 3] ) –

    A list of base values (integers > 1) representing the maximum allowed prime factor. Defaults to [2, 3].

  • invert (bool, default: True ) –

    If True, return Fraction(1, x) for each smooth number x instead of x itself. Defaults to True.

Returns:

  • list

    A sorted list of N-smooth numbers (or their reciprocals if invert=True).

Raises:

  • ValueError

    If bases contains non-integers or values <= 1, or if max_value is not a positive integer.

Examples:

Our metrical default:

>>> generate_n_smooth_numbers(invert=False)  # all defaults `max_value=100`, `bases [2, 3]`
[1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 27, 32, 36, 48, 54, 64, 72, 81, 96]

Other cases:

>>> generate_n_smooth_numbers(max_value=10, bases=[2], invert=False)
[1, 2, 4, 8]
>>> generate_n_smooth_numbers(max_value=20, bases=[2, 3], invert=False)
[1, 2, 3, 4, 6, 8, 9, 12, 16, 18]
>>> generate_n_smooth_numbers(max_value=50, bases=[2, 3, 5], invert=False)
[1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36, 40, 45, 48, 50]

By default, invert is True:

>>> generate_n_smooth_numbers()[-1]
Fraction(1, 96)
Source code in amads/time/meter/grid.py
def generate_n_smooth_numbers(
    bases: list[int] = [2, 3], max_value: int = 100, invert: bool = True
) -> list:
    """
    Generates a sorted list of "N-smooth" numbers up to a specified maximum value.

    An N-smooth number is a positive integer whose prime factors are all
    less than or equal to the largest number in the `bases` list.

    Parameters
    ----------
    max_value : int, optional
        The maximum value to generate numbers up to. Defaults to 100.
    bases : list, optional
        A list of base values (integers > 1) representing the maximum allowed
        prime factor. Defaults to [2, 3].
    invert : bool
        If True, return Fraction(1, x) for each smooth number x instead of x itself.
        Defaults to True.

    Returns
    -------
    list
        A sorted list of N-smooth numbers (or their reciprocals if `invert=True`).

    Raises
    ------
    ValueError
        If `bases` contains non-integers or values <= 1, or if `max_value` is
        not a positive integer.

    Examples
    --------
    Our metrical default:
    >>> generate_n_smooth_numbers(invert=False)  # all defaults `max_value=100`, `bases [2, 3]`
    [1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 27, 32, 36, 48, 54, 64, 72, 81, 96]

    Other cases:
    >>> generate_n_smooth_numbers(max_value=10, bases=[2], invert=False)
    [1, 2, 4, 8]
    >>> generate_n_smooth_numbers(max_value=20, bases=[2, 3], invert=False)
    [1, 2, 3, 4, 6, 8, 9, 12, 16, 18]
    >>> generate_n_smooth_numbers(max_value=50, bases=[2, 3, 5], invert=False)
    [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36, 40, 45, 48, 50]

    By default, invert is True:
    >>> generate_n_smooth_numbers()[-1]
    Fraction(1, 96)

    """
    if not all(isinstance(b, int) and b > 1 for b in bases):
        raise ValueError("Bases must be a list of integers greater than 1.")

    if not isinstance(max_value, int) or max_value <= 0:
        raise ValueError("max_value must be a positive integer.")

    seen = {1}
    queue = deque([1])

    while queue:
        current = queue.popleft()
        for base in bases:
            next_num = current * base
            if next_num <= max_value and next_num not in seen:
                seen.add(next_num)
                queue.append(next_num)

    smooth_numbers = sorted(seen)

    if invert:
        return [Fraction(1, x) for x in smooth_numbers]
    else:
        return smooth_numbers

get_tatum_from_priorities

get_tatum_from_priorities(
    starts: Iterable,
    pulse_priority_list: Optional[list] = None,
    distance_threshold: float = 1 / 24,
    proportion_threshold: Optional[float] = 0.999,
) -> Fraction

Estimate metrical positions from floats.

This function serves cases where temporal position values are defined relative to some origin, such as the time elapsed since:

  • the start of a piece (or section) in quarter notes (or some other consistent symbolic value)
  • the start of a measure (or other container), assuming those measures are of a constant duration.

Use cases include the attempted retrieval of true metrical positions (fractions) from rounded versions thereof (floats). See also notes at the top of this module for why standard algorithms fail at this task in a musical setting.

This function serves those common cases where there is a need to balance capturing event positions as accurately as possible against introducing excessive complexity to account for a few anomalous notes. Most importantly, it enables the explicit prioritisation of common pulse divisions: the defaults prioritise division by 16 over division by 15, for example.
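To see why a prior over pulse divisions matters, here is a minimal sketch using only the standard library (the values here are illustrative, not taken from the function's internals). A plain best-rational approximation has no notion of metrical plausibility:

```python
from fractions import Fraction

# Standard best-rational approximation picks whichever fraction is
# numerically closest, with no notion of metrical plausibility:
best = Fraction(0.35).limit_denominator(20)
print(best)  # 7/20

# A musical prior would instead prefer the triplet position 1/3,
# which lies within the default 1/24 tolerance of 0.35:
print(abs(Fraction(1, 3) - Fraction(0.35)) < Fraction(1, 24))  # True
```

A priority list of smooth denominators lets the match land on 1/3 (a common triplet division) rather than the metrically implausible 7/20.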

Parameters:

  • starts (Iterable) –

    Any iterable giving the starting position of events. Each constituent start must be expressed relative to a reference value such that X.0 is the start of a unit, X.5 is the mid-point, etc. Floats are the main expected type here (as above); we seek to reverse engineer a plausible fraction from each. If any start is already an exact Fraction or int, then it stays as it is, whatever the user setting: this functionality serves to improve the accuracy of timing data; there's no question of ever reducing it, even if user settings suggest that.

  • pulse_priority_list (Optional[list], default: None ) –

    The point of this function is to encode musically common pulse values. This argument defaults to numbers under 100 with prime factors of only 2 and 3 (“3-smooth”), in increasing order. The user can define any alternative list, optionally making use of generate_n_smooth_numbers for the purpose. See notes at approximate_fraction_with_priorities. Make sure this list is exhaustive: the function will raise an error if no match is found.

  • distance_threshold (float, default: 1 / 24 ) –

    The rounding tolerance between a temporal position multiplied by the bin value and the nearest integer. This is essential when working with floats. Defaults to 1/24, but can be set to any value.

  • proportion_threshold (Optional[float], default: 0.999 ) –

    Optionally, set the proportion of events to account for. This option requires that the starts be expressed as a Counter, ordered from most to least common. The default of 0.999 means that once at least 99.9% of the source's notes are handled, we ignore the rest. This is achieved by iterating through the Counter object of values relative to the unit (e.g., 1.5 -> 0.5). This option should be chosen with care as, in this case, only the unit value and equal divisions thereof are considered.

Examples:

A simple case, expressed in different ways.

>>> tatum_1_6 = [0, 1/3, Fraction(1, 2), 1]
>>> get_tatum_from_priorities(tatum_1_6)
Fraction(1, 6)
>>> tatum_1_6 = [0, 0.333, 0.5, 1]
>>> get_tatum_from_priorities(tatum_1_6)
Fraction(1, 6)

An example of values from the BPSD dataset (Zeitler et al.).

>>> from amads.time.meter import profiles
>>> bpsd_Op027No1 = profiles.BPSD().op027No1_01 # /16 divisions of the measure and /12 too (from m.48). Tatum 1/48
>>> get_tatum_from_priorities(bpsd_Op027No1, distance_threshold=1/24) # proportion_threshold=0.999
Fraction(1, 48)

Change the distance_threshold

>>> get_tatum_from_priorities(bpsd_Op027No1, distance_threshold=1/6) # proportion_threshold=0.999
Fraction(1, 12)

Change the proportion_threshold:

>>> get_tatum_from_priorities(bpsd_Op027No1, distance_threshold=1/24, proportion_threshold=0.5)
Fraction(1, 24)
>>> get_tatum_from_priorities(bpsd_Op027No1, distance_threshold=1/24, proportion_threshold=0.9)
Fraction(1, 48)

This also works without any floats (and therefore, no priorities needed)

>>> get_tatum_from_priorities([1, 3])
Fraction(1, 1)
>>> get_tatum_from_priorities([0, 3])
Fraction(3, 1)
>>> get_tatum_from_priorities([0, Fraction(1, 3), Fraction(4, 6)])
Fraction(1, 3)
>>> get_tatum_from_priorities([28.0, 29.0, 29.5, 30.0, 32.0, 33.0, 34.0, 36.0, 38.0, 40.0])
Fraction(1, 2)
Source code in amads/time/meter/grid.py
def get_tatum_from_priorities(
    starts: Iterable,
    pulse_priority_list: Optional[list] = None,
    distance_threshold: float = 1 / 24,
    proportion_threshold: Optional[float] = 0.999,
) -> Fraction:
    """
    Estimate metrical positions from floats.

    This function serves cases where temporal position values are defined
    relative to some origin, such as the time elapsed since:

    - the start of a piece (or section) in quarter notes (or some other
        consistent symbolic value)
    - the start of a measure (or other container), assuming those measures
        are of a constant duration.

    Use cases include the attempted retrieval of true metrical
    positions (fractions) from rounded versions thereof (floats).
    See also notes at the top of this module
    for why standard algorithms fail at this task in a musical setting.

    This function serves those common cases where there is a need to balance
    capturing event positions as accurately as possible against
    introducing excessive complexity to account for a few anomalous notes.
    Most importantly, it enables the explicit prioritisation of common pulse
    divisions: the defaults prioritise division by 16 over division by 15, for example.


    Parameters
    ----------
    starts
        Any iterable giving the starting position of events.
        Each constituent start must be expressed relative to a reference value such that
        X.0 is the start of a unit,
        X.5 is the mid-point, etc.
        Floats are the main expected type here (as above); we seek to reverse engineer a plausible fraction from each.
        If any start is already an exact Fraction or int, then it stays as it is, whatever the user setting:
        this functionality serves to improve the accuracy of timing data; there's no question of ever reducing it,
        even if user settings suggest that.
    pulse_priority_list
        The point of this function is to encode musically common pulse values.
        This argument defaults to numbers under 100 with prime
        factors of only 2 and 3 (“3-smooth”), in increasing order.
        The user can define any alternative list, optionally making use of
        `generate_n_smooth_numbers` for the purpose.
        See notes at `approximate_fraction_with_priorities`.
        Make sure this list is exhaustive: the function will raise an error if no match is found.
    distance_threshold
        The rounding tolerance between a temporal position multiplied by
        the bin value and the nearest integer.
        This is essential when working with floats.
        Defaults to 1/24, but can be set to any value.
    proportion_threshold
        Optionally, set the proportion of events to account for.
        This option requires that the `starts` be expressed as a Counter,
        ordered from most to least common. The default of 0.999 means that
        once at least 99.9% of the source's notes are handled, we ignore the rest.
        This is achieved by iterating through the Counter object of values relative
        to the unit (e.g., 1.5 -> 0.5).
        This option should be chosen with care as, in this case,
        only the unit value and equal divisions thereof are considered.

    Examples
    --------

    A simple case, expressed in different ways.

    >>> tatum_1_6 = [0, 1/3, Fraction(1, 2), 1]
    >>> get_tatum_from_priorities(tatum_1_6)
    Fraction(1, 6)

    >>> tatum_1_6 = [0, 0.333, 0.5, 1]
    >>> get_tatum_from_priorities(tatum_1_6)
    Fraction(1, 6)

    An example of values from the BPSD dataset (Zeitler et al.).

    >>> from amads.time.meter import profiles
    >>> bpsd_Op027No1 = profiles.BPSD().op027No1_01 # /16 divisions of the measure and /12 too (from m.48). Tatum 1/48
    >>> get_tatum_from_priorities(bpsd_Op027No1, distance_threshold=1/24) # proportion_threshold=0.999
    Fraction(1, 48)

    Change the `distance_threshold`
    >>> get_tatum_from_priorities(bpsd_Op027No1, distance_threshold=1/6) # proportion_threshold=0.999
    Fraction(1, 12)

    Change the `proportion_threshold`:
    >>> get_tatum_from_priorities(bpsd_Op027No1, distance_threshold=1/24, proportion_threshold=0.5)
    Fraction(1, 24)

    >>> get_tatum_from_priorities(bpsd_Op027No1, distance_threshold=1/24, proportion_threshold=0.9)
    Fraction(1, 48)

    This also works without any floats (and therefore, no priorities needed)

    >>> get_tatum_from_priorities([1, 3])
    Fraction(1, 1)

    >>> get_tatum_from_priorities([0, 3])
    Fraction(3, 1)

    >>> get_tatum_from_priorities([0, Fraction(1, 3), Fraction(4, 6)])
    Fraction(1, 3)

    >>> get_tatum_from_priorities([28.0, 29.0, 29.5, 30.0, 32.0, 33.0, 34.0, 36.0, 38.0, 40.0])
    Fraction(1, 2)
    """
    floats, ints_fractions = [], []
    for num in starts:

        if num < 0:
            raise ValueError(
                f"All `start` must be greater than or equal to zero: fail on {num}."
            )

        if isinstance(num, float):
            if is_genuine_float(num):
                floats.append(num)
            else:
                assert int(num) == num
                ints_fractions.append(int(num))

        else:
            assert isinstance(num, (int, Fraction))
            ints_fractions.append(num)

    working_gcd = fraction_gcd(ints_fractions)
    if len(floats) == 0:
        return working_gcd  # No further action
    else:
        pulses_needed = [working_gcd]

    if not 0.0 < distance_threshold < 1.0:
        raise ValueError(
            "The `distance_threshold` tolerance must be between 0 and 1."
        )

    if pulse_priority_list is None:
        pulse_priority_list = generate_n_smooth_numbers(
            invert=True
        )  # 1, 1/2, 1/3, ...
    else:
        if not isinstance(pulse_priority_list, list):
            raise ValueError("The `pulse_priority_list` must be a list.")
        for i in pulse_priority_list:
            if not isinstance(i, Fraction):
                raise ValueError(
                    "The `pulse_priority_list` must consist entirely of Fraction objects "
                    "(which can include integers expressed as Fractions such as `Fraction(2, 1)`)."
                )
            if i <= 0:
                raise ValueError(
                    "The `pulse_priority_list` items must be positive."
                )

    use_proportion = proportion_threshold is not None
    if use_proportion:
        if not 0.0 < proportion_threshold < 1.0:
            raise ValueError(
                "When used (not `None`), the `proportion_threshold` must be between 0 and 1."
            )
        total = len(starts)
        cumulative_count = len(ints_fractions) / total
        starts = starts_to_int_relative_counter(floats)

    for x in floats:
        if (x > 0) and (
            approximate_pulse_match_with_priority_list(
                x,
                pulse_priority_list=pulses_needed,  # Try those we're committed to first
                distance_threshold=distance_threshold,
            )
            is None
        ):  # No fit among those we have, try other user-permitted alternatives.
            new_pulse = approximate_pulse_match_with_priority_list(
                x,
                pulse_priority_list=pulse_priority_list,
                distance_threshold=distance_threshold,
            )
            if new_pulse is not None:
                pulses_needed.append(new_pulse)
            else:  # No fit among user-permitted alternatives.
                raise ValueError(
                    f"No match found for time point {x}, with the given arguments. "
                    "Try relaxing the `distance_threshold` or expanding the `pulse_priority_list`."
                )

        if use_proportion:
            cumulative_count += starts[x] / total
            if cumulative_count > proportion_threshold:
                break

    return fraction_gcd(pulses_needed)

profiles

Profiles of metrical position usage provided or deduced from the literature.

See the code for details. Data includes:

  • WorldSample16
    • shiko
    • son
    • rumba
    • soukous
    • gahu
    • bossa_nova
  • WorldSample12
    • soli
    • tambú
    • bembé
    • bembé_2
    • yoruba
    • tonada
    • asaadua
    • sorsonet
    • bemba
    • ashanti
  • BPSD (Beethoven piano sonata dataset)
    • op002No2_01
    • op054_01
    • op027No1_01
    • op049No1_01
    • op013_01
    • op101_01
    • op111_01
    • op031No2_01
    • op014No1_01
    • op010No2_01
    • op007_01
    • op078_01
    • op109_01
    • op081a_01
    • op028_01
    • op002No3_01
    • op010No1_01
    • op090_01
    • op031No1_01
    • op014No2_01
    • op010No3_01
    • op031No3_01
    • op110_01
    • op026_01
    • op049No2_01
    • op002No1_01
    • op027No2_01
    • op057_01
    • op053_01
    • op022_01
    • op106_01
    • op079_01
    • all

Author: Mark Gotham

Classes

_MeterProfile dataclass

_MeterProfile(name: str = '', literature: str = '', about: str = '')

This is the base class for all meter profiles.

Attributes:

  • name (str) – the name of the profile
  • literature (str) – citations for the profile in the literature
  • about (str) – a longer description of the profile
Functions
__getitem__
__getitem__(key: str)

This is added for (some) backwards compatibility when these objects were dictionaries. It means we can still access class attributes using bracket notation.

Examples:

>>> bpsd = BPSD()
>>> bpsd["name"]
'Beethoven piano sonata dataset'
Source code in amads/time/meter/profiles.py
def __getitem__(self, key: str):
    """This is added for (some) backwards compatibility when these objects were dictionaries.
    It means we can still access class attributes using bracket notation.

    Examples
    --------
    >>> bpsd = BPSD()
    >>> bpsd["name"]
    'Beethoven piano sonata dataset'
    """
    try:
        return getattr(self, key)
    except AttributeError:
        raise AttributeError(
            f"Meter profile '{self.__str__()}' has no attribute '{key}'"
        )

representations

This module serves to map out metrical hierarchies in a number of different ways and to express the relationship between the notes in and the hierarchy of a metrical cycle.

Uses include identifying notes that traverse metrical levels, for analysis (e.g., as a cycle of syncopation) and notation (e.g., re-notating to reflect the within-cycle notational conventions).

Author: Mark Gotham


StartTimeHierarchy

StartTimeHierarchy(
    start_hierarchy: list[list], names: Optional[dict] = None
)

Encoding metrical structure as a hierarchy of start times.

A representation of metrical levels in terms of starts expressed by quarter length from the start of the cycle.

Parameters:

  • start_hierarchy (list[list]) –

    Users can specify the start_hierarchy directly and completely from scratch. Use this for advanced, non-standard metrical structures including those without 2-/3- grouping, or even nested hierarchies, as well as for (optionally) encoding micro-timing directly into the metrical structure. The only “well-formed” criteria we expect are use of 0.0 and full cycle length at the top level, and presence of all timepoints from one level in each subsequent level. For creating this information from pulse lengths, time signatures, and more see the to_start_hierarchy methods on those classes.

  • names (Optional[dict], default: None ) –

    Optionally create a dict mapping temporal positions to names. Currently, this supports one textual value per temporal position (key), e.g., {0.0: "ta", 1.0: "ka", 2.0: "di", 3.0: "mi"}.

Source code in amads/time/meter/representations.py
def __init__(
    self,
    start_hierarchy: list[list],
    names: Optional[dict] = None,
):
    self.start_hierarchy = start_hierarchy
    self.cycle_length = self.start_hierarchy[0][-1]
    self.pulse_lengths = None

    if names:
        for key in names:
            assert isinstance(key, float)
            assert isinstance(names[key], str)
    self.names = names

Functions

coincident_pulse_list

coincident_pulse_list(granular_pulse: float) -> list

Create a flat list setting out the number of intersecting pulses at each successive position in a metrical cycle.

For example, the output [4, 1, 2, 1, 3, 1, 2, 1] refers to a base pulse unit of 1, with additional pulse streams accenting every 2nd, 4th, and 8th position.
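The counting this describes can be sketched independently of the class (a standalone illustration with hypothetical variable names, not the library's own code):

```python
# Count how many hierarchy levels contain each granular position.
hierarchy = [[0.0, 4.0], [0.0, 2.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]]
granular_pulse = 1.0
cycle_length = hierarchy[0][-1]
positions = [granular_pulse * i for i in range(int(cycle_length / granular_pulse))]
counts = [sum(level.count(p) for level in hierarchy) for p in positions]
print(counts)  # [3, 1, 2, 1]
```

Position 0.0 appears on all three levels (count 3), 2.0 on two, and 1.0 and 3.0 only on the fastest level.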

Parameters:

  • granular_pulse (float) –

    The pulse value of the fastest level to consider e.g., 1, or 0.25.

Examples:

You can currently set the granular_pulse value to anything (this may change). For instance, in the pair of examples below, first we have a granular_pulse that's present in the input, and then a case using a faster level that's not present (this simply pads the data out):

>>> hierarchy = StartTimeHierarchy([[0.0, 4.0], [0.0, 2.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]])
>>> hierarchy.coincident_pulse_list(granular_pulse=1)
[3, 1, 2, 1]

Now, changing the granular_pulse for a bit of over-sampling:

>>> hierarchy.coincident_pulse_list(granular_pulse=0.5)
[3, 0, 1, 0, 2, 0, 1, 0]
Source code in amads/time/meter/representations.py
def coincident_pulse_list(
    self,
    granular_pulse: float,
) -> list:
    """
    Create a flat list setting out the
    number of intersecting pulses at each successive position in a metrical cycle.

    For example,
    the output [4, 1, 2, 1, 3, 1, 2, 1]
    refers to a base pulse unit of 1,
    with additional pulse streams accenting every 2nd, 4th, and 8th position.


    Parameters
    --------
    granular_pulse
        The pulse value of the fastest level to consider e.g., 1, or 0.25.

    Examples
    --------

    You can currently set the `granular_pulse` value to anything (this may change).
    For instance, in the pair of examples below,
    first we have a `granular_pulse` that's present in the input,
    and then a case using a faster level that's not present (this simply pads the data out):

    >>> hierarchy = StartTimeHierarchy([[0.0, 4.0], [0.0, 2.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]])
    >>> hierarchy.coincident_pulse_list(granular_pulse=1)
    [3, 1, 2, 1]

    Now, changing the `granular_pulse` for a bit of over-sampling:

    >>> hierarchy.coincident_pulse_list(granular_pulse=0.5)
    [3, 0, 1, 0, 2, 0, 1, 0]

    """
    cycle_length = self.start_hierarchy[0][-1]

    for level in self.start_hierarchy:
        assert level[-1] == cycle_length

    steps = int(cycle_length / granular_pulse)
    granular_level = [granular_pulse * count for count in range(steps)]

    def count_instances(nested_list, target):
        return sum([sublist.count(target) for sublist in nested_list])

    coincidences = []
    for target in granular_level:
        coincidences.append(count_instances(self.start_hierarchy, target))

    return coincidences

to_pulse_lengths

to_pulse_lengths()

Check if levels have a regular pulse and if so, return the pulse length value.

Returns:

  • list

    Returns a list of pulse values corresponding to the start hierarchy levels, of the same length. Where a level is not regular, the corresponding entry is None.

Examples:

>>> hierarchy = StartTimeHierarchy([[0.0, 4.0], [0.0, 2.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]])
>>> hierarchy.to_pulse_lengths()
>>> hierarchy.pulse_lengths
[4.0, 2.0, 1.0]
>>> uneven = StartTimeHierarchy([[0.0, 4.0], [0.0, 3.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]])
>>> uneven.to_pulse_lengths()
>>> uneven.pulse_lengths
[4.0, None, 1.0]
Source code in amads/time/meter/representations.py
def to_pulse_lengths(self):
    """
    Check if levels have a regular pulse and if so, return the pulse length value.

    Returns
    -------
    list
        Returns a list of pulse values corresponding to the start hierarchy levels, of the same length.
        Where a level is not regular, the corresponding entry is None.

    Examples
    --------

    >>> hierarchy = StartTimeHierarchy([[0.0, 4.0], [0.0, 2.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]])
    >>> hierarchy.to_pulse_lengths()
    >>> hierarchy.pulse_lengths
    [4.0, 2.0, 1.0]

    >>> uneven = StartTimeHierarchy([[0.0, 4.0], [0.0, 3.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]])
    >>> uneven.to_pulse_lengths()
    >>> uneven.pulse_lengths
    [4.0, None, 1.0]

    """

    def test_one(level: list):
        diffs = set(
            [level[i + 1] - level[i] for i in range(len(level) - 1)]
        )
        if len(diffs) > 1:
            return None
        return float(list(diffs)[0])

    self.pulse_lengths = [test_one(level) for level in self.start_hierarchy]

add_faster_levels

add_faster_levels(minimum_beat_type: int = 64)

Recursively add faster levels down to the minimum_beat_type value. The minimum_beat_type is subject to the same constraints as the beat_types ("denominators"), i.e., powers of 2 (1, 2, 4, 8, 16, 32, 64, ...). The default of 64 corresponds to 64th notes.
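The exponent arithmetic involved can be sketched as follows (a simplified illustration, assuming a pulse length of 4 / beat_type in quarter notes, consistent with the cycle-length formula used elsewhere in this module):

```python
import math

# From an existing fastest beat type of 4 (quarter notes) down to
# minimum_beat_type = 16, the new levels are beat types 8 and 16,
# i.e., pulse lengths 0.5 and 0.25 in quarter notes.
fastest_beat_type, minimum_beat_type = 4, 16
new_beat_types = [
    2 ** x
    for x in range(int(math.log2(fastest_beat_type)) + 1,
                   int(math.log2(minimum_beat_type)) + 1)
]
new_pulse_lengths = [4 / bt for bt in new_beat_types]
print(new_beat_types, new_pulse_lengths)  # [8, 16] [0.5, 0.25]
```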

Parameters:

  • minimum_beat_type (int, default: 64 ) –

    Recursively create further levels down to this value. Must be power of two. Defaults to 64 for 64th notes.

Raises:

  • ValueError

    if the currently fastest level of a starts_hierarchy is not periodic, or if either of the fastest level or minimum_beat_type are not powers of 2. Set the starts_hierarchy manually in these non-standard cases.

Examples:

>>> hierarchy = StartTimeHierarchy([[0.0, 4.0], [0.0, 2.0, 4.0]])
>>> hierarchy.start_hierarchy
[[0.0, 4.0], [0.0, 2.0, 4.0]]
>>> hierarchy.to_pulse_lengths()
>>> hierarchy.pulse_lengths
[4.0, 2.0]
>>> hierarchy.add_faster_levels(minimum_beat_type=4)
>>> hierarchy.start_hierarchy
[[0.0, 4.0], [0.0, 2.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]]
>>> hierarchy.pulse_lengths
[4.0, 2.0, 1.0]
>>> hierarchy.add_faster_levels(minimum_beat_type=8)
>>> hierarchy.start_hierarchy[-1]
[0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
>>> len(hierarchy.start_hierarchy)
4
>>> hierarchy.pulse_lengths
[4.0, 2.0, 1.0, 0.5]
Source code in amads/time/meter/representations.py
def add_faster_levels(self, minimum_beat_type: int = 64):
    """
    Recursively add faster levels down to the `minimum_beat_type` value.
    The `minimum_beat_type` is subject to the same constraints as the `beat_types` ("denominators"),
    i.e., powers of 2 (1, 2, 4, 8, 16, 32, 64, ...).
    The default of 64 corresponds to 64th notes.

    Parameters
    ----------
    minimum_beat_type
        Recursively create further levels down to this value.
        Must be power of two.
        Defaults to 64 for 64th notes.

    Raises
    ------
    ValueError
        if the currently fastest level of a `starts_hierarchy` is not periodic,
        or if either of the fastest level or `minimum_beat_type` are not powers
        of 2. Set the `starts_hierarchy` manually in these non-standard cases.

    Examples
    --------
    >>> hierarchy = StartTimeHierarchy([[0.0, 4.0], [0.0, 2.0, 4.0]])
    >>> hierarchy.start_hierarchy
    [[0.0, 4.0], [0.0, 2.0, 4.0]]

    >>> hierarchy.to_pulse_lengths()
    >>> hierarchy.pulse_lengths
    [4.0, 2.0]

    >>> hierarchy.add_faster_levels(minimum_beat_type=4)
    >>> hierarchy.start_hierarchy
    [[0.0, 4.0], [0.0, 2.0, 4.0], [0.0, 1.0, 2.0, 3.0, 4.0]]

    >>> hierarchy.pulse_lengths
    [4.0, 2.0, 1.0]

    >>> hierarchy.add_faster_levels(minimum_beat_type=8)
    >>> hierarchy.start_hierarchy[-1]
    [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]

    >>> len(hierarchy.start_hierarchy)
    4

    >>> hierarchy.pulse_lengths
    [4.0, 2.0, 1.0, 0.5]

    """
    self.to_pulse_lengths()
    assert self.pulse_lengths is not None
    fastest = self.pulse_lengths[-1]
    if fastest is None:
        raise ValueError(
            "Fastest level is not regular. Use case unsupported."
        )
    if not is_non_negative_integer_power_of_two(
        switch_pulse_length_beat_type(
            fastest
        )  # from pulse length to beat type
    ):
        raise ValueError(
            f"Fastest level ({fastest}) is not a power of 2. Use case unsupported."
        )
    if not is_non_negative_integer_power_of_two(minimum_beat_type):
        raise ValueError(
            f"The `minimum_beat_type` ({minimum_beat_type}) is not a power of 2. Use case unsupported."
        )

    fastest_beat_type = switch_pulse_length_beat_type(fastest)  # TODO
    fastest_beat_type_exponent = int(math.log2(fastest_beat_type))
    minimum_beat_type_exponent = int(math.log2(minimum_beat_type))

    new_beat_types = [
        2**x
        for x in range(
            fastest_beat_type_exponent + 1, minimum_beat_type_exponent + 1
        )
    ]
    new_pulses = [
        switch_pulse_length_beat_type(beat_type)
        for beat_type in new_beat_types
    ]
    self.pulse_lengths += new_pulses
    self.pulse_lengths = [x for x in self.pulse_lengths if x is not None]
    self.pulse_lengths = sorted(
        list(set(self.pulse_lengths)), key=abs, reverse=True
    )
    fake_meter = PulseLengths(
        pulse_lengths=new_pulses, cycle_length=self.cycle_length
    )
    self.start_hierarchy += fake_meter.to_start_hierarchy()

TimeSignature

TimeSignature(
    beats: Optional[tuple[int]] = None,
    beat_type: Optional[int] = None,
    as_string: Optional[str] = None,
)

Represent the notational time signature object.

TODO consider aligning and merging with basics.TimeSignature, this PR shows some of how that would work.

Parameters:

  • beats (Optional[tuple[int]], default: None ) –

    The "numerator" of the time signature: beats per cycle, given as a number (int or Fraction) or a list thereof.

  • beat_type (Optional[int], default: None ) –

    The so-called "denominator" of the time signature: a whole-number power of 2 (1, 2, 4, 8, 16, 32, 64, ...). No so-called "irrational" meters yet (e.g., 2/3), sorry!

  • as_string (Optional[str], default: None ) –

    An alternative way of creating this object from a string representation. See notes at TimeSignature.from_string.

Source code in amads/time/meter/representations.py
def __init__(
    self,
    beats: Optional[tuple[int]] = None,
    beat_type: Optional[int] = None,
    # delta: Optional[float] = 0,  # TODO if merging with basics
    as_string: Optional[str] = None,
):
    self.beats = beats
    self.one_beat_value = None
    self.beat_type = beat_type
    self.as_string = as_string
    if (self.beats is None) and (self.beat_type is None):
        self.from_string()
    self.check_valid()

    self.cycle_length = sum(self.beats) * 4 / self.beat_type
    self.pulses = None
    self.get_pulses()

Functions

from_string

from_string()

Given a signature string, extract the constituent parts and create an object. The string must take the form <beat>/<beat_type> with exactly one “/” separating the two (spaces are ignored). The string does not change.

The <beat> (“numerator”) part may be a number (including 5 and 7 which are supported) or more than one number separated by the “+” symbol. For example, when encoding “5/4”, use the total value only to avoid segmentation above the denominator level (“5/4”) or the X+Y form to explicitly distinguish between “2+3” and “3+2”. I.e., “5/” time signatures have no 3+2 or 2+3 division by default. See examples on TimeSignature.to_starts_hierarchy.

Finally, although we support and provide defaults for time signatures in the form “2+3/8”, there is no such support for more than one “/” (i.e., the user must build cases like “4/4 + 3/8” explicitly according to how they see it).
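The splitting rule described above can be sketched directly (this mirrors the documented parsing, not the full class):

```python
# Mirror of the documented rule: ignore spaces, split once on "/",
# then split the "numerator" on "+".
s = "2+3/8".replace(" ", "")
beats_str, beat_type_str = s.split("/")
beats = tuple(int(x) for x in beats_str.split("+"))
beat_type = int(beat_type_str)
print(beats, beat_type)  # (2, 3) 8
```

A plain "5/8" would yield `beats == (5,)`, which is why "2+3" and "3+2" must be spelled out explicitly to get those groupings.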

Examples:

>>> ts_4_4 = TimeSignature(as_string="4/4")
>>> ts_4_4.beats # Tuple of one element
(4,)
>>> ts_4_4.beat_type
4
Source code in amads/time/meter/representations.py
def from_string(self):
    """
    Given a signature string, extract the constituent parts and create an object.
    The string must take the form `<beat>/<beat_type>`
    with exactly one “/” separating the two (spaces are ignored).
    The string does not change.

    The `<beat>` (“numerator”) part may be a number (including 5 and 7 which
    are supported) or more than one number separated by the “+” symbol.
    For example, when encoding “5/4”, use the total value only to avoid
    segmentation above the denominator level (“5/4”)
    or the X+Y form to explicitly distinguish between “2+3” and “3+2”.
    I.e., “5/” time signatures have no 3+2 or 2+3 division by default.
    See examples on `TimeSignature.to_starts_hierarchy`.

    Finally, although we support and provide defaults for time signatures
    in the form “2+3/8”, there is no such support for more than one “/”
    (i.e., the user must build cases like “4/4 + 3/8” explicitly
    according to how they see it).

    Examples
    --------

    >>> ts_4_4 = TimeSignature(as_string="4/4")
    >>> ts_4_4.beats # Tuple of one element
    (4,)

    >>> ts_4_4.beat_type
    4

    """
    beats, beat_type = self.as_string.split("/")

    self.beats = tuple([int(x) for x in beats.split("+")])
    self.beat_type = int(beat_type)

check_valid

check_valid()

Check the validity of the input.

  • .beats must be an integer or a list/tuple thereof.
  • .beat_type must be a single integer power of two.
Source code in amads/time/meter/representations.py
def check_valid(self):
    """
    Check the validity of the input.

     - `.beats` must be an integer or a list/tuple thereof.
     - `.beat_type` must be a single integer power of two.
    """
    # beats  # TODO this check may be overdoing it
    if self.beats:
        assert isinstance(self.beats, tuple)
        for b in self.beats:
            assert isinstance(b, int)

    # beat_type  # TODO this is the part we want to actively check
    if not is_non_negative_integer_power_of_two(self.beat_type):
        raise ValueError(
            f"Beat type set as {self.beat_type} is invalid: must be a non-negative integer power of 2."
        )

get_pulses

get_pulses()

Collect the regular pulses present in this time signature (stored as a sorted list).

This will include the full cycle and the beat type (“denominator”) value, e.g., in “3/4” the pulse lengths are 3.0 (full cycle) and 1.0 (beat type). If there are other regular levels between the two, they are added only if the user has first called fill_2s_3s (it does not run by default). For instance, splitting 4 into 2+2 is a user choice (see fill_2s_3s). With this split, “4/4” has pulse lengths of 4.0 (full cycle) and 1.0 (beat type) as well as 2.0, given that the two 2s form a single regular level. In “2+3/4” there is no such 2.0 (or 3.0) regularity, and so no pulse is created for that level.

Examples:

>>> ts_4_4 = TimeSignature(as_string="4/4")
>>> ts_4_4.pulses
[4.0, 1.0]
>>> ts_4_4.fill_2s_3s()
>>> ts_4_4.pulses
[4.0, 2.0, 1.0]
>>> ts_6_8 = TimeSignature(as_string="6/8")
>>> ts_6_8.pulses
[3.0, 0.5]
>>> ts_6_8.fill_2s_3s()
>>> ts_6_8.pulses
[3.0, 1.5, 0.5]
Source code in amads/time/meter/representations.py
def get_pulses(self):
    """
    Collect the regular pulses present in this time signature (stored as a sorted list).

    This will include the full cycle and beat type (“denominator”) value,
    e.g., in “3/4” the pulse lengths are 3.0 (full cycle) and 1.0 (beat type).
    If there are other regular levels between the two, they are added
    only if the user has first called `fill_2s_3s` (it does not run by default).
    For instance, splitting 4 into 2+2 is a user choice (see `fill_2s_3s`).
    With this split, “4/4” has pulse lengths of 4.0 (full cycle)
    and 1.0 (beat type) as well as 2.0, given that the two 2s form a
    single regular level.
    In “2+3/4” there is no such 2.0 (or 3.0) regularity, and so no pulse is
    created for that level.

    Examples
    --------
    >>> ts_4_4 = TimeSignature(as_string="4/4")
    >>> ts_4_4.pulses
    [4.0, 1.0]

    >>> ts_4_4.fill_2s_3s()
    >>> ts_4_4.pulses
    [4.0, 2.0, 1.0]

    >>> ts_6_8 = TimeSignature(as_string="6/8")
    >>> ts_6_8.pulses
    [3.0, 0.5]

    >>> ts_6_8.fill_2s_3s()
    >>> ts_6_8.pulses
    [3.0, 1.5, 0.5]

    """
    pulses = [float(self.cycle_length), 4 / self.beat_type]

    first_beat_to_pulse = self.beats[0] * 4 / self.beat_type

    if len(self.beats) == 1:  # one beat type
        pulses.append(first_beat_to_pulse)
        self.one_beat_value = self.beats[0]
    elif len(self.beats) > 1:  # 2+ beats
        if (
            len(set(self.beats)) == 1
        ):  # duplicate of the same e.g., (2, 2), so still one consistent pulse.
            self.one_beat_value = self.beats[0]
            pulses.append(first_beat_to_pulse)

    self.pulses = sorted(list(set(pulses)), key=abs, reverse=True)
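The pulse derivation can be sketched as a standalone function; `pulses_for` is hypothetical, and it assumes the cycle length is the sum of the beats converted to quarter lengths:

```python
def pulses_for(beats: tuple[int, ...], beat_type: int) -> list[float]:
    """Regular pulses: full cycle, beat-type pulse, and the beat pulse if uniform."""
    cycle_length = sum(beats) * 4 / beat_type       # in quarter lengths
    pulses = {float(cycle_length), 4 / beat_type}
    if len(set(beats)) == 1:                        # one consistent beat value
        pulses.add(beats[0] * 4 / beat_type)
    return sorted(pulses, reverse=True)             # largest level first

print(pulses_for((4,), 4))    # [4.0, 1.0]
print(pulses_for((6,), 8))    # [3.0, 0.5]
print(pulses_for((2, 3), 4))  # [5.0, 1.0] -- no regular beat level for 2+3
```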

fill_2s_3s

fill_2s_3s()

Optionally, add pulse values to follow the conventions of the time signatures.

Enforcing 2- and 3-grouping, this only applies to cases with a single beat value in the "numerator". For instance, given a “4/4” signature, this method will add the half-cycle (pulse value 2.0); given a “6/8”, it will again add the half-cycle (pulse value 1.5); and given a “12/8”, it will add both the half- and quarter-cycle (pulse values 3.0 and 1.5).

This functionality is factored out and does not run by default. Even if this runs, the original time signature string is unchanged, as is the beats attribute.

Examples:

>>> ts_4_4 = TimeSignature(as_string="4/4")
>>> ts_4_4.pulses
[4.0, 1.0]
>>> ts_4_4.fill_2s_3s()
>>> ts_4_4.pulses
[4.0, 2.0, 1.0]
>>> ts_6_8 = TimeSignature(as_string="6/8")
>>> ts_6_8.pulses
[3.0, 0.5]
>>> ts_6_8.fill_2s_3s()
>>> ts_6_8.pulses
[3.0, 1.5, 0.5]
Source code in amads/time/meter/representations.py
def fill_2s_3s(self):
    """
    Optionally, add pulse values to follow the conventions of the time signatures.

    Enforcing 2- and 3-grouping, this only applies to cases with a
    single beat value in the "numerator". For instance,
    given a “4/4” signature, this method will add the half-cycle (pulse value 2.0);
    given a “6/8”, it will again add the half-cycle (pulse value 1.5);
    and given a “12/8”, it will add both the half- and quarter-cycle
    (pulse values 3.0 and 1.5).

    This functionality is factored out and does not run by default.
    Even if this runs, the original time signature string is unchanged,
    as is the `beats` attribute.

    Examples
    --------
    >>> ts_4_4 = TimeSignature(as_string="4/4")
    >>> ts_4_4.pulses
    [4.0, 1.0]

    >>> ts_4_4.fill_2s_3s()
    >>> ts_4_4.pulses
    [4.0, 2.0, 1.0]

    >>> ts_6_8 = TimeSignature(as_string="6/8")
    >>> ts_6_8.pulses
    [3.0, 0.5]

    >>> ts_6_8.fill_2s_3s()
    >>> ts_6_8.pulses
    [3.0, 1.5, 0.5]

    """
    metrical_mappings = {4: [2], 6: [3], 9: [3], 12: [6, 3]}

    if self.one_beat_value is not None:
        if self.one_beat_value in metrical_mappings:
            self.pulses += [
                x * 4 / self.beat_type
                for x in metrical_mappings[self.one_beat_value]
            ]
    self.pulses = sorted(list(set(self.pulses)), key=abs, reverse=True)
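The same mapping can be exercised standalone; `fill_pulses` is a hypothetical helper reproducing the expansion for illustration:

```python
# Conventional subdivisions by beat value, as in the method above.
METRICAL_MAPPINGS = {4: [2], 6: [3], 9: [3], 12: [6, 3]}

def fill_pulses(pulses: list[float], one_beat_value: int, beat_type: int) -> list[float]:
    """Add the conventional 2-/3-grouped levels for a uniform beat value."""
    extra = [x * 4 / beat_type for x in METRICAL_MAPPINGS.get(one_beat_value, [])]
    return sorted(set(pulses) | set(extra), reverse=True)

print(fill_pulses([4.0, 1.0], 4, 4))  # [4.0, 2.0, 1.0] -- "4/4" gains the half-cycle
print(fill_pulses([3.0, 0.5], 6, 8))  # [3.0, 1.5, 0.5] -- "6/8" gains the half-cycle
```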

to_start_hierarchy

to_start_hierarchy() -> list

Create a start hierarchy for almost any time signature

(with constraints as noted in the top level class description and in the .from_string method). See below for several examples of how this handles specific time signatures and related assumptions, and note the effect of running fill_2s_3s().

Returns:

  • list

    Returns a list of lists with start positions by level.

Examples:

>>> ts_4_4 = TimeSignature(as_string="4/4")
>>> ts_4_4.pulses
[4.0, 1.0]
>>> test_1 = ts_4_4.to_start_hierarchy()
>>> test_1[0]
[0.0, 4.0]
>>> test_1[1]
[0.0, 1.0, 2.0, 3.0, 4.0]
>>> ts_4_4.fill_2s_3s()
>>> ts_4_4.pulses
[4.0, 2.0, 1.0]
>>> test_2 = ts_4_4.to_start_hierarchy()
>>> test_2[0]
[0.0, 4.0]
>>> test_2[1]
[0.0, 2.0, 4.0]
>>> test_2[2]
[0.0, 1.0, 2.0, 3.0, 4.0]
>>> ts_2_2 = TimeSignature(as_string="2/2")
>>> ts_2_2.pulses
[4.0, 2.0]
>>> test_3 = ts_2_2.to_start_hierarchy()
>>> test_3[0]
[0.0, 4.0]
>>> test_3[1]
[0.0, 2.0, 4.0]
>>> ts_2_2.fill_2s_3s() # no effect, unchanged
>>> ts_2_2.pulses
[4.0, 2.0]
>>> test_4 = ts_2_2.to_start_hierarchy()
>>> test_4[0]
[0.0, 4.0]
>>> test_4[1]
[0.0, 2.0, 4.0]
>>> ts_6_8 = TimeSignature(as_string="6/8")
>>> ts_6_8.pulses
[3.0, 0.5]
>>> test_5 = ts_6_8.to_start_hierarchy()
>>> test_5[0]
[0.0, 3.0]
>>> test_5[1]
[0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
>>> ts_6_8.fill_2s_3s()
>>> ts_6_8.pulses
[3.0, 1.5, 0.5]
>>> test_6 = ts_6_8.to_start_hierarchy()
>>> test_6[0]
[0.0, 3.0]
>>> test_6[1]
[0.0, 1.5, 3.0]
>>> test_6[2]
[0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
>>> ts_5_4 = TimeSignature(as_string="5/4")
>>> ts_5_4.pulses
[5.0, 1.0]
>>> test_7 = ts_5_4.to_start_hierarchy()
>>> test_7[0]
[0.0, 5.0]
>>> test_7[1]
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
>>> ts_2_3_4 = TimeSignature(as_string="2+3/4")
>>> ts_2_3_4.pulses # as before
[5.0, 1.0]
>>> test_8 = ts_2_3_4.to_start_hierarchy()
>>> test_8[0]
[0.0, 5.0]
>>> test_8[1]
[0.0, 2.0, 5.0]
>>> test_8[2]
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
Source code in amads/time/meter/representations.py
def to_start_hierarchy(self) -> list:
    """
    Create a start hierarchy for almost any time signature

    (with constraints as noted in the top level class description
    and in the `.from_string` method).
    See below for several examples of how this handles
    specific time signatures and related assumptions,
    and note the effect of running `fill_2s_3s()`.

    Returns
    -------
    list
        Returns a list of lists with start positions by level.

    Examples
    --------

    >>> ts_4_4 = TimeSignature(as_string="4/4")
    >>> ts_4_4.pulses
    [4.0, 1.0]

    >>> test_1 = ts_4_4.to_start_hierarchy()
    >>> test_1[0]
    [0.0, 4.0]

    >>> test_1[1]
    [0.0, 1.0, 2.0, 3.0, 4.0]

    >>> ts_4_4.fill_2s_3s()
    >>> ts_4_4.pulses
    [4.0, 2.0, 1.0]

    >>> test_2 = ts_4_4.to_start_hierarchy()
    >>> test_2[0]
    [0.0, 4.0]

    >>> test_2[1]
    [0.0, 2.0, 4.0]

    >>> test_2[2]
    [0.0, 1.0, 2.0, 3.0, 4.0]

    >>> ts_2_2 = TimeSignature(as_string="2/2")
    >>> ts_2_2.pulses
    [4.0, 2.0]

    >>> test_3 = ts_2_2.to_start_hierarchy()
    >>> test_3[0]
    [0.0, 4.0]

    >>> test_3[1]
    [0.0, 2.0, 4.0]

    >>> ts_2_2.fill_2s_3s() # no effect, unchanged
    >>> ts_2_2.pulses
    [4.0, 2.0]

    >>> test_4 = ts_2_2.to_start_hierarchy()
    >>> test_4[0]
    [0.0, 4.0]

    >>> test_4[1]
    [0.0, 2.0, 4.0]

    >>> ts_6_8 = TimeSignature(as_string="6/8")
    >>> ts_6_8.pulses
    [3.0, 0.5]

    >>> test_5 = ts_6_8.to_start_hierarchy()
    >>> test_5[0]
    [0.0, 3.0]

    >>> test_5[1]
    [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]

    >>> ts_6_8.fill_2s_3s()
    >>> ts_6_8.pulses
    [3.0, 1.5, 0.5]

    >>> test_6 = ts_6_8.to_start_hierarchy()
    >>> test_6[0]
    [0.0, 3.0]

    >>> test_6[1]
    [0.0, 1.5, 3.0]

    >>> test_6[2]
    [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]

    >>> ts_5_4 = TimeSignature(as_string="5/4")
    >>> ts_5_4.pulses
    [5.0, 1.0]

    >>> test_7 = ts_5_4.to_start_hierarchy()
    >>> test_7[0]
    [0.0, 5.0]

    >>> test_7[1]
    [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]

    >>> ts_2_3_4 = TimeSignature(as_string="2+3/4")
    >>> ts_2_3_4.pulses # as before
    [5.0, 1.0]

    >>> test_8 = ts_2_3_4.to_start_hierarchy()
    >>> test_8[0]
    [0.0, 5.0]

    >>> test_8[1]
    [0.0, 2.0, 5.0]

    >>> test_8[2]
    [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]

    """
    # 1. Basic elements: all periodic cycles from the full cycle to the `beat_type` level.
    pulses = (
        PulseLengths(  # TODO consistency wrt what is added to the class.
            pulse_lengths=self.pulses, cycle_length=self.cycle_length
        )
    )
    start_hierarchy = pulses.to_start_hierarchy()

    # 2. irregular beat layer, if applicable.
    if len(self.beats) > 1:  # not a regular pulse e.g., (2, 3)
        bp = BeatPattern(
            self.beats, self.beat_type
        )  # TODO consistency wrt what is added to the class.
        beat_starts = bp.beat_pattern_to_start_hierarchy()
        start_hierarchy.append(beat_starts)

        start_hierarchy = [
            list(i) for i in set(map(tuple, start_hierarchy))
        ]
        start_hierarchy.sort(key=len)
        # TODO consider move to StartTimeHierarchy?

    return start_hierarchy
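The deduplicate-and-sort step at the end can be seen in isolation; the levels below are written out by hand for “2+3/4” rather than generated by the class:

```python
# Hand-written levels for "2+3/4": two regular pulse levels plus the
# irregular 2+3 beat layer.
start_hierarchy = [
    [0.0, 5.0],                      # full cycle
    [0.0, 1.0, 2.0, 3.0, 4.0, 5.0],  # beat-type (quarter-note) level
    [0.0, 2.0, 5.0],                 # irregular 2+3 beat layer
]
# Deduplicate the levels, then order from fewest starts (slowest) to most.
start_hierarchy = [list(t) for t in set(map(tuple, start_hierarchy))]
start_hierarchy.sort(key=len)
print(start_hierarchy)
# [[0.0, 5.0], [0.0, 2.0, 5.0], [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]]
```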

PulseLengths

PulseLengths(
    pulse_lengths: list[float],
    cycle_length: Optional[float] = None,
    include_cycle_length: bool = True,
)

Parameters:

  • pulse_lengths (list[float]) –

    Any valid list of pulse lengths, e.g., [4, 2, 1].

  • cycle_length (Optional[float], default: None ) –

    Optional. If not provided, the cycle length is taken to be given by the longest pulse length.

  • include_cycle_length (bool, default: True ) –

    Defaults to True. If True, when converting to starts, include the full cycle length in the list.

Source code in amads/time/meter/representations.py
def __init__(
    self,
    pulse_lengths: list[float],
    cycle_length: Optional[float] = None,
    include_cycle_length: bool = True,
):
    """
    Representation of fully periodic meter centred on the constituent pulse lengths.

    Parameters
    ----------
    pulse_lengths
        Any valid list of pulse lengths, e.g., [4, 2, 1].
    cycle_length
        Optional. If not provided, the cycle length is taken to be given by the longest pulse length.
    include_cycle_length
        Defaults to True. If True, when converting to starts, include the full cycle length in the list.

    """

    self.pulse_lengths = pulse_lengths
    self.pulse_lengths.sort(reverse=True)  # largest number first

    self.cycle_length = cycle_length
    if self.cycle_length is not None:
        if pulse_lengths[0] > self.cycle_length:
            raise ValueError(
                f"The `pulse_length` {pulse_lengths[0]} is longer than the `cycle_length` ({self.cycle_length})."
            )
    else:
        self.cycle_length = float(pulse_lengths[0])

    self.start_hierarchy = None
    self.include_cycle_length = include_cycle_length

Functions

to_start_hierarchy

to_start_hierarchy(require_2_or_3_between_levels: bool = False)

Convert a list of pulse lengths into a corresponding list of lists.

Gives start positions per metrical level. All values (pulse lengths, start positions, and cycle_length) are expressed in terms of quarter length.

That is, the user provides pulse lengths for each level of a metrical hierarchy, and the algorithm expands this into a hierarchy assuming equal spacing (aka “isochrony”).

This does not work for (“nonisochronous”) pulse streams of varying duration in time signatures like 5/x, 7/x (e.g., the level of 5/4 with dotted/undotted 1/2 notes).

It is still perfectly fine to use this for the pulse streams within those meters that are regular, equally spaced (“isochronous”) (e.g., the 1/4 note level of 5/4).

The list of pulse lengths is handled internally in decreasing order, whatever the ordering in the argument.

If require_2_or_3_between_levels is True (it defaults to False), this function checks that each level is either a 2 or 3 multiple of the next.

By default, the cycle_length is taken to be the longest pulse length. Alternatively, this can be user-defined to anything as long as it is

  1. at least as long as the longest pulse and
  2. if require_2_or_3_between_levels is True then exactly 2x or 3x longer.

Parameters:

  • require_2_or_3_between_levels (bool, default: False ) –

    Defaults to False. If True, raise a ValueError in the case of this condition not being met.

Returns:

  • list

    Returns a list of lists with start positions by level.

Examples:

>>> qsl = PulseLengths(pulse_lengths=[4, 2, 1, 0.5])
>>> qsl.pulse_lengths
[4, 2, 1, 0.5]
>>> start_hierarchy = qsl.to_start_hierarchy()
>>> start_hierarchy[0]
[0.0, 4.0]
>>> start_hierarchy[1]
[0.0, 2.0, 4.0]
>>> start_hierarchy[2]
[0.0, 1.0, 2.0, 3.0, 4.0]
>>> start_hierarchy[3]
[0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
Source code in amads/time/meter/representations.py
def to_start_hierarchy(
    self,
    require_2_or_3_between_levels: bool = False,
):
    """
    Convert a list of pulse lengths into a corresponding list of lists.

    Gives start positions per metrical level.
    All values (pulse lengths, start positions, and cycle_length)
    are expressed in terms of quarter length.

    That is, the user provides pulse lengths for each level of a
    metrical hierarchy, and the algorithm expands this into a hierarchy
    assuming equal spacing (aka “isochrony”).

    This does not work for (“nonisochronous”) pulse streams of varying duration
    in time signatures like 5/x, 7/x (e.g., the level of 5/4 with
    dotted/undotted 1/2 notes).

    It is still perfectly fine to use this for the pulse streams
    within those meters that are regular, equally spaced (“isochronous”)
    (e.g., the 1/4 note level of 5/4).

    The list of pulse lengths is handled internally in decreasing order,
    whatever the ordering in the argument.

    If `require_2_or_3_between_levels` is True (it defaults to False),
    this function checks that each level is either a 2 or 3 multiple of the next.

    By default, the cycle_length is taken to be the longest pulse length.
    Alternatively, this can be user-defined to anything as long as it is

    1. at least as long as the longest pulse and
    2. if `require_2_or_3_between_levels` is True then exactly 2x or 3x
    longer.


    Parameters
    ----------
    require_2_or_3_between_levels
        Defaults to False.
        If True, raise a ValueError in the case of this condition not being met.

    Returns
    -------
    list
        Returns a list of lists with start positions by level.

    Examples
    --------

    >>> qsl = PulseLengths(pulse_lengths=[4, 2, 1, 0.5])
    >>> qsl.pulse_lengths
    [4, 2, 1, 0.5]

    >>> start_hierarchy = qsl.to_start_hierarchy()
    >>> start_hierarchy[0]
    [0.0, 4.0]

    >>> start_hierarchy[1]
    [0.0, 2.0, 4.0]

    >>> start_hierarchy[2]
    [0.0, 1.0, 2.0, 3.0, 4.0]

    >>> start_hierarchy[3]
    [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]

    """

    if require_2_or_3_between_levels:  # TODO consider refactor
        for level in range(len(self.pulse_lengths) - 1):
            if self.pulse_lengths[level] / self.pulse_lengths[
                level + 1
            ] not in [
                2,
                3,
            ]:
                raise ValueError(
                    "The proportion between consecutive levels is not 2 or 3 in "
                    f"this case: {self.pulse_lengths[level]}:{self.pulse_lengths[level + 1]}."
                )

    start_list = []

    for pulse_length in self.pulse_lengths:
        starts = self.one_pulse_to_start_hierarchy_list(pulse_length)
        start_list.append(starts)

    self.start_hierarchy = start_list
    return start_list

one_pulse_to_start_hierarchy_list

one_pulse_to_start_hierarchy_list(pulse_length: float)

Convert a single pulse length and cycle length into a list of starts. All expressed in quarter length.

Note: A maximum of 4 decimal places is hardcoded. This avoids floating point errors without needing one line of numpy (np.arange) in a module that doesn't otherwise use it. Four decimal places should be sufficient for all realistic use cases.

Parameters:

  • pulse_length (float) –

    The quarter length of the pulse (note: must be no longer than the cycle_length).

Examples:

>>> pls = PulseLengths(pulse_lengths=[4, 2, 1, 0.5], cycle_length=4)
>>> pls.pulse_lengths
[4, 2, 1, 0.5]
>>> pls.one_pulse_to_start_hierarchy_list(1)
[0.0, 1.0, 2.0, 3.0, 4.0]
>>> pls = PulseLengths(pulse_lengths=[4, 2, 1, 0.5], cycle_length=4, include_cycle_length=False)
>>> pls.one_pulse_to_start_hierarchy_list(1)
[0.0, 1.0, 2.0, 3.0]
Source code in amads/time/meter/representations.py
def one_pulse_to_start_hierarchy_list(
    self,
    pulse_length: float,
):
    """
    Convert a single pulse length and cycle length into a list of starts.
    All expressed in quarter length.

    Note:
    A maximum of 4 decimal places is hardcoded.
    This avoids floating point errors without needing one line of numpy (np.arange)
    in a module that doesn't otherwise use it.
    Four decimal places should be sufficient for all realistic use cases.

    Parameters
    --------
    pulse_length
        The quarter length of the pulse (note: must be no longer than the
        `cycle_length`).

    Examples
    --------

    >>> pls = PulseLengths(pulse_lengths=[4, 2, 1, 0.5], cycle_length=4)
    >>> pls.pulse_lengths
    [4, 2, 1, 0.5]

    >>> pls.one_pulse_to_start_hierarchy_list(1)
    [0.0, 1.0, 2.0, 3.0, 4.0]

    >>> pls = PulseLengths(pulse_lengths=[4, 2, 1, 0.5], cycle_length=4, include_cycle_length=False)
    >>> pls.one_pulse_to_start_hierarchy_list(1)
    [0.0, 1.0, 2.0, 3.0]

    """
    starts = []
    count = 0
    while count < self.cycle_length:
        starts.append(round(float(count), 4))
        count += pulse_length

    if self.include_cycle_length:
        starts.append(round(float(count), 4))

    return starts
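A minimal demonstration of the floating-point drift that the hard-coded rounding guards against:

```python
# Repeated addition of a pulse length like 0.1 drifts off the exact value:
count = 0.0
for _ in range(3):
    count += 0.1
print(count)                   # 0.30000000000000004
print(count == 0.3)            # False
print(round(count, 4))         # 0.3
print(round(count, 4) == 0.3)  # True
```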

BeatPattern

BeatPattern(beat_list: tuple[int, ...], beat_type: int)

Encoding only the part of a metrical structure identified as the beat pattern.

Parameters:

  • beat_list (tuple[int, ...]) –

    An ordered list of the beat types.

  • beat_type (int) –

    The lower value of a time signature to set the pulse value.

Source code in amads/time/meter/representations.py
def __init__(
    self,
    beat_list: tuple[int, ...],
    beat_type: int,
):

    self.beat_list = beat_list
    self.beat_type = beat_type
    self.start_time_hierarchy = self.beat_pattern_to_start_hierarchy()

Functions

beat_pattern_to_start_hierarchy

beat_pattern_to_start_hierarchy(
    include_cycle_length: bool = True,
) -> list

Converts a list of beats like [2, 2, 2], [3, 3], or indeed [6, 9] into a list of within-cycle starting positions, defined relative to the start of the cycle. The list of beats functions like the time signature's so-called “numerator”; for instance, [2, 2, 3] with the denominator 4 is a kind of 7/4. This equates to starting positions of [0.0, 2.0, 4.0, 7.0].

Parameters:

  • include_cycle_length (bool, default: True ) –

    If True (default) then each level ends with the full cycle length (i.e., the start of the next cycle).

Examples:

>>> bp = BeatPattern((2, 2, 3), 4)
>>> bp.beat_pattern_to_start_hierarchy()
[0.0, 2.0, 4.0, 7.0]
>>> bp.beat_pattern_to_start_hierarchy(include_cycle_length = False)
[0.0, 2.0, 4.0]
Source code in amads/time/meter/representations.py
def beat_pattern_to_start_hierarchy(
    self, include_cycle_length: bool = True
) -> list:
    """
    Converts a list of beats
    like [2, 2, 2]
    or [3, 3]
    or indeed
    [6, 9]
    into a list of within-cycle starting positions, as defined relative
    to the start of the cycle.
    Basically, the list of beats functions like the time signature's
    so-called “numerator”,
    so for instance, `[2, 2, 3]` with the denominator `4` is a kind of 7/4.
    This equates to starting positions of
    `[0.0, 2.0, 4.0, 7.0]`.

    Parameters
    --------
    include_cycle_length
        If True (default) then each level ends with the full cycle length
        (i.e., the start of the next cycle).

    Examples
    --------

    >>> bp = BeatPattern((2, 2, 3), 4)
    >>> bp.beat_pattern_to_start_hierarchy()
    [0.0, 2.0, 4.0, 7.0]

    >>> bp.beat_pattern_to_start_hierarchy(include_cycle_length = False)
    [0.0, 2.0, 4.0]

    """
    starts = [0.0]  # always float, always starts at zero
    count = 0
    for beat_val in self.beat_list:
        count += beat_val
        this_start = count * 4 / self.beat_type
        starts.append(this_start)

    if include_cycle_length:  # include last value
        return starts
    else:
        return starts[:-1]

is_non_negative_integer_power_of_two

is_non_negative_integer_power_of_two(n: float) -> bool

Checks if a number is 2 raised to a non-negative integer power (1, 2, 4, 8, ...).

Examples:

>>> is_non_negative_integer_power_of_two(0)
False
>>> is_non_negative_integer_power_of_two(0.5)
False
>>> is_non_negative_integer_power_of_two(1)
True
>>> is_non_negative_integer_power_of_two(2)
True
>>> is_non_negative_integer_power_of_two(3)
False
>>> is_non_negative_integer_power_of_two(4)
True
Source code in amads/time/meter/representations.py
def is_non_negative_integer_power_of_two(n: float) -> bool:
    """
    Checks if a number is 2 raised to a non-negative integer power (1, 2, 4, 8, ...).

    Examples
    --------
    >>> is_non_negative_integer_power_of_two(0)
    False

    >>> is_non_negative_integer_power_of_two(0.5)
    False

    >>> is_non_negative_integer_power_of_two(1)
    True

    >>> is_non_negative_integer_power_of_two(2)
    True

    >>> is_non_negative_integer_power_of_two(3)
    False

    >>> is_non_negative_integer_power_of_two(4)
    True
    """
    if n <= 0:  # also catches type error if non-numeric
        return False
    if not isinstance(n, int):
        if int(n) == n:
            n = int(n)
        else:
            return False
    return n > 0 and (n & (n - 1)) == 0
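The `(n & (n - 1)) == 0` expression is the standard bit trick: subtracting 1 flips the lowest set bit and everything below it, so the AND is zero exactly when `n` has a single set bit, i.e., is a power of two. A standalone sketch:

```python
def is_power_of_two(n: int) -> bool:
    # n & (n - 1) clears the lowest set bit; powers of two have
    # exactly one bit set, so the result is zero only for them.
    return n > 0 and (n & (n - 1)) == 0

print(bin(8), bin(7), 8 & 7)  # 0b1000 0b111 0
print([n for n in range(1, 17) if is_power_of_two(n)])  # [1, 2, 4, 8, 16]
```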

switch_pulse_length_beat_type

switch_pulse_length_beat_type(
    pulse_length_or_beat_type: Union[float, ndarray],
)

Switch between a pulse length and beat type (each is 4 divided by the other). Accepts numeric values or numpy arrays thereof. Note that a float of value 0 will raise a ZeroDivisionError: division by zero, but a numpy array will map any 0s to inf without error.

Examples:

>>> switch_pulse_length_beat_type(0.5)
8.0
>>> switch_pulse_length_beat_type(8)
0.5
>>> switch_pulse_length_beat_type(np.array([0.5, 8]))
array([8. , 0.5])
Source code in amads/time/meter/representations.py
def switch_pulse_length_beat_type(
    pulse_length_or_beat_type: Union[float, np.ndarray]
):
    """
    Switch between a pulse length and beat type.
    Accepts numeric values or numpy arrays thereof.
    Note that a float of value 0 will raise a
    `ZeroDivisionError: division by zero`,
    but a numpy array will map any 0s to `inf` without error.

    Examples
    --------
    >>> switch_pulse_length_beat_type(0.5)
    8.0

    >>> switch_pulse_length_beat_type(8)
    0.5

    >>> switch_pulse_length_beat_type(np.array([0.5, 8]))
    array([8. , 0.5])
    """
    return 4 / pulse_length_or_beat_type

SyncopationMetric

SyncopationMetric(path_to_score: Optional[str] = None)

The methods of this class implement syncopation metrics from the literature. These are typically based on simple data (note start times and similar).

The parameters of this class allow users to run from a score (with onsets etc. deduced from there) or directly on their own data (the necessary parameters differ slightly for each method).

Parameters:

  • path_to_score (Optional[str], default: None ) –

    Path to the score in any supported format (e.g., MusicXML). Deduce any necessary onsets, beats etc. from the score as calculated by Partitura. Warning: Partitura takes “beats” from time signatures denominators, e.g., 6/8 has 6 “beats” (not 2).

Source code in amads/time/meter/syncopation.py
def __init__(self, path_to_score: Optional[str] = None):
    """
    The methods of this class implement syncopation metrics from the
    literature. These are typically based on simple data (note start
    times and similar).

    The parameters of this class allow users to run from a score
    (with onsets etc. deduced from there) or directly on their own
    data (the necessary parameters differ slightly for each method).

    Parameters
    ----------
    path_to_score:
        Path to the score in any supported format (e.g., MusicXML).
        Deduce any necessary onsets, beats etc. from the score as
        calculated by Partitura.
        Warning: Partitura takes “beats” from time signatures
        denominators, e.g., 6/8 has 6 “beats” (not 2).
    """
    self.path_to_score = path_to_score
    # TODO. TBC. May be redundant / better handled on a per-metric basis:
    self.note_array = None

Functions

load_note_array_from_score

load_note_array_from_score()

Parse a score and return Partitura's .note_array() with include_metrical_position=True.

This should cover the required information. The note array includes several fields, of which the methods here use the following (quoting Partitura's descriptions):

  • 'onset_beat': onset time of the note in beats
  • 'duration_beat': duration of the note in beats

These values are called in the form note_array["onset_beat"].

Source code in amads/time/meter/syncopation.py
def load_note_array_from_score(self):
    """
    Parse a score and return Partitura's `.note_array()` with `include_metrical_position=True`.

    This should cover the required information.
    The note array includes several fields, of which the methods here
    use the following (quoting Partitura's descriptions):

    * 'onset_beat': onset time of the note in beats
    * 'duration_beat': duration of the note in beats

    These values are called in the form `note_array["onset_beat"]`.

    """
    if self.note_array is not None:
        print("already retrieved, skipping")
        return
    if self.path_to_score is None:
        raise ValueError("No score provided.")
    else:
        score = load_score(self.path_to_score)
        self.note_array = score.note_array(include_metrical_position=True)

weighted_note_to_beat_distance

weighted_note_to_beat_distance(
    onset_beats: Optional[list] = None,
) -> float

TODO: WIP - does not currently replicate answers in the literature; further investigation to follow

The weighted note-to-beat distance measure (WNBD) works from the distances between note starts, recording which beats each note traverses and its distance to the nearest beat.

The authors clarify that “notes are supposed to end where the next note starts”, so we're working with the inter-note interval (INI), rather than the duration. Note that there are one fewer INI values than notes.

Among the limitations is the incomplete definition of “beat” and the agnostic view of metre: “By strong beats we just mean pulses.” (§3.4).

Parameters:

  • onset_beats (Optional[list], default: None ) –

    User supplied data for the onset time of each note expressed in beats. Optional.

Returns:

  • WNBD value (float)

Examples:

We use the example of the son clave (also available from the meter.profiles module), adapting to match presentation in the literature.

>>> son = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
>>> onset_beats = vector_to_onset_beat(vector=son, beat_unit_length=4)
>>> sm = SyncopationMetric()
>>> sm.weighted_note_to_beat_distance(onset_beats=onset_beats)
Fraction(14, 5)
>>> hesitation = [1, 0, 1, 0, 1, 0, 0, 1]
>>> onset_beats = vector_to_onset_beat(vector=hesitation, beat_unit_length=4)
>>> sm = SyncopationMetric()
>>> sm.weighted_note_to_beat_distance(onset_beats=onset_beats)
Fraction(1, 2)
>>> from amads.music import example
>>> test_xml_file = str(example.fullpath("musicxml/ex1.xml"))
>>> sm = SyncopationMetric(path_to_score=test_xml_file)
>>> sm.weighted_note_to_beat_distance()
Fraction(4, 3)
Source code in amads/time/meter/syncopation.py
def weighted_note_to_beat_distance(
    self, onset_beats: Optional[list] = None
) -> float:
    """

    TODO: WIP - does not currently replicate answers in the literature; further investigation to follow

    The weighted note-to-beat distance measure (WNBD)
    works from the distances between note starts, recording which beats
    each note traverses and its distance to the nearest beat.

    The authors clarify that “notes are supposed to end where the
    next note starts”,
    so we're working with the inter-note interval (INI), rather
    than the duration.
    Note that there are one fewer INI values than notes.

    Among the limitations is the incomplete definition of “beat”
    and the agnostic view of metre:
    “By strong beats we just mean pulses.” (§3.4).

    Parameters
    ----------
    onset_beats:
        User supplied data for the onset time of each note expressed
        in beats. Optional.

    Returns
    -------
    WNBD value (a Fraction)

    Examples
    --------
    We use the example of the son clave
    (also available from the `meter.profiles` module),
    adapting to match presentation in the literature.

    >>> son = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]
    >>> onset_beats = vector_to_onset_beat(vector=son, beat_unit_length=4)
    >>> sm = SyncopationMetric()
    >>> sm.weighted_note_to_beat_distance(onset_beats=onset_beats)
    Fraction(14, 5)

    >>> hesitation = [1, 0, 1, 0, 1, 0, 0, 1]
    >>> onset_beats = vector_to_onset_beat(vector=hesitation, beat_unit_length=4)
    >>> sm = SyncopationMetric()
    >>> sm.weighted_note_to_beat_distance(onset_beats=onset_beats)
    Fraction(1, 2)

    >>> from amads.music import example
    >>> test_xml_file = str(example.fullpath("musicxml/ex1.xml"))
    >>> sm = SyncopationMetric(path_to_score=test_xml_file)
    >>> sm.weighted_note_to_beat_distance()
    Fraction(4, 3)

    """
    # Use user-provided onset_beats if given;
    if onset_beats is None:  # otherwise, fall back to a score on the class
        if self.path_to_score is not None:
            self.load_note_array_from_score()
            onset_beats = [
                Fraction(float(x["onset_beat"])) for x in self.note_array
            ]  # type: ignore
            # Sic, Fraction via float first: Partitura uses np.float32 and
            #    Fraction does not accept that type.
            # TODO revisit class handling of this retrieval when
            #    more algos are in
        else:
            raise ValueError("No score or user values provided.")

    per_note_syncopation_values = []

    durations = [j - i for i, j in zip(onset_beats[:-1], onset_beats[1:])]
    # Sic, although Partitura note_array provides durations,
    #    we're using INI here.

    for i in range(len(durations)):
        onset = onset_beats[i]
        if int(onset) == onset:  # starts on a beat
            per_note_syncopation_values.append(0)
        else:
            duration = durations[i]
            this_beat_int = int(onset)  # NB round down
            if (
                onset + duration <= this_beat_int + 1
            ):  # ends before or at e_{i+1}
                numerator = 1
            elif (
                onset + duration <= this_beat_int + 2
            ):  # ends before or at e_{i+2}
                numerator = 2
            else:  # ends after e_{i+2}
                numerator = 1

            distance_to_nearest_beat = abs(
                round(onset) - Fraction(onset)
            )  # Fraction
            per_note_syncopation_values.append(
                Fraction(numerator, distance_to_nearest_beat)
            )

    return sum(per_note_syncopation_values) / (
        len(per_note_syncopation_values) + 1
    )

vector_to_onset_beat

vector_to_onset_beat(vector: list, beat_unit_length: int = 2)

Map from a vector to onset beat data via vector_to_multiset.

Examples:

>>> son = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1]  # Final 1 for cycle rotation
>>> vector_to_onset_beat(vector=son, beat_unit_length=4) # NB different beat value
(Fraction(0, 1), Fraction(3, 4), Fraction(3, 2), Fraction(5, 2), Fraction(3, 1), Fraction(4, 1))
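Since each index is emitted once per count, counts greater than 1 yield repeated onsets (a multiset). A minimal sketch of the same mapping with a hypothetical pattern:

```python
from fractions import Fraction

# Hypothetical pattern: a count of 2 at position 0 yields two coinciding onsets.
pattern = [2, 0, 1, 0]
onsets = [i for i, count in enumerate(pattern) for _ in range(count)]
tuple(Fraction(x, 2) for x in onsets)  # (Fraction(0, 1), Fraction(0, 1), Fraction(1, 1))
```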
Source code in amads/time/meter/syncopation.py
def vector_to_onset_beat(vector: list, beat_unit_length: int = 2):
    """
    Map from a vector to onset beat data via `vector_to_multiset`.

    Examples
    --------
    >>> son = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1]  # Final 1 for cycle rotation
    >>> vector_to_onset_beat(vector=son, beat_unit_length=4) # NB different beat value
    (Fraction(0, 1), Fraction(3, 4), Fraction(3, 2), Fraction(5, 2), Fraction(3, 1), Fraction(4, 1))

    """
    onsets = [i for i, count in enumerate(vector) for _ in range(count)]
    return tuple(Fraction(x, beat_unit_length) for x in onsets)

score_to_offsets

score_to_offsets(path_to_score: str, to_indices: bool = True) -> list

Import a score and convert it to the sorted list of unique starting timepoints as measured in quarters since the start of the score, and (optionally) convert those starts to indices on a tatum grid.

Note: score parsing warnings are suppressed. If you need to test the validity of scores, handle that separately.

Parameters:

  • path_to_score (str) –

    A string for the file path or URL.

  • to_indices (bool, default: True ) –

    If True, convert the starts to indices on a tatum grid.

Examples:

Two examples from "Species" Counterpoint. The first is straightforwardly in regular whole notes moving together, so the gaps are 4.0 apart (in "quarter notes") and the tatum is 4.

>>> score_path = "https://github.com/MarkGotham/species/raw/refs/heads/main/1x1/005.mxl"
>>> score_to_offsets(score_path, to_indices=False)
[0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 28.0, 32.0, 36.0, 40.0]
>>> score_to_offsets(score_path)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

The second example is from later in book 1. This is the first example of "florid" (5th species) counterpoint. There is one pair of eighth notes in this example (offsets 29.0, 29.5, 30.0) so the tatum is 0.5.

>>> url_5th_species = "https://github.com/MarkGotham/species/raw/refs/heads/main/1x1/082.mxl"
>>> starts = score_to_offsets(url_5th_species, to_indices=False)

This is how it starts:

>>> starts[:5]
[0.0, 2.0, 4.0, 5.0, 6.0]

And this is the part with the eighth-note pair:

>>> starts[22:]
[28.0, 29.0, 29.5, 30.0, 32.0, 33.0, 34.0, 36.0, 38.0, 40.0]
>>> indices = starts_to_indices(starts)
>>> indices[:5]
[0, 4, 8, 10, 12]
>>> indices = score_to_offsets(url_5th_species, to_indices=True)
>>> indices[:5]
[0, 4, 8, 10, 12]
>>> indices[22:]
[56, 58, 59, 60, 64, 66, 68, 72, 76, 80]
Source code in amads/time/meter/tatum.py
def score_to_offsets(path_to_score: str, to_indices: bool = True) -> list:
    """
    Import a score and convert it to the sorted list of unique
    starting timepoints as measured in quarters since the start of the score,
    and (optionally) convert those starts to indices on a tatum grid.

    Note: score parsing warnings are suppressed.
    If you need to test the validity of scores, handle that separately.

    Parameters
    ----------
    path_to_score
        A string for the file path or URL.
    to_indices
        If True, convert the starts to indices on a tatum grid.

    Examples
    --------
    Two examples from "Species" Counterpoint.
    The first is straightforwardly in regular whole notes moving together,
    so the gaps are 4.0 apart (in "quarter notes") and the tatum is 4.

    >>> score_path = "https://github.com/MarkGotham/species/raw/refs/heads/main/1x1/005.mxl"
    >>> score_to_offsets(score_path, to_indices=False)
    [0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 28.0, 32.0, 36.0, 40.0]

    >>> score_to_offsets(score_path)
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    The second example is from later in book 1.
    This is the first example of "florid" (5th species) counterpoint.
    There is one pair of eighth notes in this example (offsets 29.0, 29.5, 30.0) so the tatum is 0.5.

    >>> url_5th_species = "https://github.com/MarkGotham/species/raw/refs/heads/main/1x1/082.mxl"
    >>> starts = score_to_offsets(url_5th_species, to_indices=False)

    This is how it starts:
    >>> starts[:5]
    [0.0, 2.0, 4.0, 5.0, 6.0]

    And this is the part with the eighth-note pair:
    >>> starts[22:]
    [28.0, 29.0, 29.5, 30.0, 32.0, 33.0, 34.0, 36.0, 38.0, 40.0]

    >>> indices = starts_to_indices(starts)
    >>> indices[:5]
    [0, 4, 8, 10, 12]

    >>> indices = score_to_offsets(url_5th_species, to_indices=True)
    >>> indices[:5]
    [0, 4, 8, 10, 12]

    >>> indices[22:]
    [56, 58, 59, 60, 64, 66, 68, 72, 76, 80]

    """
    set_reader_warning_level("none")
    score = read_score(path_to_score, show=False)
    notes = score.get_sorted_notes()
    timepoints = sorted(set(n.onset for n in notes))
    if to_indices:
        return starts_to_indices(timepoints)
    else:
        return timepoints

starts_to_indices

starts_to_indices(starts: list, tatum: Fraction = None) -> list

Given a list of start times, convert to a list of indices on the tatum grid.

If a tatum value is provided, use that; otherwise, deduce the tatum using gcd methods.

This is the input format for the IMA algorithm, among others.

Parameters:

  • starts (list) –

    A list of numeric start times.

  • tatum (Fraction, default: None ) –

    The tatum duration to use as the grid unit; start values are rounded to multiples of it. If None, it is deduced automatically via get_tatum_from_priorities.

Examples:

>>> starts_to_indices([0, 1/2, 2/3, 2.5])
[0, 3, 4, 15]
>>> starts_to_indices([0, 1/2, 2/3, 2.5], tatum=Fraction(1, 6))
[0, 3, 4, 15]
>>> starts_to_indices([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 6.5, 6.6640625, 6.83203125, 7.0])
[0, 6, 12, 18, 24, 30, 36, 39, 40, 41, 42]
>>> starts_to_indices([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 6.5, 6.6640625, 6.83203125, 7.0], tatum=Fraction(1, 6))
[0, 6, 12, 18, 24, 30, 36, 39, 40, 41, 42]

Also accepts tatum values greater than 1:

>>> starts_to_indices([3, 6, 9], tatum=3)
[1, 2, 3]
>>> starts_to_indices([3, 6, 9])
[1, 2, 3]
>>> starts_to_indices([0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40])
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> starts_to_indices([28.0, 29.0, 29.5, 30.0, 32.0, 33.0, 34.0, 36.0, 38.0, 40.0])
[56, 58, 59, 60, 64, 66, 68, 72, 76, 80]
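When `tatum=None`, the grid unit is deduced automatically. The internals of `get_tatum_from_priorities` are not shown here, but a gcd-based deduction can be sketched as follows; `approx_tatum` is a hypothetical, simplified stand-in that takes a fraction gcd over the start times.

```python
from fractions import Fraction
from functools import reduce
from math import gcd, lcm

def approx_tatum(starts):
    """Deduce a grid unit as the gcd of the start times (as Fractions).
    Hypothetical stand-in for get_tatum_from_priorities."""
    # limit_denominator snaps floating-point starts to nearby simple fractions
    fracs = [Fraction(s).limit_denominator(64) for s in starts]
    def frac_gcd(a, b):
        # gcd of two fractions: gcd of numerators over lcm of denominators
        return Fraction(gcd(a.numerator, b.numerator),
                        lcm(a.denominator, b.denominator))
    return reduce(frac_gcd, fracs)

tatum = approx_tatum([0, 0.5, 2 / 3, 2.5])        # Fraction(1, 6)
[round(x / tatum) for x in [0, 0.5, 2 / 3, 2.5]]  # [0, 3, 4, 15]
```

This reproduces the first example above; the priority-based helper may further prefer "nicer" tatum values where the plain gcd is over-fine.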
Source code in amads/time/meter/tatum.py
def starts_to_indices(starts: list, tatum: Fraction = None) -> list:
    """
    Given a list of start times,
    convert to a list of indices on the tatum grid.

    If a tatum value is provided, use that;
    otherwise, deduce the tatum using gcd methods.

    This is the input format for the IMA algorithm, among others.

    Parameters
    ----------
    starts
        A list of numeric start times.
    tatum
        The tatum duration to use as the grid unit;
        start values are rounded to multiples of it.
        If None, it is deduced automatically via `get_tatum_from_priorities`.

    Examples
    --------
    >>> starts_to_indices([0, 1/2, 2/3, 2.5])
    [0, 3, 4, 15]

    >>> starts_to_indices([0, 1/2, 2/3, 2.5], tatum=Fraction(1, 6))
    [0, 3, 4, 15]

    >>> starts_to_indices([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 6.5, 6.6640625, 6.83203125, 7.0])
    [0, 6, 12, 18, 24, 30, 36, 39, 40, 41, 42]

    >>> starts_to_indices([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 6.5, 6.6640625, 6.83203125, 7.0], tatum=Fraction(1, 6))
    [0, 6, 12, 18, 24, 30, 36, 39, 40, 41, 42]

    Also accepts tatum values greater than 1:

    >>> starts_to_indices([3, 6, 9], tatum=3)
    [1, 2, 3]

    >>> starts_to_indices([3, 6, 9])
    [1, 2, 3]

    >>> starts_to_indices([0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40])
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    >>> starts_to_indices([28.0, 29.0, 29.5, 30.0, 32.0, 33.0, 34.0, 36.0, 38.0, 40.0])
    [56, 58, 59, 60, 64, 66, 68, 72, 76, 80]

    """
    if not starts:
        raise ValueError("starts must not be empty")

    if tatum is None:
        tatum = get_tatum_from_priorities(starts)

    return [round(x / tatum) for x in starts]