hidden-markov-music.hmm

General implementation of a hidden Markov model, and associated algorithms.

backward-probability-final

multimethod

Returns β_T(i), for all states i.

backward-probability-prev

multimethod

Returns β_t(i), for all states i, for t < T.

This is the probability of observing the partial observation sequence, o_{t+1}, ..., o_T, conditional on being in state i at time t. Depends on the model, β_{t+1}(j), and the next observation o_{t+1}.

Output is in the format

{:state-1 β_t(1),
 :state-2 β_t(2),
 ...
 :state-N β_t(N)}
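The recurrence behind backward-probability-final and backward-probability-prev can be sketched in Python (an illustrative translation, not the library's Clojure; the two-state model and its numbers are made up, and β_T(i) = 1 is the conventional initialization):

```python
# Backward-recurrence sketch:
#   beta_T(i) = 1
#   beta_t(i) = sum_j p_ij * b_j(o_{t+1}) * beta_{t+1}(j)   for t < T
trans = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
emit  = {'A': {'x': 0.9, 'y': 0.1}, 'B': {'x': 0.2, 'y': 0.8}}
states = ['A', 'B']

def backward_probability_prev(beta_next, obs_next):
    """beta_t for all states, from beta_{t+1} and the next observation o_{t+1}."""
    return {i: sum(trans[i][j] * emit[j][obs_next] * beta_next[j]
                   for j in states)
            for i in states}

beta_T = {i: 1.0 for i in states}                  # backward-probability-final
beta_prev = backward_probability_prev(beta_T, 'y')
```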

backward-probability-seq

(backward-probability-seq model observations)

Returns a lazy seq of β_T(i), β_{T-1}(i), ..., β_1(i), where β_t(i) is the probability of observing o_{t+1}, ..., o_T, given that the system is in state s_i at time t.

deltas

multimethod

Returns a mapping of state-j -> max_i(δ_{t-1}(i) p_{ij}) b_j(o_t), where the maximum is taken over all states i.

digamma

multimethod

Returns the probability of being in state i at time t and state j at time t+1 given the model and observation sequence.

digamma-seq

(digamma-seq model forward-probs backward-probs observations)

Returns a lazy sequence of digammas from t = 1 to t = T-1.

See digamma.
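The quantity digamma computes can be sketched in Python (illustrative only; toy two-state model, a length-2 observation sequence):

```python
# digamma sketch:
#   digamma_t(i, j) = alpha_t(i) * p_ij * b_j(o_{t+1}) * beta_{t+1}(j) / P[O|lambda]
pi    = {'A': 0.6, 'B': 0.4}
trans = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
emit  = {'A': {'x': 0.9, 'y': 0.1}, 'B': {'x': 0.2, 'y': 0.8}}
states = ['A', 'B']
obs = ['x', 'y']                                   # T = 2

alpha_1 = {i: pi[i] * emit[i][obs[0]] for i in states}   # forward, t = 1
beta_2  = {i: 1.0 for i in states}                       # beta_T(i) = 1
likelihood = sum(alpha_1[i] * trans[i][j] * emit[j][obs[1]] * beta_2[j]
                 for i in states for j in states)

digamma_1 = {i: {j: alpha_1[i] * trans[i][j] * emit[j][obs[1]] * beta_2[j]
                    / likelihood
                 for j in states}
             for i in states}
# digamma_1 is a joint distribution over (i, j): its entries sum to 1.
```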

forward-probability-initial

multimethod

Returns α_1(i), for all states i.

This is the probability of initially being in state i after observing the initial observation, o_1. Depends only on the model and initial observation.

Output is in the format

{:state-1 α_1(1),
 :state-2 α_1(2),
 ...
 :state-N α_1(N)}

forward-probability-next

multimethod

Returns α_t(i), for all states i, for t > 1, where α_t(i) is the probability of being in state s_i at time t after observing the sequence o_1, o_2, ..., o_t. Depends on the model λ, previous forward probability α_{t-1}(i), and current observation o_t.

Output is in the format

{:state-1 α_t(1),
 :state-2 α_t(2),
 ...
 :state-N α_t(N)}

forward-probability-seq

(forward-probability-seq model observations)

Returns a lazy seq of α_1(i), α_2(i), ..., α_T(i), where α_t(i) is the probability of being in state s_i at time t after observing the sequence o_1, o_2, ..., o_t.
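The forward pass described by forward-probability-initial, forward-probability-next, and forward-probability-seq can be sketched in Python (illustrative, not the library's Clojure; the two-state model is a made-up toy):

```python
# Forward-pass sketch:
#   alpha_1(i) = pi_i * b_i(o_1)
#   alpha_t(i) = [sum_j alpha_{t-1}(j) * p_ji] * b_i(o_t)   for t > 1
pi    = {'A': 0.6, 'B': 0.4}
trans = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
emit  = {'A': {'x': 0.9, 'y': 0.1}, 'B': {'x': 0.2, 'y': 0.8}}
states = ['A', 'B']

def forward_probability_seq(observations):
    """Lazy seq of alpha_1(i), alpha_2(i), ..., alpha_T(i)."""
    alpha = {i: pi[i] * emit[i][observations[0]] for i in states}
    yield alpha
    for o in observations[1:]:
        alpha = {i: sum(alpha[j] * trans[j][i] for j in states) * emit[i][o]
                 for i in states}
        yield alpha

alphas = list(forward_probability_seq(['x', 'y']))
likelihood = sum(alphas[-1].values())   # P[O|lambda], the forward algorithm
```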

gamma

multimethod

Returns the probability of being in state i at time t given the model and observation sequence.

gamma-seq

(gamma-seq model forward-probs backward-probs)

Returns a lazy sequence of gammas from t = 1 to t = T.

See gamma.
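The quantity gamma computes can be sketched in Python (illustrative only; toy two-state model, a length-2 observation sequence):

```python
# gamma sketch:
#   gamma_t(i) = alpha_t(i) * beta_t(i) / P[O|lambda]
pi    = {'A': 0.6, 'B': 0.4}
trans = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
emit  = {'A': {'x': 0.9, 'y': 0.1}, 'B': {'x': 0.2, 'y': 0.8}}
states = ['A', 'B']

alpha_1 = {i: pi[i] * emit[i]['x'] for i in states}            # forward, t = 1
beta_1  = {i: sum(trans[i][j] * emit[j]['y'] for j in states)  # backward, t = 1
           for i in states}
likelihood = sum(alpha_1[i] * beta_1[i] for i in states)

gamma_1 = {i: alpha_1[i] * beta_1[i] / likelihood for i in states}
# gamma_1 is a distribution over states: its values sum to 1.
```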

HMM->LogHMM

(HMM->LogHMM model)

Transforms an HMM into a logarithmic HMM.
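A plausible reading of this transform, sketched in Python (assumption: every probability in the model is replaced by its natural logarithm, so that products of probabilities become sums of log-probabilities and long sequences do not underflow):

```python
import math

# Log-space transform sketch: map each probability to its log.
emit     = {'A': {'x': 0.9, 'y': 0.1}}
log_emit = {s: {o: math.log(p) for o, p in dist.items()}
            for s, dist in emit.items()}

# A product of probabilities becomes a sum of log-probabilities:
# log(0.9 * 0.1) == log(0.9) + log(0.1)
product_log = log_emit['A']['x'] + log_emit['A']['y']
```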

hmms-almost-equal?

multimethod

Returns true if the two HMMs' corresponding probabilities are equal to within the given precision.

likelihood-backward

multimethod

Returns P[O|λ], using the backward algorithm.

This is the likelihood of the observed sequence O given the model λ.
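The backward-algorithm termination step can be sketched in Python (illustrative only; toy two-state model, a length-2 observation sequence, so β_2(i) = 1):

```python
# Backward termination sketch:
#   P[O|lambda] = sum_i pi_i * b_i(o_1) * beta_1(i)
pi    = {'A': 0.6, 'B': 0.4}
trans = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
emit  = {'A': {'x': 0.9, 'y': 0.1}, 'B': {'x': 0.2, 'y': 0.8}}
states = ['A', 'B']
obs = ['x', 'y']

beta_1 = {i: sum(trans[i][j] * emit[j][obs[1]] for j in states)
          for i in states}
likelihood = sum(pi[i] * emit[i][obs[0]] * beta_1[i] for i in states)
# This agrees with the forward algorithm's sum over alpha_T(i).
```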

likelihood-forward

multimethod

Returns P[O|λ], using the forward algorithm.

This is the likelihood of the observed sequence O given the model λ.

LogHMM->HMM

(LogHMM->HMM model)

Transforms a logarithmic HMM into an HMM.

max-entries

(max-entries weighted-deltas)

Returns a mapping of state-j -> [argmax_i(δ_{t-1}(i) p_{ij}), max_i(δ_{t-1}(i) p_{ij})], where both the argmax and the max are taken over all states i.

psis

(psis max-entries)

Returns a mapping of state-j -> argmax_i(δ_{t-1}(i) p_{ij}), where the argmax is taken over all states i.

random-emission

(random-emission model state)

Randomly emits an observation from the current state, weighted by the observation probability distribution.

random-HMM

(random-HMM states observations)

Returns an HMM with random probabilities, given the state and observation labels.

random-initial-state

(random-initial-state model)

Randomly selects an initial state from the model, weighted by the initial probability distribution.
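The weighted selection shared by random-initial-state, random-transition, and random-emission can be sketched in Python (illustrative only; `weighted_choice` is a hypothetical helper, not a library function):

```python
import random

# Draw a label with probability equal to its weight, by walking the
# cumulative distribution until the uniform draw falls inside a bucket.
def weighted_choice(dist, rng=random):
    r = rng.random()                  # uniform in [0, 1)
    acc = 0.0
    for label, p in dist.items():
        acc += p
        if r < acc:
            return label
    return label                      # guard against round-off at the tail

pi = {'A': 0.6, 'B': 0.4}
initial_state = weighted_choice(pi)   # 'A' about 60% of the time
```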

random-LogHMM

(random-LogHMM states observations)

Returns a logarithmic HMM with random probabilities, given the state and observation labels.

random-transition

(random-transition model state)

Randomly selects a state to transition to from the current state, weighted by the transition probability distribution.

sample-emissions

multimethod

Randomly walks through the states of the model, returning an infinite lazy seq of emissions from those states. A predetermined sequence of states may optionally be provided, in which case emissions are drawn randomly from those given states.

See sample-states and random-emission for details.

sample-states

multimethod

Randomly walks through the states of the model, returning an infinite lazy seq of those states.

See random-initial-state and random-transition for details on the decisions made at each step.
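The random walk behind sample-states can be sketched in Python as an infinite generator (illustrative only; toy two-state model):

```python
import itertools
import random

# Random-walk sketch: pick an initial state from pi, then transition forever.
pi    = {'A': 0.6, 'B': 0.4}
trans = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}

def weighted_choice(dist):
    r, acc = random.random(), 0.0
    for label, p in dist.items():
        acc += p
        if r < acc:
            return label
    return label

def sample_states():
    """Infinite lazy seq of states visited by a random walk over the model."""
    state = weighted_choice(pi)                   # random-initial-state
    while True:
        yield state
        state = weighted_choice(trans[state])     # random-transition

walk = list(itertools.islice(sample_states(), 5))
```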

state-path-initial

multimethod

Returns ψ_1(i) and δ_1(i), for the given model λ and first observation o_1.

Output takes the form:

{:delta δ_1(i),
 :psi   ψ_1(i)}

state-path-next

(state-path-next model obs delta-prev)

Returns ψ_t(i) and δ_t(i), for the given model λ and current observation o_t. Depends on the previous δ_{t-1}(i).

Output takes the form:

{:delta δ_t(i),
 :psi   ψ_t(i)}

state-path-seq

(state-path-seq model observations)

Returns a lazy seq of previous states paired with their probabilities, [ψ_1(i) δ_1(i)], ..., [ψ_T(i) δ_T(i)], where ψ_t(i) maps state i to the state j which most likely preceded it, and δ_t(i) maps state i to the probability of the most likely state path ending in state i at time t.

stream->model

(stream->model stream)

Reads an HMM or LogHMM from an EDN stream, throwing an exception if the stream does not contain a valid model.

train-model

(train-model model observations & {:keys [decimal max-iter], :or {decimal 15, max-iter 100}})

Trains the model via the Baum-Welch algorithm.
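A single Baum-Welch re-estimation step, assembled from the gamma and digamma quantities above, can be sketched in Python (illustrative only; toy two-state model, a length-2 observation sequence, and only the π and transition updates are shown):

```python
# One re-estimation step:
#   pi_i' = gamma_1(i)
#   p_ij' = sum_{t<T} digamma_t(i, j) / sum_{t<T} gamma_t(i)
pi    = {'A': 0.6, 'B': 0.4}
trans = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
emit  = {'A': {'x': 0.9, 'y': 0.1}, 'B': {'x': 0.2, 'y': 0.8}}
states, obs = ['A', 'B'], ['x', 'y']               # T = 2

alpha_1 = {i: pi[i] * emit[i][obs[0]] for i in states}
beta_1  = {i: sum(trans[i][j] * emit[j][obs[1]] for j in states)
           for i in states}
likelihood = sum(alpha_1[i] * beta_1[i] for i in states)

gamma_1   = {i: alpha_1[i] * beta_1[i] / likelihood for i in states}
digamma_1 = {i: {j: alpha_1[i] * trans[i][j] * emit[j][obs[1]] / likelihood
                 for j in states} for i in states}

new_pi    = dict(gamma_1)
new_trans = {i: {j: digamma_1[i][j] / gamma_1[i] for j in states}
             for i in states}
# Both new_pi and each row of new_trans remain stochastic.
```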

train-model-likelihood-seq

(train-model-likelihood-seq model observations)

train-model-next

(train-model-next model observations)

train-model-seq

(train-model-seq model observations)

Returns an infinite lazy sequence of trained models.

uniform-HMM

(uniform-HMM states observations)

Returns an HMM with uniform probabilities, given the state and observation labels.

uniform-LogHMM

(uniform-LogHMM states observations)

Returns a logarithmic HMM with uniform probabilities, given the state and observation labels.

valid-hmm?

multimethod

Returns true if all of the HMM's probability distributions are stochastic (sum to one) to the given precision.
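The check can be sketched in Python (illustrative only; the `decimal` precision parameter is assumed to mean the number of decimal places, as in train-model's options):

```python
# Stochasticity-check sketch: a distribution is stochastic when its values
# sum to 1 to within the given number of decimal places.
def stochastic(dist, decimal=15):
    return abs(sum(dist.values()) - 1.0) < 10.0 ** -decimal

valid = (stochastic({'A': 0.5, 'B': 0.5})
         and stochastic({'x': 0.3, 'y': 0.7}, decimal=10))
```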

viterbi-path

(viterbi-path model observations)

Returns one of the state sequences Q which maximizes P[Q|O,λ], along with the likelihood itself, P[Q|O,λ]. There are potentially many such paths, all with equal likelihood, and one of those is chosen arbitrarily.

This is accomplished by means of the Viterbi algorithm, and takes into account that q_t depends on q_{t-1}, and not just o_t, avoiding impossible state sequences.

Output takes the form:

{:likelihood     P[Q|O,λ],
 :state-sequence Q}
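The Viterbi recursion described by state-path-initial, state-path-next, and viterbi-path can be sketched in Python (illustrative only; the two-state model is a made-up toy):

```python
# Viterbi sketch:
#   delta_1(i) = pi_i * b_i(o_1)
#   delta_t(j) = max_i(delta_{t-1}(i) * p_ij) * b_j(o_t)
#   psi_t(j)   = argmax_i(delta_{t-1}(i) * p_ij)
pi    = {'A': 0.6, 'B': 0.4}
trans = {'A': {'A': 0.7, 'B': 0.3}, 'B': {'A': 0.4, 'B': 0.6}}
emit  = {'A': {'x': 0.9, 'y': 0.1}, 'B': {'x': 0.2, 'y': 0.8}}
states = ['A', 'B']

def viterbi_path(observations):
    delta = {i: pi[i] * emit[i][observations[0]] for i in states}
    psis = []
    for o in observations[1:]:
        psi, new_delta = {}, {}
        for j in states:
            best = max(states, key=lambda i: delta[i] * trans[i][j])
            psi[j] = best
            new_delta[j] = delta[best] * trans[best][j] * emit[j][o]
        psis.append(psi)
        delta = new_delta
    # Backtrack through the psi tables from the most likely final state.
    last = max(states, key=lambda i: delta[i])
    path = [last]
    for psi in reversed(psis):
        path.append(psi[path[-1]])
    path.reverse()
    return {'likelihood': delta[last], 'state-sequence': path}

result = viterbi_path(['x', 'y'])
```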

weighted-deltas

multimethod

Returns a mapping of state-j -> state-i -> δ_{t-1}(i) p_{ij}.