Coconet model implementation in TensorFlow.js. Thanks to James Wexler for the original implementation.
Path to the checkpoint directory.
Use the model to generate a Bach-style 4-part harmony, conditioned on an
input sequence. The notes in the input sequence should have the
instrument property set to the voice the note belongs to:
0 for Soprano, 1 for Alto, 2 for Tenor and 3 for Bass.

Note: regardless of the length of the notes in the original sequence,
all the notes in the generated sequence will be 1 step long. If you want
to clean up the sequence so that consecutive notes with the same pitch
and instrument are treated as a single held note, you can call
mergeHeldNotes on the result.

This function will replace any of the existing voices with the output of
the model. If you want to restore any of the original voices, you can
call replaceVoice on the output, specifying which voice should be
restored.
The sequence to infill. Must be quantized.
(Optional) Infill parameters such as temperature, the number of sampling iterations, or masks.
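Because infill emits only 1-step notes, the held-note cleanup described above can be illustrated in isolation. The Note shape and the helper below are simplified, illustrative stand-ins, not the library's actual mergeHeldNotes implementation:

```typescript
interface Note {
  pitch: number;
  instrument: number;
  quantizedStartStep: number;
  quantizedEndStep: number;
}

// Illustrative sketch: collapse consecutive 1-step notes that share a pitch
// and instrument into single held notes, which is what mergeHeldNotes does
// conceptually to the infill output.
function mergeHeldNotesSketch(notes: Note[]): Note[] {
  // Order by voice, then pitch, then time, so held runs become adjacent.
  const sorted = [...notes].sort((a, b) =>
      a.instrument - b.instrument || a.pitch - b.pitch ||
      a.quantizedStartStep - b.quantizedStartStep);
  const merged: Note[] = [];
  for (const note of sorted) {
    const last = merged[merged.length - 1];
    if (last && last.instrument === note.instrument &&
        last.pitch === note.pitch &&
        last.quantizedEndStep === note.quantizedStartStep) {
      last.quantizedEndStep = note.quantizedEndStep;  // Extend the held note.
    } else {
      merged.push({...note});
    }
  }
  return merged;
}
```

For example, two adjacent 1-step notes at the same pitch in the same voice come back as one 2-step note, while a change of pitch starts a new note.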
Loads variables from the checkpoint and instantiates the model.
Sets up the layer configuration from params.
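The documented lifecycle, initialize to load the checkpoint variables and then infill, can be sketched against a minimal interface. CoconetLike and harmonize below are hypothetical stand-ins used only to show the call order; the real class lives in @magenta/music and takes the checkpoint directory path in its constructor:

```typescript
// Hypothetical minimal interface mirroring the documented methods.
interface CoconetLike {
  initialize(): Promise<void>;  // Loads variables from the checkpoint.
  infill(sequence: object, config?: {temperature?: number}): Promise<object>;
}

// Illustrative driver: initialization must complete before infilling.
async function harmonize(model: CoconetLike, melody: object): Promise<object> {
  await model.initialize();
  return model.infill(melody, {temperature: 0.99});
}
```

In practice you would construct the model once, await initialize, and then issue as many infill calls as needed against the loaded variables.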