2. A transformer-based MIDI generation model (midi-emotion) that conditions on these emotional values

The model offers three different conditioning modes:

#### `continuous_concat` (Recommended)

Creates a single vector from the valence and arousal values, repeats it across the sequence, and concatenates it with every music token embedding. This gives the emotion information global influence throughout the generation process, allowing the transformer to access emotional context at every timestep. In the midi-emotion evaluation, this method achieved the best results in both note prediction accuracy and emotional coherence.
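As a rough sketch of the mechanism (NumPy for illustration only; the real model works on learned embeddings and may project the emotion pair before concatenating, and all shapes here are made up):

```python
import numpy as np

def continuous_concat(token_embs, valence, arousal):
    """Tile the (valence, arousal) pair across the sequence and
    concatenate it onto every token embedding."""
    seq_len, _ = token_embs.shape
    emotion = np.array([valence, arousal])                # shape (2,)
    tiled = np.tile(emotion, (seq_len, 1))                # shape (seq_len, 2)
    return np.concatenate([token_embs, tiled], axis=-1)   # (seq_len, emb_dim + 2)

# 4 music tokens with 8-dim embeddings, conditioned on valence=0.5, arousal=-0.3
out = continuous_concat(np.zeros((4, 8)), 0.5, -0.3)
print(out.shape)  # (4, 10)
```

Because every row carries the emotion vector, the conditioning signal stays available at every timestep regardless of how long the generated sequence grows.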
#### `continuous_token`

Converts each emotion value (valence and arousal) into a separate condition vector with the same length as the music token embeddings, then concatenates them in the sequence dimension. The emotion vectors are inserted at the beginning of the input sequence during generation. This treats emotions similarly to music tokens but can lose influence as the sequence grows longer.
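A minimal sketch of the shape bookkeeping (the linear maps `w_v` and `w_a` stand in for the model's learned projections; this is an assumption for illustration, not the midi-emotion code):

```python
import numpy as np

def continuous_token(token_embs, valence, arousal, w_v, w_a):
    """Project each scalar emotion to an embedding-sized vector and
    prepend both along the sequence dimension."""
    v_vec = valence * w_v                                  # shape (emb_dim,)
    a_vec = arousal * w_a                                  # shape (emb_dim,)
    return np.concatenate([v_vec[None], a_vec[None], token_embs], axis=0)

emb_dim = 8
rng = np.random.default_rng(0)
w_v, w_a = rng.normal(size=emb_dim), rng.normal(size=emb_dim)

# 4 music tokens -> 6 rows: two emotion "tokens" followed by the music tokens
out = continuous_token(np.zeros((4, emb_dim)), 0.5, -0.3, w_v, w_a)
print(out.shape)  # (6, 8)
```

Because the emotion rows sit at the front of the sequence, any context window that truncates from the left can drop them — which is the fading-influence problem described above.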
| 38 |
+
|
| 39 |
+
#### `discrete_token`
|
| 40 |
+
Quantizes continuous emotion values into 5 discrete bins (very low, low, moderate, high, very high) and converts them into control tokens. These tokens are placed before the music tokens in the sequence. While this represents the current state-of-the-art approach in conditional text generation, it suffers from information loss due to binning and can lose emotional context during longer generations when tokens are truncated.
|
### Usage

## Training Data

The model combines:

1. Image encoder: Fine-tuned on a curated dataset of artwork with emotional annotations
2. MIDI generation: Uses the Lakh-Spotify dataset as processed by the midi-emotion project

## Attribution