docs/product_readme_viral_muse.md
# product_readme.md
[AGENTARIUM_ASSET]
Name: Viral Muse – Product Readme / Listing
Version: v1.0
Status: Draft

## Title
Viral Muse — Music Pattern Architect (TikTok-Ready Concept Builder)

## Short Tagline
A pattern-first creative partner for artists: hooks, structures, TikTok formats, and genre flips — guided by datasets, not “lyric bot vibes.”

## What it is
**Viral Muse** is an Agentarium-style agent package designed to help creators develop **viral-but-original** music concepts.
It works by reasoning over structured pattern libraries (datasets) and returning **clear creative direction**: hooks, sections, constraints, and format-first ideas.

This package is ideal if you want:
- repeatable ideation (not random inspiration)
- consistent structure (not messy brainstorming)
- TikTok-native formats + genre transformation logic

## What it does (capabilities)
- Generates **song concept blueprints**: hook idea, section plan, arc, replay triggers
- Suggests **TikTok content formats**: openers, framing, loop mechanics, “what to film”
- Applies **genre transformations**: translate the same concept into different genres while keeping the “core payload”
- Uses a **viral signal checklist** to score and strengthen ideas without copying trends
- Provides **creative partner advice**: what to simplify, what to exaggerate, what to test

## What’s included (Agentarium v1 core)
- `/meta/agent_manifest.json`
- `/core/system_prompt.md`
- `/core/reasoning_template.md`
- `/core/personality_fingerprint.md`
- `/guardrails/guardrails.md`
- `/docs/workflow_notes.md`
- `/docs/use_cases.md`
- `/datasets/` (6 CSV datasets)
  - `lyric_structure_map.csv`
  - `viral_pattern_detector.csv`
  - `genre_transformation_rules.csv`
  - `tiktok_concept_patterns.csv`
  - `viral_potential_rater.csv`
  - `creative_partner_advice_map.csv`
- `/memory_schemas/` (user profile + project workspace)
  - `user_profile_memory.csv`
  - `project_workspace_memory.csv`
  - `memory_rules.md`

## Ideal buyers
- musicians + producers building a repeatable writing pipeline
- creator teams making TikTok-first music content
- AI builders who want a **dataset-driven** creative agent template
- indie labels / A&R experiments (concept validation + testing loops)

## How to use (high level)
1) Load the system prompt + reasoning + personality into your agent runtime.
2) Upsert the datasets into a vector DB (or keep them as a local retrieval index).
3) Ask for **concepts**, not final lyrics:
   - “Give me 5 hook angles for X vibe”
   - “Design a 30s TikTok loop concept for…”
   - “Transform this concept into afrobeat / reggaeton / alt rock”

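Step 1 can be sketched in a few lines of Python. This is a minimal sketch, assuming the package layout listed under “What’s included” and assuming your runtime accepts one combined system message (packaging choices vary per runtime):

```python
from pathlib import Path

def build_system_message(package_root: str) -> str:
    """Concatenate the core behavior files (plus guardrails) into one
    system message, separated by horizontal rules."""
    root = Path(package_root)
    parts = [
        root / "core" / "system_prompt.md",
        root / "core" / "reasoning_template.md",
        root / "core" / "personality_fingerprint.md",
        root / "guardrails" / "guardrails.md",
    ]
    return "\n\n---\n\n".join(p.read_text(encoding="utf-8") for p in parts)
```

The workflow notes (`/docs/workflow_notes.md`) cover steps 2 and 3 in detail.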
## Notes
- This is a **pattern architect**, not a plagiarism machine.
- Outputs are intentionally structured and testable (multiple variants + constraints).

## License / attribution
Use whatever license you set in the manifest. Recommended: CC BY 4.0 for open distribution, or a commercial license for paid bundles.
docs/workflow_notes_viral_muse.md
# workflow_notes.md
[AGENTARIUM_ASSET]
Name: Viral Muse – Workflow Notes (Implementation)
Version: v1.0
Status: Draft

## Goal
Implement Viral Muse as a **dataset-driven** agent using:
- system prompt + reasoning template + personality fingerprint
- guardrails
- RAG over the 6 CSV datasets
- optional “knowledge map” layer for cross-dataset linking
- memory: user profile + project workspace

This guide assumes an orchestration runtime like **n8n**, but the logic applies to LangChain, Flowise, Dify, etc.

---

## 0) Folder sanity check (Agentarium v1)
You should have:
- `/core/` (system_prompt.md, reasoning_template.md, personality_fingerprint.md)
- `/datasets/` (6 CSVs)
- `/guardrails/guardrails.md`
- `/memory_schemas/` (2 CSV schemas + memory_rules.md)
- `/docs/` (this file + readme + use cases)

If you add a knowledge map later, put it in:
- `/datasets/knowledge_map.csv` (recommended) or `/datasets/master_grid.csv`

---

## 1) Implement the core behavior files
### 1.1 System Prompt
- Paste `/core/system_prompt.md` into your agent’s **system** message.
- This defines the agent’s role: pattern-first creative partner.

### 1.2 Reasoning Template
- Store `/core/reasoning_template.md` as internal guidance in your runtime (developer message / hidden instruction / “policy doc”).
- Your runtime should prepend it before each completion (or inject it as a “rules” section).

### 1.3 Personality Fingerprint
- Add `/core/personality_fingerprint.md` as a style constraint layer.
- Use it to keep the tone consistent: compact, direct, pattern-oriented.

**Result:** the model behaves consistently even before RAG.

---

## 2) Apply guardrails
- Load `/guardrails/guardrails.md` as a rules block.
- Enforce:
  - no plagiarism / no “copy this hit song” behavior
  - no made-up dataset facts
  - no unsafe content requests
  - outputs should be structured and testable

In n8n, you typically inject guardrails as part of the prompt assembly (before the user message).

---

## 3) Prepare datasets for RAG
You have 6 CSV datasets in `/datasets/`.
Best practice is to convert each row into a **retrieval document** with:
- `source_dataset`
- `row_id`
- key fields
- a short “row summary” string for embeddings

### 3.1 Minimal row-to-document format (recommended)
For each CSV row, create a text payload like:

- Title line: `[DATASET=lyric_structure_map | id=LSM_012]`
- Then: `field=value` lines (only the meaningful ones)
- Then: a compact 1–2 sentence row summary

This makes retrieval clean and avoids embedding empty columns.

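The row-to-document format above can be sketched as a small helper. This is a minimal sketch: the `id` and `summary` column names are assumptions about your CSV headers; swap in the actual field names from your datasets.

```python
def row_to_document(dataset: str, row: dict, id_field: str = "id") -> str:
    """Build the retrieval payload for one CSV row: a [DATASET | id] title
    line, field=value lines for non-empty fields, then the row summary."""
    lines = [f"[DATASET={dataset} | id={row.get(id_field, '')}]"]
    for key, value in row.items():
        if key in (id_field, "summary") or not str(value).strip():
            continue  # skip the id, the summary, and empty columns
        lines.append(f"{key}={value}")
    if str(row.get("summary", "")).strip():
        lines.append(str(row["summary"]))  # compact summary goes last
    return "\n".join(lines)
```

Embed the returned string; the title line also makes retrieved chunks self-identifying in the prompt.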
---

## 4) Upsert into a Vector DB (VDB)
You can use Pinecone, Qdrant, Weaviate, Chroma, FAISS — anything that supports:
- embedding vectors
- metadata filters
- similarity search

### 4.1 What to store per vector
**Vector record**
- `id`: stable id (e.g. `lyric_structure_map:LSM_012`)
- `text`: the row-to-document payload
- `metadata`:
  - `dataset` (one of the 6)
  - tags / genre / pattern_type (if available)
  - any fields you want to filter by

### 4.2 n8n implementation (practical steps)
1) **Read file(s)**
   - Node: “Read Binary File” (or fetch from GitHub / Drive)
2) **Parse CSV**
   - Node: “Spreadsheet File” → Convert to JSON (or CSV Parse)
3) **Normalize rows**
   - Node: “Function” (build `id`, `text`, `metadata`)
4) **Create embeddings**
   - Node: “OpenAI” → Embeddings (or any embedding provider)
5) **Upsert to VDB**
   - Pinecone/Qdrant/Weaviate via:
     - a native node if available, OR
     - an “HTTP Request” node to the VDB REST API
6) **Verify**
   - Run a test query and confirm you retrieve relevant rows.

**Tip:** store the dataset name in metadata so you can filter retrieval per task:
- “only TikTok formats” → filter dataset=`tiktok_concept_patterns`
- “structure help” → dataset=`lyric_structure_map`

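The normalize step (4.2, step 3) can be sketched as follows. This is a sketch, not the package’s shipped code: the `genre` / `pattern_type` / `tags` metadata columns are hypothetical examples, and the embedding call plus the actual upsert are left to your provider and VDB client.

```python
FILTER_FIELDS = ("genre", "pattern_type", "tags")  # assumed example columns

def build_vector_records(dataset: str, rows: list[dict]) -> list[dict]:
    """Normalize parsed CSV rows into upsert-ready records with a stable
    id, a text payload, and metadata usable for per-task filtering."""
    records = []
    for row in rows:
        records.append({
            "id": f"{dataset}:{row['id']}",  # stable id, e.g. dataset:LSM_012
            "text": " | ".join(
                f"{k}={v}" for k, v in row.items() if str(v).strip()
            ),
            "metadata": {
                "dataset": dataset,
                **{k: row[k] for k in FILTER_FIELDS if row.get(k)},
            },
        })
    return records
```

Each record then gets an embedding of its `text` before being sent to the VDB (native node or HTTP Request).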
---

## 5) RAG retrieval at runtime
At inference time, your agent should:
1) classify intent (hook / structure / tiktok / genre flip / audit)
2) select 1–3 datasets to query
3) retrieve the top-K rows (e.g. K=6–12)
4) synthesize the output using retrieved rows only (no invented dataset claims)

### 5.1 Prompt assembly (runtime order)
1) System prompt
2) Guardrails
3) Reasoning template
4) Personality fingerprint
5) Memory snapshot (user profile + project workspace)
6) Retrieved context (RAG)
7) User message

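The runtime order above can be sketched as a message-assembly helper. A minimal sketch assuming an OpenAI-style `role`/`content` chat format; collapsing items 1–4 into one system block (and 5–6 into a second) is a design choice, not a requirement of the package:

```python
def assemble_messages(system_prompt: str, guardrails: str, reasoning: str,
                      personality: str, memory_snapshot: str,
                      retrieved_context: str, user_message: str) -> list[dict]:
    """Assemble the chat payload in the 5.1 runtime order (1-7)."""
    behavior = "\n\n".join([system_prompt, guardrails, reasoning, personality])
    context = (f"MEMORY:\n{memory_snapshot}\n\n"
               f"RETRIEVED CONTEXT:\n{retrieved_context}")
    return [
        {"role": "system", "content": behavior},   # items 1-4
        {"role": "system", "content": context},    # items 5-6
        {"role": "user", "content": user_message}, # item 7
    ]
```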
---

## 6) Knowledge map / “Master Grid” (optional but recommended)
If you want cross-dataset reasoning, add a **knowledge map** file to link patterns:

### 6.1 Simple schema (CSV)
Store links as triplets:
- `source_node`, `relation`, `target_node`, `weight`, `notes`

Examples:
- `tiktok_format:duet_bait` → `supports` → `viral_signal:comment_trigger`
- `structure:prechorus_lift` → `amplifies` → `viral_signal:anticipation`
- `genre_flip:reggaeton` → `prefers` → `hook_style:call_response`

### 6.2 How to use it
- Upsert the knowledge map into the same VDB (or keep it as a small local lookup table).
- When generating, retrieve:
  - primary rows from the relevant dataset(s)
  - plus 3–8 knowledge-map links that connect them
- Use those links to produce “why this works” explanations and better constraints.

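If you keep the map as a small local lookup table, fetching the 3–8 supporting links can be as simple as the sketch below, using the triplet columns from 6.1:

```python
import csv
import io

def links_for(nodes: set, knowledge_map_csv: str, limit: int = 8) -> list[dict]:
    """Return up to `limit` triplets that touch any of the given nodes,
    highest weight first."""
    reader = csv.DictReader(io.StringIO(knowledge_map_csv))
    hits = [r for r in reader
            if r["source_node"] in nodes or r["target_node"] in nodes]
    hits.sort(key=lambda r: float(r.get("weight") or 0), reverse=True)
    return hits[:limit]
```

Feed the returned rows into the prompt alongside the primary dataset rows to ground the “why this works” explanations.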
---

## 7) Memory implementation (User Profile + Project Workspace)
Use the files in `/memory_schemas/`:
- `user_profile_memory.csv`
- `project_workspace_memory.csv`
- `memory_rules.md`

### 7.1 Read memory
Before responding:
- load active user profile facts (preferences, style constraints)
- load the current project workspace (objectives, constraints, next actions)

### 7.2 Write memory
After responding, write only durable facts:
- user preferences that recur
- project decisions (selected concept, chosen genre, chosen structure)
- next actions (what to test next)

**Important:** append new rows; don’t overwrite old ones.

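The append-only rule can be enforced with a tiny helper. A sketch: the `key`/`value` columns in the usage example are illustrative; use the actual headers from your memory schema CSVs.

```python
import csv
from pathlib import Path

def append_memory_row(path: str, row: dict) -> None:
    """Append one durable fact to a memory CSV; never rewrites earlier rows.
    Writes the header only when creating the file."""
    file = Path(path)
    is_new = not file.exists()
    with file.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(row.keys()))
        if is_new:
            writer.writeheader()
        writer.writerow(row)

# Usage (illustrative columns):
# append_memory_row("memory_schemas/project_workspace_memory.csv",
#                   {"key": "chosen_genre", "value": "reggaeton"})
```

Opening in append mode (`"a"`) is what guarantees old rows are never overwritten.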
---

## 8) Quick acceptance test (you can run in any runtime)
Try these prompts and verify RAG is working:

1) “Give me 8 hook angles + why each is replayable.”
2) “Design a 30s TikTok loop concept. 1 prop, 1 angle.”
3) “Transform this concept into cumbia and then into alt-rock.”
4) “Audit this chorus for viral signals and give minimal fixes.”

If outputs reference your dataset concepts consistently, you’re done.

---

## 9) Common failure modes (and fixes)
- **Generic output** → increase retrieval K; tighten the prompt to require citing retrieved patterns
- **Hallucinated claims** → enforce: “If it’s not in the retrieved context, say unknown”
- **Too long** → cap variants; default to compact bullet outputs
- **Bad retrieval** → improve the row-to-document summaries; add better metadata filters