---
title: AnalysisGNN Music Analysis
emoji: 🎵
colorFrom: red
colorTo: pink
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: mit
short_description: Inference for the AnalysisGNN score analysis model
---

# AnalysisGNN Gradio Interface

A Gradio web interface for [AnalysisGNN](https://github.com/manoskary/analysisGNN), a unified music analysis model using Graph Neural Networks.

## Features

- 🎼 **MusicXML Upload**: Upload and analyze musical scores in MusicXML format
- 🎨 **Score Visualization**: Automatic rendering of uploaded scores to images (now with a built-in MuseScore AppImage fallback)
- 📊 **Multi-task Analysis**: Perform various music analysis tasks:
  - Cadence Detection
  - Key Analysis (Local & Tonalized)
  - Harmonic Analysis (Chord Quality, Root, Bass, Inversion)
  - Roman Numeral Analysis
  - Phrase & Section Segmentation
  - Harmonic Rhythm
  - Pitch-Class Set Groupings
  - Non-Chord Tone (TPC-in-label / NCT) Detection
  - Note Degree Labeling
- 📈 **Results Table**: View analysis results in an interactive table
- 💾 **Export Results**: Download analysis results as CSV
- 🧾 **Parsed Score Download**: Grab the normalized MusicXML that is produced after parsing with Partitura

## Quick Start

### Local Installation

```bash
# Clone the repository
git clone https://github.com/manoskary/analysisgnn-gradio.git
cd analysisgnn-gradio

# Create a virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run the app
python app.py
```

The app will be available at `http://localhost:7860`.

### Hugging Face Spaces

This app is designed to run on Hugging Face Spaces. Simply deploy it as a Gradio Space.

## Usage

1. **Upload a MusicXML file** using the file upload button
2. **Select analysis tasks** you want to perform (cadence, key, harmony, etc.)
3. **Click "Analyze Score"** to run the inference
4. **View results**:
   - Score visualization (rendered image)
   - Analysis results table (note-level predictions)
5. **Download results** as CSV if needed

### Score Rendering Backend

The interface first tries to render MusicXML scores with Partitura. If neither MuseScore nor LilyPond is available to that backend, the app mirrors the [manoskary/weavemuse](https://github.com/manoskary/weavemuse) approach: it automatically downloads and extracts the MuseScore AppImage (stored under `./artifacts/musescore/`) and calls it headlessly (`QT_QPA_PLATFORM=offscreen`). You can override the binary by setting `MUSESCORE_BIN=/path/to/mscore` before launching the app.

## Model

The app uses a pre-trained AnalysisGNN model that is automatically downloaded from Weights & Biases. The model is cached in the `./artifacts/` folder to avoid re-downloading.

## Dependencies

- `analysisgnn`: Core music analysis library
- `gradio`: Web interface framework
- `partitura`: Music processing library
- `torch`: Deep learning framework
- `pandas`: Data manipulation
- See `requirements.txt` for the complete list

## Citation

If you use this interface or AnalysisGNN in your research, please cite:

```bibtex
@inproceedings{karystinaios2025analysisgnn,
  title={AnalysisGNN: A Unified Music Analysis Model with Graph Neural Networks},
  author={Karystinaios, Emmanouil and Hentschel, Johannes and Neuwirth, Markus and Widmer, Gerhard},
  booktitle={International Symposium on Computer Music Multidisciplinary Research (CMMR)},
  year={2025}
}
```

## License

MIT License - See the [AnalysisGNN repository](https://github.com/manoskary/analysisGNN) for more details.

## Acknowledgments

- Built with [Gradio](https://gradio.app/)
- Powered by [AnalysisGNN](https://github.com/manoskary/analysisGNN)
- Music processing with [Partitura](https://github.com/CPJKU/partitura)
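## Appendix: Overriding the MuseScore Binary

The `MUSESCORE_BIN` override described in the Score Rendering Backend section can be set from the shell before launching the app. A minimal sketch (the binary path below is an example; point it at wherever MuseScore is installed on your system):

```shell
# Use a pre-installed MuseScore binary instead of the auto-downloaded AppImage
# (path is an example; adjust to your installation)
export MUSESCORE_BIN=/usr/bin/mscore

# Force Qt into headless mode so rendering works on servers without a display
export QT_QPA_PLATFORM=offscreen
```

With these variables exported, start the app as usual with `python app.py`; if `MUSESCORE_BIN` is unset, the app falls back to downloading the AppImage into `./artifacts/musescore/`.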