# MusicNN-PyTorch
This is a PyTorch reimplementation of the MusicNN library for music audio tagging.
It contains the model architecture and converted weights from the original TensorFlow 1.x checkpoints.
## Supported Models

- **MTT_musicnn**: Trained on MagnaTagATune (50 tags); the default model
- **MSD_musicnn**: Trained on the Million Song Dataset (50 tags)
- **MSD_musicnn_big**: Larger version trained on MSD (512 filters)
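Under the hood, taggers like these score fixed-length spectrogram patches rather than whole files; the original musicnn works on roughly 3-second windows of audio resampled to 16 kHz (treat both numbers as assumptions for this sketch, not values read from this port). A minimal numpy sketch of splitting a waveform into such patches:

```python
import numpy as np

SR = 16000        # assumed sample rate (the original musicnn resamples to 16 kHz)
PATCH_SEC = 3.0   # assumed patch length in seconds
patch_len = int(SR * PATCH_SEC)  # 48000 samples per patch

# Stand-in for a 10-second mono clip (a real pipeline would load it with librosa).
audio = np.zeros(10 * SR, dtype=np.float32)

# Drop the ragged tail and stack the complete 3-second patches.
n = len(audio) // patch_len
patches = audio[: n * patch_len].reshape(n, patch_len)
print(patches.shape)  # (3, 48000)
```

Each row of `patches` would then be converted to a log-mel spectrogram and scored independently, with per-patch tag scores averaged over the clip.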
## Super Simple Usage (Hugging Face Transformers)

```python
from transformers import AutoModel

# Load the model (downloads automatically)
model = AutoModel.from_pretrained("oriyonay/musicnn-pytorch", trust_remote_code=True)

# Use the model
tags = model.predict_tags("your_audio.mp3", top_k=5)
print(f"Top 5 tags: {tags}")
```
## Embeddings (Optional)

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("oriyonay/musicnn-pytorch", trust_remote_code=True)

# Extract embeddings from any layer
emb = model.extract_embeddings("your_audio.mp3", layer="penultimate", pool="mean")
print(emb.shape)
```
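The `pool` argument presumably collapses the time axis of frame-level embeddings into a single clip-level vector. A minimal numpy sketch of what mean vs. max pooling would do (the 187×200 shape is made up for illustration, not read from this library):

```python
import numpy as np

# Illustrative frame-level embeddings: 187 time frames x 200 dims
# (shape is a stand-in, not this model's actual output size).
rng = np.random.default_rng(0)
frames = rng.random((187, 200)).astype(np.float32)

mean_emb = frames.mean(axis=0)  # "mean" pooling: average over time
max_emb = frames.max(axis=0)    # "max" pooling: per-dim maximum over time

print(mean_emb.shape, max_emb.shape)  # (200,) (200,)
```

Mean pooling summarizes the whole clip, while max pooling emphasizes the strongest activation of each dimension anywhere in the clip.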
## Colab Example

```python
# Install dependencies
!pip install transformers torch librosa soundfile

# Load with AutoModel
from transformers import AutoModel

model = AutoModel.from_pretrained("oriyonay/musicnn-pytorch", trust_remote_code=True)

# Use the model
tags = model.predict_tags("your_audio.mp3", top_k=5)
print(tags)
```
## Traditional Usage

If you prefer to download the code manually:

```python
from musicnn_torch import top_tags

# Get the top 5 tags for an audio file
tags = top_tags('path/to/audio.mp3', model='MTT_musicnn', topN=5)
print(tags)
```
## Installation

```bash
pip install transformers torch librosa soundfile
```
## Credits

Original implementation by Jordi Pons. PyTorch port by Gemini.