Model Card for afsagag/bart-spotify-preferences

This is a fine-tuned BART-large model that converts free-text music prompts into Spotify audio-feature preferences (e.g., Energy, Valence, Release Year). It was fine-tuned on a ~1k-sample dataset on Kaggle.

Training Details

  • Dataset: PromptsToSpotifyFeatures-v2 (~1k samples)
  • Framework: Hugging Face Transformers
  • Hyperparameters:
    • Learning rate: 5e-05
    • Epochs: 7
    • Batch size: 4 (with gradient accumulation steps=4)
    • FP16: True
  • Metrics: MAE, RMSE, per-feature correlation
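The hyperparameters above map onto Hugging Face `Seq2SeqTrainingArguments` roughly as sketched below; the output directory and the evaluation-related settings are illustrative assumptions, not taken from the card.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the training configuration listed above; output_dir and
# predict_with_generate are assumptions, not stated in the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-spotify-preferences",  # hypothetical path
    learning_rate=5e-5,
    num_train_epochs=7,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,  # effective batch size of 16
    fp16=True,                      # mixed-precision training
    predict_with_generate=True,     # decode outputs so MAE/RMSE can be computed
)
```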

Usage

from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("afsagag/bart-spotify-preferences")
tokenizer = BartTokenizer.from_pretrained("afsagag/bart-spotify-preferences")

prompt = "music for a supervillain"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=256,
    num_beams=1,
    length_penalty=0.6,       # only affects beam search; no effect with num_beams=1
    no_repeat_ngram_size=2,
    early_stopping=True,      # likewise only meaningful with num_beams > 1
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Notes

  • Fine-tuned with full model weights (no LoRA adapters).
  • Outputs JSON-like dictionaries; post-processing may be needed for malformed JSON.
  • Trained on a Kaggle T4 GPU (~16 GB VRAM).
  • Weights: ~0.4B parameters, F32 tensors, stored in safetensors format.
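Since the model can emit malformed JSON, a best-effort parser helps recover usable values. The sketch below is an illustration, not part of the model's tooling; the feature names (energy, valence, release_year) are assumptions about the label format.

```python
import json
import re


def parse_preferences(text):
    """Best-effort parse of the model's JSON-like output into a dict.

    Tries strict JSON first, then falls back to extracting key/number
    pairs, tolerating single quotes, missing braces, and trailing commas.
    """
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Fallback: pull out "key: number" pairs and normalize the key names.
    pairs = re.findall(r"['\"]?([A-Za-z_ ]+)['\"]?\s*:\s*(-?\d+(?:\.\d+)?)", text)
    return {key.strip().lower().replace(" ", "_"): float(value) for key, value in pairs}


# Example with a slightly malformed (single-quoted, trailing-comma) output:
prefs = parse_preferences("{'Energy': 0.9, 'Valence': 0.2, 'Release Year': 1985,}")
# prefs == {"energy": 0.9, "valence": 0.2, "release_year": 1985.0}
```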