
Psychology-Alpaca-RM

  • PEFT adapter layers for a reward model based on decapoda-research/llama-7b-hf.
  • Trained on a small subset (110 data points) of samhog/cgpt-pairs, a dataset of 10K prompts, each paired with two answers (one 'good', one 'bad').
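A reward model trained on good/bad answer pairs is typically fit with a Bradley–Terry-style pairwise loss that pushes the reward of the 'good' answer above that of the 'bad' one. As a minimal sketch of that objective (the exact training setup for this adapter is not documented here, so this is illustrative only):

```python
import math

def pairwise_rm_loss(reward_good: float, reward_bad: float) -> float:
    """Pairwise preference loss: -log(sigmoid(r_good - r_bad)).

    The loss shrinks as the margin between the 'good' and 'bad'
    rewards grows, so minimizing it ranks good answers higher.
    """
    margin = reward_good - reward_bad
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A wider good-over-bad margin yields a smaller loss:
loose = pairwise_rm_loss(0.5, 0.0)
tight = pairwise_rm_loss(2.0, 0.0)
```

In practice the two scalar rewards come from a forward pass of the base model plus these adapter layers over each of the two answers.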