Instructions for using openbmb/UltraRM-13b with libraries, inference providers, notebooks, and local apps.
- Libraries
  - Transformers

How to use openbmb/UltraRM-13b with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("openbmb/UltraRM-13b")
model = AutoModel.from_pretrained("openbmb/UltraRM-13b")
```

- Notebooks
  - Google Colab
  - Kaggle
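A reward model like UltraRM typically maps a tokenized prompt–response pair to a single scalar score, usually by applying a small regression head to the backbone's last-token hidden state. The snippet below is a minimal toy sketch of that scoring pattern using a stand-in embedding backbone; `ToyRewardModel` and its sizes are illustrative assumptions, not the actual UltraRM-13b code, whose real backbone is a 13B LLaMA transformer.

```python
import torch
import torch.nn as nn

# Toy stand-in for a reward model: an embedding "backbone" plus a scalar
# regression head over the last token. This head-over-last-token pattern
# is an assumption based on common reward-model designs, not the exact
# UltraRM-13b implementation.
class ToyRewardModel(nn.Module):
    def __init__(self, vocab_size=100, hidden_size=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.regression_head = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, input_ids):
        hidden = self.embed(input_ids)            # (batch, seq, hidden)
        last = hidden[:, -1, :]                   # last-token representation
        return self.regression_head(last).squeeze(-1)  # one scalar per sequence

model = ToyRewardModel()
ids = torch.randint(0, 100, (2, 8))  # two fake tokenized sequences
rewards = model(ids)
print(rewards.shape)  # torch.Size([2])
```

With the real checkpoint, the same idea applies: run the tokenized conversation through the model and read off a scalar reward per sequence, higher meaning a preferred response.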
Code to create this reward model? (#3)
Opened by RSchaefferAtGoogle
Is the code to create/train this reward model publicly available somewhere?
I couldn't find it in this GitHub repo (https://github.com/thunlp/UltraChat), but maybe I was looking in the wrong place?
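While the exact training script doesn't appear to be in that repo, reward models of this kind are commonly trained with a Bradley–Terry style pairwise ranking loss: the score of the chosen completion should exceed the score of the rejected one. The sketch below shows that standard objective; whether UltraRM used precisely this loss is an assumption drawn from common RLHF practice, not from a released training script.

```python
import torch
import torch.nn.functional as F

# Standard pairwise ranking loss for reward-model training:
# -log(sigmoid(r_chosen - r_rejected)), averaged over the batch.
# This is the common RLHF objective, assumed (not confirmed) to
# match what was used for UltraRM.
def pairwise_ranking_loss(chosen_rewards, rejected_rewards):
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Hypothetical scalar rewards for two (chosen, rejected) pairs.
chosen = torch.tensor([2.0, 1.5])
rejected = torch.tensor([0.5, 1.0])
loss = pairwise_ranking_loss(chosen, rejected)
```

The loss shrinks as the margin between chosen and rejected scores grows, which is what pushes the model toward ranking preferred responses higher.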