Instructions for using QuickRead/PPO_training with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use QuickRead/PPO_training with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("QuickRead/PPO_training")
model = AutoModelForSeq2SeqLM.from_pretrained("QuickRead/PPO_training")
```

- Notebooks
- Google Colab
- Kaggle
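Once loaded, the checkpoint can be used for seq2seq generation. A minimal sketch, assuming the model is a summarizer (the `summarize` helper, its example generation parameters, and the summarization framing are illustrative assumptions, not part of the repository):

```python
def summarize(text: str, model_name: str = "QuickRead/PPO_training") -> str:
    """Generate a summary with a seq2seq checkpoint.

    Note: downloads the ~2.3 GB checkpoint on first use. Import is kept
    inside the function so the sketch stays cheap to define.
    """
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    # Tokenize, generate with beam search, and decode back to text.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Generation settings such as `max_new_tokens` and `num_beams` are placeholders; tune them for your inputs.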
add model
pytorch_model.bin CHANGED (+1 -1)

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:68ae13996f1be3d35b59a2fd5ab4e3f833050ad84d2ba7290cf6254860316932
 size 2283825905
```
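The diff above changes a Git LFS pointer file, not the weights themselves: large files are tracked by a small text stub recording the object's hash and size. A minimal sketch of reading such a stub (the `parse_lfs_pointer` helper is illustrative, not part of any tool):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key/value dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", e.g. "size 2283825905".
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer contents from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:68ae13996f1be3d35b59a2fd5ab4e3f833050ad84d2ba7290cf6254860316932
size 2283825905
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # size in bytes of the tracked pytorch_model.bin (~2.3 GB)
```

Cloning without LFS installed leaves only this stub on disk, which is why the checked-in file is a few bytes while the real checkpoint is ~2.3 GB.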