Instructions for using ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only")
model = AutoModelForCausalLM.from_pretrained("ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker:

```shell
docker model run hf.co/ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only
```
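The same OpenAI-compatible endpoint can be called from Python. Below is a minimal sketch using only the standard library, assuming the vLLM server above is running on localhost:8000; the helper names `build_chat_payload` and `chat` are illustrative, not part of vLLM's API.

```python
import json
import urllib.request

MODEL_ID = "ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only"

def build_chat_payload(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat(url: str, payload: dict) -> dict:
    """POST the payload to the server and return the parsed JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_chat_payload(MODEL_ID, "What is the capital of France?")
print(json.dumps(payload, indent=2))

# With the server running, send the request like so:
# response = chat("http://localhost:8000/v1/chat/completions", payload)
# print(response["choices"][0]["message"]["content"])
```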
- SGLang
How to use ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- Docker Model Runner
How to use ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only with Docker Model Runner:
```shell
docker model run hf.co/ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only
```
Model Card for ftajwar/paprika_Meta-Llama-3.1-8B-Instruct_SFT_only
This is a saved checkpoint from supervised fine-tuning of a meta-llama/Meta-Llama-3.1-8B-Instruct model for our paper "Training a Generally Curious Agent". In that work, we introduce PAPRIKA, a fine-tuning framework for teaching large language models (LLMs) strategic exploration.
NOTE:
PAPRIKA's training process consists of two stages: supervised fine-tuning (SFT), followed by preference fine-tuning with the RPO objective on top of the SFT checkpoint.
We previously released the final checkpoint (after SFT followed by RPO fine-tuning). In response to community requests, we are also releasing the checkpoint obtained after the first stage, i.e., the checkpoint before running RPO.
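For orientation, one common formulation of an RPO-style objective pairs a DPO-style preference loss with an NLL regularizer on the preferred trajectory. The sketch below follows that common formulation and is not necessarily the exact objective used in the paper:

```latex
\mathcal{L}(\theta)
= -\log \sigma\!\left(
    \beta \log \frac{\pi_\theta(\tau_w \mid x)}{\pi_{\mathrm{ref}}(\tau_w \mid x)}
  - \beta \log \frac{\pi_\theta(\tau_l \mid x)}{\pi_{\mathrm{ref}}(\tau_l \mid x)}
  \right)
  \;-\; \alpha \, \frac{\log \pi_\theta(\tau_w \mid x)}{\lvert \tau_w \rvert}
```

Here $\tau_w$ and $\tau_l$ denote the preferred and dispreferred trajectories for prompt $x$, $\pi_{\mathrm{ref}}$ is the frozen SFT reference policy, and $\beta$, $\alpha$ are scaling hyperparameters.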
Model Details
Model Description
This is the model card of a meta-llama/Meta-Llama-3.1-8B-Instruct model fine-tuned using PAPRIKA.
- Finetuned from model: meta-llama/Meta-Llama-3.1-8B-Instruct
Model Sources
- Repository: Official Code Release for the paper "Training a Generally Curious Agent"
- Paper: Training a Generally Curious Agent
- Project Website: Project Website
Training Details
Training Data
Our training dataset for supervised fine-tuning can be found here: SFT dataset
Similarly, the training dataset for preference fine-tuning can be found here: Preference learning dataset
Training Procedure
The attached Wandb link shows the training loss per gradient step during both supervised fine-tuning and preference fine-tuning.
Training Hyperparameters
For supervised fine-tuning, we use the AdamW optimizer with learning rate 1e-6, batch size 32, cosine annealing learning rate decay with warmup ratio 0.04, and we train on a total of 17,181 trajectories.
For preference fine-tuning, we use the RPO objective, AdamW optimizer with learning rate 2e-7, batch size 32, cosine annealing learning rate decay with warmup ratio 0.04, and we train on a total of 5,260 (preferred, dispreferred) trajectory pairs.
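To illustrate how these settings combine, here is a minimal pure-Python sketch of a cosine-annealed learning-rate schedule with linear warmup using the SFT numbers above. The single-epoch step count and the helper name `lr_at_step` are our assumptions for illustration, not details from the paper:

```python
import math

PEAK_LR = 1e-6           # SFT learning rate from above
BATCH_SIZE = 32
NUM_TRAJECTORIES = 17181
WARMUP_RATIO = 0.04

# Assuming a single pass over the data (one epoch):
total_steps = math.ceil(NUM_TRAJECTORIES / BATCH_SIZE)   # 537 optimizer steps
warmup_steps = int(WARMUP_RATIO * total_steps)           # 21 warmup steps

def lr_at_step(step: int, peak_lr: float, warmup: int, total: int) -> float:
    """Linear warmup to peak_lr, then cosine decay to zero."""
    if step < warmup:
        return peak_lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

print(lr_at_step(warmup_steps, PEAK_LR, warmup_steps, total_steps))  # peak LR
```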
Hardware
This model was fine-tuned using 8 NVIDIA L40S GPUs.
Citation
BibTeX:
```bibtex
@misc{tajwar2025traininggenerallycuriousagent,
  title={Training a Generally Curious Agent},
  author={Fahim Tajwar and Yiding Jiang and Abitha Thankaraj and Sumaita Sadia Rahman and J Zico Kolter and Jeff Schneider and Ruslan Salakhutdinov},
  year={2025},
  eprint={2502.17543},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2502.17543},
}
```