---
library_name: transformers
license: apache-2.0
datasets:
- argilla/dpo-mix-7k
language:
- en
---
# Phi2-PRO
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/QEQjVaXVqAjw4eSCAMnkv.jpeg)
*phi2-pro* is a fine-tuned version of **[microsoft/phi-2](https://huggingface.co/microsoft/phi-2)** on the **[argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k)**
preference dataset using *Odds Ratio Preference Optimization (ORPO)*. The model was trained for 1 epoch.
## πŸ’₯ LazyORPO
This model was trained using **[LazyORPO](https://colab.research.google.com/drive/19ci5XIcJDxDVPY2xC1ftZ5z1kc2ah_rx?usp=sharing)**, a Colab notebook that makes the training
process much easier. It is based on the [ORPO paper](https://huggingface.co/papers/2403.07691).
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/2h3guPdFocisjFClFr0Kh.png)
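The exact training configuration lives in the LazyORPO notebook linked above. As a rough illustration of what an ORPO run over **argilla/dpo-mix-7k** looks like, here is a minimal sketch using TRL's `ORPOTrainer`. The hyperparameters are illustrative placeholders, not the settings used for *phi2-pro*, and argument names vary slightly across TRL versions.
```python
# Minimal ORPO training sketch (illustrative; not the exact LazyORPO setup).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# dpo-mix-7k stores chosen/rejected as chat-message lists; depending on your
# TRL version you may need to map them to plain prompt/chosen/rejected strings.
dataset = load_dataset("argilla/dpo-mix-7k", split="train")

config = ORPOConfig(
    output_dir="phi2-orpo",
    beta=0.1,                       # weight of the odds-ratio term (lambda in the paper)
    num_train_epochs=1,             # the card states phi2-pro was trained for 1 epoch
    per_device_train_batch_size=2,  # placeholder value
    learning_rate=5e-6,             # placeholder value
    max_length=1024,
    max_prompt_length=512,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```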
#### 🎭 What is ORPO?
Odds Ratio Preference Optimization (ORPO) proposes a new method to train LLMs by combining SFT and alignment into a single objective (loss function), achieving state-of-the-art results.
Some highlights of this technique are listed below (a sketch of the loss follows the list):
* 🧠 Reference model-free β†’ memory friendly
* πŸ”„ Replaces SFT+DPO/PPO with 1 single method (ORPO)
* πŸ† ORPO Outperforms SFT, SFT+DPO on PHI-2, Llama 2, and Mistral
* πŸ“Š Mistral ORPO achieves 12.20% on AlpacaEval2.0, 66.19% on IFEval, and 7.32 on MT-Bench out Hugging Face Zephyr Beta
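Concretely, ORPO adds an odds-ratio penalty to the usual SFT negative log-likelihood. The sketch below follows the paper's formulation; `chosen_logps`/`rejected_logps` are hypothetical names for length-averaged sequence log-probabilities under the policy.
```python
import torch
import torch.nn.functional as F

def odds_ratio_loss(chosen_logps, rejected_logps, beta=0.1):
    """ORPO's preference term: -beta * log sigmoid(log-odds ratio).

    odds(y) = p(y) / (1 - p(y)), so
    log_odds(chosen) - log_odds(rejected)
      = (log p_w - log p_l) - (log(1 - p_w) - log(1 - p_l)).
    Inputs are length-averaged log-probs, so exp(.) stays in (0, 1).
    """
    log_odds = (chosen_logps - rejected_logps) - (
        torch.log1p(-torch.exp(chosen_logps))
        - torch.log1p(-torch.exp(rejected_logps))
    )
    return -beta * F.logsigmoid(log_odds).mean()

# Full ORPO objective (schematically): nll_on_chosen + odds_ratio_loss(...)
```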
#### πŸ’» Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

# trust_remote_code is needed for Phi-2's custom modeling code
# on older transformers versions.
model = AutoModelForCausalLM.from_pretrained(
    "abideen/phi2-pro", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("abideen/phi2-pro", trust_remote_code=True)

inputs = tokenizer(
    "Write a detailed analogy between mathematics and a lighthouse.",
    return_tensors="pt",
    return_attention_mask=False,
)

# Greedy generation up to 200 tokens, then decode back to text.
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
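For less deterministic output, the same `inputs` can be run with sampling; the parameters below are illustrative, not tuned for this model:
```python
# Sampled generation (temperature/top_p values are placeholders).
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```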
## πŸ† Evaluation
*Coming soon.*