---
license: apache-2.0
---

# omega-coder-phi-1

omega-coder-phi-1 is a supervised fine-tuned (SFT) version of microsoft/phi-1, trained on a 1% slice of the deepmind/code_contests dataset (see Process below).

This model was made with [Phinetune]().

## Process

- Learning Rate: 1.41e-05
- Maximum Sequence Length: 2048
- Dataset: deepmind/code_contests
- Split: train[:1%]
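
For reference, below is a minimal sketch of an equivalent SFT run using the plain `transformers` Trainer. It is illustrative only: the actual Phinetune pipeline may differ, and the example formatting (problem description followed by the first reference solution), batch size, and epoch count are assumptions not stated in this card.

```python
# Illustrative SFT sketch using the hyperparameters listed above.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "microsoft/phi-1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # phi-1's tokenizer has no pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Same 1% slice of the training split as listed above
dataset = load_dataset("deepmind/code_contests", split="train[:1%]")

def to_text(example):
    # Assumed formatting: problem description followed by the first
    # reference solution, when one exists.
    solutions = example["solutions"]["solution"]
    target = solutions[0] if solutions else ""
    return {"text": example["description"] + "\n" + target}

def tokenize(example):
    # Truncate to the maximum sequence length listed above
    return tokenizer(example["text"], truncation=True, max_length=2048)

dataset = dataset.map(to_text, remove_columns=dataset.column_names)
dataset = dataset.map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="omega-coder-phi-1",
        learning_rate=1.41e-5,          # as listed above
        per_device_train_batch_size=1,  # assumption; not stated in this card
        num_train_epochs=1,             # assumption; not stated in this card
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```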

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_id = "samuelswandi/omega-coder-phi-1"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt
prompt = "Your example prompt here"

# Generate a response
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
outputs = generator(prompt, max_length=50, num_return_sequences=1)
print(outputs[0]["generated_text"])
```
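
Since phi-1 is a code model, prompts shaped like a function signature plus docstring tend to work well. The snippet below is an illustrative example reusing the `generator` pipeline from above:

```python
# Code-completion style prompt (illustrative)
prompt = (
    "def is_palindrome(s: str) -> bool:\n"
    '    """Return True if s reads the same forwards and backwards."""\n'
)
# Greedy decoding keeps the completion deterministic
outputs = generator(prompt, max_new_tokens=64, do_sample=False)
print(outputs[0]["generated_text"])
```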