---
license: cc-by-4.0
---
# 🐾 Piccolo-4x7b 🐾
**In loving memory of my dog Klaus (Piccolo)**
_~ Piccolo (Italian): the little one ~_
![piccolo.png](piccolo.png)
# Code Example
An inference and evaluation Colab notebook is available [here](https://colab.research.google.com/drive/1ZqLNvVvtFHC_4v2CgcMVh7pP9Fvx0SbI?usp=sharing).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_response(prompt):
    """Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

model_id = "macadeliccc/piccolo-4x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

prompt = "What is the best way to train Cane Corsos?"
print("Response:")
print(generate_response(prompt), "\n")
```
The model produces quality code, math, and logical reasoning. Try whatever prompts you can think of.
# 🏆 Evaluations
| Tasks |Version|Filter|n-shot| Metric |Value |±|Stderr|
|----------|-------|------|-----:|--------|-----:|---|-----:|
|arc_easy |Yaml |none | 0|acc |0.8371|± |0.0076|
| | |none | 0|acc_norm|0.8064|± |0.0081|
|boolq |Yaml |none | 0|acc |0.8685|± |0.0059|
|hellaswag |Yaml |none | 0|acc |0.6687|± |0.0047|
| | |none | 0|acc_norm|0.8416|± |0.0036|
|openbookqa|Yaml |none | 0|acc |0.3580|± |0.0215|
| | |none | 0|acc_norm|0.4740|± |0.0224|
|piqa |Yaml |none | 0|acc |0.8243|± |0.0089|
| | |none | 0|acc_norm|0.8308|± |0.0087|
|winogrande|Yaml |none | 0|acc |0.7609|± |0.0120|
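The table above matches the output format of EleutherAI's lm-evaluation-harness. As a hedged sketch (the exact harness version and flags used for these scores are not stated in this card), the zero-shot results could be reproduced with an invocation along these lines:

```shell
# Install the evaluation harness (assumed dependency, not pinned by this card)
pip install lm-eval

# Run the six zero-shot tasks reported in the table above
lm_eval --model hf \
    --model_args pretrained=macadeliccc/piccolo-4x7b,load_in_4bit=True \
    --tasks arc_easy,boolq,hellaswag,openbookqa,piqa,winogrande \
    --num_fewshot 0 \
    --batch_size auto
```

Note that 4-bit loading (as in the inference example above) can shift accuracy slightly relative to full-precision evaluation, so small deviations from the reported values are expected.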