---
license: cc-by-4.0
---
# 🐾 Piccolo-4x7b 🐾


  **In loving memory of my dog Klaus (Piccolo)**
    
  _~ Piccolo (Italian): the little one ~_

 ![piccolo.png](piccolo.png)


# Code Example

Inference and Evaluation colab available [here](https://colab.research.google.com/drive/1ZqLNvVvtFHC_4v2CgcMVh7pP9Fvx0SbI?usp=sharing)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "macadeliccc/piccolo-4x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# 4-bit loading requires the bitsandbytes package and a CUDA GPU
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    # Move inputs to the same device as the quantized model
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

prompt = "What is the best way to train Cane Corsos?"

print("Response:")
print(generate_response(prompt), "\n")
```

The model is capable of quality code, math, and logical reasoning; try whatever questions come to mind.

# 🏆 Evaluations

|  Tasks   |Version|Filter|n-shot| Metric |Value |   |Stderr|
|----------|-------|------|-----:|--------|-----:|---|-----:|
|arc_easy  |Yaml   |none  |     0|acc     |0.8371|±  |0.0076|
|          |       |none  |     0|acc_norm|0.8064|±  |0.0081|
|boolq     |Yaml   |none  |     0|acc     |0.8685|±  |0.0059|
|hellaswag |Yaml   |none  |     0|acc     |0.6687|±  |0.0047|
|          |       |none  |     0|acc_norm|0.8416|±  |0.0036|
|openbookqa|Yaml   |none  |     0|acc     |0.3580|±  |0.0215|
|          |       |none  |     0|acc_norm|0.4740|±  |0.0224|
|piqa      |Yaml   |none  |     0|acc     |0.8243|±  |0.0089|
|          |       |none  |     0|acc_norm|0.8308|±  |0.0087|
|winogrande|Yaml   |none  |     0|acc     |0.7609|±  |0.0120|