---
license: cc-by-nc-4.0
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/LjO8no5EzagA9qWdtYKxG.png)

Experimental Athena v3 model. Use the Alpaca prompt format. Suitable for RP, ERP, and general use.

<!-- description start -->
## Description

<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->

This repo contains fp16 files of Athena-V3.

[GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-GGUF)

[GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-GPTQ)

<!-- [exl2 - by AzureBlack](https://huggingface.co/AzureBlack/Athena-v2-6.0bit-exl2) -->

[AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-AWQ)

[fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3)

<!-- [GGUF - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3-GGUF) -->
[GGUF (old) - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3-GGUF)

## Ratings:

Note: I have permission from all users to upload their ratings; I don't screenshot random reviews without asking if I can put them here!

https://snombler.neocities.org/logs#athenav3

<!-- description end -->
<!-- description start -->
## Models and loras used

- Athena-v2
- migtissera/Synthia-13B-v1.2
- The-Face-Of-Goonery/Huginn-13b-FP16
- PygmalionAI/pygmalion-2-13b
- The-Face-Of-Goonery/LegerDemain-FP16
- chargoddard/storytime-13b
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- zattio770/120-Days-of-LORA-v2-13B
```
Loras: [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT(0.65) + zattio770/120-Days-of-LORA-v2-13B(0.35)](0.3) to the final model

+ [Athena-v2(0.70) + migtissera/Synthia-13B-v1.2(0.3)](0.5)
+ [The-Face-Of-Goonery/Huginn-13b-FP16(0.85) + PygmalionAI/pygmalion-2-13b(0.15)](0.40)
+ [The-Face-Of-Goonery/LegerDemain-FP16(0.3) + chargoddard/storytime-13b(0.7)](0.10)
```
<!-- description end -->
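
The bracketed recipe above describes weighted linear combinations of checkpoints, e.g. `[A(0.7) + B(0.3)]` means 70% of A's weights blended with 30% of B's. As a rough illustration only (this is not the actual merge script used for Athena-v3; the function and variable names are invented), the core operation is a tensor-by-tensor weighted sum:

```python
# Hedged sketch of a linear checkpoint merge. In practice this would run
# over PyTorch state dicts; plain floats are used here so the idea is clear.

def linear_merge(state_a, state_b, weight_a, weight_b):
    """Blend two state dicts key-by-key: weight_a * A + weight_b * B.

    Assumes both dicts share the same keys and tensor shapes.
    """
    merged = {}
    for key in state_a:
        merged[key] = weight_a * state_a[key] + weight_b * state_b[key]
    return merged


# Illustrative use, mirroring [Athena-v2(0.70) + Synthia(0.3)]:
# merged = linear_merge(athena_v2_weights, synthia_weights, 0.70, 0.30)
```

Nested brackets in the recipe just repeat this step: an inner merge produces an intermediate checkpoint, which is then merged into the final model at the outer weight.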
<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

HUGE thanks to [Undi95](https://huggingface.co/Undi95) for doing the merging (the recipe was my idea; he did the merging).

To TheBloke: if you quantize this, please include [IkariDev](https://huggingface.co/IkariDev) and [Undi95](https://huggingface.co/Undi95) in all the credits/links to the creators.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_IkariDev__Athena-v3)

| Metric              | Value |
|---------------------|-------|
| Avg.                | 50.1  |
| ARC (25-shot)       | 61.69 |
| HellaSwag (10-shot) | 84.34 |
| MMLU (5-shot)       | 57.87 |
| TruthfulQA (0-shot) | 51.26 |
| Winogrande (5-shot) | 75.77 |
| GSM8K (5-shot)      | 11.6  |
| DROP (3-shot)       | 8.21  |