s3dev-ai committed on
Commit 0a03cc6 · verified · 1 Parent(s): c5fe5f7

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Instruct-2407-BF16.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Instruct-2407-F32.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Instruct-2407-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Instruct-2407-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Instruct-2407-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Instruct-2407-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Instruct-2407-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-Nemo-Instruct-2407-BF16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb983d76312e7128f7861a0d419c2d2a54875cefc8d21e4c1c0ef3def0eee3fd
+ size 24504280000
Mistral-Nemo-Instruct-2407-F32.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed124b2c2c16851c82033e046993107c20223d2745123f752b361b8e9f929a78
+ size 48999015360
Mistral-Nemo-Instruct-2407-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e51729f433ce7abbe36b9f02f6a8a068aff79d04ce0cea0b1a188aa02dcd7f3
+ size 4791051200
Mistral-Nemo-Instruct-2407-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4e03e3548c4cb944fbc164b1d871709a38ec8d7a4fa5c9103de64d321bdad7d
+ size 7477208000
Mistral-Nemo-Instruct-2407-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b98082957790026af17932324f32b94ca20e1ccc80bcd5cfc85aaa6421707dda
+ size 8727634880
Mistral-Nemo-Instruct-2407-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4af0e26d2971c9ccd5006a710a225652b99d0707bd7de0f7cb047e85123aa5fa
+ size 10056213440
Mistral-Nemo-Instruct-2407-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d219f9d8223c6f88df538d7e66749c114f391bf7212b0dbe0767928d8e7a4afd
+ size 13022372800
README.md ADDED
@@ -0,0 +1,77 @@
+ ---
+ base_model:
+ - mistralai/Mistral-Nemo-Instruct-2407
+ language:
+ - en
+ model_creator: Mistral AI
+ model_name: Mistral-Nemo-Instruct-2407
+ model_type: llama
+ quantized_by: s3dev-ai
+ tags:
+ - text-generation
+ ---
+
+ # Overview
+
+ This model repository provides various quantisations of the following [base model](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407), in GGUF format.
+ - mistralai/Mistral-Nemo-Instruct-2407
+
+ # Model Description
+
+ For a full model description, please refer to the [base model's](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) card.
+
+ This model and its subsequent quantisations have been converted directly from the author's base model, *unaltered*.
+
+ ## How are the GGUF files created?
+ After cloning the author's original base model repository, [`llama.cpp`](https://github.com/ggml-org/llama.cpp) is used to convert the model to GGUF format, using `--outtype=f32` to preserve the original model's 32-bit fidelity.
+
+ Finally, for each subsequent quantisation level, `llama.cpp`'s `llama-quantize` executable is called using the F32 GGUF file as the source file.
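
The two steps above can be sketched as follows. This is an illustrative sketch only, not the exact invocation used for this repository: the model directory, output paths, and the list of quantisation levels are assumptions drawn from the files published here.

```python
# Sketch of the convert-then-quantise pipeline described above.
# Paths and the quantisation list are illustrative assumptions.
MODEL = "Mistral-Nemo-Instruct-2407"
F32_GGUF = f"{MODEL}-F32.gguf"
QUANT_LEVELS = ["Q2_K", "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"]

def convert_command(model_dir: str) -> list[str]:
    """Build the llama.cpp HF -> GGUF conversion command (F32 output)."""
    return ["python", "convert_hf_to_gguf.py", model_dir,
            "--outtype", "f32", "--outfile", F32_GGUF]

def quantize_command(level: str) -> list[str]:
    """Build a llama-quantize command, using the F32 file as the source."""
    return ["llama-quantize", F32_GGUF, f"{MODEL}-{level}.gguf", level]

if __name__ == "__main__":
    # Print the full pipeline, one command per line.
    print(" ".join(convert_command(MODEL)))
    for level in QUANT_LEVELS:
        print(" ".join(quantize_command(level)))
```

Quantising every level from the single F32 source (rather than chaining quantisations) keeps each output a direct derivative of the highest-fidelity file.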
+
+ # Quantisation
+ The purpose of this repository is to provide *unaltered* quantisations of the author's base model. This section is designed to help the user visualise the differences between quantisation levels, in an effort to assist in model (quantisation) selection.
+
+ ## Comparison Statistics
+ To aid a user in model/quantisation selection, the team has created the following statistics specifically for comparing the similarity scores across quantisation runs.
+
+ The dataset against which each run was conducted is composed of 175 question/answer pairs, divided amongst 7 topics, specifically designed to test a quantisation's processing ability. The test dataset was created by Mistral Large (via [Le Chat](https://chat.mistral.ai/chat)) using prompts explicitly stating the requirement for the question/answer pairs to be designed for Mistral model quantisation testing.
+
+ The similarity scores used by these statistics were calculated as the cosine similarity between the embedding of the 'gold standard' answer provided in the dataset and the embedding of the response from the quantised model. The embedding model used in these tests is [all-MiniLM-L6-v2 Q8_0](https://huggingface.co/s3dev-ai/all-MiniLM-L6-v2-gguf). We also plan to repeat this test using the [embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) model to determine whether the results can be improved.
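
The scoring step reduces to a standard cosine similarity between two embedding vectors. A minimal sketch in plain Python (not the actual test harness, which uses the embedding model above):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical directions score 1.0; orthogonal directions score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

Because the score depends only on the angle between the two embeddings, it measures semantic agreement with the gold-standard answer independently of response length.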
+
+ ### Range
+ The range graph below illustrates how the range of similarity scores varies amongst the quantisation levels. Included in the range stats are the:
+
+ - Minimum scores
+ - Maximum scores
+ - Mean scores
+ - Score distribution (KDE)
+ - Outliers
+
+ <!-- Range image -->
+ <div align="center">
+ <img src="imgs/nemo-range.png" alt="Stats Graph: Range" width="90%">
+ </div>
+
+ ### Mean
+ The mean graph below illustrates how the mean similarity scores (when grouped by 'topic') vary amongst the quantisation levels.
+
+ <!-- Mean image -->
+ <div align="center">
+ <img src="imgs/nemo-mean.png" alt="Stats Graph: Mean" width="90%">
+ </div>
+
+ ### Standard Deviation
+ The standard deviation graph below illustrates how the spread of similarity scores varies amongst the quantisation levels, when grouped by the test dataset's 'topic' categories.
+
+ <!-- StdDev image -->
+ <div align="center">
+ <img src="imgs/nemo-stddev.png" alt="Stats Graph: StdDev" width="90%">
+ </div>
+
+ ### Kernel Density Estimate
+ The KDE graph below illustrates how the distribution of similarity scores varies amongst the quantisation levels.
+
+ <!-- KDE image -->
+ <div align="center">
+ <img src="imgs/nemo-kde.png" alt="Stats Graph: KDE" width="90%">
+ </div>
imgs/nemo-kde.png ADDED
imgs/nemo-mean.png ADDED
imgs/nemo-range.png ADDED
imgs/nemo-stddev.png ADDED
sha256/Mistral-Nemo-Instruct-2407-BF16.sha256 ADDED
@@ -0,0 +1 @@
+ bb983d76312e7128f7861a0d419c2d2a54875cefc8d21e4c1c0ef3def0eee3fd Mistral-Nemo-Instruct-2407-BF16.gguf
sha256/Mistral-Nemo-Instruct-2407-F32.sha256 ADDED
@@ -0,0 +1 @@
+ ed124b2c2c16851c82033e046993107c20223d2745123f752b361b8e9f929a78 Mistral-Nemo-Instruct-2407-F32.gguf
sha256/Mistral-Nemo-Instruct-2407-Q2_K.sha256 ADDED
@@ -0,0 +1 @@
+ 6e51729f433ce7abbe36b9f02f6a8a068aff79d04ce0cea0b1a188aa02dcd7f3 Mistral-Nemo-Instruct-2407-Q2_K.gguf
sha256/Mistral-Nemo-Instruct-2407-Q4_K_M.sha256 ADDED
@@ -0,0 +1 @@
+ f4e03e3548c4cb944fbc164b1d871709a38ec8d7a4fa5c9103de64d321bdad7d Mistral-Nemo-Instruct-2407-Q4_K_M.gguf
sha256/Mistral-Nemo-Instruct-2407-Q5_K_M.sha256 ADDED
@@ -0,0 +1 @@
+ b98082957790026af17932324f32b94ca20e1ccc80bcd5cfc85aaa6421707dda Mistral-Nemo-Instruct-2407-Q5_K_M.gguf
sha256/Mistral-Nemo-Instruct-2407-Q6_K.sha256 ADDED
@@ -0,0 +1 @@
+ 4af0e26d2971c9ccd5006a710a225652b99d0707bd7de0f7cb047e85123aa5fa Mistral-Nemo-Instruct-2407-Q6_K.gguf
sha256/Mistral-Nemo-Instruct-2407-Q8_0.sha256 ADDED
@@ -0,0 +1 @@
+ d219f9d8223c6f88df538d7e66749c114f391bf7212b0dbe0767928d8e7a4afd Mistral-Nemo-Instruct-2407-Q8_0.gguf