Upload README.md with huggingface_hub

README.md CHANGED

@@ -1,10 +1,10 @@
 ---
 base_model:
--
+- nomic-ai/nomic-embed-text-v1.5
 language:
 - en
-model_creator:
+model_creator: Nomic
-model_name:
+model_name: nomic-embed-text-v1.5
 model_type: bert
 quantized_by: s3dev-ai
 tags:
@@ -13,12 +13,12 @@ tags:
 
 # Overview
 
-This page provides various quantisations of the [base model](https://huggingface.co/
+This page provides various quantisations of the [base model](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5), in GGUF format.
--
+- nomic-ai/nomic-embed-text-v1.5
 
 # Model Description
 
-For a full model description, please refer to the [base model's](https://huggingface.co/
+For a full model description, please refer to the [base model's](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) card.
 
 ## How are the GGUF files created?
 After cloning the author's original base model repository, `llama.cpp` is used to convert the model to a GGML-compatible file, using `f32` as the output type, preserving the original fidelity. The model is converted *unaltered*, unless otherwise stated.
@@ -31,8 +31,7 @@ To help visualise the difference in model quantisation (i.e. level of retained f
 
 The underlying [base dataset](https://huggingface.co/datasets/sentence-transformers/stsb) was sampled to 1000 records with an unbiased similarity score distribution. Using the various quantisation levels of this model, embeddings were created for `sentence1` and `sentence2`. Finally, a cosine similarity score was calculated across the two embeddings and plotted on the graph.
 
-<!--
+<!-- Image alignment -->
 <div align="center">
-<img src="imgs/
+<img src="imgs/nomic.png" alt="Quantisation Levels" width="90%">
 </div>
-
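The conversion workflow the README describes (clone the base repository, convert to GGUF at `f32`, then quantise down) might look roughly like the following. This is a sketch, not taken from the commit: the script name, flags, paths, and the `Q4_K_M` target are assumptions based on current `llama.cpp` tooling.

```shell
# Clone the author's original base model repository (URL from the model card).
git clone https://huggingface.co/nomic-ai/nomic-embed-text-v1.5

# Convert to GGUF using f32 as the output type, preserving the original fidelity.
# convert_hf_to_gguf.py ships with llama.cpp; flags assumed from its current CLI.
python llama.cpp/convert_hf_to_gguf.py nomic-embed-text-v1.5 \
    --outtype f32 \
    --outfile nomic-embed-text-v1.5-f32.gguf

# Quantise the f32 file to a smaller level, e.g. Q4_K_M (illustrative choice).
llama.cpp/build/bin/llama-quantize \
    nomic-embed-text-v1.5-f32.gguf \
    nomic-embed-text-v1.5-Q4_K_M.gguf \
    Q4_K_M
```

Each quantisation level repeats only the last step, always starting from the same `f32` file, so the levels are directly comparable.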
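The comparison method described above — embedding `sentence1` and `sentence2`, then scoring the pair — reduces to a cosine similarity between two vectors. A minimal sketch with NumPy (the toy vectors are illustrative stand-ins, not real model embeddings):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for the sentence1 / sentence2 embeddings.
e1 = np.array([1.0, 0.0, 1.0])
e2 = np.array([1.0, 0.0, 0.0])
print(round(cosine_similarity(e1, e2), 4))  # → 0.7071
```

Running this per record across the 1000 sampled STS-B pairs, once per quantisation level, yields the score distributions plotted in the graph.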