---
base_model:
  - nomic-ai/nomic-embed-text-v1.5
language:
  - en
model_creator: Nomic
model_name: nomic-embed-text-v1.5
model_type: bert
quantized_by: s3dev-ai
tags:
  - sentence-similarity
---

Overview

This page provides various quantisations of the base model in GGUF format.

  • nomic-ai/nomic-embed-text-v1.5

Model Description

For a full model description, please refer to the base model's card.

How are the GGUF files created?

After cloning the author's original base model repository, llama.cpp is used to convert the model to a GGML-compatible GGUF file, using f32 as the output type to preserve the original fidelity. The model is converted unaltered, unless otherwise stated.

Finally, for each quantisation level, llama.cpp's llama-quantize executable is run using the F32 GGUF file as the source.
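The two steps above can be sketched as follows. The repository paths, build layout, and output filenames are illustrative assumptions, not the author's exact commands:

```shell
# Assumed layout: the base model cloned alongside a built llama.cpp checkout.
git clone https://huggingface.co/nomic-ai/nomic-embed-text-v1.5

# Step 1: convert the Hugging Face model to GGUF at f32 output type,
# preserving the original fidelity.
python llama.cpp/convert_hf_to_gguf.py nomic-embed-text-v1.5 \
    --outtype f32 \
    --outfile nomic-embed-text-v1.5.f32.gguf

# Step 2: quantise from the F32 GGUF source; repeated once per
# quantisation level (Q4_K_M shown here as an example).
./llama.cpp/build/bin/llama-quantize \
    nomic-embed-text-v1.5.f32.gguf \
    nomic-embed-text-v1.5.Q4_K_M.gguf \
    Q4_K_M
```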

Quantisations

To help visualise the difference in model quantisation (i.e. level of retained fidelity), the image below shows the cosine similarity scores for each quantisation, baselined against the 32-bit base model. It can be observed that lower fidelity yields a wider scatter in scores, relative to the 32-bit model.

The underlying base dataset was sampled to 1000 records with an unbiased similarity score distribution. For each quantisation level of this model, embeddings were created for the sentence1 and sentence2 fields. Finally, a cosine similarity score was calculated between the two embeddings and plotted on the graph.
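The scoring step can be sketched as below. This is a minimal illustration of the cosine similarity calculation only; the random vectors stand in for the sentence1/sentence2 embeddings a quantised model would produce (the 768 dimensions assumed here match nomic-embed-text-v1.5's embedding size), and the function name is hypothetical:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random vectors stand in for the sentence1 and sentence2 embeddings
# produced by a given quantisation of the model.
rng = np.random.default_rng(0)
emb1 = rng.standard_normal(768)
emb2 = rng.standard_normal(768)

score = cosine_similarity(emb1, emb2)
print(f"cosine similarity: {score:.4f}")
```

In the evaluation described above, each point on the graph is such a score for one record, computed once per quantisation level and compared against the 32-bit baseline.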

Quantisation Levels