Omartificial-Intelligence-Space committed on
Commit 0772ca3 (verified) · 1 parent: 61d04f1

Update README.md

Files changed (1): README.md (+17 −6)
README.md CHANGED
@@ -15,15 +15,26 @@ base_model:
  pipeline_tag: sentence-similarity
  ---

- # Zarra Arabic Static Embedding
-
- This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of a Sentence Transformer.
-
- It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU.
-
- It is designed for applications where computational resources are limited or where real-time performance is critical.
- Model2Vec models are the smallest, fastest, and most performant static embedders available.
- The distilled models can be up to 50 times smaller and 500 times faster than traditional Sentence Transformers.
+ # Zarra: Arabic Static Embedding Model
+
+ **Zarra** is a static embedding model built with the Model2Vec distillation framework.
+ It is a distilled version of a Sentence Transformer, specifically optimized for the Arabic language.
+ Unlike traditional transformer-based models, Zarra produces static embeddings, enabling ultra-fast inference on both CPU and GPU, making it ideal for resource-constrained environments and real-time applications.
+
+ ## Why Zarra?
+
+ ⚡ Exceptional Speed: delivers embeddings up to 500× faster than traditional Sentence Transformers.
+
+ 🧠 Compact & Efficient: up to 50× smaller in size, allowing easy deployment on edge devices.
+
+ 🧰 Versatile: well-suited for search, clustering, classification, deduplication, and more.
+
+ 🌍 Arabic-First: trained on high-quality Arabic data, ensuring relevance and performance across a range of Arabic NLP tasks.
+
+ ## About Model2Vec
+
+ The Model2Vec distillation technique transfers knowledge from large transformer models into lightweight static embedding spaces, preserving semantic quality while dramatically improving speed and efficiency.
+ Zarra represents the best of both worlds: the semantic power of transformers and the speed and simplicity of static vectors.
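
The idea described above can be illustrated with a minimal sketch of static-embedding inference in pure NumPy: each token maps to a precomputed vector, and a sentence embedding is just a mean over those vectors, with no transformer forward pass. The tiny vocabulary and random vectors here are made up for illustration and are not Zarra's actual distilled vectors.

```python
import numpy as np

# Hypothetical static embedding table: one precomputed vector per token.
# (Random for illustration; a real Model2Vec model ships distilled vectors.)
rng = np.random.default_rng(0)
vocab = {"مرحبا": 0, "بالعالم": 1, "أهلا": 2}
vectors = rng.normal(size=(len(vocab), 4))  # (vocab_size, embedding_dim)

def encode(sentence: str) -> np.ndarray:
    """Mean-pool the static vectors of the known tokens in `sentence`."""
    ids = [vocab[t] for t in sentence.split() if t in vocab]
    if not ids:
        return np.zeros(vectors.shape[1])
    return vectors[ids].mean(axis=0)

emb = encode("مرحبا بالعالم")
print(emb.shape)  # (4,)
```

With the actual model, the model2vec library reduces this to `StaticModel.from_pretrained(...)` followed by `model.encode([...])`, as shown in the Installation and usage sections below.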

  ## Installation