# gte_L6_uniform
Lightweight sentence encoder created from alibaba-NLP/gte-multilingual-base via layer pruning + vocabulary pruning.
## Model Details

| Property | Value |
|---|---|
| Teacher | alibaba-NLP/gte-multilingual-base |
| Architecture | GTE-multilingual (pruned) |
| Hidden dim | 768 |
| Layers | 6 / 12 |
| Layer indices | [0, 2, 4, 7, 9, 11] |
| Strategy | 6 layers, evenly spaced from GTE-multilingual (12L) |
| Parameters | 234,919,680 |
| Model size (FP32) | 349.7MB |
| Distilled | No |
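
The "evenly spaced" layer indices in the table can be reproduced by rounding a uniform grid over the teacher's 12 layers. A minimal sketch (illustrative only, not necessarily the exact script used to build this model):

```python
# Illustrative only: evenly spaced layer selection via a rounded linspace.
import numpy as np

def uniform_layer_indices(total_layers: int, keep: int) -> list[int]:
    """Pick `keep` layer indices evenly spaced across `total_layers`."""
    return np.round(np.linspace(0, total_layers - 1, keep)).astype(int).tolist()

print(uniform_layer_indices(12, 6))  # [0, 2, 4, 7, 9, 11]
```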
## Architecture

```
==============================================================
 TEACHER: GTE-multilingual  →  STUDENT: 6L / 63,531 vocab
==============================================================

          TEACHER                            STUDENT
┌─────────────────────────┐        ┌─────────────────────────┐
│       Input Tokens      │        │       Input Tokens      │
└────────────┬────────────┘        └────────────┬────────────┘
             │                                  │
┌────────────┴────────────┐        ┌────────────┴────────────┐
│        Embeddings       │        │   Embeddings (pruned)   │
│      vocab: 250,048     │        │      vocab: 63,531      │
│         dim: 768        │        │         dim: 768        │
└────────────┬────────────┘        └────────────┬────────────┘
             │                                  │
┌─────────────────────────┐        ┌─────────────────────────┐
│         Layer 0         │  ───►  │      Layer 0  ← L0      │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 1         │    ✗   │                         │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 2         │  ───►  │      Layer 1  ← L2      │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 3         │    ✗   │                         │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 4         │  ───►  │      Layer 2  ← L4      │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 5         │    ✗   │                         │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 6         │    ✗   │                         │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 7         │  ───►  │      Layer 3  ← L7      │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 8         │    ✗   │                         │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 9         │  ───►  │      Layer 4  ← L9      │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 10        │    ✗   │                         │
├─────────────────────────┤        ├─────────────────────────┤
│         Layer 11        │  ───►  │      Layer 5  ← L11     │
└────────────┬────────────┘        └────────────┬────────────┘
             │                                  │
┌────────────┴────────────┐        ┌────────────┴────────────┐
│       Mean Pooling      │        │       Mean Pooling      │
│     → 768d embedding    │        │     → 768d embedding    │
└─────────────────────────┘        └─────────────────────────┘

Size:      1058.2MB (FP32)  →  349.7MB (FP32)
Params:    277,405,440      →  91,674,624
Reduction: 67.0%
==============================================================
```
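
The size and reduction figures above follow directly from the parameter counts at 4 bytes per FP32 weight:

```python
# FP32 footprint = parameter count x 4 bytes, reported in MiB.
teacher_params = 277_405_440
student_params = 91_674_624

to_mb = lambda n: n * 4 / 1024**2
print(f"teacher:   {to_mb(teacher_params):.1f}MB")              # 1058.2MB
print(f"student:   {to_mb(student_params):.1f}MB")              # 349.7MB
print(f"reduction: {1 - student_params / teacher_params:.1%}")  # 67.0%
```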
## Quick Start

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("gte_L6_uniform", trust_remote_code=True)

sentences = [
    "Hello, how are you?",
    "안녕하세요",
    "Bonjour, comment allez-vous?",
]

embeddings = model.encode(sentences)
print(embeddings.shape)  # (3, 768)
```
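
The resulting embeddings can be compared with cosine similarity, for example with the `util` helpers that ship with sentence-transformers:

```python
from sentence_transformers import util

# Pairwise cosine similarities between the embeddings computed above.
scores = util.cos_sim(embeddings, embeddings)
print(scores)  # 3x3 matrix; higher values indicate more similar sentences
```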
## MTEB Evaluation Results

Overall Average: 42.78%

| Task Group | Average |
|---|---|
| Classification | 51.3% |
| Clustering | 31.28% |
| STS | 45.42% |
### Classification

| Task | Average | Details |
|---|---|---|
| AmazonCounterfactualClassification | 62.03% | en: 65.04%, en-ext: 63.25%, de: 62.73% |
| Banking77Classification | 58.65% | default: 58.65% |
| ImdbClassification | 63.58% | default: 63.58% |
| MTOPDomainClassification | 61.22% | en: 70.06%, es: 64.08%, hi: 60.95% |
| MassiveIntentClassification | 30.15% | zh-CN: 49.78%, en: 47.57%, ja: 45.14% |
| MassiveScenarioClassification | 31.92% | zh-CN: 54.49%, en: 50.72%, ja: 47.17% |
| ToxicConversationsClassification | 57.02% | default: 57.02% |
| TweetSentimentExtractionClassification | 45.87% | default: 45.87% |
### Clustering

| Task | Average | Details |
|---|---|---|
| ArXivHierarchicalClusteringP2P | 53.65% | default: 53.65% |
| ArXivHierarchicalClusteringS2S | 45.3% | default: 45.3% |
| BiorxivClusteringP2P.v2 | 21.28% | default: 21.28% |
| MedrxivClusteringP2P.v2 | 26.07% | default: 26.07% |
| MedrxivClusteringS2S.v2 | 21.24% | default: 21.24% |
| StackExchangeClustering.v2 | 39.07% | default: 39.07% |
| StackExchangeClusteringP2P.v2 | 32.7% | default: 32.7% |
| TwentyNewsgroupsClustering.v2 | 10.91% | default: 10.91% |
### STS

| Task | Average | Details |
|---|---|---|
| BIOSSES | 49.91% | default: 49.91% |
| SICK-R | 51.42% | default: 51.42% |
| STS12 | 39.09% | default: 39.09% |
| STS13 | 51.12% | default: 51.12% |
| STS14 | 45.69% | default: 45.69% |
| STS15 | 60.2% | default: 60.2% |
| STS17 | 18.02% | es-es: 61.34%, en-en: 59.81%, ko-ko: 50.21% |
| STS22.v2 | 38.98% | zh: 62.9%, es: 58.01%, fr: 55.34% |
| STSBenchmark | 54.35% | default: 54.35% |
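
A subset of these scores can be re-run with the `mteb` package. The sketch below assumes its current `get_tasks`/`MTEB` interface and uses two task names from the tables above; adjust the task list and output folder as needed:

```python
# Sketch of re-running a few of the MTEB tasks listed above.
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("gte_L6_uniform", trust_remote_code=True)

tasks = mteb.get_tasks(tasks=["Banking77Classification", "STSBenchmark"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results/gte_L6_uniform")
```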
## Training

Created via layer pruning + vocabulary pruning (no additional training):

- Teacher: alibaba-NLP/gte-multilingual-base (12 layers, 768d)
- Layer selection: [0, 2, 4, 7, 9, 11] - 6 layers, evenly spaced from GTE-multilingual (12L); see the sketch below
- Vocab pruning: corpus-based filtering for the target languages (a simplified sketch follows the language list below)
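
A minimal sketch of the layer-pruning step. It assumes a BERT-style `encoder.layer` module list; the actual GTE remote-code implementation may expose its blocks under a different attribute, so treat this as illustrative rather than the exact procedure used:

```python
import torch
from transformers import AutoModel

KEEP = [0, 2, 4, 7, 9, 11]  # evenly spaced teacher layers to retain

teacher = AutoModel.from_pretrained(
    "alibaba-NLP/gte-multilingual-base", trust_remote_code=True
)

# Assumption: the encoder exposes its transformer blocks as `encoder.layer`.
blocks = teacher.encoder.layer
teacher.encoder.layer = torch.nn.ModuleList(blocks[i] for i in KEEP)
teacher.config.num_hidden_layers = len(KEEP)

teacher.save_pretrained("gte_L6_uniform_raw")  # vocabulary pruning follows separately
```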
## Supported Languages (18)

ko, en, ja, zh, es, fr, de, pt, it, ru, ar, hi, th, vi, id, tr, nl, pl
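
The vocabulary pruning step keeps only tokens that actually occur in corpora of these languages, plus the tokenizer's special tokens. A simplified sketch of the token-selection part, using a tiny hypothetical corpus in place of real multilingual data:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("alibaba-NLP/gte-multilingual-base")

# Hypothetical stand-in corpus; in practice, large text collections covering
# the 18 target languages would be tokenized here.
corpus = ["Hello, how are you?", "안녕하세요", "Bonjour, comment allez-vous?"]

keep_ids = set(tok.all_special_ids)
for text in corpus:
    keep_ids.update(tok(text, add_special_tokens=False)["input_ids"])

print(f"kept {len(keep_ids)} of {tok.vocab_size} tokens")
# Rebuilding the tokenizer and slicing the embedding matrix down to these
# rows are the remaining steps (not shown).
```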