---
pipeline_tag: sentence-similarity
language: en
license: mit
tags:
- passage-retrieval
- sentence-similarity
- pruned
library_name: sentence-transformers
base_model: intfloat/multilingual-e5-base
base_model_relation: quantized
---
|
|
# 🇬🇧 english-multilingual-e5-base

This model is a 58.0% smaller version of [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) for the English language, created using the [mtem-pruner](https://huggingface.co/spaces/antoinelouis/mtem-pruner) space.

This pruned model should perform similarly to the original model on English-language tasks while having a much smaller memory footprint. However, it may not perform well on the other languages covered by the original multilingual model, since tokens rarely used in English were removed from its vocabulary.
|
|
|
|
|
## Usage

You can use this model with the Transformers library:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "KyberNull/english-multilingual-e5-base"
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
```
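The snippet above only loads the model. Like the original E5 models, this one produces sentence embeddings by average-pooling the last hidden state over non-padding tokens, and expects a `"query: "` or `"passage: "` prefix on each input text. A minimal sketch of the pooling step, run here on dummy tensors so it works without downloading the model:

```python
import torch
import torch.nn.functional as F
from torch import Tensor

def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # Zero out embeddings at padding positions, then average over the sequence.
    masked = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return masked.sum(dim=1) / attention_mask.sum(dim=1)[..., None]

# Dummy stand-ins for model outputs: batch of 2, sequence length 4, hidden size 8.
hidden = torch.randn(2, 4, 8)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])

embeddings = F.normalize(average_pool(hidden, mask), p=2, dim=1)
scores = embeddings @ embeddings.T  # cosine similarities
```

In a real pipeline, `hidden` would be `model(**batch).last_hidden_state` and `mask` would be `batch["attention_mask"]` from the tokenizer, with every input text prefixed by `"query: "` or `"passage: "`.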
|
|
|
|
|
Or with the sentence-transformers library:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("KyberNull/english-multilingual-e5-base")
```

As with the original E5 models, prepend `"query: "` or `"passage: "` to your input texts before encoding.

**Credits**: cc [@antoinelouis](https://huggingface.co/antoinelouis)
|
|
|