---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: mit
---

# KennethEnevoldsen/dfm-sentence-encoder-medium

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('KennethEnevoldsen/dfm-sentence-encoder-medium')
embeddings = model.encode(sentences)
print(embeddings)
```
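
The returned embeddings can be compared directly, e.g. with cosine similarity for semantic search. A minimal sketch using the `util.cos_sim` helper from sentence-transformers (the sentences are illustrative placeholders):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('KennethEnevoldsen/dfm-sentence-encoder-medium')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Pairwise cosine similarities; diagonal entries are 1.0 (self-similarity)
similarities = util.cos_sim(embeddings, embeddings)
print(similarities)
```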

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('KennethEnevoldsen/dfm-sentence-encoder-medium')
model = AutoModel.from_pretrained('KennethEnevoldsen/dfm-sentence-encoder-medium')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
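
To get cosine similarities from these pooled embeddings, one option is to L2-normalize them so that dot products equal cosine similarity. A small sketch continuing from the snippet above (assumes `sentence_embeddings` from that snippet is in scope):

```python
import torch.nn.functional as F

# L2-normalize so that dot products equal cosine similarities
normalized = F.normalize(sentence_embeddings, p=2, dim=1)

# Full pairwise cosine-similarity matrix
similarities = normalized @ normalized.T
print(similarities)
```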

## Evaluation Results

For evaluation of this model on Scandinavian languages, see the Scandinavian Embedding Benchmark (SEB) paper cited under "Citing & Authors" below.
|
| | ## Training |
| | The model was trained with the parameters: |
| |
|
| | **DataLoader**: |
| |
|
| | `torch.utils.data.dataloader.DataLoader` of length 12500 with parameters: |
| | ``` |
| | {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} |
| | ``` |

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit() method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0.01
}
```
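
For reference, a roughly equivalent training setup in the sentence-transformers fit() API would look like the sketch below. The base checkpoint and the training pairs here are placeholders, not the actual data or checkpoint used for this model:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Placeholder base model; the actual base checkpoint is not stated in this card
model = SentenceTransformer('xlm-roberta-base')

# MultipleNegativesRankingLoss only needs (anchor, positive) pairs;
# the other positives in the batch serve as in-batch negatives.
train_examples = [
    InputExample(texts=["anchor sentence", "a matching sentence"]),
    InputExample(texts=["another anchor", "its positive match"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```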

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
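
Note the maximum sequence length of 32 tokens: longer inputs are truncated before encoding. The limit can be inspected, and raised if needed, at runtime; a small sketch (embedding quality on inputs longer than the trained length is untested):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('KennethEnevoldsen/dfm-sentence-encoder-medium')
print(model.max_seq_length)  # 32; tokens beyond this are truncated

# Optionally raise the truncation limit, up to the underlying transformer's maximum
model.max_seq_length = 128
```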

## Citing & Authors

```
@article{enevoldsenScandinavianEmbeddingBenchmarks2024,
  title = {The {Scandinavian} {Embedding} {Benchmarks}: {Comprehensive} {Assessment} of {Multilingual} and {Monolingual} {Text} {Embedding}},
  shorttitle = {The {Scandinavian} {Embedding} {Benchmarks}},
  url = {https://openreview.net/forum?id=pJl_i7HIA72},
  language = {en},
  urldate = {2024-04-12},
  author = {Enevoldsen, Kenneth and Kardos, Márton and Muennighoff, Niklas and Nielbo, Kristoffer},
  month = feb,
  year = {2024},
}
```