---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- doping
- anti-doping
pretty_name: Domain-adapted BERT for anti-doping practice
license: apache-2.0
language:
- en
library_name: sentence-transformers
---

# Domain-adapted BERT for anti-doping practice
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

The base model is a Transformer pretrained on a large Wikipedia corpus with a masked language modeling (MLM) objective, then fitted with a Transformer-based Sequential Denoising Auto-Encoder (TSDAE) for unsupervised sentence embedding learning, with a single objective: anti-doping domain adaptation.

This way, the model learns an inner representation of the anti-doping language in the training set that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the model as inputs.
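
For example, the sentence embeddings can serve as input features for a standard scikit-learn classifier. The sketch below is illustrative only: the labeled sentences and labels are hypothetical placeholders, and `LogisticRegression` is just one reasonable choice of classifier.

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled data: 1 = doping-related, 0 = unrelated
train_sentences = [
    "The athlete tested positive for a prohibited substance.",
    "The board reviewed the stadium renovation budget.",
    "An adverse analytical finding was reported by the laboratory.",
    "Ticket sales for the final exceeded expectations.",
]
train_labels = [1, 0, 1, 0]

# Encode sentences into 768-dimensional feature vectors
model = SentenceTransformer("timotheeplanes/anti-doping-bert-base")
features = model.encode(train_sentences)

# Train a standard classifier on top of the frozen embeddings
classifier = LogisticRegression(max_iter=1000).fit(features, train_labels)
print(classifier.predict(model.encode(["The sample showed traces of a banned steroid."])))
```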

## Usage (Sentence-Transformers)

Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer("timotheeplanes/anti-doping-bert-base")
embeddings = model.encode(sentences)
print(embeddings)
```
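
Since this is a sentence-similarity model, a natural next step is to score sentence pairs by cosine similarity. Below is a minimal sketch using the `util` module from sentence-transformers; the example sentences are made-up placeholders:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("timotheeplanes/anti-doping-bert-base")

embeddings = model.encode([
    "The sample was collected out of competition.",
    "An out-of-competition test was performed on the athlete.",
])

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```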

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


def cls_pooling(model_output, attention_mask):
    # CLS pooling: keep only the embedding of the first ([CLS]) token
    return model_output[0][:, 0]


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('timotheeplanes/anti-doping-bert-base')
model = AutoModel.from_pretrained('timotheeplanes/anti-doping-bert-base')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
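
Reusing the `sentence_embeddings` tensor computed above, the embeddings can also be compared directly with plain PyTorch; a minimal sketch:

```python
import torch.nn.functional as F

# Cosine similarity between the two CLS-pooled sentence embeddings
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(f"Cosine similarity: {similarity.item():.4f}")
```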

## Training

The model was trained with the following parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 7289 with parameters:

```
{'batch_size': 6, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`

Parameters of the `fit()` method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 3e-05
    },
    "scheduler": "constantlr",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0
}
```
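
The configuration above corresponds to a standard TSDAE setup in sentence-transformers. The sketch below shows how such a run could look; the base checkpoint (`bert-base-cased`) and the training corpus are assumptions, not the exact recipe used for this model:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, datasets, losses

# Hypothetical unlabeled anti-doping corpus
train_sentences = [
    "Whereabouts failures may constitute an anti-doping rule violation.",
    "The athlete requested analysis of the B sample.",
]

# BERT encoder with CLS pooling, matching the architecture below
# (the base checkpoint is an assumption)
word_embedding_model = models.Transformer("bert-base-cased", max_seq_length=512)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# TSDAE: the dataset adds noise (token deletion) to each sentence,
# and the loss trains a tied decoder to reconstruct the original sentence
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)
train_dataloader = DataLoader(train_dataset, batch_size=6, shuffle=True)
train_loss = losses.DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    scheduler="constantlr",
    optimizer_params={"lr": 3e-05},
    warmup_steps=10000,
    weight_decay=0,
)
```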

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

If you use this code in your research, please use the following BibTeX entry:

```bibtex
@misc{louisbrulenaudet2023,
    author = {Brulé Naudet, L. and Planes, T.},
    title = {Domain-adapted BERT for anti-doping practice},
    year = {2023},
    howpublished = {\url{https://huggingface.co/timotheeplanes/anti-doping-bert-base}},
}
```