---
language: en
pipeline_tag: sentence-similarity
tags:
- patent-similarity
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- mpi-inno-comp/paecter_dataset
license: apache-2.0
---

# pat_specter

This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned on patent texts, using SPECTER 2.0 from the Allen Institute for AI as its base. It maps patent text to a 768-dimensional dense vector space and can be used for patent-specific downstream tasks.
Note, however, that [PaECTER](https://huggingface.co/mpi-inno-comp/paecter) outperforms this model.

## Usage (Sentence-Transformers)

Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('mpi-inno-comp/pat_specter')
embeddings = model.encode(sentences)
print(embeddings)
```
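
Since the model is trained for patent similarity, the resulting embeddings can be compared directly, for example with cosine similarity via `sentence_transformers.util.cos_sim`. A minimal sketch; the two patent snippets below are invented for illustration:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mpi-inno-comp/pat_specter')

# Hypothetical patent snippets (illustration only)
patents = [
    "A lithium-ion battery electrode comprising a silicon-carbon composite material.",
    "An electrode material for rechargeable batteries based on silicon particles.",
]
embeddings = model.encode(patents)

# Cosine similarity between the two patent embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```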


## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# CLS pooling: use the embedding of the first ([CLS]) token as the sentence embedding
def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mpi-inno-comp/pat_specter')
model = AutoModel.from_pretrained('mpi-inno-comp/pat_specter')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt', max_length=512)

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
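
As an optional sanity check (assuming sentence-transformers is also installed), the CLS embeddings computed above should match `SentenceTransformer.encode` up to floating-point tolerance, since the model's pooling layer uses the CLS token:

```python
from sentence_transformers import SentenceTransformer

st_model = SentenceTransformer('mpi-inno-comp/pat_specter')
st_embeddings = st_model.encode(sentences, convert_to_tensor=True)

# Both paths should produce (nearly) identical vectors
print(torch.allclose(sentence_embeddings, st_embeddings, atol=1e-4))
```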


## Training
The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 159375 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CustomTripletLoss.CustomTripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 1}
```
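
`CustomTripletLoss` is the authors' own class and its exact behavior is not documented here; for orientation, a standard triplet loss with the same settings (Euclidean distance, margin 1) computes the following, shown as a minimal sketch:

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Euclidean distances anchor->positive and anchor->negative
    d_pos = F.pairwise_distance(anchor, positive, p=2)
    d_neg = F.pairwise_distance(anchor, negative, p=2)
    # Hinge: the negative should be at least `margin` farther away than the positive
    return F.relu(d_pos - d_neg + margin).mean()
```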

Parameters of the fit() method:
```
{
    "epochs": 1,
    "evaluation_steps": 2000,
    "evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 1e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0.01
}
```
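
For orientation, a comparable fine-tuning run with the sentence-transformers `fit()` API might look like the sketch below. The standard `losses.TripletLoss` stands in for the authors' `CustomTripletLoss`, and the triplet texts are placeholders:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('mpi-inno-comp/pat_specter')

# Placeholder triplets: (anchor patent, related patent, unrelated patent)
train_examples = [
    InputExample(texts=["anchor patent text", "related patent text", "unrelated patent text"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

# Stand-in for CustomTripletLoss, with the same distance metric and margin
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=1,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10000,
    optimizer_params={'lr': 1e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```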

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
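
You can verify this architecture locally by printing the loaded model:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('mpi-inno-comp/pat_specter')
print(model)                                     # module list as shown above
print(model.get_sentence_embedding_dimension())  # 768
```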

## Citing & Authors
```bibtex
@misc{ghosh2024paecter,
      title={PaECTER: Patent-level Representation Learning using Citation-informed Transformers},
      author={Mainak Ghosh and Sebastian Erhardt and Michael E. Rose and Erik Buunk and Dietmar Harhoff},
      year={2024},
      eprint={2402.19411},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```