| --- |
| tags: |
| - sentence-transformers |
| - feature-extraction |
| pipeline_tag: text-classification |
| library_name: sentence-transformers |
| license: apache-2.0 |
| datasets: |
| - mawaskow/irish_forestry_incentives |
| --- |
| |
| # SentenceTransformer |
|
|
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
|
|
| ## Model Details |
|
|
| ### Model Description |
| - **Model Type:** Sentence Transformer |
| - **Maximum Sequence Length:** 512 tokens |
| - **Output Dimensionality:** 768 dimensions |
| - **Similarity Function:** Cosine Similarity |
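
The cosine similarity function listed above compares two embedding vectors by angle rather than magnitude. A minimal sketch of that step, using random 768-dimensional placeholder vectors instead of real model output:

```python
import torch
import torch.nn.functional as F

# Two placeholder "sentence embeddings" with the model's output dimensionality
a = torch.randn(768)
b = torch.randn(768)

# Cosine similarity is always in [-1, 1]; 1 means identical direction
score = F.cosine_similarity(a, b, dim=0).item()
print(score)
```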
|
|
| ### Model Sources |
|
|
| - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) |
| - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) |
| - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) |
|
|
| ### Full Model Architecture |
|
|
| ``` |
| SentenceTransformer( |
| (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel |
| (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) |
| ) |
| ``` |
|
|
| ## Usage |
|
|
| ### Direct Usage |
|
|
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned classifier and its tokenizer from the Hugging Face Hub
tok = AutoTokenizer.from_pretrained("mawaskow/inc_sent_cls_bn")
model = AutoModelForSequenceClassification.from_pretrained("mawaskow/inc_sent_cls_bn")
model.eval()

sentences = [
    "The authority can revise the delegated act every five years.",
    "The scheme will subsidise purchases of eco-friendly farm equipment.",
    "Farmers will be able to avail of expert assistance in the uptake of new technologies."
]
text = sentences[1]
inputs = tok(text, return_tensors="pt")

# Forward pass without gradient tracking
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit to its label name
pred = torch.argmax(logits, dim=-1).item()
print(model.config.id2label[pred])
# incentive
```
|
|
| <!-- |
| ### Direct Usage (Transformers) |
|
|
| <details><summary>Click to see the direct usage in Transformers</summary> |
|
|
| </details> |
| --> |
|
|
| <!-- |
| ### Downstream Usage (Sentence Transformers) |
|
|
| You can finetune this model on your own dataset. |
|
|
| <details><summary>Click to expand</summary> |
|
|
| </details> |
| --> |
|
|
| <!-- |
| ### Out-of-Scope Use |
|
|
| *List how the model may foreseeably be misused and address what users ought not to do with the model.* |
| --> |
|
|
| <!-- |
| ## Bias, Risks and Limitations |
|
|
| *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* |
| --> |
|
|
| <!-- |
| ### Recommendations |
|
|
| *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* |
| --> |
|
|
| ## Training Details |
|
|
| ### Framework Versions |
| - Python: 3.11.7 |
| - Sentence Transformers: 3.5.0.dev0 |
| - Transformers: 4.50.0.dev0 |
| - PyTorch: 2.6.0+cu118 |
| - Accelerate: 1.4.0 |
| - Datasets: 3.5.0 |
| - Tokenizers: 0.21.0 |
|
|
| ## Citation |
|
|
| M.A. Waskow and John P. McCrae. 2025. Enhancing Policy Analysis with NLP: A Reproducible Approach to Incentive Classification. In Proceedings of the 21st Conference on Natural Language Processing (KONVENS 2025): Workshops, pages 74–85, Hannover, Germany. HsH Applied Academics. |
|
|
| ### BibTeX |
|
|
```bibtex
@inproceedings{waskow2025enhancing,
  title={Enhancing Policy Analysis with NLP: A Reproducible Approach to Incentive Classification},
  author={Waskow, M.A. and McCrae, John P.},
  booktitle={Proceedings of the 21st Conference on Natural Language Processing (KONVENS 2025): Workshops},
  pages={74--85},
  address={Hannover, Germany},
  publisher={HsH Applied Academics},
  year={2025}
}
```
| <!-- |
| ## Glossary |
|
|
| *Clearly define terms in order to be accessible across audiences.* |
| --> |
|
|
| <!-- |
| ## Model Card Authors |
|
|
| *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* |
| --> |
|
|
| <!-- |
| ## Model Card Contact |
|
|
| *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* |
| --> |