---
license: cc-by-nc-4.0
tags:
- bert
- text-classification
- disability
- inclusive-language
- academic-writing
datasets:
- assets
library_name: transformers
language:
- en
---
# Identifying Disability-Insensitive Language in Scholarly Works
The code repository and paper are available here: [GitHub - Insensitive-Lang-Detection](https://github.com/RobyRoshna/Insensitive-Lang-Detection/tree/main)
---
## Overview
This is a fine-tuned BERT model that detects potentially insensitive or non-inclusive disability-related language, specifically in academic and scholarly writing.
The model helps promote more inclusive and respectful communication, in line with the social model of disability and international guidelines such as those from the ADA National Network and the UN.
---
## Intended Use
- Academic editors and reviewers who want to check abstracts and papers for disability-insensitive language.
- Researchers studying accessibility, inclusive design, or language bias.
- Automated writing support tools focused on scholarly communication.
---
## Model Details
- **Architecture**: BERT-base (uncased)
- **Fine-tuned on**: Sentences from ASSETS conference papers (1994–2024) and organizational documents (ADA National Network, UN guidelines).
- **Labels** (see the config sketch below):
- `0`: Not insensitive
- `1`: Insensitive
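
If needed, the integer-to-label mapping can be read straight from the model config. A quick sketch; the printed names depend on how the config was saved and may be the generic `LABEL_0`/`LABEL_1`:

```python
from transformers import AutoConfig

# Inspect the id-to-label mapping stored with the model weights
config = AutoConfig.from_pretrained("rrroby/insensitive-language-bert")
print(config.id2label)  # e.g. {0: 'Not insensitive', 1: 'Insensitive'} or generic LABEL_* names
```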
---
## Training Data
- Extracted and manually annotated sentences referencing disability-related terms (an illustrative filtering sketch follows below).
- Supplemented with data augmentation using OpenAI GPT-4o to balance underrepresented phrases.
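
For illustration only, candidate sentences could be surfaced with a keyword filter along these lines. The seed terms and helper below are hypothetical and are not the project's actual extraction code:

```python
import re

# Hypothetical seed terms; the real annotation lexicon is defined by the paper's guidelines
DISABILITY_TERMS = {"wheelchair-bound", "suffers from", "handicapped", "the disabled"}

def candidate_sentences(text: str):
    """Yield sentences that mention any seed term (case-insensitive)."""
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        lowered = sentence.lower()
        if any(term in lowered for term in DISABILITY_TERMS):
            yield sentence

sample = "The study recruited ten users. One participant was wheelchair-bound."
print(list(candidate_sentences(sample)))  # -> ['One participant was wheelchair-bound.']
```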
---
## License
This model is licensed under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
This means you are free to share and adapt the model for non-commercial purposes, as long as appropriate credit is given. Commercial use is not permitted without explicit permission.
For details, see [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
---
## How to Use
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Load the fine-tuned classifier and its tokenizer
model = BertForSequenceClassification.from_pretrained("rrroby/insensitive-language-bert")
tokenizer = BertTokenizer.from_pretrained("rrroby/insensitive-language-bert")
model.eval()

text = "This participant was wheelchair-bound and..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)

# Inference only, so skip gradient tracking
with torch.no_grad():
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(-1).item()  # 0: not insensitive, 1: insensitive
print("Predicted class:", predicted_class)
```
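
Alternatively, the Hugging Face `pipeline` API wraps the same steps in one call. A minimal sketch; the label names in the output depend on the saved config and may be the generic `LABEL_0`/`LABEL_1`:

```python
from transformers import pipeline

# Convenience wrapper: tokenization, forward pass, and softmax in one call
classifier = pipeline("text-classification", model="rrroby/insensitive-language-bert")
print(classifier("This participant was wheelchair-bound and..."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}]
```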