---
language:
- en
tags:
- text-classification
- edtech
- feedback-validation
- bert
- pytorch
license: mit
datasets:
- custom-edtech-feedback
metrics:
- accuracy
- precision
- recall
- f1
---

# EdTech Feedback Validation Model

## Model Description

This model is designed to validate user feedback in EdTech applications by determining whether a given feedback text aligns with a selected reason. It uses a BERT-based architecture for text pair classification.
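For reference, here is a minimal sketch of how a BERT tokenizer encodes such a pair in practice (segment IDs distinguish the feedback from the reason; `bert-base-uncased` is the base checkpoint listed under Model Architecture below):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Passing two texts encodes them as one sequence: [CLS] feedback [SEP] reason [SEP]
encoded = tokenizer("amazing app for online classes", "good app for conducting online classes")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
print(encoded["token_type_ids"])  # 0 for feedback tokens, 1 for reason tokens
```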
## Intended Uses & Limitations

### Primary Use Case

- Validating user feedback in educational technology applications
- Ensuring feedback text aligns with predefined reason categories
- Improving user experience by providing accurate feedback categorization

### Limitations

- Trained on English text only
- Requires both feedback text and reason text as input
- Binary classification (aligned/not aligned)
## Training and Evaluation Data

The model was trained on a custom dataset containing:

- Training samples: 2,061 feedback-reason pairs
- Evaluation samples: 9,000 feedback-reason pairs
- All training samples were positive (aligned) examples
- Evaluation set contains both positive and negative examples
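The dataset itself is not published with this card. Purely as a hypothetical illustration of the pair format the model consumes (the field names are assumptions, not the actual schema), one record could look like this:

```python
# Hypothetical record layout for a feedback-reason pair (assumed field names)
example_pair = {
    "feedback": "this is an amazing app for online classes!",  # free-text user feedback
    "reason": "good app for conducting online classes",        # selected reason category
    "label": 1,                                                # 1 = aligned, 0 = not aligned
}
```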
## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "your-username/edtech-feedback-validation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Example feedback-reason pair
text = "this is an amazing app for online classes!"
reason = "good app for conducting online classes"

# Tokenize the pair as a single sequence
inputs = tokenizer(text, reason, return_tensors="pt", padding=True, truncation=True)

# Get prediction without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits to probabilities; label 1 means "aligned"
probabilities = torch.softmax(outputs.logits, dim=1)
prediction = torch.argmax(probabilities, dim=1).item()
confidence = probabilities[0][prediction].item()

print(f"Prediction: {prediction} (Aligned: {prediction == 1})")
print(f"Confidence: {confidence:.3f}")
```
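To validate many pairs at once, batching is usually faster than looping one pair at a time. A minimal sketch, reusing the `tokenizer` and `model` loaded above (`validate_batch` and its `threshold` parameter are illustrative, not part of the model's API):

```python
def validate_batch(pairs, threshold=0.5):
    """Return True for each (feedback, reason) pair scored as aligned."""
    texts = [feedback for feedback, _ in pairs]
    reasons = [reason for _, reason in pairs]
    inputs = tokenizer(texts, reasons, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Column 1 holds P(aligned); a pair passes when it clears the threshold
    aligned_probs = torch.softmax(logits, dim=1)[:, 1]
    return [p.item() >= threshold for p in aligned_probs]

print(validate_batch([
    ("this is an amazing app for online classes!", "good app for conducting online classes"),
    ("the video keeps buffering", "good app for conducting online classes"),
]))
```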
## Model Architecture

- Base Model: BERT (bert-base-uncased)
- Task: Text Pair Classification
- Output: Binary classification (0: Not Aligned, 1: Aligned)
- Training Framework: PyTorch
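For reference, a classification head like this is typically attached to the base checkpoint as follows. This is a sketch of the standard `transformers` pattern, not the exact training code used for this model:

```python
from transformers import AutoModelForSequenceClassification

# Put a 2-class classification head on bert-base-uncased;
# the label mapping mirrors the output listed above
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,
    id2label={0: "NOT_ALIGNED", 1: "ALIGNED"},
    label2id={"NOT_ALIGNED": 0, "ALIGNED": 1},
)
```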
## License

This model is released under the MIT License.