---

language:
- en
tags:
- text-classification
- edtech
- feedback-validation
- bert
- pytorch
license: mit
datasets:
- custom-edtech-feedback
metrics:
- accuracy
- precision
- recall
- f1
---


# EdTech Feedback Validation Model

## Model Description

This model is designed to validate user feedback in EdTech applications by determining whether a given feedback text aligns with a selected reason. It uses a BERT-based architecture for text pair classification.

## Intended Uses & Limitations

### Primary Use Case
- Validating user feedback in educational technology applications
- Ensuring feedback text aligns with predefined reason categories
- Improving user experience by providing accurate feedback categorization

### Limitations
- Trained on English text only
- Requires both feedback text and reason text as input
- Binary classification (aligned/not aligned)

## Training and Evaluation Data

The model was trained on a custom dataset containing:
- Training samples: 2,061 feedback-reason pairs
- Evaluation samples: 9,000 feedback-reason pairs
- All training samples were positive (aligned) examples
- Evaluation set contains both positive and negative examples
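Because the training split contains only positive pairs, negative (not-aligned) examples for evaluation or further fine-tuning can be synthesized by mismatching feedback texts with reasons drawn from other pairs. A minimal sketch of this idea (the function name and sample pairs below are illustrative, not part of the released dataset):

```python
import random

def make_negative_pairs(pairs, seed=0):
    """Create not-aligned examples by pairing each feedback text
    with a reason drawn from a *different* pair."""
    rng = random.Random(seed)
    negatives = []
    for i, (text, _) in enumerate(pairs):
        # Pick the reason from any other pair so text and reason disagree
        j = rng.choice([k for k in range(len(pairs)) if k != i])
        negatives.append((text, pairs[j][1], 0))  # label 0: not aligned
    return negatives

positives = [
    ("this is an amazing app for online classes!",
     "good app for conducting online classes"),
    ("the quiz keeps crashing on submit",
     "technical issues during assessments"),
    ("I wish there were more practice problems",
     "request for additional content"),
]
negatives = make_negative_pairs(positives)
```

Note that random mismatching can occasionally produce a pair that is accidentally aligned, so synthesized negatives usually benefit from a manual review pass.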

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "your-username/edtech-feedback-validation"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Example feedback-reason pair
text = "this is an amazing app for online classes!"
reason = "good app for conducting online classes"

# Tokenize the two texts as a single sequence-pair input
inputs = tokenizer(text, reason, return_tensors="pt", padding=True, truncation=True)

# Get prediction
model.eval()
with torch.no_grad():
    outputs = model(**inputs)
    probabilities = torch.softmax(outputs.logits, dim=1)
    prediction = torch.argmax(probabilities, dim=1).item()
    confidence = probabilities[0][prediction].item()

print(f"Prediction: {prediction} (Aligned: {prediction == 1})")
print(f"Confidence: {confidence:.3f}")
```
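The snippet above accepts whichever class has the higher probability. In production you may prefer to reject low-confidence "aligned" predictions. A minimal sketch with a confidence threshold (the 0.75 cutoff and the fabricated logits are assumptions for illustration, not tuned values):

```python
import torch

def interpret_logits(logits, threshold=0.75):
    """Map raw logits for a batch of feedback-reason pairs to
    aligned/not-aligned decisions, rejecting low-confidence positives."""
    probs = torch.softmax(logits, dim=1)
    confidences, predictions = probs.max(dim=1)
    # Only accept "aligned" (class 1) when confidence clears the threshold
    aligned = (predictions == 1) & (confidences >= threshold)
    return aligned.tolist(), confidences.tolist()

# Fabricated logits for three pairs (batch of 3, 2 classes)
logits = torch.tensor([[0.2, 2.5],    # confident aligned
                       [1.9, 0.1],    # confident not aligned
                       [0.4, 0.6]])   # borderline -> rejected
aligned, confidences = interpret_logits(logits)
```

Here only the first pair is accepted as aligned: the third pair's predicted class is 1, but its confidence (~0.55) falls below the threshold.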

## Model Architecture

- Base Model: BERT (bert-base-uncased)
- Task: Text Pair Classification
- Output: Binary classification (0: Not Aligned, 1: Aligned)
- Training Framework: PyTorch

## License

This model is released under the MIT License.