---
license: apache-2.0
tags:
- question-answering
- squad
- transformers
- pytorch
- evaluation
- hf-course
- fine-tuned
datasets:
- squad
metrics:
- exact_match
- f1
model-index:
  - name: QA-SQuAD-BERT
    results:
      - task:
          type: question-answering
          name: Question Answering
        dataset:
          name: SQuAD v1.1
          type: squad
        metrics:
          - name: Exact Match
            type: exact_match
            value: 82.7
          - name: F1
            type: f1
            value: 87.0039
---

# QA-SQuAD-BERT

A BERT-based model fine-tuned on SQuAD v1.1 for extractive question answering.

## Model Description

This model is based on `bert-base-uncased` and was fine-tuned on the **SQuAD v1.1** dataset for extractive question answering. It takes a question and a context passage as input and predicts the span of text in the passage that most likely answers the question.

The model was trained using the Hugging Face 🤗 Transformers library.
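
The span-prediction step described above can be sketched in plain Python (illustrative only, with dummy logits; the real model produces one start logit and one end logit per token, and the answer is the span maximizing their combined score):

```python
# Sketch of the span-selection step in extractive QA: pick the pair
# (start, end) with start <= end that maximizes the combined logit score.

def best_span(start_logits, end_logits, max_answer_len=30):
    """Return (start, end) maximizing start_logits[s] + end_logits[e], s <= e."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Dummy logits for a 6-token context; the best span is tokens 2..3.
start_logits = [0.1, 0.2, 5.0, 0.3, 0.1, 0.0]
end_logits = [0.0, 0.1, 0.2, 4.0, 0.3, 0.1]
print(best_span(start_logits, end_logits))  # (2, 3)
```

In practice the pipeline also masks out spans that fall in the question or exceed a maximum answer length, which is what the `max_answer_len` bound stands in for here.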

## Intended Uses & Limitations

### Intended Uses

- Extractive question answering on Wikipedia-style passages.
- As a downstream component in information retrieval pipelines.
- Educational purposes or experimentation with fine-tuning on QA tasks.

### Limitations

- The model may not generalize well to out-of-domain datasets.
- It does not handle unanswerable questions (not trained on SQuAD v2.0).
- It may produce incorrect or misleading answers if context is ambiguous.

## Training Details

- **Base model**: `bert-base-uncased`
- **Dataset**: [SQuAD v1.1](https://huggingface.co/datasets/squad)
- **Epochs**: 3
- **Batch size**: 8
- **Learning rate**: 2e-5
- **Optimizer**: AdamW
- **Max length**: 384
- **Hardware used**: Google Colab (T4 GPU)
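
As a rough sketch, the hyperparameters above map onto a `TrainingArguments` configuration along these lines (illustrative only; the `output_dir` is hypothetical, and the actual training script for this checkpoint may differ):

```python
from transformers import TrainingArguments

# Illustrative configuration mirroring the hyperparameters listed above.
# The max length of 384 applies at tokenization time, not here.
training_args = TrainingArguments(
    output_dir="qa-squad-bert",      # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=2e-5,              # AdamW is the Trainer's default optimizer
)
```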

## Evaluation Results

The model was evaluated on the SQuAD v1.1 development set using the standard metrics: Exact Match (EM) and F1.

| Metric       | Score |
|--------------|-------|
| Exact Match  | 82.7 |
| F1           | 87.0039 |
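Per example, these metrics work roughly as follows (a simplified sketch; the official SQuAD script also takes the maximum score over multiple reference answers per question):

```python
# Simplified per-example Exact Match and token-level F1, as used for SQuAD.
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, ref):
    return float(normalize(pred) == normalize(ref))

def f1_score(pred, ref):
    pred_toks, ref_toks = normalize(pred).split(), normalize(ref).split()
    common = Counter(pred_toks) & Counter(ref_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(ref_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))  # 1.0
```

EM is all-or-nothing after normalization, while F1 gives partial credit for token overlap, which is why the F1 score above is higher than EM.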

## How to Use

You can load this model using the `pipeline` API:

```python
from transformers import pipeline

qa_pipeline = pipeline("question-answering", model="tmt3103/SQuAD_BERT")
result = qa_pipeline({
    "context": "Hugging Face is creating a tool that democratizes AI.",
    "question": "What is Hugging Face creating?"
})
print(result)  # dict with 'score', 'start', 'end', and 'answer' keys
```