# bert_squad

Pretrained model on context-based Question Answering using the SQuAD dataset. This model is fine-tuned from the BERT architecture for extracting answers from passages.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

bert_squad is a transformer-based model trained for context-based question answering tasks. It leverages the pretrained BERT architecture and adapts it for extracting precise answers given a question and a related context. This model uses the Stanford Question Answering Dataset (SQuAD), available via Hugging Face datasets, for training and fine-tuning.

The model was trained using free computational resources, demonstrating its accessibility for educational and small-scale research purposes.
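
Extractive QA with BERT works by scoring every context token as a candidate answer start and as a candidate answer end; the predicted answer is the span whose combined start/end score is highest. A minimal sketch of that selection step, using made-up logits rather than real model outputs:

```python
# Illustrative start/end scores for a 6-token context; a real BERT QA head
# produces one start logit and one end logit per token.
start_logits = [0.1, 0.2, 0.3, 4.0, 0.2, 0.0]
end_logits = [0.0, 0.1, 0.2, 0.3, 3.5, 0.1]
tokens = ["the", "answer", "is", "forty", "two", "."]

# Pick the (start, end) pair with the highest summed score, requiring start <= end.
best = max(
    ((s, e) for s in range(len(tokens)) for e in range(s, len(tokens))),
    key=lambda span: start_logits[span[0]] + end_logits[span[1]],
)
answer = " ".join(tokens[best[0] : best[1] + 1])
print(answer)  # → forty two
```

Production implementations add refinements (a maximum answer length, top-k candidates, handling of unanswerable questions), but the core span selection is this argmax over start/end pairs.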
- **Developed by:** SADAT PARVEJ, RAFIFA BINTE JAHIR
- **Shared by [optional]:** SADAT PARVEJ
- **Language(s) (NLP):** English
- **Finetuned from model [optional]:** https://huggingface.co/google-bert/bert-base-uncased
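
Once the checkpoint is published, it can be queried through the Hugging Face `transformers` question-answering pipeline. A sketch of the expected usage; since this card does not state the model's repo id, the id below is a public SQuAD-tuned checkpoint used as a stand-in:

```python
from transformers import pipeline

# Stand-in checkpoint: a public SQuAD-tuned model. Replace with this
# model's actual Hugging Face repo id once it is published.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="bert_squad adapts the pretrained BERT architecture and was "
    "fine-tuned on the Stanford Question Answering Dataset (SQuAD).",
)
print(result)  # dict with 'answer', 'score', 'start', 'end'
```

The pipeline tokenizes the question and context together and performs the start/end span selection internally, returning the extracted answer text along with its character offsets and confidence score.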

### Model Sources [optional]