Leandra Budau committed: Update README.md
---
tags:
- biogpt
- boolean-query
- biomedical
- systematic-review
- pubmed
license: unknown
model-index:
- name: BioGPT-BQF-TMK-Small
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: CLEF TAR, FASS
      type: biomedical
    metrics:
    - name: Precision @100
      type: precision
      value: 0.16
    - name: Recall @1000
      type: recall
      value: 0.28
    - name: MAP Score
      type: mean_average_precision
      value: 0.30
---

# **BioGPT-BQF-TMK-Small**

A fine-tuned **BioGPT** model for **Boolean query formalization in biomedical systematic reviews**, incorporating **Titles, MeSH Terms, and Keywords** to improve **PubMed search query generation**.

## **Model Overview**

- **Base Model**: [BioGPT](https://huggingface.co/microsoft/BioGPT)
- **Fine-tuned on**: CLEF TAR and FASS datasets
- **Task**: Boolean Query Generation for PubMed searches
- **Inputs**: Research topic title, MeSH terms, and Keywords
- **Outputs**: Optimized PubMed Boolean search query
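Since the model takes a title, MeSH terms, and keywords together, the three fields must be combined into one input string. The exact prompt format used during fine-tuning is not documented here, so the helper below and its field labels/separators are an assumption, not the model's confirmed format:

```python
# Hypothetical helper: the field labels and "; " separators are assumptions,
# since the fine-tuning prompt format is not documented in this card.
def format_query_input(title: str, mesh_terms: list[str], keywords: list[str]) -> str:
    """Combine a topic title, MeSH terms, and keywords into one input string."""
    return (
        f"Title: {title} "
        f"MeSH: {'; '.join(mesh_terms)} "
        f"Keywords: {'; '.join(keywords)}"
    )

prompt = format_query_input(
    "Best treatments for lung cancer",
    ["Lung Neoplasms", "Antineoplastic Agents"],
    ["chemotherapy", "immunotherapy"],
)
print(prompt)
```

If a different delimiter scheme was used in training, adjust the template accordingly; only the three-field structure is stated in the overview above.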

## **Usage**

```python
from transformers import BioGptForCausalLM, BioGptTokenizer

model_name = "leandrabudau/BioGPT-BQF-TMK-Small"
model = BioGptForCausalLM.from_pretrained(model_name)
tokenizer = BioGptTokenizer.from_pretrained(model_name)

input_text = "Best treatments for lung cancer"
inputs = tokenizer(input_text, return_tensors="pt")

# Cap the generation length explicitly so the query is not cut short by the
# default max_length.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
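A generated Boolean query can be sanity-checked against PubMed itself via NCBI's E-utilities `esearch` endpoint. The sketch below only builds the request URL (fetching it returns the matching PMIDs); the example query string is illustrative, not model output:

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint for querying PubMed.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(boolean_query: str, retmax: int = 100) -> str:
    """Build an esearch URL that runs a Boolean query against PubMed."""
    params = {"db": "pubmed", "term": boolean_query, "retmax": retmax, "retmode": "json"}
    return f"{ESEARCH}?{urlencode(params)}"

# Illustrative query in PubMed syntax, not actual model output.
query = '("lung neoplasms"[MeSH Terms]) AND (treatment[Title/Abstract])'
print(pubmed_search_url(query))
```

Comparing the returned PMIDs against a review's included studies is how the Precision @100 / Recall @1000 figures in the metadata are typically computed.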