Update README.md
README.md
For **MedExpQA** benchmarking we have added the following elements in the data:
## Benchmark Results (averaged per type of external knowledge for grounding)

LLMs evaluated: [LLaMA](https://huggingface.co/meta-llama/Llama-2-13b), [PMC-LLaMA](https://huggingface.co/axiong/PMC_LLaMA_13B),
[Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) and [BioMistral](https://huggingface.co/BioMistral/BioMistral-7B-DARE).
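As an illustration only (this is not the repository's own evaluation code), the benchmarked checkpoints can be loaded with the Hugging Face `transformers` library using the model identifiers from the links above; the `load` helper below is a hypothetical name introduced here for the sketch:

```python
# Checkpoints benchmarked in this section (identifiers taken from the
# Hugging Face links above).
BENCHMARKED_MODELS = [
    "meta-llama/Llama-2-13b",
    "axiong/PMC_LLaMA_13B",
    "mistralai/Mistral-7B-v0.1",
    "BioMistral/BioMistral-7B-DARE",
]


def load(model_id: str):
    """Sketch: download (or reuse the local cache of) one checkpoint.

    Hypothetical helper, not part of this repository; requires the
    `transformers` package and, for some checkpoints, gated-model access.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model


# Example usage (downloads several GB of weights):
# tokenizer, model = load(BENCHMARKED_MODELS[-1])
```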

<p align="left">
<img src="https://github.com/hitz-zentroa/MedExpQA/blob/main/out/experiments/figures/benchmark.png?raw=true" style="height: 300px;">
</p>

## Citation

If you use the Antidote CasiMedicos dataset, please **cite the following paper**: