  - split: test
    path: physics/test-*
---
# Prerequisite RElation LEARNing (PRELEARN)

Original Paper: https://ceur-ws.org/Vol-2765/paper164.pdf

This dataset contains a collection of binary-labelled concept pairs (A, B) extracted from textbooks on four domains: **data mining**, **geometry**, **physics** and **precalculus**.
Domain experts were then asked to manually annotate whether each pair of concepts shows a prerequisite relation, so the dataset consists of both positive and negative concept pairs.

We obtained the data from the original repository, making only one modification: undersampling the training data. Evaluating generative models with in-context learning requires a balanced label distribution for sampling examples in a few-shot setting. The undersampling was carried out randomly, and separately for each domain.
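The balancing step can be sketched as follows; the `undersample` helper and the toy data are illustrative assumptions, not the authors' original script.

```python
import random

def undersample(pairs, seed=42):
    """Balance a list of (concept_pair, label) tuples by randomly
    downsampling the majority class to the minority-class size."""
    rng = random.Random(seed)
    pos = [p for p in pairs if p[1] == 1]
    neg = [p for p in pairs if p[1] == 0]
    n = min(len(pos), len(neg))
    balanced = rng.sample(pos, n) + rng.sample(neg, n)
    rng.shuffle(balanced)
    return balanced

# Toy example: 5 positives, 3 negatives -> 3 of each after balancing
data = ([(("A%d" % i, "B%d" % i), 1) for i in range(5)]
        + [(("C%d" % i, "D%d" % i), 0) for i in range(3)])
balanced = undersample(data)
```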
## Example

Below you can see the structure of a single sample in this dataset.

```json
{
  "concept_A": string, # text of concept A
  "wikipedia_passage_concept_A": string, # Wikipedia paragraph corresponding to concept A
  "concept_B": string, # text of concept B
  "wikipedia_passage_concept_B": string, # Wikipedia paragraph corresponding to concept B
  "target": int # 0: B is not a prerequisite of A, 1: B is a prerequisite of A
}
```
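For illustration, here is a hypothetical sample following this schema, together with a small validity check; the concepts and passages are invented, not taken from the dataset.

```python
# A made-up sample in the PRELEARN schema (illustrative values only)
sample = {
    "concept_A": "accelerazione",
    "wikipedia_passage_concept_A": "L'accelerazione è la variazione della velocità nel tempo ...",
    "concept_B": "velocità",
    "wikipedia_passage_concept_B": "La velocità è una grandezza vettoriale ...",
    "target": 1,  # 1: B is a prerequisite of A
}

EXPECTED_KEYS = {"concept_A", "wikipedia_passage_concept_A",
                 "concept_B", "wikipedia_passage_concept_B", "target"}

def is_valid(example):
    """Check that an example has exactly the expected fields and a binary target."""
    return set(example) == EXPECTED_KEYS and example["target"] in (0, 1)
```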

## Statistics

| PRELEARN Data Mining | 0 | 1 |
| :--------: | :----: | :----: |
| Training | 109 | 109 |
| Test | 50 | 49 |

| PRELEARN Physics | 0 | 1 |
| :--------: | :----: | :----: |
| Training | 315 | 315 |
| Test | 100 | 100 |

| PRELEARN Geometry | 0 | 1 |
| :--------: | :----: | :----: |
| Training | 332 | 332 |
| Test | 100 | 100 |

| PRELEARN Precalculus | 0 | 1 |
| :--------: | :----: | :----: |
| Training | 408 | 408 |
| Test | 100 | 100 |

## Proposed Prompts

Below we describe the prompts given to the model. We compute the perplexity of each candidate prompt and take the one with the lowest perplexity as the model's answer.
Moreover, for each subtask we define a description that is prepended to the prompts, which the model needs in order to understand the task.

Description of the task: "Dati due concetti, indica se primo concetto è un prerequisito o meno per il secondo.\nUn concetto A è prerequisito per un concetto B, se per comprendere B devo prima aver compreso A.\nI seguenti concetti appartengono al dominio: {{domain}}.\n\n"

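Putting the pieces together, a full prompt is the task description (with the domain filled in) followed by the example-specific text. A minimal sketch, assuming plain string substitution for the `{{...}}` placeholders; `build_prompt` is a hypothetical helper, not part of the released code:

```python
# Task description verbatim from the dataset card
DESCRIPTION = ("Dati due concetti, indica se primo concetto è un prerequisito "
               "o meno per il secondo.\nUn concetto A è prerequisito per un "
               "concetto B, se per comprendere B devo prima aver compreso A.\n"
               "I seguenti concetti appartengono al dominio: {{domain}}.\n\n")

def build_prompt(domain, concept_a, concept_b, is_prereq):
    """Render the cloze-style prompt for one candidate label."""
    head = DESCRIPTION.replace("{{domain}}", domain)
    rel = "è" if is_prereq else "non è"
    body = f"{concept_b} {rel} un prerequisito per {concept_a}"
    return head + body

p = build_prompt("fisica", "accelerazione", "velocità", True)
```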
### Cloze Style:

Label (**B is not a prerequisite of A**): "{{concept_B}} non è un prerequisito per {{concept_A}}"

Label (**B is a prerequisite of A**): "{{concept_B}} è un prerequisito per {{concept_A}}"

### MCQA Style:

```
Domanda: il concetto {{concept_B}} è un prerequisito per la comprensione del concetto {{concept_A}}? Rispondi sì o no:
```

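The perplexity-based selection between the two candidate prompts can be sketched as follows. Here `token_logprobs` stands in for per-token log-probabilities from the scored model; the numbers are invented to illustrate the argmin rule, not real model output.

```python
import math

def perplexity(token_logprobs):
    """Perplexity of a sequence from its per-token natural-log
    probabilities: exp(-mean(log p))."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def choose_answer(candidates):
    """candidates: dict mapping label -> per-token log-probs of the
    rendered prompt. Returns the label with the lowest perplexity."""
    return min(candidates, key=lambda lab: perplexity(candidates[lab]))

# Invented log-probs: the "1" candidate is assigned higher likelihood
scores = {
    0: [-2.3, -1.9, -2.8],   # "B non è un prerequisito per A"
    1: [-1.2, -0.9, -1.4],   # "B è un prerequisito per A"
}
pred = choose_answer(scores)  # -> 1
```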
## Some Results

The following results were obtained with Cloze-style prompting on several English and Italian-adapted LLMs.

| PRELEARN (AVG) | Accuracy (15-shot) |
| :-----: | :--: |
| Gemma-2B | 60.12 |
| QWEN2-1.5B | 57.00 |
| Mistral-7B | 64.50 |
| ZEFIRO | 64.76 |
| Llama-3-8B | 60.63 |
| Llama-3-8B-IT | 63.76 |
| ANITA | 63.77 |

## Acknowledgements

We thank the authors of this resource for publicly releasing such an interesting dataset.

We also thank the students of the [MNLP-2024 course](https://naviglinlp.blogspot.com/), who experimented with different interesting prompting strategies in their first homework.

The data are freely available through this [link](https://live.european-language-grid.eu/catalogue/corpus/8084).

## License

The data are released under the [Creative Commons Attribution Non Commercial Share Alike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.