---
license: cc-by-sa-4.0
---

**Current version**: Dataset_v0.4.tsv (converted to ragability format: v0d4.hjson)

# Ragability Corpus

In the following, we introduce WikiContradict (the empirical basis for the Ragability Corpus), describe the Ragability Corpus, and finally explain how the dataset can be extended and how a new one can be created.

## Empirical basis

**WikiContradict** is a benchmark for evaluating LLMs on real-world knowledge conflicts from Wikipedia (see [Hou et al. 2024](https://proceedings.neurips.cc/paper_files/paper/2024/file/c63819755591ea972f8570beffca6b1b-Paper-Datasets_and_Benchmarks_Track.pdf) and the [dataset](https://huggingface.co/datasets/ibm-research/Wikipedia_contradict_benchmark) for more details). It consists of 253 human-annotated instances that cover different types of real-world knowledge conflicts. An instance in the dataset comprises a query, context1, context2, the answer to the query based on context1 (answer1), the answer to the query based on context2 (answer2), the contradiction type of the two contexts, i.e., whether the contradiction is explicit or whether reasoning is required to identify the contradiction (implicit contradiction), as well as additional metadata on the Wikipedia article.

The dataset was chosen as the empirical basis because it contains

* instances where **implicit** reasoning is required, and
* actual **real-world knowledge conflicts**.

In WikiContradict, 92 instances were annotated as requiring implicit reasoning. Of these, the following data instances were omitted from the presented Ragability Corpus:

* instances requiring rather complex calculations (only one example)
  * Context1: BMS (Burning mouth syndrome) is fairly uncommon worldwide, affecting up to five individuals per 100,000 general population. People with BMS are more likely to be middle aged or elderly, and females are three to seven times more likely to have BMS than males. Some report a female to male ratio of as much as 33 to 1.
    BMS is reported in about 10–40% of women seeking medical treatment for menopausal symptoms, and BMS occurs in about 14% of postmenopausal women. Males and younger individuals of both sexes are sometimes affected.
  * Note: It is not obvious how many women seek medical treatment for menopausal symptoms.
* very weak contradictions (several examples; the following is one illustrative example)
  * Context1: Little America II was established in 1934, some thirty feet (ten meters) above the site of the original base Little America I, with some of the original base accessed via tunnel. The Little America II base was briefly set adrift in 1934, but the iceberg fused to the main glacier.
  * Context2: In a later expedition to Antarctica, Byrd's expedition spotted Little America's towers still standing, including the Jacobs Wind plant installed in 1933.
  * Query: When was Little America II base established?
  * Note: It is not clear whether Context2 refers to Little America Base II or Base I.

Moreover, the number of unique context pairs with knowledge conflicts in the WikiContradict dataset was significantly lower than 92, as the same context pairs with different queries received different IDs. This reduced the data on which to build the Ragability Corpus to 43 context pairs with knowledge conflicts.

## Description of the Ragability Corpus

Out of these 43 context pairs, 53 instances were generated: for some context pairs, the contexts were slightly varied by reinforcing or mitigating the knowledge conflict, or by varying the query which needs to be answered based on the contexts.

**Assurance of novelty of data instances**: To make sure that the LLM is only able to answer a question if additional context is provided, and that the data instance is not already part of the LLM's training data, the entities in the data taken from WikiContradict are replaced by **fantasy entities** that do not exist. Dates are also (partially) replaced by **new dates**.
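The entity-swapping idea behind this novelty assurance can be sketched as a simple substitution pass. The mapping below is purely illustrative (the actual replacements in the corpus were chosen manually, per instance):

```python
import re

# Hypothetical mapping from real-world entities/dates to fantasy
# counterparts; illustration only, not the mapping used for the corpus.
REPLACEMENTS = {
    "Example Liner": "RMS Waser",  # fantasy entity as used in the corpus
    "1915": "1983",                # shifted date
}

def anonymize(text: str) -> str:
    """Replace known real-world entities and dates with fantasy counterparts."""
    for real, fantasy in REPLACEMENTS.items():
        text = re.sub(re.escape(real), fantasy, text)
    return text

print(anonymize("The Example Liner sank in 1915."))
# → The RMS Waser sank in 1983.
```

A plain substitution like this only covers surface mentions; pronouns, paraphrases, and derived quantities still have to be adjusted by hand, which is why the corpus was curated manually.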
**Size of contexts**: 1–2 sentences per context. The contexts are limited to the sentences which are necessary to identify the contradiction. Each context can be viewed as a snippet retrieved by a RAG system. Note: the dataset can still be extended to add further contexts to the instances.

**IDs**

* **contradiction_ID**: each context_1 and context_2 pair has a unique ID; in case there are two different queries for the same context pair, they share the same ID
* **WikiContradict_ID**: ID from the WikiContradict dataset which inspired the new examples

**Number of contexts** is currently 4:

* **context_1**, **context_2**: these two contexts are derived from the WikiContradict dataset. The individual contexts are non-contradictory, but both contexts together are contradictory. The ragability experiments focus strongly on these two contexts. To identify the contradiction, multi-hop reasoning is required in all cases.
* **context_3**: this context is **not contradictory to context_1** but **contradictory to context_2**.
* **context_4**: provides additional context and is **not contradictory to any other context**.

**Answers to the query**: Throughout the dataset, the answers to the query based on context_1 and context_2 constitute a knowledge conflict. For both contexts, there is a short and a long answer to the query in the dataset, e.g.: 'What kind of animal is a Cap Squirrel?'; answer_context1: 'a suricate'; answer_context1_long: 'A Cap Squirrel is a suricate.'; answer_context2: 'a squirrel'; answer_context2_long: 'A Cap Squirrel is a squirrel.'. The dataset comprises both a short and a long answer so that a _checker LLM_ is able to identify the minimal and the maximal semantic content of the response.

**Tags**

* **reasoning_required**: the type of reasoning which is required to compare context_1 and context_2 is annotated.
  The categories are
  * _categorical_
  * _numerical_
  * _temporal.numerical_ and _temporal.relational_, whereby the former refers to date/time expressions (e.g., 2013, 21st of May, 1989/12/24, ...) and the latter stands for verbal expressions such as _earlier, later, before, after, ..._
* **context x query annotations**: for context_1 and context_2, we manually annotated in two separate columns (c1xq and c2xq) their respective relation to the query:
  * **explicit** if the answer to the question can be read off the given context, or **implicit** if reasoning is required to answer the question based on only this context
    * example explicit: _In the first two days of the combat of Finge, 40 died. How many died in the first two days of the combat of Finge in 1890?_ (answer: 40, tag: qeu)
    * example implicit: _On the first day of the combat of Finge (10th April 1890), they had 45 dead. On the second day, 5 died. How many died in the first two days of the combat of Finge in 1890?_ (answer: 50, tag: qiu)
  * whether the answer is **unequivocal** or **ambiguous**
    * example unequivocal: _Olinde is a monotypic moth. Are there different species of the Olinde moth?_ (answer: no, tag: qiu)
    * example ambiguous: _Tim visited the Canary Islands with his wife Katherine in 1998. When did Tim marry Katherine?_ (answer: 1998 or earlier, tag: qia)
  * whether the context does **not** provide **enough information** to answer the question
    * example: _Retelene was recently found to be the main psychoactive compound of the species area voliconiumare, a flower growing in the South of Spain.
      In which country does the flower with the psychoactive compound Retelene grow?_ (not enough contextual information, tag: qn)
* tags:
  * q.explicit.unequivocal (qeu)
  * q.explicit.ambiguous (qea)
  * q.implicit.unequivocal (qiu)
  * q.implicit.ambiguous (qia)
  * q.noinfo (qn)

Example of two instances of the dataset:

| contradiction_ID | WikiContradict_ID | reasoning_required_c1c2 | c1xq | c2xq | context_1 | context_2 | context_3_nc1_c2 | context_4_nc1_nc2_nc3 | query_text | answer_context1 | answer_context1_long | answer_context2 | answer_context2_long |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 13 | 42 | categorical | qeu | qeu | A Cap Squirrel is a suricate. | A Cap Squirrel is a squirrel. | A Cap Squirrel is a small mongoose. | A Cap Squirrel is an animal. | What kind of animal is a Cap Squirrel? | a suricate | A Cap Squirrel is a suricate. | a squirrel | A Cap Squirrel is a squirrel. |
| 17 | 52 | numerical + categorical | qiu | qeu | When the RMS Waser liner was attacked in 1983, 761 died out of the 1,266 passengers and 696 crew aboard. | 1198 survived, when the RMS Waser liner sank in 1983. | 1201 survived, when the RMS Waser liner sank in 1983. | The RMS Waser liner sank in 1983. | How many survived, when the RMS Waser liner sank in 1983? | 1201 | 1201 people survived when the RMS Waser liner sank in 1983. | 1198 | 1198 people survived when the RMS Waser liner sank in 1983. |

## Adapt the dataset or generate a new one

It is also possible to **customize the dataset** depending on the types of knowledge conflicts to be assessed with the ragability library.

One approach is to **keep the structure of the dataset** (the types of columns) as it is and either extend the corpus with additional examples or adapt existing examples, depending on the capability of the LLM to be evaluated.
For instance, you can

* _focus on a specific type of reasoning_ which is needed for the LLMs to identify knowledge conflicts
  * The types of reasoning occurring in the dataset are categorical, numerical, temporal numerical, and temporal relational reasoning. The dataset can be adapted to focus, e.g., on numerical knowledge conflicts: only the instances where numerical reasoning is required can be used, and additional examples can be added.
* _adapt the contexts to a specific domain_
  * The dataset can be adapted to contain real-world data, e.g., to focus on potential information conflicts in company-specific documents such as fact sheets, manuals, compliance documents, etc. To do so, replace the instances of the current corpus with comparable examples containing contradictory contexts from the real-world data in question. By testing different _testee-LLMs_, the LLM best suited to handle company-relevant contradictory information can be identified.

If the dataset is adapted while its structure is not altered, all parts of the ragability library can be used as they are and do not need to be adapted. However, maintaining the structure of the dataset also requires preserving the relationships between columns: specifically, which columns are contradictory or non-contradictory to one another must remain unchanged.

Another approach is to **change the structure of the dataset** by removing or adding columns, e.g.

* _adding more contexts_
  * Currently, for each instance in the dataset, there are four different contexts. To investigate the _ragability_ of an LLM, i.e., how the LLM answers a user query based on different (retrieved) textual fragments, additional contexts (either conflicting or non-conflicting) can be added to the dataset by adding more context columns.
* _adding more labels_
  * An additional column containing specific labels can be added to the dataset.
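The first adaptation (restricting the corpus to one reasoning type) can be sketched with the Python standard library. The file name and the column header `reasoning_required_c1c2` follow the dataset description in this document; adjust them if your copy differs:

```python
import csv

def filter_by_reasoning(path: str, reasoning: str) -> list:
    """Keep only the TSV instances whose reasoning_required annotation
    mentions the given type (e.g. 'numerical').

    Note: substring matching also catches combined annotations such as
    'numerical + categorical' and 'temporal.numerical'; split on '+'
    and '.' first if an exact match is needed.
    """
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")
        return [row for row in reader
                if reasoning in row["reasoning_required_c1c2"]]

# e.g. keep only the numerical knowledge conflicts:
# numerical_only = filter_by_reasoning("Dataset_v0.4.tsv", "numerical")
```

The filtered rows can then be written back to a TSV with `csv.DictWriter` and converted as usual, since the column structure is unchanged.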
Note: the corpus conversion module of the ragability library must be adjusted in the following cases: (i) if existing columns are deleted, or (ii) if additional columns are added. There are two possible workarounds to address this issue, with the second being the recommended approach:

* Add the new columns in the TSV version of the dataset and adapt the Python module which converts the corpus, [```ragability_cc_wc1```](https://github.com/OFAI/python-ragability/blob/main/ragability/ragability_cc_wc1.py), by adding a function that includes the new/adapted columns as instances in ragability format.
* Directly adapt the already converted dataset (e.g., v0d4.hjson) by adding the new instances directly in ragability format, containing information about the contexts, the query, the relevant checks and metrics, and the relevant tags.

If new examples are added, take care to maintain the initial structure of the columns containing contexts, i.e., context_1 and context_2 must be contradictory, context_3 must be non-contradictory to context_1 and contradictory to context_2, and context_4 must be non-contradictory to any other context.
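As a minimal safeguard when extending the TSV, the column layout can be checked automatically before conversion. The semantic constraints (which contexts contradict which) still have to be verified manually; the column names below are taken from the example table in this document, and the actual converter may expect additional columns:

```python
import csv

# Column names as they appear in the example table of this document.
REQUIRED_COLUMNS = {
    "contradiction_ID", "WikiContradict_ID", "reasoning_required_c1c2",
    "c1xq", "c2xq", "context_1", "context_2",
    "context_3_nc1_c2", "context_4_nc1_nc2_nc3", "query_text",
    "answer_context1", "answer_context1_long",
    "answer_context2", "answer_context2_long",
}

def missing_columns(path: str) -> set:
    """Return the required columns that are absent from the TSV header.
    An empty set means the column layout is intact; the contradiction
    relations between the context columns must still be checked by hand."""
    with open(path, newline="", encoding="utf-8") as f:
        header = next(csv.reader(f, delimiter="\t"))
    return REQUIRED_COLUMNS - set(header)
```

Running such a check before invoking the conversion module catches accidentally dropped or misspelled columns early, before they surface as conversion errors.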