- code
size_categories:
- 100K<n<1M
---

## Dataset Description

This dataset contains **895,954 examples** of natural language questions paired with their corresponding SPARQL queries. It spans **12 languages** and targets **15 distinct knowledge graphs**, with a significant portion focused on Wikidata and DBpedia.

The dataset was developed as a contribution to the Master's thesis *"Impact of Continual Multilingual Pre-training on Cross-Lingual Transferability for Source Languages"*. Its purpose is to facilitate research on text-to-SPARQL generation, particularly with regard to multilinguality.

### Key Features

* **Multilingual:** Covers 12 languages: English (en), German (de), Hebrew (he), Kannada (kn), Chinese (zh), Spanish (es), Italian (it), French (fr), Dutch (nl), Romanian (ro), Farsi (fa), and Russian (ru).
* **Diverse Knowledge Graphs:** Includes queries for 15 knowledge graphs, most prominently Wikidata and DBpedia.
* **Large Scale:** Nearly 900,000 question-SPARQL pairs.
* **Augmented Data:** Features German translations for many English questions, plus Wikidata entity/relationship mappings in the `context` column for most German and English Wikidata examples.

## Dataset Structure

The dataset is provided in Parquet format and consists of the following columns:

* `text_query` (string): The natural language question.
  * *(Example: "What is the boiling point of water?")*
* `language` (string): The language code of the `text_query` (e.g., 'de', 'en', 'es').
* `sparql_query` (string): The corresponding SPARQL query.
  * *(Example: `PREFIX dbo: <http://dbpedia.org/ontology/> ... SELECT DISTINCT ?uri WHERE { ... }`)*
* `knowledge_graphs` (string): The knowledge graph targeted by the `sparql_query` (e.g., 'DBpedia', 'Wikidata').
* `context` (string, often null): Wikidata entity/relationship mappings as a JSON string (e.g., `{"entities": {"United States Army": "Q9212"}, "relationships": {"spouse": "P26"}}`).
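Note that `context` is stored as a JSON string rather than a parsed object, so it needs to be decoded before use. A minimal sketch using the example mapping above:

```python
import json

# A `context` value as it appears in the dataset: a JSON string mapping
# surface forms to Wikidata entity (Q) and property (P) identifiers.
raw_context = '{"entities": {"United States Army": "Q9212"}, "relationships": {"spouse": "P26"}}'

context = json.loads(raw_context)
print(context["entities"]["United States Army"])  # Q9212
print(context["relationships"]["spouse"])         # P26
```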

### Data Splits

* `train`: 895,954 rows.
* `test`: 788 rows.

## How to Use

You can load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load a specific split (e.g., train)
dataset = load_dataset("julioc-p/Question-Sparql", split="train")

# Iterate through the dataset
for example in dataset:
    print(f"Question ({example['language']}): {example['text_query']}")
    print(f"Knowledge Graph: {example['knowledge_graphs']}")
    print(f"SPARQL Query: {example['sparql_query']}")
    if example['context']:
        print(f"Context: {example['context']}")
    print("-" * 20)
    break
```