---
license: mit
dataset_info:
  features:
    - name: text_query
      dtype: string
    - name: language
      dtype: string
    - name: sparql_query
      dtype: string
    - name: knowledge_graphs
      dtype: string
    - name: context
      dtype: string
  splits:
    - name: train
      num_bytes: 374237004
      num_examples: 895166
    - name: test
      num_bytes: 230499
      num_examples: 788
  download_size: 97377947
  dataset_size: 374467503
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
task_categories:
  - text-generation
language:
  - en
  - de
  - he
  - kn
  - zh
  - es
  - it
  - fr
  - nl
  - ro
  - fa
  - ru
tags:
  - code
size_categories:
  - 100K<n<1M
---

# Question-Sparql

This dataset pairs natural-language questions in multiple languages with SPARQL queries over knowledge graphs such as DBpedia and Wikidata.

### Data Fields

* `text_query` (string): The natural-language question.
* `language` (string): The language code of the `text_query` (e.g., 'en', 'de').
* `sparql_query` (string): The SPARQL query answering the question (e.g., `... SELECT DISTINCT ?uri WHERE { ... }`).
* `knowledge_graphs` (string): The knowledge graph targeted by the `sparql_query` (e.g., 'DBpedia', 'Wikidata').
* `context` (string, often null): (Optional) Wikidata entity/relationship mappings in JSON string format (e.g., `{"entities": {"United States Army": "Q9212"}, "relationships": {"spouse": "P26"}}`).

### Data Splits

* `train`: 895,166 rows.
* `test`: 788 rows.

## How to Use

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load a specific split (e.g., train)
dataset = load_dataset("julioc-p/Question-Sparql", split="train")

# Iterate through the dataset
for example in dataset:
    print(f"Question ({example['language']}): {example['text_query']}")
    print(f"Knowledge Graph: {example['knowledge_graphs']}")
    print(f"SPARQL Query: {example['sparql_query']}")
    if example['context']:
        print(f"Context: {example['context']}")
    print("-" * 20)
    break
```
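When present, the `context` field is a JSON-encoded string mapping entity and relationship names to Wikidata identifiers. A minimal sketch of decoding it with the standard-library `json` module, using the sample value from the field description above:

```python
import json

# Sample `context` value from the field description (a real row may differ)
raw_context = '{"entities": {"United States Army": "Q9212"}, "relationships": {"spouse": "P26"}}'

context = json.loads(raw_context)
print(context["entities"]["United States Army"])  # Q9212 (Wikidata entity ID)
print(context["relationships"]["spouse"])         # P26 (Wikidata property ID)
```

Remember to guard the `json.loads` call with a null check, since `context` is absent for many rows.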