docfreemo committed
Commit db51f47 · verified · 1 Parent(s): e58de28

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +270 -25
README.md CHANGED
@@ -1,27 +1,272 @@
  ---
- dataset_info:
-   features:
-   - name: subject
-     dtype: string
-   - name: predicate
-     dtype: string
-   - name: object
-     dtype: string
-   - name: object_type
-     dtype: string
-   - name: object_datatype
-     dtype: string
-   - name: object_language
-     dtype: string
-   splits:
-   - name: data
-     num_bytes: 24431513982
-     num_examples: 181846462
-   download_size: 1492750105
-   dataset_size: 24431513982
- configs:
- - config_name: default
-   data_files:
-   - split: data
-     path: data/data-*
+ license: cc-by-4.0
+ task_categories:
+ - text-generation
+ - feature-extraction
+ language:
+ - en
+ tags:
+ - rdf
+ - knowledge-graph
+ - semantic-web
+ - triples
+ size_categories:
+ - 100M<n<1B
  ---
+
+ # GeoNames RDF
+
+ ## Dataset Description
+
+ A world geographic database with place names and coordinates.
+
+ **Original Source:** http://download.geonames.org/all-geonames-rdf.zip
+
+ ### Dataset Summary
+
+ This dataset contains RDF triples from the GeoNames RDF dump, converted to HuggingFace dataset format
+ for easy use in machine learning pipelines.
+
+ - **Format:** Originally RDF/XML, converted to a HuggingFace Dataset
+ - **Size:** 3.0 GB (extracted)
+ - **Entities:** ~11M places
+ - **Triples:** ~182M (181,846,462 rows; each row is one triple)
+ - **Original License:** CC BY 4.0
+
+ ### Recommended Use
+
+ Geographic data, location-based services.
+
+ ### Notes
+
+ GeoNames is community-contributed and updated daily; RDF dumps are published periodically.
+
+
+ ## Dataset Format: Lossless RDF Representation
+
+ This dataset uses a **standard lossless format** for representing RDF (Resource Description Framework)
+ data in HuggingFace Datasets. All semantic information from the original RDF knowledge graph is preserved,
+ enabling perfect round-trip conversion between RDF and HuggingFace formats.
+
+ ### Schema
+
+ Each RDF triple is represented as a row with **6 fields**:
+
+ | Field | Type | Description | Example |
+ |-------|------|-------------|---------|
+ | `subject` | string | Subject of the triple (URI or blank node) | `"http://schema.org/Person"` |
+ | `predicate` | string | Predicate URI | `"http://www.w3.org/1999/02/22-rdf-syntax-ns#type"` |
+ | `object` | string | Object of the triple | `"John Doe"` or `"http://schema.org/Thing"` |
+ | `object_type` | string | Type of object: `"uri"`, `"literal"`, or `"blank_node"` | `"literal"` |
+ | `object_datatype` | string | XSD datatype URI (for typed literals) | `"http://www.w3.org/2001/XMLSchema#integer"` |
+ | `object_language` | string | Language tag (for language-tagged literals) | `"en"` |
+
+ ### Example: RDF Triple Representation
+
+ **Original RDF (Turtle)**:
+ ```turtle
+ <http://example.org/John> <http://schema.org/name> "John Doe"@en .
+ ```
+
+ **HuggingFace Dataset Row**:
+ ```python
+ {
+     "subject": "http://example.org/John",
+     "predicate": "http://schema.org/name",
+     "object": "John Doe",
+     "object_type": "literal",
+     "object_datatype": None,
+     "object_language": "en"
+ }
+ ```
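For contrast with the language-tagged literal above, a typed literal fills `object_datatype` instead of `object_language`. A minimal stdlib sketch of the same row construction (the `triple_to_row` helper and the `age` predicate are illustrative, not part of the dataset or the conversion tool):

```python
# Hypothetical helper: build one 6-field row from a triple's components.
# Exactly one of object_datatype / object_language is set for a given literal.
def triple_to_row(subject, predicate, obj_value, object_type,
                  object_datatype=None, object_language=None):
    return {
        "subject": subject,
        "predicate": predicate,
        "object": obj_value,
        "object_type": object_type,
        "object_datatype": object_datatype,   # set for typed literals
        "object_language": object_language,   # set for language-tagged literals
    }

# A typed literal: "42"^^xsd:integer (illustrative predicate)
row = triple_to_row(
    "http://example.org/John",
    "http://example.org/age",
    "42",
    object_type="literal",
    object_datatype="http://www.w3.org/2001/XMLSchema#integer",
)
print(row["object_datatype"])
```

Note that the lexical form stays a string (`"42"`); the datatype URI is what tells a consumer to interpret it as an integer.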
+
+ ### Loading the Dataset
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset
+ dataset = load_dataset("CleverThis/geonames")
+
+ # Access the single "data" split
+ data = dataset["data"]
+
+ # Iterate over triples
+ for row in data:
+     subject = row["subject"]
+     predicate = row["predicate"]
+     obj = row["object"]
+     obj_type = row["object_type"]
+
+     print(f"Triple: ({subject}, {predicate}, {obj})")
+     print(f"  Object type: {obj_type}")
+     if row["object_language"]:
+         print(f"  Language: {row['object_language']}")
+     if row["object_datatype"]:
+         print(f"  Datatype: {row['object_datatype']}")
+ ```
+
+ ### Converting Back to RDF
+
+ The dataset can be converted back to any RDF format (Turtle, N-Triples, RDF/XML, etc.) with **zero information loss**:
+
+ ```python
+ from datasets import load_dataset
+ from rdflib import Graph, URIRef, Literal, BNode
+
+ def convert_to_rdf(dataset_name, output_file="output.ttl", split="data"):
+     """Convert a HuggingFace dataset back to RDF Turtle format."""
+     # Load dataset
+     dataset = load_dataset(dataset_name)
+
+     # Create RDF graph
+     graph = Graph()
+
+     # Convert each row to an RDF triple
+     for row in dataset[split]:
+         # Subject (blank nodes are stored with a "_:" prefix)
+         if row["subject"].startswith("_:"):
+             subject = BNode(row["subject"][2:])
+         else:
+             subject = URIRef(row["subject"])
+
+         # Predicate (always a URI)
+         predicate = URIRef(row["predicate"])
+
+         # Object (depends on object_type)
+         if row["object_type"] == "uri":
+             obj = URIRef(row["object"])
+         elif row["object_type"] == "blank_node":
+             obj = BNode(row["object"][2:])
+         elif row["object_type"] == "literal":
+             if row["object_datatype"]:
+                 obj = Literal(row["object"], datatype=URIRef(row["object_datatype"]))
+             elif row["object_language"]:
+                 obj = Literal(row["object"], lang=row["object_language"])
+             else:
+                 obj = Literal(row["object"])
+
+         graph.add((subject, predicate, obj))
+
+     # Serialize to Turtle (or any RDF format)
+     graph.serialize(output_file, format="turtle")
+     print(f"Exported {len(graph)} triples to {output_file}")
+     return graph
+
+ # Usage
+ graph = convert_to_rdf("CleverThis/geonames", "reconstructed.ttl")
+ ```
+
+ ### Information Preservation Guarantee
+
+ This format preserves **100% of RDF information**:
+
+ - ✅ **URIs**: Exact string representation preserved
+ - ✅ **Literals**: Full text content preserved
+ - ✅ **Datatypes**: XSD and custom datatypes preserved (e.g., `xsd:integer`, `xsd:dateTime`)
+ - ✅ **Language Tags**: BCP 47 language tags preserved (e.g., `@en`, `@fr`, `@ja`)
+ - ✅ **Blank Nodes**: Node structure preserved (identifiers may change but graph isomorphism is maintained)
+
+ **Round-trip guarantee**: Original RDF → HuggingFace → Reconstructed RDF produces **semantically identical** graphs.
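As a quick sanity check (a sketch, not part of the official pipeline): for graphs without blank nodes, two row sets encode the same graph exactly when their triple multisets match, so a round trip can be verified by comparing sorted row keys. Blank-node graphs need a true isomorphism test instead, for which rdflib provides `rdflib.compare.isomorphic`.

```python
# Sketch: compare two row lists as multisets of triples. Sufficient when the
# graph has no blank nodes; otherwise use a graph-isomorphism check.
def triples_match(original_rows, reconstructed_rows):
    """True if both row lists encode the same multiset of triples."""
    def key(row):
        return (
            row["subject"],
            row["predicate"],
            row["object"],
            row["object_type"],
            row.get("object_datatype"),
            row.get("object_language"),
        )
    return sorted(map(key, original_rows)) == sorted(map(key, reconstructed_rows))

rows = [
    {"subject": "http://example.org/John", "predicate": "http://schema.org/name",
     "object": "John Doe", "object_type": "literal",
     "object_datatype": None, "object_language": "en"},
    {"subject": "http://example.org/John",
     "predicate": "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
     "object": "http://schema.org/Person", "object_type": "uri",
     "object_datatype": None, "object_language": None},
]
print(triples_match(rows, list(reversed(rows))))  # → True: row order is irrelevant
```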
+
+ ### Querying the Dataset
+
+ You can filter and query the dataset like any HuggingFace dataset:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("CleverThis/geonames")
+
+ # Find all triples with English literals
+ english_literals = dataset["data"].filter(
+     lambda x: x["object_type"] == "literal" and x["object_language"] == "en"
+ )
+ print(f"Found {len(english_literals)} English literals")
+
+ # Find all rdf:type statements
+ type_statements = dataset["data"].filter(
+     lambda x: "rdf-syntax-ns#type" in x["predicate"]
+ )
+ print(f"Found {len(type_statements)} type statements")
+
+ # Convert to Pandas for analysis
+ import pandas as pd
+ df = dataset["data"].to_pandas()
+
+ # Analyze predicate distribution
+ print(df["predicate"].value_counts())
+ ```
+
+ ### Dataset Splits
+
+ The dataset is published as a single **data** split; derive your own train/test partition (e.g., 95%/5%) for machine learning tasks such as:
+
+ - Knowledge graph completion
+ - Link prediction
+ - Entity embedding
+ - Relation extraction
+ - Graph neural networks
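Since the card's YAML metadata exposes only the `data` split, a 95/5 partition is typically derived at load time; with the `datasets` library this is `dataset["data"].train_test_split(test_size=0.05, seed=42)`. The underlying idea can be sketched with the stdlib alone (the `split_indices` helper is illustrative):

```python
import random

# Stdlib sketch of a reproducible 95/5 split over row indices.
def split_indices(n_rows, test_fraction=0.05, seed=42):
    """Return (train_indices, test_indices) for a seeded random split."""
    indices = list(range(n_rows))
    random.Random(seed).shuffle(indices)
    n_test = int(n_rows * test_fraction)
    return indices[n_test:], indices[:n_test]

train_idx, test_idx = split_indices(1000)
print(len(train_idx), len(test_idx))  # → 950 50
```

Seeding the shuffle keeps the partition stable across runs, which matters when benchmarking link-prediction or completion models on the same held-out triples.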
+
+ ### Format Specification
+
+ For complete technical documentation of the RDF-to-HuggingFace format, see:
+
+ 📖 [RDF to HuggingFace Format Specification](https://github.com/CleverThis/cleverernie/blob/master/docs/rdf_huggingface_format_specification.md)
+
+ The specification includes:
+ - Detailed schema definition
+ - All RDF node type mappings
+ - Performance benchmarks
+ - Edge cases and limitations
+ - Complete code examples
+
+ ### Conversion Metadata
+
+ - **Source Format**: RDF/XML
+ - **Original Size**: 3.0 GB
+ - **Conversion Tool**: [CleverErnie RDF Pipeline](https://github.com/CleverThis/cleverernie)
+ - **Format Version**: 1.0
+ - **Conversion Date**: 2025-11-04
+
+
+ ## Citation
+
+ If you use this dataset, please cite the original source:
+
+ **Original Dataset:** GeoNames RDF
+ **URL:** http://download.geonames.org/all-geonames-rdf.zip
+ **License:** CC BY 4.0
+
+ ## Dataset Preparation
+
+ This dataset was prepared using the CleverErnie GISM framework:
+
+ ```bash
+ # Download original dataset
+ cleverernie download-dataset -d geonames
+
+ # Convert to HuggingFace format
+ python scripts/convert_rdf_to_hf_dataset.py \
+     datasets/geonames/[file] \
+     hf_datasets/geonames \
+     --format xml
+
+ # Upload to HuggingFace Hub
+ python scripts/upload_all_datasets.py --dataset geonames
+ ```
+
+ ## Additional Information
+
+ ### Original Source
+
+ http://download.geonames.org/all-geonames-rdf.zip
+
+ ### Conversion Details
+
+ - Converted using: [CleverErnie GISM](https://github.com/cleverthis/cleverernie)
+ - Conversion script: `scripts/convert_rdf_to_hf_dataset.py`
+ - Published split: a single `data` split
+
+ ### Maintenance
+
+ This dataset is maintained by the CleverThis organization.