piemonte committed on
Commit 714c51a · verified · 1 Parent(s): f1b02c7

Upload folder using huggingface_hub

Files changed (9)
  1. .gitattributes +3 -0
  2. .gitignore +33 -0
  3. README.md +369 -3
  4. location.csv +3 -0
  5. place.csv +3 -0
  6. place_contact.csv +0 -0
  7. place_tag.csv +3 -0
  8. places_dataset.py +393 -0
  9. tag.csv +0 -0
.gitattributes CHANGED
@@ -57,3 +57,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ location.csv filter=lfs diff=lfs merge=lfs -text
+ place.csv filter=lfs diff=lfs merge=lfs -text
+ place_tag.csv filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,33 @@
+ # Claude Code configuration
+ .claude/
+
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+
+ # Virtual environments
+ venv/
+ env/
+ ENV/
+
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+
+ # OS
+ .DS_Store
+ Thumbs.db
+
+ # Temporary files
+ *.tmp
+ *.temp
+ *.log
+
+ # Backup files
+ *_backup_*
+ *.bak
README.md CHANGED
@@ -1,3 +1,369 @@
- ---
- license: cc-by-4.0
- ---
+ ---
+ license: cc-by-4.0
+ task_categories:
+ - text-generation
+ - text-classification
+ language:
+ - en
+ tags:
+ - geospatial
+ - places
+ - points of interest (poi)
+ - location-data
+ - activities
+ pretty_name: Places
+ size_categories:
+ - 10K<n<100K
+ ---
+
+ # Places Dataset
+
+ ### Dataset Summary
+
+ This dataset contains information on roughly 70,000 places, including geographic coordinates, place descriptions, categorization through attribution tags, and, where available, contact details and social media presence.
+
+ ### LLM Applications
+
+ This dataset is particularly valuable for training and fine-tuning Large Language Models (LLMs) for geospatial understanding:
+
+ - **Geospatial Question Answering**: Train LLMs to answer location-based queries like "What coffee shops are near Central Park?" or "Find museums in Paris"
+ - **Location-Aware Text Generation**: Enable LLMs to generate contextually relevant descriptions that incorporate local geography, landmarks, and spatial relationships
+ - **Address Parsing and Normalization**: The cleaned address formats provide training data for LLMs to understand and standardize address formats from different countries
+ - **Multilingual Geographic Entity Recognition**: With places from multiple countries, LLMs can learn to recognize and disambiguate location names across languages
+ - **Spatial Reasoning**: Train models to understand spatial relationships, distances, and geographic hierarchies (neighborhood → city → state → country)
+ - **Travel and Tourism Assistants**: Build LLMs that can provide recommendations based on location, tags, and place descriptions
+ - **Local Business Understanding**: Help LLMs understand business categories, operating contexts, and location-specific services
+
+ ### Supported Tasks and Leaderboards
+
+ This dataset can be used for various tasks, including:
+ - **Location-based recommendation systems**
+ - **Geographic information retrieval**
+ - **Place categorization and tagging**
+ - **Spatial analysis and clustering**
+ - **Geospatial NLP and language understanding**
+ - **Cross-lingual location entity linking**
+
+ ### Languages
+
+ The dataset is primarily in English; place names and descriptions are in English.
+
+ ## Structure
+
+ ### Data Instances
+
+ The dataset consists of five CSV files with a relational structure:
+
+ 1. **place.csv** - Main places table
+ 2. **location.csv** - Geographic coordinates
+ 3. **place_contact.csv** - Contact and social media information
+ 4. **tag.csv** - Categorization tags
+ 5. **place_tag.csv** - Many-to-many relationship between places and tags
+
+ ### Data Fields
+
+ #### place.csv
+ - `id`: Unique identifier (UUID)
+ - `created_at`, `updated_at`: Timestamps
+ - `name`: Place name
+ - `description`: Place description
+ - `address`, `address_formatted`: Street address information
+ - `locality`, `administrative_area`, `postal_code`, `country_code`: Location details
+ - `verified`, `flagged`: Boolean status flags
+ - Various foreign keys linking to other tables
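As a quick sanity check on the schema above, the table can be read with nothing but the standard library. A minimal sketch (the two sample rows and the reduced column set are illustrative, not real dataset content):

```python
import csv
import io

# Illustrative stand-in for a slice of place.csv; real rows carry many more columns
sample = io.StringIO(
    "id,name,locality,administrative_area,country_code,verified\n"
    "a1,Blue Bottle,San Francisco,CA,US,true\n"
    "b2,Tate Modern,London,England,GB,false\n"
)

places = list(csv.DictReader(sample))
# CSV values are strings, so the boolean flags arrive as "true"/"false"
verified_names = [p["name"] for p in places if p["verified"] == "true"]
```

Note that type conversion (booleans, numbers) is the caller's responsibility when bypassing the loading script.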
+
+ #### location.csv
+ - `id`: Unique identifier (UUID)
+ - `latitude`, `longitude`: GPS coordinates (WGS84/EPSG:4326 coordinate system)
+ - `horizontal_accuracy`, `vertical_accuracy`: Accuracy metrics in meters
+ - `altitude`: Elevation in meters above sea level
+ - `geom`: PostGIS geometry column (projected coordinates, likely Web Mercator EPSG:3857)
+ - `geog`: PostGIS geography column (unprojected lat/lon coordinates for accurate distance calculations)
+
+ #### place_contact.csv
+ - `id`: Unique identifier (UUID)
+ - `instagram`, `x`: Social media handles
+ - `website`: Website URL
+
+ ### Data Splits
+
+ This dataset is provided as a single collection without predefined train/validation/test splits. Users should create their own splits based on their specific use case.
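Since splits are left to the user, one deterministic way to carve out train/validation/test is to shuffle with a fixed seed and slice. A sketch (the `make_splits` helper and the toy `rows` list are illustrative, not part of the dataset):

```python
import random

def make_splits(rows, train_frac=0.8, val_frac=0.1, seed=42):
    """Shuffle indices deterministically and slice into train/val/test."""
    rng = random.Random(seed)
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    n_train = int(len(rows) * train_frac)
    n_val = int(len(rows) * val_frac)
    train = [rows[i] for i in idx[:n_train]]
    val = [rows[i] for i in idx[n_train:n_train + n_val]]
    test = [rows[i] for i in idx[n_train + n_val:]]
    return train, val, test

# Toy rows standing in for place records
rows = [{"id": i} for i in range(100)]
train, val, test = make_splits(rows)
```

Fixing the seed makes the split reproducible across runs; splitting by region or country instead of uniformly at random may be preferable for geospatial generalization tests.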
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ This dataset was created to provide a comprehensive collection of place information with rich metadata for location-based applications and AI research.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ This dataset can be used to build geographically aware LLMs and location-based services, and to improve geographic information systems. Consider the impact on local businesses and communities when using this data.
+
+ ### Discussion of Biases
+
+ The dataset may have geographic biases based on where data was collected; urban areas may be overrepresented compared to rural areas.
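One way to gauge that skew before training is to tally places per country. A minimal sketch (the sample rows are illustrative; real rows come from place.csv):

```python
import collections

def coverage_by_country(places):
    """Count places per country to surface geographic skew."""
    counts = collections.Counter(p.get("country") or "unknown" for p in places)
    return counts.most_common()

# Illustrative rows; note that missing countries are bucketed as "unknown"
places = [
    {"country": "United States"},
    {"country": "United States"},
    {"country": "Germany"},
    {"country": None},
]
coverage = coverage_by_country(places)
```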
+
+ ### Other Known Limitations
+
+ - The dataset represents a snapshot in time, and place information may be outdated
+ - Not all places have complete information across all fields
+ - Geographic coverage may be limited to specific regions
+
+ ## Additional Information
+
+ ### Data Quality Notes
+
+ - **Address Formatting**: All addresses have been cleaned and normalized to a consistent format: `{"street address", "city state/province postal", "country"}`
+ - **Country Standardization**: Country names have been standardized to English (e.g., Deutschland → Germany, États-Unis → United States)
+ - **Coordinate Accuracy**: Location coordinates include accuracy metrics to help filter by precision requirements
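For the last point, filtering on `horizontal_accuracy` might look like the sketch below (the 50 m threshold and the sample rows are illustrative; raw CSV values arrive as strings):

```python
def filter_by_accuracy(rows, max_horizontal_accuracy_m=50.0):
    """Keep location rows whose horizontal accuracy is within the threshold (meters)."""
    kept = []
    for row in rows:
        acc = row.get("horizontal_accuracy")
        if acc in (None, ""):
            continue  # drop rows with unknown accuracy
        if float(acc) <= max_horizontal_accuracy_m:
            kept.append(row)
    return kept

# Illustrative rows mirroring location.csv fields
rows = [
    {"id": "a", "horizontal_accuracy": "12.5"},
    {"id": "b", "horizontal_accuracy": "120.0"},
    {"id": "c", "horizontal_accuracy": ""},
]
precise = filter_by_accuracy(rows)
```

Whether rows with unknown accuracy should be dropped or kept depends on the downstream task.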
+
+ ### Licensing Information
+
+ This dataset is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
+
+ ### Citation Information
+
+ ```bibtex
+ @dataset{places_dataset_2025,
+   title={Places Dataset},
+   author={patrick piemonte},
+   year={2025},
+   publisher={Hugging Face}
+ }
+ ```
+
+ ## Usage
+
+ ### Loading the Dataset
+
+ ```python
+ from datasets import load_dataset
+
+ # Load all tables (recent datasets versions may also require trust_remote_code=True)
+ dataset = load_dataset("path/to/places_dataset.py", "all")
+
+ # Load individual tables
+ places = load_dataset("path/to/places_dataset.py", "place")
+ locations = load_dataset("path/to/places_dataset.py", "location")
+ tags = load_dataset("path/to/places_dataset.py", "tag")
+ place_tags = load_dataset("path/to/places_dataset.py", "place_tag")
+ contacts = load_dataset("path/to/places_dataset.py", "place_contact")
+
+ # Load the denormalized view (easiest to use)
+ denormalized = load_dataset("path/to/places_dataset.py", "denormalized")
+ ```
+
+ ### Working with the Data
+
+ #### LLM-Specific Examples
+
+ ##### Training Data Preparation for Geospatial QA
+ ```python
+ # Prepare training examples for location-based Q&A
+ def create_qa_examples(dataset):
+     examples = []
+     for place in dataset:
+         # Create various question-answer pairs
+         examples.extend([
+             {
+                 "question": f"What is the address of {place['name']}?",
+                 "answer": place['address_formatted']
+             },
+             {
+                 "question": f"Where is {place['name']} located?",
+                 "answer": f"{place['name']} is located at {place['address']} in {place['locality']}, {place['administrative_area']}, {place['country']}."
+             },
+             {
+                 "question": f"What type of place is {place['name']}?",
+                 "answer": f"{place['name']} is a {place['primary_tag_name']}." if place['primary_tag_name'] else f"Information about the type of {place['name']} is not available."
+             }
+         ])
+
+         if place['description']:
+             examples.append({
+                 "question": f"Tell me about {place['name']}",
+                 "answer": place['description']
+             })
+
+     return examples
+
+ # Load denormalized data for easy access to all fields
+ dataset = load_dataset("path/to/places_dataset.py", "denormalized")['train']
+ qa_examples = create_qa_examples(dataset)
+ ```
+
+ ##### Spatial Context Generation
+ ```python
+ import pandas as pd
+
+ # Generate spatial context descriptions for LLM training
+ def generate_spatial_context(place, nearby_places):
+     context = f"{place['name']} is located at {place['address']} in {place['locality']}."
+
+     if nearby_places:
+         context += " Nearby places include: "
+         nearby_names = [f"{p['name']} ({p['distance_km']:.1f}km)" for p in nearby_places[:5]]
+         context += ", ".join(nearby_names) + "."
+
+     if place['primary_tag_name']:
+         context += f" It is categorized as a {place['primary_tag_name']}."
+
+     return context
+
+ # Example usage: find_nearby_places (defined in the distance example below)
+ # expects a pandas DataFrame and returns one, so convert on both sides
+ center_place = dataset[0]
+ nearby_df = find_nearby_places(pd.DataFrame(dataset), center_place['latitude'], center_place['longitude'], 2)
+ spatial_description = generate_spatial_context(center_place, nearby_df.to_dict('records'))
+ ```
+
+ #### Basic Place Information
+ ```python
+ # Load places with their basic information
+ places = load_dataset("path/to/places_dataset.py", "place")['train']
+
+ # Access place data
+ for place in places.select(range(5)):
+     print(f"Name: {place['name']}")
+     print(f"Address: {place['address_formatted']}")
+     print(f"Locality: {place['locality']}, {place['administrative_area']}")
+     print("---")
+ ```
+
+ #### Using the Denormalized View (Recommended)
+ ```python
+ # Load denormalized data - includes places, locations, primary tags, and contacts
+ dataset = load_dataset("path/to/places_dataset.py", "denormalized")['train']
+
+ # Everything is available in one table
+ for place in dataset.select(range(5)):
+     print(f"Name: {place['name']}")
+     print(f"Location: {place['latitude']}, {place['longitude']}")
+     print(f"Primary Tag: {place['primary_tag_name']}")
+     print(f"Website: {place['website']}")
+     print("---")
+
+ # Easy filtering with coordinates
+ import pandas as pd
+ df = pd.DataFrame(dataset)
+ # Find verified places with coordinates
+ verified_with_coords = df[
+     (df['verified'] == True) &
+     (df['latitude'].notna()) &
+     (df['longitude'].notna())
+ ]
+ ```
+
+ #### Joining Tables (Places with Locations)
+ ```python
+ import pandas as pd
+
+ # Load as pandas DataFrames for easier joining
+ places_df = pd.DataFrame(places)
+ locations = load_dataset("path/to/places_dataset.py", "location")['train']
+ locations_df = pd.DataFrame(locations)
+
+ # Join places with their locations
+ places_with_coords = places_df.merge(
+     locations_df,
+     left_on='location_id',
+     right_on='id',
+     suffixes=('', '_loc')
+ )
+
+ # Filter places in a specific area (e.g., within a bounding box)
+ def filter_by_bbox(df, min_lat, max_lat, min_lon, max_lon):
+     return df[
+         (df['latitude'] >= min_lat) &
+         (df['latitude'] <= max_lat) &
+         (df['longitude'] >= min_lon) &
+         (df['longitude'] <= max_lon)
+     ]
+
+ # Example: Find places in the San Francisco area
+ sf_places = filter_by_bbox(places_with_coords, 37.7, 37.8, -122.5, -122.4)
+ ```
+
+ #### Working with Tags
+ ```python
+ # Load all necessary tables
+ places_df = pd.DataFrame(load_dataset("path/to/places_dataset.py", "place")['train'])
+ tags_df = pd.DataFrame(load_dataset("path/to/places_dataset.py", "tag")['train'])
+ place_tags_df = pd.DataFrame(load_dataset("path/to/places_dataset.py", "place_tag")['train'])
+
+ # Get all tags for a specific place
+ def get_place_tags(place_id):
+     # Find all tag relationships for this place
+     tag_ids = place_tags_df[place_tags_df['place_id'] == place_id]['tag_id']
+     # Get tag details
+     return tags_df[tags_df['id'].isin(tag_ids)]
+
+ # Find all places with a specific tag
+ def find_places_by_tag(tag_name):
+     # Find the tag
+     tag = tags_df[tags_df['name'] == tag_name]
+     if tag.empty:
+         return pd.DataFrame()
+
+     tag_id = tag.iloc[0]['id']
+     # Find all places with this tag
+     place_ids = place_tags_df[place_tags_df['tag_id'] == tag_id]['place_id']
+     return places_df[places_df['id'].isin(place_ids)]
+
+ # Example: Find all coffee shops
+ coffee_shops = find_places_by_tag('Coffee Shop')
+ ```
+
+ ### Using the Alternative Loading Function
+
+ ```python
+ from places_dataset import load_places_as_dict
+
+ # Load all tables as a dictionary
+ data = load_places_as_dict('/path/to/data/directory')
+
+ # Access individual tables
+ places = data['place']
+ locations = data['location']
+ tags = data['tag']
+ ```
+
+ ### Working with Geospatial Data
+
+ #### Coordinate System Information
+ - **Latitude/Longitude**: WGS84 (EPSG:4326) - standard GPS coordinates
+ - **Geom column**: Projected geometry, useful for visualization and area calculations
+ - **Geog column**: Geography type for accurate distance and spatial calculations
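If the `geom` column is indeed Web Mercator (EPSG:3857), values can be cross-checked against the standard spherical projection. A sketch of that formula (an assumption about the column's CRS, not a documented property of the dataset):

```python
import math

def wgs84_to_web_mercator(lat, lon):
    """Project WGS84 lat/lon in degrees to Web Mercator (EPSG:3857) meters."""
    R = 6378137.0  # WGS84 semi-major axis in meters
    x = R * math.radians(lon)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
    return x, y

# San Francisco: x is strongly negative (west), y around 4.5 million meters
x, y = wgs84_to_web_mercator(37.7749, -122.4194)
```

The projection is undefined at the poles and distorts distances away from the equator, which is why the `geog` column is the better choice for distance math.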
+
+ #### Example: Distance Calculations
+ ```python
+ import math
+
+ import pandas as pd
+
+ def haversine_distance(lat1, lon1, lat2, lon2):
+     """Calculate the great-circle distance between two points on Earth in kilometers."""
+     R = 6371  # Earth's mean radius in kilometers
+
+     lat1, lon1, lat2, lon2 = map(math.radians, [lat1, lon1, lat2, lon2])
+     dlat = lat2 - lat1
+     dlon = lon2 - lon1
+
+     a = math.sin(dlat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon/2)**2
+     c = 2 * math.asin(math.sqrt(a))
+
+     return R * c
+
+ # Find places near a specific coordinate
+ def find_nearby_places(places_with_coords, center_lat, center_lon, radius_km):
+     nearby = []
+     for _, place in places_with_coords.iterrows():
+         distance = haversine_distance(
+             center_lat, center_lon,
+             place['latitude'], place['longitude']
+         )
+         if distance <= radius_km:
+             nearby.append({**place.to_dict(), 'distance_km': distance})
+
+     if not nearby:
+         return pd.DataFrame(columns=['distance_km'])
+     return pd.DataFrame(nearby).sort_values('distance_km')
+
+ # Example: Find places within 5km of a location
+ nearby = find_nearby_places(places_with_coords, 37.7749, -122.4194, 5)
+ ```
location.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab6c673d136b465570f4b8d37c699c4596f2810e28bd18a75f723f3d38111cd1
+ size 13902040
place.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d9403f683f5d52f5c6f3740188ad5164677aea8dbc509d8a09fee7dcb41f3dff
+ size 33321971
place_contact.csv ADDED
The diff for this file is too large to render. See raw diff
 
place_tag.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f20bed97bc0300596de6605d95cc231199ec9d651c64608bb371900ab37b3e49
+ size 26521057
places_dataset.py ADDED
@@ -0,0 +1,393 @@
+ """Places Dataset Loading Script for Hugging Face"""
+
+ import csv
+ import json
+ import os
+ import datasets
+ from typing import Dict, List, Any
+
+
+ _CITATION = """\
+ @dataset{places_dataset_2025,
+   title={Places Dataset},
+   author={patrick piemonte},
+   year={2025},
+   publisher={Hugging Face}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This dataset contains information about close to 70,000 places with associated metadata including
+ locations, attribution tags, and contact details. The data includes geographic coordinates,
+ place descriptions, categorization through attribution tags, and social media presence information.
+ """
+
+ _HOMEPAGE = ""
+ _LICENSE = "cc-by-4.0"
+
+ _URLS = {
+     "place": "place.csv",
+     "location": "location.csv",
+     "place_contact": "place_contact.csv",
+     "tag": "tag.csv",
+     "place_tag": "place_tag.csv",
+ }
+
+
+ class PlacesDataset(datasets.GeneratorBasedBuilder):
+     """Places dataset with multiple related tables."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="all", version=VERSION, description="Load all tables"),
+         datasets.BuilderConfig(name="place", version=VERSION, description="Main places table"),
+         datasets.BuilderConfig(name="location", version=VERSION, description="Geographic coordinates"),
+         datasets.BuilderConfig(name="place_contact", version=VERSION, description="Contact information"),
+         datasets.BuilderConfig(name="tag", version=VERSION, description="Categorization tags"),
+         datasets.BuilderConfig(name="place_tag", version=VERSION, description="Place-tag relationships"),
+         datasets.BuilderConfig(name="denormalized", version=VERSION, description="Denormalized view with places, locations, and primary tags"),
+     ]
+
+     DEFAULT_CONFIG_NAME = "all"
+
+     def _info(self):
+         if self.config.name == "place":
+             features = datasets.Features({
+                 "id": datasets.Value("string"),
+                 "created_at": datasets.Value("string"),
+                 "name": datasets.Value("string"),
+                 "description": datasets.Value("string"),
+                 "address": datasets.Value("string"),
+                 "address_formatted": datasets.Value("string"),
+                 "cross_street": datasets.Value("string"),
+                 "locality": datasets.Value("string"),
+                 "administrative_area": datasets.Value("string"),
+                 "postal_code": datasets.Value("string"),
+                 "country_code": datasets.Value("string"),
+                 "verified": datasets.Value("bool"),
+                 "flagged": datasets.Value("bool"),
+                 "place_contact_id": datasets.Value("string"),
+                 "location_id": datasets.Value("string"),
+                 "author_id": datasets.Value("string"),
+                 "owner_id": datasets.Value("string"),
+                 "locale_id": datasets.Value("string"),
+                 "primary_tag_id": datasets.Value("string"),
+                 "country": datasets.Value("string"),
+                 "sublocality": datasets.Value("string"),
+                 "subadministrative_area": datasets.Value("string"),
+                 "updated_at": datasets.Value("string"),
+                 "radius_in_meters": datasets.Value("int32"),
+                 "stamp_id": datasets.Value("string"),
+                 "z_priority": datasets.Value("int32"),
+                 "clustering_category": datasets.Value("string"),
+                 "places_token_id": datasets.Value("string"),
+                 "nano_id": datasets.Value("string"),
+                 "slug": datasets.Value("string"),
+             })
+         elif self.config.name == "location":
+             features = datasets.Features({
+                 "id": datasets.Value("string"),
+                 "latitude": datasets.Value("float64"),
+                 "longitude": datasets.Value("float64"),
+                 "horizontal_accuracy": datasets.Value("float32"),
+                 "altitude": datasets.Value("float32"),
+                 "vertical_accuracy": datasets.Value("float32"),
+                 "geom": datasets.Value("string"),
+                 "geog": datasets.Value("string"),
+             })
+         elif self.config.name == "place_contact":
+             features = datasets.Features({
+                 "id": datasets.Value("string"),
+                 "instagram": datasets.Value("string"),
+                 "x": datasets.Value("string"),
+                 "website": datasets.Value("string"),
+             })
+         elif self.config.name == "tag":
+             features = datasets.Features({
+                 "id": datasets.Value("string"),
+                 "created_at": datasets.Value("string"),
+                 "name": datasets.Value("string"),
+                 "slug": datasets.Value("string"),
+                 "private_tag": datasets.Value("bool"),
+                 "tag_type": datasets.Value("string"),
+                 "image_asset_id": datasets.Value("string"),
+                 "theme_asset_id": datasets.Value("string"),
+                 "search_tokens": datasets.Value("string"),
+                 "keywords": datasets.Value("string"),
+                 "radius_in_meters": datasets.Value("int32"),
+                 "content_rating": datasets.Value("string"),
+                 "stamp_id": datasets.Value("string"),
+                 "hidden_tag": datasets.Value("bool"),
+             })
+         elif self.config.name == "place_tag":
+             features = datasets.Features({
+                 "id": datasets.Value("string"),
+                 "tag_id": datasets.Value("string"),
+                 "place_id": datasets.Value("string"),
+                 "created_at": datasets.Value("string"),
+             })
+         elif self.config.name == "denormalized":
+             features = datasets.Features({
+                 # Place fields
+                 "place_id": datasets.Value("string"),
+                 "name": datasets.Value("string"),
+                 "description": datasets.Value("string"),
+                 "address": datasets.Value("string"),
+                 "address_formatted": datasets.Value("string"),
+                 "locality": datasets.Value("string"),
+                 "administrative_area": datasets.Value("string"),
+                 "postal_code": datasets.Value("string"),
+                 "country_code": datasets.Value("string"),
+                 "country": datasets.Value("string"),
+                 "verified": datasets.Value("bool"),
+                 # Location fields
+                 "latitude": datasets.Value("float64"),
+                 "longitude": datasets.Value("float64"),
+                 "horizontal_accuracy": datasets.Value("float32"),
+                 "altitude": datasets.Value("float32"),
+                 # Primary tag fields
+                 "primary_tag_name": datasets.Value("string"),
+                 "primary_tag_slug": datasets.Value("string"),
+                 "primary_tag_type": datasets.Value("string"),
+                 # Contact fields
+                 "website": datasets.Value("string"),
+                 "instagram": datasets.Value("string"),
+                 "twitter": datasets.Value("string"),
+             })
+         else:  # "all" config
+             features = datasets.Features({
+                 "table_name": datasets.Value("string"),
+                 "data": datasets.Value("string"),  # JSON string
+             })
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         if self.config.name == "denormalized":
+             downloaded_files = dl_manager.download_and_extract(_URLS)
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepaths": downloaded_files,
+                     },
+                 ),
+             ]
+         elif self.config.name != "all":
+             downloaded_file = dl_manager.download_and_extract(_URLS[self.config.name])
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": downloaded_file,
+                         "table_name": self.config.name,
+                     },
+                 ),
+             ]
+         else:
+             downloaded_files = dl_manager.download_and_extract(_URLS)
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepaths": downloaded_files,
+                     },
+                 ),
+             ]
+
+     def _generate_examples(self, filepath=None, table_name=None, filepaths=None):
+         if self.config.name == "denormalized":
+             # Load all necessary tables for the denormalized view
+             places = {}
+             locations = {}
+             tags = {}
+             contacts = {}
+
+             # Load places
+             with open(filepaths["place"], encoding="utf-8") as f:
+                 reader = csv.DictReader(f)
+                 for row in reader:
+                     places[row["id"]] = row
+
+             # Load locations
+             with open(filepaths["location"], encoding="utf-8") as f:
+                 reader = csv.DictReader(f)
+                 for row in reader:
+                     locations[row["id"]] = row
+
+             # Load tags
+             with open(filepaths["tag"], encoding="utf-8") as f:
+                 reader = csv.DictReader(f)
+                 for row in reader:
+                     tags[row["id"]] = row
+
+             # Load contacts
+             with open(filepaths["place_contact"], encoding="utf-8") as f:
+                 reader = csv.DictReader(f)
+                 for row in reader:
+                     contacts[row["id"]] = row
+
+             # Generate denormalized rows
+             idx = 0
+             for place_id, place in places.items():
+                 result = {
+                     "place_id": place_id,
+                     "name": place.get("name"),
+                     "description": place.get("description"),
+                     "address": place.get("address"),
+                     "address_formatted": place.get("address_formatted"),
+                     "locality": place.get("locality"),
+                     "administrative_area": place.get("administrative_area"),
+                     "postal_code": place.get("postal_code"),
+                     "country_code": place.get("country_code"),
+                     "country": place.get("country"),
+                     "verified": place.get("verified") == "true",
+                 }
+
+                 # Add location data
+                 if place.get("location_id") and place["location_id"] in locations:
+                     loc = locations[place["location_id"]]
+                     # Parse and validate coordinates
+                     lat = None
+                     lon = None
+                     if loc.get("latitude"):
+                         try:
+                             lat = float(loc["latitude"])
+                             if not (-90 <= lat <= 90):
+                                 print(f"Warning: Invalid latitude {lat} for place {place_id}")
+                                 lat = None
+                         except ValueError:
+                             pass
+                     if loc.get("longitude"):
+                         try:
+                             lon = float(loc["longitude"])
+                             if not (-180 <= lon <= 180):
+                                 print(f"Warning: Invalid longitude {lon} for place {place_id}")
+                                 lon = None
+                         except ValueError:
+                             pass
+
+                     result.update({
+                         "latitude": lat,
+                         "longitude": lon,
+                         "horizontal_accuracy": float(loc["horizontal_accuracy"]) if loc.get("horizontal_accuracy") else None,
+                         "altitude": float(loc["altitude"]) if loc.get("altitude") else None,
+                     })
+                 else:
+                     result.update({
+                         "latitude": None,
+                         "longitude": None,
+                         "horizontal_accuracy": None,
+                         "altitude": None,
+                     })
+
+                 # Add primary tag data
+                 if place.get("primary_tag_id") and place["primary_tag_id"] in tags:
+                     tag = tags[place["primary_tag_id"]]
+                     result.update({
+                         "primary_tag_name": tag.get("name"),
+                         "primary_tag_slug": tag.get("slug"),
+                         "primary_tag_type": tag.get("tag_type"),
+                     })
+                 else:
+                     result.update({
+                         "primary_tag_name": None,
+                         "primary_tag_slug": None,
+                         "primary_tag_type": None,
+                     })
+
+                 # Add contact data
+                 if place.get("place_contact_id") and place["place_contact_id"] in contacts:
+                     contact = contacts[place["place_contact_id"]]
+                     result.update({
+                         "website": contact.get("website"),
+                         "instagram": contact.get("instagram"),
+                         "twitter": contact.get("x"),  # the contact CSV stores the handle in its "x" column
+                     })
+                 else:
+                     result.update({
+                         "website": None,
+                         "instagram": None,
+                         "twitter": None,
+                     })
+
+                 yield idx, result
+                 idx += 1
+
+         elif self.config.name != "all":
+             with open(filepath, encoding="utf-8") as f:
+                 reader = csv.DictReader(f)
+                 for idx, row in enumerate(reader):
+                     # Convert boolean strings to actual booleans
+                     for key, value in row.items():
+                         if value in ["true", "false"]:
+                             row[key] = value == "true"
+                         elif value == "":
+                             row[key] = None
+                         # Convert numeric strings
+                         elif key in ["radius_in_meters", "z_priority"]:
+                             try:
+                                 row[key] = int(value) if value else None
+                             except ValueError:
+                                 row[key] = None
+                         elif key in ["latitude", "longitude", "horizontal_accuracy", "altitude", "vertical_accuracy"]:
+                             try:
+                                 if value:
+                                     val = float(value)
+                                     # Validate coordinates
+                                     if key == "latitude" and not (-90 <= val <= 90):
+                                         print(f"Warning: Invalid latitude {val} in row {idx}")
+                                         row[key] = None
+                                     elif key == "longitude" and not (-180 <= val <= 180):
+                                         print(f"Warning: Invalid longitude {val} in row {idx}")
+                                         row[key] = None
+                                     else:
+                                         row[key] = val
+                                 else:
+                                     row[key] = None
+                             except ValueError:
+                                 row[key] = None
+
+                     yield idx, row
+         else:
+             # Load all tables
+             idx = 0
+             for table_name, filepath in filepaths.items():
+                 with open(filepath, encoding="utf-8") as f:
+                     reader = csv.DictReader(f)
+                     rows = list(reader)
+                 yield idx, {
+                     "table_name": table_name,
+                     "data": json.dumps(rows)
+                 }
+                 idx += 1
+
+
+ # Alternative simple loading function for users who prefer plain dictionaries
+ def load_places_as_dict(data_dir: str) -> Dict[str, List[Dict[str, Any]]]:
+     """
+     Load all CSV files from the directory into a dictionary of tables.
+
+     Args:
+         data_dir: Directory containing the CSV files
+
+     Returns:
+         Dictionary where keys are table names and values are lists of row dictionaries
+     """
+     tables = {}
+
+     for table_name, filename in _URLS.items():
+         filepath = os.path.join(data_dir, filename)
+         if os.path.exists(filepath):
+             with open(filepath, encoding="utf-8") as f:
+                 reader = csv.DictReader(f)
+                 tables[table_name] = list(reader)
+
+     return tables
tag.csv ADDED
The diff for this file is too large to render. See raw diff