---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
language:
- en
tags:
- geospatial
- places
- points of interest
- poi
- location-data
- activities
- landmarks
- venues
pretty_name: Places
size_categories:
- 10K<n<100K
---

# Places Dataset

### Dataset Summary

This dataset contains information on roughly 70,000 places, including geographic coordinates, place descriptions, categorization through attribution tags, and some contact details such as websites and social media handles.

### LLM Applications

This dataset is particularly valuable for training and fine-tuning Large Language Models (LLMs) for geospatial understanding:

- **Geospatial Question Answering**: Train LLMs to answer location-based queries like "What coffee shops are near Central Park?" or "Find museums in Paris"
- **Location-Aware Text Generation**: Enable LLMs to generate contextually relevant descriptions that incorporate local geography, landmarks, and spatial relationships
- **Address Parsing and Normalization**: The cleaned address formats provide training data for LLMs to understand and standardize various address formats from different countries
- **Multilingual Geographic Entity Recognition**: With places from multiple countries, LLMs can learn to recognize and disambiguate location names across languages
- **Spatial Reasoning**: Train models to understand spatial relationships, distances, and geographic hierarchies (neighborhood → city → state → country)
- **Travel and Tourism Assistants**: Build LLMs that can provide recommendations based on location, tags, and place descriptions
- **Local Business Understanding**: Help LLMs understand business categories, operating contexts, and location-specific services

### Supported Tasks and Leaderboards

This dataset can be used for various tasks including:
- **Location-based recommendation systems**
- **Geographic information retrieval**
- **Place categorization and tagging**
- **Spatial analysis and clustering**
- **Geospatial NLP and language understanding**
- **Cross-lingual location entity linking**

### Languages

The dataset is primarily in English: place names and descriptions are given in English, though the places themselves span multiple countries.

## Structure

### Data Instances

The dataset consists of 5 CSV files with relational structure:

1. **place.csv** - Main places table
2. **location.csv** - Geographic coordinates
3. **place_contact.csv** - Contact and social media information
4. **tag.csv** - Categorization tags
5. **place_tag.csv** - Many-to-many relationship between places and tags
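
If you prefer to work with the raw CSVs directly, the five tables can be loaded into pandas DataFrames — a minimal sketch, assuming the filenames match the table names listed above:

```python
import os
import pandas as pd

TABLE_NAMES = ["place", "location", "place_contact", "tag", "place_tag"]

def load_tables(data_dir):
    """Load each CSV found in data_dir into a DataFrame, keyed by table name."""
    tables = {}
    for name in TABLE_NAMES:
        path = os.path.join(data_dir, f"{name}.csv")
        if os.path.exists(path):
            tables[name] = pd.read_csv(path)
    return tables
```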

### Data Fields

#### place.csv
- `id`: Unique identifier (UUID)
- `created_at`, `updated_at`: Timestamps
- `name`: Place name
- `description`: Place description
- `address`, `address_formatted`: Street address information
- `locality`, `administrative_area`, `postal_code`, `country_code`: Location details
- `verified`, `flagged`: Boolean status flags
- Various foreign keys linking to other tables

#### location.csv
- `id`: Unique identifier (UUID)
- `latitude`, `longitude`: GPS coordinates (WGS84/EPSG:4326 coordinate system)
- `horizontal_accuracy`, `vertical_accuracy`: Accuracy metrics in meters
- `altitude`: Elevation data in meters above sea level
- `geom`: PostGIS geometry column (projected coordinates, likely Web Mercator EPSG:3857)
- `geog`: PostGIS geography column (unprojected lat/lon coordinates for accurate distance calculations)
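
If the `geom` column is indeed Web Mercator (the note above hedges with "likely"), you can cross-check it against `latitude`/`longitude` using the standard spherical Web Mercator forward projection — a sketch that makes no assumption about how `geom` is serialized in the CSV:

```python
import math

R = 6378137.0  # WGS84 semi-major axis in metres, used by EPSG:3857

def to_web_mercator(lat, lon):
    """Project WGS84 lat/lon (degrees) to Web Mercator (EPSG:3857) metres."""
    x = R * math.radians(lon)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
    return x, y
```

Comparing the projected values against a few `geom` rows should confirm (or rule out) the EPSG:3857 assumption before relying on it.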

#### place_contact.csv
- `id`: Unique identifier (UUID)
- `instagram`, `x`: Social media handles
- `website`: Website URL

### Data Splits

This dataset is provided as a single collection without predefined train/validation/test splits. Users should create their own splits based on their specific use case.
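
For example, a reproducible shuffle-and-split over a pandas DataFrame (the fraction and seed here are arbitrary choices, not part of the dataset):

```python
import pandas as pd

def split_frame(df, test_frac=0.1, seed=42):
    """Shuffle df deterministically and split it into (train, test) partitions."""
    shuffled = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
    n_test = int(len(shuffled) * test_frac)
    return shuffled.iloc[n_test:], shuffled.iloc[:n_test]
```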

## Dataset Creation

### Curation Rationale

This dataset was created to provide a comprehensive collection of place information with rich metadata for location-based applications and AI research.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset can be used to build geographically aware LLMs, location-based services, and improve geographic information systems. Consider the impact on local businesses and communities when using this data.

### Discussion of Biases

The dataset may have geographic biases based on where data was collected. Urban areas may be overrepresented compared to rural areas.

### Other Known Limitations

- The dataset represents a snapshot in time and place information may be outdated
- Not all places have complete information across all fields
- Geographic coverage may be limited to specific regions

## Additional Information

### Data Quality Notes

- **Address Formatting**: All addresses have been cleaned and normalized to a consistent format: `{"street address", "city state/province postal", "country"}`
- **Country Standardization**: Country names have been standardized to English (e.g., Deutschland → Germany, États-Unis → United States)
- **Coordinate Accuracy**: Location coordinates include accuracy metrics to help filter by precision requirements
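
Since accuracy metrics are provided, coordinates can be filtered to a precision threshold before use — a sketch using the `horizontal_accuracy` column from location.csv (the 50 m threshold is an arbitrary example):

```python
import pandas as pd

def filter_by_accuracy(locations_df, max_horizontal_m=50.0):
    """Keep rows whose horizontal accuracy is known and within max_horizontal_m metres."""
    mask = locations_df["horizontal_accuracy"].notna() & (
        locations_df["horizontal_accuracy"] <= max_horizontal_m
    )
    return locations_df[mask]
```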

### Licensing Information

This dataset is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.

### Citation Information

```bibtex
@dataset{places_dataset_2025,
  title={Places Dataset},
  author={Patrick Piemonte},
  year={2025},
  publisher={Hugging Face}
}
```

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load all tables
dataset = load_dataset("path/to/places_dataset.py", "all")

# Load individual tables
places = load_dataset("path/to/places_dataset.py", "place")
locations = load_dataset("path/to/places_dataset.py", "location")
tags = load_dataset("path/to/places_dataset.py", "tag")
place_tags = load_dataset("path/to/places_dataset.py", "place_tag")
contacts = load_dataset("path/to/places_dataset.py", "place_contact")

# Load denormalized view (easier to use!)
denormalized = load_dataset("path/to/places_dataset.py", "denormalized")
```

### Working with the Data

#### LLM-Specific Examples

##### Training Data Preparation for Geospatial QA
```python
# Prepare training examples for location-based Q&A
def create_qa_examples(dataset):
    examples = []
    for place in dataset:
        # Create various question-answer pairs
        examples.extend([
            {
                "question": f"What is the address of {place['name']}?",
                "answer": place['address_formatted']
            },
            {
                "question": f"Where is {place['name']} located?",
                "answer": f"{place['name']} is located at {place['address']} in {place['locality']}, {place['administrative_area']}, {place['country']}."
            },
            {
                "question": f"What type of place is {place['name']}?",
                "answer": f"{place['name']} is a {place['primary_tag_name']}." if place['primary_tag_name'] else f"Information about the type of {place['name']} is not available."
            }
        ])
        
        if place['description']:
            examples.append({
                "question": f"Tell me about {place['name']}",
                "answer": place['description']
            })
    
    return examples

# Load denormalized data for easy access to all fields
dataset = load_dataset("path/to/places_dataset.py", "denormalized")['train']
qa_examples = create_qa_examples(dataset)
```

##### Spatial Context Generation
```python
# Generate spatial context descriptions for LLM training
def generate_spatial_context(place, nearby_places):
    context = f"{place['name']} is located at {place['address']} in {place['locality']}."
    
    if nearby_places:
        context += f" Nearby places include: "
        nearby_names = [f"{p['name']} ({p['distance_km']:.1f}km)" for p in nearby_places[:5]]
        context += ", ".join(nearby_names) + "."
    
    if place['primary_tag_name']:
        context += f" It is categorized as a {place['primary_tag_name']}."
    
    return context

# Example usage: find_nearby_places (defined in the distance example below)
# expects a pandas DataFrame with latitude/longitude columns
import pandas as pd

df = pd.DataFrame(dataset)
center_place = df.iloc[0]
nearby = find_nearby_places(df, center_place['latitude'], center_place['longitude'], radius_km=2)
spatial_description = generate_spatial_context(center_place, nearby.to_dict('records'))
```

#### Basic Place Information
```python
# Load places with their basic information
places = load_dataset("path/to/places_dataset.py", "place")['train']

# Access place data
for place in places.select(range(5)):
    print(f"Name: {place['name']}")
    print(f"Address: {place['address_formatted']}")
    print(f"Locality: {place['locality']}, {place['administrative_area']}")
    print("---")
```

#### Using the Denormalized View (Recommended)
```python
# Load denormalized data - includes places, locations, primary tags, and contacts
dataset = load_dataset("path/to/places_dataset.py", "denormalized")['train']

# Now you have everything in one table!
for place in dataset.select(range(5)):
    print(f"Name: {place['name']}")
    print(f"Location: {place['latitude']}, {place['longitude']}")
    print(f"Primary Tag: {place['primary_tag_name']}")
    print(f"Website: {place['website']}")
    print("---")

# Easy filtering with coordinates
import pandas as pd
df = pd.DataFrame(dataset)
# Find verified places with coordinates
verified_with_coords = df[
    (df['verified'] == True) & 
    (df['latitude'].notna()) & 
    (df['longitude'].notna())
]
```

#### Joining Tables (Places with Locations)
```python
import pandas as pd

# Load as pandas DataFrames for easier joining
places = load_dataset("path/to/places_dataset.py", "place")['train']
places_df = pd.DataFrame(places)
locations = load_dataset("path/to/places_dataset.py", "location")['train']
locations_df = pd.DataFrame(locations)

# Join places with their locations
places_with_coords = places_df.merge(
    locations_df, 
    left_on='location_id', 
    right_on='id', 
    suffixes=('', '_loc')
)

# Filter places in a specific area (e.g., within a bounding box)
def filter_by_bbox(df, min_lat, max_lat, min_lon, max_lon):
    return df[
        (df['latitude'] >= min_lat) & 
        (df['latitude'] <= max_lat) &
        (df['longitude'] >= min_lon) & 
        (df['longitude'] <= max_lon)
    ]

# Example: Find places in San Francisco area
sf_places = filter_by_bbox(places_with_coords, 37.7, 37.8, -122.5, -122.4)
```

#### Working with Tags
```python
# Load all necessary tables
places_df = pd.DataFrame(load_dataset("path/to/places_dataset.py", "place")['train'])
tags_df = pd.DataFrame(load_dataset("path/to/places_dataset.py", "tag")['train'])
place_tags_df = pd.DataFrame(load_dataset("path/to/places_dataset.py", "place_tag")['train'])

# Get all tags for a specific place
def get_place_tags(place_id):
    # Find all tag relationships for this place
    tag_ids = place_tags_df[place_tags_df['place_id'] == place_id]['tag_id']
    # Get tag details
    return tags_df[tags_df['id'].isin(tag_ids)]

# Find all places with a specific tag
def find_places_by_tag(tag_name):
    # Find the tag
    tag = tags_df[tags_df['name'] == tag_name]
    if tag.empty:
        return pd.DataFrame()
    
    tag_id = tag.iloc[0]['id']
    # Find all places with this tag
    place_ids = place_tags_df[place_tags_df['tag_id'] == tag_id]['place_id']
    return places_df[places_df['id'].isin(place_ids)]

# Example: Find all coffee shops
coffee_shops = find_places_by_tag('Coffee Shop')
```

### Using the Alternative Loading Function

```python
from places_dataset import load_places_as_dict

# Load all tables as a dictionary
data = load_places_as_dict('/path/to/data/directory')

# Access individual tables
places = data['place']
locations = data['location']
tags = data['tag']
```

### Working with Geospatial Data

#### Coordinate System Information
- **Latitude/Longitude**: WGS84 (EPSG:4326) - Standard GPS coordinates
- **Geom column**: Projected geometry, useful for visualization and area calculations
- **Geog column**: Geography type for accurate distance and spatial calculations

#### Example: Distance Calculations
```python
import math

import pandas as pd

def haversine_distance(lat1, lon1, lat2, lon2):
    """Calculate distance between two points on Earth in kilometers."""
    R = 6371  # Earth's radius in kilometers
    
    lat1, lon1, lat2, lon2 = map(math.radians, [lat1, lon1, lat2, lon2])
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    
    a = math.sin(dlat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon/2)**2
    c = 2 * math.asin(math.sqrt(a))
    
    return R * c

# Find places near a specific coordinate
def find_nearby_places(places_with_coords, center_lat, center_lon, radius_km):
    nearby = []
    for _, place in places_with_coords.iterrows():
        distance = haversine_distance(
            center_lat, center_lon,
            place['latitude'], place['longitude']
        )
        if distance <= radius_km:
            nearby.append({**place.to_dict(), 'distance_km': distance})
    
    return pd.DataFrame(nearby).sort_values('distance_km')

# Example: Find places within 5km of a location
nearby = find_nearby_places(places_with_coords, 37.7749, -122.4194, 5)
```