---
language:
- en
- zh
pretty_name: GeoComp
tags:
- GeoLocation
size_categories:
- 10M<n<100M
---
To prevent cheating, external search engines are banned, and each round is time-limited.
To ensure predictions are human-generated rather than machine-generated, users must register with a phone number, enabling tracking of individual activities.
Using this platform, we collected **GeoComp**, a comprehensive dataset covering 1,000 days of user competition.
|
## File Introduction

The GeoComp dataset is now primarily provided in Parquet format within the `/data` directory for efficient access and processing. You can find the following files in this repository:

* [**`/data/tuxun_combined.parquet`**](https://huggingface.co/datasets/ShirohAO/tuxun/tree/main/data): the main dataset file, containing the combined competition history in Parquet format.
* [**`tuxun_sample.csv`**](https://huggingface.co/datasets/ShirohAO/tuxun/blob/main/tuxun_sample.csv): an example CSV file for previewing the structure of the data.
* [**`selected_panoids`**](https://huggingface.co/datasets/ShirohAO/tuxun/blob/main/selected_panoids): the 500 panoids used in our work. The file has no extension; you can add a `.csv` or `.json` suffix to it.
* [**`download_panoramas.py`**](https://huggingface.co/datasets/ShirohAO/tuxun/blob/main/download_panoramas.py): a script to download street view images using the provided panoids.
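Since `selected_panoids` ships without a file extension, one way to make it open in CSV-aware tools is to rename a downloaded copy. A minimal sketch (the file contents below are a hypothetical stand-in, not the real panoid list):

```python
from pathlib import Path

# Hypothetical stand-in for a downloaded copy of `selected_panoids`
p = Path('selected_panoids')
p.write_text('panoid\nexample_pano_1\n')

# Append a .csv suffix, as suggested above
renamed = p.rename(p.with_suffix('.csv'))
print(renamed.name)  # selected_panoids.csv
```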
|
## Requirement

The **GeoComp** dataset is for research purposes only.
## Start

### Data format of tuxun_combined.parquet

The `tuxun_combined.parquet` file contains data in a similar structure to the original `tuxun_combined.csv`.
**Example Schema:**

| id   | data                | gmt_create      | timestamp |
| ---- | ------------------- | --------------- | --------- |
| Game | JSON-style metadata | 1734188074762.0 |           |

**Explanation:**

* To protect personal privacy, we anonymize identifying fields: the value of the key "userId" is changed to "User", "hostUserId" to "HostUser", "playerIds" to "Players", and "id" to "Game".
* The "data" column is a JSON-style string containing detailed geolocation information such as "lat", "lng", "nation", and "panoId".
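The `gmt_create` value in the example row looks like a Unix timestamp in milliseconds; under that assumption (not confirmed by this README), pandas can convert it to a readable datetime:

```python
import pandas as pd

# Interpreting gmt_create as milliseconds since the Unix epoch (an assumption)
ts = pd.to_datetime(1734188074762, unit='ms')
print(ts)  # 2024-12-14 14:54:34.762000
```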
### Extracting Specific Fields from the 'data' Column

The 'data' column contains rich game-specific information in a JSON string format. To access individual fields like `guessPlace`, `targetPlace`, `score`, or `panoId`, you'll need to parse this JSON string.

Here's a Python example using `pandas` and `json` to extract these fields from the `tuxun_combined.parquet` file:

```python
import pandas as pd
import json

# Assuming your Parquet file is at 'data/tuxun_combined.parquet';
# adjust the file path if necessary
file_path = 'data/tuxun_combined.parquet'

# Read the Parquet file into a DataFrame
df = pd.read_parquet(file_path)

# Parse the 'data' column and extract the desired fields
def extract_game_details(data_json_str):
    try:
        # Parse the JSON string into a Python dictionary
        game_data = json.loads(data_json_str)

        # Initialize variables to None in case a field is missing
        guess_place = None
        target_place = None
        score = None
        pano_id = None

        # Extract guessPlace, targetPlace, and score from 'player' -> 'lastRoundResult'
        if 'player' in game_data and 'lastRoundResult' in game_data['player']:
            last_round_result = game_data['player']['lastRoundResult']
            guess_place = last_round_result.get('guessPlace')
            target_place = last_round_result.get('targetPlace')
            score = last_round_result.get('score')

        # Extract panoId from the first element of the 'rounds' list
        if 'rounds' in game_data and len(game_data['rounds']) > 0:
            pano_id = game_data['rounds'][0].get('panoId')

        return guess_place, target_place, score, pano_id
    except json.JSONDecodeError:
        # Print the first 100 characters for context
        print(f"Error decoding JSON for row: {data_json_str[:100]}...")
        return None, None, None, None
    except KeyError as e:
        print(f"Missing key: {e} in row: {data_json_str[:100]}...")
        return None, None, None, None

# Apply the function to the 'data' column and create new columns in the DataFrame
df[['guessPlace', 'targetPlace', 'score', 'panoId']] = df['data'].apply(
    lambda x: pd.Series(extract_game_details(x))
)

# Display the first few rows with the newly extracted columns
print(df[['id', 'guessPlace', 'targetPlace', 'score', 'panoId']].head())
```
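To make the nested paths used above concrete, here is an illustrative `data` payload built from the field names mentioned in this README; the values and exact structure are hypothetical, and real records contain many more keys:

```python
import json

# Hypothetical example payload: field names follow this README,
# values are invented purely for illustration
sample = json.dumps({
    "player": {
        "lastRoundResult": {
            "guessPlace": {"lat": 48.85, "lng": 2.35, "nation": "France"},
            "targetPlace": {"lat": 48.86, "lng": 2.34, "nation": "France"},
            "score": 4987,
        }
    },
    "rounds": [{"panoId": "hypothetical_pano_id"}],
})

record = json.loads(sample)
print(record["player"]["lastRoundResult"]["score"])  # 4987
print(record["rounds"][0]["panoId"])  # hypothetical_pano_id
```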
## Additional Information