<div align="center">
  <img src="https://huggingface.co/datasets/GeoGPT-Research-Project/Qwen2.5-72B-GeoGPT/raw/main/geogpt_figure.png" width="99%" alt="GeoGPT" />
</div>
# Description
This dataset is a geoscience-specific subset of [CommonCrawl](https://commoncrawl.org/) used for [GeoGPT](https://github.com/GeoGPT-Research-Project/GeoGPT) training. CommonCrawl is a free and open repository of web crawl data containing over 250 billion web pages, widely used to train leading large language models such as GPT-3, LLaMA, and DeepSeek. We apply data-mining algorithms to extract geoscience-related content from this vast corpus.
This dataset comprises **12,414,268** samples, each containing the following metadata to trace the data source within CommonCrawl:
- **id (string)**: The original unique identifier for each sample from CommonCrawl.
- **dump (string)**: The CommonCrawl dump from which each sample was retrieved.
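As a minimal sketch of how the per-sample metadata above might be consumed, the snippet below groups records by their CommonCrawl `dump` field. The records shown are illustrative placeholders, not real samples from this dataset, and the field values are made up:

```python
from collections import Counter

# Illustrative records with the metadata fields described above;
# the `id` and `dump` values here are hypothetical, not real samples.
samples = [
    {"id": "<urn:uuid:0001>", "dump": "CC-MAIN-2023-40", "text": "Basalt is a fine-grained volcanic rock."},
    {"id": "<urn:uuid:0002>", "dump": "CC-MAIN-2023-50", "text": "Plate tectonics describes lithospheric motion."},
    {"id": "<urn:uuid:0003>", "dump": "CC-MAIN-2023-40", "text": "The Jurassic period spans roughly 56 million years."},
]

# Count how many samples originate from each CommonCrawl dump
dump_counts = Counter(s["dump"] for s in samples)
print(dump_counts)  # → Counter({'CC-MAIN-2023-40': 2, 'CC-MAIN-2023-50': 1})
```

The same pattern applies when iterating over the full dataset, e.g. in streaming mode, to audit which dumps contribute the most samples.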
# Note
This dataset is primarily intended to support geoscience research by serving as a training corpus for large language models. It is specifically designed for non-commercial research and educational purposes.
The dataset is not intended for use in any manner that violates applicable laws or regulations, nor for any activities prohibited by the license agreement.