---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- text-generation
tags:
- geo
- seo
- search-engine-optimization
- query-generation
---
# GEO Agent Dataset with Queries
A dataset of web documents paired with generated train/test queries for Generative Engine Optimization (GEO) research.
## Dataset Description
This dataset contains 208 web documents with automatically generated search queries for training and evaluation.
## Features
| Column | Description |
|--------|-------------|
| doc_id | Unique document identifier |
| url | Source URL |
| raw_html | Original HTML content |
| cleaned_text | Parsed plain text content |
| cleaned_text_length | Character count of cleaned text |
| tags | Topic classification tags |
| primary_topic | Main topic category |
| data_source | Original data source |
| query | Original search query |
| train_queries | Generated training queries (~20 per doc) |
| test_queries | Generated test queries (~36 per doc) |
## Statistics
- **Total documents**: 208
- **Avg train queries**: 19.7 per document
- **Avg test queries**: 35.9 per document
- **Total train queries**: 4096
- **Total test queries**: 7457
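
The per-document averages above can be recomputed from the `train_queries` and `test_queries` columns. A minimal sketch, using a hypothetical two-row sample in place of rows loaded from the dataset:

```python
# Hypothetical sample rows; real rows come from
# load_dataset("erv1n/GEO_Agent_with_queries").
examples = [
    {"train_queries": ["q1", "q2"], "test_queries": ["t1", "t2", "t3"]},
    {"train_queries": ["q3"], "test_queries": ["t4"]},
]

# Average number of queries per document for each split.
avg_train = sum(len(ex["train_queries"]) for ex in examples) / len(examples)
avg_test = sum(len(ex["test_queries"]) for ex in examples) / len(examples)
print(avg_train, avg_test)  # → 1.5 2.0
```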
## Usage
```python
from datasets import load_dataset
ds = load_dataset("erv1n/GEO_Agent_with_queries")
# Access queries
for example in ds["train"]:
    print(f"Doc: {example['doc_id']}")
    print(f"Train queries: {example['train_queries'][:3]}")
    print(f"Test queries: {example['test_queries'][:3]}")
    break
```
## Related Datasets
- [erv1n/GEO_Agent](https://huggingface.co/datasets/erv1n/GEO_Agent) - Base dataset without queries