Update README.md

README.md, changed hunk: @@ -43,4 +43,59 @@ configs:
    path: data/web-*
  - split: debate
    path: data/debate-*
language:
- en
license: apache-2.0
tags:
- SEO
- CSEO
- RAG
- conversational-search-engine
---
## Dataset Summary

**C-SEO Bench** is a benchmark designed to evaluate conversational search engine optimization (C-SEO) techniques across two common tasks: **product recommendation** and **question answering**. Each task spans multiple domains to assess the domain-specific effects and generalization ability of C-SEO methods.

## Supported Tasks and Domains

### Product Recommendation

This task requires an LLM to recommend the top-k products relevant to a user query, using only the content of 10 retrieved product descriptions. The task simulates a cold-start setting with no user profile. Domains:

- **Retail**: Queries and product descriptions from Amazon.
- **Video Games**: Search tags and game descriptions from Steam.
- **Books**: GPT-generated queries with book synopses from the Google Books API.
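The setup above (a query plus the text of 10 candidate descriptions, no user profile) can be sketched as a prompt-construction step. This is an illustrative sketch only: the function name, field layout, and wording are assumptions, not the benchmark's actual harness or schema.

```python
# Hypothetical sketch of the cold-start product-recommendation setup:
# the LLM sees only the user query and the retrieved product descriptions.

def build_recommendation_prompt(query: str, documents: list[str], k: int = 3) -> str:
    """Assemble a prompt asking for the top-k products among the candidates."""
    numbered = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents))
    return (
        f"User query: {query}\n\n"
        f"Candidate products:\n{numbered}\n\n"
        f"Recommend the top-{k} products for this query, "
        f"citing them by their bracketed numbers."
    )

example = build_recommendation_prompt(
    "wireless headphones for running",
    [f"Product description {i}" for i in range(1, 11)],
)
print(example.splitlines()[0])  # → User query: wireless headphones for running
```

In the benchmark's setting, a C-SEO method would modify one of the candidate descriptions before the prompt is assembled, leaving the rest untouched.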
### Question Answering

This task involves answering queries based on multiple passages. Domains:

- **Web Questions**: Real search engine queries with retrieved web content.
- **News**: GPT-generated questions over sets of related news articles.
- **Debate**: Opinionated queries requiring multi-perspective evidence.

In total, the benchmark comprises over **1.9k queries** and **16k documents** across six domains.
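Since the benchmark evaluates C-SEO techniques, one natural way to score a technique on either task is to compare where the optimized document lands in the model's output before and after the technique is applied. The helper below is a hypothetical sketch of that comparison; the benchmark's actual metrics are defined in the publication.

```python
# Hypothetical sketch: a C-SEO method succeeds if it moves the optimized
# (target) document up in the LLM's ranking. Rankings are lists of document
# ids, best first; this is illustrative, not the benchmark's exact metric.

def rank_shift(baseline_ranking: list[str],
               optimized_ranking: list[str],
               target_id: str) -> int:
    """Positive = the target moved up after applying the C-SEO method."""
    before = baseline_ranking.index(target_id)
    after = optimized_ranking.index(target_id)
    return before - after

shift = rank_shift(["d1", "d2", "d3"], ["d2", "d1", "d3"], "d2")
print(shift)  # → 1: the target climbed one position
```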
For more information about the dataset construction, please refer to the original publication.

Developed at [Parameter Lab](https://parameterlab.de/) with the support of [Naver AI Lab](https://clova.ai/en/ai-research).

## Disclaimer

> This repository contains experimental software results and is published for the sole purpose of giving additional background details on the respective publication.

## Citation

If this work is useful for you, please consider citing it:

```
TODO
```

✉️ Contact person: Haritz Puerto, haritz.puerto@tu-darmstadt.de

🏢 https://www.parameterlab.de/

Don't hesitate to send us an e-mail or report an issue if something is broken (and it shouldn't be) or if you have further questions.
|