---
pretty_name: CommunityBench
language:
- en
task_categories:
- text-classification
- text-generation
- question-answering
tags:
- benchmark
- community-alignment
- reddit
- preference-identification
- distribution-prediction
- communication-prediction
- steering-generation
size_categories:
- 10K<n<100K
license: unknown
---
# CommunityBench

## Dataset Description

CommunityBench is a benchmark dataset for evaluating language models' ability to understand and align with online community preferences. The dataset is constructed from Reddit posts and comments, focusing on real-world scenarios where models must reason about community values, predict preference distributions, identify community-specific communication patterns, and generate content that aligns with community norms.
## Dataset Structure

The dataset consists of two splits:

- **train.jsonl**: Training set
- **test.jsonl**: Test/evaluation set

Each line in the JSONL files is a JSON object representing a single sample.
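The line-per-object layout means the files can be parsed with the standard library alone. A minimal sketch, using the `json` module on synthetic lines; the `task` and `options` field names here are illustrative assumptions, not the dataset's documented schema:

```python
import json

# Parse the JSONL format: one JSON object per line.
# "task" and "options" are hypothetical field names for illustration.
sample_lines = [
    '{"task": "pref_id", "options": ["A", "B", "C", "D"]}',
    '{"task": "steer_gen"}',
]

samples = [json.loads(line) for line in sample_lines]
tasks = [s["task"] for s in samples]
```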
## Task Types

The dataset includes four distinct tasks:

1. **`pref_id`** (Preference Identification): Identify which option best matches a community's preferences
2. **`dist_pred`** (Distribution Prediction): Predict the popularity distribution across multiple options
3. **`com_pred`** (Communication Prediction): Predict community-specific communication patterns
4. **`steer_gen`** (Steering Generation): Generate content that aligns with community norms and preferences
## Dataset Statistics

- **Task distribution**: Each task type (`pref_id`, `dist_pred`, `com_pred`, `steer_gen`) has an equal number of samples
- **Options per sample** (for tasks with options): ~4.0 options on average
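These statistics are straightforward to recompute once samples are loaded. A sketch over hypothetical samples, assuming `task` and `options` keys (not the dataset's documented schema):

```python
from collections import Counter

# Hypothetical samples; the "task" and "options" keys are assumptions
# used only to illustrate the computation.
samples = [
    {"task": "pref_id", "options": ["A", "B", "C", "D"]},
    {"task": "dist_pred", "options": ["A", "B", "C", "D"]},
    {"task": "com_pred", "options": ["A", "B", "C", "D"]},
    {"task": "steer_gen", "options": ["A", "B", "C", "D"]},
]

# Count samples per task type and the average option count
# over samples that actually carry options.
task_counts = Counter(s["task"] for s in samples)
with_options = [s for s in samples if "options" in s]
avg_options = sum(len(s["options"]) for s in with_options) / len(with_options)
```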
## Usage

Load the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("jylin001206/communitybench")
```

Or load specific splits:

```python
train_dataset = load_dataset("jylin001206/communitybench", split="train")
test_dataset = load_dataset("jylin001206/communitybench", split="test")
```
## Data Fields

Each sample contains community portraits, request-option sets, and task-specific annotations. The exact schema depends on the task type and includes:

- Subreddit and thread context
- Community portraits
- Request-option pairs
- Ground-truth labels or distributions