---
pretty_name: CommunityBench
language:
  - en
task_categories:
  - text-classification
  - text-generation
  - question-answering
tags:
  - benchmark
  - community-alignment
  - reddit
  - preference-identification
  - distribution-prediction
  - communication-prediction
  - steering-generation
size_categories:
  - 10K<n<100K
license: unknown
---

# CommunityBench

## Dataset Description

CommunityBench is a benchmark dataset for evaluating language models' ability to understand and align with online community preferences. The dataset is constructed from Reddit posts and comments, focusing on real-world scenarios where models need to reason about community values, predict preference distributions, identify community-specific communication patterns, and generate content that aligns with community norms.

## Dataset Structure

The dataset consists of two splits:
- **train.jsonl**: Training set
- **test.jsonl**: Test/evaluation set

Each line in the JSONL files contains a JSON object representing a single sample.
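Since each line is a standalone JSON object, the split files can also be parsed directly with the Python standard library (a minimal sketch; any field names you access afterwards should be checked against the actual schema):

```python
import json

def read_jsonl(path):
    """Parse a JSONL file: one JSON object per non-empty line."""
    samples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                samples.append(json.loads(line))
    return samples
```

This is useful when you want the raw records without pulling in the `datasets` library.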

## Task Types

The dataset includes four distinct tasks:

1. **`pref_id`** (Preference Identification): Identify which option best matches a community's preferences
2. **`dist_pred`** (Distribution Prediction): Predict the popularity distribution across multiple options
3. **`com_pred`** (Communication Prediction): Predict community-specific communication patterns
4. **`steer_gen`** (Steering Generation): Generate content that aligns with community norms and preferences

## Dataset Statistics

- **Task distribution**: Each task type (`pref_id`, `dist_pred`, `com_pred`, `steer_gen`) has an equal number of samples
- **Options per sample** (for tasks with options): Average ~4.0 options per sample
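The equal task split can be verified with a quick count once the data is loaded. The snippet below is a sketch that assumes each sample exposes its task type under a field named `task` (a hypothetical field name; check the released schema):

```python
from collections import Counter

def task_distribution(samples):
    """Count samples per task type; `task` is an assumed field name."""
    return Counter(s["task"] for s in samples)

# Toy samples standing in for loaded records:
toy = [
    {"task": "pref_id"},
    {"task": "dist_pred"},
    {"task": "com_pred"},
    {"task": "steer_gen"},
]
print(task_distribution(toy))
```

For the real dataset, all four counts should come out equal per the statistics above.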

## Usage

You can load and use the dataset with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("jylin001206/communitybench", split="train")
```

Or load the train and test splits separately:

```python
train_dataset = load_dataset("jylin001206/communitybench", split="train")
test_dataset = load_dataset("jylin001206/communitybench", split="test")
```

## Data Fields

Each sample in the dataset contains community portraits, request-option sets, and task-specific annotations. The exact schema depends on the task type and includes information about:
- Subreddit and thread context
- Community portraits
- Request-option pairs
- Ground truth labels or distributions
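As a purely illustrative example, a `pref_id` sample might be shaped like the dictionary below. All field names and values here are hypothetical placeholders, not the released schema; inspect an actual record to see the real layout.

```python
# Hypothetical sample layout -- illustrative only, not the actual schema.
sample = {
    "task": "pref_id",                 # one of the four task codes
    "subreddit": "example_subreddit",  # thread context (placeholder)
    "community_portrait": "...",       # description of community norms
    "request": "...",                  # the prompt or question posed
    "options": ["A", "B", "C", "D"],   # candidate responses (~4 on average)
    "label": 0,                        # index of the community-preferred option
}
```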