tnoah committed · Commit f0f45ea · verified · 1 parent: d211bf0

Update README

Files changed (1):
  1. README.md +198 -137
README.md CHANGED
@@ -1,34 +1,34 @@
  ---
  configs:
- - config_name: comparisons
-   data_files:
-   - split: train
-     path: comparisons.jsonl
-   default: true
- - config_name: annotators
-   data_files:
-   - split: train
-     path: annotators.jsonl
- - config_name: merged_comparison_annotators
-   data_files:
-   - split: train
-     path: merged_comparisons_annotators.jsonl
- - config_name: conversation_rubrics
-   data_files:
-   - split: train
-     path: conversation_rubrics.jsonl
  ---

- # Collective Alignment 1 (CA‑1)

- Public input on default model behavior

- **Summary.** Collective Alignment 1 (CA-1) is a human‑feedback dataset focused on value‑sensitive model behavior. Each record contains (i) a synthetic prompt represented as a minimal chat transcript, (ii) four candidate assistant responses, and (iii) annotator assessments with rationales. A companion file provides annotator demographics.

- ## Why CA‑1 exists

- We wanted to assess cross-annotator and cross-cultural alignment on ideal model behavior, in order to compare people's views with our stated principles. We used this work as a preliminary elicitation project, and used the results from our analysis to make updates to the OpenAI Model Spec. Read more about this [in our blog post](https://openai.com/index/collective-alignment-aug-2025-updates).

  ---
 
@@ -38,19 +38,19 @@ We wanted to assess cross-annotator and cross-cultural alignment on ideal model
  - `comparisons.jsonl` — prompts, candidate responses (A–D), and per‑item assessments.
  - `annotators.jsonl` — one row per annotator with demographics and the assessments they completed.
  - `merged_comparisons_annotators.jsonl` — one row per (prompt × annotator) assessment with demographics and turn‑level convenience features.

  ### At a glance
  - **Comparisons (prompts)**: 1,078 unique comparisons.
  - **Annotators**: 1,012 unique annotators.
  - **Assessments**: 18,384 in `comparisons.jsonl`.
  - **Candidate responses per prompt**: 4.
-

  ## Dataset structure

- This release contains two primary artifacts: (1) prompts with multiple candidate assistant responses and associated assessments, and (2) annotator profiles with demographics and their completed assessments. For convenience, we also provide a long-format file where each (comparison × annotator) assessment is merged with demographics and basic prompt features.
-

 
@@ -59,18 +59,20 @@ This release contains two primary artifacts: (1) prompts with multiple candidate
  - **Prompts & Candidates**: For each prompt, models (using a mix of OpenAI model outputs) generated multiple candidate assistant messages as responses.
  - **Assessments**: Human annotators then reviewed each prompt’s candidates, ranked them by preference (with explanations), and provided labels for importance, representativeness, and subjectivity. They could also flag any response as “unacceptable” and explain why.
  - **Sanitization for release**: Before publishing the data, we performed several cleanup steps:
-   - **Role mapping**: Although we initially set `system` role messages, we remapped them to `developer` to align with OpenAI’s Responses API format and to make conversations usable by external researchers.
-   - **Rubric scores**: We are still processing rubric scores; they are not included in this release.

  ## Detailed data collection and annotation

- ### Pipeline overview (three stages)

  - **Prompts**: We synthesized prompts on purportedly globally salient topics.
- - **Candidates**: For each prompt, we pre‑generated four candidate responses (labeled A–D) from our models. These candidates represent a range of potential model behaviors to be evaluated.
- - **Rubrics**: In parallel, we prepared initial rubric items as examples of possible objective, prompt‑specific evaluation criteria. Annotators would later be required to assign signed weights ranging from -10 to +10, where negative weights indicate behaviors models should avoid, positive weights indicate behaviors models should support, and the absolute value indicates importance. Annotators could also author their own rubric items as part of the task, refining these criteria based on what they thought was important for evaluating that prompt.

  ### Participant recruitment and platform

@@ -115,9 +117,9 @@ Where to find this in the data: In the released JSON, each annotator’s work on

  ```
  "ranking_blocks": {
-   "unacceptable": [ { "rationale": "...", "rating": ["A is unacceptable"] } ],
-   "personal": [ { "rationale": "...", "ranking": "A>B>C=D" } ],
-   "world": [ { "rationale": "...", "ranking": "B>A>C=D" } ]
  }
  ```

@@ -128,37 +130,37 @@ Where to find this in the data: In the released JSON, each annotator’s work on
  - **Maximum total**: Up to USD $540 (15 × $30 + $90 bonus).
  - **Quality & follow‑ups**: Thoughtful, high‑quality submissions may receive bonuses and invitations to paid follow‑up studies.
  - **Time estimate**: Across annotators and tasks, the median time to complete a task was approximately 22 minutes.
- - **Availability**: The study was sized so each participant had 15 submissions available (no competition for seats).

  ## Figures

  <div style="display:flex; gap:12px; flex-wrap:wrap">
-   <figure style="margin:0">
-     <img src="./prompt_responses.png" alt="Prompt and responses" width="750" />
-     <figcaption style="font-size:12px; color:#666; margin-top:4px">Figure 1. Prompt and candidate responses (A–D)</figcaption>
-   </figure>
  </div>

  <div style="display:flex; gap:12px; flex-wrap:wrap">
-   <figure style="margin:0">
-     <img src="./intro_unacceptable.png" alt="Unacceptable check" width="475" />
-     <figcaption style="font-size:12px; color:#666; margin-top:4px">Figure 2. Unacceptable content check</figcaption>
-   </figure>
-   <figure style="margin:0">
-     <img src="./ranking_personal.png" alt="Ranking — personal" width="400" />
-     <figcaption style="font-size:12px; color:#666; margin-top:4px">Figure 3. Ranking — personal</figcaption>
-   </figure>
  </div>

  <div style="display:flex; gap:12px; flex-wrap:wrap">
-   <figure style="margin:0">
-     <img src="./ranking_world.png" alt="Ranking — world" width="400" />
-     <figcaption style="font-size:12px; color:#666; margin-top:4px">Figure 4. Ranking — world</figcaption>
-   </figure>
-   <figure style="margin:0">
-     <img src="./task_value.png" alt="Task value" width="475" />
-     <figcaption style="font-size:12px; color:#666; margin-top:4px">Figure 5. Prompt‑level ratings and task value</figcaption>
-   </figure>
  </div>

  ### Sampling, anchors, and balancing
@@ -176,7 +178,7 @@ We took steps to ensure diversity of prompts and consistency across annotators:
  Important: In this dataset, the prompt is represented as a compact chat transcript (it can include a developer instruction and one or more user turns, and occasionally an assistant turn if the conversation had prior context). The candidate responses are not appended to this prompt transcript but are listed separately under `responses`.

  ### Conversation length
- The vast majority of prompts consist of a single user question (possibly with a guiding developer/system instruction at the start) and no prior assistant answer. A one‑turn user ask followed by evaluation of multiple candidate answers is the typical setup.

  ### Candidates
  - Each prompt comes with 4 candidate responses (A, B, C, D). Every prompt in this release has exactly four candidates.
@@ -208,41 +210,41 @@ Each line is one JSON object representing a prompt and the collected assessments

  ```jsonc
  {
-   "prompt_id": "UUID", // Pseudonymized ID for the prompt (conversation)
-   "prompt": {
-     "id": "UUID", // Same as prompt_id (included again for convenience)
-     "messages": [
-       {"role": "developer", "content": "..."}, // System/developer message (if any)
-       {"role": "user", "content": "..."}, // The user prompt content
-       {"role": "assistant", "content": "..."} // Sometimes present if the prompt included an example assistant reply
-     ]
-   },
-   "responses": [
-     {
-       "response_index": "A", // Candidate label (A, B, C, or D)
-       "messages": [
-         {"role": "assistant", "content": "<candidate answer text>"}
-       ]
-     }
-     // ... similarly B, C, D candidates
-   ],
-   "metadata": {
-     "assessments": [
-       {
-         "conversation_id": "UUID", // Matches prompt_id (rotated conversation identifier)
-         "annotator_id": "UUID", // Rotated ID of the annotator who did this assessment
-         "importance": "Very important" | "Somewhat important" | "Not important",
-         "representativeness": "Not at all likely" | "Slightly" | "Moderately" | "Very" | "Extremely",
-         "subjectivity": "Value-dependent" | "Single correct answer" | "Unsure" | "Context dependent",
-         "ranking_blocks": { // Arrow‑friendly map of lists
-           "unacceptable": [ { "rationale": "...", "rating": ["C ...", "D ..."] } ],
-           "personal": [ { "rationale": "...", "ranking": "B>A>C=D" } ],
-           "world": [ { "rationale": "...", "ranking": "A>B>C=D" } ]
-         }
-       }
-       // If multiple annotators assessed the same prompt, there will be multiple objects in this assessments array.
-     ]
-   }
  }
  ```

@@ -253,24 +255,24 @@ Each line is one JSON object representing an annotator and a summary of all thei

  ```jsonc
  {
-   "annotator_id": "UUID", // Pseudonymized annotator ID
-   "demographics": {
-     "age": "...",
-     "gender": "...",
-     "education_level": "...",
-     "country_of_residence": "...",
-     "generative_ai_usage": "...",
-     "ai_concern_level": "...",
-     "ideal-model-behavior": "..." // Free-text response (lightly reviewed for PII)
-   },
-   "assessments": [
-     {
-       "conversation_id": "UUID", // prompt_id that this annotator assessed
-       // ... followed by the same fields (importance, representativeness, etc.)
-       // and ranking_blocks structure as shown in comparisons.jsonl
-     }
-     // ... one entry per prompt this annotator assessed
-   ]
  }
  ```

@@ -282,23 +284,80 @@ Each line in this file is one assessment instance, i.e., one annotator’s asses

  ```jsonc
  {
-   "prompt_id": "UUID",
-   "annotator_id": "UUID",
-   "importance": "...", // (string) importance rating for this prompt by this annotator
-   "representativeness": "...", // (string) representativeness rating
-   "subjectivity": "...", // (string) subjectivity rating
-   "ranking_blocks": [ ... ], // list of ranking block objects (same format as above)
-   "demographics": { ... }, // the annotator’s demographics object
-   "num_candidates": 4, // number of responses (always 4 in this dataset)
-   "turns_user": 1, // number of user turns in the prompt context
-   "turns_assistant": 0, // number of assistant turns in the prompt context
-   "assistant_turn_share": 0.0 // assistant turns / (user + assistant turns) in the prompt context
  }
  ```

  This long‑format file is handy for data analysis (e.g., direct dataframe loading). The `turns_*` and `assistant_turn_share` fields quantify the prompt length and context composition for each case.

  ## Cautions
@@ -306,7 +365,8 @@ This long‑format file is handy for data analysis (e.g., direct dataframe loadi
  - **Prompt domain bias**: Prompts focus on contentious or value‑sensitive domains; every prompt was synthetically created by our team with certain goals in mind. This could introduce subtle biases — for example, the way a question is phrased might lean it toward a particular interpretation or might be unfamiliar to people from some cultures.
  - **Content warning**: Some prompts/responses contain disturbing or offensive content (e.g., self‑harm, explicit sexual requests, politically charged statements). Apply filtering and user advisories as needed.
  - **Language considerations**: Instructions were in English; most rationales are in English, with some in other languages (notably Spanish). Depending on your needs, you may need to plan for language detection, translation, or filtering when analyzing text.
- - **Privacy & ethics**: Do not attempt to identify annotators.

@@ -318,23 +378,23 @@ To give a sense of how to work with this data, here’s a short snippet for load
  ```python
  import json

  def read_jsonl(path):
-     with open(path, "r", encoding="utf-8") as f:
-         for line in f:
-             if line.strip():
-                 yield json.loads(line)

  # Example: iterate over all prompt records
  for prompt_record in read_jsonl("comparisons.jsonl"):
-     prompt_id = prompt_record["prompt_id"]
-     prompt_messages = prompt_record["prompt"]["messages"]
-     responses = prompt_record["responses"]
-     assessments = prompt_record["metadata"]["assessments"]
-     # ... your processing here ...
-     for assessment in assessments:
-         annotator_id = assessment["annotator_id"]
-         world_rank = assessment["ranking_blocks"]["world"][0]["ranking"]
-         personal_rank = assessment["ranking_blocks"]["personal"][0]["ranking"]
-         # etc.
  ```

  This snippet reads each prompt and then iterates through the assessments for that prompt. Note that `ranking_blocks` is a map keyed by block type (`unacceptable`, `personal`, `world`), so rankings are accessed by key rather than by position.
@@ -364,7 +424,8 @@ By using the CA‑1 dataset, you agree to the following terms:

  - **License**: Creative Commons Attribution 4.0 International (**CC BY 4.0**) — see [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). You may share and adapt with attribution, link to the license, and indicate changes. No additional restrictions (beyond following originating model usage policies and not violating privacy).

- - **Citation (dataset)**: If you use CA‑1 in your work, please cite:
-   OpenAI (2025). Collective Alignment 1: Public Input on Model Defaults (Version 1.0) [Data set]. Available at: https://huggingface.co/datasets/openai/collective-alignment-1

- - You may also cite the accompanying [blog post](https://openai.com/index/collective-alignment-aug-2025-updates) associated with this release for further context.

  ---
  configs:
+ - config_name: comparisons
+   data_files:
+   - split: train
+     path: comparisons.jsonl
+   default: true
+ - config_name: annotators
+   data_files:
+   - split: train
+     path: annotators.jsonl
+ - config_name: merged_comparison_annotators
+   data_files:
+   - split: train
+     path: merged_comparisons_annotators.jsonl
+ - config_name: conversation_rubrics
+   data_files:
+   - split: train
+     path: conversation_rubrics.jsonl
  ---

+ # CoVal

+ Public input on model behavior

+ **Summary.** CoVal (crowd-originated, values-aware preferences and rubrics) is a human-feedback dataset focused on value-sensitive model behavior. It has three components: two conversation-level files and one annotator-level file. The first conversation-level file contains (i) a synthetic prompt represented as a minimal chat transcript, (ii) four candidate assistant responses, and (iii) annotator assessments with rationales. The second conversation-level file contains (iv) crowd-written, prompt-specific rubrics: criteria describing what annotators wanted a model to do and avoid for that prompt (including the weights annotators assigned to each criterion, and an experimental synthesized set of non-conflicting, non-redundant, and highly rated criteria). A companion file provides annotator demographics.

+ ## Why CoVal exists

+ We wanted to assess cross-annotator and cross-cultural alignment on ideal model behavior. We used this work as a preliminary project to pilot new elicitation methods. We used the results to make updates to the OpenAI Model Spec (read more [here](https://openai.com/index/collective-alignment-aug-2025-updates)) and to understand how well explicit rubrics track preferences out-of-sample and surface meaningful model differences (read more [here](https://alignment.openai.com/coval)).

  ---

  - `comparisons.jsonl` — prompts, candidate responses (A–D), and per‑item assessments.
  - `annotators.jsonl` — one row per annotator with demographics and the assessments they completed.
  - `merged_comparisons_annotators.jsonl` — one row per (prompt × annotator) assessment with demographics and turn‑level convenience features.
+ - `conversation_rubrics.jsonl` — one row per prompt, with CoVal-full rubric items (including annotator ratings) and CoVal-core rubric items.

  ### At a glance
  - **Comparisons (prompts)**: 1,078 unique comparisons.
  - **Annotators**: 1,012 unique annotators.
  - **Assessments**: 18,384 in `comparisons.jsonl`.
  - **Candidate responses per prompt**: 4.
+ - **Rubrics**: 986 unique prompt-specific rubrics.
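
With the configs declared in the YAML front matter above, each file can be loaded by config name through the `datasets` library. A minimal sketch (the `openai/coval` repo id is taken from the citation at the end of this README, and we assume default schema inference handles the nested fields):

```python
from datasets import load_dataset

# Each config maps to one JSONL file; all expose a single "train" split.
comparisons = load_dataset("openai/coval", "comparisons", split="train")
annotators = load_dataset("openai/coval", "annotators", split="train")

print(len(comparisons))  # expected: 1,078 prompt records
print(len(annotators))   # expected: 1,012 annotator rows
```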

  ## Dataset structure

+ This release contains three primary artifacts: (1) prompts with multiple candidate assistant responses and associated assessments, (2) annotator profiles with demographics and their completed assessments, and (3) prompts with CoVal-full (raw) and CoVal-core (synthesized subset) rubrics. For convenience, we also provide a long-format file where each (comparison × annotator) assessment is merged with demographics and basic prompt features.

  - **Prompts & Candidates**: For each prompt, models (using a mix of OpenAI model outputs) generated multiple candidate assistant messages as responses.
  - **Assessments**: Human annotators then reviewed each prompt’s candidates, ranked them by preference (with explanations), and provided labels for importance, representativeness, and subjectivity. They could also flag any response as “unacceptable” and explain why.
  - **Sanitization for release**: Before publishing the data, we performed several cleanup steps:
+   - **Role mapping**: Although we initially set `system` role messages, we remapped them to `developer` to align with OpenAI’s Responses API format and to make conversations usable by external researchers.
+   - **Rubrics**: We publish prompt-specific rubrics in two forms: CoVal-full (the raw set of crowd-originated, rated criteria, which can include diverse and sometimes conflicting preferences) and CoVal-core (a distilled subset of up to four highly rated, mutually compatible criteria per prompt, produced via LM-assisted synthesis plus human review and merging/selection from the full rubrics).

  ## Detailed data collection and annotation

+ ### Pipeline overview

  - **Prompts**: We synthesized prompts on purportedly globally salient topics.
+ - **Candidates**: For each prompt, we pre‑generated four candidate responses (labeled A–D). These candidates represent a range of potential model behaviors to be evaluated.
+ - **Full rubrics**: In parallel, we prepared initial rubric items as examples of possible objective, prompt‑specific evaluation criteria. Annotators would later be required to assign signed weights ranging from -10 to +10, where negative weights indicate behaviors models should avoid, positive weights indicate behaviors models should support, and the absolute value indicates importance (see the scoring sketch after this list). Annotators could also author their own rubric items as part of the task, refining these criteria based on what they thought was important for evaluating that prompt.
+ - **Core rubrics**: In post-processing, for each prompt we keep only a small set of highly rated, non-redundant, and non-conflicting rubric items. We construct CoVal-core using a combination of language-model-assisted synthesis and human review. Our process first rewrites all rubric items to have positive weight, then merges semantically redundant rubric items while adjusting their scores, and finally aims to select up to four rubric items with the highest average ratings that remain compatible with each other and do not repeat the same idea. Most prompts end up with four core rubric items (about 95%), with the remainder having two or three. CoVal-core often reflects the biases of dominant perspectives in our participant pool, since it prioritizes the strongest signals in the collected data. It is a proof of concept that surfaces difficult design choices in distilling the full rubrics, and an invitation for others to develop and validate better synthesis and aggregation methods for this format.
+
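To make the weighting scheme concrete, here is a minimal sketch of scoring one response against a weighted rubric. The example criteria and the `satisfied` judgments are hypothetical, and the dataset does not prescribe an aggregation rule; this is just one plausible reading of the signed weights:

```python
# Hypothetical rubric items with signed weights in [-10, +10], as described above:
# positive = behavior to support, negative = behavior to avoid,
# absolute value = importance.
rubric = [
    {"criterion": "Presents multiple viewpoints fairly", "weight": 8},
    {"criterion": "Asserts one side as objectively correct", "weight": -10},
]

def rubric_score(rubric: list[dict], satisfied: list[bool]) -> int:
    # Sum the signed weights of satisfied criteria; satisfying a
    # negative-weight item (a behavior to avoid) pulls the score down.
    return sum(item["weight"] for item, hit in zip(rubric, satisfied) if hit)

print(rubric_score(rubric, [True, False]))  # 8
print(rubric_score(rubric, [True, True]))   # -2
```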

  ### Participant recruitment and platform


  ```
  "ranking_blocks": {
+   "unacceptable": [ { "rationale": "...", "rating": ["A is unacceptable"] } ],
+   "personal": [ { "rationale": "...", "ranking": "A>B>C=D" } ],
+   "world": [ { "rationale": "...", "ranking": "B>A>C=D" } ]
  }
  ```
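
The `ranking` strings use `>` for strict preference and `=` for ties. A small helper (our own sketch, not shipped with the dataset) turns them into ordered tiers:

```python
def parse_ranking(ranking: str) -> list[set[str]]:
    """Parse a ranking string like "B>A>C=D" into preference tiers.

    '>' separates tiers (earlier tiers are preferred);
    '=' marks ties within a tier.
    """
    return [set(tier.split("=")) for tier in ranking.split(">")]

print(parse_ranking("B>A>C=D"))  # [{'B'}, {'A'}, {'C', 'D'}] (tie order is arbitrary)
```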

  - **Maximum total**: Up to USD $540 (15 × $30 + $90 bonus).
  - **Quality & follow‑ups**: Thoughtful, high‑quality submissions may receive bonuses and invitations to paid follow‑up studies.
  - **Time estimate**: Across annotators and tasks, the median time to complete a task was approximately 22 minutes.
+ - **Availability**: The study was sized so each participant had 15 submissions available (no competition for seats).

  ## Figures

  <div style="display:flex; gap:12px; flex-wrap:wrap">
+   <figure style="margin:0">
+     <img src="./prompt_responses.png" alt="Prompt and responses" width="750" />
+     <figcaption style="font-size:12px; color:#666; margin-top:4px">Figure 1. Prompt and candidate responses (A–D)</figcaption>
+   </figure>
  </div>

  <div style="display:flex; gap:12px; flex-wrap:wrap">
+   <figure style="margin:0">
+     <img src="./intro_unacceptable.png" alt="Unacceptable check" width="475" />
+     <figcaption style="font-size:12px; color:#666; margin-top:4px">Figure 2. Unacceptable content check</figcaption>
+   </figure>
+   <figure style="margin:0">
+     <img src="./ranking_personal.png" alt="Ranking — personal" width="400" />
+     <figcaption style="font-size:12px; color:#666; margin-top:4px">Figure 3. Ranking — personal</figcaption>
+   </figure>
  </div>

  <div style="display:flex; gap:12px; flex-wrap:wrap">
+   <figure style="margin:0">
+     <img src="./ranking_world.png" alt="Ranking — world" width="400" />
+     <figcaption style="font-size:12px; color:#666; margin-top:4px">Figure 4. Ranking — world</figcaption>
+   </figure>
+   <figure style="margin:0">
+     <img src="./task_value.png" alt="Task value" width="475" />
+     <figcaption style="font-size:12px; color:#666; margin-top:4px">Figure 5. Prompt‑level ratings and task value</figcaption>
+   </figure>
  </div>

  ### Sampling, anchors, and balancing

  Important: In this dataset, the prompt is represented as a compact chat transcript (it can include a developer instruction and one or more user turns, and occasionally an assistant turn if the conversation had prior context). The candidate responses are not appended to this prompt transcript but are listed separately under `responses`.

  ### Conversation length
+ The vast majority of prompts consist of a single user question (possibly with a guiding developer/system instruction at the start) and no prior assistant answer. A one‑turn user ask followed by evaluation of multiple candidate answers is the typical setup; this is easy to verify from the data, as in the sketch below.
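
A sketch that tallies user turns per prompt transcript:

```python
import json
from collections import Counter

# Sketch: distribution of user-turn counts across prompt transcripts.
with open("comparisons.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

user_turns = Counter(
    sum(msg["role"] == "user" for msg in rec["prompt"]["messages"])
    for rec in records
)
print(user_turns)  # most mass should sit at 1 user turn
```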

  ### Candidates
  - Each prompt comes with 4 candidate responses (A, B, C, D). Every prompt in this release has exactly four candidates.


  ```jsonc
  {
+   "prompt_id": "UUID", // Pseudonymized ID for the prompt (conversation)
+   "prompt": {
+     "id": "UUID", // Same as prompt_id (included again for convenience)
+     "messages": [
+       {"role": "developer", "content": "..."}, // System/developer message (if any)
+       {"role": "user", "content": "..."}, // The user prompt content
+       {"role": "assistant", "content": "..."} // Sometimes present if the prompt included an example assistant reply
+     ]
+   },
+   "responses": [
+     {
+       "response_index": "A", // Candidate label (A, B, C, or D)
+       "messages": [
+         {"role": "assistant", "content": "<candidate answer text>"}
+       ]
+     }
+     // ... similarly B, C, D candidates
+   ],
+   "metadata": {
+     "assessments": [
+       {
+         "conversation_id": "UUID", // Matches prompt_id (rotated conversation identifier)
+         "annotator_id": "UUID", // Rotated ID of the annotator who did this assessment
+         "importance": "Very important" | "Somewhat important" | "Not important",
+         "representativeness": "Not at all likely" | "Slightly" | "Moderately" | "Very" | "Extremely",
+         "subjectivity": "Value-dependent" | "Single correct answer" | "Unsure" | "Context dependent",
+         "ranking_blocks": { // Arrow‑friendly map of lists
+           "unacceptable": [ { "rationale": "...", "rating": ["C ...", "D ..."] } ],
+           "personal": [ { "rationale": "...", "ranking": "B>A>C=D" } ],
+           "world": [ { "rationale": "...", "ranking": "A>B>C=D" } ]
+         }
+       }
+       // If multiple annotators assessed the same prompt, there will be multiple objects in this assessments array.
+     ]
+   }
  }
  ```
 
 
255
 
256
  ```jsonc
257
  {
258
+ "annotator_id": "UUID", // Pseudonymized annotator ID
259
+ "demographics": {
260
+ "age": "...",
261
+ "gender": "...",
262
+ "education_level": "...",
263
+ "country_of_residence": "...",
264
+ "generative_ai_usage": "...",
265
+ "ai_concern_level": "...",
266
+ "ideal-model-behavior": "..." // Free-text response (lightly reviewed for PII)
267
+ },
268
+ "assessments": [
269
+ {
270
+ "conversation_id": "UUID", // prompt_id that this annotator assessed
271
+ // ... followed by the same fields (importance, representativeness, etc.)
272
+ // and ranking_blocks structure as shown in comparisons.jsonl
273
+ }
274
+ // ... one entry per prompt this annotator assessed
275
+ ]
276
  }
277
  ```
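
The per-annotator rows make demographic breakdowns straightforward; a sketch:

```python
import json
from collections import Counter

# Sketch: distribution of annotators by country of residence.
with open("annotators.jsonl", encoding="utf-8") as f:
    annotators = [json.loads(line) for line in f if line.strip()]

countries = Counter(a["demographics"]["country_of_residence"] for a in annotators)
print(countries.most_common(10))
```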

  ```jsonc
  {
+   "prompt_id": "UUID",
+   "annotator_id": "UUID",
+   "importance": "...", // (string) importance rating for this prompt by this annotator
+   "representativeness": "...", // (string) representativeness rating
+   "subjectivity": "...", // (string) subjectivity rating
+   "ranking_blocks": [ ... ], // list of ranking block objects (same format as above)
+   "demographics": { ... }, // the annotator’s demographics object
+   "num_candidates": 4, // number of responses (always 4 in this dataset)
+   "turns_user": 1, // number of user turns in the prompt context
+   "turns_assistant": 0, // number of assistant turns in the prompt context
+   "assistant_turn_share": 0.0 // assistant turns / (user + assistant turns) in the prompt context
  }
  ```

  This long‑format file is handy for data analysis (e.g., direct dataframe loading). The `turns_*` and `assistant_turn_share` fields quantify the prompt length and context composition for each case.
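
For instance, a sketch of loading it with pandas and relating one label to demographics:

```python
import pandas as pd

# Sketch: load the long-format file directly into a dataframe.
df = pd.read_json("merged_comparisons_annotators.jsonl", lines=True)

# Example: cross-tabulate importance ratings against AI-concern level.
concern = df["demographics"].apply(lambda d: d["ai_concern_level"])
print(pd.crosstab(df["importance"], concern))
```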

+ ### `conversation_rubrics.jsonl`

+ Each line in this file is one conversation rubric: one conversation (the prompt/messages) with the associated CoVal-full rubric items and all collected per-user scores for those items, plus the corresponding CoVal-core criteria (the distilled set of criteria for that conversation, derived from `coval_full` via merging/selection, and meant as a compact rubric).

+ ```jsonc
+ {
+   "conversation": {
+     "id": "UUID", // Pseudonymized conversation ID
+     "messages": [
+       {
+         "id": "UUID", // Message ID
+         "author": {
+           "role": "system" | "user" | "assistant" | "tool" | "developer" | "...",
+           "metadata": { /* object */ } // Usually empty; arbitrary key/value metadata
+         },
+         "content": {
+           "content_type": "text" | "...", // In the example: "text"
+           "parts": ["..."] // Array of content chunks (strings for text)
+         },
+         "metadata": { /* object */ }, // Usually empty; arbitrary key/value metadata
+         "recipient": "all" | "...", // In the example: "all"
+         "status": "finished_successfully" | "...", // Message processing status
+         "weight": 1.0 // Numeric weight (float)
+       }
+       // ...additional messages in the conversation
+     ]
+   },
+   "coval_full": [
+     {
+       "rubric_item_id": "UUID", // Unique ID for this rubric item
+       "criterion": "...", // Human-readable criterion text
+       "scores": [
+         {
+           "annotator_id": "UUID", // Pseudonymized ID for the scorer/rater
+           "score": 10 // Signed weight the annotator assigned to this criterion
+         }
+         // ...more per-user scores for the same rubric item
+       ]
+     }
+     // ...more rubric items for this conversation
+   ],
+   "coval_core": [
+     {
+       "criterion": "..." // Core criterion text (no IDs/scores shown in the example)
+     }
+     // ...more core criteria
+   ]
+ }
+ ```

+ This file lets you inspect a single conversation and see both pre-seeded and rater-written criteria, along with how different users scored each criterion, plus a smaller set of highly rated, non-redundant, and non-conflicting criteria.
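
For example, a sketch that averages the crowd's signed weights for each CoVal-full item of the first conversation:

```python
import json

# Sketch: mean signed weight per CoVal-full rubric item (first conversation only).
with open("conversation_rubrics.jsonl", encoding="utf-8") as f:
    record = json.loads(next(line for line in f if line.strip()))

for item in record["coval_full"]:
    scores = [s["score"] for s in item["scores"]]
    mean = sum(scores) / len(scores) if scores else 0.0
    print(f"{mean:+6.2f}  {item['criterion'][:80]}")
```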

  ## Cautions

  - **Prompt domain bias**: Prompts focus on contentious or value‑sensitive domains; every prompt was synthetically created by our team with certain goals in mind. This could introduce subtle biases — for example, the way a question is phrased might lean it toward a particular interpretation or might be unfamiliar to people from some cultures.
  - **Content warning**: Some prompts/responses contain disturbing or offensive content (e.g., self‑harm, explicit sexual requests, politically charged statements). Apply filtering and user advisories as needed.
  - **Language considerations**: Instructions were in English; most rationales are in English, with some in other languages (notably Spanish). Depending on your needs, you may need to plan for language detection, translation, or filtering when analyzing text.
+ - **CoVal-core rubrics are experimental**: CoVal-core rubrics are an experimental, LM-synthesized distillation of CoVal-full: we merge, negate, and select a small set of highly rated items, then enforce non-conflicting/non-redundant constraints. Each step leaves room for interpretation, so there are many plausible core sets, and our method can produce core rubrics that drift from the data.
+ - **Privacy & ethics**: Do not attempt to identify annotators.

  ```python
  import json

  def read_jsonl(path):
+     with open(path, "r", encoding="utf-8") as f:
+         for line in f:
+             if line.strip():
+                 yield json.loads(line)

  # Example: iterate over all prompt records
  for prompt_record in read_jsonl("comparisons.jsonl"):
+     prompt_id = prompt_record["prompt_id"]
+     prompt_messages = prompt_record["prompt"]["messages"]
+     responses = prompt_record["responses"]
+     assessments = prompt_record["metadata"]["assessments"]
+     # ... your processing here ...
+     for assessment in assessments:
+         annotator_id = assessment["annotator_id"]
+         world_rank = assessment["ranking_blocks"]["world"][0]["ranking"]
+         personal_rank = assessment["ranking_blocks"]["personal"][0]["ranking"]
+         # etc.
  ```

  This snippet reads each prompt and then iterates through the assessments for that prompt. Note that `ranking_blocks` is a map keyed by block type (`unacceptable`, `personal`, `world`), so rankings are accessed by key rather than by position.
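
Building on that, one illustrative way to aggregate rankings across annotators is a Borda-style count (our own sketch; the dataset does not prescribe an aggregation rule):

```python
from collections import defaultdict

def borda(rankings: list[str]) -> dict[str, float]:
    """Borda-style aggregation of ranking strings like "A>B>C=D".

    Illustrative scoring: every candidate in tier i earns
    (number of tiers - 1 - i) points; tied candidates score equally.
    """
    totals = defaultdict(float)
    for ranking in rankings:
        tiers = [tier.split("=") for tier in ranking.split(">")]
        for i, tier in enumerate(tiers):
            for candidate in tier:
                totals[candidate] += len(tiers) - 1 - i
    return dict(totals)

print(borda(["A>B>C=D", "B>A>C=D"]))  # {'A': 3.0, 'B': 3.0, 'C': 0.0, 'D': 0.0}
```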
 

  - **License**: Creative Commons Attribution 4.0 International (**CC BY 4.0**) — see [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). You may share and adapt with attribution, link to the license, and indicate changes. No additional restrictions (beyond following originating model usage policies and not violating privacy).

+ - **Citation (dataset)**: If you use CoVal in your work, please cite:
+   OpenAI (2025). CoVal: Public Input on Model Defaults (Version 2.0) [Data set]. Available at: https://huggingface.co/datasets/openai/coval

+ - You may also cite the accompanying [August 2025 blog post](https://openai.com/index/collective-alignment-aug-2025-updates) and [January 2026 blog post](https://alignment.openai.com/coval) associated with this release for further context.