---
tags:
- rlhf
- argilla
- human-feedback
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: data_id
      dtype: string
    - name: date
      dtype: string
    - name: dump
      dtype: string
    - name: file_path
      dtype: string
    - name: lang_code
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: language_script
      dtype: string
    - name: minhash_cluster_size
      dtype: int64
    - name: url
      dtype: string
  splits:
  - name: train
    num_bytes: 4095429
    num_examples: 1000
  download_size: 2391077
  dataset_size: 4095429
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for nob

This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Using this dataset with Argilla](#using-this-dataset-with-argilla), or used directly with the `datasets` library in [Using this dataset with `datasets`](#using-this-dataset-with-datasets).

## Using this dataset with Argilla

To load with Argilla, you'll just need to install Argilla with `pip install argilla --upgrade` and then use the following code:

```python
import argilla as rg

ds = rg.Dataset.from_hub("davanstrien/nob", settings="auto")
```

This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.

## Using this dataset with `datasets`

To load the records of this dataset with `datasets`, you'll just need to install `datasets` with `pip install datasets --upgrade` and then use the following code:

```python
from datasets import load_dataset

ds = load_dataset("davanstrien/nob")
```

This will only load the records of the dataset, but not the Argilla settings.
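
The nested `metadata` struct described in the schema above can be filtered like any other column. A minimal sketch using plain Python over hypothetical records that mirror the dataset's metadata fields (the texts and IDs here are made up for illustration):

```python
# Hypothetical records mirroring this dataset's nested metadata schema
records = [
    {"id": "a", "text": "Et lite eksempel", "metadata": {"lang_code": "nob", "language_score": 0.97}},
    {"id": "b", "text": "Mixed language text", "metadata": {"lang_code": "nob", "language_score": 0.41}},
]

# Keep only records the language identifier was confident about
confident = [r for r in records if r["metadata"]["language_score"] >= 0.9]
```

The same predicate can be passed to `datasets`' `Dataset.filter` to subset the real dataset after loading.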
## Dataset Structure

This dataset repo contains:

* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
* A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.

The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.

### Fields

The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset.

| Field Name | Title | Type | Required |
| ---------- | ----- | ---- | -------- |
| text | text | text | True |

### Questions

The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.

| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| Educational Value | Educational Value of the content | label_selection | True | N/A | ['None', 'Minimal', 'Basic', 'Good', 'Excellent', '❗ Problematic Content ❗'] |
| Language ID correct? | Is this text in the expected language | label_selection | True | N/A | ['yes', 'no'] |

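If you prefer not to rely on `settings="auto"`, the fields and questions above can also be declared explicitly with the Argilla SDK. A sketch under the assumption that question `name`s follow the titles (the actual names are stored in the repo's `.argilla` folder):

```python
import argilla as rg

# Sketch of the settings described in the tables above; question names are hypothetical.
settings = rg.Settings(
    fields=[rg.TextField(name="text", title="text")],
    questions=[
        rg.LabelQuestion(
            name="educational_value",
            title="Educational Value of the content",
            labels=["None", "Minimal", "Basic", "Good", "Excellent", "❗ Problematic Content ❗"],
        ),
        rg.LabelQuestion(
            name="language_id_correct",
            title="Is this text in the expected language",
            labels=["yes", "no"],
        ),
    ],
)
```

Defining settings this way is only needed when recreating the dataset from scratch; loading from the Hub restores them automatically.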
### Metadata

The **metadata** is a dictionary that can be used to provide additional information about the dataset record.

| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| language_score | Language Score | float | - | True |

### Data Splits

The dataset contains a single split, which is `train`.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation guidelines

### Guidelines for Rating Educational Content

Rate the content using these criteria:

1️⃣ NO EDUCATIONAL VALUE
- No educational purpose whatsoever
- Pure entertainment, ads, or personal content
- Nothing to learn from this content
✓ Examples:
• Social media conversations about daily life
• Online shopping product listings
• Advertisement pages
• Personal blog posts about someone's day
• Forum discussions about entertainment
• Comment sections
• Sports match reports

2️⃣ MINIMAL EDUCATIONAL VALUE
- Contains a few facts or pieces of information
- Mostly non-educational content
- Information is incidental or not the main focus
✓ Examples:
• News article that mentions some historical facts
• Travel blog with basic information about a location
• Product review with some technical details
• Company website with brief industry information
• Recipe that briefly explains a cooking technique
• Entertainment article with occasional facts

3️⃣ BASIC EDUCATIONAL CONTENT
- Attempts to explain or teach something
- Information might be scattered or disorganized
- Mixed with non-educational content
✓ Examples:
• Basic how-to guide with ads
• Simple Wikipedia-style article
• Blog post explaining a concept but lacking depth
• Amateur tutorial video transcript
• Brief explanation of a scientific concept
• Quick overview of a historical event

4️⃣ GOOD EDUCATIONAL CONTENT
- Clear teaching purpose
- Well-organized information
- Suitable for learning
- May have some minor limitations
✓ Examples:
• Detailed tutorial with clear steps
• Well-written educational blog post
• Comprehensive guide to a topic
• Clear explanation of a scientific process
• Structured learning material
• Educational website article with examples

5️⃣ EXCELLENT EDUCATIONAL CONTENT
- Outstanding teaching material
- Clear structure and thorough explanations
- Includes helpful examples
- No distracting content
✓ Examples:
• Professional educational resource
• Well-crafted learning module
• In-depth guide with clear examples
• Comprehensive educational article
• High-quality teaching material
• Expert explanation with practical applications

6️⃣ PROBLEMATIC CONTENT
- Wrong language
- Unreadable or corrupted text
- Inappropriate content
- Machine-generated nonsense
✓ Examples:
• Text in a different language than expected
• Garbled characters or formatting
• Clearly AI-generated spam content
• Inappropriate or offensive material
• Broken/partial webpage content
• Content that's too technical to evaluate

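When analyzing the collected annotations, the five rating labels above form an ordinal scale, while the problematic-content label falls outside it. A hypothetical helper (not part of the dataset tooling) for mapping answers to numeric scores:

```python
# Hypothetical helper: map the label_selection answers to ordinal scores
# so ratings can be averaged or compared across annotators.
EDU_LABELS = ["None", "Minimal", "Basic", "Good", "Excellent"]

def label_to_score(label: str):
    """Return 0-4 for the five rating labels; None for problematic content."""
    return EDU_LABELS.index(label) if label in EDU_LABELS else None
```

Records mapped to `None` would typically be excluded before computing aggregate statistics.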
|
#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]