
Dataset README

Overview

This repository contains two JSONL datasets, train_no_safety.jsonl and test_no_harmful.jsonl, derived from the original LIMA train and test splits by filtering out potentially harmful or inappropriate content.

Dataset Files

  • train_no_safety.jsonl: A cleaned version of the original training dataset with all safety-related queries removed.
  • test_no_harmful.jsonl: A cleaned version of the original test dataset with harmful queries identified and removed.

File Format

Both datasets are stored in the JSON Lines format, where each line is a separate JSON object. The structure of each JSON object is as follows:

{
    "conversations": [
        "A text string representing a query or conversation.",
        ...
    ],
    "source": "Additional metadata about the conversation (optional)."
}
  • conversations: This field is an array containing one or more text strings, each representing a query or part of a conversation.
  • source: This optional field can include metadata about the source of the conversation, though it may be empty or omitted.
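Records in this format can be written and read back with a few lines of Python. The helper name and the sample file path below are illustrative, not part of the dataset:

```python
import json

def load_jsonl(path):
    """Read a JSON Lines file into a list of dicts, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Write a two-record sample matching the schema above (illustrative data)
sample = [
    {"conversations": ["How do I bake bread?"], "source": "stackexchange"},
    {"conversations": ["Write a poem about autumn."]},  # "source" omitted
]
with open("sample.jsonl", "w", encoding="utf-8") as f:
    for rec in sample:
        f.write(json.dumps(rec) + "\n")

records = load_jsonl("sample.jsonl")
```

Because `source` may be absent, access it with `record.get("source", "")` rather than indexing directly.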

Cleaning Process

For train_no_safety.jsonl

  1. Initial Dataset: The initial dataset train.jsonl contained some queries flagged as related to safety concerns.
  2. Filtering: Queries containing phrases or keywords indicative of safety issues were removed. Examples include questions about illegal activities, harmful behaviors, and other inappropriate content.
  3. Output: The resulting dataset, train_no_safety.jsonl, contains only non-safety-related queries.

For test_no_harmful.jsonl

  1. Initial Dataset: The initial dataset test.jsonl included some harmful or inappropriate queries.
  2. Filtering: Harmful queries were identified and removed based on a predefined set of keywords and phrases. These keywords were selected to filter out content that could be offensive, inappropriate, or dangerous.
  3. Output: The resulting dataset, test_no_harmful.jsonl, includes only non-harmful queries, ensuring safer content for testing purposes.
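The keyword-based filtering in both pipelines can be sketched as below. The exact matching rule is not documented in this README; case-insensitive substring matching and the three-phrase keyword list are assumptions for illustration:

```python
import json

# Illustrative subset of the filter list; the full list is not published
KEYWORDS = ["hire a hitman", "commit suicide", "get a fake id"]

def is_flagged(record, keywords):
    """Return True if any conversation turn contains a flagged phrase."""
    text = " ".join(record["conversations"]).lower()
    return any(kw in text for kw in keywords)

def filter_jsonl(in_path, out_path, keywords):
    """Copy non-flagged records from in_path to out_path; return kept count."""
    kept = 0
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            if not line.strip():
                continue
            record = json.loads(line)
            if not is_flagged(record, keywords):
                fout.write(line)
                kept += 1
    return kept
```

Applied to train.jsonl with the full keyword list, this kind of pass would produce train_no_safety.jsonl; the same function with the harmful-keyword list would produce test_no_harmful.jsonl.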

Data Volume Description

Entry counts before and after filtering:

  • Original Training Set: 1030 entries
  • Filtered Training Set: 1015 entries
  • Original Test Set: 300 entries
  • Filtered Test Set: 287 entries
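The counts above imply 15 training entries and 13 test entries were removed. Since each record occupies one line, the filtered counts can be verified by counting non-empty lines:

```python
def count_jsonl(path):
    """Count non-empty lines (one record per line) in a JSONL file."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

# Run against the actual files, these should match the table above:
#   count_jsonl("train_no_safety.jsonl")  -> 1015
#   count_jsonl("test_no_harmful.jsonl")  -> 287
```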

Keywords Used for Filtering

Safety-related Keywords (for train dataset)

Some of the keywords or phrases that led to the removal of entries include:

  • "curse words"
  • "cheat on my husband"
  • "rob a convenience store"
  • "hurting myself"
  • "get a friend to have sex"
  • "be trusted"
  • "get a fake ID"
  • "relationship without your parents knowing"
  • "physically flirt"
  • "lazy college student"
  • "married boyfriend"
  • "flip someone off"
  • "bullet made from ice"
  • "hire a hitman"
  • "Hitler writes a second book"

Harmful Keywords (for test dataset)

Entries containing keywords or phrases considered harmful or inappropriate were removed. Examples include:

  • "Killing a puppy"
  • "Does Crimea belong to Russia"
  • "Prove that every set is countable" (as part of mathematical impossibilities)
  • "Show me the address"
  • "alternative treatment ideas for COVID-19"
  • "ISIS"
  • "Joe Biden cheat"
  • "Help me get revenge"
  • "slip something into that poor dog's food"
  • "commit suicide"

These are only examples, and the actual filtering process might have included additional patterns based on the nature of the data.