---
license: mit
dataset_info:
  features:
    - name: tokens
      list: string
    - name: ner_tags
      list: string
  splits:
    - name: train
      num_bytes: 1226892
      num_examples: 850
  download_size: 325452
  dataset_size: 1226892
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - token-classification
language:
  - ru
  - en
pretty_name: Jay Guard NER Benchmark
---

# Jay Guard NER Benchmark

## Dataset Description

The Jay Guard NER Benchmark is a Russian-language dataset designed for evaluating Named Entity Recognition (NER) models on their ability to identify personal and sensitive data. The data is sourced from real-world, complex conversational texts, including work chats, customer support logs, and spoken language transcripts.

This dataset was created by Just AI to benchmark the performance of various NER solutions for the task of data anonymization. It specifically targets entities that are critical for protecting personal data, such as PERSON and STREET_ADDRESS.

## Supported Tasks and Leaderboards

The dataset is intended for the Named Entity Recognition (NER) task. The goal is to train and evaluate models that can accurately identify and classify tokens into predefined categories.

## Languages

The text in the dataset is primarily in Russian (ru), with some English (en) mixed in. It contains both Cyrillic and Latin characters, reflecting real-world usage in chats and logs.

## Dataset Structure

The dataset consists of a single configuration with a single train split of 850 examples.

### Data Instances

Each instance in the dataset consists of a list of tokens and a corresponding list of ner_tags.

An example from the dataset looks like this (the `ner_tags` list, which holds one string label per token, is truncated in this snippet):

```json
{
  "tokens": ["Слушай", ",", "я", "в", "2005", "в", "Москве", "на", "Тверской", "15", "стави", "..."],
  "ner_tags": ["..."]
}
```
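Because `tokens` and `ner_tags` are parallel lists, a simple alignment check catches malformed records before training or evaluation. A minimal sketch, assuming records shaped like the example above (the tag values below are illustrative, not taken from the dataset):

```python
def validate_record(record: dict) -> None:
    """Check that tokens and ner_tags are parallel lists of strings."""
    tokens, tags = record["tokens"], record["ner_tags"]
    if len(tokens) != len(tags):
        raise ValueError(
            f"misaligned record: {len(tokens)} tokens vs {len(tags)} tags"
        )
    if not all(isinstance(t, str) for t in tokens + tags):
        raise TypeError("tokens and ner_tags must contain only strings")

# Illustrative record (tag values are hypothetical, not from the dataset)
record = {
    "tokens": ["я", "в", "Москве"],
    "ner_tags": ["O", "O", "GPE"],
}
validate_record(record)  # passes silently when the record is well-formed
```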

### Data Splits

The data is split into:

- train: 850 examples

## Dataset Creation

### Curation Rationale

The dataset was created to address the lack of robust benchmarks for personal data detection in complex, noisy Russian conversational text. Standard NER models often fail in these scenarios, either by missing entities (low recall) or by incorrectly tagging non-sensitive information (low precision). This benchmark serves to evaluate and improve models for real-world data anonymization tasks.
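The recall/precision trade-off mentioned above can be made concrete with simple token-level metrics. A minimal sketch (the tag values are illustrative; entity-level scoring, as done by libraries such as seqeval, is typically stricter than this token-level count):

```python
def token_prf(gold: list[str], pred: list[str], outside: str = "O") -> tuple[float, float, float]:
    """Token-level precision, recall, and F1 over non-'O' tags."""
    assert len(gold) == len(pred), "sequences must be aligned"
    tp = sum(g == p != outside for g, p in zip(gold, pred))          # correct entity tags
    fp = sum(p != outside and g != p for g, p in zip(gold, pred))    # spurious/wrong predictions
    fn = sum(g != outside and g != p for g, p in zip(gold, pred))    # missed/wrong gold tags
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# One over-predicted GPE tag lowers precision but leaves recall perfect
gold = ["O", "PERSON", "O", "GPE"]
pred = ["O", "PERSON", "GPE", "GPE"]
p, r, f = token_prf(gold, pred)  # p ≈ 0.667, r = 1.0, f ≈ 0.8
```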

### Source Data

The source data is derived from internal, anonymized logs of conversational systems, work chats, and customer support interactions. All data has been processed to remove or replace any real personally identifiable information (PII) before being included in this public benchmark.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is designed to facilitate the development of more effective and reliable data anonymization technologies. By providing a challenging benchmark, we aim to help researchers and developers build NLP systems that more effectively protect user privacy.

### Discussion of Biases

While the data is sourced from a variety of conversational contexts, it may reflect the linguistic patterns and biases present in those sources. The data primarily comes from Russian-speaking users in specific technical or customer-support domains, and models trained on this data may not generalize perfectly to other domains (e.g., legal or medical texts).

### Other Known Limitations

The dataset focuses primarily on person entities (PERSON, PUBLIC_PERSON, FICT), geopolitical entities (GPE), and STREET_ADDRESS. It does not cover other types of PII, such as phone numbers, email addresses, or financial information, which would require separate detection models.
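For the PII types this benchmark does not cover, rule-based detectors are a common complementary baseline. A hedged sketch using Python's standard `re` module (the patterns below are illustrative and intentionally simple; production anonymization needs far broader patterns and locale handling):

```python
import re

# Illustrative patterns only: a loose email matcher and a Russian-style
# phone matcher (+7/8 prefix, 3-3-2-2 digit groups with optional separators).
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(?:\+7|8)[\s-]?\(?\d{3}\)?[\s-]?\d{3}[\s-]?\d{2}[\s-]?\d{2}")

def find_pii(text: str) -> dict[str, list[str]]:
    """Return simple regex matches for emails and phone numbers."""
    return {
        "EMAIL": EMAIL_RE.findall(text),
        "PHONE": PHONE_RE.findall(text),
    }

hits = find_pii("пиши на user@example.com или звони +7 999 123-45-67")
# hits["EMAIL"] == ["user@example.com"]; hits["PHONE"] has one match
```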

## Additional Information

### Dataset Curators

This dataset was curated by the team at Just AI as part of the development of Jay Guard.

### Licensing Information

The dataset is licensed under the MIT License.

### Citation Information

If you use this dataset in your research, please cite it as follows:

```bibtex
@misc{jayguard_ner_benchmark,
  author       = {Just AI},
  title        = {Jay Guard NER Benchmark},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {Hugging Face Datasets},
  url          = {https://huggingface.co/datasets/just-ai/jayguard-ner-benchmark}
}
```