---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: symbol
      dtype: string
    - name: symbols
      list: string
      length: 3
    - name: publication_date
      dtype: string
    - name: area
      dtype: string
    - name: distribution
      dtype: string
    - name: agendas
      list: string
      length: 3
    - name: sessions
      list: string
      length: 3
    - name: job_numbers
      list: string
      length: 7
    - name: release_dates
      list: string
      length: 7
    - name: sizes
      list: int64
      length: 21
    - name: title
      dtype: string
    - name: subjects
      list: string
    - name: blobs
      list: binary
      length: 7
    - name: crawl_res
      list: string
      length: 7
  splits:
    - name: train
      num_bytes: 357332434040
      num_examples: 594160
  download_size: 247663416451
  dataset_size: 357332434040
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
task_categories:
  - translation
language:
  - ar
  - es
  - zh
  - en
  - fr
  - ru
  - de
tags:
  - UN
  - UPRPRC
---

This dataset contains all the raw DOC files crawled from the United Nations Digital Library. It was produced by https://github.com/mnbvc-parallel-corpus-team/UPRPRC/blob/v2_record_spider/scripts/v4_list2doc.py, using the index in https://huggingface.co/datasets/bot-yaya/documents.un.org_search_result.
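
A minimal sketch of reading a record with the Hugging Face `datasets` library; `DATASET_REPO` is a placeholder for this repository's id on the Hub, and streaming mode is used because the full dataset is roughly 358 GB:

```python
from datasets import load_dataset

# Placeholder: replace with this repository's id on the Hugging Face Hub.
DATASET_REPO = "DATASET_REPO"

# Streaming iterates over records without downloading all ~358 GB first.
ds = load_dataset(DATASET_REPO, split="train", streaming=True)

record = next(iter(ds))
print(record["id"], record["title"])
# The per-language fields are parallel lists of length 7,
# one slot per crawled language version.
print(len(record["blobs"]), len(record["crawl_res"]))
```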

If you are writing a spider script to download all of these files, you can perform incremental downloads based on this dataset.
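
A hedged sketch of the incremental pattern, keying on the record `id` field (again with `DATASET_REPO` as a placeholder):

```python
from datasets import load_dataset

# Project down to the `id` column so the large `blobs` bytes are not pulled;
# select_columns on a streaming dataset requires a reasonably recent
# `datasets` release.
ids = load_dataset("DATASET_REPO", split="train", streaming=True).select_columns(["id"])
done = {row["id"] for row in ids}

def need_download(record_id: str) -> bool:
    # Crawl only records this dataset does not already cover.
    return record_id not in done
```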

Our UPRPRC project: https://github.com/mnbvc-parallel-corpus-team/UPRPRC

Attention: records that have versions in at most one language are not included, as they cannot form a parallel corpus.

This dataset is part of the MNBVC project: https://huggingface.co/liwu/MNBVC