id stringlengths 2 115 | lastModified stringlengths 24 24 | tags list | author stringlengths 2 42 ⌀ | description stringlengths 0 68.7k ⌀ | citation stringlengths 0 10.7k ⌀ | cardData null | likes int64 0 3.55k | downloads int64 0 10.1M | card stringlengths 0 1.01M |
|---|---|---|---|---|---|---|---|---|---|
dmayhem93/agieval-gaokao-history | 2023-06-18T17:20:33.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 120008
num_examples: 235
download_size: 78981
dataset_size: 120008
license: mit
---
# Dataset Card for "agieval-gaokao-history"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
dmayhem93/agieval-gaokao-physics | 2023-06-18T17:22:01.000Z | [
"license:mit",
"arxiv:2304.06364",
"region:us"
] | dmayhem93 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: query
dtype: string
- name: choices
sequence: string
- name: gold
sequence: int64
splits:
- name: test
num_bytes: 136757
num_examples: 200
download_size: 70363
dataset_size: 136757
license: mit
---
# Dataset Card for "agieval-gaokao-physics"
Dataset taken from https://github.com/microsoft/AGIEval and processed as in that repo.
MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@misc{zhong2023agieval,
title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
year={2023},
eprint={2304.06364},
archivePrefix={arXiv},
primaryClass={cs.CL}
} |
Abdelkareem/simple-benchmark-arabic-summarization | 2023-06-18T13:49:31.000Z | [
"license:apache-2.0",
"region:us"
] | Abdelkareem | null | null | null | 0 | 10 | ---
license: apache-2.0
---
|
haandol/icon | 2023-07-14T07:16:28.000Z | [
"language:en",
"region:us"
] | haandol | null | null | null | 1 | 10 | ---
language: en
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 5823068.0
num_examples: 263
download_size: 5306675
dataset_size: 5823068.0
---
# Dataset Card for "icon"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
microsoft/LCC_csharp | 2023-06-21T02:59:17.000Z | [
"region:us"
] | microsoft | null | null | null | 2 | 10 | ---
dataset_info:
features:
- name: context
dtype: string
- name: gt
dtype: string
splits:
- name: train
num_bytes: 1851797668
num_examples: 100000
- name: validation
num_bytes: 136620599
num_examples: 10000
- name: test
num_bytes: 136701413
num_examples: 10000
download_size: 581666513
dataset_size: 2125119680
---
# Dataset Card for "LCC_csharp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KaiLv/UDR_Java | 2023-06-21T12:40:15.000Z | [
"region:us"
] | KaiLv | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 105539111
num_examples: 164514
- name: validation
num_bytes: 3088869
num_examples: 5172
- name: test
num_bytes: 6865702
num_examples: 10928
- name: debug
num_bytes: 64147056
num_examples: 100000
download_size: 77259976
dataset_size: 179640738
---
# Dataset Card for "UDR_Java"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mlfoundations/VisIT-Bench | 2023-08-18T23:18:52.000Z | [
"annotations_creators:crowdsourced",
"language_creators:found",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"vision-and-language",
"instruction-following",
"human-chatbot-interaction",
"image-instruction-pairs",
"multi-modal",
"task-performance... | mlfoundations | null | null | null | 4 | 10 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
paperswithcode_id: visit-bench
pretty_name: VisIT-Bench
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- vision-and-language
- instruction-following
- human-chatbot-interaction
- image-instruction-pairs
- multi-modal
- task-performance
task_ids: []
extra_gated_prompt: >-
By clicking “Access repository” below, you assert your intention to
exclusively use this resource for research, not for commercial chatbot
development, and agree to abide by the terms detailed in the [VisIT-Bench
license](https://visit-bench.github.io/static/pdfs/visit_bench_license_agreement.txt).
You may also view all instances through the [VisIT-Bench
Explorer](https://huggingface.co/spaces/mlfoundations/visit-bench-explorer-full)
and consult the accompanying [VisIT-Bench Dataset
card](https://huggingface.co/spaces/mlfoundations/visit-bench-explorer-full/blob/main/README.md)
prior to acceptance. If you are unsure about your specific case - do not
hesitate to reach out: visit-bench-support@gmail.com.
license: cc-by-4.0
---
# Dataset Card for VisIT-Bench
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Data Loading](#data-loading)
- [Licensing Information](#licensing-information)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Citation Information](#citation-information)
## Dataset Description
VisIT-Bench is a dataset and benchmark for vision-and-language instruction following. The dataset comprises image-instruction pairs and corresponding example outputs, spanning a wide range of tasks, from simple object recognition to complex reasoning tasks. The dataset provides a holistic view of chatbot capabilities.
The results show that state-of-the-art models such as GPT-4 and BLIP2 have a high success rate, but there is room for improvement.
Homepage: https://visit-bench.github.io/
Paper: https://arxiv.org/abs/2308.06595
GitHub: http://github.com/mlfoundations/Visit-Bench
Point of Contact: yonatanbitton1@gmail.com, hbansal@ucla.edu
## Dataset Structure
### Data Fields
instruction_category (string) - The category of the instruction
image_url (string) - The URL of the image in the instruction
image (image) - The image in the instruction
visual (string) - The visual details in the instruction
instruction (string) - The instruction itself
reference_output (string) - The reference output for the given instruction
human_ratings_gpt4_correct (boolean) - Human ratings indicating if GPT-4 correctly followed the instruction
human_ratings_problem_in_caption (boolean) - Human ratings indicating if there is a problem in the caption
human_ratings_problem_in_gpt4 (boolean) - Human ratings indicating if there is a problem in GPT-4's response
public_images_metadata (dictionary) - Metadata about the image
### Data Splits
The dataset currently has a single TEST split. Further splits will be provided in the future.
### Data Loading
You can load the data as follows (credit to [Hugging Face Datasets](https://huggingface.co/datasets)):
```
from datasets import load_dataset
examples = load_dataset('mlfoundations/visit-bench', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
You can get `<YOUR USER ACCESS TOKEN>` by following these steps:
1) log into your Hugging Face account
2) click on your profile picture
3) click "Settings"
4) click "Access Tokens
5) generate a new token and use that in the `use_auth_token` field
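Putting these steps together, a minimal sketch (the split name `test` follows the Data Splits section above; the token value is a placeholder):
```python
from datasets import load_dataset

# Minimal sketch: the token value below is a placeholder.
examples = load_dataset('mlfoundations/visit-bench', use_auth_token='hf_xxxxxxxx')
sample = examples['test'][0]
print(sample['instruction'])
print(sample['reference_output'])
```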
## Licensing Information
The new contributions of our dataset (e.g., the instructions, reference outputs, model ranking annotations, etc.) are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
All images used are publicly licensed. Please refer to the public license attached to each individual image in the "public_images_metadata" field in the dataset sheets.
Alongside this license, the following conditions apply:
1. **Purpose:** The dataset was primarily designed for use as a test set.
2. **Commercial Use:** Commercially, the dataset may be used as a test set, but it's prohibited to use it as a training set.
By accessing or using this dataset, you acknowledge and agree to abide by these terms in conjunction with the CC BY 4.0 license.
## Annotations
The dataset is annotated using crowd workers on Amazon Mechanical Turk. Workers followed the steps detailed in the paper to generate the annotations. The instructions, reference outputs, and model ranking annotations were generated through this process.
## Considerations for Using the Data
Social Impact of Dataset: The dataset is aimed to facilitate research on AI models' ability to understand and follow instructions given in natural language and paired with visual inputs. Such research could contribute to the development of more interactive, capable, and intelligent AI systems. It could also illuminate areas where current AI technology falls short, informing future research directions.
Data Limitations: The dataset may not cover all possible types of instructions, particularly those requiring complex reasoning or advanced knowledge. The dataset was also created using crowd workers, and thus, may contain mistakes or inconsistencies.
Privacy: The images used in this dataset are publicly available. However, the exact source of the images is not disclosed in the dataset, protecting the privacy of the image creators to some extent. The workers who generated the instructions and annotations were also anonymized.
Curation Rationale: The dataset was curated to provide a broad range of instruction types and difficulty levels. The creators selected a mix of easy, medium, and hard instructions to challenge current AI capabilities.
## Citation Information
@misc{bitton2023visitbench,
title={VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use},
      author={Yonatan Bitton and Hritik Bansal and Jack Hessel and Rulin Shao and Wanrong Zhu and Anas Awadalla and Josh Gardner and Rohan Taori and Ludwig Schmidt},
year={2023},
eprint={2308.06595},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
|
ChanceFocus/flare-headlines | 2023-08-21T04:17:20.000Z | [
"region:us"
] | ChanceFocus | null | null | null | 1 | 10 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
- name: label_type
dtype: string
splits:
- name: train
num_bytes: 20011965
num_examples: 71892
- name: valid
num_bytes: 2868488
num_examples: 10269
- name: test
num_bytes: 6189762
num_examples: 20547
download_size: 899498
dataset_size: 29070215
---
# Dataset Card for "flare-headlines"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ttxy/resume_ner | 2023-08-25T11:02:49.000Z | [
"task_categories:token-classification",
"language:code",
"license:bsd",
"ner",
"region:us"
] | ttxy | null | null | null | 0 | 10 | ---
language:
- code
pretty_name: "resume ner dataseet"
tags:
- ner
license: "bsd"
task_categories:
- token-classification
---
Chinese resume NER dataset. Source: https://github.com/luopeixiang/named_entity_recognition .
The data format is shown below: each line consists of one character and its corresponding tag, the tag set follows the BIOES scheme, and sentences are separated by a blank line.
```text
美 B-LOC
国 E-LOC
的 O
华 B-PER
莱 I-PER
士 E-PER
我 O
跟 O
他 O
谈 O
笑 O
风 O
生 O
```
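For illustration, a minimal Python sketch (not part of the source repo; `train.txt` is a hypothetical file name) that parses this char-per-line BIOES format into (character, tag) sequences:
```python
def read_bioes(path):
    """Parse char-per-line BIOES data; sentences are separated by blank lines."""
    sentences, chars, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:  # blank line marks a sentence boundary
                if chars:
                    sentences.append((chars, tags))
                    chars, tags = [], []
                continue
            char, tag = line.split()  # e.g. "美 B-LOC"
            chars.append(char)
            tags.append(tag)
    if chars:  # flush the last sentence if the file lacks a trailing blank line
        sentences.append((chars, tags))
    return sentences

# sentences = read_bioes("train.txt")  # hypothetical file name
```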
# Results
## Comparison across models:
<img src="https://file.ddot.cc/imagehost/2023/8bb93212-5812-4211-91b8-7a6bda841e1b.png">
## BERT-tiny results
|model | precision | recall | f1-score | support |
|---|---|---|---|---|
|BERT-tiny | 0.9490 | 0.9538 | 0.9447 | all |
|BERT-tiny | 0.9278 | 0.9251 | 0.9313 | 100 training samples |
Notes:
- In later re-runs, BERT-tiny (softmax) + 100 training samples did not reproduce the 0.9313 result; the best result was 0.8612.
- BERT-tiny + LSTM (softmax) + 100 samples reaches a `val_f1` of 0.8737.
|
nRuaif/tinystories-gpt4 | 2023-06-26T07:01:26.000Z | [
"region:us"
] | nRuaif | null | null | null | 0 | 10 | Entry not found |
knowrohit07/know_cot | 2023-06-30T20:52:39.000Z | [
"license:other",
"region:us"
] | knowrohit07 | null | null | null | 1 | 10 | ---
license: other
---
|
krenerd/alpaca_eval_multilingual | 2023-07-11T01:59:15.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | krenerd | Data for alpaca_eval, which aims to help automatic evaluation of instruction-following models | @misc{alpaca_eval,
author = {Xuechen Li and Tianyi Zhang and Yann Dubois and Rohan Taori and Ishaan Gulrajani and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {AlpacaEval: An Automatic Evaluator of Instruction-following Models},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\\url{https://github.com/tatsu-lab/alpaca_eval}}
} | null | 1 | 10 | ---
license: cc-by-nc-4.0
---
### Usage
```python
from datasets import load_dataset

load_dataset("krenerd/alpaca_eval_multilingual", "alpaca_eval") # or alpaca_eval_en
load_dataset("krenerd/alpaca_eval_multilingual", "alpaca_eval_ko")
load_dataset("krenerd/alpaca_eval_multilingual", "alpaca_eval_ja")
```
### Method
The dataset was translated via the GPT-4 API using the following prompts.
```python
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

ja = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(
"You are a helpful assistant fluent in English and Japanese."
),
HumanMessagePromptTemplate.from_template(
"Translate the following text to Japanese. Show the answer only. このテキストを直訳するのではなく、その意味を保持しつつ、より自然なリクエストに言い換えて翻訳してください text=```{instruction}```"
),
]
)
ko = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(
"You are a helpful assistant fluent in English and Korean."
),
HumanMessagePromptTemplate.from_template(
"Translate the following text to Korean. Show the answer only. 말 그대로 번역하지 말고, 의미가 유지되는 한에서 자연스러운 요청으로 번역해줘. text=```{instruction}```"
),
]
)
```
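For illustration, a minimal sketch (not the author's full script, which is linked below) of invoking one of these templates with LangChain's OpenAI chat wrapper:
```python
from langchain.chat_models import ChatOpenAI

# Minimal sketch; assumes OPENAI_API_KEY is set in the environment.
llm = ChatOpenAI(model_name="gpt-4", temperature=0)
messages = ja.format_prompt(instruction="Write a haiku about autumn.").to_messages()
print(llm(messages).content)  # the Japanese translation
```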
Script: https://gist.github.com/sieu-n/88542733914f80f780359f5c82c99a62 |
hongrui/mimic_chest_xray_v_1 | 2023-07-08T01:18:43.000Z | [
"region:us"
] | hongrui | null | null | null | 1 | 10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: report
dtype: string
splits:
- name: train
num_bytes: 2350901047.71
num_examples: 89395
download_size: 2322292341
dataset_size: 2350901047.71
---
# Dataset Card for "mimic_chest_xray_v_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pie/squad_v2 | 2023-09-28T18:37:32.000Z | [
"region:us"
] | pie | null | null | null | 0 | 10 | Entry not found |
leostelon/california-housing | 2023-07-14T05:31:59.000Z | [
"license:mit",
"region:us"
] | leostelon | null | null | null | 0 | 10 | ---
license: mit
---
|
FunDialogues/academia-physics-office-hours | 2023-08-28T23:35:08.000Z | [
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"fictitious dialogues",
"prototyping",
"region:us"
] | FunDialogues | null | null | null | 2 | 10 | ---
license: apache-2.0
task_categories:
- question-answering
- conversational
language:
- en
tags:
- fictitious dialogues
- prototyping
pretty_name: 'academia-physics-office-hours '
size_categories:
- n<1K
---
# fun dialogues
A library of fictitious dialogues that can be used to train language models or augment prompts for prototyping and educational purposes. Fun dialogues currently come in json and csv format for easy ingestion or conversion to popular data structures. Dialogues span various topics such as sports, retail, academia, healthcare, and more. The library also includes basic tooling for loading dialogues and will include quick chatbot prototyping functionality in the future.
Visit the Project Repo: https://github.com/eduand-alvarez/fun-dialogues/
# This Dialogue
This dataset comprises fictitious dialogues between a physics professor and a student during office hours. Check out the example below:
```
"id":1,
"description":"Understanding the concept of velocity",
"dialogue":"Student: Professor, I'm having trouble understanding the concept of velocity. Could you please explain it to me?\n\nProfessor: Of course! Velocity is a fundamental concept in physics that describes the rate of change of an object's position with respect to time. It is a vector quantity, which means it has both magnitude and direction. To calculate velocity, you divide the change in position by the change in time. It is important to note that velocity takes into account both speed and direction. For example, if an object is moving north at a speed of 20 meters per second, its velocity is 20 meters per second in the north direction. Does that clarify it for you?"
```
# How to Load Dialogues
Loading dialogues can be accomplished using the fun dialogues library or Hugging Face datasets library.
## Load using fun dialogues
1. Install fun dialogues package
`pip install fundialogues`
2. Use loader utility to load dataset as pandas dataframe. Further processing might be required for use.
```
from fundialogues import dialoader
# load as pandas dataframe
physics_office_hours = dialoader("FunDialogues/academia-physics-office-hours")
```
## Loading using Hugging Face datasets
1. Install datasets package
2. Load using datasets
```
from datasets import load_dataset
dataset = load_dataset("FunDialogues/academia-physics-office-hours")
```
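Once loaded, each record exposes the `id`, `description`, and `dialogue` fields shown in the example above; a minimal sketch (assuming a `train` split):
```python
from datasets import load_dataset

dataset = load_dataset("FunDialogues/academia-physics-office-hours")
record = dataset["train"][0]  # assuming a "train" split
print(record["description"])
print(record["dialogue"])
```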
## How to Contribute
If you want to contribute to this project and make it better, your help is very welcome. Contributing is also a great way to learn more about social coding on Github, new technologies and their ecosystems, and how to make constructive, helpful bug reports, feature requests and the noblest of all contributions: a good, clean pull request.
### Contributing Your Own Dialogue
If you want to contribute to an existing dialogue or add a new dialogue, please open an issue and I will follow up with you ASAP!
### Implementing Patches and Bug Fixes
- Create a personal fork of the project on Github.
- Clone the fork on your local machine. Your remote repo on Github is called origin.
- Add the original repository as a remote called upstream.
- If you created your fork a while ago be sure to pull upstream changes into your local repository.
- Create a new branch to work on! Branch from develop if it exists, else from master.
- Implement/fix your feature, comment your code.
- Follow the code style of the project, including indentation.
- If the component has tests run them!
- Write or adapt tests as needed.
- Add or change the documentation as needed.
- Squash your commits into a single commit with git's interactive rebase. Create a new branch if necessary.
- Push your branch to your fork on Github, the remote origin.
- From your fork open a pull request in the correct branch. Target the project's develop branch if there is one, else go for master!
If the maintainer requests further changes just push them to your branch. The PR will be updated automatically.
Once the pull request is approved and merged you can pull the changes from upstream to your local repo and delete your extra branch(es).
And last but not least: Always write your commit messages in the present tense. Your commit message should describe what the commit, when applied, does to the code – not what you did to the code.
# Disclaimer
The dialogues contained in this repository are provided for experimental purposes only. It is important to note that these dialogues are assumed to be original work by a human and are entirely fictitious, despite the possibility of some examples including factually correct information. The primary intention behind these dialogues is to serve as a tool for language modeling experimentation and should not be used for designing real-world products beyond non-production prototyping.
Please be aware that the utilization of fictitious data in these datasets may increase the likelihood of language model artifacts, such as hallucinations or unrealistic responses. Therefore, it is essential to exercise caution and discretion when employing these datasets for any purpose.
It is crucial to emphasize that none of the scenarios described in the fun dialogues dataset should be relied upon to provide advice or guidance to humans. These scenarios are purely fictitious and are intended solely for demonstration purposes. Any resemblance to real-world situations or individuals is entirely coincidental.
The responsibility for the usage and application of these datasets rests solely with the individual or entity employing them. By accessing and utilizing these dialogues and all contents of the repository, you acknowledge that you have read and understood this disclaimer, and you agree to use them at your own discretion and risk. |
AhmedBou/French_quotes | 2023-07-21T15:50:55.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:fr",
"license:apache-2.0",
"region:us"
] | AhmedBou | null | null | null | 0 | 10 | ---
license: apache-2.0
task_categories:
- text-classification
- text-generation
language:
- fr
size_categories:
- 1K<n<10K
--- |
johannes-garstenauer/structs_token_size_4_pd_False_reduced_labelled | 2023-07-20T21:05:52.000Z | [
"region:us"
] | johannes-garstenauer | null | null | null | 1 | 10 | ---
dataset_info:
features:
- name: struct
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 7448045059
num_examples: 30656932
download_size: 2199643691
dataset_size: 7448045059
---
# Dataset Card for "structs_token_size_4_pd_False_reduced_labelled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nlpkevinl/whatsthatbook | 2023-08-15T07:29:24.000Z | [
"task_categories:text-retrieval",
"language:en",
"license:odc-by",
"arxiv:2305.15053",
"region:us"
] | nlpkevinl | null | null | null | 0 | 10 | ---
license: odc-by
task_categories:
- text-retrieval
language:
- en
pretty_name: whatsthatbook
extra_gated_prompt: "To access this dataset, you agree to the terms and conditions from the GoodReads website stated here: https://www.goodreads.com/about/terms"
extra_gated_fields:
 I agree to the terms and conditions: checkbox
---
# Dataset Card for WhatsThatBook
## Dataset Description
- **Paper: https://arxiv.org/abs/2305.15053**
- **Point of Contact: k-lin@berkeley.edu**
### Dataset Summary
A collection of tip-of-the-tongue queries for book searches. The dataset was curated from GoodReads community forum user queries. It serves as a training and evaluation
resource for tip-of-the-tongue book queries. The user queries contain the interactions on the community forum, and the documents are books with associated metadata.
### Supported Tasks and Leaderboards
WhatsThatBook is intended for information retrieval tasks, including but not limited to standard retrieval, which uses just the original query posted by the user,
and interactive settings, where the system asks clarification questions to narrow down the user's information needs.
### Languages
The dataset is primarily in English; some book descriptions may contain other languages.
## Dataset Structure
### Data Fields
Data fields for WhatsThatBook queries:
- `question`: Initial query posted to the community forum
- `question_posted_date`: The date that the query was posted in YYYY-MM-DD format
- `book_id`: ID of the gold book used for evaluation
- `answers`: List of the gold book descriptions
The fields for the books:
- `title`: The title of the book
- `author`: The author of the book
- `author_url`: Link to the author page
- `description`: The blurb of the book, containing a description of the plot
- `isbn_13`: The ISBN 13 number
- `date`: String representation of the date from the book webpage
- `parsed_dates`: A list of the publication dates parsed out in YYYY-MM-DD format
- `image_link`: original link to image
- `ratings`: Total number of ratings
- `reviews`: Total number of reviews
- `genres`: Dictionary of genre tags to number of times tagged with that genre
- `id`: ID of the book, corresponding to the query file
### Data Splits
The dataset comprises two parts: WTB (WhatsThatBook) and TOMT (tip-of-my-tongue). WhatsThatBook contains standard train, dev, and test splits, and TOMT serves as an evaluation set.
## Dataset Creation
### Source Data
## Additional Information
### Dataset Curators
1. Kevin Lin, UC Berkeley, k-lin@berkeley.edu
2. Kyle Lo, Allen Institute for Artificial Intelligence, kylel@allenai.org
### Citation Information
```
@article{lin2023decomposing,
title={Decomposing Complex Queries for Tip-of-the-tongue Retrieval},
author={Lin, Kevin and Lo, Kyle and Gonzalez, Joseph E and Klein, Dan},
journal={arXiv preprint arXiv:2305.15053},
year={2023}
}
```
|
Clinton/texttosqlv2_25000_v2 | 2023-07-28T12:40:03.000Z | [
"license:apache-2.0",
"region:us"
] | Clinton | null | null | null | 3 | 10 | ---
license: apache-2.0
---
|
youlun77/2000_TextClassification | 2023-07-28T12:48:06.000Z | [
"region:us"
] | youlun77 | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 147675.6
num_examples: 1800
- name: test
num_bytes: 16408.4
num_examples: 200
download_size: 74511
dataset_size: 164084.0
---
# Dataset Card for "2000_TextClassification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TrainingDataPro/facial-hair-classification-dataset | 2023-09-19T19:34:25.000Z | [
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"region:us"
] | TrainingDataPro | null | null | null | 1 | 10 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
language:
- en
tags:
- code
---
# Facial Hair Classification Dataset
The Facial Hair Classification Dataset is a comprehensive collection of high-resolution images showcasing individuals **with and without** a beard. The dataset includes a diverse range of individuals of various ages, ethnicities, and genders.
The dataset also contains images of individuals **without facial hair**, serving as a valuable reference for comparison and contrast. These images showcase clean-shaven faces, enabling research into distinguishing facial hair patterns from those without any beard growth.
Each image in the dataset is carefully curated to showcase the subject's face prominently and with optimal lighting conditions, ensuring clarity and accuracy in the classification and analysis of facial hair presence.
### Types of photos in the dataset:
- **beard** - photos of people **with** a beard.
- **no beard** - photos of people **without** a beard.

The Facial Hair Classification Dataset offers a robust collection of images that accurately represent the diverse range of facial hair styles found in the real world. This dataset provides ample opportunities for training facial recognition algorithms, identifying facial hair patterns, and conducting research on facial hair classification and analysis.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=facial-hair-classification-dataset) to discuss your requirements, learn about the price and buy the dataset.
# Content
The dataset is split into three folders, **train**, **validate**, and **test**, to build a classification model.
Each of these folders includes:
- **beard** folder: includes photos of people **with** a beard
- **no_beard** folder: includes photos of people **without** a beard
### File with the extension .csv
- **file**: link to access the media file,
- **type**: whether the person has a beard
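For illustration, a minimal pandas sketch for the CSV described above (`annotations.csv` is a hypothetical file name, and the `beard` value assumes labels matching the folder names):
```python
import pandas as pd

# Hypothetical file name; columns per the description above.
df = pd.read_csv("annotations.csv")
beards = df[df["type"] == "beard"]  # assumes labels match the folder names
print(len(beards), "images with a beard")
```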
# Files for Facial Hair Classification can be collected in accordance with your requirements.
## [TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=facial-hair-classification-dataset) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** |
P1ayer-1/books-3-textbooks | 2023-07-29T00:24:19.000Z | [
"region:us"
] | P1ayer-1 | null | null | null | 5 | 10 | ---
dataset_info:
features:
- name: title
dtype: string
- name: authors
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 3106863819
num_examples: 5437
download_size: 1871392347
dataset_size: 3106863819
---
# Dataset Card for "books-3-textbooks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
diwank/imaginary-nlp-dataset | 2023-08-02T03:03:02.000Z | [
"region:us"
] | diwank | null | null | null | 1 | 10 | ---
dataset_info:
features:
- name: dialog
sequence: string
splits:
- name: train
num_bytes: 564724099.0
num_examples: 982313
- name: validation
num_bytes: 16714196.993174555
num_examples: 28313
- name: test
num_bytes: 17673411.69127517
num_examples: 29883
download_size: 340208629
dataset_size: 599111707.6844497
---
# Dataset Card for "imaginary-nlp-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/partitioned_v3_standardized_03 | 2023-08-01T17:59:53.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: message
dtype: string
- name: message_type
dtype: string
- name: message_id
dtype: int64
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_id
dtype: string
splits:
- name: train
num_bytes: 9164848.138603047
num_examples: 17044
download_size: 645599
dataset_size: 9164848.138603047
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "partitioned_v3_standardized_03"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Arziva/biorxiv | 2023-08-02T11:55:34.000Z | [
"license:mit",
"region:us"
] | Arziva | null | null | null | 0 | 10 | ---
license: mit
---
|
kaxap/llama2-sql-instruct-sys-prompt | 2023-08-05T01:19:57.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | kaxap | null | null | null | 0 | 10 | ---
license: cc-by-nc-4.0
---
|
emozilla/pg19-test-tokenized | 2023-08-08T19:26:51.000Z | [
"region:us"
] | emozilla | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: short_book_title
dtype: string
- name: publication_date
dtype: int32
- name: url
dtype: string
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: tokenized_len
dtype: int64
splits:
- name: test
num_bytes: 97172727
num_examples: 100
download_size: 45658545
dataset_size: 97172727
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "pg19-test-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tasksource/esci | 2023-08-09T11:23:31.000Z | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"language:en",
"language:ja",
"language:es",
"license:apache-2.0",
"arxiv:2206.06588",
"region:us"
] | tasksource | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: example_id
dtype: int64
- name: query
dtype: string
- name: query_id
dtype: int64
- name: product_id
dtype: string
- name: product_locale
dtype: string
- name: esci_label
dtype: string
- name: small_version
dtype: int64
- name: large_version
dtype: int64
- name: product_title
dtype: string
- name: product_description
dtype: string
- name: product_bullet_point
dtype: string
- name: product_brand
dtype: string
- name: product_color
dtype: string
- name: product_text
dtype: string
splits:
- name: train
num_bytes: 5047037946
num_examples: 2027874
- name: test
num_bytes: 1631847321
num_examples: 652490
download_size: 2517788457
dataset_size: 6678885267
license: apache-2.0
task_categories:
- text-classification
- text-retrieval
language:
- en
- ja
- es
---
# Dataset Card for "esci"
ESCI product search dataset
https://github.com/amazon-science/esci-data/
Preprocessing:
- joined the two relevant files
- `product_text` aggregates all product text
- mapped `esci_label` to its full name
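A minimal loading sketch (split and field names per the YAML above):
```python
from datasets import load_dataset

esci = load_dataset("tasksource/esci")
row = esci["train"][0]
print(row["query"], "->", row["esci_label"])
```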
```bib
@article{reddy2022shopping,
title={Shopping Queries Dataset: A Large-Scale {ESCI} Benchmark for Improving Product Search},
author={Chandan K. Reddy and Lluís Màrquez and Fran Valero and Nikhil Rao and Hugo Zaragoza and Sambaran Bandyopadhyay and Arnab Biswas and Anlu Xing and Karthik Subbian},
year={2022},
eprint={2206.06588},
archivePrefix={arXiv}
}
``` |
knoriy/OE-DCT-Movie-clips | 2023-09-19T21:53:22.000Z | [
"task_categories:conversational",
"license:apache-2.0",
"audio",
"audio2text",
"region:us"
] | knoriy | null | null | null | 0 | 10 | ---
license: apache-2.0
task_categories:
- conversational
tags:
- audio
- audio2text
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: language
dtype: string
- name: start
dtype: float64
- name: end
dtype: float64
splits:
- name: train
num_bytes: 5907501
num_examples: 23371
download_size: 1983053
dataset_size: 5907501
---
|
adityarra07/sub_ATC_test | 2023-08-09T17:25:54.000Z | [
"region:us"
] | adityarra07 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 130645075.80770035
num_examples: 1000
download_size: 120802206
dataset_size: 130645075.80770035
---
# Dataset Card for "sub_ATC_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DynamicSuperb/NoiseDetection_VCTK-MUSAN-Gaussian | 2023-08-11T07:52:33.000Z | [
"region:us"
] | DynamicSuperb | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: instruction
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 13812517186
num_examples: 26865
download_size: 3397759328
dataset_size: 13812517186
---
# Dataset Card for "NoiseDetectiongaussian_VCTKMusan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Photolens/oasst1-langchain-openorca-formatted | 2023-08-11T15:30:32.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"lang... | Photolens | null | null | null | 2 | 10 | ---
language:
- en
- es
- ru
- de
- pl
- th
- vi
- sv
- bn
- da
- he
- it
- fa
- sk
- id
- nb
- el
- nl
- hu
- eu
- zh
- eo
- ja
- ca
- cs
- bg
- fi
- pt
- tr
- ro
- ar
- uk
- gl
- fr
- ko
task_categories:
- conversational
- text-generation
license: apache-2.0
---
## Dataset overview
Dataset license: apache-2.0
This dataset contains langchain-formatted [**oasst1**](https://huggingface.co/datasets/OpenAssistant/oasst1) messages with OpenOrcaxOpenChat special tokens.
This dataset is intended to power langchain applications. An LLM trained on this data is expected to perform well with langchain apps.
Format of the dataset for every prompter-assistant message pair:
````
User: "{prompter_message}"<end_of_turn>Assistant: ```json
{"action": "Final Answer", "action_input": "{assistant_message}"}
```<end_of_turn>
````
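For illustration, a minimal Python sketch (not the authors' conversion script) of mapping one prompter-assistant pair onto this template:
````python
import json

def format_pair(prompter_message: str, assistant_message: str) -> str:
    # Wrap the assistant reply in the langchain "Final Answer" JSON block.
    action = json.dumps({"action": "Final Answer", "action_input": assistant_message})
    return (
        f'User: "{prompter_message}"<end_of_turn>'
        f'Assistant: ```json\n{action}\n```<end_of_turn>'
    )

print(format_pair("What is the capital of France?", "The capital of France is Paris."))
````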
## Languages
**Languages with over 1000 messages**
- English: 71956
- Spanish: 43061
- Russian: 9089
- German: 5279
- Chinese: 4962
- French: 4251
- Thai: 3042
- Portuguese (Brazil): 2969
- Catalan: 2260
- Korean: 1553
- Ukrainian: 1352
- Italian: 1320
- Japanese: 1018
<details>
<summary><b>Languages with under 1000 messages</b></summary>
<ul>
<li>Vietnamese: 952</li>
<li>Basque: 947</li>
<li>Polish: 886</li>
<li>Hungarian: 811</li>
<li>Arabic: 666</li>
<li>Dutch: 628</li>
<li>Swedish: 512</li>
<li>Turkish: 454</li>
<li>Finnish: 386</li>
<li>Czech: 372</li>
<li>Danish: 358</li>
<li>Galician: 339</li>
<li>Hebrew: 255</li>
<li>Romanian: 200</li>
<li>Norwegian Bokmål: 133</li>
<li>Indonesian: 115</li>
<li>Bulgarian: 95</li>
<li>Bengali: 82</li>
<li>Persian: 72</li>
<li>Greek: 66</li>
<li>Esperanto: 59</li>
<li>Slovak: 19</li>
</ul>
</details>
## Contact
- Email: art.photolens.ai@gmail.com
- Discord: https://discord.gg/QJT3e6ABz8
- Twitter: @PhotolensAi |
augtoma/usmle_step_2 | 2023-08-11T21:25:09.000Z | [
"region:us"
] | augtoma | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: options
struct:
- name: A
dtype: string
- name: B
dtype: string
- name: C
dtype: string
- name: D
dtype: string
- name: E
dtype: string
- name: F
dtype: string
- name: G
dtype: string
- name: answer
dtype: string
- name: answer_idx
dtype: string
splits:
- name: test
num_bytes: 133267
num_examples: 109
download_size: 80679
dataset_size: 133267
---
# Dataset Card for "usmle_self_eval_step2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rojagtap/natural_questions_clean | 2023-08-22T14:52:40.000Z | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"natural-questions",
"question-answering",
"text-generation",
"text2text",
"region:us"
] | rojagtap | null | null | null | 0 | 10 | ---
license: mit
task_categories:
- question-answering
- text-generation
- text2text-generation
language:
- en
tags:
- natural-questions
- question-answering
- text-generation
- text2text
pretty_name: natural-questions-clean
size_categories:
- 100K<n<1M
configs:
- config_name: raw
data_files:
- split: train
path: "raw/train.jsonl"
- split: validation
path: "raw/validation.jsonl"
- config_name: either
data_files:
- split: train
path: "either/train.jsonl"
- split: validation
path: "either/validation.jsonl"
default: true
- config_name: long
data_files:
- split: train
path: "long/train.jsonl"
- split: validation
path: "long/validation.jsonl"
- config_name: short
data_files:
- split: train
path: "short/train.jsonl"
- split: validation
path: "short/validation.jsonl"
--- |
YassineBenlaria/tamasheq_data | 2023-09-04T20:59:38.000Z | [
"region:us"
] | YassineBenlaria | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: path
dtype: string
- name: sentence
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence_lat
dtype: string
splits:
- name: test
num_bytes: 3785121.0
num_examples: 18
- name: train
num_bytes: 70490040.97552449
num_examples: 267
- name: validation
num_bytes: 6424920.161290322
num_examples: 19
download_size: 0
dataset_size: 80700082.1368148
---
# Dataset Card for "tamasheq_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
muhammadravi251001/debug-entailment | 2023-09-10T02:40:49.000Z | [
"license:openrail",
"region:us"
] | muhammadravi251001 | null | null | null | 0 | 10 | ---
license: openrail
---
You can download this dataset as follows (if you only need the premise, hypothesis, and label columns):
```
from datasets import load_dataset, Dataset, DatasetDict
import pandas as pd
data_files = {"train": "data_nli_train_df_debug.csv",
"validation": "data_nli_val_df_debug.csv",
"test": "data_nli_test_df_debug.csv"}
dataset = load_dataset("muhammadravi251001/debug-entailment", data_files=data_files)
selected_columns = ["premise", "hypothesis", "label"]
# selected_columns = dataset.column_names['train'] # Uncomment this line to retrieve all of the columns
df_train = pd.DataFrame(dataset["train"])
df_train = df_train[selected_columns]
df_val = pd.DataFrame(dataset["validation"])
df_val = df_val[selected_columns]
df_test = pd.DataFrame(dataset["test"])
df_test = df_test[selected_columns]
train_dataset = Dataset.from_pandas(df_train)
validation_dataset = Dataset.from_pandas(df_val)
test_dataset = Dataset.from_pandas(df_test)
dataset = DatasetDict({"train": train_dataset, "validation": validation_dataset, "test": test_dataset})
```
If you want to download the keep-invalid-data dataset:
```
from datasets import load_dataset, Dataset, DatasetDict
import pandas as pd
data_files = {"train": "data_nli_train_df_keep.csv",
"validation": "data_nli_val_df_keep.csv",
"test": "data_nli_test_df_keep.csv"}
dataset = load_dataset("muhammadravi251001/debug-entailment", data_files=data_files)
# selected_columns = ["premise", "hypothesis", "label"]
selected_columns = dataset.column_names['train'] # This line retrieves all of the columns
df_train = pd.DataFrame(dataset["train"])
df_train = df_train[selected_columns]
df_val = pd.DataFrame(dataset["validation"])
df_val = df_val[selected_columns]
df_test = pd.DataFrame(dataset["test"])
df_test = df_test[selected_columns]
train_dataset = Dataset.from_pandas(df_train)
validation_dataset = Dataset.from_pandas(df_val)
test_dataset = Dataset.from_pandas(df_test)
dataset = DatasetDict({"train": train_dataset, "validation": validation_dataset, "test": test_dataset})
```
If you want to download the drop-invalid-data dataset:
```
from datasets import load_dataset, Dataset, DatasetDict
import pandas as pd
data_files = {"train": "data_nli_train_df_drop.csv",
"validation": "data_nli_val_df_drop.csv",
"test": "data_nli_test_df_drop.csv"}
dataset = load_dataset("muhammadravi251001/debug-entailment", data_files=data_files)
# selected_columns = ["premise", "hypothesis", "label"]
selected_columns = dataset.column_names['train'] # This line retrieves all of the columns
df_train = pd.DataFrame(dataset["train"])
df_train = df_train[selected_columns]
df_val = pd.DataFrame(dataset["validation"])
df_val = df_val[selected_columns]
df_test = pd.DataFrame(dataset["test"])
df_test = df_test[selected_columns]
train_dataset = Dataset.from_pandas(df_train)
validation_dataset = Dataset.from_pandas(df_val)
test_dataset = Dataset.from_pandas(df_test)
dataset = DatasetDict({"train": train_dataset, "validation": validation_dataset, "test": test_dataset})
``` |
scholarly360/indian_ipo_prospectus_data_with_pageno | 2023-08-15T13:27:37.000Z | [
"region:us"
] | scholarly360 | null | null | null | 2 | 10 | ---
{}
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Prospectus text mining is very important for the investor community to identify major risk
factors and evaluate the use of the amount to be raised during an IPO. For this dataset, the author
downloaded 100 prospectuses from the Indian market regulator's website. The dataset contains the URL and OCR text for 100 prospectuses.
Further, the author released a RoBERTa LM and a sentence transformer for use.
This dataset also contains page numbers, for retrieval-augmented generation.
### Supported Tasks and Leaderboards
Retrieval Augmented Generation
### Languages
ENGLISH
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
There are 4 columns:
- `title_prospectus`: Title of the IPO prospectus
- `href_prospectus`: Location of the HTML
- `pdf_prospectus`: PDF of the prospectus
- `content_whole_prospectus`: OCR text for the whole prospectus
### Data Splits
N.A.
## Dataset Creation
### Curation Rationale
Prospectus text mining
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
This will help investors and the merchant bank community explore prospectuses in a more automated way, thus saving time.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@misc{mishra2022roberta,
author = {Abhishek Mishra and Yogendra Sisodia},
title = {RoBERTa Goes for IPO: Prospectus Analysis with Language Models for Indian Initial Public Offerings},
year = {2022},
url = {https://aircconline.com/csit/papers/vol12/csit121905.pdf},
}
```
### Contributions
Made by Author [Scholarly360](https://github.com/Scholarly360). |
ad019el/ar_data | 2023-08-15T23:36:31.000Z | [
"region:us"
] | ad019el | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 40579164.0
num_examples: 1500
- name: test
num_bytes: 15846990.0
num_examples: 500
download_size: 55259208
dataset_size: 56426154.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "ar_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
usvsnsp/duped-num-frequencies | 2023-08-17T08:20:34.000Z | [
"region:us"
] | usvsnsp | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: TokenID
dtype: int64
- name: Frequency
dtype: int64
splits:
- name: memorized
num_bytes: 960000
num_examples: 60000
- name: non_memorized
num_bytes: 960000
num_examples: 60000
- name: total
num_bytes: 960000
num_examples: 60000
download_size: 1965812
dataset_size: 2880000
---
# Dataset Card for "duped-num-frequencies"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Ram096/input_data_guanaco_llama2_1kformat | 2023-08-18T07:29:12.000Z | [
"license:llama2",
"region:us"
] | Ram096 | null | null | null | 0 | 10 | ---
license: llama2
---
|
RealTimeData/bbc_latest | 2023-10-09T00:39:13.000Z | [
"region:us"
] | RealTimeData | null | null | null | 0 | 10 | ---
{}
---
# Latest BBC News
You can always access the latest BBC News articles via this dataset.
We update the dataset weekly, on every Sunday, so the dataset always provides the latest BBC News articles from the last week.
The current dataset on the main branch contains the latest BBC News articles submitted from 2023-10-02 to 2023-10-09.
The data collection was conducted on 2023-10-09.
Use the dataset via:
```python
import datasets

ds = datasets.load_dataset('RealTimeData/bbc_latest')
```
# Previous versions
You can access previous versions by requesting different branches.
For example, you can find the 2023-08-20 version via:
```python
import datasets

ds = datasets.load_dataset('RealTimeData/bbc_latest', revision = '2023-08-20')
```
Check all available versions by clicking the "Files and versions" button on the top bar.
|
jessiedu314/FindSumAll | 2023-08-20T22:42:32.000Z | [
"region:us"
] | jessiedu314 | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1142199650
num_examples: 83254
- name: validation
num_bytes: 142621982
num_examples: 10405
- name: test
num_bytes: 142826827
num_examples: 10405
download_size: 635119558
dataset_size: 1427648459
---
# Dataset Card for "FindSumAll"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mekaneeky/acholi-crowd-validated-paths | 2023-08-25T14:18:13.000Z | [
"region:us"
] | mekaneeky | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Path
dtype: string
- name: Key
dtype: int64
- name: Speaker
dtype: string
- name: Transcription
dtype: string
splits:
- name: train
num_bytes: 617369
num_examples: 4804
- name: valid
num_bytes: 13082
num_examples: 101
- name: test
num_bytes: 12723
num_examples: 96
download_size: 281385
dataset_size: 643174
---
# Dataset Card for "acholi-crowd-validated-paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
qgyd2021/e_commerce_customer_service | 2023-09-14T01:33:20.000Z | [
"task_categories:text-retrieval",
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:en",
"e-commerce",
"region:us"
] | qgyd2021 | null | @dataset{e_commerce_customer_service,
author = {Xing Tian},
title = {e_commerce_customer_service},
month = aug,
year = 2023,
publisher = {Xing Tian},
version = {1.0},
} | null | 0 | 10 | ---
task_categories:
- text-retrieval
- question-answering
language:
- en
tags:
- e-commerce
size_categories:
- 1M<n<10M
---
## E-commerce Customer Service Dataset
E-commerce data collected from the [lightinthebox](https://www.lightinthebox.com/) website. This data can be used for research on e-commerce customer-service chatbots.
Data contents:
faq.json: contains question-answer pairs for common questions.
product.jsonl: contains some product information.
The examples directory contains the crawler code used to collect the product information.
python==3.8.10
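For illustration, a minimal sketch (the JSON structure is not documented here, so the snippet only inspects the file) for fetching `faq.json` from this repo:
```python
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download("qgyd2021/e_commerce_customer_service", "faq.json", repo_type="dataset")
with open(path, encoding="utf-8") as f:
    faq = json.load(f)
print(type(faq), len(faq))  # inspect the top-level structure
```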
|
argilla/cloud_assistant_questions | 2023-08-30T11:46:23.000Z | [
"region:us"
] | argilla | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 16707.87786259542
num_examples: 196
- name: test
num_bytes: 5626.12213740458
num_examples: 66
download_size: 12576
dataset_size: 22334.0
---
# Dataset Card for "cloud_assistant_questions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
77xiaoyuanzi8/code_reviewer_demo | 2023-09-01T08:07:09.000Z | [
"license:apache-2.0",
"region:us"
] | 77xiaoyuanzi8 | null | null | null | 0 | 10 | ---
license: apache-2.0
---
|
nigh8w0lf/Hydra_moe_toolllama_dataset | 2023-09-19T05:17:02.000Z | [
"region:us"
] | nigh8w0lf | null | null | null | 0 | 10 | Entry not found |
jaydip-4646/sneaker | 2023-09-05T15:28:38.000Z | [
"region:us"
] | jaydip-4646 | null | null | null | 0 | 10 | Entry not found |
cmaldona/Generalization-MultiClass-CLINC150-ROSTD | 2023-09-05T22:11:52.000Z | [
"task_categories:text-classification",
"language:en",
"license:openrail",
"region:us"
] | cmaldona | null | null | null | 0 | 10 | ---
name: generalization-test
version: 1.0.0
description: Merge between 3 datasets.
configs:
- config_name: clinc150
default: true
data_files:
- split: train
path: "train_clinc150.csv"
- split: validation
path: "validation_clinc150.csv"
- split: test
path: "test_clinc150.csv"
- config_name: rostd+
data_files:
- split: train
path: "train_rostd+.csv"
- split: validation
path: "val_rostd+.csv"
- split: test
path: "test_rostd+.csv"
license: openrail
task_categories:
- text-classification
language:
- en
---
This dataset merges 3 datasets and offers two setups for experiments on generalization for the multi-class classification task (see the loading sketch below).
* ID, near-OOD, covariate-shift: [CLINC150](https://github.com/clinc/oos-eval)
* ID, near-OOD, covariate-shift: [ROSTD+OOD](https://github.com/vgtomahawk/LR_GC_OOD) (fbreleasecoarse version)
* far-OOD: [News Category](https://www.kaggle.com/datasets/rmisra/news-category-dataset?resource=download) (v3)
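A minimal loading sketch (config names per the YAML above):
```python
from datasets import load_dataset

clinc = load_dataset("cmaldona/Generalization-MultiClass-CLINC150-ROSTD", "clinc150")
rostd = load_dataset("cmaldona/Generalization-MultiClass-CLINC150-ROSTD", "rostd+")
print(clinc["train"][0])
```
|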
wwydmanski/helena | 2023-09-06T09:48:14.000Z | [
"region:us"
] | wwydmanski | null | null | null | 0 | 10 | Entry not found |
clarin-knext/touche2020-pl | 2023-09-12T09:50:08.000Z | [
"region:us"
] | clarin-knext | null | null | null | 0 | 10 | Entry not found |
hantech/correct_dataset | 2023-09-08T07:06:27.000Z | [
"region:us"
] | hantech | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: source_text
dtype: string
- name: target_text
dtype: string
splits:
- name: train
num_bytes: 80541676
num_examples: 626100
download_size: 11445024
dataset_size: 80541676
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "correct_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FinchResearch/Ultraboros | 2023-09-09T13:13:11.000Z | [
"region:us"
] | FinchResearch | null | null | null | 0 | 10 | Entry not found |
bitadin/attributes-v6 | 2023-09-13T10:12:06.000Z | [
"region:us"
] | bitadin | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 62905967
num_examples: 95534
download_size: 35331871
dataset_size: 62905967
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "attributes-v6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wanadzhar913/crawl-bikesrepublic | 2023-09-09T17:29:06.000Z | [
"language:en",
"license:apache-2.0",
"region:us"
] | wanadzhar913 | null | null | null | 0 | 10 | ---
license: apache-2.0
language:
- en
---
### TLDR
- website: [bikesrepublic](https://www.bikesrepublic.com/)
- num. of webpages scraped: 6,969
- link to dataset: https://huggingface.co/datasets/wanadzhar913/crawl-bikesrepublic
- last date of scraping: 10th September 2023
- status: complete
- pull request: https://github.com/huseinzol05/malaysian-dataset/pull/291
- contributed to: https://github.com/huseinzol05/malaysian-dataset |
arsenZabara/LastTry | 2023-09-09T22:52:36.000Z | [
"region:us"
] | arsenZabara | null | null | null | 0 | 10 | Entry not found |
yjching/tokenized_ts_data | 2023-09-11T04:59:18.000Z | [
"region:us"
] | yjching | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Problem
dtype: string
- name: Resolution
dtype: string
- name: input_ids
sequence: int32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1272561
num_examples: 197
download_size: 78711
dataset_size: 1272561
---
# Dataset Card for "tokenized_ts_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
quocanh34/test_result_large_data_oom | 2023-09-11T08:15:53.000Z | [
"region:us"
] | quocanh34 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: id
dtype: string
- name: pred_str
dtype: string
- name: test_norm
dtype: string
splits:
- name: train
num_bytes: 207422
num_examples: 1299
download_size: 108838
dataset_size: 207422
---
# Dataset Card for "test_result_large_data_oom"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Photolens/DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted | 2023-09-11T14:02:18.000Z | [
"region:us"
] | Photolens | null | null | null | 2 | 10 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 22432780
num_examples: 49920
download_size: 9066390
dataset_size: 22432780
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "DISC-Med-SFT-en-translated-only-CMeKG-OpenOrca-formatted"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pietrolesci/pubmed-200k-rct | 2023-09-11T16:14:30.000Z | [
"region:us"
] | pietrolesci | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
data_files:
- split: train
path: embedding_all-MiniLM-L12-v2/train-*
- split: validation
path: embedding_all-MiniLM-L12-v2/validation-*
- split: test
path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
data_files:
- split: train
path: embedding_all-mpnet-base-v2/train-*
- split: validation
path: embedding_all-mpnet-base-v2/validation-*
- split: test
path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
data_files:
- split: train
path: embedding_multi-qa-mpnet-base-dot-v1/train-*
- split: validation
path: embedding_multi-qa-mpnet-base-dot-v1/validation-*
- split: test
path: embedding_multi-qa-mpnet-base-dot-v1/test-*
dataset_info:
- config_name: default
features:
- name: labels
dtype:
class_label:
names:
'0': BACKGROUND
'1': CONCLUSIONS
'2': METHODS
'3': OBJECTIVE
'4': RESULTS
- name: text
dtype: string
- name: uid
dtype: int64
splits:
- name: train
num_bytes: 379382835
num_examples: 2211861
- name: validation
num_bytes: 4994899
num_examples: 28932
- name: test
num_bytes: 5026344
num_examples: 29493
download_size: 209039426
dataset_size: 389404078
- config_name: embedding_all-MiniLM-L12-v2
features:
- name: uid
dtype: int64
- name: embedding_all-MiniLM-L12-v2
sequence: float32
splits:
- name: train
num_bytes: 3423960828
num_examples: 2211861
- name: validation
num_bytes: 44786736
num_examples: 28932
- name: test
num_bytes: 45655164
num_examples: 29493
download_size: 4916495311
dataset_size: 3514402728
- config_name: embedding_all-mpnet-base-v2
features:
- name: uid
dtype: int64
- name: embedding_all-mpnet-base-v2
sequence: float32
splits:
- name: train
num_bytes: 6821379324
num_examples: 2211861
- name: validation
num_bytes: 89226288
num_examples: 28932
- name: test
num_bytes: 90956412
num_examples: 29493
download_size: 8405313596
dataset_size: 7001562024
- config_name: embedding_multi-qa-mpnet-base-dot-v1
features:
- name: uid
dtype: int64
- name: embedding_multi-qa-mpnet-base-dot-v1
sequence: float32
splits:
- name: train
num_bytes: 6821379324
num_examples: 2211861
- name: validation
num_bytes: 89226288
num_examples: 28932
- name: test
num_bytes: 90956412
num_examples: 29493
download_size: 8405286790
dataset_size: 7001562024
---
# Dataset Card for "pubmed-200k-rct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dot-ammar/AR-dotless-mediumPlus | 2023-09-12T03:24:41.000Z | [
"region:us"
] | dot-ammar | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: clean
dtype: string
- name: dotless
dtype: string
splits:
- name: train
num_bytes: 782074235.6168703
num_examples: 4446330
download_size: 446112756
dataset_size: 782074235.6168703
---
# Dataset Card for "AR-dotless-mediumPlus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
proteinea/contact_prediction | 2023-09-20T22:07:10.000Z | [
"license:cc-by-4.0",
"doi:10.57967/hf/1121",
"region:us"
] | proteinea | null | null | null | 0 | 10 | ---
license: cc-by-4.0
---
|
pietrolesci/agnews | 2023-09-13T12:02:12.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"region:us"
] | pietrolesci | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
data_files:
- split: train
path: embedding_all-MiniLM-L12-v2/train-*
- split: test
path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
data_files:
- split: train
path: embedding_all-mpnet-base-v2/train-*
- split: test
path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
data_files:
- split: train
path: embedding_multi-qa-mpnet-base-dot-v1/train-*
- split: test
path: embedding_multi-qa-mpnet-base-dot-v1/test-*
dataset_info:
- config_name: default
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': World
'1': Sports
'2': Business
'3': Sci/Tech
- name: uid
dtype: int64
splits:
- name: train
num_bytes: 30777303
num_examples: 120000
- name: test
num_bytes: 1940274
num_examples: 7600
download_size: 20531429
dataset_size: 32717577
- config_name: embedding_all-MiniLM-L12-v2
features:
- name: uid
dtype: int64
- name: embedding_all-MiniLM-L12-v2
sequence: float32
splits:
- name: train
num_bytes: 185760000
num_examples: 120000
- name: test
num_bytes: 11764800
num_examples: 7600
download_size: 276467219
dataset_size: 197524800
- config_name: embedding_all-mpnet-base-v2
features:
- name: uid
dtype: int64
- name: embedding_all-mpnet-base-v2
sequence: float32
splits:
- name: train
num_bytes: 370080000
num_examples: 120000
- name: test
num_bytes: 23438400
num_examples: 7600
download_size: 472647323
dataset_size: 393518400
- config_name: embedding_multi-qa-mpnet-base-dot-v1
features:
- name: uid
dtype: int64
- name: embedding_multi-qa-mpnet-base-dot-v1
sequence: float32
splits:
- name: train
num_bytes: 370080000
num_examples: 120000
- name: test
num_bytes: 23438400
num_examples: 7600
download_size: 472640830
dataset_size: 393518400
task_categories:
- text-classification
language:
- en
size_categories:
- 100K<n<1M
---
This is the same dataset as [`ag_news`](https://huggingface.co/datasets/ag_news).
The only differences are
1. Addition of a unique identifier, `uid`
1. Addition of 3 embedding columns, one for each of the following sentence-transformers (see the loading sketch after this list)
   - `all-mpnet-base-v2`
   - `multi-qa-mpnet-base-dot-v1`
   - `all-MiniLM-L12-v2`
1. Renaming of the `label` column to `labels` for easier compatibility with the transformers library
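A minimal sketch, using the config names from the YAML above, for loading the text alongside one of the precomputed embedding configs (the two configs share the `uid` column):

```python
from datasets import load_dataset

# Default config: text, labels, uid.
ds = load_dataset("pietrolesci/agnews", split="train")

# Embedding config: uid plus one embedding column per row.
emb = load_dataset("pietrolesci/agnews", "embedding_all-MiniLM-L12-v2", split="train")

print(ds[0]["text"], ds[0]["labels"])
print(len(emb[0]["embedding_all-MiniLM-L12-v2"]))  # embedding dimensionality
```
|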
sordonia/redpajama-sample_from_valid_all | 2023-09-13T18:38:26.000Z | [
"region:us"
] | sordonia | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: subject
dtype: string
- name: docno
dtype: int64
- name: score
dtype: float64
- name: dfq
dtype: int64
- name: text
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 2289695594
num_examples: 133927
download_size: 1236906938
dataset_size: 2289695594
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "redpajama-sample_from_valid_all"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HydraLM/corpus_1_clustered | 2023-09-14T07:34:35.000Z | [
"region:us"
] | HydraLM | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: text
dtype: string
- name: conversation_id
dtype: int64
- name: dataset_id
dtype: string
- name: unique_conversation_id
dtype: string
- name: embedding
sequence: float64
- name: text_processed
dtype: string
- name: __index_level_0__
dtype: int64
- name: cluster
sequence: int64
splits:
- name: train
num_bytes: 99791008
num_examples: 10000
download_size: 75705515
dataset_size: 99791008
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "corpus_1_clustered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
johannes-garstenauer/pooling_net_embeddings_dim_16 | 2023-09-14T12:50:04.000Z | [
"region:us"
] | johannes-garstenauer | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: last_cls
sequence: float32
- name: label
dtype: int64
splits:
- name: train
num_bytes: 3800
num_examples: 50
download_size: 5640
dataset_size: 3800
---
# Dataset Card for "pooling_net_embeddings_dim_16"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Cyleux/v65convotraining | 2023-09-15T07:46:07.000Z | [
"region:us"
] | Cyleux | null | null | null | 0 | 10 | Entry not found |
goodfellowliu/Urban100 | 2023-09-15T06:27:09.000Z | [
"license:apache-2.0",
"region:us"
] | goodfellowliu | null | null | null | 0 | 10 | ---
license: apache-2.0
---
|
Nacholmo/coco-pattern | 2023-09-16T05:43:17.000Z | [
"region:us"
] | Nacholmo | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: filepath
dtype: string
- name: sentids
list: int32
- name: filename
dtype: string
- name: imgid
dtype: int32
- name: split
dtype: string
- name: sentences_tokens
list:
list: string
- name: sentences_raw
list: string
- name: sentences_sentid
list: int32
- name: cocoid
dtype: int32
- name: id
dtype: int64
- name: conditioning_image
dtype: image
splits:
- name: train
num_bytes: 14068039590.25
num_examples: 113287
download_size: 14013924288
dataset_size: 14068039590.25
---
# Dataset Card for "coco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Vishal24/llama-prompt | 2023-09-18T03:50:45.000Z | [
"region:us"
] | Vishal24 | null | null | null | 0 | 10 | Entry not found |
whateverweird17/parasci_data | 2023-09-17T08:46:54.000Z | [
"region:us"
] | whateverweird17 | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 9393333
num_examples: 38883
- name: validation
num_bytes: 1878763.2317722398
num_examples: 7777
download_size: 5445189
dataset_size: 11272096.23177224
---
# Dataset Card for "parasci_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
usvsnsp/deduped-embeddings | 2023-09-17T13:33:35.000Z | [
"region:us"
] | usvsnsp | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: sequence_id
dtype: int64
- name: embeddings
sequence: float32
splits:
- name: train
num_bytes: 11138657220
num_examples: 7195515
download_size: 15591208109
dataset_size: 11138657220
---
# Dataset Card for "deduped-embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vincenttttt/department_college_ForFineTune | 2023-09-17T15:23:16.000Z | [
"region:us"
] | vincenttttt | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1719829
num_examples: 3673
download_size: 312305
dataset_size: 1719829
---
# Dataset Card for "department_college_ForFineTune"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
result-muse256-muse512-wuerst-sdv15/96998511 | 2023-09-18T07:28:20.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 165
num_examples: 10
download_size: 1327
dataset_size: 165
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "96998511"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
legacy107/qa_wikipedia_sentence_transformer | 2023-09-23T02:32:01.000Z | [
"region:us"
] | legacy107 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: anchor
dtype: string
- name: negative
dtype: string
- name: positive
dtype: string
splits:
- name: train
num_bytes: 31856811
num_examples: 29965
- name: validation
num_bytes: 3167027
num_examples: 3000
- name: test
num_bytes: 3103240
num_examples: 2981
download_size: 2854716
dataset_size: 38127078
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "qa_wikipedia_sentence_transformer"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Falah/story_2_prompts | 2023-09-23T10:18:20.000Z | [
"region:us"
] | Falah | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: prompts
dtype: string
splits:
- name: train
num_bytes: 2575
num_examples: 3
download_size: 8655
dataset_size: 2575
---
# Dataset Card for "story_2_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nguyenthanhdo/vhac_v2_chai_format | 2023-09-18T16:42:20.000Z | [
"region:us"
] | nguyenthanhdo | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: model_input
dtype: string
- name: model_output
dtype: string
splits:
- name: train
num_bytes: 369591059.0
num_examples: 108658
download_size: 177238172
dataset_size: 369591059.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "vhac_v2_chai_format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Dippi9845/arxiv-no-stop-word | 2023-09-18T20:00:41.000Z | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | Dippi9845 | null | null | null | 0 | 10 | ---
license: cc-by-nc-nd-4.0
---
|
jiuyuan/course-recommendations | 2023-09-23T19:36:09.000Z | [
"license:afl-3.0",
"region:us"
] | jiuyuan | null | null | null | 0 | 10 | ---
license: afl-3.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 47265
num_examples: 73
download_size: 9199
dataset_size: 47265
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ironchanchellor/MetalDam_Cropped | 2023-09-19T00:24:56.000Z | [
"region:us"
] | ironchanchellor | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: pixel_values
dtype: image
- name: label
dtype: image
splits:
- name: train
num_bytes: 43505113.0
num_examples: 124
- name: validation
num_bytes: 11683804.0
num_examples: 32
download_size: 55199351
dataset_size: 55188917.0
---
# Dataset Card for "MetalDam_Cropped"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
BarraHome/Linux | 2023-09-19T01:40:50.000Z | [
"license:mit",
"region:us"
] | BarraHome | null | null | null | 0 | 10 | ---
license: mit
---
|
HSJ1221/food | 2023-09-19T05:07:01.000Z | [
"region:us"
] | HSJ1221 | null | null | null | 0 | 10 | Entry not found |
result-muse256-muse512-wuerst-sdv15/3457e37d | 2023-09-19T06:37:28.000Z | [
"region:us"
] | result-muse256-muse512-wuerst-sdv15 | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: result
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 164
num_examples: 10
download_size: 1314
dataset_size: 164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "3457e37d"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
warshakhan/donut_vqa_ISynHMP_all_labels | 2023-09-19T08:43:22.000Z | [
"region:us"
] | warshakhan | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 580858079.0
num_examples: 2800
- name: valid
num_bytes: 85643829.0
num_examples: 400
- name: test
num_bytes: 172886967.0
num_examples: 800
download_size: 804946514
dataset_size: 839388875.0
---
# Dataset Card for "donut_vqa_ISynHMP_all_labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
crcb/tec | 2023-09-19T14:18:48.000Z | [
"license:apache-2.0",
"region:us"
] | crcb | null | null | null | 0 | 10 | ---
license: apache-2.0
---
|
nafi-zaman/celloscope_bangla_ner_dataset | 2023-10-09T09:39:50.000Z | [
"region:us"
] | nafi-zaman | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
splits:
- name: train
num_bytes: 49733198
num_examples: 279661
- name: validation
num_bytes: 6216034
num_examples: 34957
- name: test
num_bytes: 6240532
num_examples: 34959
download_size: 8745975
dataset_size: 62189764
---
# Dataset Card for "celloscope_bangla_ner_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ayoubkirouane/Arabic_common_voice_11_0 | 2023-09-19T15:51:03.000Z | [
"region:us"
] | ayoubkirouane | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 331885627.728
num_examples: 10438
- name: test
num_bytes: 318132067.84
num_examples: 10440
download_size: 577509839
dataset_size: 650017695.568
---
# Dataset Card for "Arabic_common_voice_11_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
etanios/arxiv-abstracts-full | 2023-09-19T19:23:42.000Z | [
"region:us"
] | etanios | null | null | null | 0 | 10 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: Index
dtype: int64
- name: title
dtype: string
- name: abstract
dtype: string
splits:
- name: train
num_bytes: 11196483
num_examples: 9999
download_size: 6348986
dataset_size: 11196483
---
# Dataset Card for "arxiv-abstracts-full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chriscors/slbh | 2023-09-20T01:33:37.000Z | [
"license:openrail",
"region:us"
] | chriscors | null | null | null | 0 | 10 | ---
license: openrail
---
|
Tverous/SemEval-Audio | 2023-09-21T00:06:26.000Z | [
"region:us"
] | Tverous | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: video_name
dtype: string
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker
dtype: string
- name: emotion
dtype:
class_label:
names:
'0': anger
'1': disgust
'2': fear
'3': joy
'4': neutral
'5': sadness
'6': surprise
splits:
- name: train
num_bytes: 684419162.647
num_examples: 13353
download_size: 695130678
dataset_size: 684419162.647
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SemEval-Audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
0xk1h0/py150k_sanitized_json | 2023-09-21T14:31:32.000Z | [
"license:mit",
"region:us"
] | 0xk1h0 | null | null | null | 1 | 10 | ---
license: mit
---
|
Sagar12/text2sql | 2023-09-20T22:37:32.000Z | [
"license:unknown",
"region:us"
] | Sagar12 | null | null | null | 0 | 10 | ---
license: unknown
---
|
Sneka/decision | 2023-09-28T05:59:12.000Z | [
"region:us"
] | Sneka | null | null | null | 0 | 10 | Entry not found |
euclaise/writingprompts | 2023-09-21T19:12:16.000Z | [
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"arxiv:1805.04833",
"region:us"
] | euclaise | null | null | null | 0 | 10 | ---
language:
- en
license: mit
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: prompt
dtype: string
- name: story
dtype: string
splits:
- name: train
num_bytes: 858816216
num_examples: 272600
- name: test
num_bytes: 47681276
num_examples: 15138
- name: validation
num_bytes: 48904993
num_examples: 15620
download_size: 605049830
dataset_size: 955402485
---
# Dataset Card for "writingprompts"
WritingPrompts dataset, as used in [Hierarchical Neural Story Generation](https://arxiv.org/pdf/1805.04833.pdf). Parsed from [the archive](https://dl.fbaipublicfiles.com/fairseq/data/writingPrompts.tar.gz) |
cbasconc3132/Instructions_objects | 2023-09-22T03:49:45.000Z | [
"region:us"
] | cbasconc3132 | null | null | null | 0 | 10 | Entry not found |
cris177/Arguments | 2023-10-04T09:02:42.000Z | [
"region:us"
] | cris177 | null | null | null | 1 | 10 | Entry not found |
Kerenfuentes/testing_hb | 2023-09-22T21:44:59.000Z | [
"region:us"
] | Kerenfuentes | null | null | null | 0 | 10 | Entry not found |
ContextualAI/tiny-wiki100-chunks | 2023-09-22T17:47:30.000Z | [
"region:us"
] | ContextualAI | null | null | null | 0 | 10 | ---
dataset_info:
features:
- name: doc_id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 63619
num_examples: 100
download_size: 43300
dataset_size: 63619
---
# Dataset Card for "tiny-wiki100-chunks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tomaarsen/conll2002 | 2023-09-23T10:53:11.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:es",
"language:nl",
"license... | tomaarsen | Named entities are phrases that contain the names of persons, organizations, locations, times and quantities.
Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
The shared task of CoNLL-2002 concerns language-independent named entity recognition.
We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups.
The participants of the shared task will be offered training and test data for at least two languages.
They will use the data for developing a named-entity recognition system that includes a machine learning component.
Information sources other than the training data may be used in this shared task.
We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).
The train/validation/test sets are available in Spanish and Dutch.
For more details see https://www.clips.uantwerpen.be/conll2002/ner/ and https://www.aclweb.org/anthology/W02-2024/ | @inproceedings{tjong-kim-sang-2002-introduction,
title = "Introduction to the {C}o{NLL}-2002 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F.",
booktitle = "{COLING}-02: The 6th Conference on Natural Language Learning 2002 ({C}o{NLL}-2002)",
year = "2002",
url = "https://www.aclweb.org/anthology/W02-2024",
} | null | 0 | 10 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- es
- nl
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
- part-of-speech
paperswithcode_id: conll-2002
pretty_name: CoNLL-2002
config_names:
- es
- nl
dataset_info:
- config_name: es
features:
- name: id
dtype: string
- name: document_id
dtype: int32
- name: sentence_id
dtype: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': AO
'1': AQ
'2': CC
'3': CS
'4': DA
'5': DE
'6': DD
'7': DI
'8': DN
'9': DP
'10': DT
'11': Faa
'12': Fat
'13': Fc
'14': Fd
'15': Fe
'16': Fg
'17': Fh
'18': Fia
'19': Fit
'20': Fp
'21': Fpa
'22': Fpt
'23': Fs
'24': Ft
'25': Fx
'26': Fz
'27': I
'28': NC
'29': NP
'30': P0
'31': PD
'32': PI
'33': PN
'34': PP
'35': PR
'36': PT
'37': PX
'38': RG
'39': RN
'40': SP
'41': VAI
'42': VAM
'43': VAN
'44': VAP
'45': VAS
'46': VMG
'47': VMI
'48': VMM
'49': VMN
'50': VMP
'51': VMS
'52': VSG
'53': VSI
'54': VSM
'55': VSN
'56': VSP
'57': VSS
'58': Y
'59': Z
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 6738717
num_examples: 8323
- name: validation
num_bytes: 1349064
num_examples: 1915
- name: test
num_bytes: 1306252
num_examples: 1517
download_size: 4140690
dataset_size: 9394033
- config_name: nl
features:
- name: id
dtype: string
- name: document_id
dtype: int32
- name: sentence_id
dtype: int32
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': Adj
'1': Adv
'2': Art
'3': Conj
'4': Int
'5': Misc
'6': N
'7': Num
'8': Prep
'9': Pron
'10': Punc
'11': V
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
splits:
- name: train
num_bytes: 5435346
num_examples: 15806
- name: validation
num_bytes: 1017418
num_examples: 2895
- name: test
num_bytes: 1850382
num_examples: 5195
download_size: 3642241
dataset_size: 8303146
---
# Dataset Card for CoNLL-2002
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://www.clips.uantwerpen.be/conll2002/ner/)
- **Repository:** [github](https://github.com/teropa/nlp/tree/master/resources/corpora/conll2002)
- **Paper:** [paper](https://www.aclweb.org/anthology/W02-2024/)
- **Point of Contact:** [Erik Tjong Kim Sang](erikt@uia.ua.ac.be)
### Dataset Summary
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
The shared task of CoNLL-2002 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations and names of miscellaneous entities that do not belong to the previous three groups. The participants of the shared task will be offered training and test data for at least two languages. They will use the data for developing a named-entity recognition system that includes a machine learning component. Information sources other than the training data may be used in this shared task. We are especially interested in methods that can use additional unannotated data for improving their performance (for example co-training).
### Supported Tasks and Leaderboards
Named Entity Recognition (NER) is a subtask of Information Extraction. Different NER systems were evaluated as a part of the Sixth Message Understanding Conference in 1995 (MUC6). The target language was English. The participating systems performed well. However, many of them used language-specific resources for performing the task and it is unknown how they would have performed on another language than English.
After 1995 NER systems have been developed for some European languages and a few Asian languages. There have been at least two studies that have applied one NER system to different languages. Palmer and Day [PD97] have used statistical methods for finding named entities in newswire articles in Chinese, English, French, Japanese, Portuguese and Spanish. They found that the difficulty of the NER task was different for the six languages but that a large part of the task could be performed with simple methods. Cucerzan and Yarowsky [CY99] used both morphological and contextual clues for identifying named entities in English, Greek, Hindi, Rumanian and Turkish. With minimal supervision, they obtained overall F measures between 40 and 70, depending on the languages used.
- `named-entity-recognition`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data (see the metric sketch after this list).
- `part-of-speech`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A part-of-speech tag is correct only if it is equal to the corresponding tag in the data.
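As a sketch of the entity-level F1 computation, the `seqeval` metric can be loaded through the `evaluate` library; the tag sequences below are made-up examples, not taken from this dataset:

```python
import evaluate

# seqeval scores whole entity spans, so a B-LOC predicted for a B-ORG
# reference counts as one missed entity and one spurious entity.
seqeval = evaluate.load("seqeval")

predictions = [["B-PER", "I-PER", "O", "B-LOC"]]
references = [["B-PER", "I-PER", "O", "B-ORG"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"])  # 0.5: the PER span matches, the last span does not
```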
### Languages
There are two languages available : Spanish (es) and Dutch (nl).
## Dataset Structure
### Data Instances
The examples look like this :
```
{
'id': '0',
'document_id': 0,
'sentence_id': 0,
'tokens': ['Melbourne', '(', 'Australia', ')', ',', '25', 'may', '(', 'EFE', ')', '.'],
'pos_tags': [29, 21, 29, 22, 13, 59, 28, 21, 28, 22, 20],
'ner_tags': [5, 0, 5, 0, 0, 0, 0, 0, 3, 0, 0]
}
```
The original data files within the Dutch sub-dataset have `-DOCSTART-` lines used to separate documents, but these lines are removed here.
Indeed `-DOCSTART-` is a special line that acts as a boundary between two different documents, and it is filtered out in this implementation.
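A minimal loading sketch with the `datasets` library, assuming this repository id:

```python
from datasets import load_dataset

# Two configs are available: "es" (Spanish) and "nl" (Dutch).
conll_es = load_dataset("tomaarsen/conll2002", "es")

example = conll_es["train"][0]
print(example["tokens"])
print(example["ner_tags"])  # integer class ids; see the tag lists below
```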
### Data Fields
- `id`: id of the sample
- `document_id`: an `int32` feature tracking which document the sample is from.
- `sentence_id`: an `int32` feature tracking which sentence in this document the sample is from.
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
- `pos_tags`: the POS tags of each token
The POS tags correspond to this list for Spanish:
```
'AO', 'AQ', 'CC', 'CS', 'DA', 'DE', 'DD', 'DI', 'DN', 'DP', 'DT', 'Faa', 'Fat', 'Fc', 'Fd', 'Fe', 'Fg', 'Fh', 'Fia', 'Fit', 'Fp', 'Fpa', 'Fpt', 'Fs', 'Ft', 'Fx', 'Fz', 'I', 'NC', 'NP', 'P0', 'PD', 'PI', 'PN', 'PP', 'PR', 'PT', 'PX', 'RG', 'RN', 'SP', 'VAI', 'VAM', 'VAN', 'VAP', 'VAS', 'VMG', 'VMI', 'VMM', 'VMN', 'VMP', 'VMS', 'VSG', 'VSI', 'VSM', 'VSN', 'VSP', 'VSS', 'Y', 'Z'
```
And this list for Dutch:
```
'Adj', 'Adv', 'Art', 'Conj', 'Int', 'Misc', 'N', 'Num', 'Prep', 'Pron', 'Punc', 'V'
```
The NER tags correspond to this list:
```
"O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC",
```
The NER tags have the same format as in the chunking task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC).
It is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity, usually only the top level entity is marked.
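A short sketch for mapping the integer tag ids back to these string labels via the `ClassLabel` feature metadata, assuming the dataset was loaded as in the sketch above:

```python
# ner_tags is a Sequence of ClassLabel, so the inner feature
# carries the id-to-name mapping.
ner_feature = conll_es["train"].features["ner_tags"].feature
print(ner_feature.names)  # ['O', 'B-PER', 'I-PER', 'B-ORG', ...]

example = conll_es["train"][0]
labels = [ner_feature.int2str(tag) for tag in example["ner_tags"]]
print(list(zip(example["tokens"], labels)))
```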
### Data Splits
For both configurations (Spanish and Dutch), there are three splits.
The original splits were named `train`, `testa` and `testb` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes:
| | train | validation | test |
| ----- |-------:|------------:|------:|
| N. Examples (Spanish) | 8324 | 1916 | 1518 |
| N. Examples (Dutch) | 15807 | 2896 | 5196 |
## Dataset Creation
### Curation Rationale
The dataset was created to provide new resources for two languages that were under-served in statistical machine learning at the time, Dutch and Spanish.
### Source Data
The Spanish data is a collection of news wire articles made available by the Spanish EFE News Agency. The articles are from May 2000.
The Dutch data consist of four editions of the Belgian newspaper "De Morgen" of 2000 (June 2, July 1, August 1 and September 1).
#### Initial Data Collection and Normalization
The articles were word-tokenized, information on the exact pre-processing pipeline is unavailable.
#### Who are the source language producers?
The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.
### Annotations
#### Annotation process
For the Dutch data, the annotator has followed the MITRE and SAIC guidelines for named entity recognition (Chinchor et al., 1999) as well as possible.
#### Who are the annotators?
The Spanish data annotation was carried out by the TALP Research Center of the Technical University of Catalonia (UPC) and the Center of Language and Computation (CLiC) of the University of Barcelona (UB).
The Dutch data was annotated as a part of the Atranos project at the University of Antwerp.
### Personal and Sensitive Information
The data is sourced from newspaper text and only contains mentions of public figures or individuals.
## Considerations for Using the Data
### Social Impact of Dataset
Named Entity Recognition systems can be used to efficiently index news text, allowing to easily gather all information pertaining to an organization or individual. Making such resources widely available in languages other than English can support better research and user experience for a larger part of the world's population. At the same time, better indexing and discoverability can also enable surveillance by state actors.
### Discussion of Biases
News text reproduces the biases of society, and any system trained on news data should be cognizant of these limitations and the risk for models to learn spurious correlations in this context, for example between a person's gender and their occupation.
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
The annotation of the Spanish data was funded by the European Commission through the NAMIC project (IST-1999-12392).
### Licensing Information
The licensing status of the data, especially the news source text, is unknown.
### Citation Information
The dataset can be cited with the following [BibTex](http://www.bibtex.org/)-formatted reference:
```
@inproceedings{tjong-kim-sang-2002-introduction,
title = "Introduction to the {C}o{NLL}-2002 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F.",
booktitle = "{COLING}-02: The 6th Conference on Natural Language Learning 2002 ({C}o{NLL}-2002)",
year = "2002",
url = "https://www.aclweb.org/anthology/W02-2024",
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq) for adding this dataset. |