| id (string, 2–115 chars) | lastModified (string, 24 chars) | tags (list) | author (string, 2–42 chars, nullable) | description (string, 0–6.67k chars, nullable) | citation (string, 0–10.7k chars, nullable) | likes (int64, 0–3.66k) | downloads (int64, 0–8.89M) | created (timestamp[us]) | card (string, 11–977k chars) | card_len (int64, 11–977k) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
tp05/shortStories | 2023-10-18T22:27:32.000Z | [
"region:us"
] | tp05 | null | null | 0 | 20 | 2023-10-10T20:07:58 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Soheil-FM/faq | 2023-10-11T12:34:50.000Z | [
"region:us"
] | Soheil-FM | null | null | 0 | 20 | 2023-10-10T22:03:48 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
smangrul/ad-copy-generation | 2023-10-11T06:10:12.000Z | [
"region:us"
] | smangrul | null | null | 1 | 20 | 2023-10-11T06:08:02 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 445199.82471516216
num_examples: 1000
- name: test
num_bytes: 62773.17528483786
num_examples: 141
download_size: 194198
dataset_size: 507973.0
---
# Dataset Card for "ad-copy-generation"
Formatted the dataset https://huggingface.co/datasets/jaykin01/advertisement-copy to follow the Llama V2 chat template for instruction tuning.
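For reference, a minimal sketch of the general single-turn Llama 2 chat-template shape that such formatting targets (illustrative only; the exact prompt wording used for this dataset is not documented in this card, and the helper below is hypothetical):
```python
def to_llama2_chat(instruction: str, response: str) -> str:
    # Hypothetical helper: Llama 2 chat wraps the user turn in [INST] ... [/INST],
    # followed by the assistant response, inside <s> ... </s> markers.
    return f"<s>[INST] {instruction} [/INST] {response} </s>"

print(to_llama2_chat("Write ad copy for a coffee brand.", "Wake up bold. Brew better."))
```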
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
| 714 | [
[
-0.00554656982421875,
-0.037506103515625,
0.0098876953125,
0.037506103515625,
-0.036346435546875,
0.00948333740234375,
0.00559234619140625,
-0.0208282470703125,
0.06658935546875,
0.048492431640625,
-0.057037353515625,
-0.037628173828125,
-0.037445068359375,
... |
gayathrimanoj/dataset_shell | 2023-10-11T17:47:35.000Z | [
"region:us"
] | gayathrimanoj | null | null | 0 | 20 | 2023-10-11T16:24:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
erickrribeiro/gender-by-name | 2023-10-11T20:10:33.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"language:pt",
"license:cc-by-4.0",
"gender_by_name",
"social_science",
"uci",
"region:us"
] | erickrribeiro | null | null | 0 | 20 | 2023-10-11T19:42:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: Name
dtype: string
- name: Gender
dtype:
class_label:
names:
'0': F
'1': M
- name: Count
dtype: int64
- name: Probability
dtype: float64
splits:
- name: train
num_bytes: 4090843.4554794286
num_examples: 117815
- name: test
num_bytes: 1022719.5445205712
num_examples: 29454
download_size: 2497614
dataset_size: 5113563
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
- pt
tags:
- gender_by_name
- social_science
- uci
pretty_name: Gender by Name
size_categories:
- 100K<n<1M
---
# Dataset Card for "Gender-by-Name"
This dataset attributes first names to genders, giving counts and probabilities. It combines open-source government data from the US, UK, Canada, and Australia. The dataset is taken from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/dataset/591/gender+by+name).
## Dataset Information
This dataset combines raw counts for first/given names of male and female babies in the periods covered by the source datasets, and then calculates a probability for a name given the aggregate count. Source datasets are from government authorities:
- US: Baby Names from Social Security Card Applications - National Data, 1880 to 2019
- UK: Baby names in England and Wales Statistical bulletins, 2011 to 2018
- Canada: British Columbia 100 Years of Popular Baby names, 1918 to 2018
- Australia: Popular Baby Names, Attorney-General's Department, 1944 to 2019
## Has Missing Values?
No
## Variable Information
- Name: String
- Gender: 0/1 (female/male)
- Count: Integer
- Probability: Float
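As a minimal illustrative sketch of how these fields surface through the `datasets` library (repo id and field names taken from this card):
```python
from datasets import load_dataset

# Load the train split of the default configuration described above.
ds = load_dataset("erickrribeiro/gender-by-name", split="train")

row = ds[0]
# Gender is a class label ('0': F, '1': M); int2str recovers the letter.
gender = ds.features["Gender"].int2str(row["Gender"])
print(row["Name"], gender, row["Count"], row["Probability"])
```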
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,884 | [
[
-0.01318359375,
-0.00934600830078125,
0.0078887939453125,
0.02801513671875,
-0.0207061767578125,
-0.006893157958984375,
0.0170745849609375,
-0.0192108154296875,
0.019683837890625,
0.02105712890625,
-0.07012939453125,
-0.05462646484375,
-0.054443359375,
-0.00... |
DewiBrynJones/banc-trawsgrifiadau-bangor | 2023-10-28T05:32:34.000Z | [
"size_categories:10K<n<100K",
"language:cy",
"license:cc0-1.0",
"verbatim transcriptions",
"speech recognition",
"region:us"
] | DewiBrynJones | Dyma fanc o 25 awr 34 munud a 24 eiliad o segmentau o leferydd naturiol dros hanner cant o gyfranwyr
ar ffurf ffeiliau mp3, ynghyd â thrawsgrifiadau 'verbatim' cyfatebol o’r lleferydd ar ffurf ffeil .tsv.
Mae'r mwyafrif o'r lleferydd yn leferydd digymell, naturiol. Dosbarthwn y deunydd hwn o dan drwydded
agored CC0.
This resource is a bank of 25 hours 34 minutes and 24 seconds of segments of natural speech from over 50
contributors in mp3 file format, together with corresponding 'verbatim' transcripts of the speech in .tsv
file format. The majority of the speech is spontaneous, natural speech. We distribute this material under
a CC0 open license. | 0 | 20 | 2023-10-12T14:16:57 | ---
license: cc0-1.0
language:
- cy
tags:
- verbatim transcriptions
- speech recognition
pretty_name: 'Banc Trawsgrifiadau Bangor'
size_categories:
- 10K<n<100K
---
[See below for English](#bangor-transcription-bank)
# Banc Trawsgrifiadau Bangor
Dyma fanc o 25 awr 34 munud a 24 eiliad o segmentau o leferydd naturiol dros hanner cant o gyfranwyr ar ffurf ffeiliau mp3, ynghyd â thrawsgrifiadau 'verbatim' cyfatebol o’r lleferydd ar ffurf ffeil .tsv. Mae'r mwyafrif o'r lleferydd yn leferydd digymell, naturiol. Dosbarthwn y deunydd hwn o dan drwydded agored CC0.
## Pwrpas
Pwrpas y trawsgrifiadau hyn yw gweithredu fel data hyfforddi ar gyfer modelau adnabod lleferydd, gan gynnwys [ein modelau wav2vec](https://github.com/techiaith/docker-wav2vec2-cy). Ar gyfer y diben hwnnw, mae gofyn am drawsgrifiadau mwy verbatim o'r hyn a ddywedwyd na'r hyn a welir mewn trawsgrifiadau traddodiadol ac mewn isdeitlau, felly datblygwyd confensiwn arbennig ar gyfer y gwaith trawsgrifio ([gweler isod](#confensiynau_trawsgrifio)). Gydag ein modelau wav2vec, caiff cydran ychwanegol, sef 'model iaith', ei defnyddio ar ôl y model adnabod lleferydd i safoni mwy ar allbwn y model adnabod lleferydd i fod yn debycach i drawsgrifiadau traddodiadol ac isdeitlau.
Rydyn ni wedi darparu 3 ffeil .tsv, sef clips.tsv, train.tsv a test.tsv. Mae clips.tsv yn cynnwys ein trawsgrifiadau i gyd. Crëwyd train.tsv a test.tsv er mwyn darparu setiau 'safonol' sy'n caniatáu i ddefnyddwyr allu gymharu modelau gan wahanol hyfforddwyr yn deg, hynny yw fe'u crëwyd at bwrpas meincnodi. Mae train.tsv yn cynnwys 80% o'n trawsgrifiadau, a test.tsv yn cynnwys y 20% sy'n weddill.
Dyma enghraifft o gynnwys y data:
```
audio_filename audio_filesize transcript duration
f86a046fd0964e0386d8c1363907183d.mp3 898272 *post industrial* yym a gyda yy dwi'n ca'l deud 5092
f0c2310fdca34faaa83beca5fa7ed212.mp3 809720 sut i ymdopio felly, wedyn erbyn hyn mae o nôl yn y cartra 4590
3eec3feefe254c9790739c22dd63c089.mp3 1335392 Felly ma' hon hefyd yn ddogfen fydd yn trosglwyddo gyda'r plant bobol ifanc o un cam i'r llall ac hefyd erbyn hyn i'r coleg 'lly. 7570
```
Ceir pedair colofn yn y ffeiliau .tsv. Y cyntaf yw enw’r ffeil sain. Maint y ffeil sain yw’r ail. Y trawsgrifiad ei hun sydd yn y drydedd golofn. Hyd y clip sain sydd yn yr olaf.
Dyma'r wybodaeth am y colofnau.
| Maes| Esboniad |
| ------ | ------ |
| `audio_filename`| Enw'r ffeil sain o fewn y ffolder 'clips'|
| `audio_filesize` | Maint y ffeil|
| `transcript` | Trawsgrifiad |
| `duration` | Hyd amser y clip mewn milliseconds. |
## Y Broses o Greu’r Adnodd
Casglwyd y ffeiliau sain yn bennaf o bodlediadau Cymraeg gyda chaniatâd eu perchnogion yn ogystal â'r cyfranwyr unigol. Rydym yn ddiolchgar tu hwnt i’r bobl yna. Yn ogystal, crewyd rhywfaint o sgriptiau ar batrwm eitemau newyddion ac erthyglau a'u darllen gan ymchwilwyr yr Uned Technolegau Iaith er mwyn sicrhau bod cynnwys o'r math hwnnw yn y banc.
Gyrrwyd y ffeiliau sain trwy ein trawsgrifiwr awtomataidd mewnol i segmentu’r sain a chreu trawsgrifiadau amrwd. Defnyddiwyd pecyn trawsgrifio Elan 6.4 (ar gael o https://archive.mpi.nl/tla/elan) gan drawsgrifwyr profiadol i wrando ar a chywiro’r trawsgrifiad amrwd.
## Nodyn Ynghylch Anonymeiddio’r Cynnwys
Er tegwch i’r cyfranwyr, rydyn ni wedi anonymeiddio’r trawsgrifiadau. Penderfynwyd anonymeiddio nid yn unig enwau pobl unigol, ond hefyd unrhyw Wybodaeth Bersonol Adnabyddadwy (PII) gan gynnwys, ond nid yn gyfyngedig i:
* Rhif ffôn
* Teitlau swyddi/galwedigaethau
* Gweithleoedd
* Enwau mannau cyhoeddus
* Lleoliad daearyddol
* Dyddiadau/amseroedd
Wrth drawsgrifio marciwyd pob segment oedd yn cynnwys PII gyda’r tag \<PII>, yna wnaethom hidlo allan pob segment oedd yn cynnwys tag \<PII> er mwyn sicrhau nad oedd unrhyw wybodaeth bersonol yn cael eu cyhoeddi fel rhan o’r adnodd hwn.
Rydym hefyd wedi newid trefn trawsgrifiadau i fod ar hap, felly nid ydynt wedi'u cyhoeddi yn y drefn y maent yn eu ymddangos yn y ffeiliau sain gwreiddiol.
<a name="confensiynau_trawsgrifio"></a>
## Confensiynau Trawsgrifio
Datblygwyd y confensiynau trawsgrifio hyn er mwyn sicrhau fod y trawsgrifiadau nid yn unig yn verbatim ond hefyd yn gyson. Fe’u datblygwyd trwy gyfeirio at gonfensiynau a ddefnyddir gan yr Uned yn y gorffennol, confensiynau eraill megis y rhai a defnyddiwyd yng nghorpora CorCenCC, Siarad, CIG1 a CIG2, a hefyd trwy broses o ddatblygu parhaol wrth i’r tîm ymgymryd â’r dasg o drawsgrifio.
**NODWCH** - gan ein bod wedi datblygu’r egwyddorion trawsgrifio yn rhannol wrth ymgymryd â’r dasg o drawsgrifio nid yw’r trawsgrifiadau cynnar o reidrwydd yn dilyn yr egwyddorion cant y cant. Bwriadwn wirio’r trawsgrifiadau wedi i ni fireinio’r confensiynau.
### Collnodau
Ni ddefnyddiwyd collnodau i farcio pob un llythyren a hepgorwyd gan siaradwyr. Er enghraifft, _gwitho_ (sef ynganiad o _gweithio_) sy’n gywir, nid _gw’ith’o_.
Yn hytrach, defnyddiwyd collnodau i wahaniaethu rhwng gwahanol eiriau oedd yn cael eu sillafu'r union yr un fath fel arall. Er enghraifft rydym yn defnyddio collnod o flaen _’ma_ (sef _yma_) i wahaniaethu rhyngddo â _ma’_ (sef _mae_), _gor’o’_ i wahaniaethu rhwng _gorfod_ a ffurf trydydd person unigol amser dibynnol presennol _gori_, a _pwysa’_ i wahaniaethu rhwng ffurf luosog _pwys_ a nifer o ffurfiau berfol posib _pwyso_.
Fodd bynnag, ceir eithriad i’r rheol hon, a hynny pan fo sillafu gair heb gollnod yn newid sŵn y llythyren cyn neu ar ôl y collnod, ac felly _Cymra’g_ sy’n gywir, nid _Cymrag_.
### Tagiau
Wrth drawsgrifio, defnyddiwyd y tagiau hyn i recordio elfennau oedd y tu hwnt i leferydd yr unigolion:
* \<anadlu>
* \<aneglur>
* \<cerddoriaeth>
* \<chwerthin>
* \<chwythu allan>
* \<distawrwydd>
* \<ochneidio>
* \<PII>
* \<peswch>
* \<twtian>
Rhagwelwn y bydd y rhestr hon yn chwyddo wrth i ni drawsgrifio mwy o leferydd ac wrth i ni daro ar draws mwy o elfennau sydd y tu hwnt i leferydd unigolion.
### Synau nad ydynt yn eiriol
Ymdrechwyd i drawsgrifio synau nad ydynt yn eiriol yn gyson. Er enghraifft, defnyddiwyd _yy_ bob tro (yn hytrach nag _yrr_, _yr_ neu _err_ neu gymysgedd o’r rheiny) i gynrychioli neu adlewyrchu’r sŵn a wnaethpwyd pan oedd siaradwr yn ceisio meddwl neu oedi wrth siarad.
Defnyddiwyd y canlynol wrth drawsgrifio:
* yy
* yym
* hmm
* m-hm
Eto, rhagwelwn y bydd y rhestr hon yn chwyddo wrth i ni drawsgrifio mwy o leferydd ac wrth i ni daro ar draws mwy o synau nad ydynt yn eiriol.
### Geiriau Saesneg
Rydym wedi amgylchynu bob gair neu ymadrodd Saesneg gyda sêr, er enghraifft:
> Dwi’n deall **\*sort of\***.
### Cymreigio berfenwau
Pan fo siaradwyr yn defnyddio geiriau Saesneg fel berfenwau (trwy ychwanegu _io_ ar ddiwedd y gair er enghraifft) rydym wedi ymdrechu i sillafu’r gair gan ddefnyddio confensiynau sillafu Cymreig yn hytrach nag ychwanegu _io_ at sillafiad Saesneg o’r gair. Er enghraifft rydym wedi trawsgrifio _heitio_ yn hytrach na _hateio_, a _lyfio_ yn hytrach na _loveio_.
### Cywiro cam-siarad
I sicrhau ein bod ni’n glynu at egwyddorion trawsgrifio verbatim penderfynwyd na ddylem gywiro cam-siarad neu gam-ynganu siaradwyr. Er enghraifft, yn y frawddeg ganlynol:
> enfawr fel y diffyg o fwyd yym **efallu** cam-drin
mae'n amlwg mai’r gair _efallai_ sydd dan sylw mewn gwirionedd, ond fe’i trawsgrifiwyd fel ei glywir.
### Atalnodi
Defnyddiwyd atalnodau llawn, marciau cwestiwn ac ebychnodau wrth drawsgrifio’r lleferydd.
Rydym wedi amgylchynu bob gair neu ymadrodd sydd wedi ei dyfynnu gyda _”_, er enghraifft:
> Dywedodd hi **”Dwi’n mynd”** ond aeth hi ddim.
### Nodyn ynghylch ein defnydd o gomas
Gan mai confensiwn ysgrifenedig yw coma yn y bôn, ni ddefnyddiwyd comas cymaint wrth drawsgrifio. Byddai defnyddio coma lle y disgwylir i’w weld mewn testun ysgrifenedig ddim o reidrwydd wedi adlewyrchu lleferydd yr unigolyn. Dylid cadw hynny mewn cof wrth ddarllen y trawsgrifiadau.
### Sillafu llythrennau
Sillafwyd llythrennau unigol yn hytrach na thrawsgrifio’r llythrennau unigol yn unig.
Hynny yw, hyn sy’n gywir:
> Roedd ganddo **ow si di**
**ac nid:**
> Roedd ganddo **O C D**
**na chwaith:**
> Roedd ganddo **OCD**
### Rhifau
Trawsgrifiwyd rhifau fel geiriau yn hytrach na digidau, hynny yw hyn sy’n gywir:
> Y flwyddyn dwy fil ac ugain
**ac nid:**
> Y flwyddyn 2020
### Gorffen gair ar ei hanner
Marciwyd gair oedd wedi ei orffen ar ei hanner gyda `-`. Er enghraifft:
> Ma’n rhaid i mi **ca-** cael diod.
### Gorffen brawddeg ar ei hanner/ailddechrau brawddeg
Marciwyd brawddeg oedd wedi ei gorffen ar ei hanner gyda `...`. Er enghraifft:
> Ma’n rhaid i mi ca’l... Ma’ rhaid i mi brynu diod.
### Siaradwr yn torri ar draws siaradwr arall
Ceir yn y data llawer o enghreifftiau o siaradwr yn torri ar draws y prif leferydd gan ddefnyddio synau nad ydynt yn eiriol, geiriau neu ymadroddion (megis _m-hm_, _ie_, _ydi_, _yn union_ ac ati). Pan oedd y ddau siaradwr i'w clywed yn glir ac ar wahân, rhoddwyd `...` ar ddiwedd rhan gyntaf y lleferydd toredig, a `...` arall ar ddechrau ail ran y lleferydd toredig, fel yn yr enghraifft ganlynol:
> Ond y peth yw... M-hm. ...mae’r ddau yn wir
Pan nad oedd y ddau siaradwr i'w clywed yn glir ac ar wahân, fe hepgorwyd y lleferydd o’r data.
### Rhegfeydd
Dylid nodi ein bod ni heb hepgor rhegfeydd wrth drawsgrifio.
## Y Dyfodol
Wrth ddefnyddio’r banc trawsgrifiadau dylid cadw mewn cof mai fersiwn cychwynnol ydyw. Bwriadwn fireinio a chysoni ein trawsgrifiadau ymhellach, ac ychwanegu mwy fyth o drawsgrifiadau i’r banc yn rheolaidd dros y flwyddyn nesaf.
## Cyfyngiadau
Er mwyn parchu'r cyfrannwyr, wrth lwytho'r data hwn i lawr rydych yn cytuno i beidio â cheisio adnabod y siaradwyr yn y data.
## Diolchiadau
Diolchwn i'r cyfrannwyr am eu caniatâd i ddefnyddio'u lleferydd. Rydym hefyd yn ddiolchgar i Lywodraeth Cymru am ariannu’r gwaith hwn fel rhan o broject Technoleg Testun, Lleferydd a Chyfieithu ar gyfer yr Iaith Gymraeg.
# Bangor Transcription Bank
This resource is a bank of 25 hours 34 minutes and 24 seconds of segments of natural speech from over 50 contributors in mp3 file format, together with corresponding 'verbatim' transcripts of the speech in .tsv file format. The majority of the speech is spontaneous, natural speech. We distribute this material under a CC0 open license.
## Purpose
The purpose of these transcripts is to act as training data for speech recognition models, including [our wav2vec models](https://github.com/techiaith/docker-wav2vec2-cy). For that purpose, the transcriptions need to be more verbatim than those seen in traditional transcriptions or required for subtitling, so a bespoke set of conventions was developed for the transcription work ([see below](#transcription_conventions)). Our wav2vec models use an auxiliary component, namely a 'language model', after the speech recognition model to further standardize its output so that it is more similar to traditional transcriptions and subtitles.
We have provided 3 .tsv files, namely clips.tsv, train.tsv and test.tsv. clips.tsv contains all of our transcripts. train.tsv and test.tsv were created to provide 'standard' sets that allow users to compare models trained by different trainers fairly, i.e. they were created as a 'benchmark'. train.tsv contains 80% of our transcripts, and test.tsv contains the remaining 20%.
Here is an example of the data content:
```
audio_filename audio_filesize transcript duration
f86a046fd0964e0386d8c1363907183d.mp3 898272 *post industrial* yym a gyda yy dwi'n ca'l deud 5092
f0c2310fdca34faaa83beca5fa7ed212.mp3 809720 sut i ymdopio felly, wedyn erbyn hyn mae o nôl yn y cartra 4590
3eec3feefe254c9790739c22dd63c089.mp3 1335392 Felly ma' hon hefyd yn ddogfen fydd yn trosglwyddo gyda'r plant bobol ifanc o un cam i'r llall ac hefyd erbyn hyn i'r coleg 'lly. 7570
```
There are four columns in the .tsv files. The first is the name of the audio file. The second is the size of the audio file. The transcript itself appears in the third column. The length of the audio clip appears in the last.
Here is the information about the columns.
| Field| Explanation |
| ------ | ------ |
| `audio_filename`| The name of the audio file within the 'clips' folder|
| `audio_filesize` | The size of the file |
| `transcript` | Transcript |
| `duration` | Duration of the clip in milliseconds. |
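As an illustrative sketch only (assuming the files have been downloaded locally; the filename `clips.tsv`, the tab separator, and the column names come from the description above), the transcripts can be inspected with pandas:
```python
import pandas as pd

# clips.tsv holds every transcript; train.tsv/test.tsv are the benchmark splits.
clips = pd.read_csv("clips.tsv", sep="\t")

# duration is given in milliseconds; convert the total to hours.
total_hours = clips["duration"].sum() / (1000 * 60 * 60)
print(f"{len(clips)} segments, {total_hours:.2f} hours of speech")
```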
## The Process of Creating the Resource
The audio files were mainly collected from Welsh podcasts, after having gained the consent of the podcast owners and individual contributors to do so. We are extremely grateful to those people. In addition, some scripts were created which mimicked the pattern of news items and articles. These scripts were then read by Language Technologies Unit researchers in order to ensure that content of that type was included in the bank.
The audio files were run through our in-house automated transcriber to segment the audio and create raw transcripts. Using Elan 6.4 (available from https://archive.mpi.nl/tla/elan), experienced transcribers listened to and corrected the raw transcript.
## A Note About Content Anonymization
Out of respect to the contributors, we have anonymised all transcripts. It was decided to anonymize not only the names of individual people, but also any other Personally Identifiable Information (PII) including, but not limited to:
* Phone number
* Job titles/occupations
* Workplaces
* Names of public places
* Geographical location
* Dates/times
When transcribing, all segments containing PII were marked with the \<PII> tag; we then filtered out all segments containing a \<PII> tag to ensure that no personal information was published as part of this resource.
We have also randomized the order of the segments so that they are not published in the order they appeared in the original audio files.
<a name="transcription_conventions"></a>
## Transcription Conventions
These transcription conventions were developed to ensure that the transcriptions were not only verbatim but also consistent. They were developed by referring to conventions used by the Unit in the past, conventions such as those used in the CorCenCC, Siarad, CIG1 and CIG2 corpora, and also through a process of ongoing development as the team undertook the task of transcription.
**NOTE** - as we have partially developed the conventions at the same time as undertaking the task of transcription, the early transcriptions may not follow the latest principles faithfully. We intend to check the transcripts after we have refined the conventions.
### Apostrophes
Apostrophes were not used to mark every single letter omitted by speakers. For example, _gwitho_ (which is a pronunciation of _gweithio_) is correct, not _gw’ith'o_.
Rather, apostrophes were used to distinguish between different words that were otherwise spelled identically. For example we use an apostrophe in front of _'ma_ (a pronunciation of _yma_) to distinguish it from _ma'_ (a pronunciation of _mae_), _gor'o'_ to distinguish between _gorfod_ and the third person singular form of the present dependent tense _gori_, and _pwysa'_ to distinguish between the plural form of _pwys_ and a number of possible verb forms of _pwyso_.
However, there is an exception to this rule, that being when spelling a word without an apostrophe would change the sound of the letter before or after the apostrophe, thus _Cymra'g_ is correct, not _Cymrag_.
### Tags
When transcribing, these tags were used to record elements that were external to the speech of the individuals:
* \<anadlu>
* \<aneglur>
* \<cerddoriaeth>
* \<chwerthin>
* \<chwythu allan>
* \<distawrwydd>
* \<ochneidio>
* \<PII>
* \<peswch>
* \<twtian>
We anticipate that this list will grow as we transcribe more speech and as we come across more elements that are external to the speech of individuals.
### Non-verbal sounds
Efforts were made to transcribe non-verbal sounds consistently. For example, _yy_ was always used (rather than _yrr_, _yr_ or _err_, or a mixture of those) to represent or reflect the sound made when a speaker was trying to think or paused in speaking.
The following were used in transcription:
* yy
* yym
* hmm
* m-hm
Again, we anticipate that this list will grow as we transcribe more speech and as we encounter more non-verbal sounds.
### English words
We have surrounded each English word or phrase with asterisks, for example:
> Dwi’n deall **\*sort of\***.
### Adapting English words as Welsh language infinitives
When speakers use English words as infinitives (by adding _io_ at the end of the word for example) we have endeavoured to spell the word using Welsh spelling conventions rather than adding _io_ to the English spelling of the word. For example we have transcribed _heitio_ instead of _hateio_, and _lyfio_ instead of _loveio_.
### Correction of mis-pronunciations
To ensure that we adhere to the principles of verbatim transcription it was decided that we should not correct speakers' mis-pronunciations. For example, in the following sentence:
> enfawr fel y diffyg o fwyd yym **efallu** cam-drin
it is clear that _efallai_ is the intended word, but it is transcribed as it is heard.
### Punctuation
Full stops, question marks and exclamation marks were used when transcribing the speech.
We have surrounded all quoted words or phrases with _”_, for example:
> Dywedodd hi **”Dwi’n mynd”** ond aeth hi ddim.
### A note about our use of commas
As a comma is essentially a convention used for written text, commas were not used prolifically in transcription. Using a comma wherever one would expect to see it in a written text would not necessarily have reflected the individual's speech. This should be borne in mind when reading the transcripts.
### Individual letters
Individual letters were written out as they are pronounced rather than transcribed as the letters themselves.
That is, this is correct:
> Roedd ganddo **ow si di**
**not:**
> Roedd ganddo **O C D**
**nor:**
> Roedd ganddo **OCD**
### Numbers
Numbers were transcribed as words rather than digits, thus this is correct:
> Y flwyddyn dwy fil ac ugain
**rather than:**
> Y flwyddyn 2020
### Half-finished words
Half-finished words are marked with a `-`. For example:
> Ma’n rhaid i mi **ca-** cael diod.
### Half-finished/restarted sentences
Half-finished sentences are marked with a `...`. For example:
> Ma’n rhaid i mi ca’l... Ma’ rhaid i mi brynu diod.
### Speaker interruptions
There are many examples of a speaker interrupting another speaker by using non-verbal sounds, words or phrases (such as _m-hm_, _ie_, _ydi_, _yn union_ etc.) in the data. When the two speakers could be heard clearly and distinctly, a `...` was placed at the end of the first part of the broken speech, and another `...` at the beginning of the second part of the broken speech, as in the following example:
> Ond y peth yw... M-hm. ...mae’r ddau yn wir
When the two speakers could not be heard clearly and distinctly, the speech was omitted from the data.
### Swearwords
It should be noted that we have not omitted swearwords when transcribing.
## The future
That this is an initial version of the transcript bank should be borne in mind when using this resource. We intend to refine and harmonize our transcripts further, and add yet more transcripts to the bank regularly over the next year.
## Restrictions
In order to respect the contributors, by downloading this data you agree not to attempt to identify the speakers in the data.
## Acknowledgements
We thank the contributors for their permission to use their speech. We are also grateful to the Welsh Government for funding this work as part of the Text, Speech and Translation Technology project for the Welsh Language.
| 19,638 | [
[
-0.03924560546875,
-0.0343017578125,
0.044952392578125,
0.0298004150390625,
-0.052215576171875,
-0.02105712890625,
0.0187530517578125,
-0.04742431640625,
0.08721923828125,
0.01511383056640625,
-0.051910400390625,
-0.0384521484375,
-0.044097900390625,
0.02844... | |
gn03249822/insulin_pen_dataset | 2023-10-19T05:07:56.000Z | [
"region:us"
] | gn03249822 | null | null | 0 | 20 | 2023-10-19T05:01:46 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': 諾胰保
'1': 諾胰得
splits:
- name: train
num_bytes: 287496503.26086956
num_examples: 117
- name: test
num_bytes: 54706739.739130445
num_examples: 21
download_size: 342220863
dataset_size: 342203243.0
---
# Dataset Card for "insulin_pen_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 670 | [
[
-0.02325439453125,
-0.01410675048828125,
0.027374267578125,
0.00720977783203125,
-0.0081329345703125,
-0.00010198354721069336,
0.009674072265625,
-0.0196380615234375,
0.061798095703125,
0.0234222412109375,
-0.0360107421875,
-0.055999755859375,
-0.047698974609375... |
rkdeva/DermnetSkinData-Test12 | 2023-10-19T06:03:46.000Z | [
"region:us"
] | rkdeva | null | null | 0 | 20 | 2023-10-19T06:00:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 376841600.824
num_examples: 3937
download_size: 370136671
dataset_size: 376841600.824
---
# Dataset Card for "DermnetSkinData-Test12"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 498 | [
[
-0.03643798828125,
-0.0089263916015625,
0.0044403076171875,
0.01490020751953125,
-0.0138092041015625,
0.0016679763793945312,
0.0185394287109375,
-0.00536346435546875,
0.059600830078125,
0.0198211669921875,
-0.07354736328125,
-0.052703857421875,
-0.04083251953125... |
huangyt/FINETUNE13 | 2023-10-21T06:50:43.000Z | [
"region:us"
] | huangyt | null | null | 0 | 20 | 2023-10-21T06:49:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
davidprinz/calltaker-dataset | 2023-10-28T16:31:32.000Z | [
"region:us"
] | davidprinz | null | null | 0 | 20 | 2023-10-21T13:39:51 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_examples: 52
configs:
- config_name: default
data_files:
- split: train
path: train-*
---
# Internal training dataset
This is an internal training dataset.
[
-0.00550079345703125,
0.0179290771484375,
-0.01849365234375,
0.019927978515625,
-0.01519012451171875,
0.0239715576171875,
-0.00006794929504394531,
0.0102691650390625,
-0.02587890625,
0.0222625732421875,
-0.0400390625,
-0.0227508544921875,
-0.0235443115234375,
... |
kheopsai/codecivil | 2023-10-22T17:22:35.000Z | [
"region:us"
] | kheopsai | null | null | 0 | 20 | 2023-10-22T17:22:08 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
jjonhwa/SECOND_RETRIEVE_PROCESSED_150 | 2023-10-23T09:55:51.000Z | [
"region:us"
] | jjonhwa | null | null | 0 | 20 | 2023-10-23T09:55:41 | ---
dataset_info:
features:
- name: ctxs
list:
- name: score
dtype: float64
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 143544172
num_examples: 30979
download_size: 69158772
dataset_size: 143544172
---
# Dataset Card for "SECOND_RETRIEVE_PROCESSED_150"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 481 | [
[
-0.031219482421875,
-0.024383544921875,
0.012420654296875,
0.0230255126953125,
-0.019378662109375,
-0.0039825439453125,
0.0147705078125,
-0.017120361328125,
0.057830810546875,
0.040130615234375,
-0.06414794921875,
-0.030609130859375,
-0.05194091796875,
-0.01... |
zwhe99/FireAct | 2023-10-24T06:57:02.000Z | [
"region:us"
] | zwhe99 | null | null | 0 | 20 | 2023-10-24T06:55:06 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
Ka4on/ultrasound_train | 2023-10-25T20:08:16.000Z | [
"region:us"
] | Ka4on | null | null | 0 | 20 | 2023-10-25T19:58:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
ostapeno/qa-openai_batched_icl0_clen512_maxD-1_maxC2500_0 | 2023-10-26T11:18:35.000Z | [
"region:us"
] | ostapeno | null | null | 0 | 20 | 2023-10-26T11:18:22 | ## model_setting: openai_batched
## max_context_length: 512
## max_tokens_instruction: 2048
## max_tokens_response: 1024
## top_p: 0.9
## num_iterations: 1
## temperature: 0.7
## max_documents_per_subject: -1
## max_contexts_per_subject: 2500
## icl_examples: 0
## icl_dataset: lukaemon/mmlu
## icl_split: validation
## icl_use_options: True
| 342 | [
[
-0.042083740234375,
-0.02545166015625,
0.0092926025390625,
0.04541015625,
-0.04241943359375,
-0.0213165283203125,
-0.00826263427734375,
0.006015777587890625,
-0.005100250244140625,
0.0236358642578125,
-0.049041748046875,
-0.0423583984375,
-0.042510986328125,
... |
tahrirchi/uz-books | 2023-10-28T19:11:13.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:uz",
"license:apache-2.0",
"uz",
"books",
"region:us"
... | tahrirchi | null | null | 6 | 20 | 2023-10-27T16:35:16 | ---
configs:
- config_name: default
data_files:
- split: original
path: data/original-*
- split: lat
path: data/lat-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: original
num_bytes: 19244856855
num_examples: 39712
- name: lat
num_bytes: 13705512346
num_examples: 39712
download_size: 16984559355
dataset_size: 32950369201
annotations_creators:
- no-annotation
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
multilinguality:
- monolingual
language:
- uz
size_categories:
- 10M<n<100M
pretty_name: UzBooks
license: apache-2.0
tags:
- uz
- books
---
# Dataset Card for UzBooks
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://tahrirchi.uz/grammatika-tekshiruvi](https://tahrirchi.uz/grammatika-tekshiruvi)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 16.98 GB
- **Size of the generated dataset:** 32.95 GB
- **Total amount of disk used:** 49.93 GB
### Dataset Summary
In an effort to democratize research on low-resource languages, we release the UzBooks dataset, a cleaned book corpus of nearly 40,000 books in the Uzbek language, divided into two branches: "original" and "lat", representing the OCRed (Latin and Cyrillic) and fully Latin versions of the texts, respectively.
Please refer to our [blogpost](https://tahrirchi.uz/grammatika-tekshiruvi) and paper (Coming soon!) for further details.
To load and use the dataset, run this script:
```python
from datasets import load_dataset

uz_books = load_dataset("tahrirchi/uz-books")
```
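Since the full download is roughly 17 GB, streaming may be preferable for quick exploration; a minimal sketch using the `datasets` streaming mode (split names as in the YAML above):
```python
from datasets import load_dataset

# Iterate over the fully-Latin branch without downloading every shard first.
uz_books_lat = load_dataset("tahrirchi/uz-books", split="lat", streaming=True)

for i, book in enumerate(uz_books_lat):
    print(book["text"][:100])  # first 100 characters of each book
    if i == 2:
        break
```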
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 16.98 GB
- **Size of the generated dataset:** 32.95 GB
- **Total amount of disk used:** 49.93 GB
An example from the 'original' split looks as follows.
```
{
"text": "Hamsa\nAlisher Navoiy ..."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `text`: a `string` feature that contains text of the books.
### Data Splits
| name | |
|-----------------|--------:|
| original | 39712 |
| lat | 39712 |
## Dataset Creation
The books were crawled from various internet sources and preprocessed using the [Tesseract OCR Engine](https://github.com/tesseract-ocr/tesseract). The Latin version was created by converting the original dataset with highly curated scripts, in order to support research and development in the field.
## Citation
Please cite this dataset using the following format:
```
@online{Mamasaidov2023UzBooks,
author = {Mukhammadsaid Mamasaidov and Abror Shopulatov},
title = {UzBooks dataset},
year = {2023},
url = {https://huggingface.co/datasets/tahrirchi/uz-books},
note = {Accessed: 2023-10-28}, % change this date
urldate = {2023-10-28} % change this date
}
```
## Gratitude
We are thankful to these awesome organizations and people for helping to make it happen:
- [Ilya Gusev](https://github.com/IlyaGusev/): for advice throughout the process
- [David Dale](https://daviddale.ru): for advice throughout the process
## Contacts
We believe that this work will enable and inspire all enthusiasts around the world to uncover the hidden beauty of low-resource languages, in particular Uzbek.
For further development of, and issues with, the dataset, please contact m.mamasaidov@tahrirchi.uz or a.shopolatov@tahrirchi.uz.
[
-0.013916015625,
-0.004650115966796875,
-0.0019378662109375,
-0.00897979736328125,
-0.03192138671875,
0.0037059783935546875,
-0.0131072998046875,
-0.0276336669921875,
0.005336761474609375,
0.0499267578125,
-0.047271728515625,
-0.0665283203125,
-0.015655517578125... |
stsudharsan/veshti-controlnet | 2023-10-29T13:58:14.000Z | [
"region:us"
] | stsudharsan | null | null | 0 | 20 | 2023-10-29T13:58:09 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conditioning_img
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 14599706.0
num_examples: 143
download_size: 13484309
dataset_size: 14599706.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "veshti-controlnet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 531 | [
[
-0.0300750732421875,
-0.0177001953125,
0.0008792877197265625,
0.0112457275390625,
-0.015533447265625,
0.0111541748046875,
0.0197296142578125,
-0.00457000732421875,
0.07257080078125,
0.044036865234375,
-0.0592041015625,
-0.047607421875,
-0.03448486328125,
-0.... |
Jackmin108/cult-de-small | 2023-10-30T15:49:39.000Z | [
"license:apache-2.0",
"region:us"
] | Jackmin108 | null | null | 0 | 20 | 2023-10-30T15:46:46 | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path:
- data/train-0000.parquet
- data/train-0001.parquet
- data/train-0002.parquet
- data/train-0003.parquet
- data/train-0004.parquet
- data/train-0005.parquet
- data/train-0006.parquet
- data/train-0007.parquet
- split: validation
path:
- data/validation-0000.parquet
- data/validation-0001.parquet
- data/validation-0002.parquet
- data/validation-0003.parquet
- data/validation-0004.parquet
- data/validation-0005.parquet
- data/validation-0006.parquet
- data/validation-0007.parquet
---
Hello
| 776 | [
[
-0.0313720703125,
-0.049102783203125,
0.041259765625,
-0.0159149169921875,
-0.0120849609375,
0.01277923583984375,
0.052825927734375,
-0.04278564453125,
0.068603515625,
0.05682373046875,
-0.035552978515625,
-0.017730712890625,
-0.052825927734375,
0.0165710449... |
kat33/test-fun-chunk32 | 2023-10-31T02:09:03.000Z | [
"region:us"
] | kat33 | null | null | 0 | 20 | 2023-10-31T00:51:19 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sayan1101/identity_finetune_data | 2023-10-31T16:46:15.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 20 | 2023-10-31T12:27:39 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 384598
num_examples: 1181
- name: test
num_bytes: 68966
num_examples: 209
download_size: 219586
dataset_size: 453564
---
# Dataset Card for "identity_finetune_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 582 | [
[
-0.04119873046875,
-0.0246734619140625,
0.007541656494140625,
0.0024242401123046875,
-0.0150604248046875,
-0.005855560302734375,
0.0174102783203125,
-0.011505126953125,
0.05255126953125,
0.0275726318359375,
-0.05291748046875,
-0.04864501953125,
-0.03265380859375... |
ScalableMath/rm_data | 2023-11-02T08:21:59.000Z | [
"region:us"
] | ScalableMath | null | null | 0 | 20 | 2023-11-02T08:20:25 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tau/fs | 2022-02-03T16:05:28.000Z | [
"region:us"
] | tau | null | null | 0 | 19 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
hackathon-pln-es/poems-es | 2022-03-27T18:39:08.000Z | [
"license:wtfpl",
"region:us"
] | hackathon-pln-es | null | null | 4 | 19 | 2022-03-21T18:36:23 | ---
license: wtfpl
---
Dataset downloaded from kaggle.com.
The original file contained information in English, which was subsequently translated for use.
The dataset contains the following columns:
- Autor: the author of the poem.
- Contenido: the full text of the poem.
- Nombre del poema: the title of the poem.
- Años: the period in which the poem was written.
- Tipo: the type to which the poem belongs.
[
-0.0200347900390625,
-0.0212554931640625,
-0.0019044876098632812,
0.03326416015625,
-0.039520263671875,
-0.0001131296157836914,
-0.0101470947265625,
-0.02532958984375,
0.0343017578125,
0.05645751953125,
-0.05828857421875,
-0.0595703125,
-0.04473876953125,
0.... |
roman_urdu_hate_speech | 2023-01-25T15:03:53.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ur",
"license:mit",
"binary classification",
"... | null | The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold-standard for two sub-tasks. First sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). These labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. Second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold-standards is to enable the researchers to evaluate the hate speech detection approaches on both easier (coarse-grained) and challenging (fine-grained) scenarios. \ | @inproceedings{rizwan2020hate,
title={Hate-speech and offensive language detection in roman Urdu},
author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={2512--2522},
year={2020}
} | 1 | 19 | 2022-03-25T15:51:45 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ur
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: roman_urdu_hate_speech
tags:
- binary classification
dataset_info:
- config_name: Coarse_Grained
features:
- name: tweet
dtype: string
- name: label
dtype:
class_label:
names:
'0': Abusive/Offensive
'1': Normal
splits:
- name: train
num_bytes: 725719
num_examples: 7208
- name: test
num_bytes: 218087
num_examples: 2002
- name: validation
num_bytes: 79759
num_examples: 800
download_size: 927937
dataset_size: 1023565
- config_name: Fine_Grained
features:
- name: tweet
dtype: string
- name: label
dtype:
class_label:
names:
'0': Abusive/Offensive
'1': Normal
'2': Religious Hate
'3': Sexism
'4': Profane/Untargeted
splits:
- name: train
num_bytes: 723670
num_examples: 7208
- name: test
num_bytes: 219359
num_examples: 2002
- name: validation
num_bytes: 723670
num_examples: 7208
download_size: 1519423
dataset_size: 1666699
---
# Dataset Card for roman_urdu_hate_speech
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [roman_urdu_hate_speech homepage](https://aclanthology.org/2020.emnlp-main.197/)
- **Repository:** [roman_urdu_hate_speech repository](https://github.com/haroonshakeel/roman_urdu_hate_speech)
- **Paper:** [Hate-Speech and Offensive Language Detection in Roman Urdu](https://aclanthology.org/2020.emnlp-main.197.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** [M. Haroon Shakeel](mailto:m.shakeel@lums.edu.pk)
### Dataset Summary
The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a Roman Urdu dataset of tweets annotated by experts in the relevant language. The authors develop the gold-standard for two sub-tasks. First sub-task is based on binary labels of Hate-Offensive content and Normal content (i.e., inoffensive language). These labels are self-explanatory. The authors refer to this sub-task as coarse-grained classification. Second sub-task defines Hate-Offensive content with four labels at a granular level. These labels are the most relevant for the demographic of users who converse in RU and are defined in related literature. The authors refer to this sub-task as fine-grained classification. The objective behind creating two gold-standards is to enable the researchers to evaluate the hate speech detection approaches on both easier (coarse-grained) and challenging (fine-grained) scenarios.
### Supported Tasks and Leaderboards
- 'multi-class-classification', 'text-classification-other-binary classification': The dataset can be used for both multi-class and binary classification, as it contains both coarse-grained and fine-grained labels.
### Languages
The text of this dataset is Roman Urdu. The associated BCP-47 code is 'ur'.
## Dataset Structure
### Data Instances
The dataset consists of two configurations: coarse-grained and fine-grained examples. In the coarse-grained configuration, tweets are labelled as abusive or normal, whereas the fine-grained configuration distinguishes several classes of hate associated with a tweet.
For the coarse-grained segment of the dataset, the label mapping is:

Task 1: Coarse-grained classification labels
- 0: Abusive/Offensive
- 1: Normal

For the fine-grained segment of the dataset, the label mapping is:

Task 2: Fine-grained classification labels
- 0: Abusive/Offensive
- 1: Normal
- 2: Religious Hate
- 3: Sexism
- 4: Profane/Untargeted
An example from Roman Urdu Hate Speech looks as follows:
```
{
'tweet': 'there are some yahodi daboo like imran chore zakat khore',
'label': 0
}
```
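Both gold standards are exposed as separate configurations; a minimal loading sketch (configuration names as in the YAML above):
```python
from datasets import load_dataset

coarse = load_dataset("roman_urdu_hate_speech", "Coarse_Grained")
fine = load_dataset("roman_urdu_hate_speech", "Fine_Grained")

# Map the integer label of the first training tweet back to its name.
label_feature = fine["train"].features["label"]
print(label_feature.int2str(fine["train"][0]["label"]))
```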
### Data Fields
- `tweet`: a string denoting the tweet; 10000 tweets were selected by random sampling from a base of 50000 tweets and annotated for the dataset.
- `label`: an annotation manually assigned by three independent annotators; during the annotation process, all conflicts were resolved by a majority vote among the three annotators.
### Data Splits
The data of each segment, coarse-grained and fine-grained, is further split into training, validation, and test sets, using a 70/20/10 split ratio with stratification based on the fine-grained labels.
The use of stratified sampling is deemed necessary to preserve the same label ratios across all splits.
The final split sizes are as follows:

| Train | Valid | Test |
|------:|------:|-----:|
| 7209 | 2003 | 801 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, Asim Karim during work done at Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Roman Urdu Hate Speech Dataset Repository](https://github.com/haroonshakeel/roman_urdu_hate_speech) which is under MIT License.
### Citation Information
```bibtex
@inproceedings{rizwan2020hate,
title={Hate-speech and offensive language detection in roman Urdu},
author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
pages={2512--2522},
year={2020}
}
```
### Contributions
Thanks to [@bp-high](https://github.com/bp-high), for adding this dataset. | 7,400 | [
[
-0.03021240234375,
-0.055328369140625,
-0.0135040283203125,
0.0182647705078125,
-0.01277923583984375,
0.01397705078125,
-0.0310821533203125,
-0.0322265625,
0.01413726806640625,
0.0274200439453125,
-0.0253448486328125,
-0.06829833984375,
-0.0687255859375,
0.0... |
Anon126/my-raft-submission | 2022-05-01T10:50:18.000Z | [
"benchmark:raft",
"region:us"
] | Anon126 | @InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
} | 0 | 19 | 2022-05-01T10:48:53 | ---
benchmark: raft
type: prediction
submission_name: none
---
# RAFT submissions for my-raft-submission
## Submitting to the leaderboard
To make a submission to the [leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard), there are three main steps:
1. Generate predictions on the unlabeled test set of each task
2. Validate the predictions are compatible with the evaluation framework
3. Push the predictions to the Hub!
See the instructions below for more details.
### Rules
1. To prevent overfitting to the public leaderboard, we only evaluate **one submission per week**. You can push predictions to the Hub as many times as you wish, but we will only evaluate the most recent commit in a given week.
2. Transfer or meta-learning using other datasets, including further pre-training on other corpora, is allowed.
3. Use of unlabeled test data is allowed, as it is always available in the applied setting. For example, further pre-training using the unlabeled data for a task would be permitted.
4. Systems may be augmented with information retrieved from the internet, e.g. via automated web searches.
### Submission file format
For each task in RAFT, you should create a CSV file called `predictions.csv` with your model's predictions on the unlabeled test set. Each file should have exactly 2 columns:
* ID (int)
* Label (string)
See the dummy predictions in the `data` folder for examples with the expected format. Here is a simple example that creates a majority-class baseline:
```python
from pathlib import Path
import pandas as pd
from collections import Counter
from datasets import load_dataset, get_dataset_config_names
tasks = get_dataset_config_names("ought/raft")
for task in tasks:
# Load dataset
raft_subset = load_dataset("ought/raft", task)
# Compute majority class over training set
counter = Counter(raft_subset["train"]["Label"])
majority_class = counter.most_common(1)[0][0]
# Load predictions file
preds = pd.read_csv(f"data/{task}/predictions.csv")
# Convert label IDs to label names
preds["Label"] = raft_subset["train"].features["Label"].int2str(majority_class)
# Save predictions
preds.to_csv(f"data/{task}/predictions.csv", index=False)
```
As you can see in the example, each `predictions.csv` file should be stored in the task's subfolder in `data` and at the end you should have something like the following:
```
data
├── ade_corpus_v2
│ ├── predictions.csv
│ └── task.json
├── banking_77
│ ├── predictions.csv
│ └── task.json
├── neurips_impact_statement_risks
│ ├── predictions.csv
│ └── task.json
├── one_stop_english
│ ├── predictions.csv
│ └── task.json
├── overruling
│ ├── predictions.csv
│ └── task.json
├── semiconductor_org_types
│ ├── predictions.csv
│ └── task.json
├── systematic_review_inclusion
│ ├── predictions.csv
│ └── task.json
├── tai_safety_research
│ ├── predictions.csv
│ └── task.json
├── terms_of_service
│ ├── predictions.csv
│ └── task.json
├── tweet_eval_hate
│ ├── predictions.csv
│ └── task.json
└── twitter_complaints
├── predictions.csv
└── task.json
```
### Validate your submission
To ensure that your submission files are correctly formatted, run the following command from the root of the repository:
```
python cli.py validate
```
If everything is correct, you should see the following message:
```
All submission files validated! ✨ 🚀 ✨
Now you can make a submission 🤗
```
### Push your submission to the Hugging Face Hub!
The final step is to commit your files and push them to the Hub:
```
python cli.py submit
```
If there are no errors, you should see the following message:
```
Submission successful! 🎉 🥳 🎉
Your submission will be evaulated on Sunday 05 September 2021 ⏳
```
where the evaluation is run every Sunday and your results will be visible on the leaderboard. | 3,878 | [
[
-0.0309906005859375,
-0.039947509765625,
0.016998291015625,
0.037628173828125,
-0.0023632049560546875,
-0.01087188720703125,
-0.024139404296875,
-0.016143798828125,
0.0256500244140625,
0.03265380859375,
-0.0487060546875,
-0.0474853515625,
-0.045166015625,
0.... | |
strombergnlp/itu_faroese_danish | 2022-07-01T15:43:48.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:da",
"language:fo",
"license:cc-by-4.0",
"arxiv:2206.08727",
"doi:10.57967/hf/0515",
"region:us"
... | strombergnlp | \ | \ | 3 | 19 | 2022-05-11T17:11:24 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- da
- fo
license:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: ITU Faroese Danish parallel text
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:** [https://arxiv.org/abs/2206.08727](https://arxiv.org/abs/2206.08727)
- **Leaderboard:**
- **Point of Contact:** [Leon Derczynski](https://github.com/leondz)
### Dataset Summary
This is a native-speaker-generated parallel corpus of Faroese and Danish.
### Supported Tasks and Leaderboards
*
### Languages
* Danish
* Faroese
## Dataset Structure
### Data Instances
3995 parallel sentences
### Data Fields
* `id`: the sentence pair ID, `string`
* `origin`: the original sentence identifier text, `string`
* `fo`: the Faroese text, `string`
* `da`: the Danish text, `string`
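As a quick illustration of the fields above, a minimal loading sketch (assumption: the monolithic corpus is exposed as a single `train` split):
```python
from datasets import load_dataset

# Assumption: the monolithic corpus is exposed as a single "train" split.
ds = load_dataset("strombergnlp/itu_faroese_danish", split="train")

for row in ds.select(range(3)):
    print(row["id"], "|", row["fo"], "->", row["da"])
```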
### Data Splits
Monolithic
## Dataset Creation
### Curation Rationale
To gather a broad range of topics about the Faroes and the rest of the world, so as to enable a general-purpose Faroese:Danish translation system
### Source Data
#### Initial Data Collection and Normalization
* EUROparl Danish
* Dimmaletting, Faroese newspaper
* Tatoeba Danish / Faroese
#### Who are the source language producers?
### Annotations
#### Annotation process
No annotations
#### Who are the annotators?
Two Faroese native speakers, one male and one female, in their 20s, with master's degrees, living in Denmark
### Personal and Sensitive Information
None due to the sources used
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
This collection of Faroese is curated by Leon Derczynski
### Licensing Information
Creative Commons Attribution 4.0
### Citation Information
```
``` | 3,002 | [
[
-0.038238525390625,
-0.04449462890625,
0.02880859375,
0.021728515625,
-0.022247314453125,
-0.0002865791320800781,
-0.03570556640625,
-0.0238189697265625,
0.042236328125,
0.042327880859375,
-0.047821044921875,
-0.08575439453125,
-0.039459228515625,
0.03469848... |
valurank/News_Articles_Categorization | 2023-08-27T05:49:31.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | valurank | null | null | 0 | 19 | 2022-05-25T21:46:45 | ---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-class-classification
---
# Dataset Card for News_Articles_Categorization
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
## Dataset Description
3,722 news articles classified into different categories, namely: World, Politics, Tech, Entertainment, Sport, Business, Health, and Science.
## Languages
The text in the dataset is in English
## Dataset Structure
The dataset consists of two columns, namely Text and Category.
The Text column contains the news article and the Category column contains the class each article belongs to.
## Source Data
The dataset is scraped from different news platforms.
| 857 | [
[
-0.0205535888671875,
-0.0265960693359375,
-0.01129150390625,
0.02447509765625,
-0.050933837890625,
0.03436279296875,
0.0024585723876953125,
-0.007320404052734375,
0.03363037109375,
0.0355224609375,
-0.03460693359375,
-0.07135009765625,
-0.033050537109375,
0.... |
lmqg/qg_koquad | 2022-12-02T18:53:42.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:squad_es",
"language:ko",
"license:cc-by-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | lmqg | [KorQuAD](https://huggingface.co/datasets/squad_kor_v1) dataset for question generation (QG) task. | @inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
} | 3 | 19 | 2022-06-02T23:42:21 | ---
license: cc-by-4.0
pretty_name: KorQuAD for question generation
language: ko
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: squad_es
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---
# Dataset Card for "lmqg/qg_koquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in
["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992).
This is a modified version of [KorQuAD](https://huggingface.co/datasets/squad_kor_v1) for the question generation (QG) task.
Since the original dataset only contains training/validation sets, we manually sampled a test set from the training set, which
has no paragraph overlap with the training set.
### Supported Tasks and Leaderboards
* `question-generation`: The dataset is assumed to be used to train a model for question generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail).
### Languages
Korean (ko)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"question": "함수해석학이 주목하는 탐구는?",
"paragraph": "변화에 대한 이해와 묘사는 자연과학에 있어서 일반적인 주제이며, 미적분학은 변화를 탐구하는 강력한 도구로서 발전되었다. 함수는 변화하는 양을 묘사함에 있어서 중추적인 개념으로써 떠오르게 된다. 실수와 실변수로 구성된 함수의 엄밀한 탐구가 실해석학이라는 분야로 알려지게 되었고, 복소수에 대한 이와 같은 탐구분야는 복소해석학이라고 한다. 함수해석학은 함수의 공간(특히 무한차원)의 탐구에 주목한다. 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.",
"answer": "함수의 공간(특히 무한차원)의 탐구",
"sentence": "함수해석학은 함수의 공간(특히 무한차원)의 탐구 에 주목한다.",
"paragraph_sentence": '변화에 대한 이해와 묘사는 자연과학에 있어서 일반적인 주제이며, 미적분학은 변화를 탐구하는 강력한 도구로서 발전되었다. 함수는 변화하는 양을 묘사함에 있어서 중추적인 개념으로써 떠오르게 된다. 실수와 실변수로 구성된 함수의 엄밀한 탐구가 실해석학이라는 분야로 알려지게 되었고, 복소수에 대한 이와 같은 탐구 분야는 복소해석학이라고 한다. <hl> 함수해석학은 함수의 공간(특히 무한차원)의 탐구 에 주목한다. <hl> 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.',
"paragraph_answer": '변화에 대한 이해와 묘사는 자연과학에 있어서 일반적인 주제이며, 미적분학은 변화를 탐구하는 강력한 도구로서 발전되었다. 함수는 변화하는 양을 묘사함에 있어서 중추적인 개념으로써 떠오르게 된다. 실수와 실변수로 구성된 함수의 엄밀한 탐구가 실해석학이라는 분야로 알려지게 되었고, 복소수에 대한 이와 같은 탐구 분야는 복소해석학이라고 한다. 함수해석학은 <hl> 함수의 공간(특히 무한차원)의 탐구 <hl>에 주목한다. 함수해석학의 많은 응용분야 중 하나가 양자역학이다. 많은 문제들이 자연스럽게 양과 그 양의 변화율의 관계로 귀착되고, 이러한 문제들이 미분방정식으로 다루어진다. 자연의 많은 현상들이 동역학계로 기술될 수 있다. 혼돈 이론은 이러한 예측 불가능한 현상을 탐구하는 데 상당한 기여를 한다.',
"sentence_answer": "함수해석학은 <hl> 함수의 공간(특히 무한차원)의 탐구 <hl> 에 주목한다."
}
```
The data fields are the same among all splits.
- `question`: a `string` feature.
- `paragraph`: a `string` feature.
- `answer`: a `string` feature.
- `sentence`: a `string` feature.
- `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`.
- `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`.
- `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`.
Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features is assumed to be used to train a question generation model,
but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and
the `paragraph_sentence` feature is for sentence-aware question generation.
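As a hedged illustration (not the paper's exact pipeline), a T5-style preprocessing sketch built from these fields; the `generate question:` prefix is an assumption:
```python
from datasets import load_dataset

# Hypothetical sketch of answer-aware QG preprocessing; the "generate question:"
# prefix is an illustrative T5-style prompt, not the paper's exact recipe.
ds = load_dataset("lmqg/qg_koquad", split="train")

example = ds[0]
model_input = "generate question: " + example["paragraph_answer"]  # answer span marked with <hl>
target = example["question"]
print(model_input[:80], "=>", target)
```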
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|54556| 5766 |5766 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration: {A} {U}nified {B}enchmark and {E}valuation",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | 4,660 | [
[
-0.0469970703125,
-0.0718994140625,
0.0278167724609375,
0.03692626953125,
-0.0235748291015625,
-0.013641357421875,
0.013946533203125,
-0.0094146728515625,
0.01837158203125,
0.02569580078125,
-0.042724609375,
-0.032379150390625,
-0.024993896484375,
0.01702880... |
BeIR/webis-touche2020-generated-queries | 2022-10-23T06:14:11.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | 0 | 19 | 2022-06-17T13:19:45 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
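For example, a minimal sketch using the `beir` package (assuming `pip install beir`; the download URLs per dataset are listed in the table below, with `scifact` used here):
```python
from beir import util
from beir.datasets.data_loader import GenericDataLoader

# Download and unzip one of the preprocessed datasets (scifact as an example).
url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip"
data_path = util.download_and_unzip(url, "datasets")

# corpus: {doc_id: {"title": ..., "text": ...}}, queries: {query_id: text},
# qrels: {query_id: {doc_id: relevance}}
corpus, queries, qrels = GenericDataLoader(data_path).load(split="test")
```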
### Supported Tasks and Leaderboards
The dataset supports a leaderboard that evaluates models against task-specific metrics such as F1 or EM, as well as their ability to retrieve supporting information from Wikipedia.
The current best performing models can be found [here](https://eval.ai/web/challenges/challenge-page/689/leaderboard/).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score` in this order. Keep the 1st row as a header. For example: `q1 doc1 1`
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `_id`: a `string` feature representing the query id
- `_id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website | BEIR-Name | Type | Queries | Corpus | Rel D/Q | Download | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. | 13,988 | [
[
-0.0396728515625,
-0.03985595703125,
0.010955810546875,
0.003665924072265625,
0.004230499267578125,
0.00008660554885864258,
-0.0081939697265625,
-0.018890380859375,
0.0216827392578125,
0.005954742431640625,
-0.034332275390625,
-0.0545654296875,
-0.02638244628906... |
mounikaiiith/Telugu_Sentiment | 2022-07-04T15:05:31.000Z | [
"license:cc-by-4.0",
"region:us"
] | mounikaiiith | null | null | 1 | 19 | 2022-06-19T12:06:15 | ---
license: cc-by-4.0
---
Please cite the following reference when using the dataset:
@article{marreddy2022resource,
  title={Am I a Resource-Poor Language? Data Sets, Embeddings, Models and Analysis for four different NLP tasks in Telugu Language},
  author={Marreddy, Mounika and Oota, Subba Reddy and Vakada, Lakshmi Sireesha and Chinni, Venkata Charan and Mamidi, Radhika},
  journal={Transactions on Asian and Low-Resource Language Information Processing},
  publisher={ACM New York, NY}
}
If you want to use the two classes (positive and negative) from the dataset, please also cite the reference below:
@article{marreddy2022multi,
title={Multi-Task Text Classification using Graph Convolutional Networks for Large-Scale Low Resource Language},
author={Marreddy, Mounika and Oota, Subba Reddy and Vakada, Lakshmi Sireesha and Chinni, Venkata Charan and Mamidi, Radhika},
journal={arXiv preprint arXiv:2205.01204},
year={2022}
}
| 921 | [
[
-0.018280029296875,
-0.028533935546875,
-0.005199432373046875,
0.0201263427734375,
-0.016326904296875,
-0.01418304443359375,
-0.019989013671875,
-0.017730712890625,
0.0196685791015625,
0.032440185546875,
-0.00904083251953125,
-0.018310546875,
-0.03338623046875,
... |
pscotti/naturalscenesdataset | 2023-10-31T20:02:10.000Z | [
"region:us"
] | pscotti | null | null | 5 | 19 | 2022-07-03T19:09:47 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
sepidmnorozy/Vietnamese_sentiment | 2022-08-16T12:16:05.000Z | [
"region:us"
] | sepidmnorozy | null | null | 1 | 19 | 2022-08-16T12:15:12 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
jakartaresearch/indo-movie-subtitle | 2022-08-16T13:20:23.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:id",
"license:cc-by-4.0",
"movie",
"subtitle",
"indonesian",
"r... | jakartaresearch | This dataset is built as a playground for analyzing text on movie subtitle | null | 1 | 19 | 2022-08-16T13:10:05 | ---
annotations_creators:
- no-annotation
language:
- id
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Indonesian Movie Subtitle
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- movie
- subtitle
- indonesian
task_categories:
- text-generation
task_ids:
- language-modeling
---
# Dataset Card for Indonesian Movie Subtitle
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. | 2,830 | [
[
-0.0301361083984375,
-0.0391845703125,
-0.01348876953125,
0.0124053955078125,
-0.041290283203125,
0.010162353515625,
-0.0137176513671875,
-0.018646240234375,
0.050079345703125,
0.06256103515625,
-0.058807373046875,
-0.056121826171875,
-0.054290771484375,
0.0... |
namban/ledgar_training | 2022-09-04T05:11:51.000Z | [
"region:us"
] | namban | null | null | 0 | 19 | 2022-09-04T04:43:24 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
truongpdd/vietnews-dataset | 2022-09-09T04:54:20.000Z | [
"region:us"
] | truongpdd | null | null | 1 | 19 | 2022-09-09T04:06:45 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.02880859375,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.0170135498046875,
-0.052093505859375,
-0.01497650146484375,
-0.0604248046875,
0.0379028... |
bigbio/pharmaconer | 2022-12-22T15:46:15.000Z | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | bigbio | PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to encargo-pln-life@bsc.es | @inproceedings{gonzalez2019pharmaconer,
title = "PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
author = "Gonzalez-Agirre, Aitor and
Marimon, Montserrat and
Intxaurrondo, Ander and
Rabal, Obdulia and
Villegas, Marta and
Krallinger, Martin",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5701",
doi = "10.18653/v1/D19-5701",
pages = "1--10",
} | 1 | 19 | 2022-11-13T22:11:24 |
---
language:
- es
bigbio_language:
- Spanish
license: cc-by-4.0
multilinguality: monolingual
bigbio_license_shortname: CC_BY_4p0
pretty_name: PharmaCoNER
homepage: https://temu.bsc.es/pharmaconer/index.php/datasets/
bigbio_pubmed: False
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
- TEXT_CLASSIFICATION
---
# Dataset Card for PharmaCoNER
## Dataset Description
- **Homepage:** https://temu.bsc.es/pharmaconer/index.php/datasets/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,TXTCLASS
### Subtrack 1
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to encargo-pln-life@bsc.es
SUBTRACK 1: NER offset and entity type classification
The first subtrack consists of the classical entity-based or instance-based evaluation that requires that system outputs match exactly the beginning and end locations of each entity tag, as well as match the entity annotation type of the gold standard annotations.
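To make the strict matching criterion concrete, a hedged sketch (a hypothetical helper, not the official evaluation script): a predicted entity counts only if its start offset, end offset, and type all match a gold annotation.
```python
def strict_ner_f1(gold, pred):
    """Entity-level precision/recall/F1 with exact offset + type matching.

    gold, pred: sets of (start, end, entity_type) tuples for one document.
    """
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

gold = {(10, 21, "PROTEINAS"), (42, 50, "NORMALIZABLES")}
pred = {(10, 21, "PROTEINAS"), (42, 51, "NORMALIZABLES")}  # off-by-one end -> not counted
print(strict_ner_f1(gold, pred))  # (0.5, 0.5, 0.5)
```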
### Subtrack 2
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to encargo-pln-life@bsc.es
SUBTRACK 2: CONCEPT INDEXING
In the second subtask, a list of unique SNOMED concept identifiers has to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances.
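Analogously, the concept-indexing evaluation reduces to comparing predicted and gold sets of SNOMED concept ids per document; a small sketch with dummy identifiers:
```python
def concept_indexing_scores(gold_ids, pred_ids):
    """Per-document precision/recall over sets of SNOMED concept identifiers."""
    tp = len(gold_ids & pred_ids)
    precision = tp / len(pred_ids) if pred_ids else 0.0
    recall = tp / len(gold_ids) if gold_ids else 0.0
    return precision, recall

# Dummy identifiers, for illustration only.
print(concept_indexing_scores({"111", "222"}, {"111"}))  # (1.0, 0.5)
```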
### Full Task
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to encargo-pln-life@bsc.es
SUBTRACK 1: NER offset and entity type classification
The first subtrack consists of the classical entity-based or instance-based evaluation that requires that system outputs match exactly the beginning and end locations of each entity tag, as well as match the entity annotation type of the gold standard annotations.
SUBTRACK 2: CONCEPT INDEXING
In the second subtask, a list of unique SNOMED concept identifiers has to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances.
## Citation Information
```
@inproceedings{gonzalez2019pharmaconer,
title = "PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
author = "Gonzalez-Agirre, Aitor and
Marimon, Montserrat and
Intxaurrondo, Ander and
Rabal, Obdulia and
Villegas, Marta and
Krallinger, Martin",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5701",
doi = "10.18653/v1/D19-5701",
pages = "1--10",
}
```
| 5,520 | [
[
-0.019317626953125,
-0.04119873046875,
0.03363037109375,
0.00244903564453125,
-0.029052734375,
0.00482177734375,
-0.0035915374755859375,
-0.046234130859375,
0.053558349609375,
0.033294677734375,
-0.036041259765625,
-0.053558349609375,
-0.065673828125,
0.0292... |
EddieChen372/vietnamese-wiki-segmented | 2022-11-22T12:45:58.000Z | [
"region:us"
] | EddieChen372 | null | null | 2 | 19 | 2022-11-22T12:43:48 | ---
dataset_info:
features:
- name: segmented_text
dtype: string
- name: segmented_title
dtype: string
splits:
- name: train
num_bytes: 1302569257
num_examples: 1273469
download_size: 604393096
dataset_size: 1302569257
---
# Dataset Card for "vietnamese-wiki-segmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 432 | [
[
-0.0390625,
-0.041168212890625,
0.021697998046875,
0.013824462890625,
-0.036529541015625,
-0.00357818603515625,
0.0151214599609375,
-0.0035076141357421875,
0.0560302734375,
0.05615234375,
-0.054351806640625,
-0.0655517578125,
-0.036163330078125,
0.0013208389... |
matchbench/Walmart-Amazon | 2022-12-12T05:38:13.000Z | [
"region:us"
] | matchbench | null | null | 0 | 19 | 2022-12-01T02:01:46 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
1aurent/ICDAR-2011 | 2023-09-23T18:58:09.000Z | [
"size_categories:1K<n<10K",
"license:unknown",
"online handwriting",
"offline handwriting",
"signature",
"verification",
"region:us"
] | 1aurent | null | null | 0 | 19 | 2022-12-01T21:08:23 | ---
license: unknown
size_categories:
- 1K<n<10K
tags:
- online handwriting
- offline handwriting
- signature
- verification
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': genuine
'1': forgeries
- name: forger
dtype: int32
- name: writer
dtype: uint32
- name: attempt
dtype: uint32
splits:
- name: train
num_bytes: 240159596.0
num_examples: 937
- name: test
num_bytes: 466376280.094
num_examples: 2534
download_size: 793149429
dataset_size: 706535876.094
---
# ICDAR 2011 Signature Verification Competition (SigComp2011)
http://iapr-tc11.org/mediawiki/index.php/ICDAR_2011_Signature_Verification_Competition_(SigComp2011)
The collection contains simultaneously acquired online and offline signature samples. The offline dataset comprises PNG images, scanned at 400 dpi, RGB color. The online dataset comprises ASCII files with the format: X, Y, Z (per line).
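A hedged sketch of a reader for the online files, assuming one whitespace- or comma-separated X, Y, Z triple per line (the file name below is hypothetical):
```python
def read_online_signature(path):
    """Parse an online signature file with one X, Y, Z triple per line."""
    points = []
    with open(path) as f:
        for line in f:
            parts = line.replace(",", " ").split()
            if len(parts) >= 3:
                x, y, z = (float(v) for v in parts[:3])
                points.append((x, y, z))  # channel semantics per the competition docs
    return points

# points = read_online_signature("online/writer001_signature01.txt")  # hypothetical path
```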
Marcus Liwicki, Michael Blumenstein, Elisa van den Heuvel, Charles E.H. Berger, Reinoud D. Stoel, Bryan Found, Xiaohong Chen, Muhammad Imran Malik. "SigComp11: Signature Verification Competition for On- and Offline Skilled Forgeries", Proc. 11th Int. Conference on Document Analysis and Recognition, 2011
| 1,487 | [
[
-0.022674560546875,
0.00006794929504394531,
0.01396942138671875,
-0.0034236907958984375,
-0.037109375,
0.042083740234375,
0.006618499755859375,
-0.06268310546875,
0.0188751220703125,
0.0310516357421875,
-0.035888671875,
-0.0248565673828125,
-0.056976318359375,
... |
Jzuluaga/atco2_corpus_1h | 2022-12-05T11:15:31.000Z | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:en",
"audio",
"automatic-speech-recognition",
"en-atc",
"en",
"noisy-speech-recognition",
"speech-recognition",
"arxiv:2211.04054",
"region:us"
] | Jzuluaga | null | null | 1 | 19 | 2022-12-05T10:37:25 | ---
dataset_info:
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: segment_start_time
dtype: float32
- name: segment_end_time
dtype: float32
- name: duration
dtype: float32
splits:
- name: test
num_bytes: 113872168.0
num_examples: 871
download_size: 113467762
dataset_size: 113872168.0
tags:
- audio
- automatic-speech-recognition
- en-atc
- en
- noisy-speech-recognition
- speech-recognition
task_categories:
- automatic-speech-recognition
language:
- en
multilinguality:
- monolingual
---
# Dataset Card for ATCO2 test set corpus (1hr set)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ATCO2 project homepage](https://www.atco2.org/)
- **Repository:** [ATCO2 corpus](https://github.com/idiap/atco2-corpus)
- **Paper:** [ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications](https://arxiv.org/abs/2211.04054)
### Dataset Summary
ATCO2 project aims at developing a unique platform allowing to collect, organize and pre-process air-traffic control (voice communication) data from air space. This project has received funding from the Clean Sky 2 Joint Undertaking (JU) under grant agreement No 864702. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and the Clean Sky 2 JU members other than the Union.
The project collected the real-time voice communication between air-traffic controllers and pilots available either directly through publicly accessible radio frequency channels or indirectly from air-navigation service providers (ANSPs). In addition to the voice communication data, contextual information is available in a form of metadata (i.e. surveillance data). The dataset consists of two distinct packages:
- A corpus of 5000+ hours (pseudo-transcribed) of air-traffic control speech collected across different airports (Sion, Bern, Zurich, etc.) in .wav format for speech recognition. Speaker distribution is 90/10% between males and females and the group contains native and non-native speakers of English.
- A corpus of 4 hours (transcribed) of air-traffic control speech collected across different airports (Sion, Bern, Zurich, etc.) in .wav format for speech recognition. Speaker distribution is 90/10% between males and females and the group contains native and non-native speakers of English. This corpus has been transcribed with orthographic information in XML format with speaker noise information, SNR values and others.
- A free sample of the 4 hours transcribed data is in [ATCO2 project homepage](https://www.atco2.org/data)
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [Wav2Vec 2.0 LARGE model](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc-and-atcosim).
### Languages and other details
The text and the recordings are in English. For more information see Table 3 and Table 4 of [ATCO2 corpus paper](https://arxiv.org/abs/2211.04054)
## Dataset Structure
### Data Fields
- `id (string)`: a unique string identifier for each example/recording.
- `audio (audio)`: audio data for the given ID
- `text (string)`: transcript of the file already normalized. Follow these repositories for more details [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0)
- `segment_end_time (float32)`: segment end time
- `duration (float32)`: duration of the recording, computed as segment_end_time - segment_start_time
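A minimal sketch that loads the corpus and sanity-checks these fields (assuming the dataset loads directly by its Hub id):
```python
from datasets import load_dataset

ds = load_dataset("Jzuluaga/atco2_corpus_1h", split="test")

sample = ds[0]
print(sample["id"], "|", sample["text"][:60])
# duration should equal the difference of the segment boundaries (up to float precision)
assert abs(sample["duration"] - (sample["segment_end_time"] - sample["segment_start_time"])) < 1e-3
```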
## Additional Information
### Licensing Information
The licensing status of the ATCO2-test-set-1h corpus is in the file **ATCO2-ASRdataset-v1_beta - End-User Data Agreement** in the data folder. Download the data in [ATCO2 project homepage](https://www.atco2.org/data)
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset to Hugging Face:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
| 5,749 | [
[
-0.0219879150390625,
-0.042205810546875,
0.0014848709106445312,
0.011138916015625,
-0.02203369140625,
0.0138702392578125,
-0.0304718017578125,
-0.053436279296875,
0.01568603515625,
0.0290985107421875,
-0.025238037109375,
-0.0419921875,
-0.041473388671875,
-0... |
crystina-z/mmarco-corpus | 2022-12-06T12:23:36.000Z | [
"region:us"
] | crystina-z | mMARCO translated datasets | @misc{bonifacio2021mmarco,
title={mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset},
author={Luiz Henrique Bonifacio and Israel Campiotti and Vitor Jeronymo and Hugo Queiroz Abonizio and Roberto Lotufo and Rodrigo Nogueira},
year={2021},
eprint={2108.13897},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | 0 | 19 | 2022-12-06T12:01:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Dahoas/sft-gptj-synthetic-prompt-responses | 2022-12-19T16:20:41.000Z | [
"region:us"
] | Dahoas | null | null | 0 | 19 | 2022-12-19T16:20:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Xpitfire/cmp_facade | 2023-01-15T01:43:17.000Z | [
"task_categories:image-segmentation",
"language:en",
"license:mit",
"building",
"facade",
"region:us"
] | Xpitfire | null | null | 1 | 19 | 2023-01-09T22:51:59 | ---
license: mit
task_categories:
- image-segmentation
language:
- en
tags:
- building
- facade
---
# CMP Facade Database
We present a dataset of facade images assembled at the Center for Machine Perception, which includes 606 rectified and manually annotated images of facades from various sources. The facades are from different cities around the world and cover diverse architectural styles.
## Documentation
Data origin, format and processing, and annotation principles for the 12 classes are specified in the report:
- facade
- molding
- cornice
- pillar
- window
- door
- sill
- blind
- balcony
- shop
- deco
- background
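For segmentation experiments it is convenient to carry this class list as an index-to-name map; a sketch with illustrative indices only, since the authoritative numbering is defined in the report:
```python
# Illustrative label map; verify the actual indices against the CMP report.
FACADE_CLASSES = [
    "facade", "molding", "cornice", "pillar", "window", "door",
    "sill", "blind", "balcony", "shop", "deco", "background",
]
LABEL2ID = {name: idx + 1 for idx, name in enumerate(FACADE_CLASSES)}  # 1-based ids assumed
ID2LABEL = {idx: name for name, idx in LABEL2ID.items()}
```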
## Link to original website
https://cmp.felk.cvut.cz/~tylecr1/facade/
## Citation
Please use the following reference to cite the dataset:
```latex
@INPROCEEDINGS{Tylecek13,
author = {Radim Tyle{\v c}ek and Radim {\v S}{\' a}ra},
title = {Spatial Pattern Templates for Recognition of Objects with Regular Structure},
booktitle = {Proc. GCPR},
year = {2013},
address = {Saarbrucken, Germany},
}
``` | 1,031 | [
[
-0.04534912109375,
-0.039764404296875,
0.035430908203125,
0.0125274658203125,
0.01348114013671875,
-0.015106201171875,
-0.0013418197631835938,
-0.0265045166015625,
-0.01045989990234375,
0.061798095703125,
-0.02850341796875,
-0.0849609375,
-0.0012674331665039062,... |
Cohere/wikipedia-22-12-ko-embeddings | 2023-03-22T16:55:35.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:ko",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | 2 | 19 | 2023-01-13T23:51:11 | ---
language:
- ko
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (ko) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ko)](https://ko.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ko-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | 3,803 | [
[
-0.050140380859375,
-0.0498046875,
0.01386260986328125,
0.001338958740234375,
-0.0136260986328125,
-0.00601959228515625,
-0.02301025390625,
-0.0179290771484375,
0.043487548828125,
-0.0014123916625976562,
-0.03759765625,
-0.0634765625,
-0.045501708984375,
0.0... |
Cohere/wikipedia-22-12-ar-embeddings | 2023-03-22T16:52:28.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | 2 | 19 | 2023-01-14T02:00:24 | ---
annotations_creators:
- expert-generated
language:
- ar
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# Wikipedia (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ar)](https://ar.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ar-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | 3,845 | [
[
-0.051971435546875,
-0.050323486328125,
0.01160430908203125,
0.000102996826171875,
-0.01247406005859375,
-0.006591796875,
-0.02215576171875,
-0.0182342529296875,
0.044158935546875,
-0.002323150634765625,
-0.0361328125,
-0.0625,
-0.046844482421875,
0.01554107... |
mwz/ursum | 2023-05-14T13:03:37.000Z | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:ur",
"license:mit",
"region:us"
] | mwz | null | null | 0 | 19 | 2023-01-14T09:24:32 | ---
license: mit
task_categories:
- summarization
- text-generation
- text2text-generation
language:
- ur
pretty_name: ursum
size_categories:
- 10K<n<100K
---
# Urdu Summarization
## Dataset Overview
The Urdu Summarization dataset contains news articles in the Urdu language along with their summaries. It comprises a total of 48,071 news articles collected from the BBC Urdu website; each article is labeled with its headline, summary, and full text.
## Dataset Details
The dataset contains the following columns:
- id (string): Unique identifier for each article
- url (string): URL for the original article
- title (string): Headline of the article
- summary (string): Summary of the article
- text (string): Full text of the article
The dataset is distributed under the MIT License.
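A minimal loading sketch (an illustration, not from the original card; it assumes the columns listed above and a standard `train` split):
```python
from datasets import load_dataset

# Load the training split and inspect the fields described above.
ds = load_dataset("mwz/ursum", split="train")
article = ds[0]
print(article["title"])      # headline of the article
print(article["summary"])    # human-written summary
print(len(article["text"]))  # length of the full article text
```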
## Data Collection
The data was collected from the BBC Urdu website using web scraping techniques. The articles were published between 2003 and 2020, covering a wide range of topics such as politics, sports, technology, and entertainment.
## Data Preprocessing
The text data was preprocessed to remove any HTML tags and non-Urdu characters. The summaries were created by human annotators, who read the full text of the articles and summarized the main points. The dataset was split into training, validation, and test sets, with 80%, 10%, and 10% of the data in each set respectively.
## Potential Use Cases
This dataset can be used for training and evaluating models for automatic summarization of Urdu text. It can also be used for research in natural language processing, machine learning, and information retrieval.
## Acknowledgements
We thank the BBC Urdu team for publishing the news articles on their website and making them publicly available. We also thank the human annotators who created the summaries for the articles.
## Relevant Papers
No papers have been published yet using this dataset.
## License
The dataset is distributed under the MIT License. | 1,964 | [
[
-0.0281829833984375,
-0.0029754638671875,
0.0017499923706054688,
0.05194091796875,
-0.036102294921875,
0.0123748779296875,
-0.00484466552734375,
-0.007598876953125,
0.01338958740234375,
0.046600341796875,
-0.025360107421875,
-0.055511474609375,
-0.05560302734375... |
jonathan-roberts1/WHU-RS19 | 2023-03-26T11:22:05.000Z | [
"license:cc-by-4.0",
"region:us"
] | jonathan-roberts1 | null | null | 1 | 19 | 2023-01-25T16:10:10 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': airport
'1': beach
'2': bridge
'3': commercial
'4': desert
'5': farmland
'6': football field
'7': forest
'8': industrial
'9': meadow
'10': mountain
'11': park
'12': parking
'13': pond
'14': port
'15': railway station
'16': residential
'17': river
'18': viaduct
splits:
- name: train
num_bytes: 115362308.8
num_examples: 1005
download_size: 113327264
dataset_size: 115362308.8
license: cc-by-4.0
---
# Dataset Card for "WHU-RS19"
## Dataset Description
- **Paper:** [Structural high-resolution satellite image indexing](https://hal.science/hal-00458685/document)
- **Paper:** [Satellite image classification via two-layer sparse coding with biased image representation](https://ieeexplore.ieee.org/iel5/8859/4357975/05545358.pdf)
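A minimal usage sketch (not part of the original card; it assumes the `image`/`label` features and `train` split declared in the metadata above):
```python
from datasets import load_dataset

# Load the scene-classification images and map the integer label back to its name.
ds = load_dataset("jonathan-roberts1/WHU-RS19", split="train")
sample = ds[0]
print(sample["image"].size)                           # PIL image (width, height)
print(ds.features["label"].int2str(sample["label"]))  # e.g. "airport"
```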
### Licensing Information
Public Domain
## Citation Information
[Structural high-resolution satellite image indexing](https://hal.science/hal-00458685/document)
[Satellite image classification via two-layer sparse coding with biased image representation](https://ieeexplore.ieee.org/iel5/8859/4357975/05545358.pdf)
```
@article{xia2009structural,
title={Structural high-resolution satellite image indexing},
author={Xia, Gui-Song and Yang, Wen and Delon, Julie and Gousseau, Yann and Sun, Hong and Ma{\^\i}tre, Henri},
year={2009}
}
@article{dai2010satellite,
title={Satellite image classification via two-layer sparse coding with biased image representation},
author={Dai, Dengxin and Yang, Wen},
journal={IEEE Geoscience and remote sensing letters},
volume={8},
number={1},
pages={173--176},
year={2010},
publisher={IEEE}
}
``` | 1,924 | [
[
-0.03656005859375,
-0.029205322265625,
0.00858306884765625,
0.00936126708984375,
-0.030029296875,
-0.01021575927734375,
-0.0094146728515625,
-0.0355224609375,
0.009613037109375,
0.0078125,
-0.0318603515625,
-0.03363037109375,
-0.049468994140625,
0.0031642913... |
LIDIA-HESSEN/vencortex-BusinessNewsDataset | 2023-01-25T17:09:54.000Z | [
"region:us"
] | LIDIA-HESSEN | null | null | 2 | 19 | 2023-01-25T17:09:47 | ---
dataset_info:
features:
- name: title
dtype: string
- name: image
dtype: string
- name: text
dtype: string
- name: url
dtype: string
- name: type
dtype: string
- name: context_id
dtype: string
- name: source
dtype: string
- name: date
dtype: string
splits:
- name: train
num_bytes: 290733891
num_examples: 469361
download_size: 123671926
dataset_size: 290733891
---
# Dataset Card for "BusinessNewsDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 609 | [
[
-0.035369873046875,
-0.0271148681640625,
0.003528594970703125,
0.0276947021484375,
-0.0272979736328125,
0.001079559326171875,
0.0140533447265625,
-0.020721435546875,
0.06329345703125,
0.03570556640625,
-0.07379150390625,
-0.058013916015625,
-0.03515625,
-0.0... |
codkiller0911/kotlin_code | 2023-02-11T16:42:21.000Z | [
"size_categories:1K<n<10K",
"language:en",
"kotlin",
"android",
"region:us"
] | codkiller0911 | null | null | 0 | 19 | 2023-02-11T14:39:47 | ---
language:
- en
tags:
- kotlin
- android
size_categories:
- 1K<n<10K
---
# Dataset Card for Dataset kotlin_code
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains Kotlin functions together with their documentation. It can be useful for fine-tuning existing models, or training new ones, that generate code documentation.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,578 | [
[
-0.0164947509765625,
-0.0204315185546875,
0.0015001296997070312,
0.0176544189453125,
-0.0180511474609375,
-0.001796722412109375,
-0.0039520263671875,
0.003452301025390625,
0.0157318115234375,
0.0640869140625,
-0.050689697265625,
-0.07269287109375,
-0.04421997070... |
adilbekovich/Sentiment140Twitter | 2023-03-05T20:16:56.000Z | [
"region:us"
] | adilbekovich | null | null | 0 | 19 | 2023-02-25T06:40:40 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.01497650146484375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052490234375,
0.005062103271484375,
0.051361083984375,
0.01702880859375,
-0.05206298828125,
-0.01494598388671875,
-0.06036376953125,
0.037... |
lansinuote/gen.1.celeba | 2023-03-24T03:46:24.000Z | [
"region:us"
] | lansinuote | null | null | 0 | 19 | 2023-03-24T03:36:48 | ---
dataset_info:
features:
- name: image
dtype: image
- name: 5_o_Clock_Shadow
dtype: int64
- name: Arched_Eyebrows
dtype: int64
- name: Attractive
dtype: int64
- name: Bags_Under_Eyes
dtype: int64
- name: Bald
dtype: int64
- name: Bangs
dtype: int64
- name: Big_Lips
dtype: int64
- name: Big_Nose
dtype: int64
- name: Black_Hair
dtype: int64
- name: Blond_Hair
dtype: int64
- name: Blurry
dtype: int64
- name: Brown_Hair
dtype: int64
- name: Bushy_Eyebrows
dtype: int64
- name: Chubby
dtype: int64
- name: Double_Chin
dtype: int64
- name: Eyeglasses
dtype: int64
- name: Goatee
dtype: int64
- name: Gray_Hair
dtype: int64
- name: Heavy_Makeup
dtype: int64
- name: High_Cheekbones
dtype: int64
- name: Male
dtype: int64
- name: Mouth_Slightly_Open
dtype: int64
- name: Mustache
dtype: int64
- name: Narrow_Eyes
dtype: int64
- name: No_Beard
dtype: int64
- name: Oval_Face
dtype: int64
- name: Pale_Skin
dtype: int64
- name: Pointy_Nose
dtype: int64
- name: Receding_Hairline
dtype: int64
- name: Rosy_Cheeks
dtype: int64
- name: Sideburns
dtype: int64
- name: Smiling
dtype: int64
- name: Straight_Hair
dtype: int64
- name: Wavy_Hair
dtype: int64
- name: Wearing_Earrings
dtype: int64
- name: Wearing_Hat
dtype: int64
- name: Wearing_Lipstick
dtype: int64
- name: Wearing_Necklace
dtype: int64
- name: Wearing_Necktie
dtype: int64
- name: Young
dtype: int64
splits:
- name: train
num_bytes: 1474211218.427
num_examples: 202599
download_size: 1396302346
dataset_size: 1474211218.427
---
# Dataset Card for "gen.1.celeba"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,917 | [
[
-0.049774169921875,
-0.029205322265625,
0.002429962158203125,
0.01561737060546875,
-0.005634307861328125,
0.0035991668701171875,
0.013275146484375,
-0.01312255859375,
0.06756591796875,
0.029693603515625,
-0.05523681640625,
-0.051727294921875,
-0.04693603515625,
... |
saier/unarXive_citrec | 2023-04-02T01:28:05.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|10.5281/zenodo.7752615",
"language:en",
"license:cc-by-sa-4.0",
"a... | saier | null | null | 3 | 19 | 2023-03-24T12:13:20 | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: unarXive citation recommendation
size_categories:
- 1M<n<10M
tags:
- arXiv.org
- arXiv
- citation recommendation
- citation
- reference
- publication
- paper
- preprint
- section
- physics
- mathematics
- computer science
- cs
task_categories:
- text-classification
task_ids:
- multi-class-classification
source_datasets:
- extended|10.5281/zenodo.7752615
dataset_info:
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: marker
dtype: string
- name: marker_offsets
sequence:
sequence: int64
- name: label
dtype: string
config_name: .
splits:
- name: train
num_bytes: 5457336094
num_examples: 2043192
- name: test
num_bytes: 551012459
num_examples: 225084
- name: validation
num_bytes: 586422261
num_examples: 225348
download_size: 7005370567
dataset_size: 6594770814
---
# Dataset Card for unarXive citation recommendation
## Dataset Description
* **Homepage:** [https://github.com/IllDepence/unarXive](https://github.com/IllDepence/unarXive)
* **Paper:** [unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network](https://arxiv.org/abs/2303.14957)
### Dataset Summary
The unarXive citation recommendation dataset contains 2.5 million paragraphs from computer science papers, each with an annotated citation marker. The paragraphs and citation information are derived from [unarXive](https://github.com/IllDepence/unarXive).
Note that citation information is only given as the [OpenAlex](https://openalex.org/) ID of the cited paper. An important consideration for models is therefore whether the data is used *as is*, or whether additional information about the cited papers (metadata, abstracts, full text, etc.) is used.
The dataset can be used as follows.
```
from datasets import load_dataset
citrec_data = load_dataset('saier/unarXive_citrec')
citrec_data = citrec_data.class_encode_column('label') # assign target label column
citrec_data = citrec_data.remove_columns('_id') # remove sample ID column
```
## Dataset Structure
### Data Instances
Each data instance contains the paragraph’s text as well as information on one of the contained citation markers, in the form of a label (cited document OpenAlex ID), citation marker, and citation marker offset. An example is shown below.
```
{'_id': '7c1464bb-1f0f-4b38-b1a3-85754eaf6ad1',
'label': 'https://openalex.org/W3115081393',
'marker': '[1]',
'marker_offsets': [[316, 319]],
'text': 'Data: For sentiment analysis on Hindi-English CM tweets, we used the '
'dataset provided by the organizers of Task 9 at SemEval-2020.\n'
'The training dataset consists of 14 thousand tweets.\n'
'Whereas, the validation dataset as well as the test dataset contain '
'3 thousand tweets each.\n'
'The details of the dataset are given in [1]}.\n'
'For this task, we did not use any external dataset.\n'}
```
### Data Splits
The data is split into training, development, and testing data as follows.
* Training: 2,043,192 instances
* Development: 225,084 instances
* Testing: 225,348 instances
## Dataset Creation
### Source Data
The paragraph texts are extracted from the data set [unarXive](https://github.com/IllDepence/unarXive).
#### Who are the source language producers?
The paragraphs were written by the authors of the arXiv papers. In the file `license_info.jsonl`, author and text licensing information can be found for all samples. An example is shown below.
```
{'authors': 'Yusuke Sekikawa, Teppei Suzuki',
'license': 'http://creativecommons.org/licenses/by/4.0/',
'paper_arxiv_id': '2011.09852',
'sample_ids': ['cc375518-347c-43d0-bfb2-f88564d66df8',
'18dc073e-a48e-488e-b34c-e5fc3cb8a4ca',
'0c2e89b3-d863-4bc2-9e11-8f6c48d867cb',
'd85e46cf-b11d-49b6-801b-089aa2dd037d',
'92915cea-17ab-4a98-aad2-417f6cdd53d2',
'e88cb422-47b7-4f69-9b0b-fbddf8140d98',
'4f5094a4-0e6e-46ae-a34d-e15ce0b9803c',
'59003494-096f-4a7c-ad65-342b74eed561',
'6a99b3f5-217e-4d3d-a770-693483ef8670']}
```
### Annotations
Citation information in unarXive is automatically determined ([see implementation](https://github.com/IllDepence/unarXive/blob/master/src/match_references_openalex.py)).
## Additional Information
### Licensing information
The dataset is released under the Creative Commons Attribution-ShareAlike 4.0 license.
### Citation Information
```
@inproceedings{Saier2023unarXive,
author = {Saier, Tarek and Krause, Johan and F\"{a}rber, Michael},
title = {{unarXive 2022: All arXiv Publications Pre-Processed for NLP, Including Structured Full-Text and Citation Network}},
booktitle = {Proceedings of the 23rd ACM/IEEE Joint Conference on Digital Libraries},
year = {2023},
series = {JCDL '23}
}
```
| 5,208 | [
[
-0.0233917236328125,
-0.0277252197265625,
0.0223846435546875,
0.034149169921875,
-0.00954437255859375,
-0.00951385498046875,
-0.0047760009765625,
-0.01517486572265625,
0.01039886474609375,
0.022430419921875,
-0.0242462158203125,
-0.060272216796875,
-0.0310668945... |
pain/Arabic-Tweets | 2023-04-08T10:02:07.000Z | [
"language:ar",
"license:cc-by-4.0",
"region:us"
] | pain | null | null | 7 | 19 | 2023-04-04T07:56:20 | ---
license: cc-by-4.0
language:
- ar
---
# Dataset Card for Dataset Arabic-Tweets
## Dataset Description
- **Homepage:** https://ieee-dataport.org/open-access/masc-massive-arabic-speech-corpus
- **Paper:** https://ieeexplore.ieee.org/document/10022652
### Dataset Summary
This dataset has been collected from Twitter and comprises more than 41 GB of clean Arabic tweet text, with nearly 4 billion Arabic words (12 million unique Arabic words).
### Languages
Arabic
### Source Data
Twitter
### Example of data loading using streaming:
```py
from datasets import load_dataset
dataset = load_dataset("pain/Arabic-Tweets",split='train', streaming=True)
print(next(iter(dataset)))
```
### Example of data loading without streaming (the data will be downloaded locally):
```py
from datasets import load_dataset
dataset = load_dataset("pain/Arabic-Tweets", split='train')
print(dataset[0])
```
#### Initial Data Collection and Normalization
The collected data comprises 100 GB of raw Twitter data. Only tweets with Arabic characters were crawled. It was observed that the new data contained a large number of Persian tweets as well as many Arabic words with repeated characters. Because of this, and in order to improve the data quality, the raw data was processed as follows: hashtags, mentions, and links were removed; tweets containing Persian characters, three consecutive identical characters, or a single-character word were dropped; and normalization of Arabic letters was applied.
This has resulted in more than 41 GB of clean data with nearly 4-billion Arabic words (12-million unique Arabic words).
## Considerations for Using the Data
- This data has been collected to create a language model. The tweets were collected without reviewing their content. Therefore, we are not responsible for the content of any tweet.
### Licensing Information
[Creative Commons Attribution](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@INPROCEEDINGS{10022652,
author={Al-Fetyani, Mohammad and Al-Barham, Muhammad and Abandah, Gheith and Alsharkawi, Adham and Dawas, Maha},
booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)},
title={MASC: Massive Arabic Speech Corpus},
year={2023},
volume={},
number={},
pages={1006-1013},
doi={10.1109/SLT54892.2023.10022652}}
``` | 2,329 | [
[
-0.01396942138671875,
-0.0404052734375,
0.0084075927734375,
0.007396697998046875,
-0.0267181396484375,
0.0247344970703125,
-0.03173828125,
-0.0185546875,
0.019195556640625,
0.0196685791015625,
-0.0287933349609375,
-0.06622314453125,
-0.05609130859375,
0.0035... |
OpenHust/vietnamese-summarization | 2023-06-23T06:28:09.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:vi",
"region:us"
] | OpenHust | null | null | 3 | 19 | 2023-04-07T15:09:32 | ---
task_categories:
- summarization
language:
- vi
size_categories:
- 10K<n<100K
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | 1,617 | [
[
-0.038238525390625,
-0.0298309326171875,
-0.0035991668701171875,
0.027130126953125,
-0.0323486328125,
0.0037822723388671875,
-0.0172271728515625,
-0.020172119140625,
0.049041748046875,
0.04046630859375,
-0.06353759765625,
-0.08062744140625,
-0.052947998046875,
... |
hackathon-somos-nlp-2023/podcasts-ner-es | 2023-04-09T23:40:50.000Z | [
"task_categories:token-classification",
"size_categories:n<1K",
"language:es",
"license:mit",
"region:us"
] | hackathon-somos-nlp-2023 | null | null | 9 | 19 | 2023-04-08T23:40:02 | ---
dataset_info:
features:
- name: text
dtype: string
- name: annotation
list:
- name: end
dtype: int64
- name: label
dtype: string
- name: start
dtype: int64
- name: id
dtype: string
splits:
- name: train
num_bytes: 43389.8358778626
num_examples: 209
- name: test
num_bytes: 11003.164122137405
num_examples: 53
download_size: 42448
dataset_size: 54393
task_categories:
- token-classification
language:
- es
size_categories:
- n<1K
license: mit
---
# Dataset Card for "podcasts-ner-es"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
- [Team members](#team-members)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset comprises small text snippets extracted from the "Deforme Semanal" podcast,
accompanied by annotations that identify the presence of a predetermined set of entities.
The purpose of this dataset is to facilitate Named Entity Recognition (NER) tasks.
The dataset was created to aid in the identification of entities such as famous people, books, or films in podcasts.
The audio was first transcribed, then annotated with GPT-3 and curated with Argilla.
The dataset is in Spanish, covering mostly topics such as love, feminism, and art, which are the main themes of the podcast.
### Supported Tasks and Leaderboards
Named Entity Recognition
### Languages
The dataset is in Spanish and the language used is primarily informal.
It is important to note that the language may include aggressive or offensive content.
## Dataset Structure
### Data Instances
```
{
"text":"Tengo 39 años, pues, ya veré cuándo yo quiero dejar de comer ternera, está mal, porque hay sobre explotación y todo esto, muy mal."
"annotation": [ { "end": 13, "label": "DATES", "start": 6 } ]
"id": "53c4748e-dbd2-4cf5-946f-d134b0bf6155"
}
```
### Data Fields
`text`: Snippet of text of no more than 512 characters extracted from a podcast episode.
`id`: Unique identification number for each instance in the dataset.
`annotation`: List of dictionary-like entries with the following fields (a short extraction example follows this list):
- `end`: end character of the entity ocurrence in the text.
- `start`: start character of the entity ocurrence in the text.
- `label`: label for the entity from the predefined set of entities. The label of the entities is one of:
'people', 'products', 'books', 'animals', 'organizations', 'topics', 'dates', 'places', 'artista', 'objects','songs', and 'films'.
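As noted above, here is a short extraction sketch (an illustration, not from the original card) showing how the character offsets recover each entity's surface form:
```python
from datasets import load_dataset

# Load the train split and print each annotated entity with its label.
ds = load_dataset("hackathon-somos-nlp-2023/podcasts-ner-es", split="train")
example = ds[0]
for span in example["annotation"]:
    entity = example["text"][span["start"]:span["end"]]
    print(span["label"], "->", entity)
```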
### Data Splits
The dataset was shuffled and split using the `train_test_split` function from the Hugging Face datasets library.
The split was made with a train size of 0.8 and a seed of 42.
## Dataset Creation
### Curation Rationale
We created this dataset with the aim of making the information from our favorite podcasts more accessible, as retrieving information from audio formats can be challenging.
We chose to focus on the Named Entity Recognition (NER) task as it was relatively easy to annotate and validate.
### Source Data
#### Initial Data Collection and Normalization
We collected the data from a playlist on YouTube containing approximately 15 episodes of the "Deforme Semanal" podcast.
You can find the playlist at this [link](https://www.youtube.com/playlist?list=PLLbN7SMQhMVZoXhtQ00AyebQE_-ttDrs9).
We then transcribed the audio stream using OpenAI's Whisper (medium size) and split the resulting text files
into chunks of less than 512 characters.
### Annotations
#### Annotation process
To annotate the texts, we used OpenAI's API and GPT-3, with the following prompt:
```
Perform named entity recognition in Spanish. The classes are books, films, video games, songs, places, dates, topics, organizations, and people. The output should follow the format:
[{'class': 'people', 'text': 'name of the person'}, {'class': 'books', 'start': 'name of the book'}]
Sentence:
```
Finally, to ensure the quality of the dataset, we validated the annotations using Argilla by checking that the tokens were classified
correctly.
## Considerations for Using the Data
### Discussion of Biases
The dataset was obtained from the "Deforme Semanal" podcast, which primarily focuses on art, feminism, and culture.
As a result, the data is directly related to the topics and individuals discussed in these contexts. Additionally,
the language used in the podcast is informal and can be aggressive or offensive at times, which may be reflected in the dataset.
Although we attempted to minimize these biases during the validation process, their effectiveness is likely limited.
### Other Known Limitations
One issue that we have encountered with the token/entity data is that there can be some ambiguity in terms of their distinctions.
In some cases, it may not be clear how to differentiate between two tokens or entities, which can impact the accuracy
and effectiveness of models trained on this data.
Furthermore, the dataset size is relatively small, which can pose a challenge when it comes to training machine learning models.
With a limited amount of data, it can be difficult to capture the full range of variations and patterns in the data,
and overfitting can become a concern. This is especially true when dealing with complex models that require a large
amount of data to train effectively.
## Team members
[David Mora](https://huggingface.co/DavidFM43)
[Sergio Perez](https://huggingface.co/sergiopperez)
[Alberto Fernandez](https://huggingface.co/AlbertoFH98)
| 6,601 | [
[
-0.05340576171875,
-0.02923583984375,
0.00374603271484375,
0.0217742919921875,
-0.019744873046875,
-0.002162933349609375,
-0.034088134765625,
-0.035919189453125,
0.057403564453125,
0.0293426513671875,
-0.05572509765625,
-0.05926513671875,
-0.054840087890625,
... |
nanakonoda/xnli_cm | 2023-04-18T13:58:12.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:extended|xnli",
"language:en",
"language:de",
"language:fr",
"mode classification",
"aligned",
"code-mixed",
... | nanakonoda | This dataset was generated from XNLI using the CodeMixed Text Generator for a binary text classification task. | # @InProceedings{huggingface:dataset,
# title = {A great new dataset},
# author={huggingface, Inc.
# },
# year={2020}
# } | 0 | 19 | 2023-04-11T18:47:31 | ---
annotations_creators:
- expert-generated
language:
- en
- de
- fr
language_creators:
- found
license: []
multilinguality:
- multilingual
pretty_name: XNLI Code-Mixed Corpus
size_categories:
- 1M<n<10M
source_datasets:
- extended|xnli
tags:
- mode classification
- aligned
- code-mixed
task_categories:
- text-classification
task_ids: []
dataset_info:
- config_name: de_ec
features:
- name: text
dtype: string
- name: label
dtype: int64
# class_label:
# names:
# '0': spoken
# '1': written
splits:
- name: train
num_bytes: 576
num_examples: 2490
- name: test
num_bytes: 194139776
num_examples: 1610549
- config_name: de_ml
features:
- name: text
dtype: string
- name: label
dtype: int64
# class_label:
# names:
# '0': spoken
# '1': written
splits:
- name: train
num_bytes: 576
num_examples: 2490
- name: test
num_bytes: 87040
num_examples: 332326
- config_name: fr_ec
features:
- name: text
dtype: string
- name: label
dtype: int64
# class_label:
# names:
# '0': spoken
# '1': written
splits:
- name: train
num_bytes: 576
num_examples: 2490
- name: test
num_bytes: 564416
num_examples: 2562631
- config_name: fr_ml
features:
- name: text
dtype: string
- name: label
dtype: int64
# class_label:
# names:
# '0': spoken
# '1': written
splits:
- name: train
num_bytes: 576
num_examples: 2490
- name: test
num_bytes: 361472
num_examples: 1259159
download_size: 1376728
dataset_size: 1376704
---
# Dataset Card for XNLI Code-Mixed Corpus
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
### Supported Tasks and Leaderboards
Binary mode classification (spoken vs written)
### Languages
- English
- German
- French
- German-English code-mixed by Equivalence Constraint Theory
- German-English code-mixed by Matrix Language Theory
- French-English code-mixed by Equivalence Constraint Theory
- French-English code-mixed by Matrix Language Theory
## Dataset Structure
### Data Instances
```
{
    'text': "And he said , Mama , I 'm home",
    'label': 0
}
```
### Data Fields
- text: sentence
- label: binary label of text (0: spoken 1: written)
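A minimal loading sketch (illustrative only; the config name `de_ec` is taken from the `dataset_info` metadata above and is an assumption):
```python
from datasets import load_dataset

# Load the German-English (Equivalence Constraint) configuration.
ds = load_dataset("nanakonoda/xnli_cm", "de_ec")
print(ds["test"][0])  # {'text': ..., 'label': 0 (spoken) or 1 (written)}
```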
### Data Splits
- de-ec
- train (English, German, French monolingual):
- test (German-English code-mixed by Equivalence Constraint Theory):
- de-ml:
- train (English, German, French monolingual):
- test (German-English code-mixed by Matrix Language Theory):
- fr-ec
- train (English, German, French monolingual):
- test (French-English code-mixed by Equivalence Constraint Theory):
- fr-ml:
- train (English, German, French monolingual):
- test (French-English code-mixed by Matrix Language Theory):
### Other Statistics
#### Average Sentence Length
- German
- train:
- test:
- French
- train:
- test:
#### Label Split
- train:
- 0:
- 1:
- test:
- 0:
- 1:
## Dataset Creation
### Curation Rationale
Using the XNLI Parallel Corpus, we generated a code-mixed corpus using CodeMixed Text Generator.
The XNLI Parallel Corpus is available here:
https://huggingface.co/datasets/nanakonoda/xnli_parallel
It was created from the XNLI corpus.
More information is available in the datacard for the XNLI Parallel Corpus.
Here is the link and citation for the original CodeMixed Text Generator paper.
https://github.com/microsoft/CodeMixed-Text-Generator
```
@inproceedings{rizvi-etal-2021-gcm,
title = "{GCM}: A Toolkit for Generating Synthetic Code-mixed Text",
author = "Rizvi, Mohd Sanad Zaki and
Srinivasan, Anirudh and
Ganu, Tanuja and
Choudhury, Monojit and
Sitaram, Sunayana",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.24",
pages = "205--211",
abstract = "Code-mixing is common in multilingual communities around the world, and processing it is challenging due to the lack of labeled and unlabeled data. We describe a tool that can automatically generate code-mixed data given parallel data in two languages. We implement two linguistic theories of code-mixing, the Equivalence Constraint theory and the Matrix Language theory to generate all possible code-mixed sentences in the language-pair, followed by sampling of the generated data to generate natural code-mixed sentences. The toolkit provides three modes: a batch mode, an interactive library mode and a web-interface to address the needs of researchers, linguists and language experts. The toolkit can be used to generate unlabeled text data for pre-trained models, as well as visualize linguistic theories of code-mixing. We plan to release the toolkit as open source and extend it by adding more implementations of linguistic theories, visualization techniques and better sampling techniques. We expect that the release of this toolkit will help facilitate more research in code-mixing in diverse language pairs.",
}
```
### Source Data
XNLI Parallel Corpus
https://huggingface.co/datasets/nanakonoda/xnli_parallel
#### Original Source Data
XNLI Parallel Corpus was created using the XNLI Corpus.
https://github.com/facebookresearch/XNLI
Here is the citation for the original XNLI paper.
```
@InProceedings{conneau2018xnli,
author = "Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin",
title = "XNLI: Evaluating Cross-lingual Sentence Representations",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Brussels, Belgium",
}
```
#### Initial Data Collection and Normalization
We removed all punctuation from the XNLI Parallel Corpus except apostrophes.
#### Who are the source language producers?
N/A
### Annotations
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
N/A
### Discussion of Biases
N/A
### Other Known Limitations
N/A
## Additional Information
### Dataset Curators
N/A
### Licensing Information
N/A
### Citation Information
### Contributions
N/A | 7,051 | [
[
-0.03570556640625,
-0.034454345703125,
-0.0005393028259277344,
0.034027099609375,
-0.013092041015625,
0.0293426513671875,
-0.047027587890625,
-0.035888671875,
0.0469970703125,
0.0204010009765625,
-0.04156494140625,
-0.053802490234375,
-0.0239715576171875,
0.... |
h2oai/openassistant_oasst1 | 2023-04-19T04:43:13.000Z | [
"language:en",
"license:apache-2.0",
"gpt",
"llm",
"large language model",
"open-source",
"region:us"
] | h2oai | null | null | 6 | 19 | 2023-04-16T01:58:01 | ---
license: apache-2.0
language:
- en
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
tags:
- gpt
- llm
- large language model
- open-source
---
# h2oGPT Data Card
## Summary
H2O.ai's `openassistant_oasst1` is an open-source instruct-type dataset for fine-tuning of large language models, licensed for commercial use.
- Number of rows: `46283`
- Number of columns: `3`
- Column names: `['input', 'prompt_type', 'source']`
## Source
- [Original Open Assistant data in tree structure](https://huggingface.co/datasets/OpenAssistant/oasst1)
- [This flattened dataset was created by a script in the h2oGPT repository](https://github.com/h2oai/h2ogpt/blob/45e6183171fb16691ad7d3ab006fad973f971e98/create_data.py#L1253)
| 762 | [
[
-0.00466156005859375,
-0.04364013671875,
0.0114898681640625,
0.005908966064453125,
-0.0105133056640625,
-0.009918212890625,
0.01126861572265625,
-0.01110076904296875,
-0.0016012191772460938,
0.0242462158203125,
-0.0236968994140625,
-0.0501708984375,
-0.027481079... |
LevMuchnik/SupremeCourtOfIsrael | 2023-04-27T06:01:49.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-retrieval",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:document-retrieval",
"size_categories:100K<n<1M",
"language:he",
"license:openrail",
"legal, verdicts, metadata, hebrew",
... | LevMuchnik | null | null | 4 | 19 | 2023-04-21T11:49:35 | ---
license: openrail
language:
- he
tags:
- legal, verdicts, metadata, hebrew
pretty_name: Supreme Court Israel - Public Verdicts and Decisions
size_categories:
- 100K<n<1M
task_ids:
- language-modeling
- masked-language-modeling
- document-retrieval
task_categories:
- text-generation
- fill-mask
- text-retrieval
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
Lev Muchnik, lev.muchnik@mail.huji.ac.il
### Dataset Summary
This dataset represents a 2022 snapshot of the Supreme Court of Israel public verdicts and decisions supported by rich metadata. The 5.31GB dataset represents 751,194 documents.
Overall, the dataset contains 2.68 Gb of text.
It can be loaded with the `datasets` package:
```
import datasets
data = datasets.load_dataset('LevMuchnik/SupremeCourtOfIsrael')
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The vast majority of the documents in the database are in Hebrew. A small number of documents are in English.
## Dataset Structure
The dataset is a json lines file with each line corresponding to a single document and containing document identification, text and metadata.
### Data Instances
[More Information Needed]
### Data Fields
The file contains the following fields (a short filtering example follows the list):
- case_id - running number for cases
- download_time - when the document was downloaded (datetime)
- number_of_case_documents - number of documents in the current case
- file_name - full name of the document file, including relative path
- Id - document id
- CaseId - case id
- VerdictDt - Date of the document (datetime)
- CreatedDate - Date of when the document was inserted into the Supreme Court database
- CaseNum - case number
- CaseDesc - Unique case identifier. This id is used to reference cases within the Israeli legal system
- Pages - number of pages in the original document
- Path - relative path to the document
- CaseName - formal name of the case
- FileName - document file name, without path
- DocName - document file name, without path
- Year - document creation year
- TypeCode - enumeration of document types (see Type field below)
- Type - Document type
  - פסק-דין (verdict): 84,339
  - החלטה (decision): 663,099
  - צו ביניים (interim order): 22
  - פסקי דין באנגלית (verdicts in English): 310
  - צו על תנאי (order nisi): 200
  - צו (order): 2,606
  - פד"י (official law reports, "Piskei Din"): 302
  - תקצירים (summaries): 316
- Technical - boolean indicator of whether the document is technical or not.
- CodeVolume - ?
- document_hash - 258-bit hash of the document name, used internally to uniquely identify the document
- text - text of the document. Multiple newlines and other document formatting elements (paragraphs, lists, etc.) are preserved.
- html_title - document title extracted from the HTML
- VerdictsDt - date of the verdict
- meta_case_nm - formal case name,
- meta_sec_appeal - integer or None
- meta_side_ty - case type, list of strings
- meta_verdict_file_nm - name of the verdict file
- meta_judge - list of names of the cases judges
- meta_mador_nm - name of the court instance (e.g. בג"ץ)
- meta_side_nm - list of the case parties, list of strings
- meta_verdict_dt - date of the verdict
- meta_case_dt - date of the case
- meta_verdict_nbr -
- meta_ProgId - name of the software used to create the document (None, Word, etc)
- meta_is_technical - whether the document is technical, {'false', 'true'}
- meta_judge_nm_last - last names of the judges (list of strings)
- meta_case_nbr - formal number of the case (same as CaseDesc)
- meta_verdict_ty - type of the decision (same as Type)
- meta_lawyer_nm - list of lawyer names, list of strings or None
- meta_judge_nm_first - list of judges' first names, list of strings
- meta_verdict_pages - number of document cases
- meta_inyan_nm - court בג"ץ
- meta_court_nm - court (e.g. בית המשפט העליון )
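As noted above, a short filtering sketch (an illustration, not from the original card; it assumes `Year` is stored as an integer and reuses the `data` object from the loading snippet earlier in this card):
```python
# Keep only verdict documents ("פסק-דין") from a given year.
verdicts = data["train"].filter(
    lambda doc: doc["Type"] == "פסק-דין" and doc["Year"] == 2020
)
print(len(verdicts))
```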
### Data Splits
The entire dataset is qualified as 'train'.
## Dataset Creation
2023-04-22
### Curation Rationale
[More Information Needed]
### Source Data
https://supreme.court.gov.il/
#### Initial Data Collection and Normalization
The data was collected by crawling the Israeli Supreme Court website.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The data contained in this dataset is public.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Prof. Lev Muchnik, Hebrew University of Jerusalem
Dr. Inbal Yahav Shenberger, Tel Aviv University
### Licensing Information
[More Information Needed]
### Citation Information
Lev Muchnik, Inbal Yahav, Ariel Nevo, Avichay Chriqui, Tim Shektov, 2023, The Israeli Supreme Court Dataset
### Contributions
The authors would like to thank the Israeli Innovation Authority (grants #78560 and #78561) for their support in the creation of this dataset.
[
-0.0178070068359375,
-0.0264892578125,
0.0294189453125,
0.01348876953125,
-0.033905029296875,
-0.01497650146484375,
-0.00971221923828125,
-0.007232666015625,
0.0107879638671875,
0.040618896484375,
-0.028167724609375,
-0.08038330078125,
-0.04437255859375,
-0.... |
recastai/LAION-art-EN-improved-captions | 2023-06-24T04:19:50.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | recastai | null | null | 8 | 19 | 2023-04-26T03:37:46 | ---
license: cc-by-4.0
dataset_info:
features:
- name: orig_caption
dtype: string
- name: generated_caption
dtype: string
- name: key
dtype: string
- name: url
dtype: string
- name: index
dtype: int64
splits:
- name: train
num_bytes: 681710086
num_examples: 2684160
download_size: 441945582
dataset_size: 681710086
language:
- en
---
# Dataset Card for LAION-art-EN-improved-captions
### Dataset Summary
This dataset has been created by **Re:cast AI** to improve the semantic alignment between images and their captions. The `generated_caption` values were created in a semi-supervised fashion using the **Salesforce/blip2-flan-t5-xxl** model.
### Supported Tasks
Fine-tuning text-to-image generators (e.g., Stable Diffusion), or building a searchable prompt database (requires a FAISS index).
## Dataset Structure
### Data Fields
- orig_caption
- generated_caption
- key
- index
- url
### Data Splits
- train
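A minimal loading sketch (illustrative, not from the original card; field names are taken from the metadata above):
```python
from datasets import load_dataset

# Compare the original LAION caption with the generated one.
ds = load_dataset("recastai/LAION-art-EN-improved-captions", split="train")
print(ds[0]["orig_caption"])
print(ds[0]["generated_caption"])
```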
### Source Data
LAION-Art | 968 | [
[
-0.00479888916015625,
-0.029693603515625,
0.005138397216796875,
0.00524139404296875,
-0.046875,
-0.0002238750457763672,
-0.004390716552734375,
-0.0223388671875,
0.0203857421875,
0.050048828125,
-0.049285888671875,
-0.0504150390625,
-0.026214599609375,
0.0124... |
Nan-Do/code-search-net-python | 2023-05-15T00:55:15.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:summarization",
"language:en",
"license:apache-2.0",
"code",
"python",
"CodeSearchNet",
"region:us"
] | Nan-Do | null | null | 12 | 19 | 2023-05-14T00:42:57 | ---
dataset_info:
features:
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
- name: partition
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 1772584117
num_examples: 455243
download_size: 598837908
dataset_size: 1772584117
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
- summarization
language:
- en
tags:
- code
- python
- CodeSearchNet
pretty_name: Python CodeSearchNet with Summaries
---
# Dataset Card for "code-search-net-python"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-python
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the Python portion of the CodeSearchNet dataset, annotated with a summary column.
The CodeSearchNet dataset includes open-source functions with their accompanying comments, collected from GitHub.
The summary is a short description of what the function does.
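A minimal loading sketch (illustrative, not from the card; column names are taken from the metadata above):
```python
from datasets import load_dataset

# Load the single "train" split and inspect one function/summary pair.
ds = load_dataset("Nan-Do/code-search-net-python", split="train")
sample = ds[0]
print(sample["func_name"])
print(sample["summary"])    # short generated description of the function
print(sample["partition"])  # original CodeSearchNet partition label
```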
### Languages
The dataset's comments are in English and the functions are coded in Python
### Data Splits
Train, test, validation labels are included in the dataset as a column.
## Dataset Creation
May of 2023
### Curation Rationale
This dataset can be used to generate instructional (or many other interesting) datasets that are useful for training LLMs.
### Source Data
The CodeSearchNet dataset can be found at https://www.kaggle.com/datasets/omduggineni/codesearchnet
### Annotations
This datasets include a summary column including a short description of the function.
#### Annotation process
The annotation procedure was done using [Salesforce](https://huggingface.co/Salesforce) T5 summarization models.
A sample notebook of the process can be found at https://github.com/Nan-Do/OpenAssistantInstructionResponsePython
The annotations have been cleaned to remove repetitions and meaningless summaries (some may still be present in the dataset).
### Licensing Information
Apache 2.0 | 2,650 | [
[
-0.028900146484375,
-0.023956298828125,
-0.00501251220703125,
0.024749755859375,
-0.0101470947265625,
-0.0149688720703125,
-0.018463134765625,
-0.00933074951171875,
0.0518798828125,
0.0264434814453125,
-0.036956787109375,
-0.05133056640625,
-0.025543212890625,
... |
TrainingDataPro/pose_estimation | 2023-09-14T16:47:12.000Z | [
"task_categories:image-classification",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"finance",
"region:us"
] | TrainingDataPro | The dataset is primarily intended to identify and predict the positions of major
joints of a human body in an image. It consists of people's photographs with
body parts labeled with keypoints. | @InProceedings{huggingface:dataset,
title = {pose_estimation},
author = {TrainingDataPro},
year = {2023}
} | 2 | 19 | 2023-05-19T11:17:45 | ---
license: cc-by-nc-nd-4.0
task_categories:
- image-classification
language:
- en
tags:
- code
- finance
dataset_info:
features:
- name: image_id
dtype: uint32
- name: image
dtype: image
- name: mask
dtype: image
- name: shapes
dtype: string
splits:
- name: train
num_bytes: 142645152
num_examples: 29
download_size: 137240523
dataset_size: 142645152
---
# Pose Estimation
The dataset is primarily intended to identify and predict the positions of major joints of a human body in an image. It consists of people's photographs with body parts labeled with keypoints.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=pose_estimation) to discuss your requirements, learn about the price and buy the dataset.

# Data Format
Each image from the `EP` folder is accompanied by an XML annotation in the `annotations.xml` file indicating the coordinates of the keypoints. For each point, the x and y coordinates are provided, along with a `Presumed_Location` attribute indicating whether the point is presumed or accurately defined.
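A minimal loading sketch (illustrative; the `image` and `shapes` features come from the dataset metadata above, and the interpretation of `shapes` as serialized keypoint annotations is an assumption):
```python
from datasets import load_dataset

# Load one annotated sample and inspect its keypoint string.
ds = load_dataset("TrainingDataPro/pose_estimation", split="train")
sample = ds[0]
print(sample["image"].size)  # PIL image (width, height)
print(sample["shapes"])      # keypoint annotations serialized as a string
```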
# Example of XML file structure
*(Image: an example of the XML file structure; the original image link is broken in this copy.)*
# Labeled body parts
The keypoints are ordered, and each corresponds to a specific part of the body:
0. **Nose**
1. **Neck**
2. **Right shoulder**
3. **Right elbow**
4. **Right wrist**
5. **Left shoulder**
6. **Left elbow**
7. **Left wrist**
8. **Right hip**
9. **Right knee**
10. **Right foot**
11. **Left hip**
12. **Left knee**
13. **Left foot**
14. **Right eye**
15. **Left eye**
16. **Right ear**
17. **Left ear**
# Keypoint annotation is made in accordance with your requirements.
## [**TrainingData**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=pose_estimation) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** | 2,493 | [
[
-0.0245361328125,
-0.022491455078125,
0.03631591796875,
-0.005466461181640625,
-0.0178375244140625,
-0.00815582275390625,
0.0164337158203125,
-0.039093017578125,
0.0286407470703125,
0.0243988037109375,
-0.05084228515625,
-0.083740234375,
-0.046234130859375,
... |
voidful/StrategyQA | 2023-05-20T16:06:43.000Z | [
"region:us"
] | voidful | null | null | 1 | 19 | 2023-05-20T16:02:29 | A Question Answering Benchmark with Implicit Reasoning Strategies
The StrategyQA dataset was created through a crowdsourcing pipeline for eliciting creative and diverse yes/no questions that require implicit reasoning steps. To solve questions in StrategyQA, the reasoning steps should be inferred using a strategy. To guide and evaluate the question answering process, each example in StrategyQA was annotated with a decomposition into reasoning steps for answering it, and Wikipedia paragraphs that provide evidence for the answer to each step.
Illustrated in the figure below: Questions in StrategyQA (Q1) require implicit reasoning, in contrast to multi-step questions that explicitly specify the reasoning process (Q2). Each training example contains a question (Q1), yes/no answer (A), decomposition (D), and evidence paragraphs (E).
[strategyqa_test](https://huggingface.co/datasets/voidful/StrategyQA/resolve/main/strategyqa_test.json)
[strategyqa_train](https://huggingface.co/datasets/voidful/StrategyQA/blob/main/strategyqa_train.json)
[strategyqa_train_filtered](https://huggingface.co/datasets/voidful/StrategyQA/blob/main/strategyqa_train_filtered.json)
[strategyqa_train_paragraphs](https://huggingface.co/datasets/voidful/StrategyQA/blob/main/strategyqa_train_paragraphs.json)
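A minimal download-and-inspect sketch (illustrative, not from the original card; the `question`/`answer` keys are assumed from the example description above):
```python
import json
import urllib.request

# Fetch the training file linked above and inspect one example.
url = "https://huggingface.co/datasets/voidful/StrategyQA/resolve/main/strategyqa_train.json"
with urllib.request.urlopen(url) as f:
    train = json.load(f)
print(train[0]["question"], "->", train[0]["answer"])
```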
Paper
Title: Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
Authors: Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant
Transactions of the Association for Computational Linguistics (TACL), 2021
Citation:
```
@article{geva2021strategyqa,
title = {{Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies}},
author = {Geva, Mor and Khashabi, Daniel and Segal, Elad and Khot, Tushar and Roth, Dan and Berant, Jonathan},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
year = {2021},
}
``` | 1,950 | [
[
-0.043426513671875,
-0.060394287109375,
0.06597900390625,
-0.0002720355987548828,
0.0037078857421875,
-0.010101318359375,
0.0004930496215820312,
-0.012237548828125,
-0.0271148681640625,
0.01251983642578125,
-0.0706787109375,
-0.025604248046875,
-0.01893615722656... |
raygx/NepCov19TweetsPlus | 2023-07-01T04:10:37.000Z | [
"region:us"
] | raygx | null | null | 0 | 19 | 2023-05-27T09:23:10 | ---
dataset_info:
features:
- name: Sentiment
dtype: int64
- name: Sentences
dtype: string
splits:
- name: train
num_bytes: 14110875
num_examples: 41541
download_size: 5219950
dataset_size: 14110875
---
# Dataset Card for "NepCov19TweetsPlus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 405 | [
[
-0.039947509765625,
-0.01346588134765625,
0.0003809928894042969,
0.037567138671875,
-0.0258941650390625,
0.005290985107421875,
0.0139617919921875,
-0.006923675537109375,
0.0718994140625,
0.027496337890625,
-0.06951904296875,
-0.0443115234375,
-0.044342041015625,... |
grantprice/DND-NLP | 2023-06-09T23:34:20.000Z | [
"region:us"
] | grantprice | null | null | 1 | 19 | 2023-06-06T20:51:17 | Entry not found | 15 | [
[
-0.0214080810546875,
-0.01497650146484375,
0.057098388671875,
0.028839111328125,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005046844482421875,
0.051361083984375,
0.016998291015625,
-0.05206298828125,
-0.01497650146484375,
-0.06036376953125,
0... |
tuetschek/atis | 2023-06-11T18:24:58.000Z | [
"region:us"
] | tuetschek | null | null | 0 | 19 | 2023-06-11T16:16:00 | Entry not found | 15 | [
[
-0.021392822265625,
-0.01494598388671875,
0.05718994140625,
0.028839111328125,
-0.0350341796875,
0.046539306640625,
0.052490234375,
0.00507354736328125,
0.051361083984375,
0.01702880859375,
-0.052093505859375,
-0.01494598388671875,
-0.06036376953125,
0.03790... |
KaiLv/UDR_Java | 2023-06-21T12:40:15.000Z | [
"region:us"
] | KaiLv | null | null | 0 | 19 | 2023-06-21T12:39:27 | ---
dataset_info:
features:
- name: idx
dtype: int64
- name: question
dtype: string
- name: target
dtype: string
- name: len_question
dtype: int64
- name: len_target
dtype: int64
splits:
- name: train
num_bytes: 105539111
num_examples: 164514
- name: validation
num_bytes: 3088869
num_examples: 5172
- name: test
num_bytes: 6865702
num_examples: 10928
- name: debug
num_bytes: 64147056
num_examples: 100000
download_size: 77259976
dataset_size: 179640738
---
# Dataset Card for "UDR_Java"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 699 | [
[
-0.038726806640625,
-0.019073486328125,
0.00733184814453125,
0.00847625732421875,
-0.01154327392578125,
0.0079193115234375,
0.017578125,
-0.0084075927734375,
0.0430908203125,
0.039306640625,
-0.0401611328125,
-0.061279296875,
-0.040435791015625,
-0.013038635... |
ChanceFocus/flare-finqa | 2023-08-18T20:03:26.000Z | [
"region:us"
] | ChanceFocus | null | null | 3 | 19 | 2023-06-25T16:40:22 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 27056024
num_examples: 6251
- name: valid
num_bytes: 3764872
num_examples: 883
- name: test
num_bytes: 4846110
num_examples: 1147
download_size: 0
dataset_size: 35667006
---
# Dataset Card for "flare-finqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 571 | [
[
-0.054931640625,
-0.01276397705078125,
0.0019388198852539062,
0.0103302001953125,
-0.00917816162109375,
0.01849365234375,
0.025115966796875,
-0.0185546875,
0.0679931640625,
0.0423583984375,
-0.061126708984375,
-0.046661376953125,
-0.0249481201171875,
-0.0150... |
jjzha/green | 2023-09-07T12:14:02.000Z | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | jjzha | null | null | 0 | 19 | 2023-07-04T13:43:24 | ---
license: cc-by-4.0
language: en
---
This is the skill dataset created by:
```
@inproceedings{green-etal-2022-development,
title = "Development of a Benchmark Corpus to Support Entity Recognition in Job Descriptions",
author = "Green, Thomas and
Maynard, Diana and
Lin, Chenghua",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.128",
pages = "1201--1208",
}
```
There are no document delimiters; the task is at the sentence level.
Number of samples (sentences):
- train: 8669
- dev: 964
- test: 335
Sources:
- TotalJobs (UK): https://www.kaggle.com/datasets/airiddha/trainrev1
Type of tags:
- Generic BIO tags with key `tags_skill`
- Finer-grained labels of the BIO tags are:
- `SKILL`: Tasks that can be performed, or attributes and abilities (including soft skills) that enable people to perform tasks.
- `QUALIFICATION`: Official certifications obtained through taking a course or passing an exam or appraisal.
- `EXPERIENCE`: Lengths of time relating to a position or skill.
- `OCCUPATION`: Job titles, including abbreviations and acronyms.
- `DOMAIN`: Areas of industry in which someone might have knowledge or experience.
- Also has part-of-speech tags, indicated by `pos`.
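A minimal loading sketch (assuming the dataset loads with the standard `datasets` library and exposes the fields listed above):
```python
from datasets import load_dataset

dataset = load_dataset("jjzha/green")

# Inspect one sentence-level example and its annotations.
example = dataset["train"][0]
print(example["tokens"])
print(example["tags_skill"])
print(example["pos"])
```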
Sample:
```
{
"idx": 959,
"tokens": ["negotiating", "and", "commercial", "skills", "Conscientious", "and", "thorough", "by", "nature"],
"tags_skill": ["B-SKILL", "I-SKILL", "I-SKILL", "I-SKILL", "I-SKILL", "O", "B-SKILL", "O", "O"],
"pos": ["NN", "CC", "JJ", "NNS", "JJ", "CC", "JJ", "IN", "NN"]
}
``` | 1,763 | [
[
-0.00875091552734375,
-0.03643798828125,
0.02484130859375,
0.0029754638671875,
-0.006237030029296875,
0.0137176513671875,
-0.0246124267578125,
-0.024810791015625,
0.0253448486328125,
0.040985107421875,
-0.046661376953125,
-0.06317138671875,
-0.0533447265625,
... |
rdpahalavan/UNSW-NB15 | 2023-07-22T21:41:28.000Z | [
"task_categories:text-classification",
"task_categories:tabular-classification",
"size_categories:100M<n<1B",
"license:apache-2.0",
"Network Intrusion Detection",
"Cybersecurity",
"Network Packets",
"UNSW-NB15",
"region:us"
] | rdpahalavan | null | null | 0 | 19 | 2023-07-08T07:19:33 | ---
license: apache-2.0
task_categories:
- text-classification
- tabular-classification
tags:
- Network Intrusion Detection
- Cybersecurity
- Network Packets
- UNSW-NB15
size_categories:
- 100M<n<1B
---
We have developed a Python package as a wrapper around the Hugging Face Hub and the Hugging Face Datasets library to make this dataset easy to access.
# NIDS Datasets
The `nids-datasets` package provides functionality to download and utilize specially curated and extracted datasets from the original UNSW-NB15 and CIC-IDS2017 datasets. These datasets, which initially were only flow datasets, have been enhanced to include packet-level information from the raw PCAP files. The dataset contains both packet-level and flow-level data for over 230 million packets, with 179 million packets from UNSW-NB15 and 54 million packets from CIC-IDS2017.
## Installation
Install the `nids-datasets` package using pip:
```shell
pip install nids-datasets
```
Import the package in your Python script:
```python
from nids_datasets import Dataset, DatasetInfo
```
## Dataset Information
The `nids-datasets` package currently supports two datasets: [UNSW-NB15](https://research.unsw.edu.au/projects/unsw-nb15-dataset) and [CIC-IDS2017](https://www.unb.ca/cic/datasets/ids-2017.html). Each of these datasets contains a mix of normal traffic and different types of attack traffic, which are identified by their respective labels. The UNSW-NB15 dataset has 10 unique class labels, and the CIC-IDS2017 dataset has 24 unique class labels.
- UNSW-NB15 Labels: 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis'
- CIC-IDS2017 Labels: 'BENIGN', 'FTP-Patator', 'SSH-Patator', 'DoS slowloris', 'DoS Slowhttptest', 'DoS Hulk', 'Heartbleed', 'Web Attack – Brute Force', 'Web Attack – XSS', 'Web Attack – SQL Injection', 'Infiltration', 'Bot', 'PortScan', 'DDoS', 'normal', 'exploits', 'dos', 'fuzzers', 'generic', 'reconnaissance', 'worms', 'shellcode', 'backdoor', 'analysis', 'DoS GoldenEye'
## Subsets of the Dataset
Each dataset consists of four subsets:
1. Network-Flows - Contains flow-level data.
2. Packet-Fields - Contains packet header information.
3. Packet-Bytes - Contains packet byte information in the range (0-255).
4. Payload-Bytes - Contains payload byte information in the range (0-255).
Each subset contains 18 files (except Network-Flows, which has one file), where the data is stored in parquet format. In total, this package provides access to 110 files. You can choose to download all subsets or select specific subsets or specific files depending on your analysis requirements.
## Getting Information on the Datasets
The `DatasetInfo` function provides a summary of the dataset in a pandas dataframe format. It displays the number of packets for each class label across all 18 files in the dataset. This overview can guide you in selecting specific files for download and analysis.
```python
df = DatasetInfo(dataset='UNSW-NB15') # or dataset='CIC-IDS2017'
df
```
## Downloading the Datasets
The `Dataset` class allows you to specify the dataset, subset, and files that you are interested in. The specified data will then be downloaded.
```python
dataset = 'UNSW-NB15' # or 'CIC-IDS2017'
subset = ['Network-Flows', 'Packet-Fields', 'Payload-Bytes'] # or 'all' for all subsets
files = [3, 5, 10] # or 'all' for all files
data = Dataset(dataset=dataset, subset=subset, files=files)
data.download()
```
The directory structure after downloading files:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
└───Payload-Bytes
├───Payload_Bytes_File_3.parquet
├───Payload_Bytes_File_5.parquet
└───Payload_Bytes_File_10.parquet
```
You can then load the parquet files using pandas:
```python
import pandas as pd
df = pd.read_parquet('UNSW-NB15/Packet-Fields/Packet_Fields_File_10.parquet')
```
## Merging Subsets
The `merge()` method allows you to merge all data of each packet across all subsets, providing both flow-level and packet-level information in a single file.
```python
data.merge()
```
By default, the merge method uses the details specified when instantiating the `Dataset` class. You can also pass `subset` (a list of subsets) and `files` (a list of files) to choose what to merge.
The directory structure after merging files:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
├───Payload-Bytes
│ ├───Payload_Bytes_File_3.parquet
│ ├───Payload_Bytes_File_5.parquet
│ └───Payload_Bytes_File_10.parquet
│
└───Network-Flows+Packet-Fields+Payload-Bytes
├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
└───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
```
## Extracting Bytes
The Packet-Bytes and Payload-Bytes subsets contain the first 1500-1600 bytes. To retrieve all bytes (up to 65535 bytes) from the Packet-Bytes and Payload-Bytes subsets, use the `bytes()` method. This method requires files from the Packet-Fields subset to operate. You can specify how many bytes to extract by passing the `max_bytes` parameter.
```python
data.bytes(payload=True, max_bytes=2500)
```
Use `packet=True` to extract packet bytes. You can also pass `files` (a list of files) to select which files to process.
The directory structure after extracting bytes:
```
UNSW-NB15
│
├───Network-Flows
│ └───UNSW_Flow.parquet
│
├───Packet-Fields
│ ├───Packet_Fields_File_3.parquet
│ ├───Packet_Fields_File_5.parquet
│ └───Packet_Fields_File_10.parquet
│
├───Payload-Bytes
│ ├───Payload_Bytes_File_3.parquet
│ ├───Payload_Bytes_File_5.parquet
│ └───Payload_Bytes_File_10.parquet
│
├───Network-Flows+Packet-Fields+Payload-Bytes
│ ├───Network_Flows+Packet_Fields+Payload_Bytes_File_3.parquet
│ ├───Network_Flows+Packet_Fields+Payload_Bytes_File_5.parquet
│ └───Network_Flows+Packet_Fields+Payload_Bytes_File_10.parquet
│
└───Payload-Bytes-2500
├───Payload_Bytes_File_3.parquet
├───Payload_Bytes_File_5.parquet
└───Payload_Bytes_File_10.parquet
```
## Reading the Datasets
The `read()` method allows you to read files using Hugging Face's `load_dataset` method, one subset at a time. The dataset and files parameters are optional if the same details are used to instantiate the `Dataset` class.
```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1,2])
```
The `read()` method returns a dataset that you can convert to a pandas dataframe or save to a CSV, parquet, or any other desired file format:
```python
df = dataset.to_pandas()
dataset.to_csv('file_path_to_save.csv')
dataset.to_parquet('file_path_to_save.parquet')
```
For scenarios where you want to process one packet at a time, you can use the `stream=True` parameter:
```python
dataset = data.read(dataset='UNSW-NB15', subset='Packet-Fields', files=[1,2], stream=True)
print(next(iter(dataset)))
```
## Notes
The size of these datasets is large, and depending on the subset(s) selected and the number of bytes extracted, the operations can be resource-intensive. Therefore, it's recommended to ensure you have sufficient disk space and RAM when using this package. | 7,422 | [
[
-0.03741455078125,
-0.052276611328125,
-0.006866455078125,
0.04571533203125,
-0.00701141357421875,
-0.007965087890625,
0.009918212890625,
-0.02398681640625,
0.048675537109375,
0.050567626953125,
-0.024139404296875,
-0.0250701904296875,
-0.0347900390625,
0.02... |
kowndinya23/wikipedia-attribution-corpus | 2023-07-24T07:53:13.000Z | [
"region:us"
] | kowndinya23 | null | null | 0 | 19 | 2023-07-24T07:45:34 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21505788594
num_examples: 39441096
download_size: 10408148033
dataset_size: 21505788594
---
# Dataset Card for "wikipedia-attribution-corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 417 | [
[
-0.046295166015625,
-0.01529693603515625,
0.0164794921875,
0.01093292236328125,
-0.00994873046875,
-0.004398345947265625,
-0.0033931732177734375,
-0.014373779296875,
0.0633544921875,
0.0243682861328125,
-0.0478515625,
-0.058868408203125,
-0.037841796875,
-0.... |
tiwes/aa_de_ss | 2023-08-02T11:23:25.000Z | [
"region:us"
] | tiwes | null | null | 0 | 19 | 2023-08-02T11:22:56 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
P1ayer-1/college-texts-annas-archive-v1 | 2023-08-06T19:34:14.000Z | [
"region:us"
] | P1ayer-1 | null | null | 0 | 19 | 2023-08-06T19:34:07 | ---
dataset_info:
features:
- name: o_syllabus_id
dtype: int64
- name: zlibrary_id
dtype: int64
- name: date_added
dtype: string
- name: date_modified
dtype: string
- name: extension
dtype: string
- name: filesize
dtype: float64
- name: filesize_reported
dtype: int64
- name: md5
dtype: string
- name: md5_reported
dtype: string
- name: title
dtype: string
- name: author
dtype: string
- name: publisher
dtype: string
- name: language
dtype: string
- name: series
dtype: string
- name: volume
dtype: string
- name: edition
dtype: string
- name: year
dtype: string
- name: pages
dtype: string
- name: description
dtype: string
- name: cover_url
dtype: string
- name: in_libgen
dtype: int64
- name: pilimi_torrent
dtype: string
- name: unavailable
dtype: int64
splits:
- name: train
num_bytes: 43480060
num_examples: 43206
download_size: 20519971
dataset_size: 43480060
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "college-texts-annas-archive-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,298 | [
[
-0.030029296875,
-0.0216217041015625,
0.0175323486328125,
-0.00048732757568359375,
-0.011993408203125,
-0.00777435302734375,
0.031280517578125,
-0.00926971435546875,
0.0726318359375,
0.0316162109375,
-0.0638427734375,
-0.0552978515625,
-0.05560302734375,
0.0... |
brando/debug0_af | 2023-08-10T23:10:04.000Z | [
"license:apache-2.0",
"region:us"
] | brando | null | null | 0 | 19 | 2023-08-09T01:46:02 | ---
license: apache-2.0
---
If you find this useful, please cite it:
```
@software{brando2021ultimateutils,
author={Brando Miranda},
title={Ultimate Utils - the Ultimate Utils library for Machine Learning and Artificial Intelligence},
url={https://github.com/brando90/ultimate-utils},
year={2021}
}
```
It's not supposed to be used by people yet. It's under the Apache license too.
[
-0.0039215087890625,
0.0091094970703125,
0.0284576416015625,
0.023101806640625,
-0.0167694091796875,
0.0026416778564453125,
0.01361083984375,
-0.0301971435546875,
0.017120361328125,
0.036468505859375,
-0.045196533203125,
-0.0421142578125,
-0.030364990234375,
... |
luisroque/instruct-python-llama2-500k | 2023-08-18T09:44:26.000Z | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | luisroque | null | null | 1 | 19 | 2023-08-17T17:59:11 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1046127202
num_examples: 501349
download_size: 530786217
dataset_size: 1046127202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: cc-by-sa-3.0
task_categories:
- text-generation
language:
- en
pretty_name: Instruct Python 500k
size_categories:
- 100K<n<1M
---
# Fine-tuning Instruct Llama2 Stack Overflow Python Q&A
## Transformed Dataset
### Objective
The transformed dataset is designed for fine-tuning LLMs to improve Python coding assistance by focusing on high-quality content from Stack Overflow. It has around 500k instructions.
### Structure
- **Question-Answer Pairing**: Questions and answers are paired using the `ParentId` linkage.
- **Quality Focus**: Only top-rated answers for each question are retained.
- **HTML Tag Removal**: All HTML tags in the content are removed.
- **Combined Question Field**: Each question's title and body are merged.
- **Filtering**: Entries with negative scores or those not containing Python code structures are excluded.
Final columns:
- `score_question`
- `score_answer`
- `question`
- `answer`
### Llama2 Transformation
The dataset has been transformed to match the Llama2 prompt structure, which is relevant for the model's fine-tuning. The format is as follows:
`<s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST]`
Where:
- `system_prompt` gives context or instructions to the model.
- `user_message` is the user's query following the system prompt, expecting a particular response from the model.
This structure ensures the training aligns with Llama2's expectations, optimizing the fine-tuning quality.
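As an illustrative sketch (not necessarily the exact script used to build this dataset), a question-answer pair could be rendered into this template as follows; the helper name and the example strings are hypothetical:
```python
def to_llama2_prompt(system_prompt: str, user_message: str, response: str) -> str:
    # Render one training example in the template shown above; the model's
    # expected answer is appended after [/INST].
    return (
        f"<s>[INST] <<SYS>> {system_prompt} <</SYS>> "
        f"{user_message} [/INST] {response} </s>"
    )

print(to_llama2_prompt(
    "You are a helpful Python coding assistant.",
    "How do I reverse a list in Python?",
    "Use my_list[::-1] for a reversed copy, or my_list.reverse() in place.",
))
```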
## Original Dataset
The dataset contains questions and answers from Stack Overflow with the `python` tag, covering the period from August 2, 2008, to October 19, 2016.
## License
All contributions are under the [CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). Attribution is required. The original dataset was posted [here](https://www.kaggle.com/datasets/stackoverflow/pythonquestions).
Keep in touch: [LinkedIn](https://www.linkedin.com/in/luisbrasroque/) | 2,235 | [
[
-0.021209716796875,
-0.055572509765625,
0.02099609375,
0.01474761962890625,
-0.01371002197265625,
-0.00997161865234375,
-0.011566162109375,
-0.024993896484375,
-0.00817108154296875,
0.045257568359375,
-0.061920166015625,
-0.039093017578125,
-0.03350830078125,
... |
ProgramComputer/voxceleb | 2023-09-16T08:50:12.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_categories:image-classification",
"task_categories:video-classification",
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"arxiv:1706.08612",
"doi:10.57967/hf/0999",
"region:us"
] | ProgramComputer | null | null | 4 | 19 | 2023-08-17T18:57:37 | ---
task_categories:
- automatic-speech-recognition
- audio-classification
- image-classification
- video-classification
size_categories:
- 100K<n<1M
license: cc-by-4.0
---
## Dataset Description
- **Homepage:** [VoxCeleb](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/)
# Multipart Zips
These zips have already been joined for convenience, but note that the following files are *NOT* part of the original datasets:
vox2_mp4_1.zip - vox2_mp4_6.zip
vox2_aac_1.zip - vox2_aac_2.zip
# Joining Zips
```
cat vox1_dev* > vox1_dev_wav.zip
```
```
cat vox2_dev_aac* > vox2_aac.zip
```
```
cat vox2_dev_mp4* > vox2_mp4.zip
```
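Optionally, test a joined archive for integrity before extracting (assuming the standard `unzip` tool is available):
```
unzip -t vox1_dev_wav.zip
```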
### Citation Information
```
@article{Nagrani19,
author = "Arsha Nagrani and Joon~Son Chung and Weidi Xie and Andrew Zisserman",
title = "Voxceleb: Large-scale speaker verification in the wild",
journal = "Computer Science and Language",
year = "2019",
publisher = "Elsevier",
}
@inProceedings{Chung18b,
author = "Chung, J.~S. and Nagrani, A. and Zisserman, A.",
title = "VoxCeleb2: Deep Speaker Recognition",
booktitle = "INTERSPEECH",
year = "2018",
}
@article{DBLP:journals/corr/NagraniCZ17,
author = {Arsha Nagrani and
Joon Son Chung and
Andrew Zisserman},
title = {VoxCeleb: a large-scale speaker identification dataset},
journal = {CoRR},
volume = {abs/1706.08612},
year = {2017},
url = {http://arxiv.org/abs/1706.08612},
eprinttype = {arXiv},
eprint = {1706.08612},
timestamp = {Mon, 13 Aug 2018 16:47:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/NagraniCZ17.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@ProgramComputer](https://github.com/ProgramComputer) for adding this dataset. | 1,823 | [
[
-0.04571533203125,
-0.039337158203125,
0.00817108154296875,
0.01232147216796875,
-0.006626129150390625,
0.00200653076171875,
-0.0364990234375,
-0.0159454345703125,
0.01409912109375,
0.034515380859375,
-0.0450439453125,
-0.04840087890625,
-0.0268402099609375,
... |
kelSidenna/softwareReq-data | 2023-08-18T04:06:40.000Z | [
"region:us"
] | kelSidenna | null | null | 2 | 19 | 2023-08-18T04:00:39 | Entry not found | 15 | [
[
-0.0213775634765625,
-0.014984130859375,
0.05718994140625,
0.0288543701171875,
-0.0350341796875,
0.046478271484375,
0.052520751953125,
0.005062103271484375,
0.051361083984375,
0.016998291015625,
-0.0521240234375,
-0.01496124267578125,
-0.0604248046875,
0.037... |
thomasavare/italian-dataset-deepl | 2023-08-21T10:48:24.000Z | [
"language:en",
"language:it",
"region:us"
] | thomasavare | null | null | 0 | 19 | 2023-08-19T15:12:08 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: english
dtype: string
- name: italian
dtype: string
- name: Class
dtype: string
- name: Class_index
dtype: float64
splits:
- name: train
num_bytes: 62294
num_examples: 500
download_size: 22849
dataset_size: 62294
language:
- en
- it
---
# Dataset Card for "italian-dataset-deepl"
English-to-Italian translation was performed with the DeepL API.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 633 | [
[
-0.03424072265625,
-0.03460693359375,
0.0211334228515625,
0.01654052734375,
-0.0195465087890625,
0.01129913330078125,
-0.016693115234375,
-0.03778076171875,
0.044281005859375,
0.03228759765625,
-0.065185546875,
-0.0797119140625,
-0.037200927734375,
0.0013494... |
Fsoft-AIC/the-vault-class | 2023-10-11T16:42:43.000Z | [
"task_categories:text-generation",
"multilinguality:multiprogramming languages",
"language:code",
"language:en",
"license:mit",
"arxiv:2305.06156",
"region:us"
] | Fsoft-AIC | The Vault is a multilingual code-text dataset with over 40 million pairs covering 10 popular programming languages.
It is the largest corpus containing parallel code-text data. By building upon The Stack, a massive raw code sample collection,
the Vault offers a comprehensive and clean resource for advancing research in code understanding and generation. It provides a
high-quality dataset that includes code-text pairs at multiple levels, such as class and inline-level, in addition to the function level.
The Vault can serve many purposes at multiple levels. | @article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
} | 1 | 19 | 2023-08-22T07:11:11 | ---
language:
- code
- en
multilinguality:
- multiprogramming languages
task_categories:
- text-generation
license: mit
dataset_info:
features:
- name: identifier
dtype: string
- name: repo
dtype: string
- name: path
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
dtype: string
- name: original_docstring
dtype: string
- name: comment
dtype: string
- name: docstring_tokens
dtype: string
- name: docstring
dtype: string
- name: original_string
dtype: string
pretty_name: The Vault Function
viewer: true
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Statistics](#dataset-statistics)
- [Usage](#usage)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault)
- **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156)
- **Contact:** support.ailab@fpt.com
- **Website:** https://www.fpt-aicenter.com/ai-residency/
<p align="center">
<img src="https://raw.githubusercontent.com/FSoft-AI4Code/TheVault/main/assets/the-vault-4-logo-png.png" width="300px" alt="logo">
</p>
<div align="center">
# The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
</div>
## Dataset Summary
The Vault dataset is a comprehensive, large-scale, multilingual parallel dataset that features high-quality code-text pairs derived from The Stack, the largest permissively-licensed source code dataset.
We provide The Vault which contains code snippets from 10 popular programming languages such as Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. This dataset provides multiple code-snippet levels, metadata, and 11 docstring styles for enhanced usability and versatility.
## Supported Tasks
The Vault can be used for pretraining LLMs or for downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed using The Vault, such as *code summarization*, *text-to-code generation*, and *code search*.
## Languages
The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust`
*Note: C and Go are not included in this repo because these languages do not have traditional classes.*
## Dataset Structure
### Data Instances
```
{
"hexsha": "78b961a6673ec1e12f8d95c33ef081f75561a87c",
"repo": "AIS-Bonn/sl-cutscenes",
"path": "sl_cutscenes/object_models.py",
"license": [
"MIT"
],
"language": "Python",
"identifier": "MeshLoader",
"original_docstring": "\n Class to load the meshes for the objects in a scene.\n ",
"docstring": "Class to load the meshes for the objects in a scene.",
"docstring_tokens": [
"Class",
"to",
"load",
"the",
"meshes",
"for",
"the",
"objects",
"in",
"a",
"scene",
"."
],
"code": "class MeshLoader:\n \"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"\n\n def __init__(self):\n \"\"\"Module initializer\"\"\"\n self.base_dir = CONSTANTS.MESH_BASE_DIR\n self.text_dir = CONSTANTS.TEXT_BASE_DIR\n self.reset()\n\n def reset(self):\n self.loaded_meshes = []\n\n def get_meshes(self):\n \"\"\" \"\"\"\n extract_singular = lambda x: x[0] if len(x) == 1 else x\n return [extract_singular(item) for item in self.loaded_meshes]\n\n def load_meshes(self, obj_info: List[object_info.ObjectInfo], **kwargs):\n \"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"\n paths = []\n for obj in obj_info:\n path = self.text_dir if obj.name.endswith(\"_floor\") or obj.name.endswith(\"_wall\") else self.base_dir\n paths.append((path / obj.mesh_fp).resolve())\n scales = [obj.scale for obj in obj_info]\n class_ids = [obj.class_id for obj in obj_info]\n mod_scales = kwargs.get(\"mod_scale\", [1.0] * len(scales))\n scales = [s * ms for (s, ms) in zip(scales, mod_scales)]\n flags = [mesh_flags(obj) for obj in obj_info]\n meshes = sl.Mesh.load_threaded(filenames=paths, flags=flags)\n\n # Setup class IDs\n for _, (mesh, scale, class_id) in enumerate(zip(meshes, scales, class_ids)):\n pt = torch.eye(4)\n pt[:3, :3] *= scale\n mesh.pretransform = pt\n mesh.class_index = class_id\n\n info_mesh_tuples = list(zip(obj_info, meshes))\n self.loaded_meshes.append(info_mesh_tuples)",
"code_tokens": [
"class",
"MeshLoader",
":",
"def",
"__init__",
"(",
"self",
")",
":",
"\"\"\"Module initializer\"\"\"",
"self",
".",
"base_dir",
"=",
"CONSTANTS",
".",
"MESH_BASE_DIR",
"self",
".",
"text_dir",
"=",
"CONSTANTS",
".",
"TEXT_BASE_DIR",
"self",
".",
"reset",
"(",
")",
"def",
"reset",
"(",
"self",
")",
":",
"self",
".",
"loaded_meshes",
"=",
"[",
"]",
"def",
"get_meshes",
"(",
"self",
")",
":",
"\"\"\" \"\"\"",
"extract_singular",
"=",
"lambda",
"x",
":",
"x",
"[",
"0",
"]",
"if",
"len",
"(",
"x",
")",
"==",
"1",
"else",
"x",
"return",
"[",
"extract_singular",
"(",
"item",
")",
"for",
"item",
"in",
"self",
".",
"loaded_meshes",
"]",
"def",
"load_meshes",
"(",
"self",
",",
"obj_info",
":",
"List",
"[",
"object_info",
".",
"ObjectInfo",
"]",
",",
"**",
"kwargs",
")",
":",
"\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"",
"paths",
"=",
"[",
"]",
"for",
"obj",
"in",
"obj_info",
":",
"path",
"=",
"self",
".",
"text_dir",
"if",
"obj",
".",
"name",
".",
"endswith",
"(",
"\"_floor\"",
")",
"or",
"obj",
".",
"name",
".",
"endswith",
"(",
"\"_wall\"",
")",
"else",
"self",
".",
"base_dir",
"paths",
".",
"append",
"(",
"(",
"path",
"/",
"obj",
".",
"mesh_fp",
")",
".",
"resolve",
"(",
")",
")",
"scales",
"=",
"[",
"obj",
".",
"scale",
"for",
"obj",
"in",
"obj_info",
"]",
"class_ids",
"=",
"[",
"obj",
".",
"class_id",
"for",
"obj",
"in",
"obj_info",
"]",
"mod_scales",
"=",
"kwargs",
".",
"get",
"(",
"\"mod_scale\"",
",",
"[",
"1.0",
"]",
"*",
"len",
"(",
"scales",
")",
")",
"scales",
"=",
"[",
"s",
"*",
"ms",
"for",
"(",
"s",
",",
"ms",
")",
"in",
"zip",
"(",
"scales",
",",
"mod_scales",
")",
"]",
"flags",
"=",
"[",
"mesh_flags",
"(",
"obj",
")",
"for",
"obj",
"in",
"obj_info",
"]",
"meshes",
"=",
"sl",
".",
"Mesh",
".",
"load_threaded",
"(",
"filenames",
"=",
"paths",
",",
"flags",
"=",
"flags",
")",
"for",
"_",
",",
"(",
"mesh",
",",
"scale",
",",
"class_id",
")",
"in",
"enumerate",
"(",
"zip",
"(",
"meshes",
",",
"scales",
",",
"class_ids",
")",
")",
":",
"pt",
"=",
"torch",
".",
"eye",
"(",
"4",
")",
"pt",
"[",
":",
"3",
",",
":",
"3",
"]",
"*=",
"scale",
"mesh",
".",
"pretransform",
"=",
"pt",
"mesh",
".",
"class_index",
"=",
"class_id",
"info_mesh_tuples",
"=",
"list",
"(",
"zip",
"(",
"obj_info",
",",
"meshes",
")",
")",
"self",
".",
"loaded_meshes",
".",
"append",
"(",
"info_mesh_tuples",
")"
],
"short_docstring": "Class to load the meshes for the objects in a scene.",
"short_docstring_tokens": [
"Class",
"to",
"load",
"the",
"meshes",
"for",
"the",
"objects",
"in",
"a",
"scene",
"."
],
"comment": [
"\"\"\"\n Class to load the meshes for the objects in a scene.\n \"\"\"",
"\"\"\"Module initializer\"\"\"",
"\"\"\" \"\"\"",
"\"\"\"\n Loads the meshes whose information is given in parameter 'obj_info.\n Each call of this method APPENDS a list to the loaded_meshes attribute.\n :param obj_info: The object information of the meshes to be loaded.\n :param kwargs: additional mesh modifiers such as scale, specified with a leading 'mod_'\n \"\"\"",
"# Setup class IDs"
],
"parameters": [],
"docstring_params": {
"returns": [],
"raises": [],
"params": [],
"outlier_params": [],
"others": []
}
}
```
### Data Fields
Data fields for the class level:
- **hexsha** (string): the unique git hash of file
- **repo** (string): the owner/repo
- **path** (string): the full path to the original file
- **license** (list): licenses in the repo
- **language** (string): the programming language
- **identifier** (string): the class name
- **original_string** (string): original version of function/class node
- **original_docstring** (string): the raw string before tokenization or parsing
- **code** (string): the part of the original that is code
- **code_tokens** (list): tokenized version of `code`
- **short_docstring** (string): short, brief summarization (first line of the docstring)
- **short_docstring_tokens** (list): tokenized version of `short_docstring`
- **docstring** (string): the top-level comment or docstring (docstring version without parameter docs, return, exception fields, etc.)
- **docstring_tokens** (list): tokenized version of docstring
- **comment** (list): list of comments (line) inside the function/class
- **parameters** (list): List of parameters and its type (type can be None)
- **docstring_params** (dict): Dictionary of the parsed information from docstring
See [here](https://github.com/FSoft-AI4Code/TheVault/blob/main/data/README.md) for more details and examples.
### Data Splits
In this repo, the class-level data is not split and is contained only in a train set.
## Dataset Statistics
|Language | Number of samples |
|:-----------|------------------------:|
|Python | 422,187 |
|Java | 4,872,485 |
|JavaScript | 291,479 |
|PHP | 1,173,916 |
|C# | 1,437,800 |
|C++ | 174,370 |
|Ruby | 353,859 |
|Rust | 93,311 |
|C | - |
|Go | - |
|TOTAL | **9,121,300** |
## Usage
You can load The Vault dataset using the `datasets` library (`pip install datasets`):
```python
from datasets import load_dataset
# Load full class level dataset
dataset = load_dataset("Fsoft-AIC/the-vault-class")
# specific language (e.g. Python)
dataset = load_dataset("Fsoft-AIC/the-vault-class", languages=['Python'])
# dataset streaming
data = load_dataset("Fsoft-AIC/the-vault-class", streaming= True)
for sample in iter(data['train']):
print(sample)
```
A backup of the dataset can be downloaded from Azure blob storage. See [Download The Vault from Azure blob storage](https://github.com/FSoft-AI4Code/TheVault#download-via-link).
## Additional information
### Licensing Information
MIT License
### Citation Information
```
@article{manh2023vault,
title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation},
author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ},
journal={arXiv preprint arXiv:2305.06156},
year={2023}
}
```
### Contributions
This dataset is developed by [FSOFT AI4Code team](https://github.com/FSoft-AI4Code). | 14,859 | [
[
-0.0287933349609375,
-0.052490234375,
0.01456451416015625,
0.029083251953125,
0.004016876220703125,
0.00543212890625,
0.008544921875,
-0.0177154541015625,
-0.0006165504455566406,
0.035888671875,
-0.0384521484375,
-0.060302734375,
-0.03271484375,
0.0051765441... |
Cubpaw/voxelgym_5c_critic_42x42_300000 | 2023-09-01T13:46:58.000Z | [
"region:us"
] | Cubpaw | null | null | 0 | 19 | 2023-09-01T13:42:11 | ---
dataset_info:
features:
- name: image
dtype: image
- name: astar_path
dtype: image
- name: pred_path
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1814909280.0
num_examples: 240000
- name: validation
num_bytes: 453592740.0
num_examples: 60000
download_size: 261367246
dataset_size: 2268502020.0
---
# Dataset Card for "voxelgym_5c_critic_42x42_300000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 555 | [
[
-0.0650634765625,
-0.00457000732421875,
0.0213165283203125,
0.0234375,
-0.007297515869140625,
-0.006496429443359375,
0.0017290115356445312,
0.006366729736328125,
0.038482666015625,
0.036529541015625,
-0.046630859375,
-0.05810546875,
-0.02130126953125,
-0.006... |
boapps/kmdb_classification | 2023-09-21T11:43:34.000Z | [
"region:us"
] | boapps | null | null | 0 | 19 | 2023-09-03T21:11:21 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
- name: title
dtype: string
- name: description
dtype: string
- name: keywords
sequence: string
- name: label
dtype: int64
- name: url
dtype: string
- name: date
dtype: string
- name: is_hand_annoted
dtype: bool
- name: score
dtype: float64
- name: title_score
dtype: float64
splits:
- name: train
num_bytes: 187493981
num_examples: 45683
- name: test
num_bytes: 13542701
num_examples: 3605
- name: validation
num_bytes: 25309037
num_examples: 6579
download_size: 139938458
dataset_size: 226345719
---
# Dataset Card for "kmdb_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,002 | [
[
-0.0543212890625,
-0.00926971435546875,
0.01898193359375,
-0.00014698505401611328,
-0.0209197998046875,
0.001911163330078125,
0.0188140869140625,
-0.0169525146484375,
0.048248291015625,
0.031707763671875,
-0.05572509765625,
-0.07415771484375,
-0.044586181640625,... |
C-MTEB/T2Reranking_en2zh | 2023-09-09T16:11:54.000Z | [
"region:us"
] | C-MTEB | null | null | 1 | 19 | 2023-09-09T16:11:24 | ---
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
dataset_info:
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: dev
num_bytes: 206929387
num_examples: 6129
download_size: 120405829
dataset_size: 206929387
---
# Dataset Card for "T2Reranking_en2zh"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 526 | [
[
-0.01187896728515625,
-0.0136260986328125,
0.0091400146484375,
0.0287628173828125,
-0.023101806640625,
-0.000012755393981933594,
0.0180816650390625,
-0.0162353515625,
0.04486083984375,
0.0302276611328125,
-0.05609130859375,
-0.048492431640625,
-0.034271240234375... |
prakhargupta94/recipe_llama | 2023-09-09T19:28:16.000Z | [
"region:us"
] | prakhargupta94 | null | null | 0 | 19 | 2023-09-09T19:25:16 | Entry not found | 15 | [
[
-0.02142333984375,
-0.014984130859375,
0.057220458984375,
0.0288238525390625,
-0.03509521484375,
0.04656982421875,
0.052520751953125,
0.00506591796875,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060455322265625,
0.03793334... |
Minami-su/roleplay_multiturn_chat_1k_zh_v0.1 | 2023-10-03T09:39:45.000Z | [
"language:zh",
"roleplay",
"multiturn_chat",
"region:us"
] | Minami-su | null | null | 7 | 19 | 2023-09-13T01:54:10 | ---
language:
- zh
tags:
- roleplay
- multiturn_chat
---
## Introduction
Multi-turn roleplay conversation data generated via self-instruct, with about 1k distinct persona profiles and conversations.
## Known issues:
1. The data is generated by the model itself, so the model's own values seep into the roleplay, making it less realistic and less accurate.
## About the author:
I am the developer of Xiaoyu (小雨), an emotional and persona AI. If you are interested in Xiaoyu, your support is welcome; she is currently streaming on Bilibili, and I am still continuously improving her.
url:https://live.bilibili.com/27357528?broadcast_type=0&is_room_feed=1&spm_id_from=333.999.live_users_card.0.click&live_from=86001
## Note:
Please credit the source when using this dataset.
## Citation
```
@misc{selfinstruct,
title={Self-Instruct: Aligning Language Model with Self Generated Instructions},
author={Wang, Yizhong and Kordi, Yeganeh and Mishra, Swaroop and Liu, Alisa and Smith, Noah A. and Khashabi, Daniel and Hajishirzi, Hannaneh},
journal={arXiv preprint arXiv:2212.10560},
year={2022}
}
```
| 727 | [
[
-0.002483367919921875,
-0.0592041015625,
-0.0083770751953125,
0.052520751953125,
-0.0173797607421875,
-0.025177001953125,
0.006893157958984375,
-0.0222930908203125,
0.0020294189453125,
0.033660888671875,
-0.0478515625,
-0.025421142578125,
-0.037017822265625,
... |
lchakkei/OpenOrca-Traditional-Chinese | 2023-10-11T08:29:08.000Z | [
"task_categories:conversational",
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extra... | lchakkei | null | null | 2 | 19 | 2023-09-16T03:15:44 | ---
language:
- zh
license: mit
size_categories:
- 10M<n<100M
task_categories:
- conversational
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: OpenOrca-Chinese
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: id
dtype: string
- name: system_prompt
dtype: string
- name: question
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 6477736021
num_examples: 4233915
download_size: 4104476393
dataset_size: 6477736021
---
<p><h1>🐋 The OpenOrca-Chinese Dataset! 🐋</h1></p>
Thanks to the release of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, which brings a valuable resource to NLP researchers and developers!
This is a Traditional Chinese translation of the [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) dataset, produced with the Google Translate engine, in the hope of making a small contribution to Chinese LLM research.
<br/>
# Dataset Summary
The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688).
Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.
It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.
The data is primarily used for training and evaluation in the field of natural language processing.
<a name="dataset-structure"></a>
# Dataset Structure
<a name="data-instances"></a>
## Data Instances
A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.
The response is then entered into the response field.
<a name="data-fields"></a>
## Data Fields
The fields are:
1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.
2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint
3) 'question', representing a question entry as provided by the FLAN Collection
4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.
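As a minimal usage sketch (assuming the standard `datasets` loader and the four fields listed above; streaming avoids downloading the full ~4 GB archive up front):
```python
from datasets import load_dataset

ds = load_dataset("lchakkei/OpenOrca-Traditional-Chinese", split="train", streaming=True)

# Print the four fields of the first example.
sample = next(iter(ds))
for field in ("id", "system_prompt", "question", "response"):
    print(field, ":", sample[field])
```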
| 2,342 | [
[
-0.044708251953125,
-0.05999755859375,
0.00553131103515625,
0.008880615234375,
-0.00838470458984375,
-0.0222625732421875,
-0.0184783935546875,
-0.05621337890625,
0.037384033203125,
0.0462646484375,
-0.0298004150390625,
-0.04815673828125,
-0.0249481201171875,
... |
deven367/babylm-100M-children-stories | 2023-09-16T05:17:25.000Z | [
"region:us"
] | deven367 | null | null | 0 | 19 | 2023-09-16T05:17:12 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 17676869
num_examples: 76758
- name: valid
num_bytes: 1425137
num_examples: 5996
- name: test
num_bytes: 1804421
num_examples: 7959
download_size: 12749002
dataset_size: 20906427
---
# Dataset Card for "babylm-100M-children-stories"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 661 | [
[
-0.038665771484375,
-0.014068603515625,
-0.001781463623046875,
0.0277557373046875,
-0.019287109375,
0.0078582763671875,
0.0222320556640625,
-0.017822265625,
0.043243408203125,
0.032501220703125,
-0.07586669921875,
-0.043914794921875,
-0.03564453125,
-0.02955... |
jmelsbach/real-estate-instructions-small | 2023-09-17T17:57:59.000Z | [
"region:us"
] | jmelsbach | null | null | 0 | 19 | 2023-09-17T17:55:53 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 951120
num_examples: 500
download_size: 469994
dataset_size: 951120
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "real-estate-instructions-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 529 | [
[
-0.03582763671875,
-0.040985107421875,
0.0222930908203125,
0.006427764892578125,
-0.009918212890625,
-0.02276611328125,
-0.00275421142578125,
0.01056671142578125,
0.048858642578125,
0.042724609375,
-0.049102783203125,
-0.057098388671875,
-0.0135498046875,
-0... |
nafi-zaman/celloscope_bangla_ner_dataset | 2023-10-15T07:56:40.000Z | [
"region:us"
] | nafi-zaman | null | null | 0 | 19 | 2023-09-19T09:24:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence: int64
splits:
- name: train
num_bytes: 44407902
num_examples: 255364
- name: validation
num_bytes: 5565860
num_examples: 31920
- name: test
num_bytes: 5557975
num_examples: 31921
download_size: 8233066
dataset_size: 55531737
---
# Dataset Card for "celloscope_bangla_ner_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 723 | [
[
-0.041259765625,
-0.01012420654296875,
0.003589630126953125,
0.0160064697265625,
-0.023895263671875,
-0.00780487060546875,
0.0296173095703125,
-0.0126495361328125,
0.04522705078125,
0.036834716796875,
-0.0360107421875,
-0.06475830078125,
-0.0435791015625,
-0... |
AnhTong/truyenfull | 2023-09-23T10:26:20.000Z | [
"region:us"
] | AnhTong | null | null | 0 | 19 | 2023-09-23T06:46:06 | ---
dataset_info:
features:
- name: title
dtype: string
- name: link
dtype: string
- name: content
dtype: string
splits:
- name: ds_1
num_bytes: 688765404
num_examples: 47546
- name: ds_2
num_bytes: 686452540
num_examples: 49325
- name: ds_3
num_bytes: 662112505
num_examples: 46766
- name: ds_4
num_bytes: 631547999
num_examples: 47222
- name: ds_5
num_bytes: 645861526
num_examples: 49358
- name: ds_6
num_bytes: 669993661
num_examples: 49112
- name: ds_7
num_bytes: 662999345
num_examples: 48904
- name: ds_8
num_bytes: 713727150
num_examples: 49245
- name: ds_9
num_bytes: 651720408
num_examples: 48605
- name: ds_10
num_bytes: 966575566
num_examples: 48809
- name: ds_12
num_bytes: 762515180
num_examples: 49725
- name: ds_13
num_bytes: 686909655
num_examples: 48973
- name: ds_14
num_bytes: 610358320
num_examples: 48564
- name: ds_15
num_bytes: 616740599
num_examples: 49389
download_size: 4862424797
dataset_size: 9656279858
---
# Dataset Card for "truyenfull"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,267 | [
[
-0.0341796875,
-0.022613525390625,
0.003971099853515625,
0.01873779296875,
-0.029022216796875,
0.0034923553466796875,
0.02001953125,
-0.0247802734375,
0.06219482421875,
0.03546142578125,
-0.06304931640625,
-0.06378173828125,
-0.043548583984375,
-0.0314331054... |
MLNTeam-Unical/NFT-70M_text | 2023-09-28T15:33:32.000Z | [
"task_categories:time-series-forecasting",
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:zero-shot-classification",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_categories:image-c... | MLNTeam-Unical | null | null | 0 | 19 | 2023-09-27T06:25:55 | ---
dataset_info:
features:
- name: id
dtype: string
- name: emb
sequence: float32
splits:
- name: train
num_bytes: 98031916170
num_examples: 31749685
download_size: 9751089154
dataset_size: 98031916170
size_categories:
- 10M<n<100M
license: cc-by-nc-4.0
task_categories:
- time-series-forecasting
- text-classification
- feature-extraction
- text-generation
- zero-shot-classification
- text2text-generation
- sentence-similarity
- image-classification
- image-to-text
- text-to-image
- text-retrieval
language:
- en
tags:
- Non-fungible Tokens
- Crypto
- Web3
- Art
- Multimodal Learning
pretty_name: NFT-70M_text
---
# Dataset Card for "NFT-70M_text"
## Dataset summary
The *NFT-70M_text* dataset is a companion for our released [**NFT-70M_transactions**](https://huggingface.co/datasets/MLNTeam-Unical/NFT-70M_transactions) dataset,
which is the largest and most up-to-date collection of Non-Fungible Tokens (NFT) transactions between 2021 and 2023 sourced from [OpenSea](https://opensea.io).
As we also reported in the "Data anonymization" section of the dataset card of *NFT-70M_transactions*,
the textual contents associated with the NFT data were replaced by identifiers that point to numerical vectors, i.e., encrypted
representations (embeddings) of the text contents, obtained via the [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) neural network model.
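A minimal access sketch (assuming only the `id` and `emb` columns declared in the dataset info above; streaming avoids downloading the full archive at once):
```python
from datasets import load_dataset

ds = load_dataset("MLNTeam-Unical/NFT-70M_text", split="train", streaming=True)

# Each row maps a content identifier to its text embedding.
row = next(iter(ds))
print(row["id"], len(row["emb"]))
```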
## Ethical use of data and informed consent
This data repository is made available for research and informational purposes only.
Any finding that might be drawn from the data provided within this repository should be intended to support decision-making regarding actions made on NFTs, and not to replace the human specialists.
*The authors are not responsible for any issues related to trading failures based on the data provided within this repository.*
## Terms of Usage
Please cite the following papers in any research product whose findings are based on the data provided within this repository:
- L. La Cava, D. Costa, A. Tagarelli: SONAR: Web-based Tool for Multimodal Exploration of Non-Fungible Token Inspiration Networks. In: Proc. ACM SIGIR 2023. Taipei, Taiwan, July 23-27 2023. DOI: https://doi.org/10.1145/3539618.3591821
- L. La Cava, D. Costa, A. Tagarelli: Visually Wired NFTs: Exploring the Role of Inspiration in Non-Fungible Tokens. CoRR abs/2303.17031 (2023). DOI: https://doi.org/10.48550/arXiv.2303.17031
- D. Costa, L. La Cava, A. Tagarelli: Show me your NFT and I tell you how it will perform: Multimodal representation learning for NFT selling price prediction. In: Proc. ACM WebConf 2023, pp. 1875-1885. Austin, TX, USA, 30 April 2023 – 4 May 2023. DOI: https://doi.org/10.1145/3543507.3583520
Data within this repository were fetched using the REST APIs provided by OpenSea. You should also acknowledge [OpenSea API]("https://docs.opensea.io/reference/api-overview).
## Liability statement
The authors hereby declare that they are not responsible for any harmful or objectionable content that may be contained within the data provided within this repository.
Users of the dataset are expected to exercise due diligence and responsibility when using the data, including but not limited to:
(i) Content Review: Users should review the dataset's contents carefully and assess its suitability for their intended purposes; (ii) Compliance: Users are responsible for ensuring that their use of the dataset complies with all applicable laws, regulations, and ethical standards;
(iii) Data Processing: Users may need to apply data preprocessing, filtering, or other techniques to remove or address any objectionable or harmful content as needed.
The authors of this dataset disclaim any liability for the accuracy, completeness, or suitability of the data and shall not be held responsible for any consequences resulting from the use or misuse of the dataset.
*By accessing and using this dataset, users acknowledge and accept this disclaimer.* | 4,028 | [
[
-0.0230712890625,
-0.04461669921875,
0.0171661376953125,
0.016510009765625,
-0.03662109375,
-0.01496124267578125,
-0.00788116455078125,
-0.051116943359375,
0.0372314453125,
0.0543212890625,
-0.051177978515625,
-0.04388427734375,
-0.039154052734375,
0.0231475... |
renumics/speech_commands_enrichment_only | 2023-09-28T12:25:09.000Z | [
"task_categories:audio-classification",
"task_ids:keyword-spotting",
"annotations_creators:other",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:extended|speech_commands",
"language:en",
"license:cc-by-4... | renumics | null | null | 0 | 19 | 2023-09-27T13:37:24 | ---
annotations_creators:
- other
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- extended|speech_commands
task_categories:
- audio-classification
task_ids:
- keyword-spotting
pretty_name: SpeechCommands
config_names:
- v0.01
- v0.02
tags:
- spotlight
- enriched
- renumics
- enhanced
- audio
- classification
- extended
dataset_info:
- config_name: enrichment_only
features:
- name: label_string
dtype: string
- name: probability
dtype: float64
- name: probability_vector
sequence: float32
- name: prediction
dtype: int64
- name: prediction_string
dtype: string
- name: embedding_reduced
sequence: float32
splits:
- name: train
num_bytes: 8763867
num_examples: 51093
- name: validation
num_bytes: 1165942
num_examples: 6799
- name: test
num_bytes: 528408
num_examples: 3081
download_size: 0
dataset_size: 10458217
- config_name: raw_and_enrichment_combined
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: label
dtype:
class_label:
names:
'0': 'yes'
'1': 'no'
'2': up
'3': down
'4': left
'5': right
'6': 'on'
'7': 'off'
'8': stop
'9': go
'10': zero
'11': one
'12': two
'13': three
'14': four
'15': five
'16': six
'17': seven
'18': eight
'19': nine
'20': bed
'21': bird
'22': cat
'23': dog
'24': happy
'25': house
'26': marvin
'27': sheila
'28': tree
'29': wow
'30': _silence_
- name: is_unknown
dtype: bool
- name: speaker_id
dtype: string
- name: utterance_id
dtype: int8
- name: logits
sequence: float64
- name: embedding
sequence: float32
- name: label_string
dtype: string
- name: probability
dtype: float64
- name: probability_vector
sequence: float32
- name: prediction
dtype: int64
- name: prediction_string
dtype: string
- name: embedding_reduced
sequence: float32
splits:
- name: train
num_bytes: 1803565876.375
num_examples: 51093
- name: validation
num_bytes: 240795605.125
num_examples: 6799
- name: test
num_bytes: 109673146.875
num_examples: 3081
download_size: 0
dataset_size: 2154034628.375
configs:
- config_name: enrichment_only
data_files:
- split: train
path: enrichment_only/train-*
- split: validation
path: enrichment_only/validation-*
- split: test
path: enrichment_only/test-*
- config_name: raw_and_enrichment_combined
data_files:
- split: train
path: raw_and_enrichment_combined/train-*
- split: validation
path: raw_and_enrichment_combined/validation-*
- split: test
path: raw_and_enrichment_combined/test-*
---
# Dataset Card for SpeechCommands
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=speech-commands-enrichment_only)
- **GitHub** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage** [tensorflow.org/datasets](https://www.tensorflow.org/datasets/catalog/speech_commands)
- **Paper:** [Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition](https://arxiv.org/pdf/1804.03209.pdf)
- **Leaderboard:** [More Information Needed]
### Dataset Summary
📊 [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases.
At [Renumics](https://renumics.com/?hf-dataset-card=speech-commands-enriched) we believe that classical benchmark datasets and competitions should be extended to reflect this development.
🔍 This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways:
1. Enable new researchers to quickly develop a profound understanding of the dataset.
2. Popularize data-centric AI principles and tooling in the ML community.
3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics.
📚 This dataset is an enriched version of the [SpeechCommands Dataset](https://huggingface.co/datasets/speech_commands).
### Explore the Dataset
There are two configurations of the dataset: **Enrichment only** provides the enrichments calculated by Renumics using the MIT AST transformer, while **raw_and_enrichment_combined** provides a concatenated dataset of the original speech commands and the enrichment.
The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool [Renumics Spotlight](https://github.com/Renumics/spotlight) enables that with just a few lines of code:
Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip):
```python
!pip install renumics-spotlight datasets[audio]
```
> **_Notice:_** On Linux, the non-Python dependency `libsndfile` must be installed manually. See [Datasets - Installation](https://huggingface.co/docs/datasets/installation#audio) for more information.
Load the dataset from huggingface in your notebook and start exploring with a simple view:
```python
import datasets
from renumics import spotlight
from renumics.spotlight.layouts import debug_classification
dataset = datasets.load_dataset("renumics/speech_commands_enrichment_only", "raw_and_enrichment_combined")
joined_dataset = datasets.concatenate_datasets([dataset["train"], dataset["validation"], dataset["test"]])
layout = debug_classification(label='label_string', prediction='prediction', embedding='embedding_reduced',
features=["label", "prediction", "probability"], inspect={'audio': spotlight.Audio})
dtypes = {
"audio": spotlight.Audio,
"embedding_reduced": spotlight.Embedding
}
spotlight.show(
joined_dataset,
dtype=dtypes,
layout= layout
)
```
You can use the UI to interactively configure the view on the data. Depending on the concrete task (e.g., model comparison, debugging, outlier detection), you might want to leverage different enrichments and metadata.
As a plug-and-play option, you can check out the Hugging Face space: [Huggingface Space for speech enrichment](https://huggingface.co/spaces/renumics/speech_commands_enrichment_space)
Alternatively, you can run the notebook `exploration.ipynb` locally.
### SpeechCommands Dataset
This is a set of one-second .wav audio files, each containing a single spoken
English word or background noise. These words are from a small set of commands, and are spoken by a
variety of different speakers. This data set is designed to help train simple
machine learning models. It is covered in more detail at [https://arxiv.org/abs/1804.03209](https://arxiv.org/abs/1804.03209).
Version 0.01 of the data set (configuration `"v0.01"`) was released on August 3rd 2017 and contains
64,727 audio files.
Version 0.02 of the data set (configuration `"v0.02"`) was released on April 11th 2018 and
contains 105,829 audio files.
### Supported Tasks and Leaderboards
* `keyword-spotting`: the dataset can be used to train and evaluate keyword
spotting systems. The task is to detect preregistered keywords by classifying utterances
into a predefined set of words. The task is usually performed on-device for
fast response time. Thus, accuracy, model size, and inference time are all crucial.
### Languages
The language data in SpeechCommands is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
Example of a core word (`"label"` is a word, `"is_unknown"` is `False`):
```python
{
"file": "no/7846fd85_nohash_0.wav",
"audio": {
"path": "no/7846fd85_nohash_0.wav",
"array": array([ -0.00021362, -0.00027466, -0.00036621, ..., 0.00079346,
0.00091553, 0.00079346]),
"sampling_rate": 16000
},
"label": 1, # "no"
"is_unknown": False,
"speaker_id": "7846fd85",
"utterance_id": 0
}
```
Example of an auxiliary word (`"label"` is a word, `"is_unknown"` is `True`)
```python
{
"file": "tree/8b775397_nohash_0.wav",
"audio": {
"path": "tree/8b775397_nohash_0.wav",
"array": array([ -0.00854492, -0.01339722, -0.02026367, ..., 0.00274658,
0.00335693, 0.0005188]),
"sampling_rate": 16000
},
"label": 28, # "tree"
"is_unknown": True,
"speaker_id": "1b88bf70",
"utterance_id": 0
}
```
Example of background noise (`_silence_`) class:
```python
{
"file": "_silence_/doing_the_dishes.wav",
"audio": {
"path": "_silence_/doing_the_dishes.wav",
"array": array([ 0. , 0. , 0. , ..., -0.00592041,
-0.00405884, -0.00253296]),
"sampling_rate": 16000
},
"label": 30, # "_silence_"
"is_unknown": False,
"speaker_id": "None",
"utterance_id": 0 # doesn't make sense here
}
```
### Data Fields
* `file`: relative audio filename inside the original archive.
* `audio`: dictionary containing the relative audio filename,
the decoded audio array, and the sampling rate. Note that when accessing
the audio column (`dataset[0]["audio"]`), the audio is automatically decoded
and resampled to `dataset.features["audio"].sampling_rate`.
Decoding and resampling a large number of audio files can take a significant
amount of time, so it is important to query the sample index before
the `"audio"` column: `dataset[0]["audio"]` should always be preferred
over `dataset["audio"][0]`.
* `label`: either the word pronounced in an audio sample or the background noise (`_silence_`) class.
Note that it is an integer value corresponding to the class name (see the access example below).
* `is_unknown`: whether a word is auxiliary. `False` if a word is a core word or `_silence_`,
`True` if it is an auxiliary word.
* `speaker_id`: unique id of a speaker. `None` if the label is `_silence_`.
* `utterance_id`: incremental id of a word utterance within the same speaker.
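The following sketch illustrates these access patterns (assuming the upstream `speech_commands` dataset; adjust the dataset id and config as needed):
```python
from datasets import load_dataset

ds = load_dataset("speech_commands", "v0.02", split="validation")

# Index the row first, then the "audio" column, so only this one file is decoded.
sample = ds[0]
waveform = sample["audio"]["array"]      # decoded float array
rate = sample["audio"]["sampling_rate"]  # 16,000 Hz for this dataset

# Map the integer label back to its class name.
label_name = ds.features["label"].int2str(sample["label"])
print(label_name, len(waveform), rate)
```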
### Data Splits
The dataset has two versions (= configurations): `"v0.01"` and `"v0.02"`. `"v0.02"`
contains more words (see section [Source Data](#source-data) for more details).
| | train | validation | test |
|----- |------:|-----------:|-----:|
| v0.01 | 51093 | 6799 | 3081 |
| v0.02 | 84848 | 9982 | 4890 |
Note that in the train and validation sets, examples of the `_silence_` class are longer than 1 second.
You can use the following code to sample 1-second examples from the longer ones:
```python
def sample_noise(example, silence_label=30):
    # Extract a random 1-second slice of each _silence_ utterance,
    # e.g. inside `torch.utils.data.Dataset.__getitem__()`.
    # `silence_label` is the integer id of the "_silence_" class
    # (30 in the example above; in general, `ds.features["label"].str2int("_silence_")`).
    from random import randint

    if example["label"] == silence_label:
        audio = example["audio"]["array"]
        sampling_rate = example["audio"]["sampling_rate"]
        random_offset = randint(0, len(audio) - sampling_rate - 1)
        example["audio"]["array"] = audio[random_offset : random_offset + sampling_rate]
    return example
```
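Applied once via `dataset.map(sample_noise)`, this fixes a single slice per example; calling it inside `__getitem__` instead draws a fresh random slice every epoch, which is usually preferable for training.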
## Dataset Creation
### Curation Rationale
The primary goal of the dataset is to provide a way to build and test small
models that can detect a single word from a set of target words and differentiate it
from background noise or unrelated speech with as few false positives as possible.
### Source Data
#### Initial Data Collection and Normalization
The audio files were collected using crowdsourcing; see
[aiyprojects.withgoogle.com/open_speech_recording](https://aiyprojects.withgoogle.com/open_speech_recording) and [petewarden/extract_loudest_section](https://github.com/petewarden/extract_loudest_section)
for some of the open source audio collection code that was used. The goal was to gather examples of
people speaking single-word commands, rather than conversational sentences, so
they were prompted for individual words over the course of a five minute
session.
In version 0.01, thirty different words were recorded: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go", "Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine",
"Bed", "Bird", "Cat", "Dog", "Happy", "House", "Marvin", "Sheila", "Tree", "Wow".
In version 0.02 more words were added: "Backward", "Forward", "Follow", "Learn", "Visual".
In both versions, ten of them are used as commands by convention: "Yes", "No", "Up", "Down", "Left",
"Right", "On", "Off", "Stop", "Go". Other words are considered to be auxiliary (in current implementation
it is marked by `True` value of `"is_unknown"` feature). Their function is to teach a model to distinguish core words
from unrecognized ones.
The `_silence_` label contains a set of longer audio clips that are either recordings or
a mathematical simulation of noise.
#### Who are the source language producers?
The audio files were collected using crowdsourcing.
### Annotations
#### Annotation process
Labels come from a list of words prepared in advance.
Speakers were prompted for individual words over the course of a five minute
session.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of voice recordings donated online by their speakers. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons BY 4.0 License ([CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/legalcode)).
### Citation Information
```
@article{speechcommandsv2,
author = { {Warden}, P.},
title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1804.03209},
primaryClass = "cs.CL",
keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
year = 2018,
month = apr,
url = {https://arxiv.org/abs/1804.03209},
}
```
### Contributions
[More Information Needed] | 14,312 | [embedding vector truncated] |
Doub7e/SD-CLIP-alignment-3000 | 2023-09-28T13:43:58.000Z | [
"region:us"
] | Doub7e | null | null | 0 | 19 | 2023-09-28T13:34:45 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: clip_pred
dtype: string
splits:
- name: train
num_bytes: 1385003606.0
num_examples: 3000
download_size: 1385015330
dataset_size: 1385003606.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SD-CLIP-alignment-3000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 536 | [embedding vector truncated] |
sitloboi2012/rvl_cdip_small_dataset | 2023-10-01T08:17:51.000Z | [
"region:us"
] | sitloboi2012 | null | null | 0 | 19 | 2023-10-01T08:17:49 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: string
splits:
- name: train
num_bytes: 1746183.0
num_examples: 15
download_size: 1643991
dataset_size: 1746183.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rvl_cdip_small_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 486 | [embedding vector truncated] |
jangmin/ecommerce_purchase_history_v2 | 2023-10-03T05:31:00.000Z | [
"region:us"
] | jangmin | null | null | 1 | 19 | 2023-10-03T05:30:41 | ---
dataset_info:
features:
- name: user_id
dtype: int64
- name: day
dtype: string
- name: order_ts
dtype: string
- name: positive_prod_id
dtype: int64
- name: negative_prod_id
dtype: int64
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: effective_order_infos
list:
list:
- name: contents
list:
- name: category_id
dtype: int64
- name: product_id
dtype: int64
- name: text
dtype: string
- name: order_id
dtype: string
- name: order_ts
dtype: timestamp[us]
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 193522291
num_examples: 86264
- name: test
num_bytes: 74028559
num_examples: 21566
- name: conservative_test
num_bytes: 40121578
num_examples: 8236
download_size: 44200184
dataset_size: 307672428
---
# Dataset Card for "ecommerce_purchase_history_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,125 | [embedding vector truncated] |
shossain/govreport-qa-5-2048 | 2023-10-03T17:47:39.000Z | [
"region:us"
] | shossain | null | null | 0 | 19 | 2023-10-03T17:47:37 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 133180
num_examples: 5
download_size: 45937
dataset_size: 133180
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-qa-5-2048"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 528 | [embedding vector truncated] |
katielink/medtrain_raw | 2023-10-15T14:17:28.000Z | [
"license:mit",
"medical",
"region:us"
] | katielink | null | null | 1 | 19 | 2023-10-03T20:00:43 | ---
license: mit
dataset_info:
features:
- name: raw_card
dtype: string
- name: raw_tag
dtype: string
- name: deck
dtype: string
splits:
- name: train
num_bytes: 57798047
num_examples: 118879
download_size: 14194941
dataset_size: 57798047
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- medical
---
# Dataset Card for "medtrain_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 547 | [embedding vector truncated] |
hieudinhpro/diffuision-dataset2 | 2023-10-05T16:32:39.000Z | [
"region:us"
] | hieudinhpro | null | null | 0 | 19 | 2023-10-04T03:07:43 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 138363142.634
num_examples: 9999
download_size: 138145195
dataset_size: 138363142.634
---
# Dataset Card for "diffuision-dataset2"
Dataset copied from "zoheb/sketch-scene". | 401 | [embedding vector truncated] |
mathiaszinnen/odor | 2023-10-23T13:59:15.000Z | [
"task_categories:object-detection",
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-4.0",
"fine grained detection",
"small object detection",
"art",
"smell",
"olfaction",
"computational humanities",
"region:us"
] | mathiaszinnen | Real-world applications of computer vision in the humanities require algorithms to be robust against artistic abstraction, peripheral objects, and subtle differences between fine-grained target classes. Existing datasets provide instance-level
annotations on artworks but are generally biased towards the image centre and limited with regard to detailed object classes. The proposed ODOR dataset fills this gap, offering 38,116 object-level annotations across 4,712 images, spanning an extensive set of 139 fine-grained categories. Conducting a statistical analysis, we showcase challenging dataset properties, such as a detailed set of categories, dense and overlapping objects, and spatial distribution over the whole image canvas. Furthermore, we provide an extensive baseline analysis for object detection models and highlight the challenging properties of the dataset through a set of secondary studies. Inspiring further research on artwork object detection and broader visual cultural heritage studies, the dataset challenges researchers to explore the intersection of object recognition and smell perception. | TBD | 0 | 19 | 2023-10-04T08:52:53 | ---
task_categories:
- object-detection
language:
- en
pretty_name: Object Detection for Olfactory References (ODOR) Dataset
size_categories:
- 1K<n<10K
tags:
- fine grained detection
- small object detection
- art
- smell
- olfaction
- computational humanities
license: cc-by-4.0
---
# The Object Detection for Olfactory References (ODOR) Dataset
<!-- Provide a quick summary of the dataset. -->
Real-world applications of computer vision in the humanities require algorithms to be robust against artistic abstraction, peripheral objects, and subtle differences between fine-grained target classes.
Existing datasets provide instance-level annotations on artworks but are generally biased towards the image centre and limited with regard to detailed object classes.
The ODOR dataset fills this gap, offering 38,116 object-level annotations across 4,712 images, spanning an extensive set of 139 fine-grained categories.
It has challenging dataset properties, such as a detailed set of categories, dense and overlapping objects, and spatial distribution over the whole image canvas.
Inspiring further research on artwork object detection and broader visual cultural heritage studies, the dataset challenges researchers to explore the intersection of object recognition and smell perception.
You can download the dataset using Hugging Face:
```python
from datasets import load_dataset
ds = load_dataset("mathiaszinnen/odor")
```
This dataset has received funding from the Odeuropa EU H2020 project under grant agreement No. 101004469.
| 1,705 | [embedding vector truncated] |
NikiTricky/digital-bg | 2023-10-05T15:45:49.000Z | [
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:bg",
"region:us"
] | NikiTricky | null | null | 0 | 19 | 2023-10-04T19:34:41 | ---
task_categories:
- text-generation
- summarization
- text-classification
language:
- bg
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: train
path: "posts.json"
---
# Digital.bg articles | 234 | [embedding vector truncated] |
AayushShah/Univeral_SQL_Three_Datasets_Combined_WithText_IDs | 2023-10-06T11:46:02.000Z | [
"region:us"
] | AayushShah | null | null | 1 | 19 | 2023-10-06T11:45:22 | ---
configs:
- config_name: default
data_files:
- split: context
path: data/context-*
- split: text_sql_v1
path: data/text_sql_v1-*
- split: sparc
path: data/sparc-*
dataset_info:
features:
- name: NATURAL_LANG
dtype: string
- name: SQL
dtype: string
- name: SCHEMA
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: context
num_bytes: 299674929
num_examples: 78519
- name: text_sql_v1
num_bytes: 899253880
num_examples: 220302
- name: sparc
num_bytes: 12250417
num_examples: 2846
download_size: 94153422
dataset_size: 1211179226
---
# Dataset Card for "Univeral_SQL_Three_Datasets_Combined_WithText_IDs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 913 | [embedding vector truncated] |
c123ian/khan_academy_context | 2023-10-06T12:04:19.000Z | [
"region:us"
] | c123ian | null | null | 0 | 19 | 2023-10-06T12:01:29 | ---
dataset_info:
features:
- name: context
dtype: string
splits:
- name: train
num_bytes: 20828078
num_examples: 2167
download_size: 8344879
dataset_size: 20828078
---
# Dataset Card for "khan_academy_context"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 367 | [embedding vector truncated] |