| id (string) | lastModified (string) | tags (list) | author (string, nullable) | description (string, nullable) | citation (string, nullable) | likes (int64) | downloads (int64) | created (timestamp[us]) | card (string) | card_len (int64) | embeddings (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
DataNerd2021/ServiceProjectFall2023 | 2023-10-24T21:13:14.000Z | [
"region:us"
] | DataNerd2021 | null | null | 0 | 6 | 2023-10-19T19:52:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
nuprl/humaneval-py-mutants | 2023-10-19T21:48:18.000Z | [
"region:us"
] | nuprl | null | null | 0 | 6 | 2023-10-19T21:40:51 | ---
dataset_info:
features:
- name: name
dtype: string
- name: language
dtype: string
- name: tests
dtype: string
- name: prompt
dtype: string
- name: stop_tokens
sequence: string
- name: correct
dtype: string
- name: mutants
sequence: string
- name: errors
sequence: string
splits:
- name: train
num_bytes: 742882
num_examples: 161
download_size: 245158
dataset_size: 742882
---
# Dataset Card for "humaneval-py-mutants"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 620 | [
[
-0.0313720703125,
-0.0164642333984375,
0.003841400146484375,
0.01309967041015625,
-0.003734588623046875,
-0.0007405281066894531,
0.0286865234375,
-0.005176544189453125,
0.056182861328125,
0.0192108154296875,
-0.053985595703125,
-0.0335693359375,
-0.0403137207031... |
Raspberry-ai/monse-v3-textual-inversion | 2023-10-20T01:12:53.000Z | [
"region:us"
] | Raspberry-ai | null | null | 0 | 6 | 2023-10-20T01:12:51 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 16571415.0
num_examples: 210
download_size: 15136562
dataset_size: 16571415.0
---
# Dataset Card for "monse-v4-same-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 489 | [
[
-0.04718017578125,
0.0177764892578125,
0.02593994140625,
0.011993408203125,
-0.0274200439453125,
0.00646209716796875,
0.0217437744140625,
-0.00809478759765625,
0.06787109375,
0.05657958984375,
-0.06585693359375,
-0.04144287109375,
-0.046905517578125,
0.02122... |
Sober-Clever/github-issues | 2023-10-20T03:51:13.000Z | [
"region:us"
] | Sober-Clever | null | null | 0 | 6 | 2023-10-20T03:51:08 | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: labels_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: description
dtype: string
- name: creator
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: open_issues
dtype: int64
- name: closed_issues
dtype: int64
- name: state
dtype: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: due_on
dtype: 'null'
- name: closed_at
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 1420461
num_examples: 100
download_size: 513444
dataset_size: 1420461
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 6,215 | [
[
-0.032012939453125,
-0.020904541015625,
0.012786865234375,
0.0157928466796875,
-0.00717926025390625,
0.0161590576171875,
0.00952911376953125,
-0.008697509765625,
0.07073974609375,
0.027191162109375,
-0.057403564453125,
-0.04693603515625,
-0.035675048828125,
... |
saileshk/finetuning_data_sql | 2023-10-20T04:37:07.000Z | [
"region:us"
] | saileshk | null | null | 0 | 6 | 2023-10-20T04:36:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
cmu-mlsp/encodec_24khz-opt-125m-pretrained-ft-librispeech_asr-train.clean.100-features | 2023-10-20T08:57:41.000Z | [
"region:us"
] | cmu-mlsp | null | null | 0 | 6 | 2023-10-20T08:35:30 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: audio_codes
sequence:
sequence: int64
splits:
- name: train.clean.100
num_bytes: 17829358087.0
num_examples: 28539
download_size: 17071868433
dataset_size: 17829358087.0
configs:
- config_name: default
data_files:
- split: train.clean.100
path: data/train.clean.100-*
---
# Dataset Card for "encodec_24khz-opt-125m-pretrained-ft-librispeech_asr-train.clean.100-features"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 819 | [
[
-0.06396484375,
-0.0057525634765625,
-0.00524139404296875,
0.0144500732421875,
-0.0241851806640625,
0.01245880126953125,
-0.01151275634765625,
-0.01157379150390625,
0.045013427734375,
0.034088134765625,
-0.07098388671875,
-0.037933349609375,
-0.03955078125,
... |
2003vishnu2003/train | 2023-10-20T08:40:03.000Z | [
"region:us"
] | 2003vishnu2003 | null | null | 0 | 6 | 2023-10-20T08:38:50 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
davanstrien/AmericanStories-parquet | 2023-10-20T15:09:54.000Z | [
"region:us"
] | davanstrien | null | null | 0 | 6 | 2023-10-20T14:07:00 | ---
configs:
- config_name: default
data_files:
- split: '1774'
path: data/1774-*
- split: '1798'
path: data/1798-*
- split: '1799'
path: data/1799-*
- split: '1800'
path: data/1800-*
- split: '1801'
path: data/1801-*
- split: '1802'
path: data/1802-*
- split: '1803'
path: data/1803-*
- split: '1804'
path: data/1804-*
- split: '1805'
path: data/1805-*
- split: '1806'
path: data/1806-*
- split: '1807'
path: data/1807-*
- split: '1808'
path: data/1808-*
- split: '1809'
path: data/1809-*
- split: '1810'
path: data/1810-*
- split: '1811'
path: data/1811-*
- split: '1812'
path: data/1812-*
- split: '1813'
path: data/1813-*
- split: '1814'
path: data/1814-*
- split: '1815'
path: data/1815-*
- split: '1816'
path: data/1816-*
- split: '1817'
path: data/1817-*
- split: '1818'
path: data/1818-*
- split: '1819'
path: data/1819-*
- split: '1820'
path: data/1820-*
- split: '1821'
path: data/1821-*
- split: '1822'
path: data/1822-*
- split: '1823'
path: data/1823-*
- split: '1824'
path: data/1824-*
- split: '1825'
path: data/1825-*
- split: '1826'
path: data/1826-*
- split: '1827'
path: data/1827-*
- split: '1828'
path: data/1828-*
- split: '1829'
path: data/1829-*
- split: '1830'
path: data/1830-*
- split: '1831'
path: data/1831-*
- split: '1832'
path: data/1832-*
- split: '1833'
path: data/1833-*
- split: '1834'
path: data/1834-*
- split: '1835'
path: data/1835-*
- split: '1836'
path: data/1836-*
- split: '1837'
path: data/1837-*
- split: '1838'
path: data/1838-*
- split: '1839'
path: data/1839-*
- split: '1840'
path: data/1840-*
- split: '1841'
path: data/1841-*
- split: '1842'
path: data/1842-*
- split: '1843'
path: data/1843-*
- split: '1844'
path: data/1844-*
- split: '1845'
path: data/1845-*
- split: '1846'
path: data/1846-*
- split: '1847'
path: data/1847-*
- split: '1848'
path: data/1848-*
- split: '1849'
path: data/1849-*
- split: '1850'
path: data/1850-*
- split: '1851'
path: data/1851-*
- split: '1852'
path: data/1852-*
- split: '1853'
path: data/1853-*
- split: '1854'
path: data/1854-*
- split: '1855'
path: data/1855-*
- split: '1856'
path: data/1856-*
- split: '1857'
path: data/1857-*
- split: '1858'
path: data/1858-*
- split: '1859'
path: data/1859-*
- split: '1860'
path: data/1860-*
- split: '1861'
path: data/1861-*
- split: '1862'
path: data/1862-*
- split: '1863'
path: data/1863-*
- split: '1864'
path: data/1864-*
- split: '1865'
path: data/1865-*
- split: '1866'
path: data/1866-*
- split: '1867'
path: data/1867-*
- split: '1868'
path: data/1868-*
- split: '1869'
path: data/1869-*
- split: '1870'
path: data/1870-*
- split: '1871'
path: data/1871-*
- split: '1872'
path: data/1872-*
- split: '1873'
path: data/1873-*
- split: '1874'
path: data/1874-*
- split: '1875'
path: data/1875-*
- split: '1876'
path: data/1876-*
- split: '1877'
path: data/1877-*
- split: '1878'
path: data/1878-*
- split: '1879'
path: data/1879-*
- split: '1880'
path: data/1880-*
- split: '1881'
path: data/1881-*
- split: '1882'
path: data/1882-*
- split: '1883'
path: data/1883-*
- split: '1884'
path: data/1884-*
- split: '1885'
path: data/1885-*
- split: '1886'
path: data/1886-*
- split: '1887'
path: data/1887-*
- split: '1888'
path: data/1888-*
- split: '1889'
path: data/1889-*
- split: '1890'
path: data/1890-*
- split: '1891'
path: data/1891-*
- split: '1892'
path: data/1892-*
- split: '1893'
path: data/1893-*
- split: '1894'
path: data/1894-*
- split: '1895'
path: data/1895-*
- split: '1896'
path: data/1896-*
- split: '1897'
path: data/1897-*
- split: '1898'
path: data/1898-*
- split: '1899'
path: data/1899-*
- split: '1900'
path: data/1900-*
- split: '1901'
path: data/1901-*
- split: '1902'
path: data/1902-*
- split: '1903'
path: data/1903-*
- split: '1904'
path: data/1904-*
- split: '1905'
path: data/1905-*
- split: '1906'
path: data/1906-*
- split: '1907'
path: data/1907-*
- split: '1908'
path: data/1908-*
- split: '1909'
path: data/1909-*
- split: '1910'
path: data/1910-*
- split: '1911'
path: data/1911-*
- split: '1912'
path: data/1912-*
- split: '1913'
path: data/1913-*
- split: '1914'
path: data/1914-*
- split: '1915'
path: data/1915-*
- split: '1916'
path: data/1916-*
- split: '1917'
path: data/1917-*
- split: '1918'
path: data/1918-*
- split: '1919'
path: data/1919-*
- split: '1920'
path: data/1920-*
- split: '1921'
path: data/1921-*
- split: '1922'
path: data/1922-*
- split: '1923'
path: data/1923-*
- split: '1924'
path: data/1924-*
- split: '1925'
path: data/1925-*
- split: '1926'
path: data/1926-*
- split: '1927'
path: data/1927-*
- split: '1928'
path: data/1928-*
- split: '1929'
path: data/1929-*
- split: '1930'
path: data/1930-*
- split: '1931'
path: data/1931-*
- split: '1932'
path: data/1932-*
- split: '1933'
path: data/1933-*
- split: '1934'
path: data/1934-*
- split: '1935'
path: data/1935-*
- split: '1936'
path: data/1936-*
- split: '1937'
path: data/1937-*
- split: '1938'
path: data/1938-*
- split: '1939'
path: data/1939-*
- split: '1940'
path: data/1940-*
- split: '1941'
path: data/1941-*
- split: '1942'
path: data/1942-*
- split: '1943'
path: data/1943-*
- split: '1944'
path: data/1944-*
- split: '1945'
path: data/1945-*
- split: '1946'
path: data/1946-*
- split: '1947'
path: data/1947-*
- split: '1948'
path: data/1948-*
- split: '1949'
path: data/1949-*
- split: '1950'
path: data/1950-*
- split: '1951'
path: data/1951-*
- split: '1952'
path: data/1952-*
- split: '1953'
path: data/1953-*
- split: '1954'
path: data/1954-*
- split: '1955'
path: data/1955-*
- split: '1956'
path: data/1956-*
- split: '1957'
path: data/1957-*
- split: '1958'
path: data/1958-*
- split: '1959'
path: data/1959-*
- split: '1960'
path: data/1960-*
- split: '1961'
path: data/1961-*
- split: '1962'
path: data/1962-*
- split: '1963'
path: data/1963-*
dataset_info:
features:
- name: article_id
dtype: string
- name: newspaper_name
dtype: string
- name: edition
dtype: string
- name: date
dtype: string
- name: page
dtype: string
- name: headline
dtype: string
- name: byline
dtype: string
- name: article
dtype: string
splits:
- name: '1774'
num_bytes: 22245
num_examples: 12
- name: '1798'
num_bytes: 72288
num_examples: 73
- name: '1799'
num_bytes: 946373
num_examples: 623
- name: '1800'
num_bytes: 38139
num_examples: 45
- name: '1801'
num_bytes: 94991
num_examples: 93
- name: '1802'
num_bytes: 1463322
num_examples: 1158
- name: '1803'
num_bytes: 799797
num_examples: 654
- name: '1804'
num_bytes: 120141
num_examples: 103
- name: '1805'
num_bytes: 2475205
num_examples: 2303
- name: '1806'
num_bytes: 2043729
num_examples: 1860
- name: '1807'
num_bytes: 310568
num_examples: 315
- name: '1808'
num_bytes: 75639
num_examples: 45
- name: '1809'
num_bytes: 430706
num_examples: 422
- name: '1810'
num_bytes: 1319755
num_examples: 982
- name: '1811'
num_bytes: 117701
num_examples: 110
- name: '1812'
num_bytes: 75299
num_examples: 67
- name: '1813'
num_bytes: 290966
num_examples: 242
- name: '1814'
num_bytes: 378212
num_examples: 379
- name: '1815'
num_bytes: 185179
num_examples: 160
- name: '1816'
num_bytes: 495706
num_examples: 409
- name: '1817'
num_bytes: 446354
num_examples: 394
- name: '1818'
num_bytes: 1257916
num_examples: 1108
- name: '1819'
num_bytes: 2476297
num_examples: 1997
- name: '1820'
num_bytes: 611884
num_examples: 433
- name: '1821'
num_bytes: 347361
num_examples: 270
- name: '1822'
num_bytes: 286227
num_examples: 264
- name: '1823'
num_bytes: 2030816
num_examples: 1113
- name: '1824'
num_bytes: 5171191
num_examples: 3110
- name: '1825'
num_bytes: 6341915
num_examples: 4005
- name: '1826'
num_bytes: 10462258
num_examples: 7079
- name: '1827'
num_bytes: 11634621
num_examples: 7213
- name: '1828'
num_bytes: 10253681
num_examples: 6350
- name: '1829'
num_bytes: 4021832
num_examples: 2296
- name: '1830'
num_bytes: 8321949
num_examples: 4232
- name: '1831'
num_bytes: 16796125
num_examples: 9699
- name: '1832'
num_bytes: 9982722
num_examples: 6565
- name: '1833'
num_bytes: 6653515
num_examples: 4108
- name: '1834'
num_bytes: 7099875
num_examples: 4632
- name: '1835'
num_bytes: 9066392
num_examples: 6168
- name: '1836'
num_bytes: 10473366
num_examples: 7375
- name: '1837'
num_bytes: 21002773
num_examples: 13609
- name: '1838'
num_bytes: 13735809
num_examples: 8492
- name: '1839'
num_bytes: 12512339
num_examples: 8938
- name: '1840'
num_bytes: 12647911
num_examples: 8052
- name: '1841'
num_bytes: 39146669
num_examples: 30019
- name: '1842'
num_bytes: 26218700
num_examples: 21290
- name: '1843'
num_bytes: 50447372
num_examples: 41657
- name: '1844'
num_bytes: 79351064
num_examples: 61373
- name: '1845'
num_bytes: 131632573
num_examples: 95921
- name: '1846'
num_bytes: 81086068
num_examples: 70331
- name: '1847'
num_bytes: 32733527
num_examples: 24354
- name: '1848'
num_bytes: 44577825
num_examples: 32531
- name: '1849'
num_bytes: 53877014
num_examples: 42711
- name: '1850'
num_bytes: 76697622
num_examples: 49992
- name: '1851'
num_bytes: 128372084
num_examples: 90184
- name: '1852'
num_bytes: 67005975
num_examples: 51172
- name: '1853'
num_bytes: 54210932
num_examples: 48130
- name: '1854'
num_bytes: 150406197
num_examples: 118825
- name: '1855'
num_bytes: 115893679
num_examples: 99390
- name: '1856'
num_bytes: 188859881
num_examples: 157592
- name: '1857'
num_bytes: 152841585
num_examples: 129179
- name: '1858'
num_bytes: 214657030
num_examples: 171877
- name: '1859'
num_bytes: 178711188
num_examples: 160924
- name: '1860'
num_bytes: 163889573
num_examples: 150590
- name: '1861'
num_bytes: 215595661
num_examples: 173990
- name: '1862'
num_bytes: 228323685
num_examples: 171021
- name: '1863'
num_bytes: 197294365
num_examples: 151485
- name: '1864'
num_bytes: 125113713
num_examples: 94415
- name: '1865'
num_bytes: 133515217
num_examples: 99728
- name: '1866'
num_bytes: 180768118
num_examples: 135316
- name: '1867'
num_bytes: 213571876
num_examples: 161180
- name: '1868'
num_bytes: 202156635
num_examples: 140521
- name: '1869'
num_bytes: 236506656
num_examples: 171455
- name: '1870'
num_bytes: 242779857
num_examples: 174061
- name: '1871'
num_bytes: 203189927
num_examples: 151652
- name: '1872'
num_bytes: 242624062
num_examples: 194784
- name: '1873'
num_bytes: 302626176
num_examples: 241902
- name: '1874'
num_bytes: 280814742
num_examples: 213813
- name: '1875'
num_bytes: 319815222
num_examples: 274269
- name: '1876'
num_bytes: 381483980
num_examples: 288199
- name: '1877'
num_bytes: 317703263
num_examples: 254946
- name: '1878'
num_bytes: 381274032
num_examples: 307865
- name: '1879'
num_bytes: 371703798
num_examples: 287784
- name: '1880'
num_bytes: 296465631
num_examples: 272352
- name: '1881'
num_bytes: 294568051
num_examples: 270228
- name: '1882'
num_bytes: 340511400
num_examples: 311920
- name: '1883'
num_bytes: 419078041
num_examples: 387589
- name: '1884'
num_bytes: 329666364
num_examples: 304242
- name: '1885'
num_bytes: 348144660
num_examples: 318732
- name: '1886'
num_bytes: 431746663
num_examples: 423718
- name: '1887'
num_bytes: 493647568
num_examples: 494559
- name: '1888'
num_bytes: 564523528
num_examples: 547165
- name: '1889'
num_bytes: 558168324
num_examples: 536750
- name: '1890'
num_bytes: 566964770
num_examples: 540615
- name: '1891'
num_bytes: 641124243
num_examples: 620461
- name: '1892'
num_bytes: 524812242
num_examples: 527044
- name: '1893'
num_bytes: 645853680
num_examples: 656805
- name: '1894'
num_bytes: 790577208
num_examples: 795408
- name: '1895'
num_bytes: 890097151
num_examples: 897766
- name: '1896'
num_bytes: 1235234882
num_examples: 1175701
- name: '1897'
num_bytes: 1252347746
num_examples: 1275895
- name: '1898'
num_bytes: 1286411001
num_examples: 1323842
- name: '1899'
num_bytes: 1176418162
num_examples: 1218682
- name: '1900'
num_bytes: 1069983237
num_examples: 1118970
- name: '1901'
num_bytes: 1478945214
num_examples: 1468648
- name: '1902'
num_bytes: 1376703767
num_examples: 1417935
- name: '1903'
num_bytes: 1255538379
num_examples: 1319686
- name: '1904'
num_bytes: 1232185827
num_examples: 1340868
- name: '1905'
num_bytes: 1563178627
num_examples: 1635134
- name: '1906'
num_bytes: 1632815247
num_examples: 1683643
- name: '1907'
num_bytes: 1647491794
num_examples: 1714613
- name: '1908'
num_bytes: 1771267430
num_examples: 1842874
- name: '1909'
num_bytes: 1844179657
num_examples: 1926228
- name: '1910'
num_bytes: 1589052587
num_examples: 1684263
- name: '1911'
num_bytes: 1402309564
num_examples: 1510259
- name: '1912'
num_bytes: 1621648367
num_examples: 1774149
- name: '1913'
num_bytes: 1613599136
num_examples: 1822206
- name: '1914'
num_bytes: 1736284455
num_examples: 1931901
- name: '1915'
num_bytes: 1690248452
num_examples: 1878654
- name: '1916'
num_bytes: 1633563499
num_examples: 1838797
- name: '1917'
num_bytes: 1605677226
num_examples: 1810757
- name: '1918'
num_bytes: 1803695589
num_examples: 1920102
- name: '1919'
num_bytes: 1831703767
num_examples: 1981192
- name: '1920'
num_bytes: 1901882705
num_examples: 2041192
- name: '1921'
num_bytes: 2264618667
num_examples: 2334112
- name: '1922'
num_bytes: 2372137567
num_examples: 2405974
- name: '1923'
num_bytes: 812177597
num_examples: 880372
- name: '1924'
num_bytes: 800835690
num_examples: 845520
- name: '1925'
num_bytes: 601426023
num_examples: 662322
- name: '1926'
num_bytes: 565307890
num_examples: 623765
- name: '1927'
num_bytes: 460501197
num_examples: 504835
- name: '1928'
num_bytes: 452526140
num_examples: 487302
- name: '1929'
num_bytes: 366246066
num_examples: 421909
- name: '1930'
num_bytes: 437657836
num_examples: 492695
- name: '1931'
num_bytes: 441972257
num_examples: 493816
- name: '1932'
num_bytes: 640501746
num_examples: 664615
- name: '1933'
num_bytes: 634373318
num_examples: 642380
- name: '1934'
num_bytes: 641841040
num_examples: 654342
- name: '1935'
num_bytes: 612406176
num_examples: 635554
- name: '1936'
num_bytes: 621035178
num_examples: 662015
- name: '1937'
num_bytes: 625107933
num_examples: 676549
- name: '1938'
num_bytes: 616370880
num_examples: 665274
- name: '1939'
num_bytes: 525913265
num_examples: 556283
- name: '1940'
num_bytes: 471830118
num_examples: 496662
- name: '1941'
num_bytes: 599694786
num_examples: 637200
- name: '1942'
num_bytes: 508785410
num_examples: 523923
- name: '1943'
num_bytes: 452079475
num_examples: 467200
- name: '1944'
num_bytes: 442871777
num_examples: 433769
- name: '1945'
num_bytes: 588623743
num_examples: 588477
- name: '1946'
num_bytes: 526027876
num_examples: 470895
- name: '1947'
num_bytes: 461281363
num_examples: 393086
- name: '1948'
num_bytes: 442999943
num_examples: 396660
- name: '1949'
num_bytes: 421752000
num_examples: 419854
- name: '1950'
num_bytes: 403717616
num_examples: 415416
- name: '1951'
num_bytes: 409600217
num_examples: 419622
- name: '1952'
num_bytes: 397051717
num_examples: 396420
- name: '1953'
num_bytes: 366253682
num_examples: 358332
- name: '1954'
num_bytes: 263197428
num_examples: 266338
- name: '1955'
num_bytes: 268993926
num_examples: 273576
- name: '1956'
num_bytes: 85126796
num_examples: 98035
- name: '1957'
num_bytes: 83757036
num_examples: 93543
- name: '1958'
num_bytes: 85807593
num_examples: 98688
- name: '1959'
num_bytes: 112707174
num_examples: 129452
- name: '1960'
num_bytes: 300484826
num_examples: 344550
- name: '1961'
num_bytes: 297225753
num_examples: 339076
- name: '1962'
num_bytes: 231525869
num_examples: 264724
- name: '1963'
num_bytes: 197520960
num_examples: 226859
download_size: 48388744959
dataset_size: 76303058024
---
# Dataset Card for "AmericanStories-parquet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 18,287 | [
[
-0.04425048828125,
-0.033477783203125,
0.01053619384765625,
0.0225982666015625,
-0.0289459228515625,
0.00510406494140625,
0.0211639404296875,
0.00020599365234375,
0.06304931640625,
0.044097900390625,
-0.0384521484375,
-0.06268310546875,
-0.033538818359375,
-... |
cmu-mlsp/encodec_24khz-librispeech_asr-validation.clean-features | 2023-10-20T16:30:10.000Z | [
"region:us"
] | cmu-mlsp | null | null | 0 | 6 | 2023-10-20T16:29:40 | ---
dataset_info:
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: audio_codes
sequence:
sequence: int64
splits:
- name: validation.clean
num_bytes: 955281891.0
num_examples: 2703
download_size: 915219050
dataset_size: 955281891.0
configs:
- config_name: default
data_files:
- split: validation.clean
path: data/validation.clean-*
---
# Dataset Card for "encodec_24khz-librispeech_asr-validation.clean-features"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 793 | [
[
-0.052703857421875,
-0.00559234619140625,
-0.00684356689453125,
0.0144500732421875,
-0.0159149169921875,
0.004047393798828125,
-0.00730133056640625,
-0.0225677490234375,
0.034637451171875,
0.04083251953125,
-0.0611572265625,
-0.051849365234375,
-0.02606201171875... |
anton-l/repo_bench_dedup | 2023-10-21T15:39:04.000Z | [
"region:us"
] | anton-l | null | null | 0 | 6 | 2023-10-21T15:29:10 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
goodcoffee/covidQA_eval_v2 | 2023-11-01T13:28:51.000Z | [
"region:us"
] | goodcoffee | null | null | 0 | 6 | 2023-10-21T20:59:24 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: start_positions
dtype: int64
- name: end_positions
dtype: int64
splits:
- name: train
num_bytes: 782952
num_examples: 303
download_size: 0
dataset_size: 782952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "covidQA_eval_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 569 | [
[
-0.026275634765625,
-0.0260162353515625,
0.004619598388671875,
0.0165557861328125,
-0.00969696044921875,
0.00458526611328125,
0.0340576171875,
-0.01120758056640625,
0.04339599609375,
0.0236358642578125,
-0.051239013671875,
-0.045928955078125,
-0.03271484375,
... |
dot-ammar/AR-dotted-tokenized-mediumPlus | 2023-10-22T23:36:16.000Z | [
"region:us"
] | dot-ammar | null | null | 0 | 6 | 2023-10-21T21:26:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 216608904
num_examples: 334273
- name: test
num_bytes: 54145584
num_examples: 83558
download_size: 133446513
dataset_size: 270754488
---
# Dataset Card for "AR-dotted-tokenized-mediumPlus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 618 | [
[
-0.0439453125,
-0.0296783447265625,
0.00774383544921875,
0.023406982421875,
-0.02471923828125,
0.0045623779296875,
0.0056304931640625,
-0.01107025146484375,
0.08074951171875,
0.03790283203125,
-0.04400634765625,
-0.0550537109375,
-0.049163818359375,
-0.01609... |
Omickeyee/Marathi_LLM_10K | 2023-10-22T11:48:22.000Z | [
"region:us"
] | Omickeyee | null | null | 0 | 6 | 2023-10-22T11:47:37 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Braddy/xview_captions_v2 | 2023-10-22T11:55:55.000Z | [
"region:us"
] | Braddy | null | null | 0 | 6 | 2023-10-22T11:55:23 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
sequence: string
- name: file_id
dtype: string
splits:
- name: train
num_bytes: 715798376.168
num_examples: 7092
download_size: 693617401
dataset_size: 715798376.168
---
# Dataset Card for "xview_captions_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 531 | [
[
-0.027923583984375,
0.00582122802734375,
0.01345062255859375,
0.03338623046875,
-0.034027099609375,
-0.00916290283203125,
0.0272979736328125,
-0.006061553955078125,
0.037689208984375,
0.0517578125,
-0.060272216796875,
-0.0357666015625,
-0.04034423828125,
-0.... |
stockmark/business-questions | 2023-10-25T05:01:33.000Z | [
"language:ja",
"license:mit",
"region:us"
] | stockmark | null | null | 3 | 6 | 2023-10-22T14:47:54 | ---
license: mit
language:
- ja
---
# Stockmark Business Questions | 67 | [
[
-0.0311279296875,
-0.0187225341796875,
0.01480865478515625,
0.01541900634765625,
-0.051025390625,
0.0055389404296875,
0.0284881591796875,
-0.0457763671875,
0.0137176513671875,
0.038787841796875,
-0.05194091796875,
-0.01480865478515625,
-0.02996826171875,
0.0... |
Evening2k/gpi | 2023-10-22T17:55:44.000Z | [
"region:us"
] | Evening2k | null | null | 0 | 6 | 2023-10-22T17:54:53 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
AdapterOcean/physics_dataset_standardized_cluster_2_alpaca | 2023-10-23T01:52:23.000Z | [
"region:us"
] | AdapterOcean | null | null | 0 | 6 | 2023-10-22T18:31:04 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 16864397
num_examples: 5571
download_size: 0
dataset_size: 16864397
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "physics_dataset_standardized_cluster_2_alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 505 | [
[
-0.035491943359375,
-0.0233612060546875,
0.029052734375,
0.030303955078125,
-0.03411865234375,
-0.00754547119140625,
0.034576416015625,
-0.0179901123046875,
0.0684814453125,
0.01239013671875,
-0.044219970703125,
-0.047027587890625,
-0.043182373046875,
-0.028... |
sankettgorey/layouts_donut | 2023-10-23T08:06:03.000Z | [
"region:us"
] | sankettgorey | null | null | 0 | 6 | 2023-10-23T08:04:53 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 1461607176.2173023
num_examples: 5362
- name: test
num_bytes: 182370076.8402208
num_examples: 671
- name: validation
num_bytes: 181812032.0684768
num_examples: 670
download_size: 1524050233
dataset_size: 1825789285.126
---
# Dataset Card for "layouts_donut"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 737 | [
[
-0.02349853515625,
-0.0162353515625,
0.020263671875,
0.017852783203125,
0.0036678314208984375,
0.00713348388671875,
0.0166473388671875,
0.0174560546875,
0.07061767578125,
0.04541015625,
-0.05401611328125,
-0.054656982421875,
-0.033538818359375,
-0.0295104980... |
koaning/fashion-test | 2023-10-23T10:57:03.000Z | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"reddit",
"fashion",
"region:us"... | koaning | null | null | 0 | 6 | 2023-10-23T08:51:30 | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Detecting fashion substrings in text.
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: section
dtype: string
- name: _input_hash
dtype: int64
- name: _task_hash
dtype: int64
- name: tokens
list:
- name: end
dtype: int64
- name: id
dtype: int64
- name: start
dtype: int64
- name: text
dtype: string
- name: spans
list:
- name: end
dtype: int64
- name: input_hash
dtype: int64
- name: label
dtype: string
- name: source
dtype: string
- name: start
dtype: int64
- name: text
dtype: string
- name: token_end
dtype: int64
- name: token_start
dtype: int64
- name: _session_id
dtype: 'null'
- name: _view_id
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3120984
num_examples: 1735
download_size: 817069
dataset_size: 3120984
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- reddit
- fashion
---
This dataset represents some data that Ines annotated. I am adding this info manually.
| 1,464 | [
[
-0.03924560546875,
0.0007610321044921875,
0.00849151611328125,
0.023193359375,
-0.000980377197265625,
-0.015716552734375,
0.01235198974609375,
-0.0233306884765625,
0.038787841796875,
0.0655517578125,
-0.049224853515625,
-0.0278167724609375,
-0.036529541015625,
... |
getawayfrommeXD/embedded_ner_tokens | 2023-10-23T10:03:52.000Z | [
"region:us"
] | getawayfrommeXD | null | null | 0 | 6 | 2023-10-23T10:03:08 | ---
dataset_info:
features:
- name: word
dtype: string
- name: label
dtype: string
- name: OOV
dtype: bool
- name: embedding
sequence: float32
splits:
- name: train
num_bytes: 248048533
num_examples: 203621
- name: validation
num_bytes: 62568404
num_examples: 51362
- name: test
num_bytes: 56564938
num_examples: 46435
download_size: 130105515
dataset_size: 367181875
---
# Dataset Card for "embedded_ner_tokens"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 607 | [
[
-0.0419921875,
-0.02911376953125,
0.0043792724609375,
0.0097198486328125,
-0.01812744140625,
0.00823974609375,
0.0234527587890625,
-0.013275146484375,
0.074951171875,
0.0289154052734375,
-0.04339599609375,
-0.052703857421875,
-0.03900146484375,
-0.0157928466... |
portafolio/llamadas-celular-es-02 | 2023-10-23T16:00:34.000Z | [
"region:us"
] | portafolio | null | null | 0 | 6 | 2023-10-23T15:59:15 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
amlan107/syn_false_1 | 2023-10-23T17:31:42.000Z | [
"region:us"
] | amlan107 | null | null | 0 | 6 | 2023-10-23T17:31:40 | ---
dataset_info:
features:
- name: bn
dtype: string
- name: ck
dtype: string
splits:
- name: train
num_bytes: 10186402
num_examples: 54799
download_size: 4146842
dataset_size: 10186402
---
# Dataset Card for "syn_false_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 385 | [
[
-0.02203369140625,
-0.0268402099609375,
0.0113525390625,
0.01322174072265625,
-0.01251220703125,
-0.00196075439453125,
0.0173187255859375,
-0.0217437744140625,
0.0789794921875,
0.0283966064453125,
-0.06866455078125,
-0.050201416015625,
-0.037322998046875,
-0... |
adi-kmt/math_data | 2023-10-23T17:59:44.000Z | [
"task_categories:text-generation",
"license:mit",
"region:us"
] | adi-kmt | null | null | 0 | 6 | 2023-10-23T17:46:52 | ---
license: mit
task_categories:
- text-generation
---
An amalgamation of the Socratic version of gsm8k and the grade-school-math dataset
[
-0.016845703125,
-0.0279998779296875,
0.0018062591552734375,
0.0307769775390625,
-0.031768798828125,
0.00945281982421875,
0.0034389495849609375,
0.010223388671875,
0.0235443115234375,
0.04986572265625,
-0.017913818359375,
-0.04949951171875,
-0.034271240234375,
... |
Braddy/xview_captions_v3 | 2023-10-23T19:52:00.000Z | [
"region:us"
] | Braddy | null | null | 0 | 6 | 2023-10-23T19:51:54 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
sequence: string
- name: file_id
dtype: string
splits:
- name: train
num_bytes: 94674025.0
num_examples: 949
download_size: 94634260
dataset_size: 94674025.0
---
# Dataset Card for "xview_captions_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 523 | [
[
-0.036865234375,
0.009521484375,
0.02471923828125,
0.038543701171875,
-0.02789306640625,
-0.01068878173828125,
0.031768798828125,
-0.0064849853515625,
0.037689208984375,
0.056793212890625,
-0.0606689453125,
-0.04571533203125,
-0.031890869140625,
0.0035057067... |
loubnabnl/issues-v2-subset | 2023-10-23T21:06:43.000Z | [
"region:us"
] | loubnabnl | null | null | 0 | 6 | 2023-10-23T21:05:12 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
defog/wikisql | 2023-10-24T02:49:34.000Z | [
"region:us"
] | defog | null | null | 0 | 6 | 2023-10-24T02:40:42 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 5525298
num_examples: 1000
download_size: 761250
dataset_size: 5525298
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wikisql"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 596 | [
[
-0.04876708984375,
-0.01470184326171875,
0.004512786865234375,
0.0018177032470703125,
-0.0186614990234375,
-0.00269317626953125,
0.0194854736328125,
-0.010498046875,
0.05413818359375,
0.03515625,
-0.0689697265625,
-0.05352783203125,
-0.0262603759765625,
-0.0... |
Phando/uspto-50k | 2023-10-26T06:11:45.000Z | [
"region:us"
] | Phando | null | null | 0 | 6 | 2023-10-24T09:03:20 | ---
dataset_info:
features:
- name: class
dtype: int64
- name: id
dtype: string
- name: prod_smiles
dtype: string
- name: rxn_smiles
dtype: string
- name: prod_smiles_pop
dtype: int64
- name: keep
dtype: bool
splits:
- name: train
num_bytes: 23288334
num_examples: 50016
download_size: 8855809
dataset_size: 23288334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "uspto-50k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 624 | [
[
-0.035125732421875,
0.00916290283203125,
0.00623321533203125,
0.01357269287109375,
-0.039886474609375,
0.00821685791015625,
0.02691650390625,
-0.01428985595703125,
0.047393798828125,
0.043609619140625,
-0.04833984375,
-0.0499267578125,
-0.03826904296875,
-0.... |
Olvar/test | 2023-10-24T10:56:25.000Z | [
"region:us"
] | Olvar | null | null | 0 | 6 | 2023-10-24T10:55:17 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lltala/e-ner-roberta-base | 2023-10-24T15:17:03.000Z | [
"region:us"
] | lltala | null | null | 0 | 6 | 2023-10-24T15:17:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: doc_id
dtype: string
- name: id
dtype: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: tokens
sequence: string
splits:
- name: train
num_bytes: 6380478
num_examples: 840
- name: validation
num_bytes: 676038
num_examples: 90
download_size: 776863
dataset_size: 7056516
---
# Dataset Card for "e-ner-roberta-base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 839 | [
[
-0.044952392578125,
-0.03497314453125,
0.01451873779296875,
0.0019817352294921875,
-0.007175445556640625,
-0.0102996826171875,
0.00981903076171875,
-0.00795745849609375,
0.0677490234375,
0.03173828125,
-0.059417724609375,
-0.05902099609375,
-0.049285888671875,
... |
fatmahamad/Chatgptt | 2023-10-24T16:44:37.000Z | [
"region:us"
] | fatmahamad | null | null | 0 | 6 | 2023-10-24T16:42:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
octa-cba/codigo_procesal_laboral_etiquetas1 | 2023-10-24T19:00:01.000Z | [
"region:us"
] | octa-cba | null | null | 0 | 6 | 2023-10-24T18:59:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
justinphan3110/test_book | 2023-10-24T20:21:59.000Z | [
"region:us"
] | justinphan3110 | null | null | 0 | 6 | 2023-10-24T19:29:14 | ---
dataset_info:
features:
- name: name
dtype: string
- name: minhashes
sequence: binary
- name: behavior
dtype: string
splits:
- name: train
num_bytes: 1688460457
num_examples: 24
download_size: 161058929
dataset_size: 1688460457
---
# Dataset Card for "test_book"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 435 | [
[
-0.04193115234375,
-0.0161895751953125,
-0.005764007568359375,
-0.0023212432861328125,
-0.0057830810546875,
-0.007343292236328125,
0.01534271240234375,
-0.0015134811401367188,
0.0318603515625,
0.03131103515625,
-0.053375244140625,
-0.05401611328125,
-0.028274536... |
lucasbrandao/autotrain-data-llama-autotrain | 2023-10-27T18:19:48.000Z | [
"region:us"
] | lucasbrandao | null | null | 0 | 6 | 2023-10-24T19:37:20 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
tiendung/viet-mistral-parallel-data | 2023-10-24T22:54:19.000Z | [
"region:us"
] | tiendung | null | null | 0 | 6 | 2023-10-24T22:51:50 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
kjappelbaum/chemnlp-ord | 2023-10-26T07:17:17.000Z | [
"region:us"
] | kjappelbaum | null | null | 0 | 6 | 2023-10-25T07:19:52 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
w95/fin_inst | 2023-10-25T12:56:29.000Z | [
"region:us"
] | w95 | null | null | 0 | 6 | 2023-10-25T12:52:00 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Back-up/test-stsv-data | 2023-10-25T15:25:31.000Z | [
"region:us"
] | Back-up | null | null | 0 | 6 | 2023-10-25T14:00:52 | ---
dataset_info:
features:
- name: Answers
dtype: string
- name: Questions
dtype: string
splits:
- name: train
num_bytes: 104773.87782426778
num_examples: 496
download_size: 47625
dataset_size: 104773.87782426778
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "test-stsv-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 502 | [
[
-0.032623291015625,
-0.0182952880859375,
0.010162353515625,
0.0101470947265625,
-0.0242156982421875,
0.01526641845703125,
0.022430419921875,
0.00769805908203125,
0.047088623046875,
0.0228271484375,
-0.06390380859375,
-0.0557861328125,
-0.040130615234375,
-0.... |
Back-up/train-stsv-data | 2023-10-25T15:25:33.000Z | [
"region:us"
] | Back-up | null | null | 0 | 6 | 2023-10-25T14:01:13 | ---
dataset_info:
features:
- name: Answers
dtype: string
- name: Questions
dtype: string
splits:
- name: train
num_bytes: 506814.6867015707
num_examples: 2379
download_size: 206458
dataset_size: 506814.6867015707
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "train-stsv-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 503 | [
[
-0.0308990478515625,
0.0014371871948242188,
0.0158843994140625,
0.020050048828125,
-0.02178955078125,
0.01042938232421875,
0.01806640625,
0.00452423095703125,
0.049163818359375,
0.0213165283203125,
-0.06298828125,
-0.0479736328125,
-0.046112060546875,
-0.030... |
devrev/improved-dataset-2 | 2023-10-25T19:14:08.000Z | [
"region:us"
] | devrev | null | null | 0 | 6 | 2023-10-25T19:13:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 1073697.650692876
num_examples: 13104
- name: test
num_bytes: 268506.3493071241
num_examples: 3277
download_size: 429778
dataset_size: 1342204.0
---
# Dataset Card for "improved-dataset-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 604 | [
[
-0.034912109375,
-0.0185089111328125,
0.009613037109375,
0.0130462646484375,
-0.0161895751953125,
0.0013837814331054688,
0.015228271484375,
-0.0243682861328125,
0.05206298828125,
0.021636962890625,
-0.051788330078125,
-0.0328369140625,
-0.0543212890625,
-0.0... |
gimhanSandeeptha/ChatDoctor-HealthCareMagic-short | 2023-10-27T05:17:15.000Z | [
"region:us"
] | gimhanSandeeptha | null | null | 0 | 6 | 2023-10-26T01:36:21 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
fia24/banel_wit_postag_v1 | 2023-10-26T06:58:15.000Z | [
"region:us"
] | fia24 | null | null | 0 | 6 | 2023-10-26T06:57:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: Inflected_Word
dtype: string
- name: Lemma
dtype: string
- name: POS
dtype: string
- name: pos_label
dtype:
class_label:
names:
'0': অন্যান্য
'1': অব্যয়
'2': অব্যয়েরবিশেষণ
'3': ক্রিয়া
'4': ক্রিয়াবিশেষণ
'5': বিশেষণ
'6': বিশেষণেরবিশেষণ
'7': বিশেষ্য
'8': সর্বনাম
splits:
- name: train
num_bytes: 1380534.719008634
num_examples: 17882
- name: test
num_bytes: 172624.74173489018
num_examples: 2236
- name: val
num_bytes: 172547.53925647563
num_examples: 2235
download_size: 0
dataset_size: 1725707.0
---
# Dataset Card for "banel_wit_postag_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,034 | [
[
-0.03704833984375,
-0.0229339599609375,
-0.004673004150390625,
0.0159149169921875,
-0.0284423828125,
-0.01227569580078125,
0.028961181640625,
-0.0183258056640625,
0.056854248046875,
0.0264739990234375,
-0.068603515625,
-0.058685302734375,
-0.044677734375,
-0... |
HamdanXI/lj_speech_DifferentStructure_removedVocabs | 2023-10-26T07:24:35.000Z | [
"region:us"
] | HamdanXI | null | null | 0 | 6 | 2023-10-26T07:22:30 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 22050
- name: file
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1347808036.0
num_examples: 4620
- name: test
num_bytes: 487719584.0
num_examples: 1680
download_size: 1828316030
dataset_size: 1835527620.0
---
# Dataset Card for "lj_speech_DifferentStructure_removedVocabs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 688 | [
[
-0.03814697265625,
-0.055908203125,
-0.0027008056640625,
-0.0004405975341796875,
-0.0051422119140625,
0.0202789306640625,
-0.018218994140625,
-0.018218994140625,
0.063232421875,
0.0386962890625,
-0.053466796875,
-0.058929443359375,
-0.04290771484375,
-0.0235... |
Rocinante/bbq_cleaned | 2023-10-26T10:10:33.000Z | [
"region:us"
] | Rocinante | null | null | 0 | 6 | 2023-10-26T10:06:08 | ---
dataset_info:
features:
- name: output
dtype: string
- name: instruction
dtype: string
- name: data_source
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 50292
num_examples: 150
download_size: 28559
dataset_size: 50292
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "bbq_cleaned"
The source data comes from https://huggingface.co/datasets/lighteval/bbq_helm/ and was then manually selected.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 660 | [
[
-0.035614013671875,
-0.0322265625,
0.00614166259765625,
-0.01219940185546875,
-0.025115966796875,
0.0089874267578125,
0.004364013671875,
-0.02117919921875,
0.048828125,
0.0689697265625,
-0.04791259765625,
-0.047576904296875,
-0.015838623046875,
0.00662231445... |
JFaces/threads-squeez | 2023-10-26T14:16:21.000Z | [
"region:us"
] | JFaces | null | null | 0 | 6 | 2023-10-26T13:40:50 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
CJWeiss/lcr | 2023-10-26T17:10:08.000Z | [
"region:us"
] | CJWeiss | null | null | 0 | 6 | 2023-10-26T17:09:59 | ---
dataset_info:
features:
- name: Long Text
dtype: string
- name: Summary
dtype: string
splits:
- name: train
num_bytes: 82108819
num_examples: 2918
- name: test
num_bytes: 18916443
num_examples: 584
- name: valid
num_bytes: 12955974
num_examples: 389
download_size: 56044522
dataset_size: 113981236
---
# Dataset Card for "lcr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 513 | [
[
-0.042022705078125,
-0.0115509033203125,
0.0096893310546875,
0.0010805130004882812,
-0.01263427734375,
0.01183319091796875,
0.01546478271484375,
-0.0124053955078125,
0.03778076171875,
0.042694091796875,
-0.056976318359375,
-0.04718017578125,
-0.032440185546875,
... |
SummerSigh/AncientMNIST | 2023-10-26T18:06:58.000Z | [
"region:us"
] | SummerSigh | null | null | 0 | 6 | 2023-10-26T18:03:37 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Alpha
'1': Beta
'2': Chi
'3': Delta
'4': Epsilon
'5': Eta
'6': Gamma
'7': Iota
'8': Kappa
'9': Lambda
'10': LunateSigma
'11': Mu
'12': Nu
'13': Omega
'14': Omicron
'15': Phi
'16': Pi
'17': Psi
'18': Rho
'19': Tau
'20': Theta
'21': Upsilon
'22': Xi
'23': Zeta
splits:
- name: train
num_bytes: 309609553.26
num_examples: 205797
download_size: 217254607
dataset_size: 309609553.26
---
# Dataset Card for "AncientMNIST"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,019 | [
[
-0.04461669921875,
-0.0225982666015625,
0.01197052001953125,
-0.015655517578125,
-0.0224609375,
-0.0119171142578125,
0.0108489990234375,
-0.0188446044921875,
0.057464599609375,
0.04119873046875,
-0.038299560546875,
-0.0521240234375,
-0.040863037109375,
-0.02... |
mekaneeky/salt-llama-lgg-to-eng | 2023-10-26T18:53:04.000Z | [
"region:us"
] | mekaneeky | null | null | 0 | 6 | 2023-10-26T18:53:00 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: ID
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 4130369
num_examples: 23947
- name: dev
num_bytes: 85575
num_examples: 500
- name: test
num_bytes: 87440
num_examples: 500
download_size: 2324474
dataset_size: 4303384
---
# Dataset Card for "salt-llama-lgg-to-eng"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 670 | [
[
-0.0294647216796875,
-0.0035190582275390625,
0.0259552001953125,
0.025726318359375,
-0.032196044921875,
0.0222625732421875,
0.00881195068359375,
-0.0294342041015625,
0.059173583984375,
0.0177154541015625,
-0.048858642578125,
-0.062103271484375,
-0.05010986328125... |
fia24/filtered_lemma41kV0.0.1 | 2023-10-26T18:59:05.000Z | [
"region:us"
] | fia24 | null | null | 0 | 6 | 2023-10-26T18:58:59 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: Inflected_Word
dtype: string
- name: Lemma
dtype: string
splits:
- name: train
num_bytes: 1841860.2133993004
num_examples: 29267
- name: test
num_bytes: 230271.85980209926
num_examples: 3659
- name: val
num_bytes: 230208.92679860047
num_examples: 3658
download_size: 1233470
dataset_size: 2302341.0
---
# Dataset Card for "filtered_lemma41kV0.0.1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 766 | [
[
-0.0645751953125,
-0.007007598876953125,
0.01641845703125,
0.0099029541015625,
-0.04632568359375,
-0.0162353515625,
0.0227508544921875,
-0.0016660690307617188,
0.051422119140625,
0.06854248046875,
-0.07183837890625,
-0.05963134765625,
-0.047760009765625,
-0.... |
zelalt/arxiv-papers | 2023-10-26T21:27:45.000Z | [
"region:us"
] | zelalt | null | null | 0 | 6 | 2023-10-26T21:27:44 | ---
dataset_info:
features:
- name: chunk
dtype: string
- name: id
dtype: string
- name: title
dtype: string
- name: summary
dtype: string
- name: source
dtype: string
- name: authors
sequence: string
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 681551
num_examples: 423
download_size: 405180
dataset_size: 681551
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "arxiv-papers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 651 | [
[
-0.041259765625,
-0.01226806640625,
0.0168914794921875,
0.0084228515625,
-0.0137481689453125,
-0.0025959014892578125,
0.0310821533203125,
-0.00030732154846191406,
0.04547119140625,
0.036895751953125,
-0.027099609375,
-0.051116943359375,
-0.046783447265625,
-... |
zion84006/tencentdata_encodec | 2023-10-27T01:34:10.000Z | [
"region:us"
] | zion84006 | null | null | 0 | 6 | 2023-10-26T22:49:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: file_id
dtype: int64
- name: wav_id
dtype: int64
- name: instruction
dtype: string
- name: transcription
dtype: string
- name: src_encodec_0
sequence: int64
- name: src_encodec_1
sequence: int64
- name: src_encodec_2
sequence: int64
- name: src_encodec_3
sequence: int64
- name: src_encodec_4
sequence: int64
- name: src_encodec_5
sequence: int64
- name: src_encodec_6
sequence: int64
- name: src_encodec_7
sequence: int64
- name: tgt_encodec_0
sequence: int64
- name: tgt_encodec_1
sequence: int64
- name: tgt_encodec_2
sequence: int64
- name: tgt_encodec_3
sequence: int64
- name: tgt_encodec_4
sequence: int64
- name: tgt_encodec_5
sequence: int64
- name: tgt_encodec_6
sequence: int64
- name: tgt_encodec_7
sequence: int64
splits:
- name: train
num_bytes: 18590408182
num_examples: 266780
- name: valid
num_bytes: 528008494
num_examples: 7620
- name: test
num_bytes: 508564968
num_examples: 7620
download_size: 474697865
dataset_size: 19626981644
---
# Dataset Card for "tencentdata_encodec"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,486 | [
[
-0.045318603515625,
-0.0205230712890625,
0.0247650146484375,
0.0273895263671875,
-0.007427215576171875,
0.002197265625,
-0.0033855438232421875,
0.0020694732666015625,
0.0670166015625,
0.0380859375,
-0.047393798828125,
-0.06207275390625,
-0.035736083984375,
-... |
royzhong/asvs | 2023-10-27T03:02:02.000Z | [
"region:us"
] | royzhong | null | null | 0 | 6 | 2023-10-27T02:46:13 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
gaurav16/temples_dataset | 2023-10-27T03:22:25.000Z | [
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"art",
"region:us"
] | gaurav16 | null | null | 1 | 6 | 2023-10-27T03:15:21 | ---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- art
size_categories:
- 1M<n<10M
---
# Dataset Card for the Indian Temple Destruction Dataset
## Dataset Details
### Dataset Description
The Indian Temple Destruction Dataset provides information about historical temples in India that were destroyed, including the locations of these temples, the entities responsible for their destruction, and contact information for inquiries.
- **Curated by:** Gaurav Sinha
- **Funded by [optional]:** [Information Not Available]
- **Shared by [optional]:** [Information Not Available]
- **Language(s) (NLP):** English
- **License:** apache-2.0
### Dataset Sources [optional]
- **Repository:** [Link to the dataset repository]
- **Paper [optional]:** [Link to any associated research paper]
- **Demo [optional]:** [Link to a demo or usage example]
## Uses
### Direct Use
This dataset can be used for historical research, cultural preservation efforts, and to understand the history of temple destruction in India.
### Out-of-Scope Use
Misuse of this dataset for promoting hatred, violence, or discrimination is strictly out of scope.
## Dataset Structure
[Information Not Available]
## Dataset Creation
### Curation Rationale
The dataset was created to document historical information about the destruction of temples in India for research, education, and preservation purposes. It includes data from books authored by Sir Sita Ram Goel and contributions by Gaurav Sinha.
### Source Data
#### Data Collection and Processing
The data for this dataset was collected from historical records, scholarly research, and reputable sources, including books authored by Sir Sita Ram Goel. It also includes contributions by Gaurav Sinha. The dataset was carefully compiled to provide accurate and valuable information.
#### Who are the source data producers?
The source data was produced by historians, researchers, and scholars, including Sir Sita Ram Goel, who documented the destruction of temples in India. Contributions by Gaurav Sinha are also part of the dataset.
### Annotations [optional]
[Information Not Available]
## Bias, Risks, and Limitations
This dataset may describe historical events that could be sensitive to some communities. It is essential to use this data responsibly and with cultural sensitivity.
### Recommendations
Users should exercise caution when using this dataset to ensure that it is used for educational and research purposes and not for promoting hatred or discrimination.
## Citation [optional]
**BibTeX:**
[Information Not Available]
**APA:**
[Information Not Available]
## Glossary [optional]
[Information Not Available]
## More Information [optional]
[Information Not Available]
## Dataset Card Authors [optional]
Gaurav Sinha
## Dataset Card Contact
For inquiries related to this dataset, please contact [Your Email Address]. | 2,969 | [
[
-0.0208587646484375,
-0.053955078125,
-0.004833221435546875,
-0.0178680419921875,
-0.0157623291015625,
-0.0196990966796875,
-0.0031375885009765625,
-0.014007568359375,
0.015472412109375,
0.038177490234375,
-0.025482177734375,
-0.0187225341796875,
-0.036010742187... |
wisenut-nlp-team/FiD_korquad | 2023-10-27T03:55:28.000Z | [
"region:us"
] | wisenut-nlp-team | null | null | 0 | 6 | 2023-10-27T03:54:14 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: answer
dtype: string
- name: similar_contexts
sequence: string
splits:
- name: train
num_bytes: 2086185565
num_examples: 176102
- name: validation
num_bytes: 201792169
num_examples: 16683
download_size: 691780636
dataset_size: 2287977734
---
# Dataset Card for "FiD_korquad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 621 | [
[
-0.04998779296875,
-0.0308685302734375,
0.006313323974609375,
0.015045166015625,
-0.0227813720703125,
0.005634307861328125,
0.0293121337890625,
-0.001338958740234375,
0.05572509765625,
0.042724609375,
-0.0433349609375,
-0.050537109375,
-0.0380859375,
-0.0189... |
shi3z/OpenOrcaJapanese | 2023-10-28T02:50:27.000Z | [
"task_categories:table-question-answering",
"size_categories:100M<n<1B",
"language:ja",
"license:mit",
"region:us"
] | shi3z | null | null | 0 | 6 | 2023-10-27T06:15:27 | ---
license: mit
task_categories:
- table-question-answering
language:
- ja
size_categories:
- 100M<n<1B
---
This is a Japanese translation of the OpenOrca dataset.
https://huggingface.co/datasets/Open-Orca/OpenOrca
Translation work is still in progress; for now, this release covers roughly one fifth of the full OpenOrca dataset. Commercial use is permitted.
| 244 | [
[
-0.03363037109375,
-0.056732177734375,
-0.0107879638671875,
0.034576416015625,
-0.035430908203125,
-0.041656494140625,
0.01343536376953125,
-0.049591064453125,
0.058258056640625,
0.041290283203125,
-0.053314208984375,
-0.077392578125,
-0.036956787109375,
0.0... |
till0r/nlp-in-5-weeks-reviews | 2023-10-27T07:18:34.000Z | [
"region:us"
] | till0r | null | null | 0 | 6 | 2023-10-27T07:10:31 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
srushtibhavsar/HiwiData | 2023-10-27T10:15:33.000Z | [
"region:us"
] | srushtibhavsar | null | null | 0 | 6 | 2023-10-27T08:46:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
fia24/filtered_lemma41kV0.0.04 | 2023-10-27T11:20:22.000Z | [
"region:us"
] | fia24 | null | null | 0 | 6 | 2023-10-27T11:20:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: Inflected_Word
dtype: string
- name: Lemma
dtype: string
splits:
- name: train
num_bytes: 1665775.7149052797
num_examples: 26562
- name: test
num_bytes: 208268.99891576063
num_examples: 3321
- name: val
num_bytes: 208206.28617895974
num_examples: 3320
download_size: 1113260
dataset_size: 2082251.0
---
# Dataset Card for "filtered_lemma41kV0.0.04"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 767 | [
[
-0.063232421875,
-0.00511932373046875,
0.0211944580078125,
0.00849151611328125,
-0.0465087890625,
-0.0164031982421875,
0.0193634033203125,
-0.006458282470703125,
0.0482177734375,
0.06597900390625,
-0.065185546875,
-0.058013916015625,
-0.04510498046875,
-0.01... |
polyhedralai/tech_reports_mining | 2023-10-27T13:30:58.000Z | [
"region:us"
] | polyhedralai | null | null | 0 | 6 | 2023-10-27T13:30:21 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dariodematties/github-issues | 2023-10-27T15:06:19.000Z | [
"region:us"
] | dariodematties | null | null | 0 | 6 | 2023-10-27T15:06:17 | ---
dataset_info:
features:
- name: url
dtype: string
- name: repository_url
dtype: string
- name: labels_url
dtype: string
- name: comments_url
dtype: string
- name: events_url
dtype: string
- name: html_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: user
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: labels
list:
- name: id
dtype: int64
- name: node_id
dtype: string
- name: url
dtype: string
- name: name
dtype: string
- name: color
dtype: string
- name: default
dtype: bool
- name: description
dtype: string
- name: state
dtype: string
- name: locked
dtype: bool
- name: assignee
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: assignees
list:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: milestone
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: labels_url
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: number
dtype: int64
- name: title
dtype: string
- name: description
dtype: string
- name: creator
struct:
- name: login
dtype: string
- name: id
dtype: int64
- name: node_id
dtype: string
- name: avatar_url
dtype: string
- name: gravatar_id
dtype: string
- name: url
dtype: string
- name: html_url
dtype: string
- name: followers_url
dtype: string
- name: following_url
dtype: string
- name: gists_url
dtype: string
- name: starred_url
dtype: string
- name: subscriptions_url
dtype: string
- name: organizations_url
dtype: string
- name: repos_url
dtype: string
- name: events_url
dtype: string
- name: received_events_url
dtype: string
- name: type
dtype: string
- name: site_admin
dtype: bool
- name: open_issues
dtype: int64
- name: closed_issues
dtype: int64
- name: state
dtype: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: due_on
dtype: 'null'
- name: closed_at
dtype: 'null'
- name: comments
sequence: string
- name: created_at
dtype: timestamp[s]
- name: updated_at
dtype: timestamp[s]
- name: closed_at
dtype: timestamp[s]
- name: author_association
dtype: string
- name: active_lock_reason
dtype: 'null'
- name: body
dtype: string
- name: reactions
struct:
- name: url
dtype: string
- name: total_count
dtype: int64
- name: '+1'
dtype: int64
- name: '-1'
dtype: int64
- name: laugh
dtype: int64
- name: hooray
dtype: int64
- name: confused
dtype: int64
- name: heart
dtype: int64
- name: rocket
dtype: int64
- name: eyes
dtype: int64
- name: timeline_url
dtype: string
- name: performed_via_github_app
dtype: 'null'
- name: state_reason
dtype: string
- name: draft
dtype: bool
- name: pull_request
struct:
- name: url
dtype: string
- name: html_url
dtype: string
- name: diff_url
dtype: string
- name: patch_url
dtype: string
- name: merged_at
dtype: timestamp[s]
- name: is_pull_request
dtype: bool
splits:
- name: train
num_bytes: 21223797
num_examples: 3000
download_size: 6015181
dataset_size: 21223797
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 6,219 | [
[
-0.032012939453125,
-0.020904541015625,
0.012786865234375,
0.0157928466796875,
-0.00717926025390625,
0.0161590576171875,
0.00952911376953125,
-0.008697509765625,
0.07073974609375,
0.027191162109375,
-0.057403564453125,
-0.04693603515625,
-0.035675048828125,
... |
ekuhn/ds_rplanpy_category | 2023-10-27T16:06:14.000Z | [
"region:us"
] | ekuhn | null | null | 0 | 6 | 2023-10-27T16:06:08 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: num_rooms
dtype: int64
- name: img
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
splits:
- name: train
num_bytes: 30753308
num_examples: 36850
- name: val
num_bytes: 7686278
num_examples: 9213
download_size: 19877649
dataset_size: 38439586
---
# Dataset Card for "ds_rplanpy_category"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 560 | [
[
-0.0496826171875,
-0.00449371337890625,
0.0019130706787109375,
0.014556884765625,
-0.022796630859375,
0.0122528076171875,
0.01934814453125,
0.0009050369262695312,
0.07684326171875,
0.04205322265625,
-0.0743408203125,
-0.04833984375,
-0.03851318359375,
0.0009... |
Ka4on/mri | 2023-10-28T01:03:33.000Z | [
"region:us"
] | Ka4on | null | null | 0 | 6 | 2023-10-28T01:02:50 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
phanvancongthanh/enamine_natural_products | 2023-10-29T01:52:56.000Z | [
"region:us"
] | phanvancongthanh | null | null | 0 | 6 | 2023-10-28T07:46:15 | ---
dataset_info:
features:
- name: smiles
dtype: string
splits:
- name: train
num_bytes: 8749525383
num_examples: 156999999
download_size: 3533601644
dataset_size: 8749525383
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "enamine_natural_products"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 468 | [
[
-0.04010009765625,
-0.0361328125,
0.003154754638671875,
0.0187835693359375,
-0.0088958740234375,
-0.0015735626220703125,
-0.005523681640625,
-0.036590576171875,
0.0699462890625,
0.042633056640625,
-0.05169677734375,
-0.05438232421875,
-0.034027099609375,
0.0... |
Sharathhebbar24/app_reviews_modded | 2023-10-28T11:35:57.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"app_reviews",
"region:us"
] | Sharathhebbar24 | null | null | 0 | 6 | 2023-10-28T11:22:02 | ---
language:
- en
license: mit
size_categories:
- 100K<n<1M
task_categories:
- text-classification
- text-generation
- text2text-generation
- question-answering
pretty_name: app_reviews
dataset_info:
features:
- name: package_name
dtype: string
- name: review
dtype: string
- name: date
dtype: string
- name: star
dtype:
class_label:
names:
'0': '1'
'1': '2'
'2': '3'
'3': '4'
'4': '5'
- name: products
dtype: string
splits:
- name: train
num_bytes: 34076493.880728304
num_examples: 259258
- name: test
num_bytes: 3786350.1192716924
num_examples: 28807
download_size: 16769656
dataset_size: 37862844.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- app_reviews
---
The modded version of https://huggingface.co/datasets/app_reviews
## Dataset Card for app_reviews_modded
### Dataset Summary
It is a large dataset of Android applications belonging to 23 different app categories, which provides an overview of the types of feedback users report on the apps and documents the evolution of the related code metrics. The dataset covers about 395 applications from the F-Droid repository, including around 600 versions and 280,000 user reviews (extracted with dedicated text-mining approaches).
### Supported Tasks and Leaderboards
The dataset comprises 395 different apps from the F-Droid repository, including code-quality indicators for 629 versions of these apps. It also includes app reviews related to each of these versions, automatically categorized by type of user feedback from a software maintenance and evolution perspective.
**Languages**
The dataset is monolingual; all review messages are in English.
**The `star` field serves as the class label.**
### Data Fields
* package_name: name of the software application package
* review: the text of the user's review
* date: the date on which the user posted the review
* star: the rating (1–5) the user gave the application
* products: name of the product/app
### Data Splits
- Training data: 259258
- Testing data: 28807 | 2,215 | [
[
-0.049285888671875,
-0.03045654296875,
-0.004215240478515625,
0.018951416015625,
-0.02008056640625,
0.015350341796875,
-0.00991058349609375,
-0.038543701171875,
0.03472900390625,
0.049224853515625,
-0.053009033203125,
-0.053802490234375,
-0.01241302490234375,
... |
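The split sizes quoted in the app_reviews_modded card above imply a roughly 90/10 train/test partition; a quick self-contained check (the counts are copied from the card, not recomputed from the data):

```python
# Split sizes as reported in the app_reviews_modded dataset card.
train_examples = 259_258
test_examples = 28_807

total = train_examples + test_examples
train_fraction = train_examples / total

print(f"total examples: {total}")               # 288065
print(f"train fraction: {train_fraction:.4f}")  # 0.9000, i.e. a 90/10 split
```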
Arham-Imran/test | 2023-10-28T22:48:28.000Z | [
"region:us"
] | Arham-Imran | null | null | 0 | 6 | 2023-10-28T14:03:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
gianma/test_es | 2023-10-28T15:37:38.000Z | [
"region:us"
] | gianma | null | null | 0 | 6 | 2023-10-28T15:37:33 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: prompt_idx
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4333219
num_examples: 234
- name: test
num_bytes: 491267
num_examples: 27
download_size: 2261791
dataset_size: 4824486
---
# Dataset Card for "test_es"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 655 | [
[
-0.045379638671875,
-0.0291595458984375,
0.0134124755859375,
0.00904083251953125,
-0.002162933349609375,
-0.0014028549194335938,
0.0107574462890625,
-0.00782012939453125,
0.057586669921875,
0.0277099609375,
-0.06591796875,
-0.055084228515625,
-0.037139892578125,... |
Brandoko/Instruct-Recharts-v2 | 2023-10-28T15:44:21.000Z | [
"region:us"
] | Brandoko | null | null | 0 | 6 | 2023-10-28T15:44:20 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1453192
num_examples: 623
download_size: 409363
dataset_size: 1453192
---
# Dataset Card for "Instruct-Recharts-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 488 | [
[
-0.03369140625,
-0.005977630615234375,
0.018310546875,
0.004703521728515625,
-0.01538848876953125,
0.0114288330078125,
0.0391845703125,
-0.023590087890625,
0.047119140625,
0.04888916015625,
-0.06182861328125,
-0.04229736328125,
-0.03631591796875,
-0.02206420... |
akkasi/metooma | 2023-10-28T19:54:16.000Z | [
"region:us"
] | akkasi | null | null | 0 | 6 | 2023-10-28T19:54:14 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: TweetId
dtype: string
- name: labels
sequence: float64
- name: label2idx
dtype: string
- name: idx2label
dtype: string
splits:
- name: train
num_bytes: 2991750
num_examples: 7978
- name: test
num_bytes: 748125
num_examples: 1995
download_size: 195958
dataset_size: 3739875
---
# Dataset Card for "metooma_new"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 654 | [
[
-0.044586181640625,
-0.02545166015625,
0.0147857666015625,
0.0094146728515625,
-0.017608642578125,
0.0015087127685546875,
0.016998291015625,
-0.0121002197265625,
0.07220458984375,
0.0380859375,
-0.061309814453125,
-0.05438232421875,
-0.0341796875,
-0.0204162... |
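The metooma schema above stores multi-label targets as a float sequence plus `label2idx`/`idx2label` mapping strings. A hedged sketch of decoding one multi-hot row — the card only says the mappings are strings, so the JSON format and the label names below are assumptions for illustration:

```python
import json

# Hypothetical idx2label mapping string; JSON encoding and these label
# names are assumptions, not taken from the dataset card.
idx2label = json.loads('{"0": "Support", "1": "Opinion", "2": "Directed_Hate"}')

# One multi-hot label vector, shaped like the card's `labels` float sequence.
labels = [1.0, 0.0, 1.0]

# Keep the label names whose positions are switched on.
active = [idx2label[str(i)] for i, v in enumerate(labels) if v == 1.0]
print(active)  # ['Support', 'Directed_Hate']
```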
umangapatel123/mashq | 2023-10-28T22:50:56.000Z | [
"region:us"
] | umangapatel123 | null | null | 0 | 6 | 2023-10-28T22:11:40 | ---
# For reference on dataset card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | 4,563 | [
[
-0.04034423828125,
-0.0419921875,
0.009765625,
0.0178070068359375,
-0.0300445556640625,
-0.00893402099609375,
-0.0026874542236328125,
-0.048431396484375,
0.043212890625,
0.059478759765625,
-0.05938720703125,
-0.069580078125,
-0.042205810546875,
0.00993347167... |
seandavidreed/rubiks_cube_segmentation | 2023-10-31T18:38:46.000Z | [
"license:apache-2.0",
"region:us"
] | seandavidreed | null | null | 0 | 6 | 2023-10-29T02:14:02 | ---
license: apache-2.0
---
#### Import and Format for Google Colab
```
from pathlib import Path

path = Path("rubiks_cube_segmentation")
!git clone https://huggingface.co/datasets/seandavidreed/rubiks_cube_segmentation $path
!python3 rubiks_cube_segmentation/format_colab.py $path
``` | 260 | [
[
-0.00788116455078125,
-0.0682373046875,
0.0206298828125,
0.048828125,
-0.0006666183471679688,
0.025482177734375,
0.0202178955078125,
0.0267791748046875,
0.034637451171875,
0.0111541748046875,
-0.04046630859375,
-0.0169830322265625,
-0.0288238525390625,
-0.00... |
Raspberry-ai/high_end_product_image_search_embed_v1 | 2023-10-29T02:37:57.000Z | [
"region:us"
] | Raspberry-ai | null | null | 0 | 6 | 2023-10-29T02:37:55 | ---
dataset_info:
features:
- name: image_path
dtype: string
- name: caption
dtype: string
- name: article_id
dtype: int64
splits:
- name: train
num_bytes: 2506195
num_examples: 5385
download_size: 878805
dataset_size: 2506195
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "high_end_product_image_search_embed_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 545 | [
[
-0.04840087890625,
-0.039031982421875,
0.00855255126953125,
0.025360107421875,
-0.0205078125,
-0.00949859619140625,
0.018890380859375,
-0.0034427642822265625,
0.06884765625,
0.03466796875,
-0.05364990234375,
-0.07110595703125,
-0.045562744140625,
-0.03131103... |
yashtiwari/fleurs-hi-en-ST | 2023-10-29T07:06:51.000Z | [
"region:us"
] | yashtiwari | null | null | 0 | 6 | 2023-10-29T06:54:39 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: hindi
dtype: string
- name: english
dtype: string
- name: audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
splits:
- name: train
num_bytes: 1286250983
num_examples: 876
download_size: 824653765
dataset_size: 1286250983
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "fleurs-hi-en-ST"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
This is a dataset for Hindi-to-English speech-to-text translation. It was built from the FLEURS and FLORES datasets. | 787 | [
[
-0.022918701171875,
-0.05072021484375,
-0.003894805908203125,
0.0199737548828125,
-0.01445770263671875,
0.0110015869140625,
-0.01561737060546875,
-0.0284881591796875,
0.061767578125,
0.040802001953125,
-0.044189453125,
-0.0396728515625,
-0.06365966796875,
0.... |
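The fleurs-hi-en-ST card above stores each audio clip as a struct of raw samples (`array`) plus a `sampling_rate`, so clip duration follows directly from those two fields. A minimal sketch with a synthetic one-second record — the field names follow the card's schema, but the sample values are made up:

```python
# A synthetic record shaped like the card's audio struct:
# {"array": [...], "path": ..., "sampling_rate": ...}
record = {
    "audio": {
        "array": [0.0] * 16_000,   # 16,000 samples of silence (made-up data)
        "path": "example.wav",     # hypothetical path
        "sampling_rate": 16_000,   # samples per second
    }
}

# Duration in seconds = number of samples / samples per second.
duration = len(record["audio"]["array"]) / record["audio"]["sampling_rate"]
print(f"duration: {duration:.2f} s")  # 1.00 s
```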
chriamue/bird-species-dataset | 2023-10-29T18:36:35.000Z | [
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"biology",
"region:us"
] | chriamue | A dataset of bird species downloaded from Kaggle. | @TECHREPORT{gpiosenka/100-bird-species,
author = {gpiosenka},
title = {BIRDS 525 SPECIES- IMAGE CLASSIFICATION},
institution = {},
year = {2023}
} | 0 | 6 | 2023-10-29T09:20:19 | ---
language:
- en
license: cc0-1.0
size_categories:
- 1K<n<10K
task_categories:
- image-classification
pretty_name: Bird Species
tags:
- biology
dataset_info:
config_name: bird_species_dataset
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': ABBOTTS BABBLER
'1': ABBOTTS BOOBY
'2': ABYSSINIAN GROUND HORNBILL
'3': AFRICAN CROWNED CRANE
'4': AFRICAN EMERALD CUCKOO
'5': AFRICAN FIREFINCH
'6': AFRICAN OYSTER CATCHER
'7': AFRICAN PIED HORNBILL
'8': AFRICAN PYGMY GOOSE
'9': ALBATROSS
'10': ALBERTS TOWHEE
'11': ALEXANDRINE PARAKEET
'12': ALPINE CHOUGH
'13': ALTAMIRA YELLOWTHROAT
'14': AMERICAN AVOCET
'15': AMERICAN BITTERN
'16': AMERICAN COOT
'17': AMERICAN DIPPER
'18': AMERICAN FLAMINGO
'19': AMERICAN GOLDFINCH
'20': AMERICAN KESTREL
'21': AMERICAN PIPIT
'22': AMERICAN REDSTART
'23': AMERICAN ROBIN
'24': AMERICAN WIGEON
'25': AMETHYST WOODSTAR
'26': ANDEAN GOOSE
'27': ANDEAN LAPWING
'28': ANDEAN SISKIN
'29': ANHINGA
'30': ANIANIAU
'31': ANNAS HUMMINGBIRD
'32': ANTBIRD
'33': ANTILLEAN EUPHONIA
'34': APAPANE
'35': APOSTLEBIRD
'36': ARARIPE MANAKIN
'37': ASHY STORM PETREL
'38': ASHY THRUSHBIRD
'39': ASIAN CRESTED IBIS
'40': ASIAN DOLLARD BIRD
'41': ASIAN GREEN BEE EATER
'42': ASIAN OPENBILL STORK
'43': AUCKLAND SHAQ
'44': AUSTRAL CANASTERO
'45': AUSTRALASIAN FIGBIRD
'46': AVADAVAT
'47': AZARAS SPINETAIL
'48': AZURE BREASTED PITTA
'49': AZURE JAY
'50': AZURE TANAGER
'51': AZURE TIT
'52': BAIKAL TEAL
'53': BALD EAGLE
'54': BALD IBIS
'55': BALI STARLING
'56': BALTIMORE ORIOLE
'57': BANANAQUIT
'58': BAND TAILED GUAN
'59': BANDED BROADBILL
'60': BANDED PITA
'61': BANDED STILT
'62': BAR-TAILED GODWIT
'63': BARN OWL
'64': BARN SWALLOW
'65': BARRED PUFFBIRD
'66': BARROWS GOLDENEYE
'67': BAY-BREASTED WARBLER
'68': BEARDED BARBET
'69': BEARDED BELLBIRD
'70': BEARDED REEDLING
'71': BELTED KINGFISHER
'72': BIRD OF PARADISE
'73': BLACK AND YELLOW BROADBILL
'74': BLACK BAZA
'75': BLACK BREASTED PUFFBIRD
'76': BLACK COCKATO
'77': BLACK FACED SPOONBILL
'78': BLACK FRANCOLIN
'79': BLACK HEADED CAIQUE
'80': BLACK NECKED STILT
'81': BLACK SKIMMER
'82': BLACK SWAN
'83': BLACK TAIL CRAKE
'84': BLACK THROATED BUSHTIT
'85': BLACK THROATED HUET
'86': BLACK THROATED WARBLER
'87': BLACK VENTED SHEARWATER
'88': BLACK VULTURE
'89': BLACK-CAPPED CHICKADEE
'90': BLACK-NECKED GREBE
'91': BLACK-THROATED SPARROW
'92': BLACKBURNIAM WARBLER
'93': BLONDE CRESTED WOODPECKER
'94': BLOOD PHEASANT
'95': BLUE COAU
'96': BLUE DACNIS
'97': BLUE GRAY GNATCATCHER
'98': BLUE GROSBEAK
'99': BLUE GROUSE
'100': BLUE HERON
'101': BLUE MALKOHA
'102': BLUE THROATED PIPING GUAN
'103': BLUE THROATED TOUCANET
'104': BOBOLINK
'105': BORNEAN BRISTLEHEAD
'106': BORNEAN LEAFBIRD
'107': BORNEAN PHEASANT
'108': BRANDT CORMARANT
'109': BREWERS BLACKBIRD
'110': BROWN CREPPER
'111': BROWN HEADED COWBIRD
'112': BROWN NOODY
'113': BROWN THRASHER
'114': BUFFLEHEAD
'115': BULWERS PHEASANT
'116': BURCHELLS COURSER
'117': BUSH TURKEY
'118': CAATINGA CACHOLOTE
'119': CABOTS TRAGOPAN
'120': CACTUS WREN
'121': CALIFORNIA CONDOR
'122': CALIFORNIA GULL
'123': CALIFORNIA QUAIL
'124': CAMPO FLICKER
'125': CANARY
'126': CANVASBACK
'127': CAPE GLOSSY STARLING
'128': CAPE LONGCLAW
'129': CAPE MAY WARBLER
'130': CAPE ROCK THRUSH
'131': CAPPED HERON
'132': CAPUCHINBIRD
'133': CARMINE BEE-EATER
'134': CASPIAN TERN
'135': CASSOWARY
'136': CEDAR WAXWING
'137': CERULEAN WARBLER
'138': CHARA DE COLLAR
'139': CHATTERING LORY
'140': CHESTNET BELLIED EUPHONIA
'141': CHESTNUT WINGED CUCKOO
'142': CHINESE BAMBOO PARTRIDGE
'143': CHINESE POND HERON
'144': CHIPPING SPARROW
'145': CHUCAO TAPACULO
'146': CHUKAR PARTRIDGE
'147': CINNAMON ATTILA
'148': CINNAMON FLYCATCHER
'149': CINNAMON TEAL
'150': CLARKS GREBE
'151': CLARKS NUTCRACKER
'152': COCK OF THE ROCK
'153': COCKATOO
'154': COLLARED ARACARI
'155': COLLARED CRESCENTCHEST
'156': COMMON FIRECREST
'157': COMMON GRACKLE
'158': COMMON HOUSE MARTIN
'159': COMMON IORA
'160': COMMON LOON
'161': COMMON POORWILL
'162': COMMON STARLING
'163': COPPERSMITH BARBET
'164': COPPERY TAILED COUCAL
'165': CRAB PLOVER
'166': CRANE HAWK
'167': CREAM COLORED WOODPECKER
'168': CRESTED AUKLET
'169': CRESTED CARACARA
'170': CRESTED COUA
'171': CRESTED FIREBACK
'172': CRESTED KINGFISHER
'173': CRESTED NUTHATCH
'174': CRESTED OROPENDOLA
'175': CRESTED SERPENT EAGLE
'176': CRESTED SHRIKETIT
'177': CRESTED WOOD PARTRIDGE
'178': CRIMSON CHAT
'179': CRIMSON SUNBIRD
'180': CROW
'181': CUBAN TODY
'182': CUBAN TROGON
'183': CURL CRESTED ARACURI
'184': D-ARNAUDS BARBET
'185': DALMATIAN PELICAN
'186': DARJEELING WOODPECKER
'187': DARK EYED JUNCO
'188': DAURIAN REDSTART
'189': DEMOISELLE CRANE
'190': DOUBLE BARRED FINCH
'191': DOUBLE BRESTED CORMARANT
'192': DOUBLE EYED FIG PARROT
'193': DOWNY WOODPECKER
'194': DUNLIN
'195': DUSKY LORY
'196': DUSKY ROBIN
'197': EARED PITA
'198': EASTERN BLUEBIRD
'199': EASTERN BLUEBONNET
'200': EASTERN GOLDEN WEAVER
'201': EASTERN MEADOWLARK
'202': EASTERN ROSELLA
'203': EASTERN TOWEE
'204': EASTERN WIP POOR WILL
'205': EASTERN YELLOW ROBIN
'206': ECUADORIAN HILLSTAR
'207': EGYPTIAN GOOSE
'208': ELEGANT TROGON
'209': ELLIOTS PHEASANT
'210': EMERALD TANAGER
'211': EMPEROR PENGUIN
'212': EMU
'213': ENGGANO MYNA
'214': EURASIAN BULLFINCH
'215': EURASIAN GOLDEN ORIOLE
'216': EURASIAN MAGPIE
'217': EUROPEAN GOLDFINCH
'218': EUROPEAN TURTLE DOVE
'219': EVENING GROSBEAK
'220': FAIRY BLUEBIRD
'221': FAIRY PENGUIN
'222': FAIRY TERN
'223': FAN TAILED WIDOW
'224': FASCIATED WREN
'225': FIERY MINIVET
'226': FIORDLAND PENGUIN
'227': FIRE TAILLED MYZORNIS
'228': FLAME BOWERBIRD
'229': FLAME TANAGER
'230': FOREST WAGTAIL
'231': FRIGATE
'232': FRILL BACK PIGEON
'233': GAMBELS QUAIL
'234': GANG GANG COCKATOO
'235': GILA WOODPECKER
'236': GILDED FLICKER
'237': GLOSSY IBIS
'238': GO AWAY BIRD
'239': GOLD WING WARBLER
'240': GOLDEN BOWER BIRD
'241': GOLDEN CHEEKED WARBLER
'242': GOLDEN CHLOROPHONIA
'243': GOLDEN EAGLE
'244': GOLDEN PARAKEET
'245': GOLDEN PHEASANT
'246': GOLDEN PIPIT
'247': GOULDIAN FINCH
'248': GRANDALA
'249': GRAY CATBIRD
'250': GRAY KINGBIRD
'251': GRAY PARTRIDGE
'252': GREAT ARGUS
'253': GREAT GRAY OWL
'254': GREAT JACAMAR
'255': GREAT KISKADEE
'256': GREAT POTOO
'257': GREAT TINAMOU
'258': GREAT XENOPS
'259': GREATER PEWEE
'260': GREATER PRAIRIE CHICKEN
'261': GREATOR SAGE GROUSE
'262': GREEN BROADBILL
'263': GREEN JAY
'264': GREEN MAGPIE
'265': GREEN WINGED DOVE
'266': GREY CUCKOOSHRIKE
'267': GREY HEADED CHACHALACA
'268': GREY HEADED FISH EAGLE
'269': GREY PLOVER
'270': GROVED BILLED ANI
'271': GUINEA TURACO
'272': GUINEAFOWL
'273': GURNEYS PITTA
'274': GYRFALCON
'275': HAMERKOP
'276': HARLEQUIN DUCK
'277': HARLEQUIN QUAIL
'278': HARPY EAGLE
'279': HAWAIIAN GOOSE
'280': HAWFINCH
'281': HELMET VANGA
'282': HEPATIC TANAGER
'283': HIMALAYAN BLUETAIL
'284': HIMALAYAN MONAL
'285': HOATZIN
'286': HOODED MERGANSER
'287': HOOPOES
'288': HORNED GUAN
'289': HORNED LARK
'290': HORNED SUNGEM
'291': HOUSE FINCH
'292': HOUSE SPARROW
'293': HYACINTH MACAW
'294': IBERIAN MAGPIE
'295': IBISBILL
'296': IMPERIAL SHAQ
'297': INCA TERN
'298': INDIAN BUSTARD
'299': INDIAN PITTA
'300': INDIAN ROLLER
'301': INDIAN VULTURE
'302': INDIGO BUNTING
'303': INDIGO FLYCATCHER
'304': INLAND DOTTEREL
'305': IVORY BILLED ARACARI
'306': IVORY GULL
'307': IWI
'308': JABIRU
'309': JACK SNIPE
'310': JACOBIN PIGEON
'311': JANDAYA PARAKEET
'312': JAPANESE ROBIN
'313': JAVA SPARROW
'314': JOCOTOCO ANTPITTA
'315': KAGU
'316': KAKAPO
'317': KILLDEAR
'318': KING EIDER
'319': KING VULTURE
'320': KIWI
'321': KNOB BILLED DUCK
'322': KOOKABURRA
'323': LARK BUNTING
'324': LAUGHING GULL
'325': LAZULI BUNTING
'326': LESSER ADJUTANT
'327': LILAC ROLLER
'328': LIMPKIN
'329': LITTLE AUK
'330': LOGGERHEAD SHRIKE
'331': LONG-EARED OWL
'332': LOONEY BIRDS
'333': LUCIFER HUMMINGBIRD
'334': MAGPIE GOOSE
'335': MALABAR HORNBILL
'336': MALACHITE KINGFISHER
'337': MALAGASY WHITE EYE
'338': MALEO
'339': MALLARD DUCK
'340': MANDRIN DUCK
'341': MANGROVE CUCKOO
'342': MARABOU STORK
'343': MASKED BOBWHITE
'344': MASKED BOOBY
'345': MASKED LAPWING
'346': MCKAYS BUNTING
'347': MERLIN
'348': MIKADO PHEASANT
'349': MILITARY MACAW
'350': MOURNING DOVE
'351': MYNA
'352': NICOBAR PIGEON
'353': NOISY FRIARBIRD
'354': NORTHERN BEARDLESS TYRANNULET
'355': NORTHERN CARDINAL
'356': NORTHERN FLICKER
'357': NORTHERN FULMAR
'358': NORTHERN GANNET
'359': NORTHERN GOSHAWK
'360': NORTHERN JACANA
'361': NORTHERN MOCKINGBIRD
'362': NORTHERN PARULA
'363': NORTHERN RED BISHOP
'364': NORTHERN SHOVELER
'365': OCELLATED TURKEY
'366': OILBIRD
'367': OKINAWA RAIL
'368': ORANGE BREASTED TROGON
'369': ORANGE BRESTED BUNTING
'370': ORIENTAL BAY OWL
'371': ORNATE HAWK EAGLE
'372': OSPREY
'373': OSTRICH
'374': OVENBIRD
'375': OYSTER CATCHER
'376': PAINTED BUNTING
'377': PALILA
'378': PALM NUT VULTURE
'379': PARADISE TANAGER
'380': PARAKETT AUKLET
'381': PARUS MAJOR
'382': PATAGONIAN SIERRA FINCH
'383': PEACOCK
'384': PEREGRINE FALCON
'385': PHAINOPEPLA
'386': PHILIPPINE EAGLE
'387': PINK ROBIN
'388': PLUSH CRESTED JAY
'389': POMARINE JAEGER
'390': PUFFIN
'391': PUNA TEAL
'392': PURPLE FINCH
'393': PURPLE GALLINULE
'394': PURPLE MARTIN
'395': PURPLE SWAMPHEN
'396': PYGMY KINGFISHER
'397': PYRRHULOXIA
'398': QUETZAL
'399': RAINBOW LORIKEET
'400': RAZORBILL
'401': RED BEARDED BEE EATER
'402': RED BELLIED PITTA
'403': RED BILLED TROPICBIRD
'404': RED BROWED FINCH
'405': RED CROSSBILL
'406': RED FACED CORMORANT
'407': RED FACED WARBLER
'408': RED FODY
'409': RED HEADED DUCK
'410': RED HEADED WOODPECKER
'411': RED KNOT
'412': RED LEGGED HONEYCREEPER
'413': RED NAPED TROGON
'414': RED SHOULDERED HAWK
'415': RED TAILED HAWK
'416': RED TAILED THRUSH
'417': RED WINGED BLACKBIRD
'418': RED WISKERED BULBUL
'419': REGENT BOWERBIRD
'420': RING-NECKED PHEASANT
'421': ROADRUNNER
'422': ROCK DOVE
'423': ROSE BREASTED COCKATOO
'424': ROSE BREASTED GROSBEAK
'425': ROSEATE SPOONBILL
'426': ROSY FACED LOVEBIRD
'427': ROUGH LEG BUZZARD
'428': ROYAL FLYCATCHER
'429': RUBY CROWNED KINGLET
'430': RUBY THROATED HUMMINGBIRD
'431': RUDDY SHELDUCK
'432': RUDY KINGFISHER
'433': RUFOUS KINGFISHER
'434': RUFOUS TREPE
'435': RUFUOS MOTMOT
'436': SAMATRAN THRUSH
'437': SAND MARTIN
'438': SANDHILL CRANE
'439': SATYR TRAGOPAN
'440': SAYS PHOEBE
'441': SCARLET CROWNED FRUIT DOVE
'442': SCARLET FACED LIOCICHLA
'443': SCARLET IBIS
'444': SCARLET MACAW
'445': SCARLET TANAGER
'446': SHOEBILL
'447': SHORT BILLED DOWITCHER
'448': SMITHS LONGSPUR
'449': SNOW GOOSE
'450': SNOW PARTRIDGE
'451': SNOWY EGRET
'452': SNOWY OWL
'453': SNOWY PLOVER
'454': SNOWY SHEATHBILL
'455': SORA
'456': SPANGLED COTINGA
'457': SPLENDID WREN
'458': SPOON BILED SANDPIPER
'459': SPOTTED CATBIRD
'460': SPOTTED WHISTLING DUCK
'461': SQUACCO HERON
'462': SRI LANKA BLUE MAGPIE
'463': STEAMER DUCK
'464': STORK BILLED KINGFISHER
'465': STRIATED CARACARA
'466': STRIPED OWL
'467': STRIPPED MANAKIN
'468': STRIPPED SWALLOW
'469': SUNBITTERN
'470': SUPERB STARLING
'471': SURF SCOTER
'472': SWINHOES PHEASANT
'473': TAILORBIRD
'474': TAIWAN MAGPIE
'475': TAKAHE
'476': TASMANIAN HEN
'477': TAWNY FROGMOUTH
'478': TEAL DUCK
'479': TIT MOUSE
'480': TOUCHAN
'481': TOWNSENDS WARBLER
'482': TREE SWALLOW
'483': TRICOLORED BLACKBIRD
'484': TROPICAL KINGBIRD
'485': TRUMPTER SWAN
'486': TURKEY VULTURE
'487': TURQUOISE MOTMOT
'488': UMBRELLA BIRD
'489': VARIED THRUSH
'490': VEERY
'491': VENEZUELIAN TROUPIAL
'492': VERDIN
'493': VERMILION FLYCATHER
'494': VICTORIA CROWNED PIGEON
'495': VIOLET BACKED STARLING
'496': VIOLET CUCKOO
'497': VIOLET GREEN SWALLOW
'498': VIOLET TURACO
'499': VISAYAN HORNBILL
'500': VULTURINE GUINEAFOWL
'501': WALL CREAPER
'502': WATTLED CURASSOW
'503': WATTLED LAPWING
'504': WHIMBREL
'505': WHITE BREASTED WATERHEN
'506': WHITE BROWED CRAKE
'507': WHITE CHEEKED TURACO
'508': WHITE CRESTED HORNBILL
'509': WHITE EARED HUMMINGBIRD
'510': WHITE NECKED RAVEN
'511': WHITE TAILED TROPIC
'512': WHITE THROATED BEE EATER
'513': WILD TURKEY
'514': WILLOW PTARMIGAN
'515': WILSONS BIRD OF PARADISE
'516': WOOD DUCK
'517': WOOD THRUSH
'518': WOODLAND KINGFISHER
'519': WRENTIT
'520': YELLOW BELLIED FLOWERPECKER
'521': YELLOW BREASTED CHAT
'522': YELLOW CACIQUE
'523': YELLOW HEADED BLACKBIRD
'524': ZEBRA DOVE
splits:
- name: train
num_bytes: 4225020
num_examples: 84635
- name: validation
num_bytes: 125894
num_examples: 2625
- name: test
num_bytes: 123269
num_examples: 2625
download_size: 4474183
dataset_size: 4474183
---
# Dataset Card for "Bird Species"
## Dataset Summary
The dataset covers 525 bird species with a total of 84,635 training images, 2,625 test images, and 2,625 validation images, all 224×224×3 color images in JPG format.
The dataset is sourced from Kaggle and can be found [here](https://www.kaggle.com/datasets/gpiosenka/100-bird-species).
| 18,036 | [
[
-0.044708251953125,
0.0028667449951171875,
-0.012481689453125,
0.0173187255859375,
-0.01483917236328125,
-0.01459503173828125,
0.0082855224609375,
-0.032318115234375,
0.00341796875,
0.025421142578125,
-0.029083251953125,
-0.03790283203125,
-0.034759521484375,
... |
toilaluan/ig_rewarding_db_v4 | 2023-10-29T18:58:49.000Z | [
"region:us"
] | toilaluan | null | null | 0 | 6 | 2023-10-29T18:58:35 | ---
dataset_info:
features:
- name: image
dtype: image
- name: topic
dtype: string
- name: prompt
dtype: string
- name: request_id
dtype: int64
- name: model_type
dtype: string
splits:
- name: train
num_bytes: 330547445.0
num_examples: 4500
download_size: 340509190
dataset_size: 330547445.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ig_rewarding_db_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 602 | [
[
-0.03765869140625,
-0.0029010772705078125,
0.006153106689453125,
0.024169921875,
-0.018524169921875,
0.001911163330078125,
0.0297088623046875,
-0.007843017578125,
0.058929443359375,
0.03192138671875,
-0.0723876953125,
-0.05462646484375,
-0.035888671875,
0.00... |
mayur456/court_4_doc | 2023-10-30T04:47:11.000Z | [
"region:us"
] | mayur456 | null | null | 0 | 6 | 2023-10-30T04:44:27 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
anonymouse03052002/val | 2023-10-30T06:35:35.000Z | [
"region:us"
] | anonymouse03052002 | null | null | 0 | 6 | 2023-10-30T05:04:57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 267482.34016393445
num_examples: 439
- name: validation
num_bytes: 29855.659836065573
num_examples: 49
download_size: 0
dataset_size: 297338.0
---
# Dataset Card for "val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 605 | [
[
-0.045623779296875,
-0.0226593017578125,
0.01059722900390625,
0.0102081298828125,
-0.009307861328125,
-0.007144927978515625,
0.047210693359375,
-0.00995635986328125,
0.056976318359375,
0.049652099609375,
-0.05950927734375,
-0.047088623046875,
-0.044525146484375,... |
aminlouhichi/donut2 | 2023-10-30T09:41:25.000Z | [
"region:us"
] | aminlouhichi | null | null | 0 | 6 | 2023-10-30T09:41:05 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 8268950.0
num_examples: 84
- name: validation
num_bytes: 3762544.0
num_examples: 39
- name: test
num_bytes: 3059300.0
num_examples: 33
download_size: 11855736
dataset_size: 15090794.0
---
# Dataset Card for "donut2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 695 | [
[
-0.015869140625,
-0.0176544189453125,
0.0166015625,
0.0070037841796875,
0.00278472900390625,
0.007595062255859375,
0.01287078857421875,
-0.00872039794921875,
0.049560546875,
0.0330810546875,
-0.0517578125,
-0.034515380859375,
-0.046112060546875,
-0.027587890... |
Razvan27/Java_raw_files | 2023-10-30T13:26:30.000Z | [
"region:us"
] | Razvan27 | null | null | 0 | 6 | 2023-10-30T10:40:35 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: size
dtype: int64
- name: lang
dtype: string
- name: ext
dtype: string
- name: repo_name
dtype: string
- name: repo_stars
dtype: int64
- name: repo_forks
dtype: int64
- name: repo_open_issues
dtype: int64
splits:
- name: train
num_bytes: 2183103
num_examples: 746
download_size: 633848
dataset_size: 2183103
---
# Dataset Card for "Java_raw_files"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 718 | [
[
-0.0256805419921875,
-0.0192108154296875,
0.00994110107421875,
0.0121612548828125,
-0.017120361328125,
0.00244903564453125,
-0.0007433891296386719,
-0.013916015625,
0.045745849609375,
0.040008544921875,
-0.046661376953125,
-0.06353759765625,
-0.04144287109375,
... |
aminlouhichi/donut3 | 2023-10-30T12:08:39.000Z | [
"region:us"
] | aminlouhichi | null | null | 0 | 6 | 2023-10-30T11:31:15 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 25755597.0
num_examples: 60
- name: validation
num_bytes: 25755597.0
num_examples: 60
- name: test
num_bytes: 25755597.0
num_examples: 60
download_size: 55055025
dataset_size: 77266791.0
---
# Dataset Card for "donut3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 698 | [
[
-0.0229949951171875,
-0.01461029052734375,
0.0267181396484375,
0.0089263916015625,
0.005725860595703125,
0.00617218017578125,
0.01580810546875,
-0.0082244873046875,
0.0521240234375,
0.038543701171875,
-0.049835205078125,
-0.045379638671875,
-0.041229248046875,
... |
hippocrates/CitationGPT_test | 2023-11-01T05:15:06.000Z | [
"region:us"
] | hippocrates | null | null | 0 | 6 | 2023-10-30T20:51:22 | ---
dataset_info:
features:
- name: id
dtype: string
- name: query
dtype: string
- name: answer
dtype: string
- name: choices
sequence: string
- name: gold
dtype: int64
splits:
- name: train
num_bytes: 225733170
num_examples: 119360
- name: valid
num_bytes: 29111847
num_examples: 15480
- name: test
num_bytes: 26379234
num_examples: 14000
download_size: 105765132
dataset_size: 281224251
---
# Dataset Card for "CitationGPT_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 631 | [
[
-0.040496826171875,
-0.0274505615234375,
0.0194549560546875,
0.0260467529296875,
-0.011474609375,
-0.01068878173828125,
0.0204925537109375,
-0.00559234619140625,
0.040985107421875,
0.0026912689208984375,
-0.04461669921875,
-0.037689208984375,
-0.03631591796875,
... |
kat33/test-bc-chunk8k | 2023-10-30T21:06:41.000Z | [
"region:us"
] | kat33 | null | null | 0 | 6 | 2023-10-30T21:02:26 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
dongg00/test_npc | 2023-10-31T04:27:27.000Z | [
"size_categories:n<1K",
"language:en",
"license:llama2",
"region:us"
] | dongg00 | null | null | 0 | 6 | 2023-10-31T03:00:29 | ---
license: llama2
language:
- en
size_categories:
- n<1K
dataset_info:
features:
- name: current_activity
dtype: string
- name: objects_available
dtype: string
- name: question
dtype: string
- name: responce
dtype: string
- name: text
dtype: string
---
Hi | 309 | [
[
-0.0296173095703125,
-0.060028076171875,
0.03656005859375,
-0.0123748779296875,
-0.004428863525390625,
-0.0034008026123046875,
0.04400634765625,
-0.046966552734375,
0.0758056640625,
0.07049560546875,
-0.030059814453125,
-0.021881103515625,
-0.04779052734375,
... |
voidful/hint-lm-data | 2023-10-31T10:18:26.000Z | [
"region:us"
] | voidful | null | null | 0 | 6 | 2023-10-31T10:18:01 | ---
dataset_info:
features:
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
splits:
- name: truthfulqa_train
num_bytes: 122011
num_examples: 317
- name: truthfulqa_test
num_bytes: 186706
num_examples: 500
- name: truthfulqa_full
num_bytes: 308717
num_examples: 817
- name: strategyqa_train
num_bytes: 160285
num_examples: 1790
- name: strategyqa_test
num_bytes: 44715
num_examples: 500
- name: strategyqa_full
num_bytes: 205000
num_examples: 2290
- name: hotpotqa_validation
num_bytes: 42722
num_examples: 458
- name: hotpotqa_train
num_bytes: 498905
num_examples: 5481
- name: openbookqa_validation
num_bytes: 89011
num_examples: 500
- name: openbookqa_train
num_bytes: 809439
num_examples: 4957
- name: openbookqa_test
num_bytes: 84381
num_examples: 500
download_size: 1466535
dataset_size: 2551892
---
# Dataset Card for "hint-lm-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 1,152 | [
[
-0.03436279296875,
-0.02581787109375,
0.036529541015625,
0.0022029876708984375,
-0.016265869140625,
0.0034656524658203125,
0.01119232177734375,
-0.01380157470703125,
0.058135986328125,
0.0217437744140625,
-0.07733154296875,
-0.06109619140625,
-0.037353515625,
... |
fruk19/ptvn_sum_ie_supplier_num | 2023-11-02T18:07:33.000Z | [
"region:us"
] | fruk19 | null | null | 0 | 6 | 2023-10-31T10:57:39 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 118831461.0
num_examples: 307
- name: test
num_bytes: 45724368.0
num_examples: 115
download_size: 152076419
dataset_size: 164555829.0
---
# Dataset Card for "ptvn_sum_ie_supplier_num"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 479 | [
[
-0.023193359375,
-0.002132415771484375,
0.0009341239929199219,
0.02166748046875,
-0.0310821533203125,
-0.0105743408203125,
0.0271759033203125,
0.01403045654296875,
0.055267333984375,
0.052398681640625,
-0.042388916015625,
-0.047393798828125,
-0.034271240234375,
... |
ugshanyu/QAFC | 2023-10-31T12:59:13.000Z | [
"region:us"
] | ugshanyu | null | null | 0 | 6 | 2023-10-31T12:42:04 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
sayan1101/identity_finetune_data_2 | 2023-10-31T16:49:28.000Z | [
"region:us"
] | sayan1101 | null | null | 0 | 6 | 2023-10-31T16:49:16 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 387168
num_examples: 1181
- name: test
num_bytes: 66396
num_examples: 209
download_size: 221210
dataset_size: 453564
---
# Dataset Card for "identity_finetune_data_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 584 | [
[
-0.030487060546875,
-0.025177001953125,
0.00617218017578125,
0.00977325439453125,
-0.020751953125,
-0.010589599609375,
0.024566650390625,
-0.01428985595703125,
0.041961669921875,
0.0265350341796875,
-0.050872802734375,
-0.03643798828125,
-0.037353515625,
-0.... |
p1atdev/open2ch | 2023-11-01T01:59:51.000Z | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:ja",
"license:apache-2.0",
"not-for-all-audiences",
"region:us"
] | p1atdev | null | null | 1 | 6 | 2023-10-31T17:00:13 | ---
language:
- ja
license: apache-2.0
size_categories:
- 1M<n<10M
task_categories:
- text-generation
- text2text-generation
dataset_info:
- config_name: all-corpus
features:
- name: dialogue
sequence:
- name: speaker
dtype: int8
- name: content
dtype: string
- name: board
dtype: string
splits:
- name: train
num_bytes: 1693355620
num_examples: 8134707
download_size: 868453263
dataset_size: 1693355620
- config_name: all-corpus-cleaned
features:
- name: dialogue
sequence:
- name: speaker
dtype: int8
- name: content
dtype: string
- name: board
dtype: string
splits:
- name: train
num_bytes: 1199092499
num_examples: 6192730
download_size: 615570076
dataset_size: 1199092499
- config_name: livejupiter
features:
- name: dialogue
sequence:
- name: speaker
dtype: int8
- name: content
dtype: string
splits:
- name: train
num_bytes: 1101433134
num_examples: 5943594
download_size: 592924274
dataset_size: 1101433134
- config_name: livejupiter-cleaned
features:
- name: dialogue
sequence:
- name: speaker
dtype: int8
- name: content
dtype: string
splits:
- name: train
num_bytes: 807499499
num_examples: 4650253
download_size: 437414714
dataset_size: 807499499
- config_name: news4vip
features:
- name: dialogue
sequence:
- name: speaker
dtype: int8
- name: content
dtype: string
splits:
- name: train
num_bytes: 420403926
num_examples: 1973817
download_size: 240974172
dataset_size: 420403926
- config_name: news4vip-cleaned
features:
- name: dialogue
sequence:
- name: speaker
dtype: int8
- name: content
dtype: string
splits:
- name: train
num_bytes: 269941607
num_examples: 1402903
download_size: 156934128
dataset_size: 269941607
- config_name: newsplus
features:
- name: dialogue
sequence:
- name: speaker
dtype: int8
- name: content
dtype: string
splits:
- name: train
num_bytes: 56071294
num_examples: 217296
download_size: 32368053
dataset_size: 56071294
- config_name: newsplus-cleaned
features:
- name: dialogue
sequence:
- name: speaker
dtype: int8
- name: content
dtype: string
splits:
- name: train
num_bytes: 33387874
num_examples: 139574
download_size: 19556120
dataset_size: 33387874
- config_name: ranking
features:
- name: dialogue
sequence:
- name: speaker
dtype: int8
- name: content
dtype: string
- name: next
struct:
- name: speaker
dtype: int8
- name: content
dtype: string
- name: random
sequence: string
splits:
- name: train
num_bytes: 1605628
num_examples: 2000
- name: test
num_bytes: 1604356
num_examples: 1953
download_size: 2127033
dataset_size: 3209984
configs:
- config_name: all-corpus
data_files:
- split: train
path: all-corpus/train-*
- config_name: all-corpus-cleaned
data_files:
- split: train
path: all-corpus-cleaned/train-*
- config_name: livejupiter
data_files:
- split: train
path: livejupiter/train-*
- config_name: livejupiter-cleaned
data_files:
- split: train
path: livejupiter-cleaned/train-*
- config_name: news4vip
data_files:
- split: train
path: news4vip/train-*
- config_name: news4vip-cleaned
data_files:
- split: train
path: news4vip-cleaned/train-*
- config_name: newsplus
data_files:
- split: train
path: newsplus/train-*
- config_name: newsplus-cleaned
data_files:
- split: train
path: newsplus-cleaned/train-*
- config_name: ranking
data_files:
- split: train
path: ranking/train-*
- split: test
path: ranking/test-*
tags:
- not-for-all-audiences
---
# Open2ch Dialogue Corpus (おーぷん2ちゃんねる対話コーパス)
## Dataset Details
### Dataset Description
A conversion of the [open2ch dialogue corpus](https://github.com/1never/open2ch-dialogue-corpus) into the Hugging Face Datasets format.
- **Curated by:** [More Information Needed]
- **Language:** Japanese
- **License:** Apache-2.0
### Dataset Sources
- **Repository:** https://github.com/1never/open2ch-dialogue-corpus
## Dataset Structure
- `all-corpus`: the `livejupiter`, `news4vip`, and `newsplus` subsets concatenated together
  - `dialogue`: dialogue data (`list[dict]`)
    - `speaker`: speaker number, `1` or `2`
    - `content`: utterance text
  - `board`: name of the subset the row was taken from
- `livejupiter`: data converted from `livejupiter.tsv` in the original dataset
  - `dialogue`: dialogue data (`list[dict]`)
    - `speaker`: speaker number, `1` or `2`
    - `content`: utterance text
- `news4vip`: data converted from `news4vip.tsv` in the original dataset
  - same structure as above
- `newsplus`: data converted from `newsplus.tsv` in the original dataset
  - same structure as above
- `ranking`: data for the response-ranking task (`ranking.zip` in the original dataset)
  - It has `train` and `test` splits, corresponding to `dev.tsv` and `test.tsv` in the original dataset, respectively.
  - `dialogue`: dialogue data (`list[dict]`)
    - `speaker`: speaker number, `1` or `2`
    - `content`: utterance text
  - `next`: the correct response that follows the dialogue (`dict`)
    - `speaker`: speaker number, `1` or `2`
    - `content`: utterance text
  - `random`: nine randomly chosen responses (`list[str]`)
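The `ranking` rows are typically used as a 10-way selection task: the gold response in `next` is mixed with the nine distractors in `random`, and a model must rank the gold response first. A minimal sketch (field names follow the schema above; the row contents here are made up, and a uniform-random baseline scores 10%):

```python
import random

# Hypothetical row mirroring the `ranking` subset schema described above.
example = {
    "dialogue": [{"speaker": 1, "content": "question"}],
    "next": {"speaker": 2, "content": "true reply"},
    "random": [f"distractor {i}" for i in range(9)],
}

# Build the 10-way candidate set: the one true reply plus nine distractors.
candidates = example["random"] + [example["next"]["content"]]
random.shuffle(candidates)

print(len(candidates))  # 10 candidates per example
```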
Each of `all-corpus`, `livejupiter`, `news4vip`, and `newsplus` also has a variant whose name carries a `-cleaned` suffix; those subsets were filtered using the NG-word list distributed with the original dataset.
In the original dataset, line breaks inside each utterance were replaced with `__BR__`; in this dataset they have all been converted back to `\n`.
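The `__BR__`-to-newline conversion mentioned above amounts to a plain string replacement; a one-line sketch of that normalization step (the function name is ours, not part of the dataset):

```python
def restore_newlines(text: str) -> str:
    """Undo the original corpus's __BR__ line-break marker."""
    return text.replace("__BR__", "\n")

# "line one__BR__line two" becomes "line one\nline two"
print(restore_newlines("line one__BR__line two"))
```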
## Dataset Creation
### Source Data
(From the original dataset's description)
> A dialogue corpus built by crawling three open2ch boards: 「なんでも実況(ジュピター)」 (Nandemo Jikkyō / Jupiter), 「ニュー速VIP」 (News-soku VIP), and 「ニュース速報+」 (News-sokuhō+). It was created from data spanning the opening of open2ch up to July 20, 2019.
#### Data Collection and Processing
See the [original dataset](https://github.com/1never/open2ch-dialogue-corpus).
#### Personal and Sensitive Information
Note that the subsets without the `-cleaned` suffix contain a great deal of highly inappropriate language, so use them with caution.
## Usage
```py
from datasets import load_dataset
ds = load_dataset(
"p1atdev/open2ch",
name="all-corpus",
)
print(ds)
print(ds["train"][0])
# DatasetDict({
# train: Dataset({
# features: ['dialogue', 'board'],
# num_rows: 8134707
# })
# })
# {'dialogue': {'speaker': [1, 2], 'content': ['実況スレをたてる', 'おんj民の鑑']}, 'board': 'livejupiter'}
``` | 6,040 | [
[
-0.044158935546875,
-0.056060791015625,
0.0201568603515625,
0.01010894775390625,
-0.0206451416015625,
-0.004283905029296875,
-0.03057861328125,
-0.00995635986328125,
0.0278167724609375,
0.0235595703125,
-0.051849365234375,
-0.056671142578125,
-0.03546142578125,
... |
MaxReynolds/MyPatternDataset | 2023-11-01T20:24:49.000Z | [
"region:us"
] | MaxReynolds | null | null | 0 | 6 | 2023-10-31T21:05:24 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 33075479.0
num_examples: 24
download_size: 33078291
dataset_size: 33075479.0
---
# Dataset Card for "MyPatternDataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 482 | [
[
-0.050933837890625,
-0.018798828125,
-0.0028285980224609375,
0.02362060546875,
-0.002582550048828125,
0.005126953125,
0.032623291015625,
0.005603790283203125,
0.07794189453125,
0.03369140625,
-0.07489013671875,
-0.0303955078125,
-0.049896240234375,
-0.011047... |
phanvancongthanh/enamine_leadlike | 2023-11-01T00:13:38.000Z | [
"region:us"
] | phanvancongthanh | null | null | 0 | 6 | 2023-11-01T00:02:22 | ---
dataset_info:
features:
- name: smiles
dtype: string
splits:
- name: train
num_bytes: 31490993396
num_examples: 672148662
download_size: 12563051169
dataset_size: 31490993396
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "enamine_leadlike"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 463 | [
[
-0.0295562744140625,
-0.0202789306640625,
0.022613525390625,
0.0256195068359375,
-0.01515960693359375,
-0.0291748046875,
-0.004230499267578125,
0.003597259521484375,
0.0692138671875,
0.032928466796875,
-0.053497314453125,
-0.0545654296875,
-0.0406494140625,
... |
dianaCerni/CartaUPT | 2023-11-01T08:40:30.000Z | [
"region:us"
] | dianaCerni | null | null | 0 | 6 | 2023-11-01T08:18:21 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
iashchak/igor_link_dialogues-alpaca | 2023-11-01T22:27:55.000Z | [
"region:us"
] | iashchak | null | null | 0 | 6 | 2023-11-01T22:25:42 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 27304881.436095953
num_examples: 28364
- name: test
num_bytes: 3034303.563904049
num_examples: 3152
download_size: 15254213
dataset_size: 30339185.0
---
# Dataset Card for "igor_link_dialogues-alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 655 | [
[
-0.04534912109375,
-0.027587890625,
0.013092041015625,
0.01169586181640625,
-0.0257110595703125,
-0.007965087890625,
0.01323699951171875,
-0.019500732421875,
0.07806396484375,
0.0256805419921875,
-0.06915283203125,
-0.048858642578125,
-0.036407470703125,
-0.... |
dhruv107/docs_pro_max | 2023-11-02T06:40:17.000Z | [
"region:us"
] | dhruv107 | null | null | 0 | 6 | 2023-11-02T06:38:42 | ---
dataset_info:
features:
- name: image
dtype: image
- name: ground_truth
dtype: string
splits:
- name: train
num_bytes: 928828530.0
num_examples: 926
- name: test
num_bytes: 7150670.0
num_examples: 12
- name: validation
num_bytes: 239864120.0
num_examples: 220
download_size: 1036667606
dataset_size: 1175843320.0
---
# Dataset Card for "docs_pro_max"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 537 | [
[
-0.04498291015625,
-0.0163421630859375,
0.01149749755859375,
0.00615692138671875,
-0.0138397216796875,
-0.00653839111328125,
0.011199951171875,
-0.00033402442932128906,
0.039276123046875,
0.0303192138671875,
-0.056396484375,
-0.05987548828125,
-0.05908203125,
... |
Leekp/translated_dataset | 2023-11-02T07:15:27.000Z | [
"region:us"
] | Leekp | null | null | 0 | 6 | 2023-11-02T07:14:59 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
Bukun/NLP_intent_dataset | 2023-11-02T13:51:28.000Z | [
"region:us"
] | Bukun | null | null | 0 | 6 | 2023-11-02T13:16:38 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
k-halid/ar | 2021-02-05T16:05:32.000Z | [
"region:us"
] | k-halid | The corpus is a part of the MultiUN corpus. It is a collection of translated documents from the United Nations. The corpus is downloaded from the following website: [open parallel corpus](http://opus.datasetsl.eu/) | @inproceedings{eisele2010multiun,
title={MultiUN: A Multilingual Corpus from United Nation Documents.},
author={Eisele, Andreas and Chen, Yu},
booktitle={LREC},
year={2010}
} | 0 | 5 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
lewtun/my-awesome-dataset | 2022-07-03T05:16:07.000Z | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | lewtun | null | null | 0 | 5 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- summarization
---
# Dataset Card for Demo
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a demo dataset with two files `train.csv` and `test.csv`.
Load it by:
```python
from datasets import load_dataset
data_files = {"train": "train.csv", "test": "test.csv"}
demo = load_dataset("stevhliu/demo", data_files=data_files)
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | 2,968 | [
[
-0.027618408203125,
-0.035919189453125,
0.003803253173828125,
0.020355224609375,
-0.01219940185546875,
0.01313018798828125,
-0.029815673828125,
-0.0248565673828125,
0.039825439453125,
0.038177490234375,
-0.062744140625,
-0.07977294921875,
-0.043121337890625,
... |
llangnickel/long-covid-classification-data | 2022-11-24T10:29:58.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | llangnickel | null | null | 0 | 5 | 2022-03-02T23:29:22 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'Dataset containing abstracts from PubMed, either related to long COVID
or not. '
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
---
## Data Description
Long-COVID-related articles have been manually collected by information specialists.
Please find further information [here](https://doi.org/10.1093/database/baac048).
## Size
||Training|Development|Test|Total|
|--|--|--|--|--|
|Positive Examples|215|76|70|345|
|Negative Examples|199|62|68|345|
|Total|414|138|138|690|
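The per-split counts above can be sanity-checked with a short script (a standalone sketch: the numbers are copied from the table itself, nothing is loaded from the Hub):

```python
# Sanity-check the split sizes reported in the table above.
# Counts are copied from the card; this does not load the dataset itself.
positive = {"train": 215, "dev": 76, "test": 70}
negative = {"train": 199, "dev": 62, "test": 68}

totals = {split: positive[split] + negative[split] for split in positive}
grand_total = sum(totals.values())

for split, n in totals.items():
    print(f"{split}: {n}")
print(f"total: {grand_total}")  # prints: total: 690
```

This reproduces the totals row (414, 138, 138, and 690 overall) and confirms the class balance: 345 positive and 345 negative examples.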
## Citation
@article{10.1093/database/baac048,
author = {Langnickel, Lisa and Darms, Johannes and Heldt, Katharina and Ducks, Denise and Fluck, Juliane},
title = "{Continuous development of the semantic search engine preVIEW: from COVID-19 to long COVID}",
journal = {Database},
volume = {2022},
year = {2022},
month = {07},
issn = {1758-0463},
doi = {10.1093/database/baac048},
url = {https://doi.org/10.1093/database/baac048},
note = {baac048},
eprint = {https://academic.oup.com/database/article-pdf/doi/10.1093/database/baac048/44371817/baac048.pdf},
} | 1,271 | [
[
-0.03277587890625,
-0.05474853515625,
0.0167388916015625,
0.023223876953125,
-0.0024566650390625,
-0.00759124755859375,
-0.0157928466796875,
-0.0345458984375,
0.027435302734375,
0.0034542083740234375,
-0.045257568359375,
-0.05218505859375,
-0.02197265625,
0.... |
tasksource/crowdflower | 2023-06-21T12:50:08.000Z | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:fact-checking",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:unknown",
"language:en",
"region:us"
] | tasksource | Collection of crowdflower classification datasets | null | 0 | 5 | 2022-03-02T23:29:22 | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: ethics
size_categories:
- unknown
source_datasets: []
tags: []
task_categories:
- text-classification
task_ids:
- sentiment-classification
- fact-checking
---
```
@inproceedings{van2012designing,
title={Designing a scalable crowdsourcing platform},
author={Van Pelt, Chris and Sorokin, Alex},
booktitle={Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data},
pages={765--766},
year={2012}
}
``` | 584 | [
[
-0.046722412109375,
-0.00017631053924560547,
0.0247955322265625,
0.037353515625,
-0.00858306884765625,
0.0192108154296875,
0.0006117820739746094,
-0.045440673828125,
0.04022216796875,
0.037872314453125,
-0.061279296875,
-0.0264739990234375,
-0.034698486328125,
... |
mideind/icelandic-common-crawl-corpus-IC3 | 2022-10-22T15:44:37.000Z | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:is",
"license:unknown",
"region:us"
] | mideind | null | null | 0 | 5 | 2022-03-02T23:29:22 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- is
license:
- unknown
multilinguality:
- monolingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Icelandic Common Crawl Corpus - IC3
---
This is the Icelandic Common Crawl Corpus (IC3).
| 361 | [
[
-0.02313232421875,
-0.01558685302734375,
0.034698486328125,
0.0200653076171875,
-0.03289794921875,
0.0157318115234375,
0.03521728515625,
-0.0292510986328125,
0.03863525390625,
0.0307769775390625,
-0.0286712646484375,
-0.0606689453125,
-0.03338623046875,
0.02... |
nateraw/imagenette | 2021-09-26T08:00:07.000Z | [
"region:us"
] | nateraw | Imagenette is a subset of 10 easily classified classes from the Imagenet
dataset. It was originally prepared by Jeremy Howard of FastAI. The objective
behind putting together a small version of the Imagenet dataset was mainly
because running new ideas/algorithms/experiments on the whole Imagenet take a
lot of time.
This version of the dataset allows researchers/practitioners to quickly try out
ideas and share with others. The dataset comes in three variants:
* Full size
* 320 px
* 160 px
Note: The v2 config corresponds to the new 70/30 train/valid split (released
on Dec 6, 2019). | @misc{imagenette,
author = "Jeremy Howard",
title = "imagenette",
url = "https://github.com/fastai/imagenette/"
} | 2 | 5 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
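The 70/30 train/valid split mentioned in the Imagenette note above can be sketched as a plain index split (a hypothetical helper for illustration; this is not FastAI's actual splitting code):

```python
import random

def split_70_30(items, seed=0):
    """Deterministically shuffle items and split them 70/30 into train/valid."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * 0.7)  # first 70% -> train, rest -> valid
    return shuffled[:cut], shuffled[cut:]

train, valid = split_70_30(range(100))
print(len(train), len(valid))  # prints: 70 30
```

Fixing the seed makes the split reproducible, which is the point of publishing a fixed train/valid split rather than letting every user re-split.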
patrickvonplaten/scientific_papers_dummy | 2021-02-05T16:05:47.000Z | [
"region:us"
] | patrickvonplaten | The scientific papers dataset contains two sets of long and structured documents.
The datasets are obtained from ArXiv and PubMed OpenAccess repositories.
Both "arxiv" and "pubmed" have two features:
- article: the body of the document, pagragraphs seperated by "/n".
- abstract: the abstract of the document, pagragraphs seperated by "/n".
- section_names: titles of sections, seperated by "/n". | @article{Cohan_2018,
title={A Discourse-Aware Attention Model for Abstractive Summarization of
Long Documents},
url={http://dx.doi.org/10.18653/v1/n18-2097},
DOI={10.18653/v1/n18-2097},
journal={Proceedings of the 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language
Technologies, Volume 2 (Short Papers)},
publisher={Association for Computational Linguistics},
author={Cohan, Arman and Dernoncourt, Franck and Kim, Doo Soon and Bui, Trung and Kim, Seokhwan and Chang, Walter and Goharian, Nazli},
year={2018}
} | 0 | 5 | 2022-03-02T23:29:22 | Entry not found | 15 | [
[
-0.02142333984375,
-0.01495361328125,
0.05718994140625,
0.0288238525390625,
-0.035064697265625,
0.046539306640625,
0.052520751953125,
0.005062103271484375,
0.0513916015625,
0.016998291015625,
-0.052093505859375,
-0.014984130859375,
-0.060394287109375,
0.0379... |
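The field layout described for scientific_papers above can be illustrated with a toy record (hypothetical data; the only assumption carried over from the card is that paragraphs and section titles are joined with "\n"):

```python
# A toy record shaped like the description above; real records come from
# the ArXiv/PubMed dumps and are much longer.
record = {
    "article": "First paragraph.\nSecond paragraph.",
    "abstract": "Abstract paragraph one.\nAbstract paragraph two.",
    "section_names": "introduction\nmethods\nresults",
}

paragraphs = record["article"].split("\n")
sections = record["section_names"].split("\n")
print(len(paragraphs), sections)  # prints: 2 ['introduction', 'methods', 'results']
```

Splitting on the separator recovers the per-paragraph and per-section structure that the flat string encoding hides.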