| author | cardData | citation | description | disabled | downloads | gated | id | lastModified | paperswithcode_id | private | sha | siblings | tags | readme_url | readme |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
bobfu | null | null | null | false | null | false | bobfu/cats | 2022-10-31T06:27:06.000Z | null | false | aa7407539ed836835ed51916fd092c02ce1dea1b | [] | [
"license:cc0-1.0"
] | https://huggingface.co/datasets/bobfu/cats/resolve/main/README.md | ---
license: cc0-1.0
---
|
nrajsubramanian | null | null | null | false | null | false | nrajsubramanian/usfaq | 2022-10-31T06:57:45.000Z | null | false | c9369bf40a8f0788c3d438e9998d161d7f183910 | [] | [
"license:mit"
] | https://huggingface.co/datasets/nrajsubramanian/usfaq/resolve/main/README.md | ---
license: mit
---
|
KETI-AIR | null | There is no citation information | # Specialized-Domain English-Korean / Chinese-Korean Translation Corpus (Food)
## Usage
```python
from datasets import load_dataset

raw_datasets = load_dataset(
    "aihub_koenzh_food_translation.py",
    "base",
    cache_dir="huggingface_datasets",
    data_dir="data",
    ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
    print(item)
    exit()
```
## Data Contact
| Name | Phone | Email |
| ------------- | ------------- | ------------- |
| Ken Choi | 1833-5926 | ken.choi@twigfarm.net |
## Copyright
### About the Data
The AI training data provided through AI Hub (hereinafter "AI data") was built as part of the Intelligent Information Industry Infrastructure program of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible outputs of this program (the data, AI application models, the source code of data-construction tools, manuals, and so on; hereinafter "AI data, etc.") belong to the supervising and participating organizations that built them (hereinafter "supervising organizations, etc.") and to NIA.
The AI data, etc. were built to advance artificial-intelligence technology and AI-based products and services, and may be used for commercial and non-commercial research and development in a variety of fields, such as intelligent products and services and chatbots.
### Data Use Policy
- To use the AI data, etc., you must agree to and comply with the following:
1. When using the AI data, etc., you must credit them as outputs of an NIA program, and the same credit is required for any derivative works created with the AI data, etc.
2. A corporation, organization, or individual located abroad must reach a separate agreement with the supervising organizations, etc. and NIA before using the AI data, etc.
3. Taking the AI data, etc. out of the country likewise requires a separate agreement with the supervising organizations, etc. and NIA.
4. The AI data may be used only for training AI models. NIA may refuse to provide the AI data, etc. if it judges the purpose, method, or content of use to be unlawful or inappropriate, and, where they have already been provided, may demand that use be stopped and that the AI data, etc. be returned or destroyed.
5. The AI data, etc. received may not be transferred, provided, lent, or sold to any other corporation, organization, or individual without the approval of the supervising organizations, etc. and NIA.
6. All civil and criminal liability arising from use outside the purpose in clause 4, or from unauthorized transfer, provision, lending, or sale under clause 5, rests with the corporation, organization, or individual that used the AI data, etc.
7. If a user finds that a dataset provided on AI Hub contains personal information, the user must immediately report the fact to AI Hub and delete the downloaded dataset.
8. De-identified information (including pseudonymized information) received from AI Hub must be used safely for purposes such as developing AI services, and no act intended to re-identify individuals from it is permitted.
9. Users must faithfully cooperate if NIA later conducts surveys on use cases, outcomes, and the like.
### How to Request a Download
1. To download the AI data, etc. provided through AI Hub, a separate request is required, including identity verification, provision of information, and a statement of purpose.
2. Materials other than the AI data themselves, such as data descriptions and refinement tools, can be used without a separate request or login.
3. For AI data, etc. whose rights holder is an organization other than NIA, that organization's use policy and download procedure apply; note that this is unrelated to AI Hub. | false | 66 | false | KETI-AIR/aihub_koenzh_food_translation | 2022-10-31T07:24:55.000Z | null | false | 7b51ba33f0c9b9420b5706367a9a1b388ae51edb | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_koenzh_food_translation/resolve/main/README.md | ---
license: apache-2.0
---
|
KETI-AIR | null | There is no citation information | # Korean-English Translation Corpus (Science and Technology)
## Introduction
- Promote smoother exchange of information in science and technology by building a training dataset, usable for developing AI-based translation technology, for fields such as science and technology (artificial intelligence, big data, IT, social media, medicine, patents, etc.) where Korean-English translation accuracy is relatively low.
## Construction Goal
- Build a 1.5-million-sentence Korean-English corpus in science and technology (ICT, electrical/electronics/machinery, medicine). A dataset intended for use in training AI translation models.
## Usage
```python
from datasets import load_dataset

raw_datasets = load_dataset(
    "aihub_scitech20_translation.py",
    "base",
    cache_dir="huggingface_datasets",
    data_dir="data",
    ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
    print(item)
    exit()
```
## Data Contact
| Name | Phone | Email |
| ------------- | ------------- | ------------- |
| Twigfarm (CEO) | 02-1833-5926 | ceo@twigfarm.net |
## Copyright
### About the Data
The AI training data provided through AI Hub (hereinafter "AI data") was built as part of the Intelligent Information Industry Infrastructure program of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible outputs of this program (the data, AI application models, the source code of data-construction tools, manuals, and so on; hereinafter "AI data, etc.") belong to the supervising and participating organizations that built them (hereinafter "supervising organizations, etc.") and to NIA.
The AI data, etc. were built to advance artificial-intelligence technology and AI-based products and services, and may be used for commercial and non-commercial research and development in a variety of fields, such as intelligent products and services and chatbots.
### Data Use Policy
- To use the AI data, etc., you must agree to and comply with the following:
1. When using the AI data, etc., you must credit them as outputs of an NIA program, and the same credit is required for any derivative works created with the AI data, etc.
2. A corporation, organization, or individual located abroad must reach a separate agreement with the supervising organizations, etc. and NIA before using the AI data, etc.
3. Taking the AI data, etc. out of the country likewise requires a separate agreement with the supervising organizations, etc. and NIA.
4. The AI data may be used only for training AI models. NIA may refuse to provide the AI data, etc. if it judges the purpose, method, or content of use to be unlawful or inappropriate, and, where they have already been provided, may demand that use be stopped and that the AI data, etc. be returned or destroyed.
5. The AI data, etc. received may not be transferred, provided, lent, or sold to any other corporation, organization, or individual without the approval of the supervising organizations, etc. and NIA.
6. All civil and criminal liability arising from use outside the purpose in clause 4, or from unauthorized transfer, provision, lending, or sale under clause 5, rests with the corporation, organization, or individual that used the AI data, etc.
7. If a user finds that a dataset provided on AI Hub contains personal information, the user must immediately report the fact to AI Hub and delete the downloaded dataset.
8. De-identified information (including pseudonymized information) received from AI Hub must be used safely for purposes such as developing AI services, and no act intended to re-identify individuals from it is permitted.
9. Users must faithfully cooperate if NIA later conducts surveys on use cases, outcomes, and the like.
### How to Request a Download
1. To download the AI data, etc. provided through AI Hub, a separate request is required, including identity verification, provision of information, and a statement of purpose.
2. Materials other than the AI data themselves, such as data descriptions and refinement tools, can be used without a separate request or login.
3. For AI data, etc. whose rights holder is an organization other than NIA, that organization's use policy and download procedure apply; note that this is unrelated to AI Hub. | false | 73 | false | KETI-AIR/aihub_scitech20_translation | 2022-10-31T08:12:50.000Z | null | false | 442f8c4c00aae04c37fcb44e7ecb44023af2b9ee | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_scitech20_translation/resolve/main/README.md | ---
license: apache-2.0
---
|
KETI-AIR | null | There is no citation information | # Korean-English Translation Corpus (Social Science)
## Introduction
- Promote smoother exchange of information in social science by building a training dataset, usable for developing AI-based translation technology, for fields such as social science (politics, economics, finance, public administration, education, law, etc.) where Korean-English translation accuracy is relatively low.
## Construction Goal
- Build a 1.5-million-sentence Korean-English corpus in social science (law, education, economics, culture/tourism/art). A dataset intended for use in training AI translation models.
## Usage
```python
from datasets import load_dataset

raw_datasets = load_dataset(
    "aihub_socialtech20_translation.py",
    "base",
    cache_dir="huggingface_datasets",
    data_dir="data",
    ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
    print(item)
    exit()
```
## Data Contact
| Name | Phone | Email |
| ------------- | ------------- | ------------- |
| Twigfarm (CEO) | 02-1833-5926 | ceo@twigfarm.net |
## Copyright
### About the Data
The AI training data provided through AI Hub (hereinafter "AI data") was built as part of the Intelligent Information Industry Infrastructure program of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible outputs of this program (the data, AI application models, the source code of data-construction tools, manuals, and so on; hereinafter "AI data, etc.") belong to the supervising and participating organizations that built them (hereinafter "supervising organizations, etc.") and to NIA.
The AI data, etc. were built to advance artificial-intelligence technology and AI-based products and services, and may be used for commercial and non-commercial research and development in a variety of fields, such as intelligent products and services and chatbots.
### Data Use Policy
- To use the AI data, etc., you must agree to and comply with the following:
1. When using the AI data, etc., you must credit them as outputs of an NIA program, and the same credit is required for any derivative works created with the AI data, etc.
2. A corporation, organization, or individual located abroad must reach a separate agreement with the supervising organizations, etc. and NIA before using the AI data, etc.
3. Taking the AI data, etc. out of the country likewise requires a separate agreement with the supervising organizations, etc. and NIA.
4. The AI data may be used only for training AI models. NIA may refuse to provide the AI data, etc. if it judges the purpose, method, or content of use to be unlawful or inappropriate, and, where they have already been provided, may demand that use be stopped and that the AI data, etc. be returned or destroyed.
5. The AI data, etc. received may not be transferred, provided, lent, or sold to any other corporation, organization, or individual without the approval of the supervising organizations, etc. and NIA.
6. All civil and criminal liability arising from use outside the purpose in clause 4, or from unauthorized transfer, provision, lending, or sale under clause 5, rests with the corporation, organization, or individual that used the AI data, etc.
7. If a user finds that a dataset provided on AI Hub contains personal information, the user must immediately report the fact to AI Hub and delete the downloaded dataset.
8. De-identified information (including pseudonymized information) received from AI Hub must be used safely for purposes such as developing AI services, and no act intended to re-identify individuals from it is permitted.
9. Users must faithfully cooperate if NIA later conducts surveys on use cases, outcomes, and the like.
### How to Request a Download
1. To download the AI data, etc. provided through AI Hub, a separate request is required, including identity verification, provision of information, and a statement of purpose.
2. Materials other than the AI data themselves, such as data descriptions and refinement tools, can be used without a separate request or login.
3. For AI data, etc. whose rights holder is an organization other than NIA, that organization's use policy and download procedure apply; note that this is unrelated to AI Hub. | false | 73 | false | KETI-AIR/aihub_socialtech20_translation | 2022-10-31T08:13:36.000Z | null | false | 0ba2c99b0dde16ac5fe281bba5c99b4203039ea2 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_socialtech20_translation/resolve/main/README.md | ---
license: apache-2.0
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-project-083d71a4-50b6-4074-aa7d-a46eddb83f06-42 | 2022-10-31T09:11:37.000Z | null | false | 892faabeccc027ec862b3889a6cb232ea04d4558 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-083d71a4-50b6-4074-aa7d-a46eddb83f06-42/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: autoevaluate/multi-class-classification
metrics: ['matthews_correlation']
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-project-fe056b5c-7e36-4094-b3f2-84d1fbaaf77c-53 | 2022-10-31T09:25:45.000Z | null | false | 50549635611eefc47cc7852b05fa7838e6b32ea3 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-fe056b5c-7e36-4094-b3f2-84d1fbaaf77c-53/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: autoevaluate/multi-class-classification
metrics: ['matthews_correlation']
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
Sombredems | null | null | null | false | null | false | Sombredems/sags | 2022-10-31T14:06:08.000Z | null | false | 51ebe9dbdca6c10696c926181cea1f5e339d9aaa | [] | [
"license:other"
] | https://huggingface.co/datasets/Sombredems/sags/resolve/main/README.md | ---
license: other
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-project-6da44258-8968-4823-8933-3375e1cfee89-64 | 2022-10-31T10:45:45.000Z | null | false | 08d5a56fbbfbd8f8e7c6372cfb2f43159388f872 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:emotion"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-6da44258-8968-4823-8933-3375e1cfee89-64/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- emotion
eval_info:
task: multi_class_classification
model: autoevaluate/multi-class-classification
metrics: ['matthews_correlation']
dataset_name: emotion
dataset_config: default
dataset_split: test
col_mapping:
text: text
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-project-0d3aacb2-653b-459b-af2f-2d90d5362791-75 | 2022-10-31T11:00:48.000Z | null | false | 35619762a828711029111dac816e3be6bfb33059 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-0d3aacb2-653b-459b-af2f-2d90d5362791-75/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
bankawat | null | null | null | false | 6 | false | bankawat/ASR | 2022-11-01T01:23:00.000Z | null | false | b940c76750ef805c687e0e49d274edcfb00e7214 | [] | [
"license:unknown"
] | https://huggingface.co/datasets/bankawat/ASR/resolve/main/README.md | ---
license: unknown
---
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-project-95ce44b7-7684-4cf4-b396-d486367937e4-86 | 2022-10-31T11:29:54.000Z | null | false | 66839876b5ad5337aa11c89d71db04f3e1e2ff15 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-95ce44b7-7684-4cf4-b396-d486367937e4-86/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-staging-eval-project-f69c187c-a1f8-462d-8272-41a77bd1f8ed-97 | 2022-10-31T11:32:57.000Z | null | false | 4e0cf3f26014b3ececa0fe89260099593caeb3c0 | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:glue"
] | https://huggingface.co/datasets/autoevaluate/autoeval-staging-eval-project-f69c187c-a1f8-462d-8272-41a77bd1f8ed-97/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- glue
eval_info:
task: binary_classification
model: autoevaluate/binary-classification
metrics: ['matthews_correlation']
dataset_name: glue
dataset_config: sst2
dataset_split: validation
col_mapping:
text: sentence
target: label
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. |
dominguesm | null | null | null | false | 4 | false | dominguesm/positive-reframing-ptbr-dataset | 2022-10-31T12:43:59.000Z | null | false | c8287a1fdc3bb36bdbc84293a1a34cf4ee5384c5 | [] | [
"arxiv:2204.02952"
] | https://huggingface.co/datasets/dominguesm/positive-reframing-ptbr-dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: original_text
dtype: string
- name: reframed_text
dtype: string
- name: strategy
dtype: string
- name: strategy_original_text
dtype: string
splits:
- name: dev
num_bytes: 318805
num_examples: 835
- name: test
num_bytes: 321952
num_examples: 835
- name: train
num_bytes: 2586935
num_examples: 6679
download_size: 1845244
dataset_size: 3227692
---
# positive-reframing-ptbr-dataset
A pt-br (Brazilian Portuguese) translation of the dataset introduced in ["Inducing Positive Perspectives with Text Reframing"](https://arxiv.org/abs/2204.02952). Used to train the model [positive-reframing-ptbr](https://huggingface.co/dominguesm/positive-reframing-ptbr).
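The card's features include a `strategy_original_text` column alongside `strategy` and `original_text`, which suggests a strategy-conditioned input format like the controlled-generation setup in the original paper. A minimal sketch of building such a conditioned input; the exact separator is an assumption for illustration, so inspect the dataset's `strategy_original_text` column for the real format:

```python
def build_conditioned_input(strategy: str, original_text: str) -> str:
    # Prepend the reframing strategy as a control code before the source text.
    # The "['strategy']: " prefix below is assumed, not taken from the dataset.
    return f"['{strategy}']: {original_text}"

print(build_conditioned_input("optimism", "I failed the exam."))
# ['optimism']: I failed the exam.
```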
**Citation:**
> Ziems, C., Li, M., Zhang, A., & Yang, D. (2022). Inducing Positive Perspectives with Text Reframing. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL)_.
**BibTeX:**
```tex
@inproceedings{ziems-etal-2022-positive-frames,
title = "Inducing Positive Perspectives with Text Reframing",
author = "Ziems, Caleb and
Li, Minzhi and
Zhang, Anthony and
Yang, Diyi",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics",
month = may,
year = "2022",
address = "Online and Dublin, Ireland",
publisher = "Association for Computational Linguistics"
}
``` |
ChristianOrr | null | null | null | false | null | false | ChristianOrr/mnist | 2022-11-01T13:09:41.000Z | null | false | 230d88b7e15e1fd2b0df276cb236559be413bff8 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/ChristianOrr/mnist/resolve/main/README.md | ---
license: apache-2.0
---
|
Dizex | null | null | null | false | 120 | false | Dizex/FoodBase | 2022-10-31T12:48:53.000Z | null | false | eb792fb79d79a7e3b3b12eaea26dfb5a6ec23deb | [] | [] | https://huggingface.co/datasets/Dizex/FoodBase/resolve/main/README.md | ---
dataset_info:
features:
- name: nltk_tokens
sequence: string
- name: iob_tags
sequence: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2040036
num_examples: 600
- name: val
num_bytes: 662190
num_examples: 200
download_size: 353747
dataset_size: 2702226
---
# Dataset Card for "FoodBase"
Dataset for FoodBase corpus introduced in [this paper](https://academic.oup.com/database/article/doi/10.1093/database/baz121/5611291).
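The features list parallel `nltk_tokens` and `iob_tags` sequences. As a rough illustration of how such IOB-tagged sequences are typically decoded into entity spans (this helper and the `FOOD` label are illustrative, not part of the dataset's tooling):

```python
def iob_to_spans(tokens, tags):
    """Collect (entity_text, label) pairs from parallel token/IOB-tag lists."""
    spans, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):  # a new entity begins; flush any open one
            if current:
                spans.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:  # continue the open entity
            current.append(token)
        else:  # an "O" tag closes any open entity
            if current:
                spans.append((" ".join(current), label))
            current, label = [], None
    if current:
        spans.append((" ".join(current), label))
    return spans

tokens = ["Add", "the", "olive", "oil", "and", "garlic", "."]
tags = ["O", "O", "B-FOOD", "I-FOOD", "O", "B-FOOD", "O"]
print(iob_to_spans(tokens, tags))  # [('olive oil', 'FOOD'), ('garlic', 'FOOD')]
```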
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
idamarinella | null | null | null | false | 1 | false | idamarinella/portrait | 2022-10-31T13:17:36.000Z | null | false | ba2198ba8d43324b6a3f3a6c0781465378d17944 | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/idamarinella/portrait/resolve/main/README.md | ---
license: afl-3.0
---
|
rufimelo | null | null | null | false | null | false | rufimelo/PortugueseLegalSentences-v2 | 2022-11-01T13:14:38.000Z | null | false | 5f56df48ab1ed088c122e2d73cd696e66e22e8e2 | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:pt",
"license:apache-2.0",
"multilinguality:monolingual",
"source_datasets:original"
] | https://huggingface.co/datasets/rufimelo/PortugueseLegalSentences-v2/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- pt
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
---
# Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset is intended for MLM (masked language modeling) and TSDAE training.
Extended version of rufimelo/PortugueseLegalSentences-v1
200000/200000/100000
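TSDAE trains a sentence encoder as a denoising autoencoder: each input sentence is corrupted, typically by deleting a large fraction of its tokens, and the model must reconstruct the original. A minimal sketch of that token-deletion noise, under the assumption of whitespace tokenization; this is not taken from this dataset's tooling:

```python
import random

def delete_noise(sentence: str, deletion_ratio: float = 0.6, seed: int = 0) -> str:
    """Randomly drop roughly `deletion_ratio` of whitespace tokens, keeping at least one."""
    rng = random.Random(seed)
    tokens = sentence.split()
    kept = [t for t in tokens if rng.random() > deletion_ratio]
    if not kept:  # never return an empty input sentence
        kept = [rng.choice(tokens)]
    return " ".join(kept)

noisy = delete_noise("O tribunal julgou improcedente o recurso interposto")
print(noisy)  # a subsequence of the original tokens
```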
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
|
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-subjqa-grocery-9dee2c-1945965520 | 2022-10-31T14:45:47.000Z | null | false | 3bddddbe0ef0f314a548753b200ec3e681492a8e | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:subjqa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-subjqa-grocery-9dee2c-1945965520/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- subjqa
eval_info:
task: extractive_question_answering
model: SiraH/bert-finetuned-squad
metrics: []
dataset_name: subjqa
dataset_config: grocery
dataset_split: train
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: SiraH/bert-finetuned-squad
* Dataset: subjqa
* Config: grocery
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sushant-joshi](https://huggingface.co/sushant-joshi) for evaluating this model. |
Norod78 | null | null | null | false | 24 | false | Norod78/cartoon-blip-captions | 2022-11-09T16:27:57.000Z | null | false | db95ae658758c7b2337a54a2facabefe3af9698a | [] | [
"size_categories:n<1K",
"task_categories:text-to-image",
"license:cc-by-nc-sa-4.0",
"annotations_creators:machine-generated",
"language:en",
"language_creators:other",
"multilinguality:monolingual"
] | https://huggingface.co/datasets/Norod78/cartoon-blip-captions/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 190959102.953
num_examples: 3141
download_size: 190279356
dataset_size: 190959102.953
pretty_name: 'Cartoon BLIP captions'
size_categories:
- 1K<n<10K
tags: []
task_categories:
- text-to-image
license: cc-by-nc-sa-4.0
annotations_creators:
- machine-generated
language:
- en
language_creators:
- other
multilinguality:
- monolingual
---
# Dataset Card for "cartoon-blip-captions"
|
LiveEvil | null | null | null | false | 14 | false | LiveEvil/lucyrev1 | 2022-10-31T15:40:56.000Z | null | false | 607724cf2959d50f0a171e8ff42a7233f96dcd19 | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/LiveEvil/lucyrev1/resolve/main/README.md | ---
license: apache-2.0
---
|
Mostafa3zazi | null | null | null | false | 13 | false | Mostafa3zazi/Arabic_SQuAD | 2022-10-31T19:32:25.000Z | null | false | 17d5b9dafdaa266f17aedfaa0154fe56411cdb44 | [] | [] | https://huggingface.co/datasets/Mostafa3zazi/Arabic_SQuAD/resolve/main/README.md | ---
dataset_info:
features:
- name: index
dtype: string
- name: question
dtype: string
- name: context
dtype: string
- name: text
dtype: string
- name: answer_start
dtype: int64
- name: c_id
dtype: int64
splits:
- name: train
num_bytes: 61868003
num_examples: 48344
download_size: 10512179
dataset_size: 61868003
---
# Dataset Card for "Arabic_SQuAD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
# Citation
```
@inproceedings{mozannar-etal-2019-neural,
title = "Neural {A}rabic Question Answering",
author = "Mozannar, Hussein and
Maamary, Elie and
El Hajal, Karl and
Hajj, Hazem",
booktitle = "Proceedings of the Fourth Arabic Natural Language Processing Workshop",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W19-4612",
doi = "10.18653/v1/W19-4612",
pages = "108--118",
abstract = "This paper tackles the problem of open domain factual Arabic question answering (QA) using Wikipedia as our knowledge source. This constrains the answer of any question to be a span of text in Wikipedia. Open domain QA for Arabic entails three challenges: annotated QA datasets in Arabic, large scale efficient information retrieval and machine reading comprehension. To deal with the lack of Arabic QA datasets we present the Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD). Our system for open domain question answering in Arabic (SOQAL) is based on two components: (1) a document retriever using a hierarchical TF-IDF approach and (2) a neural reading comprehension model using the pre-trained bi-directional transformer BERT. Our experiments on ARCD indicate the effectiveness of our approach with our BERT-based reader achieving a 61.3 F1 score, and our open domain system SOQAL achieving a 27.6 F1 score.",
}
```
|
Jirui | null | null | null | false | null | false | Jirui/testing | 2022-10-31T19:42:52.000Z | null | false | 60d116ecea74a9d94acfbebd19dd061ab42f627a | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/Jirui/testing/resolve/main/README.md | ---
license: afl-3.0
---
|
ProGamerGov | null | null | null | false | 5 | false | ProGamerGov/StableDiffusion-v1-5-Regularization-Images | 2022-11-15T16:34:59.000Z | null | false | 76d4499ddfce3e6c4d0ebeadee1fb3d19d5677bf | [] | [
"license:mit"
] | https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images/resolve/main/README.md | ---
license: mit
---
A collection of regularization / class instance datasets for the [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model to use for DreamBooth prior preservation loss training. Files labeled with "mse vae" used the [stabilityai/sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse) VAE. For ease of use, datasets are stored as zip files containing 512x512 PNG images. The number of images in each zip file is specified at the end of the filename.
There is currently a bug where Hugging Face incorrectly reports that these datasets are pickled. They are not pickled; they are plain ZIP files containing the images.
Currently this repository contains the following datasets (datasets are named after the prompt they used):
Art Styles
* "**artwork style**": 4125 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**illustration style**": 3050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**erotic photography**": 2760 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
People
* "**person**": 2115 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**woman**": 4420 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**guy**": 4820 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**supermodel**": 4411 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**bikini model**": 4260 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**sexy athlete**": 5020 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**femme fatale**": 4725 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Animals
* "**kitty**": 5100 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**cat**": 2050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Vehicles
* "**fighter jet**": 1600 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**train**": 2669 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
* "**car**": 3150 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
Themes
* "**cyberpunk**": 3040 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE.
I used the "Generate Forever" feature in [AUTOMATIC1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to create thousands of images for each dataset. Every image in a particular dataset uses the exact same settings, with only the seed value being different.
You can use my regularization / class image datasets with: https://github.com/ShivamShrirao/diffusers, https://github.com/JoePenna/Dreambooth-Stable-Diffusion, https://github.com/TheLastBen/fast-stable-diffusion, and any other DreamBooth projects that have support for prior preservation loss.
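Since each zip filename encodes its image count, a quick integrity check is possible after download. A minimal sketch (the example filename pattern and the helper names `expected_count`/`verify_zip` are assumptions for illustration, not part of this repository):

```python
import io
import re
import zipfile

def expected_count(filename: str) -> int:
    """Parse the trailing image count from a filename like 'person_ddim-2115.zip'."""
    match = re.search(r"(\d+)\.zip$", filename)
    if match is None:
        raise ValueError(f"no trailing count in {filename!r}")
    return int(match.group(1))

def verify_zip(filename: str, data: bytes) -> bool:
    """Check that the archive holds exactly as many PNGs as its filename claims."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        pngs = [n for n in zf.namelist() if n.lower().endswith(".png")]
    return len(pngs) == expected_count(filename)

# Build a tiny stand-in archive in memory to demonstrate the check.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for i in range(3):
        zf.writestr(f"img_{i}.png", b"fake png bytes")

print(verify_zip("person_ddim-3.zip", buf.getvalue()))  # True
```

The same check can be pointed at a downloaded dataset zip before unpacking it for DreamBooth training.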
|
FAERS-PubMed | null | null | null | false | 9 | false | FAERS-PubMed/FAERS-filenames-latest | 2022-11-07T18:39:11.000Z | null | false | 985d299a0bd9817d6d0dba79f732a37883bbda1b | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/FAERS-filenames-latest/resolve/main/README.md | ---
dataset_info:
features:
- name: filenames
dtype: string
splits:
- name: train
num_bytes: 1590
num_examples: 60
download_size: 0
dataset_size: 1590
---
# Dataset Card for "FAERS-filenames-latest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FAERS-PubMed | null | null | null | false | null | false | FAERS-PubMed/FAERS-filenames-2022-10-31 | 2022-10-31T23:12:27.000Z | null | false | ca2858e018bf1d532e71be25b14fdc669356045a | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/FAERS-filenames-2022-10-31/resolve/main/README.md | ---
dataset_info:
features:
- name: filenames
dtype: string
splits:
- name: train
num_bytes: 1590
num_examples: 60
download_size: 1039
dataset_size: 1590
---
# Dataset Card for "FAERS-filenames-2022-10-31"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FAERS-PubMed | null | null | null | false | 5 | false | FAERS-PubMed/PubMed-filenames-latest | 2022-11-11T22:51:22.000Z | null | false | be0a531dd08a41da4763966dc217be6f4b1d8e9d | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/PubMed-filenames-latest/resolve/main/README.md | ---
dataset_info:
features:
- name: filenames
dtype: string
splits:
- name: train
num_bytes: 72410
num_examples: 1114
download_size: 0
dataset_size: 72410
---
# Dataset Card for "PubMed-filenames-latest"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FAERS-PubMed | null | null | null | false | null | false | FAERS-PubMed/PubMed-filenames-2022-10-31 | 2022-10-31T23:26:40.000Z | null | false | a6c9202839ffeaefe3ccb747d4801397b524c584 | [] | [] | https://huggingface.co/datasets/FAERS-PubMed/PubMed-filenames-2022-10-31/resolve/main/README.md | ---
dataset_info:
features:
- name: filenames
dtype: string
splits:
- name: train
num_bytes: 72410
num_examples: 1114
download_size: 8582
dataset_size: 72410
---
# Dataset Card for "PubMed-filenames-2022-10-31"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
digiSilk | null | null | null | false | null | false | digiSilk/real_ruby | 2022-11-01T00:06:52.000Z | null | false | 3bc134f4be0eb287bca607e529ef11f06b7cea62 | [] | [] | https://huggingface.co/datasets/digiSilk/real_ruby/resolve/main/README.md | My initial attempt at creating a dataset intended to create a customized model to include Ruby. |
Onur-Ozbek-Crafty-Apes-VFX | null | null | null | false | 10 | false | Onur-Ozbek-Crafty-Apes-VFX/CAVFX-LAION | 2022-11-01T10:24:40.000Z | null | false | 4845af940bf5042c1ddd28df29cf32d12c88b1d3 | [] | [
"license:mit"
] | https://huggingface.co/datasets/Onur-Ozbek-Crafty-Apes-VFX/CAVFX-LAION/resolve/main/README.md | ---
license: mit
---
|
shahidul034 | null | null | null | false | 56 | false | shahidul034/text_summarization_dataset1 | 2022-11-01T02:13:08.000Z | null | false | 4d005b3e1a5f1e558bf1e53ba4d4c6835c9fc667 | [] | [] | https://huggingface.co/datasets/shahidul034/text_summarization_dataset1/resolve/main/README.md | ---
dataset_info:
features:
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 129017829
num_examples: 106525
download_size: 43557623
dataset_size: 129017829
---
# Dataset Card for "text_summarization_dataset1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shahidul034 | null | null | null | false | 28 | false | shahidul034/text_summarization_dataset2 | 2022-11-01T02:14:47.000Z | null | false | 55b0bfdf562703f905a60e4522bb56547c7406e8 | [] | [] | https://huggingface.co/datasets/shahidul034/text_summarization_dataset2/resolve/main/README.md | ---
dataset_info:
features:
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 125954432
num_examples: 105252
download_size: 42217690
dataset_size: 125954432
---
# Dataset Card for "text_summarization_dataset2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shahidul034 | null | null | null | false | 26 | false | shahidul034/text_summarization_dataset3 | 2022-11-01T02:15:51.000Z | null | false | 01b5203a600c3bde5dbf229adee63962608e0714 | [] | [] | https://huggingface.co/datasets/shahidul034/text_summarization_dataset3/resolve/main/README.md | ---
dataset_info:
features:
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 123296943
num_examples: 103365
download_size: 41220771
dataset_size: 123296943
---
# Dataset Card for "text_summarization_dataset3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shahidul034 | null | null | null | false | 18 | false | shahidul034/text_summarization_dataset4 | 2022-11-01T02:16:16.000Z | null | false | a4910c6c1646eacfcb88f7703e2e0bd7fdee559c | [] | [] | https://huggingface.co/datasets/shahidul034/text_summarization_dataset4/resolve/main/README.md | ---
dataset_info:
features:
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 111909333
num_examples: 87633
download_size: 38273895
dataset_size: 111909333
---
# Dataset Card for "text_summarization_dataset4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
autoevaluate | null | null | null | false | null | false | autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-cadd10-1947965536 | 2022-11-01T02:41:47.000Z | null | false | 945ac8484e1efc07ad26996071343822dad8dc3b | [] | [
"type:predictions",
"tags:autotrain",
"tags:evaluation",
"datasets:adversarial_qa"
] | https://huggingface.co/datasets/autoevaluate/autoeval-eval-adversarial_qa-adversarialQA-cadd10-1947965536/resolve/main/README.md | ---
type: predictions
tags:
- autotrain
- evaluation
datasets:
- adversarial_qa
eval_info:
task: extractive_question_answering
model: 123tarunanand/roberta-base-finetuned
metrics: []
dataset_name: adversarial_qa
dataset_config: adversarialQA
dataset_split: validation
col_mapping:
context: context
question: question
answers-text: answers.text
answers-answer_start: answers.answer_start
---
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 123tarunanand/roberta-base-finetuned
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MHassanSaleem](https://huggingface.co/MHassanSaleem) for evaluating this model. |
n1ghtf4l1 | null | null | null | false | null | false | n1ghtf4l1/super-collider | 2022-11-01T04:23:41.000Z | null | false | 5c0ac9c4b877a715105c979c30e06e6e15dd4754 | [] | [
"license:mit"
] | https://huggingface.co/datasets/n1ghtf4l1/super-collider/resolve/main/README.md | ---
license: mit
---
|
Poison413 | null | null | null | false | null | false | Poison413/Installation01 | 2022-11-01T07:21:12.000Z | null | false | abd91a59bfb0d76131319a2a5288dee5cb26bf58 | [] | [
"doi:10.57967/hf/0080",
"license:unknown"
] | https://huggingface.co/datasets/Poison413/Installation01/resolve/main/README.md | ---
license: unknown
---
|
hakancam | null | null | null | false | null | false | hakancam/avats | 2022-11-01T06:15:11.000Z | null | false | c499f832e3b97fec8889ddd10ec8765f7386474a | [] | [
"license:bigscience-openrail-m"
] | https://huggingface.co/datasets/hakancam/avats/resolve/main/README.md | ---
license: bigscience-openrail-m
---
|
ctu-aic | null | null | null | false | null | false | ctu-aic/ctkfacts | 2022-11-01T06:47:03.000Z | null | false | 16a66c3fda4c2dbb68195d70bf51148d3edb86cf | [] | [
"arxiv:2201.11115",
"license:cc-by-sa-3.0"
] | https://huggingface.co/datasets/ctu-aic/ctkfacts/resolve/main/README.md | ---
license: cc-by-sa-3.0
---
# CTKFacts dataset for Document retrieval
Czech Natural Language Inference dataset of ~3K *evidence*-*claim* pairs labelled with SUPPORTS, REFUTES or NOT ENOUGH INFO veracity labels. Extracted from a round of fact-checking experiments concluded and described within the [CsFEVER and CTKFacts: Acquiring Czech data for Fact Verification](https://arxiv.org/abs/2201.11115) paper, currently being revised for publication in the LREV journal.
## NLI version
Can be found at https://huggingface.co/datasets/ctu-aic/ctkfacts_nli |
qanastek | null | @article{baker2015automatic,
title={Automatic semantic classification of scientific literature according to the hallmarks of cancer},
author={Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={32},
number={3},
pages={432--440},
year={2015},
publisher={Oxford University Press}
}
@article{baker2017cancer,
title={Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer},
author={Baker, Simon and Ali, Imran and Silins, Ilona and Pyysalo, Sampo and Guo, Yufan and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={33},
number={24},
pages={3973--3981},
year={2017},
publisher={Oxford University Press}
}
@article{baker2017cancer,
title={Cancer hallmark text classification using convolutional neural networks},
author={Baker, Simon and Korhonen, Anna-Leena and Pyysalo, Sampo},
year={2016}
}
@article{baker2017initializing,
title={Initializing neural networks for hierarchical multi-label text classification},
author={Baker, Simon and Korhonen, Anna},
journal={BioNLP 2017},
pages={307--315},
year={2017}
} | The Hallmarks of Cancer Corpus for text classification
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed
publication abstracts manually annotated by experts according
to a taxonomy. The taxonomy consists of 37 classes in a
hierarchy. Zero or more class labels are assigned to each
sentence in the corpus. The labels are found under the "labels"
directory, while the tokenized text can be found under "text"
directory. The filenames are the corresponding PubMed IDs (PMID).
In addition to the HOC corpus, we also have the
[Cancer Hallmarks Analytics Tool](http://chat.lionproject.net/)
which classifies all of PubMed according to the HoC taxonomy. | false | 1 | false | qanastek/HoC | 2022-11-01T15:03:11.000Z | null | false | 6f8ce801f8cf4cc9d58c08f61f3424ad612f2f67 | [] | [
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:found",
"language:en",
"size_categories:1K<n<10K",
"source_datasets:original",
"task_categories:text-classification",
"task_ids:multi-class-classification",
"language_bcp47:en-US"
] | https://huggingface.co/datasets/qanastek/HoC/resolve/main/README.md | ---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
language:
- en
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: HoC
language_bcp47:
- en-US
---
# HoC : Hallmarks of Cancer Corpus
## Table of Contents
- [Dataset Card for [Needs More Information]](#dataset-card-for-needs-more-information)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [No Warranty](#no-warranty)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://s-baker.net/resource/hoc/
- **Repository:** https://github.com/sb895/Hallmarks-of-Cancer
- **Paper:** https://academic.oup.com/bioinformatics/article/32/3/432/1743783
- **Leaderboard:** https://paperswithcode.com/dataset/hoc-1
- **Point of Contact:** [Yanis Labrak](mailto:yanis.labrak@univ-avignon.fr)
### Dataset Summary
The Hallmarks of Cancer Corpus for text classification
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to a taxonomy. The taxonomy consists of 37 classes in a hierarchy. Zero or more class labels are assigned to each sentence in the corpus. The labels are found under the "labels" directory, while the tokenized text can be found under "text" directory. The filenames are the corresponding PubMed IDs (PMID).
In addition to the HOC corpus, we also have the [Cancer Hallmarks Analytics Tool](http://chat.lionproject.net/) which classifies all of PubMed according to the HoC taxonomy.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for `multi-class-classification`.
### Languages
The corpus consists of PubMed articles in English only:
- `English - United States (en-US)`
## Load the dataset with HuggingFace
```python
from datasets import load_dataset
dataset = load_dataset("qanastek/HoC")
validation = dataset["validation"]
print("First element of the validation set : ", validation[0])
```
## Dataset Structure
### Data Instances
```json
{
"document_id": "12634122_5",
"text": "Genes that were overexpressed in OM3 included oncogenes , cell cycle regulators , and those involved in signal transduction , whereas genes for DNA repair enzymes and inhibitors of transformation and metastasis were suppressed .",
"label": [9, 5, 0, 6]
}
```
### Data Fields
`document_id`: Unique identifier of the document.
`text`: Raw text of the PubMed abstracts.
`label`: Zero or more of the 10 currently known hallmarks of cancer, encoded as class indices.
| Hallmark | Search term |
|:-------------------------------------------:|:-------------------------------------------:|
| 1. Sustaining proliferative signaling (PS) | Proliferation Receptor Cancer |
| | 'Growth factor' Cancer |
| | 'Cell cycle' Cancer |
| 2. Evading growth suppressors (GS) | 'Cell cycle' Cancer |
| | 'Contact inhibition' |
| 3. Resisting cell death (CD) | Apoptosis Cancer |
| | Necrosis Cancer |
| | Autophagy Cancer |
| 4. Enabling replicative immortality (RI) | Senescence Cancer |
| | Immortalization Cancer |
| 5. Inducing angiogenesis (A) | Angiogenesis Cancer |
| | 'Angiogenic factor' |
| 6. Activating invasion & metastasis (IM) | Metastasis Invasion Cancer |
| 7. Genome instability & mutation (GI) | Mutation Cancer |
| | 'DNA repair' Cancer |
| | Adducts Cancer |
| | 'Strand breaks' Cancer |
| | 'DNA damage' Cancer |
| 8. Tumor-promoting inflammation (TPI) | Inflammation Cancer |
| | 'Oxidative stress' Cancer |
| | Inflammation 'Immune response' Cancer |
| 9. Deregulating cellular energetics (CE) | Glycolysis Cancer; 'Warburg effect' Cancer |
| 10. Avoiding immune destruction (ID) | 'Immune system' Cancer |
| | Immunosuppression Cancer |
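As a sketch, the integer label indices in a data instance can be decoded back to hallmark names. The index-to-hallmark mapping below follows the order of the table above and is an assumption; the actual coding used by the loader should be checked against `dataset.features["label"]`:

```python
# Assumed index-to-hallmark mapping, following the table order above.
HALLMARKS = [
    "Sustaining proliferative signaling (PS)",
    "Evading growth suppressors (GS)",
    "Resisting cell death (CD)",
    "Enabling replicative immortality (RI)",
    "Inducing angiogenesis (A)",
    "Activating invasion & metastasis (IM)",
    "Genome instability & mutation (GI)",
    "Tumor-promoting inflammation (TPI)",
    "Deregulating cellular energetics (CE)",
    "Avoiding immune destruction (ID)",
]

def decode_labels(label_ids):
    """Turn a list of class indices into human-readable hallmark names."""
    return [HALLMARKS[i] for i in label_ids]

# The label list from the data instance shown earlier.
print(decode_labels([9, 5, 0, 6]))
```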
### Data Splits
Distribution of data for the 10 hallmarks:
| **Hallmark** | **No. abstracts** | **No. sentences** |
|:------------:|:-----------------:|:-----------------:|
| 1. PS | 462 | 993 |
| 2. GS | 242 | 468 |
| 3. CD | 430 | 883 |
| 4. RI | 115 | 295 |
| 5. A | 143 | 357 |
| 6. IM | 291 | 667 |
| 7. GI | 333 | 771 |
| 8. TPI | 194 | 437 |
| 9. CE | 105 | 213 |
| 10. ID | 108 | 226 |
## Dataset Creation
### Source Data
#### Who are the source language producers?
The corpus has been produced and uploaded by Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna.
### Personal and Sensitive Information
The corpus is free of personal or sensitive information.
## Additional Information
### Dataset Curators
__HoC__: Baker Simon and Silins Ilona and Guo Yufan and Ali Imran and Hogberg Johan and Stenius Ulla and Korhonen Anna
__Hugging Face__: Labrak Yanis (Not affiliated with the original corpus)
### Licensing Information
```plain
GNU General Public License v3.0
```
```plain
Permissions
- Commercial use
- Modification
- Distribution
- Patent use
- Private use
Limitations
- Liability
- Warranty
Conditions
- License and copyright notice
- State changes
- Disclose source
- Same license
```
### Citation Information
We would very much appreciate it if you cite our publications:
[Automatic semantic classification of scientific literature according to the hallmarks of cancer](https://academic.oup.com/bioinformatics/article/32/3/432/1743783)
```bibtex
@article{baker2015automatic,
title={Automatic semantic classification of scientific literature according to the hallmarks of cancer},
author={Baker, Simon and Silins, Ilona and Guo, Yufan and Ali, Imran and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={32},
number={3},
pages={432--440},
year={2015},
publisher={Oxford University Press}
}
```
[Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer](https://www.repository.cam.ac.uk/bitstream/handle/1810/265268/btx454.pdf?sequence=8&isAllowed=y)
```bibtex
@article{baker2017cancer,
title={Cancer Hallmarks Analytics Tool (CHAT): a text mining approach to organize and evaluate scientific literature on cancer},
author={Baker, Simon and Ali, Imran and Silins, Ilona and Pyysalo, Sampo and Guo, Yufan and H{\"o}gberg, Johan and Stenius, Ulla and Korhonen, Anna},
journal={Bioinformatics},
volume={33},
number={24},
pages={3973--3981},
year={2017},
publisher={Oxford University Press}
}
```
[Cancer hallmark text classification using convolutional neural networks](https://www.repository.cam.ac.uk/bitstream/handle/1810/270037/BIOTXTM2016.pdf?sequence=1&isAllowed=y)
```bibtex
@article{baker2017cancer,
title={Cancer hallmark text classification using convolutional neural networks},
author={Baker, Simon and Korhonen, Anna-Leena and Pyysalo, Sampo},
year={2016}
}
```
[Initializing neural networks for hierarchical multi-label text classification](http://www.aclweb.org/anthology/W17-2339)
```bibtex
@article{baker2017initializing,
title={Initializing neural networks for hierarchical multi-label text classification},
author={Baker, Simon and Korhonen, Anna},
journal={BioNLP 2017},
pages={307--315},
year={2017}
}
```
|
ellabettison | null | null | null | false | 2 | false | ellabettison/processed_bert_dataset_padded_med | 2022-11-01T11:04:13.000Z | null | false | fc0003ddab02485923b6daf58c6288773c752036 | [] | [] | https://huggingface.co/datasets/ellabettison/processed_bert_dataset_padded_med/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: test
num_bytes: 12801600.0
num_examples: 100000
- name: train
num_bytes: 115214400.0
num_examples: 900000
download_size: 17728113
dataset_size: 128016000.0
---
# Dataset Card for "processed_bert_dataset_padded_med"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ellabettison | null | null | null | false | 20 | false | ellabettison/processed_gpt2_dataset_padded_med | 2022-11-01T12:00:28.000Z | null | false | 742779c2c744cf24c656739d35fc5897e262ee07 | [] | [] | https://huggingface.co/datasets/ellabettison/processed_gpt2_dataset_padded_med/resolve/main/README.md | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: test
num_bytes: 10801200.0
num_examples: 100000
- name: train
num_bytes: 97210800.0
num_examples: 900000
download_size: 16878257
dataset_size: 108012000.0
---
# Dataset Card for "processed_gpt2_dataset_padded_med"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
bond005 | null | null | null | false | 29 | false | bond005/sova_rudevices | 2022-11-01T15:59:30.000Z | null | false | d9197eacfb0afff29d90a2d4e7d0d98a5dfb54bc | [] | [
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language:ru",
"license:cc-by-4.0",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended",
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification"
] | https://huggingface.co/datasets/bond005/sova_rudevices/resolve/main/README.md | ---
pretty_name: RuDevices
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ru
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id:
size_categories:
- 10K<n<100K
source_datasets:
- extended
task_categories:
- automatic-speech-recognition
- audio-classification
---
# Dataset Card for sova_rudevices
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [SOVA RuDevices](https://github.com/sovaai/sova-dataset)
- **Repository:** [SOVA Dataset](https://github.com/sovaai/sova-dataset)
- **Leaderboard:** [The ๐ค Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [SOVA.ai](mailto:support@sova.ai)
### Dataset Summary
SOVA Dataset is a free public STT/ASR dataset. It consists of several parts, one of which is SOVA RuDevices. This part is an acoustic corpus of approximately 100 hours of 16kHz Russian live speech with manual annotation, prepared by the [SOVA.ai team](https://github.com/sovaai).
The authors do not divide the dataset into train, validation and test subsets, so I prepared this split myself. The training subset includes more than 82 hours, the validation subset approximately 6 hours, and the test subset approximately 6 hours as well.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER.
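As a sketch of the WER metric mentioned above, a minimal dynamic-programming implementation follows (word-level Levenshtein distance divided by reference length; leaderboard evaluations typically use a dedicated library such as `jiwer` rather than this hand-rolled version):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))       # 0.0  (perfect match)
print(wer("the cat sat", "the dog sat down"))  # one substitution + one insertion
```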
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called `audio` and its transcription, called `transcription`. Any additional information about the speaker and the passage which contains the transcription is not provided.
```
{'audio': {'path': '/home/bond005/datasets/sova_rudevices/data/train/00003ec0-1257-42d1-b475-db1cd548092e.wav',
'array': array([ 0.00787354, 0.00735474, 0.00714111, ...,
-0.00018311, -0.00015259, -0.00018311]), dtype=float32),
'sampling_rate': 16000},
'transcription': 'ะผะฝะต ะฟะพะปัััะต ััะฐะปะพ'}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: the transcription of the audio file.
### Data Splits
This dataset consists of three splits: training, validation, and test. The split takes the internal structure of SOVA RuDevices into account (the validation split is based on subdirectory `0` and the test split on subdirectory `1` of the original dataset); note, however, that recordings from the same speaker may appear in different splits.
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 81607 | 5835 | 5799 |
| hours | 82.4h | 5.9h | 5.8h |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
All recorded audio files were manually annotated.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Egor Zubarev, Timofey Moskalets, and SOVA.ai team.
### Licensing Information
[Creative Commons BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@misc{sova2021rudevices,
author = {Zubarev, Egor and Moskalets, Timofey and SOVA.ai},
title = {SOVA RuDevices Dataset: free public STT/ASR dataset with manually annotated live speech},
publisher = {GitHub},
journal = {GitHub repository},
year = {2021},
howpublished = {\url{https://github.com/sovaai/sova-dataset}},
}
```
### Contributions
Thanks to [@bond005](https://github.com/bond005) for adding this dataset. |
Deepak2846 | null | null | null | false | 83 | false | Deepak2846/name | 2022-11-04T15:38:27.000Z | null | false | 95ea49255399ff8095f00fd10776858582deec6d | [] | [
"license:unknown"
] | https://huggingface.co/datasets/Deepak2846/name/resolve/main/README.md | ---
license: unknown
---
|
rufimelo | null | null | null | false | null | false | rufimelo/PortugueseLegalSentences-v3 | 2022-11-01T13:15:47.000Z | null | false | d278dfd8a801d43f5f3ce23228118d8d53faca81 | [] | [
"annotations_creators:no-annotation",
"language_creators:found",
"language:pt",
"license:apache-2.0",
"multilinguality:monolingual",
"source_datasets:original"
] | https://huggingface.co/datasets/rufimelo/PortugueseLegalSentences-v3/resolve/main/README.md | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- pt
license:
- apache-2.0
multilinguality:
- monolingual
source_datasets:
- original
---
# Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
This dataset is intended for MLM and TSDAE training.
Extended version of rufimelo/PortugueseLegalSentences-v1
Split sizes: 400000/50000/50000
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
|
KETI-AIR | null | There is no citation information | # ๋ด์ค ๊ธฐ์ฌ ๊ธฐ๊ณ๋
ํด ๋ฐ์ดํฐ
## ์๊ฐ
๊ตญ๋ด ์ข
ํฉ์ผ๊ฐ์ง ๋ฐ ์ง์ญ์ ๋ฌธ์ ๋ด์ค๊ธฐ์ฌ๋ฅผ ์ง๋ฌธ์ผ๋ก ํ์ฉ, ์์ฐ์ด ์ง์ ์๋ต์ผ๋ก ์ด๋ฃจ์ด์ง ์ธ๊ณต์ง๋ฅ ํ์ต ๋ฐ์ดํฐ
## ๊ตฌ์ถ๋ชฉ์
๊ตญ๋ด ์ธ๋ก ์ฌ(์ค์์ผ๋ณด ๋ฑ ์ข
ํฉ์ผ๊ฐ์ง ๋ฐ ์ง๋ฐฉ์ง)์ ๋ด์ค๊ธฐ์ฌ๋ฅผ ์ง๋ฌธ์ผ๋ก ํ์ฉํ์ฌ 4๊ฐ์ง ์ ํ์ ์ง๋ฌธ-๋ต๋ณ ์ธํธ๋ฅผ ์์ฑ, ์ธ๊ณต์ง๋ฅ์ ํ๋ จํ๊ธฐ ์ํ ๋ฐ์ดํฐ์
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
"aihub_news_mrc.py",
cache_dir="huggingface_datasets",
data_dir="data",
ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
print(item)
exit()
```
## ๋ฐ์ดํฐ ๊ด๋ จ ๋ฌธ์์ฒ
| ๋ด๋น์๋ช
| ์ ํ๋ฒํธ | ์ด๋ฉ์ผ |
| ------------- | ------------- | ------------- |
| ๊น๋ฏผ๊ฒฝ | 02-6952-9201 | mkgenie@42maru.ai |
## Copyright
### ๋ฐ์ดํฐ ์๊ฐ
AI ํ๋ธ์์ ์ ๊ณต๋๋ ์ธ๊ณต์ง๋ฅ ํ์ต์ฉ ๋ฐ์ดํฐ(์ดํ โAI๋ฐ์ดํฐโ๋ผ๊ณ ํจ)๋ ๊ณผํ๊ธฐ์ ์ ๋ณดํต์ ๋ถ์ ํ๊ตญ์ง๋ฅ์ ๋ณด์ฌํ์งํฅ์์ ใ์ง๋ฅ์ ๋ณด์ฐ์
์ธํ๋ผ ์กฐ์ฑใ ์ฌ์
์ ์ผํ์ผ๋ก ๊ตฌ์ถ๋์์ผ๋ฉฐ, ๋ณธ ์ฌ์
์ ์ โง๋ฌดํ์ ๊ฒฐ๊ณผ๋ฌผ์ธ ๋ฐ์ดํฐ, AI ์์ฉ๋ชจ๋ธ ๋ฐ ๋ฐ์ดํฐ ์ ์๋๊ตฌ์ ์์ค, ๊ฐ์ข
๋งค๋ด์ผ ๋ฑ(์ดํ โAI๋ฐ์ดํฐ ๋ฑโ)์ ๋ํ ์ผ์ฒด์ ๊ถ๋ฆฌ๋ AI๋ฐ์ดํฐ ๋ฑ์ ๊ตฌ์ถ ์ํ๊ธฐ๊ด ๋ฐ ์ฐธ์ฌ๊ธฐ๊ด(์ดํ โ์ํ๊ธฐ๊ด ๋ฑโ)๊ณผ ํ๊ตญ์ง๋ฅ์ ๋ณด์ฌํ์งํฅ์์ ์์ต๋๋ค.
๋ณธ AI๋ฐ์ดํฐ ๋ฑ์ ์ธ๊ณต์ง๋ฅ ๊ธฐ์ ๋ฐ ์ ํยท์๋น์ค ๋ฐ์ ์ ์ํ์ฌ ๊ตฌ์ถํ์์ผ๋ฉฐ, ์ง๋ฅํ ์ ํใป์๋น์ค, ์ฑ๋ด ๋ฑ ๋ค์ํ ๋ถ์ผ์์ ์๋ฆฌ์ ใป๋น์๋ฆฌ์ ์ฐ๊ตฌใป๊ฐ๋ฐ ๋ชฉ์ ์ผ๋ก ํ์ฉํ ์ ์์ต๋๋ค.
### Data Use Policy
- You are hereby notified that, in order to use the AI Data, etc., you agree to and must comply with the following terms.
1. Whenever the AI Data, etc. are used, it must always be stated that they are based on the project results of the National Information Society Agency, and any derivative works created using the AI Data, etc. must carry the same statement.
2. For a corporation, organization, or individual located outside of Korea to use the AI Data, etc., a separate agreement with the performing organizations, etc. and the National Information Society Agency is required.
3. Taking the AI Data, etc. out of Korea likewise requires a separate agreement with the performing organizations, etc. and the National Information Society Agency.
4. The AI Data may be used only for training artificial intelligence models. If the National Information Society Agency judges that the purpose, method, or content of use of the AI Data, etc. is improper or inappropriate, it may refuse to provide them, and where they have already been provided it may demand that their use be stopped and that the AI Data, etc. be returned or destroyed.
5. The AI Data, etc. that have been provided may not be transferred, provided, rented, or sold to another corporation, organization, or individual without the approval of the performing organizations, etc. and the National Information Society Agency.
6. All civil and criminal liability arising from use for purposes other than those under item 4, or from unauthorized transfer, provision, rental, or sale under item 5, rests with the corporation, organization, or individual that used the AI Data, etc.
7. If a user finds that a dataset provided on AI Hub contains personal information or the like, the user must immediately report the fact to AI Hub and delete the downloaded dataset.
8. De-identified information (including pseudonymized information) received from AI Hub must be used safely for purposes such as developing artificial intelligence services, and no act whatsoever intended to re-identify individuals from it is permitted.
9. Users must faithfully cooperate if the National Information Society Agency later conducts surveys on use cases, outcomes, and the like.
### How to Request a Data Download
1. To download the AI Data, etc. provided through AI Hub, a separate application involving identity verification, provision of information, and a statement of purpose is required.
2. Materials other than the AI Data, such as data descriptions and refinement tools, can be used without a separate application procedure or login.
3. For AI Data, etc. whose rights are held by an organization other than the National Information Society Agency, the use policy and download procedures of that organization must be followed; please note that these are unrelated to AI Hub. | false | 277 | false | KETI-AIR/aihub_news_mrc | 2022-11-02T07:43:03.000Z | null | false | 44c359b77af23165acac3dfe32a092aa7a9c00fb | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_news_mrc/resolve/main/README.md | ---
license: apache-2.0
---
|
kmeng | null | null | null | false | null | false | kmeng/CEUSN | 2022-11-03T18:11:24.000Z | null | false | 40b3da0d325bf3f45c998f874e4ac5b35d4d92ae | [] | [
"license:unknown"
] | https://huggingface.co/datasets/kmeng/CEUSN/resolve/main/README.md | ---
license: unknown
---
|
LiveEvil | null | null | null | false | 2 | false | LiveEvil/TestText | 2022-11-01T18:53:47.000Z | null | false | 24d4cac8c5b21c7396382d6cc6952dabe95c8dcb | [] | [
"license:openrail"
] | https://huggingface.co/datasets/LiveEvil/TestText/resolve/main/README.md | ---
license: openrail
---
|
ashraq | null | null | null | false | 12 | false | ashraq/fashion-product-images-small | 2022-11-01T20:25:52.000Z | null | false | 3859c76db2f6f3d3b9a3863345e3ccdbff75879d | [] | [] | https://huggingface.co/datasets/ashraq/fashion-product-images-small/resolve/main/README.md | ---
dataset_info:
features:
- name: id
dtype: int64
- name: gender
dtype: string
- name: masterCategory
dtype: string
- name: subCategory
dtype: string
- name: articleType
dtype: string
- name: baseColour
dtype: string
- name: season
dtype: string
- name: year
dtype: float64
- name: usage
dtype: string
- name: productDisplayName
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 546202015.44
num_examples: 44072
download_size: 271496441
dataset_size: 546202015.44
---
# Dataset Card for "fashion-product-images-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Data was obtained from [here](https://www.kaggle.com/datasets/paramaggarwal/fashion-product-images-small) |
Valentingmz | null | null | null | false | null | false | Valentingmz/Repositor | 2022-11-01T20:39:51.000Z | null | false | caf62a8694ff3c9fa6523dc1f74d446569fded46 | [] | [] | https://huggingface.co/datasets/Valentingmz/Repositor/resolve/main/README.md | 

 |
LiveEvil | null | null | null | false | 2 | false | LiveEvil/mysheet | 2022-11-01T20:54:32.000Z | null | false | 031c7b7df6f699fdcd5041c2810bab60907dc354 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/LiveEvil/mysheet/resolve/main/README.md | ---
license: openrail
---
|
LiveEvil | null | null | null | false | null | false | LiveEvil/autotrain-data-mysheet | 2022-11-01T20:55:52.000Z | null | false | a311ec1ad64e5e5a005e8759b8dde88acecc42eb | [] | [
"language:en"
] | https://huggingface.co/datasets/LiveEvil/autotrain-data-mysheet/resolve/main/README.md | ---
language:
- en
---
# AutoTrain Dataset for project: mysheet
## Dataset Description
This dataset has been automatically processed by AutoTrain for project mysheet.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "The term \u201cpseudocode\u201d refers to writing code in a humanly understandable language such as English, and breaking it down to its core concepts.",
"question": "What is pseudocode?",
"answers.text": [
"Pseudocode is breaking down your code in English."
],
"answers.answer_start": [
33
]
},
{
"context": "Python is an interactive programming language designed for API and Machine Learning use.",
"question": "What is Python?",
"answers.text": [
"Python is an interactive programming language."
],
"answers.answer_start": [
0
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```
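In SQuAD-style extractive QA data with fields like these, each `answers.answer_start` value is normally the character offset at which the corresponding `answers.text` entry occurs verbatim in `context`. A quick alignment check (a sketch, not part of the original card; the hand-written answers in the sample above appear paraphrased, so they would not pass an exact-span test):

```python
# Sketch: verify that each answer span matches the context at its stated offset.
def aligned(example):
    """True if every answer text is an exact substring of the context at answer_start."""
    return all(
        example["context"][start:start + len(text)] == text
        for text, start in zip(example["answers.text"], example["answers.answer_start"])
    )

sample = {
    "context": "Pseudocode means writing code in plain English.",
    "answers.text": ["writing code in plain English"],
    "answers.answer_start": [17],
}
print(aligned(sample))  # prints True
```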
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 3 |
| valid | 1 |
|
LiveEvil | null | null | null | false | null | false | LiveEvil/EsCheck-Paragraph | 2022-11-02T15:15:44.000Z | null | false | 36cf8a781bf9396d6b7e7fb536ef635571fbec77 | [] | [
"license:openrail"
] | https://huggingface.co/datasets/LiveEvil/EsCheck-Paragraph/resolve/main/README.md | ---
license: openrail
---
This is a ParaModeler, for rating hook/grabbers of an introduction paragraph. |
learningbot | null | null | null | false | null | false | learningbot/hadoop | 2022-11-01T23:24:51.000Z | null | false | 3393491a7c997952b11efaa843193f618d82f6cb | [] | [
"license:gpl-3.0"
] | https://huggingface.co/datasets/learningbot/hadoop/resolve/main/README.md | ---
license: gpl-3.0
---
|
henryscheible | null | null | null | false | 22 | false | henryscheible/crows_pairs | 2022-11-02T02:25:56.000Z | null | false | 7c394b430826ee4b382c888e833699dffaea5423 | [] | [] | https://huggingface.co/datasets/henryscheible/crows_pairs/resolve/main/README.md | ---
dataset_info:
features:
- name: label
dtype: int64
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: test
num_bytes: 146765.59151193633
num_examples: 302
- name: train
num_bytes: 586090.4084880636
num_examples: 1206
download_size: 113445
dataset_size: 732856.0
---
# Dataset Card for "crows_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
beyond | null | null | null | false | 2 | false | beyond/chinese_clean_passages_80m | 2022-11-02T05:32:57.000Z | null | false | ae53e77f172e94cca3bb9d685eb6660c7917f35d | [] | [] | https://huggingface.co/datasets/beyond/chinese_clean_passages_80m/resolve/main/README.md | ---
dataset_info:
features:
- name: passage
dtype: string
splits:
- name: train
num_bytes: 18979214734
num_examples: 88328203
download_size: 1025261393
dataset_size: 18979214734
---
# `chinese_clean_passages_80m`
Containing more than **80 million** (88,328,203) **pure \& clean** Chinese passages, without any letters, digits, or special tokens.
The passage length mostly falls between 50 and 200 Chinese characters.
Downloading the dataset via `datasets.load_dataset()` produces 38 data shards of about 340M each (roughly 12GB in total), so make sure your device has enough free space.
```
>>> passage_dataset = load_dataset('beyond/chinese_clean_passages_80m')
Downloading data: 100%|█| 341M/341M [00:06<00:00, 52.0MB
Downloading data: 100%|█| 342M/342M [00:06<00:00, 54.4MB
Downloading data: 100%|█| 341M/341M [00:06<00:00, 49.1MB
Downloading data: 100%|█| 341M/341M [00:14<00:00, 23.5MB
Downloading data: 100%|█| 341M/341M [00:10<00:00, 33.6MB
Downloading data: 100%|█| 342M/342M [00:07<00:00, 43.1MB
...(38 data shards)
```
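Given the multi-gigabyte download, streaming is a lighter way to peek at the data. A sketch (not from the original card) using the `datasets` streaming mode; it assumes the `datasets` library and network access at the moment the function is called:

```python
def preview_passages(n=3):
    """Return the first n passages lazily via streaming, without downloading all shards.

    Requires the `datasets` library and network access when called.
    """
    from datasets import load_dataset

    ds = load_dataset("beyond/chinese_clean_passages_80m", split="train", streaming=True)
    return [example["passage"] for _, example in zip(range(n), ds)]
```

Each record exposes a single `passage` string field, per the dataset schema above.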
---
Acknowledgment:\
This dataset is processed/filtered from the [CLUE pre-training corpus](https://github.com/CLUEbenchmark/CLUE).
|
pseeej | null | null | null | false | null | false | pseeej/animal-crossing-data | 2022-11-02T03:31:55.000Z | null | false | 0701ea3fa42db65b7237cab8e916a35659c5b845 | [] | [] | https://huggingface.co/datasets/pseeej/animal-crossing-data/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 7209776.0
num_examples: 389
download_size: 7181848
dataset_size: 7209776.0
---
# Dataset Card for "animal-crossing-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
gary109 | null | null | null | false | 3 | false | gary109/onset-drums_corpora_parliament_processed | 2022-11-07T09:06:30.000Z | null | false | b56c72916faa2075b017047087a8285da099683d | [] | [] | https://huggingface.co/datasets/gary109/onset-drums_corpora_parliament_processed/resolve/main/README.md | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 292227
num_examples: 1068
download_size: 87028
dataset_size: 292227
---
# Dataset Card for "onset-drums_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dhmeltzer | null | null | null | false | 4 | false | dhmeltzer/goodreads_test | 2022-11-02T04:14:57.000Z | null | false | 0179bb2c085b52b01ca23991c7581c136b76e0e6 | [] | [] | https://huggingface.co/datasets/dhmeltzer/goodreads_test/resolve/main/README.md | ---
dataset_info:
features:
- name: review_text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1010427121
num_examples: 478033
download_size: 496736771
dataset_size: 1010427121
---
# Dataset Card for "goodreads_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dhmeltzer | null | null | null | false | 3 | false | dhmeltzer/goodreads_train | 2022-11-02T04:16:00.000Z | null | false | dfefc099c175c50fa26da17038a2970fc6808171 | [] | [] | https://huggingface.co/datasets/dhmeltzer/goodreads_train/resolve/main/README.md | ---
dataset_info:
features:
- name: rating
dtype: int64
- name: review_text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1893978314
num_examples: 900000
download_size: 928071460
dataset_size: 1893978314
---
# Dataset Card for "goodreads_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Harmony22 | null | null | null | false | null | false | Harmony22/The-stonks | 2022-11-02T07:38:24.000Z | null | false | f03ddd3203868f65e565b39d1af1cf5e1df228f8 | [] | [
"license:cc-by-nc-nd-4.0"
] | https://huggingface.co/datasets/Harmony22/The-stonks/resolve/main/README.md | ---
license: cc-by-nc-nd-4.0
---
|
annabelng | null | null | null | false | null | false | annabelng/nymemes | 2022-11-02T08:02:09.000Z | null | false | 6fd649a5748873d108c8a785a38a55ddca291260 | [] | [] | https://huggingface.co/datasets/annabelng/nymemes/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 3760740114.362
num_examples: 32933
download_size: 4007130292
dataset_size: 3760740114.362
---
# Dataset Card for "nymemes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lewtun | null | null | null | false | 60 | false | lewtun/music_genres | 2022-11-02T10:27:30.000Z | null | false | 1fafac00f14590feb94984ee7dc1adc861179fc7 | [] | [] | https://huggingface.co/datasets/lewtun/music_genres/resolve/main/README.md | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: song_id
dtype: int64
- name: genre_id
dtype: int64
- name: genre
dtype: string
splits:
- name: test
num_bytes: 1978321742.996
num_examples: 5076
- name: train
num_bytes: 7844298868.902
num_examples: 19909
download_size: 9793244255
dataset_size: 9822620611.898
---
# Dataset Card for "music_genres"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
KETI-AIR | null | There is no citation information | # Machine Reading Comprehension Data for Administrative Documents
## Introduction
AI training data consisting of passage-question-answer triples for building machine reading comprehension models from administrative documents
## Purpose
A large-scale AI training dataset usable for developing machine reading comprehension models, building question-answering services, and more: question answering in diverse formats over general administrative text data, built from administrative documents, which are unstructured text
## Usage
```python
from datasets import load_dataset
raw_datasets = load_dataset(
    "aihub_admin_docs_mrc.py",
    cache_dir="huggingface_datasets",
    data_dir="data",
    ignore_verifications=True,
)
dataset_train = raw_datasets["train"]
for item in dataset_train:
    print(item)
    exit()
```
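The loop above calls `exit()` to stop after printing the first record; `itertools.islice` is a tidier way to peek at the first few items of any iterable dataset. A small sketch with a stand-in list in place of `raw_datasets["train"]`:

```python
from itertools import islice

# Stand-in for raw_datasets["train"]; any iterable of examples works the same way.
dataset_train = [{"id": i} for i in range(5)]

first_two = list(islice(dataset_train, 2))
for item in first_two:
    print(item)
```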
## Data Contact
| Contact Name | Phone Number | Email |
| ------------- | ------------- | ------------- |
| 김민경 | 02-6952-9201 | mkgenie@42maru.ai |
## Copyright
### Data Overview
The AI training data provided on AI Hub (hereinafter "AI Data") was built as part of the "Intelligent Information Industry Infrastructure" project of the Ministry of Science and ICT and the National Information Society Agency (NIA). All rights to the tangible and intangible results of this project, namely the data, the AI application models, the source code of the data refinement tools, and the various manuals (hereinafter "AI Data, etc."), belong to NIA and to the performing and participating organizations that built the AI Data, etc. (hereinafter "performing organizations, etc.").
The AI Data, etc. were built to advance artificial intelligence technologies, products, and services, and may be used for commercial and non-commercial research and development purposes in a variety of fields such as intelligent products and services and chatbots.
### Data Use Policy
- You are hereby notified that, in order to use the AI Data, etc., you agree to and must comply with the following terms.
1. Whenever the AI Data, etc. are used, it must always be stated that they are based on the project results of the National Information Society Agency, and any derivative works created using the AI Data, etc. must carry the same statement.
2. For a corporation, organization, or individual located outside of Korea to use the AI Data, etc., a separate agreement with the performing organizations, etc. and the National Information Society Agency is required.
3. Taking the AI Data, etc. out of Korea likewise requires a separate agreement with the performing organizations, etc. and the National Information Society Agency.
4. The AI Data may be used only for training artificial intelligence models. If the National Information Society Agency judges that the purpose, method, or content of use of the AI Data, etc. is improper or inappropriate, it may refuse to provide them, and where they have already been provided it may demand that their use be stopped and that the AI Data, etc. be returned or destroyed.
5. The AI Data, etc. that have been provided may not be transferred, provided, rented, or sold to another corporation, organization, or individual without the approval of the performing organizations, etc. and the National Information Society Agency.
6. All civil and criminal liability arising from use for purposes other than those under item 4, or from unauthorized transfer, provision, rental, or sale under item 5, rests with the corporation, organization, or individual that used the AI Data, etc.
7. If a user finds that a dataset provided on AI Hub contains personal information or the like, the user must immediately report the fact to AI Hub and delete the downloaded dataset.
8. De-identified information (including pseudonymized information) received from AI Hub must be used safely for purposes such as developing artificial intelligence services, and no act whatsoever intended to re-identify individuals from it is permitted.
9. Users must faithfully cooperate if the National Information Society Agency later conducts surveys on use cases, outcomes, and the like.
### How to Request a Data Download
1. To download the AI Data, etc. provided through AI Hub, a separate application involving identity verification, provision of information, and a statement of purpose is required.
2. Materials other than the AI Data, such as data descriptions and refinement tools, can be used without a separate application procedure or login.
3. For AI Data, etc. whose rights are held by an organization other than the National Information Society Agency, the use policy and download procedures of that organization must be followed; please note that these are unrelated to AI Hub. | false | 230 | false | KETI-AIR/aihub_admin_docs_mrc | 2022-11-02T10:19:12.000Z | null | false | e92eef786328238456e467d116c53d7d914c1e0e | [] | [
"license:apache-2.0"
] | https://huggingface.co/datasets/KETI-AIR/aihub_admin_docs_mrc/resolve/main/README.md | ---
license: apache-2.0
---
|
lmvasque | null | null | null | false | null | false | lmvasque/caes | 2022-11-11T18:09:24.000Z | null | false | 6fd41bb2494326e92dd46a92a1aeff50fbce4fdd | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/lmvasque/caes/resolve/main/README.md | ---
license: cc-by-4.0
---
## About this dataset
The [CAES](http://galvan.usc.es/caes/) [(Parodi, 2015)](https://www.tandfonline.com/doi/full/10.1080/23247797.2015.1084685?cookieSet=1) dataset, also referred to as the "Corpus de Aprendices del Español" (CAES), is a collection of texts created by Spanish L2 learners from Spanish learning centres and universities. These students had different learning levels, different backgrounds (11 native languages) and various levels of experience with the language. We used web scraping techniques to download a portion of the full dataset, since its current website only provides content filtered by categories that have to be manually selected. The readability level of each text in CAES follows the [Common European Framework of Reference for Languages (CEFR)](https://www.coe.int/en/web/common-european-framework-reference-languages). The [raw version](https://huggingface.co/datasets/lmvasque/caes/blob/main/caes.raw.csv) of this corpus also contains information about the learners and the type of assignments they were given to create each text.
We have downloaded this dataset from its original [website](https://galvan.usc.es/caes/search) to make it available to the community. If you use this data, please credit the original author and our work as well (see citations below).
## About the splits
We have uploaded two versions of the CAES corpus:
- **caes.raw.csv**: raw data from the website with no further filtering. It includes information about the learners and the type/topic of their assignments.
- **caes.jsonl**: this data is limited to the text samples, the original readability levels, and our standardised categories derived from them: simple/complex and basic/intermediate/advanced. You can find more details about these splits in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
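As an illustration of the two category schemes mentioned above, here is one plausible CEFR mapping. This is an assumption for illustration only: the exact simple/complex threshold used in the published splits is not stated in this card, and the names below are hypothetical:

```python
# Hypothetical mapping (assumption, not taken from the dataset): CEFR label ->
# (2-way, 3-way) readability categories as described in the card.
CEFR_3WAY = {
    "A1": "basic", "A2": "basic",
    "B1": "intermediate", "B2": "intermediate",
    "C1": "advanced", "C2": "advanced",
}

def categorise(cefr_level):
    """Map a CEFR label to the 2-way and 3-way schemes; A-levels are treated
    as 'simple' and everything else as 'complex' (an assumed threshold)."""
    three_way = CEFR_3WAY[cefr_level.upper()[:2]]
    two_way = "simple" if three_way == "basic" else "complex"
    return two_way, three_way

print(categorise("B1"))  # ('complex', 'intermediate')
```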
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)"
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
We have extracted the CAES corpus from their [website](https://galvan.usc.es/caes/search). If you use this corpus, please also cite their work as follows:
```
@article{Parodi2015,
author = "Giovanni Parodi",
title = "Corpus de aprendices de espaรฑol (CAES)",
journal = "Journal of Spanish Language Teaching",
volume = "2",
number = "2",
pages = "194-200",
year = "2015",
publisher = "Routledge",
doi = "10.1080/23247797.2015.1084685",
URL = "https://doi.org/10.1080/23247797.2015.1084685",
eprint = "https://doi.org/10.1080/23247797.2015.1084685"
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). |
lmvasque | null | null | null | false | null | false | lmvasque/coh-metrix-esp | 2022-11-11T17:44:04.000Z | null | false | 189a95069a1544141fd9c21f638b979b106460f1 | [] | [
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/lmvasque/coh-metrix-esp/resolve/main/README.md | ---
license: cc-by-sa-4.0
---
## About this dataset
The dataset Coh-Metrix-Esp (Cuentos) [(Quispesaravia et al., 2016)](https://aclanthology.org/L16-1745/) is a collection of 100 documents consisting of 50 children's fables ("simple" texts) and 50 stories for adults ("complex" texts) scraped from the web. If you use this data, please credit the original website and our work as well (see citations below).
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
#### Coh-Metrix-Esp (Cuentos)
```
@inproceedings{quispesaravia-etal-2016-coh,
title = "{C}oh-{M}etrix-{E}sp: A Complexity Analysis Tool for Documents Written in {S}panish",
author = "Quispesaravia, Andre and
Perez, Walter and
Sobrevilla Cabezudo, Marco and
Alva-Manchego, Fernando",
booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
month = may,
year = "2016",
address = "Portoro{\v{z}}, Slovenia",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L16-1745",
pages = "4694--4698",
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). |
lmvasque | null | null | null | false | null | false | lmvasque/hablacultura | 2022-11-11T17:42:13.000Z | null | false | 9a9ece7cc079929fb0902994f71e5c63f4284e11 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/lmvasque/hablacultura/resolve/main/README.md | ---
license: cc-by-4.0
---
## About this dataset
This dataset was collected from [HablaCultura.com](https://hablacultura.com/) a website with resources for Spanish students, labeled by instructors following the [Common European Framework of Reference for Languages (CEFR)](https://www.coe.int/en/web/common-european-framework-reference-languages). We have scraped the freely available articles from its original [website](https://hablacultura.com/) to make it available to the community. If you use this data, please credit the original [website](https://hablacultura.com/) and our work as well.
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark). |
lmvasque | null | null | null | false | null | false | lmvasque/kwiziq | 2022-11-11T17:40:47.000Z | null | false | b8ec1babb569f217a0248fb05f8323539bf90d96 | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/lmvasque/kwiziq/resolve/main/README.md | ---
license: cc-by-4.0
---
## About this dataset
This dataset was collected from [kwiziq.com](https://www.kwiziq.com/), a website dedicated to aiding Spanish learning through automated methods. It also provides articles at different CEFR-based levels. We have scraped the freely available articles from the original [website](https://www.kwiziq.com/) to make them available to the community. If you use this data, please credit the original [website](https://www.kwiziq.com/) and our work as well.
## Citation
If you use our splits in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)".
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
You can also find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
|
bertcryss | null | null | null | false | 4 | false | bertcryss/my_dataset | 2022-11-02T10:51:49.000Z | null | false | 6d30a49b0ec390c1d2df97104bf7a0ae3c772434 | [] | [] | https://huggingface.co/datasets/bertcryss/my_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 89233.0
num_examples: 1
download_size: 84560
dataset_size: 89233.0
---
# Dataset Card for "my_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Julie1901 | null | null | null | false | null | false | Julie1901/pictures | 2022-11-02T11:10:16.000Z | null | false | 7b732531620accba4bbedd431b7f8a6100be6d41 | [] | [] | https://huggingface.co/datasets/Julie1901/pictures/resolve/main/README.md | |
loubnabnl | null | null | null | false | 6 | false | loubnabnl/pii_labeling_dataset | 2022-11-02T12:40:15.000Z | null | false | acc74af5096f2f78f2714cc780dafe46c40e9cb7 | [] | [] | https://huggingface.co/datasets/loubnabnl/pii_labeling_dataset/resolve/main/README.md | ---
dataset_info:
features:
- name: content
dtype: string
- name: licenses
sequence: string
- name: repository_name
dtype: string
- name: path
dtype: string
- name: size
dtype: int64
- name: lang
dtype: string
splits:
- name: train
num_bytes: 8303808.681818182
num_examples: 1000
download_size: 3542729
dataset_size: 8303808.681818182
---
# Dataset Card for "pii_labeling_dataset"
Dataset for PII annotation with 1000 random files from 11 different programming languages in [the-stack-smol](https://huggingface.co/datasets/bigcode/the-stack-smol). Below is the number of samples in each language.
```python
{"python": 200, "c++": 200, "javascript": 100, "java": 100, "typescript": 100, "php": 100, "c": 40, "c-sharp": 40, "markdown": 40, "go":40, "ruby": 40}
``` |
lewtun | null | null | null | false | 1 | false | lewtun/audio-test-push | 2022-11-02T11:36:48.000Z | null | false | 9361d38c024c137755d8cefe9be826dc16be4885 | [] | [] | https://huggingface.co/datasets/lewtun/audio-test-push/resolve/main/README.md | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: song_id
dtype: int64
- name: genre_id
dtype: int64
- name: genre
dtype: string
splits:
- name: test
num_bytes: 3994705.0
num_examples: 10
- name: train
num_bytes: 3738678.0
num_examples: 10
download_size: 7730848
dataset_size: 7733383.0
---
# Dataset Card for "audio-test-push"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ghomasHudson | null | null | null | false | 1 | false | ghomasHudson/muld_OpenSubtitles | 2022-11-02T11:56:13.000Z | null | false | a5e76a325594cc02dfb1cba47f07c497ab01bf60 | [] | [] | https://huggingface.co/datasets/ghomasHudson/muld_OpenSubtitles/resolve/main/README.md | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: metadata
dtype: string
splits:
- name: test
num_bytes: 176793874
num_examples: 1385
- name: train
num_bytes: 1389584660
num_examples: 27749
download_size: 967763941
dataset_size: 1566378534
---
# Dataset Card for "muld_OpenSubtitles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ghomasHudson | null | null | null | false | null | false | ghomasHudson/muld_AO3_Style_Change_Detection | 2022-11-02T12:06:59.000Z | null | false | 282a412b73478e5e843367c5ece3d3f8660f05b0 | [] | [] | https://huggingface.co/datasets/ghomasHudson/muld_AO3_Style_Change_Detection/resolve/main/README.md | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: metadata
dtype: string
splits:
- name: test
num_bytes: 282915635
num_examples: 2352
- name: train
num_bytes: 762370660
num_examples: 6354
- name: validation
num_bytes: 83699681
num_examples: 705
download_size: 677671983
dataset_size: 1128985976
---
# Dataset Card for "muld_AO3_Style_Change_Detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
frankier | null | null | null | false | 27 | false | frankier/processed_multiscale_rt_critics | 2022-11-07T07:45:06.000Z | null | false | 7da6c071ef1567fe3af348832bbd12f379160ad0 | [] | [] | https://huggingface.co/datasets/frankier/processed_multiscale_rt_critics/resolve/main/README.md | ---
dataset_info:
features:
- name: movie_title
dtype: string
- name: publisher_name
dtype: string
- name: critic_name
dtype: string
- name: review_content
dtype: string
- name: review_score
dtype: string
- name: grade_type
dtype: string
- name: orig_num
dtype: float32
- name: orig_denom
dtype: float32
- name: label
dtype: uint8
- name: scale_points
dtype: uint8
- name: multiplier
dtype: uint8
- name: group_id
dtype: uint32
splits:
- name: test
num_bytes: 32106586
num_examples: 148289
- name: train
num_bytes: 131588808
num_examples: 607259
download_size: 74091855
dataset_size: 163695394
---
# Dataset Card for "processed_multiscale_rt_critics"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ghomasHudson | null | null | null | false | null | false | ghomasHudson/muld_NarrativeQA | 2022-11-02T12:24:41.000Z | null | false | 63b6d26bb53a87c2b8ea9c9428bee6ab7a7532ef | [] | [] | https://huggingface.co/datasets/ghomasHudson/muld_NarrativeQA/resolve/main/README.md | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
sequence: string
splits:
- name: test
num_bytes: 3435452065
num_examples: 10143
- name: train
num_bytes: 11253796383
num_examples: 32747
- name: validation
num_bytes: 1176625993
num_examples: 3373
download_size: 8819172017
dataset_size: 15865874441
---
# Dataset Card for "muld_NarrativeQA"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
GEM | null | @misc{gehrmann2022TaTA,
Author = {Sebastian Gehrmann and Sebastian Ruder and Vitaly Nikolaev and Jan A. Botha and Michael Chavinda and Ankur Parikh and Clara Rivera},
Title = {TaTa: A Multilingual Table-to-Text Dataset for African Languages},
Year = {2022},
Eprint = {arXiv:2211.00142},
} | Dataset loader for TaTA: A Multilingual Table-to-Text Dataset for African Languages | false | null | false | GEM/TaTA | 2022-11-03T14:23:59.000Z | null | false | 8df0b33afd830cd72656e23c6b1cedec2b285b37 | [] | [
"arxiv:2211.00142",
"arxiv:2112.12870",
"annotations_creators:none",
"language_creators:unknown",
"language:ar",
"language:en",
"language:fr",
"language:ha",
"language:ig",
"language:pt",
"language:ru",
"language:sw",
"language:yo",
"multilinguality:yes",
"size_categories:unknown",
"source_datasets:original",
"task_categories:table-to-text",
"tags:data-to-text",
"license:cc-by-sa-4.0"
] | https://huggingface.co/datasets/GEM/TaTA/resolve/main/README.md | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- ar
- en
- fr
- ha
- ig
- pt
- ru
- sw
- yo
multilinguality:
- yes
size_categories:
- unknown
source_datasets:
- original
task_categories:
- table-to-text
task_ids: []
pretty_name: TaTA
tags:
- data-to-text
license: cc-by-sa-4.0
dataset_info:
features:
- name: gem_id
dtype: string
- name: example_id
dtype: string
- name: title
dtype: string
- name: unit_of_measure
dtype: string
- name: chart_type
dtype: string
- name: was_translated
dtype: string
- name: table_data
dtype: string
- name: linearized_input
dtype: string
- name: table_text
sequence: string
- name: target
dtype: string
splits:
- name: ru
num_bytes: 308435
num_examples: 210
- name: test
num_bytes: 1691383
num_examples: 763
- name: train
num_bytes: 10019272
num_examples: 6962
- name: validation
num_bytes: 1598442
num_examples: 754
download_size: 18543506
dataset_size: 13617532
---
# Dataset Card for GEM/TaTA
## Dataset Description
- **Homepage:** https://github.com/google-research/url-nlp
- **Repository:** https://github.com/google-research/url-nlp
- **Paper:** https://arxiv.org/abs/2211.00142
- **Leaderboard:** https://github.com/google-research/url-nlp
- **Point of Contact:** Sebastian Ruder
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/TaTA).
### Dataset Summary
Existing data-to-text generation datasets are mostly limited to English. Table-to-Text in African languages (TaTA) addresses this lack of data as the first large multilingual table-to-text dataset with a focus on African languages. TaTA was created by transcribing figures and accompanying text in bilingual reports by the Demographic and Health Surveys Program, followed by professional translation to make the dataset fully parallel. TaTA includes 8,700 examples in nine languages including four African languages (Hausa, Igbo, Swahili, and Yorรนbรก) and a zero-shot test language (Russian).
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/TaTA')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/TaTA).
#### website
[Github](https://github.com/google-research/url-nlp)
#### paper
[ArXiv](https://arxiv.org/abs/2211.00142)
#### authors
Sebastian Gehrmann, Sebastian Ruder , Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur Parikh, Clara Rivera
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research/url-nlp)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/google-research/url-nlp)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ArXiv](https://arxiv.org/abs/2211.00142)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@misc{gehrmann2022TaTA,
Author = {Sebastian Gehrmann and Sebastian Ruder and Vitaly Nikolaev and Jan A. Botha and Michael Chavinda and Ankur Parikh and Clara Rivera},
Title = {TaTa: A Multilingual Table-to-Text Dataset for African Languages},
Year = {2022},
Eprint = {arXiv:2211.00142},
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Sebastian Ruder
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
ruder@google.com
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
yes
#### Leaderboard Link
<!-- info: Provide a link to the leaderboard. -->
<!-- scope: periscope -->
[Github](https://github.com/google-research/url-nlp)
#### Leaderboard Details
<!-- info: Briefly describe how the leaderboard evaluates models. -->
<!-- scope: microscope -->
The paper introduces StATA, a metric trained on human ratings, which is used to rank approaches submitted to the leaderboard.
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Portuguese`, `Arabic`, `French`, `Hausa`, `Swahili (macrolanguage)`, `Igbo`, `Yoruba`, `Russian`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
The language is taken from reports by the demographic and health surveys program.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset poses significant reasoning challenges and is thus meant as a way to assess the verbalization and reasoning capabilities of structure-to-text models.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Data-to-Text
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Summarize key information from a table in a single sentence.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Google Research
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Sebastian Gehrmann, Sebastian Ruder , Vitaly Nikolaev, Jan A. Botha, Michael Chavinda, Ankur Parikh, Clara Rivera
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Google Research
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Sebastian Gehrmann (Google Research)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `example_id`: The ID of the example. Each ID (e.g., `AB20-ar-1`) consists of three parts: the document ID, the language ISO 639-1 code, and the index of the table within the document.
- `title`: The title of the table.
- `unit_of_measure`: A description of the numerical value of the data. E.g., percentage of households with clean water.
- `chart_type`: The kind of chart associated with the data. We consider the following (normalized) types: horizontal bar chart, map chart, pie graph, tables, line chart, pie chart, vertical chart type, line graph, vertical bar chart, and other.
- `was_translated`: Whether the table was transcribed in the original language of the report or translated.
- `table_data`: The table content as a JSON-encoded string of a two-dimensional list, organized by row from left to right, starting at the top of the table. The number of cells varies per table; empty cells are represented as empty string values.
- `table_text`: The sentences describing each table, encoded as a JSON array with one item per sentence. The number of sentences varies per table.
- `linearized_input`: A single string containing the table content separated by vertical bars (`|`): the title, the unit of measure, and the content of each cell together with its row and column headers in parentheses, e.g., (Medium Empowerment, Mali, 17.9).
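Two of these fields usually need light post-processing: `table_data` must be JSON-decoded, and the language of an example can be read off its `example_id`. A minimal sketch (the helper names are ours, not part of the dataset):

```python
import json

def decode_table(table_data: str):
    """Decode the JSON-encoded 2-D list into (column headers, data rows)."""
    rows = json.loads(table_data)
    return rows[0], rows[1:]  # the first row holds the column headers

def language_of(example_id: str) -> str:
    """example_id has the form '<document>-<language>-<table index>'."""
    return example_id.split("-")[-2]

headers, rows = decode_table('[["", "Child mortality"], ["1990 JPFHS", 5]]')
lang = language_of("FR346-en-39")  # -> "en"
```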
#### Reason for Structure
<!-- info: How was the dataset structure determined? -->
<!-- scope: microscope -->
The structure includes all available information for the infographics on which the dataset is based.
#### How were labels chosen?
<!-- info: How were the labels chosen? -->
<!-- scope: microscope -->
Annotators looked through English text to identify sentences that describe an infographic. They then identified the corresponding location of the parallel non-English document. All sentences were extracted.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
"example_id": "FR346-en-39",
"title": "Trends in early childhood mortality rates",
"unit_of_measure": "Deaths per 1,000 live births for the 5-year period before the survey",
"chart_type": "Line chart",
"was_translated": "False",
"table_data": "[[\"\", \"Child mortality\", \"Neonatal mortality\", \"Infant mortality\", \"Under-5 mortality\"], [\"1990 JPFHS\", 5, 21, 34, 39], [\"1997 JPFHS\", 6, 19, 29, 34], [\"2002 JPFHS\", 5, 16, 22, 27], [\"2007 JPFHS\", 2, 14, 19, 21], [\"2009 JPFHS\", 5, 15, 23, 28], [\"2012 JPFHS\", 4, 14, 17, 21], [\"2017-18 JPFHS\", 3, 11, 17, 19]]",
"table_text": [
"neonatal, infant, child, and under-5 mortality rates for the 5 years preceding each of seven JPFHS surveys (1990 to 2017-18).",
"Under-5 mortality declined by half over the period, from 39 to 19 deaths per 1,000 live births.",
"The decline in mortality was much greater between the 1990 and 2007 surveys than in the most recent period.",
"Between 2012 and 2017-18, under-5 mortality decreased only modestly, from 21 to 19 deaths per 1,000 live births, and infant mortality remained stable at 17 deaths per 1,000 births."
],
"linearized_input": "Trends in early childhood mortality rates | Deaths per 1,000 live births for the 5-year period before the survey | (Child mortality, 1990 JPFHS, 5) (Neonatal mortality, 1990 JPFHS, 21) (Infant mortality, 1990 JPFHS, 34) (Under-5 mortality, 1990 JPFHS, 39) (Child mortality, 1997 JPFHS, 6) (Neonatal mortality, 1997 JPFHS, 19) (Infant mortality, 1997 JPFHS, 29) (Under-5 mortality, 1997 JPFHS, 34) (Child mortality, 2002 JPFHS, 5) (Neonatal mortality, 2002 JPFHS, 16) (Infant mortality, 2002 JPFHS, 22) (Under-5 mortality, 2002 JPFHS, 27) (Child mortality, 2007 JPFHS, 2) (Neonatal mortality, 2007 JPFHS, 14) (Infant mortality, 2007 JPFHS, 19) (Under-5 mortality, 2007 JPFHS, 21) (Child mortality, 2009 JPFHS, 5) (Neonatal mortality, 2009 JPFHS, 15) (Infant mortality, 2009 JPFHS, 23) (Under-5 mortality, 2009 JPFHS, 28) (Child mortality, 2012 JPFHS, 4) (Neonatal mortality, 2012 JPFHS, 14) (Infant mortality, 2012 JPFHS, 17) (Under-5 mortality, 2012 JPFHS, 21) (Child mortality, 2017-18 JPFHS, 3) (Neonatal mortality, 2017-18 JPFHS, 11) (Infant mortality, 2017-18 JPFHS, 17) (Under-5 mortality, 2017-18 JPFHS, 19)"
}
```
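The `linearized_input` string above can be rebuilt from the other fields; a rough sketch of that linearization (the `' | '` separator and `(column, row, value)` cell format follow the field description in this card, while the function itself is our assumption):

```python
import json

def linearize(title: str, unit: str, table_data: str) -> str:
    """Join title, unit of measure, and (column, row, value) cell triples."""
    table = json.loads(table_data)
    headers, cells = table[0], []
    for row in table[1:]:
        row_label = row[0]
        for col_label, value in zip(headers[1:], row[1:]):
            cells.append(f"({col_label}, {row_label}, {value})")
    return f"{title} | {unit} | " + " ".join(cells)
```

Applied to the example instance above, this reproduces the cells of its `linearized_input`, e.g. `(Child mortality, 1990 JPFHS, 5)`.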
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
- `Train`: Training set, includes examples with 0 or more references.
- `Validation`: Validation set, includes examples with 3 or more references.
- `Test`: Test set, includes examples with 3 or more references.
- `Ru`: Russian zero-shot set. Includes English and Russian examples (Russian is not included in any of the other splits).
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The same table across languages is always in the same split, i.e., if table X is in the test split in language A, it will also be in the test split in language B. In addition to filtering examples without transcribed table values, every example of the development and test splits has at least 3 references.
From the examples that fulfilled these criteria, 100 tables were sampled for both development and test for a total of 800 examples each. A manual review process excluded a few tables in each set, resulting in a training set of 6,962 tables, a development set of 752 tables, and a test set of 763 tables.
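One common way to enforce such a constraint (same table, same split, across all languages) is to key the split assignment on the document ID alone. A hedged sketch of that idea, not the authors' actual procedure:

```python
import hashlib

SPLITS = (("train", 0.75), ("validation", 0.15), ("test", 0.10))

def split_for(document_id: str) -> str:
    """Deterministically map a document ID to a split, so every language
    version of the same table (e.g. FR346-en-39, FR346-sw-39) agrees."""
    digest = hashlib.sha256(document_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # roughly uniform in [0, 1)
    cumulative = 0.0
    for name, share in SPLITS:
        cumulative += share
        if bucket < cumulative:
            return name
    return SPLITS[-1][0]  # guard against floating-point rounding
```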
#### Outliers
<!-- info: What does an outlier of the dataset in terms of length/perplexity/embedding look like? -->
<!-- scope: microscope -->
There are tables without references, without values, and others that are very large. The dataset is distributed as-is, but the paper describes multiple strategies to deal with data issues.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
There is no other multilingual data-to-text dataset that is parallel over languages. Moreover, over 70% of references in the dataset require reasoning and it is thus of very high quality and challenging for models.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
More languages, parallel across languages, grounded in infographics, not centered on Western entities or source documents
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
reasoning, verbalization, content planning
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
no
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
The background section of the [paper](https://arxiv.org/abs/2211.00142) provides a list of related datasets.
#### Technical Terms
<!-- info: Technical terms used in this card and the dataset and their definitions -->
<!-- scope: microscope -->
- `data-to-text`: Term that refers to NLP tasks in which the input is structured information and the output is natural language.
## Previous Results
### Previous Results
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`Other: Other Metrics`
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
`StATA`: A new metric associated with TaTA that is trained on human judgments and correlates with them much more strongly than existing metrics.
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
The creators used a human evaluation that measured [attribution](https://arxiv.org/abs/2112.12870) and reasoning capabilities of various models. Based on these ratings, they trained a new metric and showed that existing metrics fail to measure attribution.
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The curation rationale is to create a multilingual data-to-text dataset that is high-quality and challenging.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
The communicative goal is to describe a table in a single sentence.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The language was produced by USAID as part of the Demographic and Health Surveys program (https://dhsprogram.com/).
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The topics are related to fertility, family planning, maternal and child health, gender, and nutrition.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by crowdworker
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
expert created
#### Number of Raters
<!-- info: What is the number of raters -->
<!-- scope: telescope -->
11<n<50
#### Rater Qualifications
<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
Professional annotator who is a fluent speaker of the respective language
#### Raters per Training Example
<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
0
#### Raters per Test Example
<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
1
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
yes
#### Which Annotation Service
<!-- info: Which annotation services were used? -->
<!-- scope: periscope -->
`other`
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
The additional annotations are for system outputs and references and serve to develop metrics for this task.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by data curators
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Ratings were compared to a small (English) expert-curated set of ratings to ensure high agreement. There were additional rounds of training and feedback to annotators to ensure high quality judgments.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
In addition to data-to-text generation, the dataset can be used for translation or multimodal research.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
The DHS program only publishes aggregate survey information and thus, no personal information is included.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
yes
#### Details on how Dataset Addresses the Needs
<!-- info: Describe how this dataset addresses the needs of underserved communities. -->
<!-- scope: microscope -->
The dataset is focusing on data about African countries and the languages included in the dataset are spoken in Africa. It aims to improve the representation of African languages in the NLP and NLG communities.
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no
#### Are the Language Producers Representative of the Language?
<!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
<!-- scope: periscope -->
The language producers for this dataset are those employed by the DHS program which is a US-funded program. While the data is focused on African countries, there may be implicit western biases in how the data is presented.
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`
### Known Technical Limitations
#### Technical Limitations
<!-- info: Describe any known technical limitations, such as spurious correlations, train/test overlap, annotation biases, or mis-annotations, and cite the works that first identified these limitations when possible. -->
<!-- scope: microscope -->
While tables were transcribed in the available languages, the majority of the tables were published with English as the first language. Professional translators were used to translate the data, which makes it plausible that some translationese exists in the data. Moreover, some collected reference sentences are unavoidably only partially entailed by the source tables.
#### Unsuited Applications
<!-- info: When using a model trained on this dataset in a setting where users or the public may interact with its predictions, what are some pitfalls to look out for? In particular, describe some applications of the general task featured in this dataset that its curation or properties make it less suitable for. -->
<!-- scope: microscope -->
The domain of health reports includes potentially sensitive topics relating to reproduction, violence, sickness, and death. Perceived negative values could be used to amplify stereotypes about people from the respective regions or countries. The intended academic use of this dataset is to develop and evaluate models that neutrally report the content of these tables; applications that use the outputs to make value judgments are discouraged.
|
alfredodeza | null | null | null | false | 2 | false | alfredodeza/world-junior-championships-results | 2022-11-02T15:41:33.000Z | null | false | dfbc45e3c26ef1a03ef6e9e8c5e3d3da3ffc50f9 | [] | [
"license:mit"
] | https://huggingface.co/datasets/alfredodeza/world-junior-championships-results/resolve/main/README.md | ---
license: mit
---
|
ficsort | null | """
_FEATURES = Features(
{
"id": Value("int32"),
"tokens": Sequence(Value("string")),
"ner": Sequence(
ClassLabel(
names=[
"O",
"B-PER",
"I-PER",
"B-ORG",
"I-ORG",
"B-LOC",
"I-LOC",
"B-MISC",
"I-MISC",
]
)
),
"document_id": Value("int32"),
"sentence_id": Value("int32")
}
)
class SzegedNERConfig(BuilderConfig): | The recognition and classification of proper nouns and names in plain text is of key importance in Natural Language
Processing (NLP) as it has a beneficial effect on the performance of various types of applications, including
Information Extraction, Machine Translation, Syntactic Parsing/Chunking, etc. | false | 9 | false | ficsort/SzegedNER | 2022-11-02T15:56:22.000Z | null | false | 048280e285175987c092a96b6149c032fcecc0c7 | [] | [
"annotations_creators:expert-generated",
"language:hu",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"tags:hungarian",
"tags:szeged",
"tags:ner",
"task_categories:token-classification",
"task_ids:named-entity-recognition"
] | https://huggingface.co/datasets/ficsort/SzegedNER/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- hu
language_creators:
- other
license: []
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: SzegedNER
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- hungarian
- szeged
- ner
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Introduction
The recognition and classification of proper nouns and names in plain text is of key importance in Natural Language Processing (NLP) as it has a beneficial effect on the performance of various types of applications, including Information Extraction, Machine Translation, Syntactic Parsing/Chunking, etc.
## Corpus of Business Newswire Texts (business)
The Named Entity Corpus for Hungarian is a subcorpus of the Szeged Treebank, which contains full syntactic annotations done manually by linguist experts. A significant part of these texts has been annotated with Named Entity class labels in line with the annotation standards used on the CoNLL-2003 shared task.
Statistical data on Named Entities occurring in the corpus:
```
| tokens | phrases
------ | ------ | -------
non NE | 200067 |
PER | 1921 | 982
ORG | 20433 | 10533
LOC | 1501 | 1294
MISC | 2041 | 1662
```
### Reference
> Gyรถrgy Szarvas, Richรกrd Farkas, Lรกszlรณ Felfรถldi, Andrรกs Kocsor, Jรกnos Csirik: Highly accurate Named Entity corpus for Hungarian. International Conference on Language Resources and Evaluation 2006, Genova (Italy)
## Criminal NE corpus (criminal)
The Hungarian National Corpus and its Heti Vilรกggazdasรกg (HVG) subcorpus provided the basis for corpus text selection: articles related to the topic of financially liable offences were selected and annotated for the categories person, organization, location and miscellaneous.
There are two annotated versions of the corpus. When preparing the tag-for-meaning annotation, our linguists took into consideration the context in which the Named Entity under investigation occurred, thus, it was not the primary sense of the Named Entity that determined the tag (e.g. Manchester=LOC) but its contextual reference (e.g. Manchester won the Premier League=ORG). As for tag-for-tag annotation, these cases were not differentiated: tags were always given on the basis of the primary sense.
Statistical data on Named Entities occurring in the corpus:
```
| tag-for-meaning | tag-for-tag
------ | --------------- | -----------
non NE | 200067 |
PER | 8101 | 8121
ORG | 8782 | 9480
LOC | 5049 | 5391
MISC | 1917 | 854
```
## Metadata

```yaml
dataset_info:
- config_name: business
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          0: O
          1: B-PER
          2: I-PER
          3: B-ORG
          4: I-ORG
          5: B-LOC
          6: I-LOC
          7: B-MISC
          8: I-MISC
  - name: document_id
    dtype: string
  - name: sentence_id
    dtype: string
  splits:
  - name: original
    num_bytes: 4452207
    num_examples: 9573
  - name: test
    num_bytes: 856798
    num_examples: 1915
  - name: train
    num_bytes: 3171931
    num_examples: 6701
  - name: validation
    num_bytes: 423478
    num_examples: 957
  download_size: 0
  dataset_size: 8904414
- config_name: criminal
  features:
  - name: id
    dtype: string
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          0: O
          1: B-PER
          2: I-PER
          3: B-ORG
          4: I-ORG
          5: B-LOC
          6: I-LOC
          7: B-MISC
          8: I-MISC
  - name: document_id
    dtype: string
  - name: sentence_id
    dtype: string
  splits:
  - name: original
    num_bytes: 2807970
    num_examples: 5375
  - name: test
    num_bytes: 520959
    num_examples: 1089
  - name: train
    num_bytes: 1989662
    num_examples: 3760
  - name: validation
    num_bytes: 297349
    num_examples: 526
  download_size: 0
  dataset_size: 5615940
```
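To illustrate the `ner_tags` encoding above: the class IDs map back to BIO labels, which in turn group into entity spans. A sketch (the ID-to-label list mirrors the `class_label` names above; the decoding helper is ours):

```python
NER_LABELS = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG",
              "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

def decode_entities(tokens, tag_ids):
    """Turn parallel token / tag-id sequences into (entity_text, type) pairs."""
    entities, current, current_type = [], [], None
    for token, tag_id in zip(tokens, tag_ids):
        label = NER_LABELS[tag_id]
        if label.startswith("B-"):
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [token], label[2:]
        elif label.startswith("I-") and current_type == label[2:]:
            current.append(token)
        else:  # "O", or an I- tag without a matching B-
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:
        entities.append((" ".join(current), current_type))
    return entities

# Hypothetical Hungarian example:
spans = decode_entities(["Kovács", "János", "Szegeden", "él"], [1, 2, 5, 0])
# -> [("Kovács János", "PER"), ("Szegeden", "LOC")]
```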
|
LiveEvil | null | null | null | false | 1 | false | LiveEvil/autotrain-data-testtextexists | 2022-11-03T15:55:01.000Z | null | false | 64335ac3f9bfae6f6e2b467c6c904820ede01999 | [] | [
"language:en"
] | https://huggingface.co/datasets/LiveEvil/autotrain-data-testtextexists/resolve/main/README.md | ---
language:
- en
task_categories:
- text-scoring
---
# AutoTrain Dataset for project: testtextexists
## Dataset Description
This dataset has been automatically processed by AutoTrain for project testtextexists.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "According to the National Soft Drink Association, the annual consumption of soda by the U.S. citizens is 600 cans",
"target": 66.0
},
{
"text": "Experts say new vaccines are fake!",
"target": 50.0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='float32', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 19 |
| valid | 18 |
|
Meiruofeng | null | null | null | false | 38 | false | Meiruofeng/test | 2022-11-05T03:28:10.000Z | null | false | c466f287741cdebbe8a01c14f11b0b3a10ba3b36 | [] | [] | https://huggingface.co/datasets/Meiruofeng/test/resolve/main/README.md | |
allenai | null | @inproceedings{Cohan2019EMNLP,
title={Pretrained Language Models for Sequential Sentence Classification},
author={Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld},
year={2019},
booktitle={EMNLP},
} | As a step toward better document-level understanding, we explore classification of a sequence of sentences into their corresponding categories, a task that requires understanding sentences in context of the document. Recent successful models for this task have used hierarchical models to contextualize sentence representations, and Conditional Random Fields (CRFs) to incorporate dependencies between subsequent labels. In this work, we show that pretrained language models, BERT (Devlin et al., 2018) in particular, can be used for this task to capture contextual dependencies without the need for hierarchical encoding nor a CRF. Specifically, we construct a joint sentence representation that allows BERT Transformer layers to directly utilize contextual information from all words in all sentences. Our approach achieves state-of-the-art results on four datasets, including a new dataset of structured scientific abstracts. | false | 16 | false | allenai/csabstruct | 2022-11-02T17:54:38.000Z | null | false | 82e266d8effde67520d50532587b5f000237b50a | [] | [
"arxiv:1909.04054",
"license:apache-2.0"
] | https://huggingface.co/datasets/allenai/csabstruct/resolve/main/README.md | ---
license: apache-2.0
---
# CSAbstruct
CSAbstruct was created as part of *"Pretrained Language Models for Sequential Sentence Classification"* ([ACL Anthology][2], [arXiv][1], [GitHub][6]).
It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][3] categories.
## Dataset Construction Details
CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles.
The key difference between this dataset and [PUBMED-RCT][3] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
Therefore, there is more variety in writing styles in CSAbstruct.
CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][4].
Each sentence is annotated by 5 workers on the [Figure-eight platform][5], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`.
We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers.
Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions.
A confidence score is associated with each instance, based on the annotators' initial accuracy and the agreement of all annotators on that instance.
We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.
Agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task.
Compared with [PUBMED-RCT][3], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
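The accuracy-weighted aggregation described above can be sketched as follows. This is a minimal illustration, not the authors' actual aggregation code; the worker IDs, accuracies, and votes are hypothetical:

```python
from collections import defaultdict

def aggregate_labels(annotations, annotator_accuracy):
    """Aggregate one sentence's labels by weighting each vote with the
    annotator's accuracy on the initial test questions.

    annotations: list of (annotator_id, label) pairs for one sentence.
    annotator_accuracy: dict mapping annotator_id -> accuracy in [0, 1].
    Returns (label, confidence), where confidence is the winning label's
    share of the total accuracy-weighted vote mass.
    """
    scores = defaultdict(float)
    for annotator_id, label in annotations:
        scores[label] += annotator_accuracy[annotator_id]
    label = max(scores, key=scores.get)
    confidence = scores[label] / sum(scores.values())
    return label, confidence

# Hypothetical example: three workers vote on one sentence
votes = [("w1", "METHOD"), ("w2", "METHOD"), ("w3", "RESULT")]
accuracy = {"w1": 0.9, "w2": 0.8, "w3": 0.75}
print(aggregate_labels(votes, accuracy)[0])  # METHOD
```

A scheme like this also yields the per-instance confidence scores used to pick the highest-confidence examples for the test partition.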
## Dataset Statistics
| Statistic | Avg ยฑ std |
|--------------------------|-------------|
| Doc length in sentences | 6.7 ยฑ 1.99 |
| Sentence length in words | 21.8 ยฑ 10.0 |
| Label | % in Dataset |
|---------------|--------------|
| `BACKGROUND` | 33% |
| `METHOD` | 32% |
| `RESULT` | 21% |
| `OBJECTIVE` | 12% |
| `OTHER` | 03% |
## Citation
If you use this dataset, please cite the following paper:
```
@inproceedings{Cohan2019EMNLP,
title={Pretrained Language Models for Sequential Sentence Classification},
author={Arman Cohan and Iz Beltagy and Daniel King and Bhavana Dalvi and Dan Weld},
year={2019},
booktitle={EMNLP},
}
```
[1]: https://arxiv.org/abs/1909.04054
[2]: https://aclanthology.org/D19-1383
[3]: https://github.com/Franck-Dernoncourt/pubmed-rct
[4]: https://aclanthology.org/N18-3011/
[5]: https://www.figure-eight.com/
[6]: https://github.com/allenai/sequential_sentence_classification
|
shunk031 | null | @INPROCEEDINGS{caesar2018cvpr,
title={COCO-Stuff: Thing and stuff classes in context},
author={Caesar, Holger and Uijlings, Jasper and Ferrari, Vittorio},
booktitle={Computer vision and pattern recognition (CVPR), 2018 IEEE conference on},
organization={IEEE},
year={2018}
} | COCO-Stuff augments all 164K images of the popular COCO dataset with pixel-level stuff annotations. These annotations can be used for scene understanding tasks like semantic segmentation, object detection and image captioning. | false | 1 | false | shunk031/cocostuff | 2022-11-04T01:53:17.000Z | null | false | 325d994d8b815216cdbafab67e5de47e62fc3931 | [] | [
"arxiv:1612.03716",
"language:en",
"license:cc-by-4.0",
"tags:computer-vision",
"tags:object-detection",
"tags:ms-coco",
"datasets:stuff-thing",
"datasets:stuff-only",
"metrics:accuracy",
"metrics:iou"
] | https://huggingface.co/datasets/shunk031/cocostuff/resolve/main/README.md | ---
language:
- en
license: cc-by-4.0
tags:
- computer-vision
- object-detection
- ms-coco
datasets:
- stuff-thing
- stuff-only
metrics:
- accuracy
- iou
---
# Dataset Card for COCO-Stuff
[](https://github.com/shunk031/huggingface-datasets_cocostuff/actions/workflows/ci.yaml)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage: https://github.com/nightrome/cocostuff
- Repository: https://github.com/nightrome/cocostuff
- Paper (preprint): https://arxiv.org/abs/1612.03716
- Paper (CVPR2018): https://openaccess.thecvf.com/content_cvpr_2018/html/Caesar_COCO-Stuff_Thing_and_CVPR_2018_paper.html
### Dataset Summary
COCO-Stuff is the largest existing dataset with dense stuff and thing annotations.
From the paper:
> Semantic classes can be either things (objects with a well-defined shape, e.g. car, person) or stuff (amorphous background regions, e.g. grass, sky). While lots of classification and detection works focus on thing classes, less attention has been given to stuff classes. Nonetheless, stuff classes are important as they allow to explain important aspects of an image, including (1) scene type; (2) which thing classes are likely to be present and their location (through contextual reasoning); (3) physical attributes, material types and geometric properties of the scene. To understand stuff and things in context we introduce COCO-Stuff, which augments all 164K images of the COCO 2017 dataset with pixel-wise annotations for 91 stuff classes. We introduce an efficient stuff annotation protocol based on superpixels, which leverages the original thing annotations. We quantify the speed versus quality trade-off of our protocol and explore the relation between annotation time and boundary complexity. Furthermore, we use COCO-Stuff to analyze: (a) the importance of stuff and thing classes in terms of their surface cover and how frequently they are mentioned in image captions; (b) the spatial relations between stuff and things, highlighting the rich contextual relations that make our dataset unique; (c) the performance of a modern semantic segmentation method on stuff and thing classes, and whether stuff is easier to segment than things.
### Dataset Preprocessing
### Supported Tasks and Leaderboards
### Languages
All annotations use English as the primary language.
## Dataset Structure
### Data Instances
When loading a specific configuration, users have to pass its name:
```python
from datasets import load_dataset
load_dataset("shunk031/cocostuff", "stuff-thing")
```
#### stuff-thing
An example looks as follows.
```json
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCA033C9C40>,
'image_filename': '000000000009.jpg',
'image_id': '9',
'width': 640,
'height': 480,
'objects': [
{
'object_id': '121',
'x': 0,
'y': 11,
'w': 640,
'h': 469,
'name': 'food-other'
},
{
'object_id': '143',
'x': 0,
'y': 0,
'w': 640,
'h': 480,
'name': 'plastic'
},
{
'object_id': '165',
'x': 0,
'y': 0,
'w': 319,
'h': 118,
'name': 'table'
},
{
'object_id': '183',
'x': 0,
'y': 2,
'w': 631,
'h': 472,
'name': 'unknown-183'
}
],
'stuff_map': <PIL.PngImagePlugin.PngImageFile image mode=L size=640x480 at 0x7FCA0222D880>,
}
```
#### stuff-only
An example looks as follows.
```json
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x480 at 0x7FCA033C9C40>,
'image_filename': '000000000009.jpg',
'image_id': '9',
'width': 640,
'height': 480,
'objects': [
{
'object_id': '121',
'x': 0,
'y': 11,
'w': 640,
'h': 469,
'name': 'food-other'
},
{
'object_id': '143',
'x': 0,
'y': 0,
'w': 640,
'h': 480,
'name': 'plastic'
},
{
'object_id': '165',
'x': 0,
'y': 0,
'w': 319,
'h': 118,
'name': 'table'
},
{
'object_id': '183',
'x': 0,
'y': 2,
'w': 631,
'h': 472,
'name': 'unknown-183'
}
]
}
```
### Data Fields
#### stuff-thing
- `image`: A `PIL.Image.Image` object containing the image.
- `image_id`: Unique numeric ID of the image.
- `image_filename`: File name of the image.
- `width`: Image width.
- `height`: Image height.
- `stuff_map`: A `PIL.Image.Image` object containing the Stuff + thing PNG-style annotations
- `objects`: Holds a list of `Object` data classes:
- `object_id`: Unique numeric ID of the object.
- `x`: x coordinate of bounding box's top left corner.
- `y`: y coordinate of bounding box's top left corner.
- `w`: Bounding box width.
- `h`: Bounding box height.
- `name`: object name
#### stuff-only
- `image`: A `PIL.Image.Image` object containing the image.
- `image_id`: Unique numeric ID of the image.
- `image_filename`: File name of the image.
- `width`: Image width.
- `height`: Image height.
- `objects`: Holds a list of `Object` data classes:
- `object_id`: Unique numeric ID of the object.
- `x`: x coordinate of bounding box's top left corner.
- `y`: y coordinate of bounding box's top left corner.
- `w`: Bounding box width.
- `h`: Bounding box height.
- `name`: object name
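The bounding boxes above use COCO's `(x, y, w, h)` convention: top-left corner plus width and height. A minimal helper, not part of the dataset loader, for converting to corner coordinates and computing box area:

```python
def xywh_to_xyxy(x, y, w, h):
    """Convert a COCO-style (top-left x, top-left y, width, height)
    box to corner coordinates (x1, y1, x2, y2)."""
    return (x, y, x + w, y + h)

def box_area(w, h):
    """Area of a bounding box in pixels."""
    return w * h

# The 'table' object from the example instance above
print(xywh_to_xyxy(0, 0, 319, 118))  # (0, 0, 319, 118)
print(box_area(319, 118))            # 37642
```

Corner coordinates are what most plotting and augmentation libraries expect, so a conversion like this is usually the first step when visualizing the `objects` field.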
### Data Splits
| name | train | validation |
|-------------|--------:|-----------:|
| stuff-thing | 118,280 | 5,000 |
| stuff-only | 118,280 | 5,000 |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
From the paper:
> COCO-Stuff contains 172 classes: 80 thing, 91 stuff, and 1 class unlabeled. The 80 thing classes are the same as in COCO [35]. The 91 stuff classes are curated by an expert annotator. The class unlabeled is used in two situations: if a label does not belong to any of the 171 predefined classes, or if the annotator cannot infer the label of a pixel.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
COCO-Stuff is a derivative work of the COCO dataset. The authors of COCO do not in any form endorse this work. Different licenses apply:
- COCO images: [Flickr Terms of use](http://cocodataset.org/#termsofuse)
- COCO annotations: [Creative Commons Attribution 4.0 License](http://cocodataset.org/#termsofuse)
- COCO-Stuff annotations & code: [Creative Commons Attribution 4.0 License](http://cocodataset.org/#termsofuse)
### Citation Information
```bibtex
@INPROCEEDINGS{caesar2018cvpr,
title={COCO-Stuff: Thing and stuff classes in context},
author={Caesar, Holger and Uijlings, Jasper and Ferrari, Vittorio},
booktitle={Computer vision and pattern recognition (CVPR), 2018 IEEE conference on},
organization={IEEE},
year={2018}
}
```
### Contributions
Thanks to [@nightrome](https://github.com/nightrome) for publishing the COCO-Stuff dataset.
|
Vanimal0221 | null | null | null | false | null | false | Vanimal0221/VaanceFace | 2022-11-02T19:17:09.000Z | null | false | 6b103e4b7fd9abf2d1aa6af0a2aa5ce8536af705 | [] | [
"license:artistic-2.0"
] | https://huggingface.co/datasets/Vanimal0221/VaanceFace/resolve/main/README.md | ---
license: artistic-2.0
---
|
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/sciamano | 2022-11-02T21:15:27.000Z | null | false | 88835bf225b88600767b73618ad4f6aa7ea4d77d | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/sciamano/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Sciamano Artist Embedding / Textual Inversion
## Usage
To use this embedding, you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"drawn by sciamano"```
If it is too strong, just add [] around it.
Trained until 14000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/xlHVUJ4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Nsqdc5Q.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/Av4NTd8.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/ctVCTiY.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/kO6IE4S.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/john_kafka | 2022-11-02T21:25:38.000Z | null | false | 768e7ebca5725cd852f4579d170a8726b061619d | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/john_kafka/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# John Kafka Artist Embedding / Textual Inversion
## Usage
To use this embedding, you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"drawn by john_kafka"```
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/aCnC1zv.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/FdBuWbG.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/1rkuXkZ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/5N9Wp7q.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/v2AkXjU.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Nerfgun3 | null | null | null | false | null | false | Nerfgun3/shatter_style | 2022-11-02T21:30:48.000Z | null | false | f480d9dfb53d9f3a663001496e929c9184cbeeea | [] | [
"language:en",
"tags:stable-diffusion",
"tags:text-to-image",
"license:creativeml-openrail-m"
] | https://huggingface.co/datasets/Nerfgun3/shatter_style/resolve/main/README.md | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Shatter Style Embedding / Textual Inversion
## Usage
To use this embedding, you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"drawn by shatter_style"```
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/ebXN3C2.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/7zUtEDQ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/uEuKyBP.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/qRJ5o3E.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/FybZxbO.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
hushell | null | null | null | false | null | false | hushell/meta_dataset_h5 | 2022-11-02T23:46:22.000Z | null | false | fb2872529db40dea4c95368a88460eef589a5763 | [] | [
"license:cc-by-nc-sa-4.0"
] | https://huggingface.co/datasets/hushell/meta_dataset_h5/resolve/main/README.md | ---
license: cc-by-nc-sa-4.0
---
|
connorhoehn | null | Connor Hoehn | null | false | 51 | false | connorhoehn/card_display_v1 | 2022-11-03T02:21:11.000Z | null | false | b62839591f22b070148a84e852aea9183a01778c | [] | [
"language:en"
] | https://huggingface.co/datasets/connorhoehn/card_display_v1/resolve/main/README.md | ---
language:
- en
dataset_info:
- config_name: card-detection
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
list:
- name: category_id
dtype:
class_label:
names:
0: boxed
1: grid
2: spread
3: stack
- name: image_id
dtype: string
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: iscrowd
dtype: bool
splits:
- name: train
download_size: 96890427
dataset_size: 0
- config_name: display-detection
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
list:
- name: category_id
dtype:
class_label:
names:
0: boxed
1: grid
2: spread
3: stack
- name: image_id
dtype: string
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: iscrowd
dtype: bool
splits:
- name: train
num_bytes: 42942
num_examples: 154
download_size: 96967919
dataset_size: 42942
---
|
liyongsea | null | null | PTB-XL, a large publicly available electrocardiography dataset | false | 21 | false | liyongsea/PTB-XL | 2022-11-03T15:57:19.000Z | null | false | a4d0d1862c7cb8176bcdf098ee2b11705dcb6800 | [] | [
"license:other"
] | https://huggingface.co/datasets/liyongsea/PTB-XL/resolve/main/README.md | ---
license: other
---
|
sabita9 | null | null | null | false | null | false | sabita9/mauricio-macri-2 | 2022-11-03T03:33:23.000Z | null | false | 48df4de700a2757b6122b4b3633aeb5c36120473 | [] | [
"license:mit"
] | https://huggingface.co/datasets/sabita9/mauricio-macri-2/resolve/main/README.md | ---
license: mit
---
|
juliensimon | null | null | null | false | 5 | false | juliensimon/food102-stockholm | 2022-11-03T05:21:44.000Z | null | false | 44a663ee108faca3a7b09990500bd566b3847e5d | [] | [] | https://huggingface.co/datasets/juliensimon/food102-stockholm/resolve/main/README.md | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
0: apple_pie
1: baby_back_ribs
2: baklava
3: beef_carpaccio
4: beef_tartare
5: beet_salad
6: beignets
7: bibimbap
8: bread_pudding
9: breakfast_burrito
10: bruschetta
11: caesar_salad
12: cannoli
13: caprese_salad
14: carrot_cake
15: ceviche
16: cheese_plate
17: cheesecake
18: chicken_curry
19: chicken_quesadilla
20: chicken_wings
21: chocolate_cake
22: chocolate_mousse
23: churros
24: clam_chowder
25: club_sandwich
26: crab_cakes
27: creme_brulee
28: croque_madame
29: cup_cakes
30: deviled_eggs
31: donuts
32: dumplings
33: edamame
34: eggs_benedict
35: escargots
36: falafel
37: filet_mignon
38: fish_and_chips
39: foie_gras
40: french_fries
41: french_onion_soup
42: french_toast
43: fried_calamari
44: fried_rice
45: frozen_yogurt
46: garlic_bread
47: gnocchi
48: greek_salad
49: grilled_cheese_sandwich
50: grilled_salmon
51: guacamole
52: gyoza
53: hamburger
54: hot_and_sour_soup
55: hot_dog
56: huevos_rancheros
57: hummus
58: ice_cream
59: lasagna
60: lobster_bisque
61: lobster_roll_sandwich
62: macaroni_and_cheese
63: macarons
64: miso_soup
65: mussels
66: nachos
67: omelette
68: onion_rings
69: oysters
70: pad_thai
71: paella
72: pancakes
73: panna_cotta
74: peking_duck
75: pho
76: pizza
77: pork_chop
78: poutine
79: prime_rib
80: pulled_pork_sandwich
81: ramen
82: ravioli
83: red_velvet_cake
84: risotto
85: samosa
86: sashimi
87: scallops
88: seaweed_salad
89: shrimp_and_grits
90: spaghetti_bolognese
91: spaghetti_carbonara
92: spring_rolls
93: steak
94: strawberry_shortcake
95: sushi
96: swedish_meatballs
97: tacos
98: takoyaki
99: tiramisu
100: tuna_tartare
101: waffles
splits:
- name: test
num_bytes: 1313331456.8001626
num_examples: 25301
- name: train
num_bytes: 3855197470.2528377
num_examples: 75900
download_size: 5154346740
dataset_size: 5168528927.053
---
# Dataset Card for "food102-stockholm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
0xJustin | null | null | null | false | 66 | false | 0xJustin/Dungeons-and-Diffusion | 2022-11-12T04:42:41.000Z | null | false | af96f4b3b543cab5d4501c2c49bc03ef8a85f2da | [] | [] | https://huggingface.co/datasets/0xJustin/Dungeons-and-Diffusion/resolve/main/README.md | This is the dataset! Not the .ckpt trained model - the model is located here: https://huggingface.co/0xJustin/Dungeons-and-Diffusion/tree/main
This dataset includes ~2500 images of fantasy RPG character art. This dataset has a distribution of races and classes, though only races are annotated right now.
Additionally, BLIP captions were generated for all examples.
Thus, there are two datasets: one with the human-generated race annotation formatted as 'D&D Character, {race}',
and one with BLIP captions formatted as 'D&D Character, {race} {caption}', for example: 'D&D Character, drow a woman with horns and horns'.
Distribution of races:
{'kenku': 31,
'drow': 162,
'tiefling': 285,
'dwarf': 116,
'dragonborn': 110,
'gnome': 72,
'orc': 184,
'aasimar': 74,
'kobold': 61,
'aarakocra': 24,
'tabaxi': 123,
'genasi': 126,
'human': 652,
'elf': 190,
'goblin': 80,
'halfling': 52,
'centaur': 22,
'firbolg': 76,
'goliath': 35}
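A quick sanity check on the counts above (pure Python; the dict repeats the listed distribution verbatim):

```python
# Race distribution as listed above
race_counts = {
    "kenku": 31, "drow": 162, "tiefling": 285, "dwarf": 116,
    "dragonborn": 110, "gnome": 72, "orc": 184, "aasimar": 74,
    "kobold": 61, "aarakocra": 24, "tabaxi": 123, "genasi": 126,
    "human": 652, "elf": 190, "goblin": 80, "halfling": 52,
    "centaur": 22, "firbolg": 76, "goliath": 35,
}

total = sum(race_counts.values())
shares = {race: n / total for race, n in race_counts.items()}
most_common = max(race_counts, key=race_counts.get)

print(total)                      # 2475, consistent with "~2500 images"
print(most_common)                # human
print(round(shares["human"], 3))  # 0.263
```

The class imbalance matters for fine-tuning: rare races like centaur and aarakocra have an order of magnitude fewer examples than human.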
There is a high chance some images are mislabelled! Please feel free to enrich this dataset with whatever attributes you think might be useful! |
J3H0X77K | null | null | null | false | null | false | J3H0X77K/CHAMOX | 2022-11-03T06:50:53.000Z | null | false | b66f8130f392e1d994cd96d646ac3a27ae93bdec | [] | [
"license:afl-3.0"
] | https://huggingface.co/datasets/J3H0X77K/CHAMOX/resolve/main/README.md | ---
license: afl-3.0
---
|
amphora | null | null | null | false | 3 | false | amphora/KorFin-ABSA | 2022-11-04T03:36:00.000Z | null | false | 22a9cbd93ddcece0c69ed46d54da12f78cd7088e | [] | [
"annotations_creators:expert-generated",
"language:ko",
"language_creators:expert-generated",
"license:apache-2.0",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:klue",
"tags:sentiment analysis",
"tags:aspect based sentiment analysis",
"tags:finance",
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:sentiment-classification"
] | https://huggingface.co/datasets/amphora/KorFin-ABSA/resolve/main/README.md | ---
annotations_creators:
- expert-generated
language:
- ko
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: KorFin-ABSA
size_categories:
- 1K<n<10K
source_datasets:
- klue
tags:
- sentiment analysis
- aspect based sentiment analysis
- finance
task_categories:
- text-classification
task_ids:
- topic-classification
- sentiment-classification
---
# Dataset Card for KorFin-ABSA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
The KorFin-ABSA includes 3,002 sentences with (aspect, polarity) pairs annotated. The sentences were collected from [KLUE-TC](https://klue-benchmark.com/tasks/66/overview/description). Annotation of the dataset is described in the paper (about to be published).
### Supported Tasks and Leaderboards
This dataset supports the following tasks:
* Aspect-Based Sentiment Classification
### Languages
Korean
## Dataset Structure
### Data Instances
Each instance consists of a single sentence, aspect, and corresponding polarity (POSITIVE/NEGATIVE/NEUTRAL).
```
{
"title": "LGU๏ผ 1๋ถ๊ธฐ ์์
์ต 1์ฒ706์ต์โฆ๋ง์ผํ
๋น์ฉ ๊ฐ์",
"aspect": "LG U+",
'sentiment': 'NEUTRAL',
'url': 'https://news.naver.com/main/read.nhn?mode=LS2D&mid=shm&sid1=105&sid2=227&oid=001&aid=0008363739',
'annotator_id': 'A_01',
'Type': 'single'
}
```
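For classification experiments, the sentiment strings can be mapped to integer labels. The label ordering below is an assumption for illustration, not prescribed by the dataset:

```python
LABEL2ID = {"NEGATIVE": 0, "NEUTRAL": 1, "POSITIVE": 2}  # ordering is an assumption

def encode_sentiment(example):
    """Return a copy of the example with an integer `label` field
    derived from its sentiment string."""
    example = dict(example)
    example["label"] = LABEL2ID[example["sentiment"]]
    return example

sample = {"aspect": "LG U+", "sentiment": "NEUTRAL"}
print(encode_sentiment(sample)["label"])  # 1
```

With the `datasets` library, such a function would typically be applied across the whole dataset via `dataset.map(encode_sentiment)`.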
### Data Fields
* `title`: the sentence (news headline) being annotated
* `aspect`: the aspect (entity) toward which the sentiment is expressed
* `sentiment`: one of `POSITIVE`, `NEGATIVE`, `NEUTRAL`
* `url`: URL of the source article
* `annotator_id`: ID of the annotator
* `Type`: annotation type (e.g. `single`)
### Data Splits
The dataset currently does not contain standard data splits.
## Additional Information
You can download the data via:
```
from datasets import load_dataset
dataset = load_dataset("amphora/KorFin-ABSA")
```
Please find more information about the code and how the data was collected in the paper (About to be added.).
### Licensing Information
[apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Citation Information
Please cite this data using:
```
About to be added
```
### Contributions
Thanks to [@Albertmade](https://github.com/h-albert-lee), [@amphora](https://github.com/guijinSON) for making this dataset.
|
nev | null | null | null | false | null | false | nev/worm-activity-data | 2022-11-03T09:02:28.000Z | null | false | 065fc8ac2f9921f39cd03a5003377589a48293ee | [] | [
"license:cc-by-4.0"
] | https://huggingface.co/datasets/nev/worm-activity-data/resolve/main/README.md | ---
license: cc-by-4.0
---
|
lewtun | null | null | null | false | 1 | false | lewtun/music_genres_small | 2022-11-03T13:36:49.000Z | null | false | ab4b90142da320df49a31aaa9fa8df1df67d123f | [] | [] | https://huggingface.co/datasets/lewtun/music_genres_small/resolve/main/README.md | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: song_id
dtype: int64
- name: genre_id
dtype: int64
- name: genre
dtype: string
splits:
- name: train
num_bytes: 392427659.9527852
num_examples: 1000
download_size: 390675126
dataset_size: 392427659.9527852
---
# Dataset Card for "music_genres_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Markmus | null | null | null | false | null | false | Markmus/amazon-shoe-reviews | 2022-11-03T13:41:50.000Z | null | false | 17e87976452beb6cd28dd83ee3b98604fca98632 | [] | [] | https://huggingface.co/datasets/Markmus/amazon-shoe-reviews/resolve/main/README.md | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: text
dtype: string
splits:
- name: test
num_bytes: 1871962.8
num_examples: 10000
- name: train
num_bytes: 16847665.2
num_examples: 90000
download_size: 10939033
dataset_size: 18719628.0
---
# Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Matthaios | null | null | null | false | 2 | false | Matthaios/amazon-shoe-reviews | 2022-11-03T13:43:56.000Z | null | false | c0e1f6c4ab0b7ec8268e9eed39185c002df10344 | [] | [] | https://huggingface.co/datasets/Matthaios/amazon-shoe-reviews/resolve/main/README.md | ---
dataset_info:
features:
- name: labels
dtype: int64
- name: text
dtype: string
splits:
- name: test
num_bytes: 1871962.8
num_examples: 10000
- name: train
num_bytes: 16847665.2
num_examples: 90000
download_size: 10939031
dataset_size: 18719628.0
---
# Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Wannita | null | null | null | false | null | false | Wannita/PyCoder | 2022-11-03T14:31:36.000Z | null | false | 62ad9144b47a3ea76959499677b60f5c45d189aa | [] | [
"license:mit"
] | https://huggingface.co/datasets/Wannita/PyCoder/resolve/main/README.md | ---
license: mit
---
|