# Mario Maker 2 user first clears Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets) ## Dataset Description The Mario Maker 2 user first clears dataset consists of 17.8 million first clears from Nintendo's online service totaling around 157MB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022. ### How to use it The Mario Maker 2 user first clears dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code: ```python from datasets import load_dataset ds = load_dataset("TheGreatRambler/mm2_user_first_cleared", streaming=True, split="train") print(next(iter(ds))) #OUTPUT: { 'pid': '14510618610706594411', 'data_id': 25199891 } ``` Each row is a unique first clear of the level denoted by `data_id`, achieved by the player denoted by `pid`. You can also download the full dataset. Note that this will download ~157MB: ```python ds = load_dataset("TheGreatRambler/mm2_user_first_cleared", split="train") ``` ## Data Structure ### Data Instances ```python { 'pid': '14510618610706594411', 'data_id': 25199891 } ``` ### Data Fields |Field|Type|Description| |---|---|---| |pid|string|The player ID of this user, an unsigned 64 bit integer as a string| |data_id|int|The data ID of the level this user first cleared| ### Data Splits The dataset only contains a train split. <!-- TODO create detailed statistics --> ## Dataset Creation The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset. ## Considerations for Using the Data The dataset contains no harmful language or depictions.
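Because every row is simply a `(pid, data_id)` pair, per-level popularity can be approximated by counting rows per `data_id`. A minimal sketch, sampling only the first 100,000 rows of the stream to keep the pass cheap (increase the sample for better estimates):

```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_first_cleared", streaming=True, split="train")

# Count first clears per level over a sample of the stream
first_clears_per_level = Counter(row["data_id"] for row in islice(ds, 100_000))
for data_id, count in first_clears_per_level.most_common(5):
    print(f"Level {data_id}: {count} first clears in sample")
```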
# Mario Maker 2 user world records Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets) ## Dataset Description The Mario Maker 2 user world records dataset consists of 15.3 million world records from Nintendo's online service totaling around 215MB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022. ### How to use it The Mario Maker 2 user world records dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code: ```python from datasets import load_dataset ds = load_dataset("TheGreatRambler/mm2_user_world_record", streaming=True, split="train") print(next(iter(ds))) #OUTPUT: { 'pid': '14510618610706594411', 'data_id': 24866513 } ``` Each row is a unique world record on the level denoted by `data_id`, held by the player denoted by `pid`. You can also download the full dataset. Note that this will download ~215MB: ```python ds = load_dataset("TheGreatRambler/mm2_user_world_record", split="train") ``` ## Data Structure ### Data Instances ```python { 'pid': '14510618610706594411', 'data_id': 24866513 } ``` ### Data Fields |Field|Type|Description| |---|---|---| |pid|string|The player ID of this user, an unsigned 64 bit integer as a string| |data_id|int|The data ID of the level this user holds the world record on| ### Data Splits The dataset only contains a train split. <!-- TODO create detailed statistics --> ## Dataset Creation The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset. ## Considerations for Using the Data The dataset contains no harmful language or depictions.
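The same row structure supports per-player aggregation, for example a rough leaderboard of record holders. A minimal sketch over a sample of the stream:

```python
from collections import Counter
from itertools import islice

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_user_world_record", streaming=True, split="train")

# Tally world records per player over a sample of the stream
records_per_player = Counter(row["pid"] for row in islice(ds, 100_000))
for pid, count in records_per_player.most_common(5):
    print(f"Player {pid}: {count} world records in sample")
```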
# Mario Maker 2 super worlds Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets) ## Dataset Description The Mario Maker 2 super worlds dataset consists of 289 thousand super worlds from Nintendo's online service totaling around 13.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022. ### How to use it The Mario Maker 2 super worlds dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code: ```python from datasets import load_dataset ds = load_dataset("TheGreatRambler/mm2_world", streaming=True, split="train") print(next(iter(ds))) #OUTPUT: { 'pid': '14510618610706594411', 'world_id': 'c96012bef256ba6b_20200513204805563301', 'worlds': 1, 'levels': 5, 'planet_type': 0, 'created': 1589420886, 'unk1': [some binary data], 'unk5': 3, 'unk6': 1, 'unk7': 1, 'thumbnail': [some binary data] } ``` Each row is a unique super world, denoted by `world_id`, created by the player denoted by `pid`. Thumbnails are JPEG binaries. `unk1` describes the super world itself, including the world map, but its format is currently unknown. You can also download the full dataset. Note that this will download ~13.5GB: ```python ds = load_dataset("TheGreatRambler/mm2_world", split="train") ``` ## Data Structure ### Data Instances ```python { 'pid': '14510618610706594411', 'world_id': 'c96012bef256ba6b_20200513204805563301', 'worlds': 1, 'levels': 5, 'planet_type': 0, 'created': 1589420886, 'unk1': [some binary data], 'unk5': 3, 'unk6': 1, 'unk7': 1, 'thumbnail': [some binary data] } ``` ### Data Fields |Field|Type|Description| |---|---|---| |pid|string|The player ID of the user who created this super world| |world_id|string|World ID| |worlds|int|Number of worlds| |levels|int|Number of levels| |planet_type|int|Planet type, enum below| |created|int|UTC timestamp of when this super world was created| |unk1|bytes|Unknown| |unk5|int|Unknown| |unk6|int|Unknown| |unk7|int|Unknown| |thumbnail|bytes|The thumbnail, as a JPEG binary| |thumbnail_url|string|The old URL of this thumbnail| |thumbnail_size|int|The filesize of this thumbnail| |thumbnail_filename|string|The filename of this thumbnail| ### Data Splits The dataset only contains a train split. ## Enums The dataset contains some enum integer fields. These can be used to convert back to their string equivalents: ```python SuperWorldPlanetType = { 0: "Earth", 1: "Moon", 2: "Sand", 3: "Green", 4: "Ice", 5: "Ringed", 6: "Red", 7: "Spiral" } ``` <!-- TODO create detailed statistics --> ## Dataset Creation The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset. ## Considerations for Using the Data The dataset consists of super worlds from many different Mario Maker 2 players globally, and as such harmful depictions could be present in their super world thumbnails.
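Since `thumbnail` holds a complete image file and `planet_type` maps through the enum above, a row can be inspected like this. A minimal sketch using Pillow, which detects the image format from the bytes themselves:

```python
import io

from datasets import load_dataset
from PIL import Image

SuperWorldPlanetType = {
    0: "Earth", 1: "Moon", 2: "Sand", 3: "Green",
    4: "Ice", 5: "Ringed", 6: "Red", 7: "Spiral"
}

ds = load_dataset("TheGreatRambler/mm2_world", streaming=True, split="train")
row = next(iter(ds))

# Decode the raw thumbnail bytes and report the planet type as a string
image = Image.open(io.BytesIO(row["thumbnail"]))
print(f"{row['world_id']}: {SuperWorldPlanetType[row['planet_type']]} planet, "
      f"{row['worlds']} world(s), {row['levels']} level(s), thumbnail {image.size}")
image.save("thumbnail." + image.format.lower())
```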
# Mario Maker 2 super world levels Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets) ## Dataset Description The Mario Maker 2 super world levels dataset consists of 3.3 million super world levels from Nintendo's online service and supplements `TheGreatRambler/mm2_world`. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022. ### How to use it You can load and iterate through the dataset with the following code: ```python from datasets import load_dataset ds = load_dataset("TheGreatRambler/mm2_world_levels", split="train") print(next(iter(ds))) #OUTPUT: { 'pid': '14510618610706594411', 'data_id': 19170881, 'ninjis': 23 } ``` Each row is a level, denoted by `data_id`, within a super world owned by the player `pid`. Each level records its number of ninjis (`ninjis`), a rough metric of its popularity. ## Data Structure ### Data Instances ```python { 'pid': '14510618610706594411', 'data_id': 19170881, 'ninjis': 23 } ``` ### Data Fields |Field|Type|Description| |---|---|---| |pid|string|The player ID of the user who created the super world with this level| |data_id|int|The data ID of the level| |ninjis|int|Number of ninjis shown on this level| ### Data Splits The dataset only contains a train split. <!-- TODO create detailed statistics --> ## Dataset Creation The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset. ## Considerations for Using the Data The dataset contains no harmful language or depictions.
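Since each row carries a ninji count, the dataset lends itself to simple aggregations, e.g. ranking super world owners by total ninjis. A minimal sketch (the full pass covers a few million rows, so expect it to take a moment):

```python
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_world_levels", split="train")

# Sum ninji counts per super world owner as a rough popularity ranking
ninjis_per_owner = defaultdict(int)
for row in ds:
    ninjis_per_owner[row["pid"]] += row["ninjis"]

top_owners = sorted(ninjis_per_owner.items(), key=lambda kv: kv[1], reverse=True)[:5]
for pid, total in top_owners:
    print(f"Player {pid}: {total} ninjis across their super world levels")
```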
# Mario Maker 2 ninjis Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets) ## Dataset Description The Mario Maker 2 ninjis dataset consists of 3 million ninji replays from Nintendo's online service totaling around 12.5GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022. ### How to use it The Mario Maker 2 ninjis dataset is a very large dataset, so for most use cases it is recommended to make use of the streaming API of `datasets`. You can load and iterate through the dataset with the following code: ```python from datasets import load_dataset ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train") print(next(iter(ds))) #OUTPUT: { 'data_id': 12171034, 'pid': '4748613890518923485', 'time': 83388, 'replay': [some binary data] } ``` Each row is a ninji run in the level denoted by `data_id`, performed by the player denoted by `pid`. The length of this ninji run is `time` in milliseconds. `replay` is a zlib-compressed binary format describing the animation frames and coordinates of the player throughout the run. The replay can be parsed as follows: ```python from datasets import load_dataset import zlib import struct ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train") row = next(iter(ds)) replay = zlib.decompress(row["replay"]) frames = struct.unpack(">I", replay[0x10:0x14])[0] character = replay[0x14] character_mapping = { 0: "Mario", 1: "Luigi", 2: "Toad", 3: "Toadette" } # player_state is between 0 and 14 and varies between gamestyles # as outlined below. Determining the gamestyle of a particular run # and rendering the level being played requires TheGreatRambler/mm2_ninji_level player_state_base = { 0: "Run/Walk", 1: "Jump", 2: "Swim", 3: "Climbing", 5: "Sliding", 7: "Dry bones shell", 8: "Clown car", 9: "Cloud", 10: "Boot", 11: "Walking cat" } player_state_nsmbu = { 4: "Sliding", 6: "Turnaround", 10: "Yoshi", 12: "Acorn suit", 13: "Propeller active", 14: "Propeller neutral" } player_state_sm3dw = { 4: "Sliding", 6: "Turnaround", 7: "Clear pipe", 8: "Cat down attack", 13: "Propeller active", 14: "Propeller neutral" } player_state_smb1 = { 4: "Link down slash", 5: "Crouching" } player_state_smw = { 10: "Yoshi", 12: "Cape" } print("Frames: %d\nCharacter: %s" % (frames, character_mapping[character])) current_offset = 0x3C # Ninji updates are reported every 4 frames for i in range((frames + 2) // 4): flags = replay[current_offset] >> 4 player_state = replay[current_offset] & 0x0F current_offset += 1 x = struct.unpack("<H", replay[current_offset:current_offset + 2])[0] current_offset += 2 y = struct.unpack("<H", replay[current_offset:current_offset + 2])[0] current_offset += 2 if flags & 0b00000110: unk1 = replay[current_offset] current_offset += 1 in_subworld = flags & 0b00001000 print("Frame %d:\n Flags: %s,\n Animation state: %d,\n X: %d,\n Y: %d,\n In subworld: %s" % (i, bin(flags), player_state, x, y, in_subworld)) #OUTPUT: Frames: 5006 Character: Mario Frame 0: Flags: 0b0, Animation state: 0, X: 2672, Y: 2288, In subworld: 0 Frame 1: Flags: 0b0, Animation state: 0, X: 2682, Y: 2288, In subworld: 0 Frame 2: Flags: 0b0, Animation state: 0, X: 2716, Y: 2288, In subworld: 0 ... 
Frame 1249: Flags: 0b0, Animation state: 1, X: 59095, Y: 3749, In subworld: 0 Frame 1250: Flags: 0b0, Animation state: 1, X: 59246, Y: 3797, In subworld: 0 Frame 1251: Flags: 0b0, Animation state: 1, X: 59402, Y: 3769, In subworld: 0 ``` You can also download the full dataset. Note that this will download ~12.5GB: ```python ds = load_dataset("TheGreatRambler/mm2_ninji", split="train") ``` ## Data Structure ### Data Instances ```python { 'data_id': 12171034, 'pid': '4748613890518923485', 'time': 83388, 'replay': [some binary data] } ``` ### Data Fields |Field|Type|Description| |---|---|---| |data_id|int|The data ID of the level this run occurred in| |pid|string|Player ID of the player| |time|int|Length in milliseconds of the run| |replay|bytes|Replay file of this run| ### Data Splits The dataset only contains a train split. <!-- TODO create detailed statistics --> ## Dataset Creation The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset. ## Considerations for Using the Data The dataset contains no harmful language or depictions.
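For downstream use (plotting runs, training models on movement), it is often enough to keep just the position track. A condensed variant of the parsing loop above that collects the `(x, y)` trajectory and derives a rough average horizontal speed, assuming the same replay layout documented above (positions are sampled every 4 frames):

```python
import struct
import zlib

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_ninji", streaming=True, split="train")
row = next(iter(ds))
replay = zlib.decompress(row["replay"])

frames = struct.unpack(">I", replay[0x10:0x14])[0]
trajectory = []
offset = 0x3C
for _ in range((frames + 2) // 4):
    flags = replay[offset] >> 4
    offset += 1
    x, y = struct.unpack("<HH", replay[offset:offset + 4])
    offset += 4
    if flags & 0b00000110:
        offset += 1  # skip the extra byte carried by some updates
    trajectory.append((x, y))

# Rough average horizontal speed in level units per second
duration_s = row["time"] / 1000
distance_x = trajectory[-1][0] - trajectory[0][0]
print(f"{len(trajectory)} samples, ~{distance_x / duration_s:.0f} units/s")
```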
# Mario Maker 2 ninji levels Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets) ## Dataset Description The Mario Maker 2 ninji levels dataset consists of 21 ninji levels from Nintendo's online service and complements `TheGreatRambler/mm2_ninji`. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of 1 month in February 2022. ### How to use it You can load and iterate through the dataset with the following code: ```python from datasets import load_dataset ds = load_dataset("TheGreatRambler/mm2_ninji_level", split="train") print(next(iter(ds))) #OUTPUT: { 'data_id': 12171034, 'name': 'Rolling Snowballs', 'description': 'Make your way through the snowfields, and keep an eye\nout for Spikes and Snow Pokeys! Stomping on Snow Pokeys\nwill turn them into small snowballs, which you can pick up\nand throw. Play this course as many times as you want,\nand see if you can find the fastest way to the finish!', 'uploaded': 1575532800, 'ended': 1576137600, 'gamestyle': 3, 'theme': 6, 'medal_time': 26800, 'clear_condition': 0, 'clear_condition_magnitude': 0, 'unk3_0': 1309513, 'unk3_1': 62629737, 'unk3_2': 4355893, 'unk5': 1, 'unk6': 0, 'unk9': 0, 'level_data': [some binary data] } ``` Each row is a ninji level denoted by `data_id`; the runs in `TheGreatRambler/mm2_ninji` refer to these levels. `level_data` uses the same format as `TheGreatRambler/mm2_level` and can be decoded with the provided Kaitai Struct file and `level.py`: ```python from datasets import load_dataset from kaitaistruct import KaitaiStream from io import BytesIO from level import Level import zlib ds = load_dataset("TheGreatRambler/mm2_ninji_level", split="train") level_data = next(iter(ds))["level_data"] level = Level(KaitaiStream(BytesIO(zlib.decompress(level_data)))) # NOTE level.overworld.objects is a fixed size (limitation of Kaitai struct) # must iterate by object_count or null objects will be included for i in range(level.overworld.object_count): obj = level.overworld.objects[i] print("X: %d Y: %d ID: %s" % (obj.x, obj.y, obj.id)) #OUTPUT: X: 1200 Y: 400 ID: ObjId.block X: 1360 Y: 400 ID: ObjId.block X: 1360 Y: 240 ID: ObjId.block X: 1520 Y: 240 ID: ObjId.block X: 1680 Y: 240 ID: ObjId.block X: 1680 Y: 400 ID: ObjId.block X: 1840 Y: 400 ID: ObjId.block X: 2000 Y: 400 ID: ObjId.block X: 2160 Y: 400 ID: ObjId.block X: 2320 Y: 400 ID: ObjId.block X: 2480 Y: 560 ID: ObjId.block X: 2480 Y: 720 ID: ObjId.block X: 2480 Y: 880 ID: ObjId.block X: 2160 Y: 880 ID: ObjId.block ``` ## Data Structure ### Data Instances ```python { 'data_id': 12171034, 'name': 'Rolling Snowballs', 'description': 'Make your way through the snowfields, and keep an eye\nout for Spikes and Snow Pokeys! Stomping on Snow Pokeys\nwill turn them into small snowballs, which you can pick up\nand throw. 
Play this course as many times as you want,\nand see if you can find the fastest way to the finish!', 'uploaded': 1575532800, 'ended': 1576137600, 'gamestyle': 3, 'theme': 6, 'medal_time': 26800, 'clear_condition': 0, 'clear_condition_magnitude': 0, 'unk3_0': 1309513, 'unk3_1': 62629737, 'unk3_2': 4355893, 'unk5': 1, 'unk6': 0, 'unk9': 0, 'level_data': [some binary data] } ``` ### Data Fields |Field|Type|Description| |---|---|---| |data_id|int|The data ID of this ninji level| |name|string|Name| |description|string|Description| |uploaded|int|UTC timestamp of when this was uploaded| |ended|int|UTC timestamp of when this event ended| |gamestyle|int|Gamestyle, enum below| |theme|int|Theme, enum below| |medal_time|int|Time to get a medal in milliseconds| |clear_condition|int|Clear condition, enum below| |clear_condition_magnitude|int|If applicable, the magnitude of the clear condition| |unk3_0|int|Unknown| |unk3_1|int|Unknown| |unk3_2|int|Unknown| |unk5|int|Unknown| |unk6|int|Unknown| |unk9|int|Unknown| |level_data|bytes|The zlib-compressed decrypted level data; a Kaitai Struct file is provided to read this| |one_screen_thumbnail|bytes|The one screen course thumbnail, as a JPEG binary| |one_screen_thumbnail_url|string|The old URL of this thumbnail| |one_screen_thumbnail_size|int|The filesize of this thumbnail| |one_screen_thumbnail_filename|string|The filename of this thumbnail| |entire_thumbnail|bytes|The entire course thumbnail, as a JPEG binary| |entire_thumbnail_url|string|The old URL of this thumbnail| |entire_thumbnail_size|int|The filesize of this thumbnail| |entire_thumbnail_filename|string|The filename of this thumbnail| ### Data Splits The dataset only contains a train split. ## Enums The dataset contains some enum integer fields. They match those used by `TheGreatRambler/mm2_level` for the most part, but they are reproduced below: ```python GameStyles = { 0: "SMB1", 1: "SMB3", 2: "SMW", 3: "NSMBU", 4: "SM3DW" } CourseThemes = { 0: "Overworld", 1: "Underground", 2: "Castle", 3: "Airship", 4: "Underwater", 5: "Ghost house", 6: "Snow", 7: "Desert", 8: "Sky", 9: "Forest" } ClearConditions = { 137525990: "Reach the goal without landing after leaving the ground.", 199585683: "Reach the goal after defeating at least/all (n) Mechakoopa(s).", 272349836: "Reach the goal after defeating at least/all (n) Cheep Cheep(s).", 375673178: "Reach the goal without taking damage.", 426197923: "Reach the goal as Boomerang Mario.", 436833616: "Reach the goal while wearing a Shoe.", 713979835: "Reach the goal as Fire Mario.", 744927294: "Reach the goal as Frog Mario.", 751004331: "Reach the goal after defeating at least/all (n) Larry(s).", 900050759: "Reach the goal as Raccoon Mario.", 947659466: "Reach the goal after defeating at least/all (n) Blooper(s).", 976173462: "Reach the goal as Propeller Mario.", 994686866: "Reach the goal while wearing a Propeller Box.", 998904081: "Reach the goal after defeating at least/all (n) Spike(s).", 1008094897: "Reach the goal after defeating at least/all (n) Boom Boom(s).", 1051433633: "Reach the goal while holding a Koopa Shell.", 1061233896: "Reach the goal after defeating at least/all (n) Porcupuffer(s).", 1062253843: "Reach the goal after defeating at least/all (n) Charvaargh(s).", 1079889509: "Reach the goal after defeating at least/all (n) Bullet Bill(s).", 1080535886: "Reach the goal after defeating at least/all (n) Bully/Bullies.", 1151250770: "Reach the goal while wearing a Goomba Mask.", 1182464856: "Reach the goal after defeating at least/all (n) Hop-Chops.", 
1219761531: "Reach the goal while holding a Red POW Block. OR Reach the goal after activating at least/all (n) Red POW Block(s).", 1221661152: "Reach the goal after defeating at least/all (n) Bob-omb(s).", 1259427138: "Reach the goal after defeating at least/all (n) Spiny/Spinies.", 1268255615: "Reach the goal after defeating at least/all (n) Bowser(s)/Meowser(s).", 1279580818: "Reach the goal after defeating at least/all (n) Ant Trooper(s).", 1283945123: "Reach the goal on a Lakitu's Cloud.", 1344044032: "Reach the goal after defeating at least/all (n) Boo(s).", 1425973877: "Reach the goal after defeating at least/all (n) Roy(s).", 1429902736: "Reach the goal while holding a Trampoline.", 1431944825: "Reach the goal after defeating at least/all (n) Morton(s).", 1446467058: "Reach the goal after defeating at least/all (n) Fish Bone(s).", 1510495760: "Reach the goal after defeating at least/all (n) Monty Mole(s).", 1656179347: "Reach the goal after picking up at least/all (n) 1-Up Mushroom(s).", 1665820273: "Reach the goal after defeating at least/all (n) Hammer Bro(s.).", 1676924210: "Reach the goal after hitting at least/all (n) P Switch(es). OR Reach the goal while holding a P Switch.", 1715960804: "Reach the goal after activating at least/all (n) POW Block(s). OR Reach the goal while holding a POW Block.", 1724036958: "Reach the goal after defeating at least/all (n) Angry Sun(s).", 1730095541: "Reach the goal after defeating at least/all (n) Pokey(s).", 1780278293: "Reach the goal as Superball Mario.", 1839897151: "Reach the goal after defeating at least/all (n) Pom Pom(s).", 1969299694: "Reach the goal after defeating at least/all (n) Peepa(s).", 2035052211: "Reach the goal after defeating at least/all (n) Lakitu(s).", 2038503215: "Reach the goal after defeating at least/all (n) Lemmy(s).", 2048033177: "Reach the goal after defeating at least/all (n) Lava Bubble(s).", 2076496776: "Reach the goal while wearing a Bullet Bill Mask.", 2089161429: "Reach the goal as Big Mario.", 2111528319: "Reach the goal as Cat Mario.", 2131209407: "Reach the goal after defeating at least/all (n) Goomba(s)/Galoomba(s).", 2139645066: "Reach the goal after defeating at least/all (n) Thwomp(s).", 2259346429: "Reach the goal after defeating at least/all (n) Iggy(s).", 2549654281: "Reach the goal while wearing a Dry Bones Shell.", 2694559007: "Reach the goal after defeating at least/all (n) Sledge Bro(s.).", 2746139466: "Reach the goal after defeating at least/all (n) Rocky Wrench(es).", 2749601092: "Reach the goal after grabbing at least/all (n) 50-Coin(s).", 2855236681: "Reach the goal as Flying Squirrel Mario.", 3036298571: "Reach the goal as Buzzy Mario.", 3074433106: "Reach the goal as Builder Mario.", 3146932243: "Reach the goal as Cape Mario.", 3174413484: "Reach the goal after defeating at least/all (n) Wendy(s).", 3206222275: "Reach the goal while wearing a Cannon Box.", 3314955857: "Reach the goal as Link.", 3342591980: "Reach the goal while you have Super Star invincibility.", 3346433512: "Reach the goal after defeating at least/all (n) Goombrat(s)/Goombud(s).", 3348058176: "Reach the goal after grabbing at least/all (n) 10-Coin(s).", 3353006607: "Reach the goal after defeating at least/all (n) Buzzy Beetle(s).", 3392229961: "Reach the goal after defeating at least/all (n) Bowser Jr.(s).", 3437308486: "Reach the goal after defeating at least/all (n) Koopa Troopa(s).", 3459144213: "Reach the goal after defeating at least/all (n) Chain Chomp(s).", 3466227835: "Reach the goal after defeating at 
least/all (n) Muncher(s).", 3481362698: "Reach the goal after defeating at least/all (n) Wiggler(s).", 3513732174: "Reach the goal as SMB2 Mario.", 3649647177: "Reach the goal in a Koopa Clown Car/Junior Clown Car.", 3725246406: "Reach the goal as Spiny Mario.", 3730243509: "Reach the goal in a Koopa Troopa Car.", 3748075486: "Reach the goal after defeating at least/all (n) Piranha Plant(s)/Jumping Piranha Plant(s).", 3797704544: "Reach the goal after defeating at least/all (n) Dry Bones.", 3824561269: "Reach the goal after defeating at least/all (n) Stingby/Stingbies.", 3833342952: "Reach the goal after defeating at least/all (n) Piranha Creeper(s).", 3842179831: "Reach the goal after defeating at least/all (n) Fire Piranha Plant(s).", 3874680510: "Reach the goal after breaking at least/all (n) Crates(s).", 3974581191: "Reach the goal after defeating at least/all (n) Ludwig(s).", 3977257962: "Reach the goal as Super Mario.", 4042480826: "Reach the goal after defeating at least/all (n) Skipsqueak(s).", 4116396131: "Reach the goal after grabbing at least/all (n) Coin(s).", 4117878280: "Reach the goal after defeating at least/all (n) Magikoopa(s).", 4122555074: "Reach the goal after grabbing at least/all (n) 30-Coin(s).", 4153835197: "Reach the goal as Balloon Mario.", 4172105156: "Reach the goal while wearing a Red POW Box.", 4209535561: "Reach the Goal while riding Yoshi.", 4269094462: "Reach the goal after defeating at least/all (n) Spike Top(s).", 4293354249: "Reach the goal after defeating at least/all (n) Banzai Bill(s)." } ``` <!-- TODO create detailed statistics --> ## Dataset Creation The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). As requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no intentions to create an updated release of this dataset. ## Considerations for Using the Data As these 21 levels were made and vetted by Nintendo, the dataset contains no harmful language or depictions.
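With the enum tables above, each of the 21 rows can be rendered human-readable. A minimal sketch; it inlines only the two small enums for brevity, and a `clear_condition` of 0 (as in the sample row) appears to mean no condition, while nonzero values index the `ClearConditions` table above:

```python
from datasets import load_dataset

GameStyles = {0: "SMB1", 1: "SMB3", 2: "SMW", 3: "NSMBU", 4: "SM3DW"}
CourseThemes = {
    0: "Overworld", 1: "Underground", 2: "Castle", 3: "Airship", 4: "Underwater",
    5: "Ghost house", 6: "Snow", 7: "Desert", 8: "Sky", 9: "Forest"
}

ds = load_dataset("TheGreatRambler/mm2_ninji_level", split="train")
for row in ds:
    # Look up row["clear_condition"] in the ClearConditions table above if nonzero
    print(f"{row['name']}: {GameStyles[row['gamestyle']]} style, "
          f"{CourseThemes[row['theme']]} theme, "
          f"medal time {row['medal_time'] / 1000:.1f}s")
```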
# Dataset Card for Literary fictions of Gallica ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://doi.org/10.5281/zenodo.4660197 - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The collection "Fiction littéraire de Gallica" includes 19,240 public domain documents from the digital platform of the French National Library that were originally classified as novels or, more broadly, as literary fiction in prose. It consists of 372 tables of data in TSV format, one for each available year of publication from 1600 to 1996 (the missing years all fall in the 17th and 20th centuries). Each table is structured at the page level of each novel (5,723,986 pages in all) and contains the complete text along with some metadata. It can be opened in Excel or, preferably, with the new data analysis environments in R or Python (tidyverse, pandas…). This corpus can be used for large-scale quantitative analyses in computational humanities. The OCR text is presented in a raw format without any correction or enrichment in order to be directly processed for text mining purposes. The extraction is based on a historical categorization of the novels: the Y2 or Ybis classification. This classification, invented in 1730, is the only one that has been continuously applied to the BNF collections now available in the public domain (mainly before 1950). Consequently, the dataset is based on a definition of "novel" that is generally contemporary with the publication. A French data paper (in PDF and HTML) presents the construction process of the Y2 category and describes the structuring of the corpus. It also gives several examples of possible uses for computational humanities projects. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances ``` { 'main_id': 'bpt6k97892392_p174', 'catalogue_id': 'cb31636383z', 'titre': "L'île du docteur Moreau", 'nom_auteur': 'Wells', 'prenom_auteur': 'Herbert George', 'date': 1946, 'document_ocr': 99, 'date_enligne': '07/08/2017', 'gallica': 'http://gallica.bnf.fr/ark:/12148/bpt6k97892392/f174', 'page': 174, 'texte': "_p_ dans leur expression et leurs gestes souples, d au- c tres semblables à des estropiés, ou si étrangement i défigurées qu'on eût dit les êtres qui hantent nos M rêves les plus sinistres. 
Au delà, se trouvaient d 'un côté les lignes onduleuses -des roseaux, de l'autre, s un dense enchevêtrement de palmiers nous séparant du ravin des 'huttes et, vers le Nord, l horizon brumeux du Pacifique. - _p_ — Soixante-deux, soixante-trois, compta Mo- H reau, il en manque quatre. J _p_ — Je ne vois pas l'Homme-Léopard, dis-je. | Tout à coup Moreau souffla une seconde fois dans son cor, et à ce son toutes les bêtes humai- ' nes se roulèrent et se vautrèrent dans la poussière. Alors se glissant furtivement hors des roseaux, rampant presque et essayant de rejoindre le cercle des autres derrière le dos de Moreau, parut l'Homme-Léopard. Le dernier qui vint fut le petit Homme-Singe. Les autres, échauffés et fatigués par leurs gesticulations, lui lancèrent de mauvais regards. _p_ — Assez! cria Moreau, de sa voix sonore et ferme. Toutes les bêtes s'assirent sur leurs talons et cessèrent leur adoration. - _p_ — Où est celui |qui enseigne la Loi? demanda Moreau." } ``` ### Data Fields - `main_id`: Unique identifier of the page of the novel. - `catalogue_id`: Identifier of the edition in the BNF catalogue. - `titre`: Title of the edition as it appears in the catalog. - `nom_auteur`: Author's last name. - `prenom_auteur`: Author's first name. - `date`: Year of the edition. - `document_ocr`: Estimated OCR quality for the whole document, as the percentage of words likely to be correctly recognized (from 1-100). - `date_enligne`: Date the digitized document was published online on Gallica. - `gallica`: URL of the document on Gallica. - `page`: Document page number (this is the pagination of the digital file, not that of the original document). - `texte`: Page text, as rendered by OCR. ### Data Splits The dataset contains a single "train" split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [Creative Commons Zero v1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/legalcode). ### Citation Information ``` @dataset{langlais_pierre_carl_2021_4751204, author = {Langlais, Pierre-Carl}, title = {{Fictions littéraires de Gallica / Literary fictions of Gallica}}, month = apr, year = 2021, publisher = {Zenodo}, version = 1, doi = {10.5281/zenodo.4751204}, url = {https://doi.org/10.5281/zenodo.4751204} } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
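For text mining it is often useful to restrict the corpus to pages from well-recognized documents using the `document_ocr` estimate. A minimal sketch, assuming the corpus has been loaded through `datasets`; the repo id below is a placeholder, so substitute the actual Hub id of this dataset:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual Hub id of this dataset
ds = load_dataset("user/fiction-litteraire-de-gallica", split="train")

# Keep only pages whose document-level OCR quality estimate is at least 90%
clean = ds.filter(lambda row: row["document_ocr"] >= 90)
print(f"{clean.num_rows} of {ds.num_rows} pages come from well-recognized documents")
```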
# Dataset Card for G-KOMET ### Dataset Summary G-KOMET 1.0 is a corpus of metaphorical expressions in spoken Slovene, covering around 50,000 lexical units across 5695 sentences. The corpus contains samples from the GOS corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It is also annotated with idioms and metonymies. Note that these are both annotated as metaphor types. This is different from the annotations in [KOMET](https://huggingface.co/datasets/cjvt/komet), where these are both considered a type of frame. We keep the data as untouched as possible and let the user decide how they want to handle this. ### Supported Tasks and Leaderboards Metaphor detection, metonymy detection, metaphor type classification, metaphor frame classification. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ``` { 'document_name': 'G-Komet001.xml', 'idx': 3, 'idx_paragraph': 0, 'idx_sentence': 3, 'sentence_words': ['no', 'zdaj', 'samo', 'še', 'za', 'eno', 'orientacijo'], 'met_type': [ {'type': 'MRWi', 'word_indices': [6]} ], 'met_frame': [ {'type': 'spatial_orientation', 'word_indices': [6]} ] } ``` The sentence comes from the document `G-Komet001.xml`; it is the 3rd sentence in the document and the 3rd sentence inside the 0th paragraph. The word "orientacijo" is annotated as an indirect metaphor-related word (`MRWi`). It is also annotated with the frame "spatial_orientation". ### Data Fields - `document_name`: a string containing the name of the document in which the sentence appears; - `idx`: a uint32 containing the index of the sentence inside its document; - `idx_paragraph`: a uint32 containing the index of the paragraph in which the sentence appears; - `idx_sentence`: a uint32 containing the index of the sentence inside its paragraph; - `sentence_words`: words in the sentence; - `met_type`: metaphors in the sentence, marked by their type and word indices; - `met_frame`: metaphor frames in the sentence, marked by their type (frame name) and word indices. ## Dataset Creation The corpus contains samples from the GOS corpus of spoken Slovene and includes a balanced set of transcriptions of informative, educational, entertaining, private, and public discourse. It contains hand-annotated metaphor-related words (linguistic expressions that have the potential to be interpreted as metaphors), idioms (multi-word units in which at least one word is used metaphorically), and metonymies (expressions that refer to a concept by the name of something closely associated with it). For more information, please see the paper (written in Slovenian) or contact the dataset author. ## Additional Information ### Dataset Curators Špela Antloga. ### Licensing Information CC BY-NC-SA 4.0 ### Citation Information ``` @InProceedings{antloga2022gkomet, title = {Korpusni pristopi za identifikacijo metafore in metonimije: primer metonimije v korpusu gKOMET}, author={Antloga, \v{S}pela}, booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities (Student papers)}, year={2022}, pages={271-277} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
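Since `met_type` stores word indices into `sentence_words`, the annotated surface forms are easy to recover. A minimal sketch; the repo id is an assumption by analogy with the sister dataset `cjvt/komet`, so verify it before use:

```python
from datasets import load_dataset

# Assumed repo id, by analogy with cjvt/komet -- verify before use
ds = load_dataset("cjvt/gkomet", split="train")

# Print the surface words behind each metaphor-type annotation
for sample in ds.select(range(10)):
    words = sample["sentence_words"]
    for ann in sample["met_type"]:
        marked = " ".join(words[i] for i in ann["word_indices"])
        print(f"{ann['type']}: {marked!r} in {' '.join(words)!r}")
```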
# How Resilient are Imitation Learning Methods to Sub-Optimal Experts? ## Related Work Trajectories used in [How Resilient are Imitation Learning Methods to Sub-Optimal Experts?]() The code that uses this data is on GitHub: https://github.com/NathanGavenski/How-resilient-IL-methods-are ## Structure These trajectories were generated using [Stable Baselines](https://stable-baselines.readthedocs.io/en/master/). Each file is a dictionary of a set of trajectories with the following keys: * actions: the action taken at timestep `t` * obs: the state at timestep `t` * rewards: the reward received after taking the action at timestep `t` * episode_returns: the aggregated reward of each episode (each file consists of 5000 runs) * episode_starts: whether that `obs` is the first state of an episode (boolean list) An example of recovering per-episode trajectories from these keys is sketched below. ## Citation Information ``` @inproceedings{gavenski2022how, title={How Resilient are Imitation Learning Methods to Sub-Optimal Experts?}, author={Nathan Gavenski and Juarez Monteiro and Adilson Medronha and Rodrigo Barros}, booktitle={2022 Brazilian Conference on Intelligent Systems (BRACIS)}, year={2022}, organization={IEEE} } ``` ## Contact: - [Nathan Schneider Gavenski](nathan.gavenski@edu.pucrs.br) - [Juarez Monteiro](juarez.santos@edu.pucrs.br) - [Adilson Medronha](adilson.medronha@edu.pucrs.br) - [Rodrigo C. Barros](rodrigo.barros@pucrs.br)
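Since `episode_starts` flags the first observation of each episode, a flat file can be cut back into per-episode trajectories. A minimal sketch, assuming the files are NumPy `.npz` archives in the Stable Baselines expert-trajectory format; the filename is a placeholder:

```python
import numpy as np

# Placeholder filename -- use one of the trajectory files from this repository
data = dict(np.load("expert_trajectories.npz", allow_pickle=True))

# Split the flat arrays into episodes at each True in episode_starts
starts = np.flatnonzero(data["episode_starts"])
episode_indices = np.split(np.arange(len(data["obs"])), starts[1:])

print(f"{len(episode_indices)} episodes, "
      f"mean return {data['episode_returns'].mean():.2f}")
first = episode_indices[0]
print("First episode:", data["obs"][first].shape, data["actions"][first].shape)
```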
# AutoTrain Dataset for project: nllb_600_ft ## Dataset Description This dataset has been automatically processed by AutoTrain for project nllb_600_ft. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "feat_id": "772", "feat_URL": "https://en.wikivoyage.org/wiki/Apia", "feat_domain": "wikivoyage", "feat_topic": "Travel", "feat_has_image": "0", "feat_has_hyperlink": "0", "text": "All the ships were sunk, except for one British cruiser. Nearly 200 American and German lives were lost.", "target": "\u0628\u0647\u200c\u062c\u0632 \u06cc\u06a9 \u06a9\u0634\u062a\u06cc \u062c\u0646\u06af\u06cc \u0627\u0646\u06af\u0644\u06cc\u0633\u06cc \u0647\u0645\u0647 \u06a9\u0634\u062a\u06cc\u200c\u0647\u0627 \u063a\u0631\u0642 \u0634\u062f\u0646\u062f\u060c \u0648 \u0646\u0632\u062f\u06cc\u06a9 \u0628\u0647 200 \u0646\u0641\u0631 \u0622\u0645\u0631\u06cc\u06a9\u0627\u06cc\u06cc \u0648 \u0622\u0644\u0645\u0627\u0646\u06cc \u062c\u0627\u0646 \u062e\u0648\u062f \u0631\u0627 \u0627\u0632 \u062f\u0633\u062a \u062f\u0627\u062f\u0646\u062f." }, { "feat_id": "195", "feat_URL": "https://en.wikinews.org/wiki/Mitt_Romney_wins_Iowa_Caucus_by_eight_votes_over_surging_Rick_Santorum", "feat_domain": "wikinews", "feat_topic": "Politics", "feat_has_image": "0", "feat_has_hyperlink": "0", "text": "Bachmann, who won the Ames Straw Poll in August, decided to end her campaign.", "target": "\u0628\u0627\u062e\u0645\u0646\u060c \u06a9\u0647 \u062f\u0631 \u0645\u0627\u0647 \u0622\u06af\u0648\u0633\u062a \u0628\u0631\u0646\u062f\u0647 \u0646\u0638\u0631\u0633\u0646\u062c\u06cc \u0622\u0645\u0633 \u0627\u0633\u062a\u0631\u0627\u0648 \u0634\u062f\u060c \u062a\u0635\u0645\u06cc\u0645 \u06af\u0631\u0641\u062a \u06a9\u0645\u067e\u06cc\u0646 \u062e\u0648\u062f \u0631\u0627 \u062e\u0627\u062a\u0645\u0647 \u062f\u0647\u062f." } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "feat_id": "Value(dtype='string', id=None)", "feat_URL": "Value(dtype='string', id=None)", "feat_domain": "Value(dtype='string', id=None)", "feat_topic": "Value(dtype='string', id=None)", "feat_has_image": "Value(dtype='string', id=None)", "feat_has_hyperlink": "Value(dtype='string', id=None)", "text": "Value(dtype='string', id=None)", "target": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 1608 | | valid | 402 |
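A quick way to sanity-check the processed data is to load both splits and inspect a source/target pair. A minimal sketch; the repo id is a placeholder for wherever this AutoTrain project stored its data:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the id of this AutoTrain project's data
ds = load_dataset("user/autotrain-data-nllb_600_ft")

for split in ("train", "valid"):
    print(split, ds[split].num_rows)

sample = ds["train"][0]
print(sample["text"], "->", sample["target"])
```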
# Dataset Card for ssj500k **Important**: there exists another HF implementation of the dataset ([classla/ssj500k](https://huggingface.co/datasets/classla/ssj500k)), but it seems to be more narrowly focused. **This implementation is designed for more general use** - the CLASSLA version seems to expose only the specific training/validation/test annotations used in the CLASSLA library, for only a subset of the data. ### Dataset Summary The ssj500k training corpus contains about 500 000 tokens manually annotated on the levels of tokenization, sentence segmentation, morphosyntactic tagging, and lemmatization. It is also partially annotated for the following tasks: - named entity recognition (config `named_entity_recognition`) - dependency parsing(*), Universal Dependencies style (config `dependency_parsing_ud`) - dependency parsing, JOS/MULTEXT-East style (config `dependency_parsing_jos`) - semantic role labeling (config `semantic_role_labeling`) - multi-word expressions (config `multiword_expressions`) If you want to load all the data along with their partial annotations, please use the config `all_data`. \* _The UD dependency parsing labels are included here for completeness, but using the dataset [universal_dependencies](https://huggingface.co/datasets/universal_dependencies) should be preferred for dependency parsing applications to ensure you are using the most up-to-date data._ ### Supported Tasks and Leaderboards Sentence tokenization, sentence segmentation, morphosyntactic tagging, lemmatization, named entity recognition, dependency parsing, semantic role labeling, multi-word expression detection. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset (using the config `all_data`): ``` { 'id_doc': 'ssj1', 'idx_par': 0, 'idx_sent': 0, 'id_words': ['ssj1.1.1.t1', 'ssj1.1.1.t2', 'ssj1.1.1.t3', 'ssj1.1.1.t4', 'ssj1.1.1.t5', 'ssj1.1.1.t6', 'ssj1.1.1.t7', 'ssj1.1.1.t8', 'ssj1.1.1.t9', 'ssj1.1.1.t10', 'ssj1.1.1.t11', 'ssj1.1.1.t12', 'ssj1.1.1.t13', 'ssj1.1.1.t14', 'ssj1.1.1.t15', 'ssj1.1.1.t16', 'ssj1.1.1.t17', 'ssj1.1.1.t18', 'ssj1.1.1.t19', 'ssj1.1.1.t20', 'ssj1.1.1.t21', 'ssj1.1.1.t22', 'ssj1.1.1.t23', 'ssj1.1.1.t24'], 'words': ['"', 'Tistega', 'večera', 'sem', 'preveč', 'popil', ',', 'zgodilo', 'se', 'je', 'mesec', 'dni', 'po', 'tem', ',', 'ko', 'sem', 'izvedel', ',', 'da', 'me', 'žena', 'vara', '.'], 'lemmas': ['"', 'tisti', 'večer', 'biti', 'preveč', 'popiti', ',', 'zgoditi', 'se', 'biti', 'mesec', 'dan', 'po', 'ta', ',', 'ko', 'biti', 'izvedeti', ',', 'da', 'jaz', 'žena', 'varati', '.'], 'msds': ['UPosTag=PUNCT', 'UPosTag=DET|Case=Gen|Gender=Masc|Number=Sing|PronType=Dem', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Sing', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=DET|PronType=Ind', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=VERB|Aspect=Perf|Gender=Neut|Number=Sing|VerbForm=Part', 'UPosTag=PRON|PronType=Prs|Reflex=Yes|Variant=Short', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=3|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=NOUN|Animacy=Inan|Case=Acc|Gender=Masc|Number=Sing', 'UPosTag=NOUN|Case=Gen|Gender=Masc|Number=Plur', 'UPosTag=ADP|Case=Loc', 'UPosTag=DET|Case=Loc|Gender=Neut|Number=Sing|PronType=Dem', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 'UPosTag=AUX|Mood=Ind|Number=Sing|Person=1|Polarity=Pos|Tense=Pres|VerbForm=Fin', 'UPosTag=VERB|Aspect=Perf|Gender=Masc|Number=Sing|VerbForm=Part', 'UPosTag=PUNCT', 'UPosTag=SCONJ', 
'UPosTag=PRON|Case=Acc|Number=Sing|Person=1|PronType=Prs|Variant=Short', 'UPosTag=NOUN|Case=Nom|Gender=Fem|Number=Sing', 'UPosTag=VERB|Aspect=Imp|Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin', 'UPosTag=PUNCT'], 'has_ne_ann': True, 'has_ud_dep_ann': True, 'has_jos_dep_ann': True, 'has_srl_ann': True, 'has_mwe_ann': True, 'ne_tags': ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'], 'ud_dep_head': [5, 2, 5, 5, 5, -1, 7, 5, 7, 7, 7, 10, 13, 10, 17, 17, 17, 13, 22, 22, 22, 22, 17, 5], 'ud_dep_rel': ['punct', 'det', 'obl', 'aux', 'advmod', 'root', 'punct', 'parataxis', 'expl', 'aux', 'obl', 'nmod', 'case', 'nmod', 'punct', 'mark', 'aux', 'acl', 'punct', 'mark', 'obj', 'nsubj', 'ccomp', 'punct'], 'jos_dep_head': [-1, 2, 5, 5, 5, -1, -1, -1, 7, 7, 7, 10, 13, 10, -1, 17, 17, 13, -1, 22, 22, 22, 17, -1], 'jos_dep_rel': ['Root', 'Atr', 'AdvO', 'PPart', 'AdvM', 'Root', 'Root', 'Root', 'PPart', 'PPart', 'AdvO', 'Atr', 'Atr', 'Atr', 'Root', 'Conj', 'PPart', 'Atr', 'Root', 'Conj', 'Obj', 'Sb', 'Obj', 'Root'], 'srl_info': [ {'idx_arg': 2, 'idx_head': 5, 'role': 'TIME'}, {'idx_arg': 4, 'idx_head': 5, 'role': 'QUANT'}, {'idx_arg': 10, 'idx_head': 7, 'role': 'TIME'}, {'idx_arg': 20, 'idx_head': 22, 'role': 'PAT'}, {'idx_arg': 21, 'idx_head': 22, 'role': 'ACT'}, {'idx_arg': 22, 'idx_head': 17, 'role': 'RESLT'} ], 'mwe_info': [ {'type': 'IRV', 'word_indices': [7, 8]} ] } ``` ### Data Fields The following attributes are present in the most general config (`all_data`). Please see below for attributes present in the specific configs. - `id_doc`: a string containing the identifier of the document; - `idx_par`: an int32 containing the consecutive number of the paragraph, which the current sentence is a part of; - `idx_sent`: an int32 containing the consecutive number of the current sentence inside the current paragraph; - `id_words`: a list of strings containing the identifiers of words - potentially redundant, helpful for connecting the dataset with external datasets like coref149; - `words`: a list of strings containing the words in the current sentence; - `lemmas`: a list of strings containing the lemmas in the current sentence; - `msds`: a list of strings containing the morphosyntactic description of words in the current sentence; - `has_ne_ann`: a bool indicating whether the current example has named entities annotated; - `has_ud_dep_ann`: a bool indicating whether the current example has dependencies (in UD style) annotated; - `has_jos_dep_ann`: a bool indicating whether the current example has dependencies (in JOS style) annotated; - `has_srl_ann`: a bool indicating whether the current example has semantic roles annotated; - `has_mwe_ann`: a bool indicating whether the current example has multi-word expressions annotated; - `ne_tags`: a list of strings containing the named entity tags encoded using IOB2 - if `has_ne_ann=False` all tokens are annotated with `"N/A"`; - `ud_dep_head`: a list of int32 containing the head index for each word (using UD guidelines) - the head index of the root word is `-1`; if `has_ud_dep_ann=False` all tokens are annotated with `-2`; - `ud_dep_rel`: a list of strings containing the relation with the head for each word (using UD guidelines) - if `has_ud_dep_ann=False` all tokens are annotated with `"N/A"`; - `jos_dep_head`: a list of int32 containing the head index for each word (using JOS guidelines) - the head index of the root word is `-1`; if `has_jos_dep_ann=False` all tokens are annotated with 
`-2`; - `jos_dep_rel`: a list of strings containing the relation with the head for each word (using JOS guidelines) - if `has_jos_dep_ann=False` all tokens are annotated with `"N/A"`; - `srl_info`: a list of dicts, each containing index of the argument word, the head (verb) word, and the semantic role - if `has_srl_ann=False` this list is empty; - `mwe_info`: a list of dicts, each containing word indices and the type of a multi-word expression; #### Data fields in 'named_entity_recognition' ``` ['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ne_tags'] ``` #### Data fields in 'dependency_parsing_ud' ``` ['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'ud_dep_head', 'ud_dep_rel'] ``` #### Data fields in 'dependency_parsing_jos' ``` ['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'jos_dep_head', 'jos_dep_rel'] ``` #### Data fields in 'semantic_role_labeling' ``` ['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'srl_info'] ``` #### Data fields in 'multiword_expressions' ``` ['id_doc', 'idx_par', 'idx_sent', 'id_words', 'words', 'lemmas', 'msds', 'mwe_info'] ``` ## Additional Information ### Dataset Curators Simon Krek; et al. (please see http://hdl.handle.net/11356/1434 for the full list) ### Licensing Information CC BY-NC-SA 4.0. ### Citation Information The paper describing the dataset: ``` @InProceedings{krek2020ssj500k, title = {The ssj500k Training Corpus for Slovene Language Processing}, author={Krek, Simon and Erjavec, Tomaž and Dobrovoljc, Kaja and Gantar, Polona and Arhar Holdt, Spela and Čibej, Jaka and Brank, Janez}, booktitle={Proceedings of the Conference on Language Technologies and Digital Humanities}, year={2020}, pages={24-33} } ``` The resource itself: ``` @misc{krek2021clarinssj500k, title = {Training corpus ssj500k 2.3}, author = {Krek, Simon and Dobrovoljc, Kaja and Erjavec, Toma{\v z} and Mo{\v z}e, Sara and Ledinek, Nina and Holz, Nanika and Zupan, Katja and Gantar, Polona and Kuzman, Taja and {\v C}ibej, Jaka and Arhar Holdt, {\v S}pela and Kav{\v c}i{\v c}, Teja and {\v S}krjanec, Iza and Marko, Dafne and Jezer{\v s}ek, Lucija and Zajc, Anja}, url = {http://hdl.handle.net/11356/1434}, year = {2021} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
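Since `ne_tags` uses IOB2, named entity spans can be reconstructed by grouping `B-`/`I-` runs. A minimal sketch; the repo id is an assumption (this card describes a different implementation than `classla/ssj500k`), so verify it before use:

```python
from datasets import load_dataset

# Assumed repo id -- verify before use
ds = load_dataset("cjvt/ssj500k", "named_entity_recognition", split="train")

def iob2_spans(words, tags):
    """Collect (entity_type, entity_text) pairs from IOB2 tags."""
    spans, current, current_type = [], [], None
    for word, tag in zip(words, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((current_type, " ".join(current)))
            current, current_type = [word], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(word)
        else:
            if current:
                spans.append((current_type, " ".join(current)))
            current, current_type = [], None
    if current:
        spans.append((current_type, " ".join(current)))
    return spans

sample = ds[0]
print(iob2_spans(sample["words"], sample["ne_tags"]))
```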
# Dataset Card for SemCor ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://web.eecs.umich.edu/~mihalcea/downloads.html#semcor - **Repository:** - **Paper:** https://aclanthology.org/H93-1061/ - **Leaderboard:** - **Point of Contact:** ### Dataset Summary SemCor 3.0 was automatically created from SemCor 1.6 by mapping WordNet 1.6 to WordNet 3.0 senses. SemCor 1.6 was created and is property of Princeton University. Some (few) word senses from WordNet 1.6 were dropped, and therefore they cannot be retrieved anymore in the 3.0 database. A sense of 0 (wnsn=0) is used to symbolize a missing sense in WordNet 3.0. The automatic mapping was performed within the Language and Information Technologies lab at UNT, by Rada Mihalcea (rada@cs.unt.edu). THIS MAPPING IS PROVIDED "AS IS" AND UNT MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, UNT MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE. In agreement with the license from Princeton University, you are granted permission to use, copy, modify and distribute this database for any purpose and without fee or royalty, provided that you agree to comply with the Princeton copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the database, including modifications that you make for internal use or for distribution. Both LICENSE and README files distributed with the SemCor 1.6 package are included in the current distribution of SemCor 3.0. ### Languages English ## Additional Information ### Licensing Information WordNet Release 1.6 Semantic Concordance Release 1.6 This software and database is being provided to you, the LICENSEE, by Princeton University under the following license. By obtaining, using and/or copying this software and database, you agree that you have read, understood, and will comply with these terms and conditions: Permission to use, copy, modify and distribute this software and database and its documentation for any purpose and without fee or royalty is hereby granted, provided that you agree to comply with the following copyright notice and statements, including the disclaimer, and that the same appear on ALL copies of the software, database and documentation, including modifications that you make for internal use or for distribution. WordNet 1.6 Copyright 1997 by Princeton University. All rights reserved. 
THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS. The name of Princeton University or Princeton may not be used in advertising or publicity pertaining to distribution of the software and/or database. Title to copyright in this software, database and any associated documentation shall at all times remain with Princeton University and LICENSEE agrees to preserve same. ### Citation Information ```bibtex @inproceedings{miller-etal-1993-semantic, title = "A Semantic Concordance", author = "Miller, George A. and Leacock, Claudia and Tengi, Randee and Bunker, Ross T.", booktitle = "{H}uman {L}anguage {T}echnology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993", year = "1993", url = "https://aclanthology.org/H93-1061", } ``` ### Contributions Thanks to [@thesofakillers](https://github.com/thesofakillers) for adding this dataset, converting it from XML to CSV.
# Dataset Card for sd-nlp ## Table of Contents - [Dataset Card for sd-nlp](#dataset-card-for-sd-nlp) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://sourcedata.embo.org - **Repository:** https://github.com/source-data/soda-roberta - **Paper:** - **Leaderboard:** - **Point of Contact:** thomas.lemberger@embo.org, jorge.abreu@embo.org ### Dataset Summary This dataset is based on the content of the SourceData (https://sourcedata.embo.org) database, which contains manually annotated figure legends written in English and extracted from scientific papers in the domain of cell and molecular biology (Liechti et al, Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). Unlike the dataset [`sd-nlp`](https://huggingface.co/datasets/EMBO/sd-nlp), which is pre-tokenized with the `roberta-base` tokenizer, this dataset is not pre-tokenized; it is only split into words. Users can therefore use it to fine-tune other models. Additional details at https://github.com/source-data/soda-roberta ### Supported Tasks and Leaderboards Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)). `PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. `PANELIZATION` provides the start (`B-PANEL_START`) of these segments and allows training for recognition of the boundary between consecutive panel legends. `NER`: biological and chemical entities are labeled. Specifically, the following entities are tagged: - `SMALL_MOLECULE`: small molecules - `GENEPROD`: gene products (genes and proteins) - `SUBCELLULAR`: subcellular components - `CELL`: cell types and cell lines. - `TISSUE`: tissues and organs - `ORGANISM`: species - `EXP_ASSAY`: experimental assays `ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are: - `CONTROLLED_VAR`: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations. 
- `MEASURED_VAR`: entities that are associated with the variables measured and that are the object of the measurements. `BORING`: entities are marked with the tag `BORING` when they are of more descriptive value and not directly associated with causal hypotheses ('boring' is not an ideal choice of word, but it is short...). Typically, these entities are so-called 'reporter' geneproducts, entities used as a common baseline across samples, or entities that specify the context of the experiment (cellular system, species, etc...). ### Languages The text in the dataset is English. ## Dataset Structure ### Data Instances ```json {'text': '(E) Quantification of the number of cells without γ-Tubulin at centrosomes (γ-Tub -) in pachytene and diplotene spermatocytes in control, Plk1(∆/∆) and BI2536-treated spermatocytes. Data represent average of two biological replicates per condition. ', 'labels': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13, 14, 14, 14, 14, 14, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4, 0, 0, 0, 0, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 0, 0, 3, 4, 4, 4, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 4, 4, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]} ``` ### Data Fields - `text`: `str` of the text - `label_ids`: a dictionary composed of lists of strings at the character level: - `entity_types`: `list` of `strings` for the IOB2 tags for entity type; possible values in `["O", "I-SMALL_MOLECULE", "B-SMALL_MOLECULE", "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL", "B-CELL", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]` - `panel_start`: `list` of `strings` for IOB2 tags `["O", "B-PANEL_START"]` ### Data Splits ```python DatasetDict({ train: Dataset({ features: ['text', 'labels'], num_rows: 66085 }) test: Dataset({ features: ['text', 'labels'], num_rows: 8225 }) validation: Dataset({ features: ['text', 'labels'], num_rows: 7948 }) }) ``` ## Dataset Creation ### Curation Rationale The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. The dataset can be used to train character-based models for text segmentation and named entity recognition. ### Source Data #### Initial Data Collection and Normalization Figure legends were annotated according to the SourceData framework described in Liechti et al 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021. #### Who are the source language producers? The examples are extracted from the figure legends of scientific papers in cell and molecular biology. ### Annotations #### Annotation process The annotations were produced manually by expert curators from the SourceData project (https://sourcedata.embo.org). #### Who are the annotators? 
Curators of the SourceData project. ### Personal and Sensitive Information None known. ## Considerations for Using the Data ### Social Impact of Dataset Not applicable. ### Discussion of Biases The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org). ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Thomas Lemberger, EMBO. ### Licensing Information CC BY 4.0 ### Citation Information [More Information Needed] ### Contributions Thanks to [@tlemberger](https://github.com/tlemberger) and [@drAbreu](https://github.com/drAbreu) for adding this dataset.
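As a quick-start companion to the structure described above, a minimal loading sketch. The repo id `EMBO/sd-nlp-non-tokenized` is taken from this card's table of contents, and a configuration name may additionally be required for the PANELIZATION/NER/ROLES variants:

```python
from datasets import load_dataset

# Repo id assumed from the card's table of contents; adjust if it differs.
ds = load_dataset("EMBO/sd-nlp-non-tokenized", split="train")

example = ds[0]
# `text` is the figure legend; `labels` holds one integer tag id per character.
print(example["text"][:80])
print(example["labels"][:80])
```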
false
# Dataset Card for CoSimLex ### Dataset Summary The dataset contains human similarity ratings for pairs of words. The annotators were presented with contexts that contained both of the words in the pair, and the dataset features two different contexts per pair. The words were sourced from the English, Croatian, Finnish and Slovenian versions of the original Simlex dataset. Statistics: - 340 English pairs (config `en`), - 112 Croatian pairs (config `hr`), - 111 Slovenian pairs (config `sl`), - 24 Finnish pairs (config `fi`). ### Supported Tasks and Leaderboards Graded word similarity in context. ### Languages English, Croatian, Slovenian, Finnish. ## Dataset Structure ### Data Instances A sample instance from the dataset: ``` { 'word1': 'absence', 'word2': 'presence', 'context1': 'African slaves from Angola and Mozambique were also present, but in fewer numbers than in other Brazilian areas, because Paraná was a poor region that did not need much slave manpower. The immigration grew in the mid-19th century, mostly composed of Italian, German, Polish, Ukrainian, and Japanese peoples. While Poles and Ukrainians are present in Paraná, their <strong>presence</strong> in the rest of Brazil is almost <strong>absence</strong>.', 'context2': 'The Chinese had become almost impossible to deal with because of the turmoil associated with the cultural revolution. The North Vietnamese <strong>presence</strong> in Eastern Cambodia had grown so large that it was destabilizing Cambodia politically and economically. Further, when the Cambodian left went underground in the late 1960s, Sihanouk had to make concessions to the right in the <strong>absence</strong> of any force that he could play off against them.', 'sim1': 2.2699999809265137, 'sim2': 1.3700000047683716, 'stdev1': 2.890000104904175, 'stdev2': 1.7899999618530273, 'pvalue': 0.2409999966621399, 'word1_context1': 'absence', 'word2_context1': 'presence', 'word1_context2': 'absence', 'word2_context2': 'presence' } ``` ### Data Fields - `word1`: a string representing the first word in the pair. Uninflected form. - `word2`: a string representing the second word in the pair. Uninflected form. - `context1`: a string representing the first context containing the pair of words. The target words are marked with `<strong></strong>` tags. - `context2`: a string representing the second context containing the pair of words. The target words are marked with `<strong></strong>` tags. - `sim1`: a float representing the mean of the similarity scores within the first context. - `sim2`: a float representing the mean of the similarity scores within the second context. - `stdev1`: a float representing the standard deviation of the scores within the first context. - `stdev2`: a float representing the standard deviation of the scores within the second context. - `pvalue`: a float representing the p-value calculated using the Mann-Whitney U test. - `word1_context1`: a string representing the inflected version of the first word as it appears in the first context. - `word2_context1`: a string representing the inflected version of the second word as it appears in the first context. - `word1_context2`: a string representing the inflected version of the first word as it appears in the second context. - `word2_context2`: a string representing the inflected version of the second word as it appears in the second context. ## Additional Information ### Dataset Curators Carlos Armendariz; et al. 
(please see http://hdl.handle.net/11356/1308 for the full list) ### Licensing Information GNU GPL v3.0. ### Citation Information ``` @inproceedings{armendariz-etal-2020-semeval, title = "{SemEval-2020} {T}ask 3: Graded Word Similarity in Context ({GWSC})", author = "Armendariz, Carlos S. and Purver, Matthew and Pollak, Senja and Ljube{\v{s}}i{\'{c}}, Nikola and Ul{\v{c}}ar, Matej and Robnik-{\v{S}}ikonja, Marko and Vuli{\'{c}}, Ivan and Pilehvar, Mohammad Taher", booktitle = "Proceedings of the 14th International Workshop on Semantic Evaluation", year = "2020", address="Online" } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
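A minimal loading sketch for the configs listed above; the Hub path `cosimlex` is a placeholder (substitute the dataset's actual id), and no particular split names are assumed:

```python
from datasets import load_dataset

# "cosimlex" is a placeholder path -- substitute the actual Hub id.
ds = load_dataset("cosimlex", "en")  # other configs: "hr", "sl", "fi"

split = list(ds.keys())[0]
row = ds[split][0]
# The graded change in similarity between the two contexts:
print(row["word1"], row["word2"], row["sim2"] - row["sim1"])
```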
true
# Dataset Card for Lexicap ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ## Dataset Structure ### Data Instances Train and test dataset. ### Data Fields ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ### Contributions
false
A JSON-converted version of the dataset from [Koziev/NLP_Datasets](https://github.com/Koziev/NLP_Datasets/blob/master/Conversations/Data/extract_dialogues_from_anekdots.tar.xz).
false
# UD_Catalan-AnCora ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Website:** https://github.com/UniversalDependencies/UD_Catalan-AnCora - **Point of Contact:** [Daniel Zeman](zeman@ufal.mff.cuni.cz) ### Dataset Summary This dataset is composed of the annotations from the [AnCora corpus](http://clic.ub.edu/corpus/), projected on the [Universal Dependencies treebank](https://universaldependencies.org/). We use the POS annotations of this corpus as part of the [Catalan Language Understanding Benchmark (CLUB)](https://club.aina.bsc.es/). ### Supported Tasks and Leaderboards POS tagging ### Languages The dataset is in Catalan (`ca-CA`) ## Dataset Structure ### Data Instances Three conllu files. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: 1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below). 2) Blank lines marking sentence boundaries. 3) Comment lines starting with hash (#). ### Data Fields Word lines contain the following fields: 1) ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0). 2) FORM: Word form or punctuation symbol. 3) LEMMA: Lemma or stem of word form. 4) UPOS: Universal part-of-speech tag. 5) XPOS: Language-specific part-of-speech tag; underscore if not available. 6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available. 7) HEAD: Head of the current word, which is either a value of ID or zero (0). 8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one. 9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs. 10) MISC: Any other annotation. 
From: [https://universaldependencies.org](https://universaldependencies.org/guidelines.html) ### Data Splits - ca_ancora-ud-train.conllu - ca_ancora-ud-dev.conllu - ca_ancora-ud-test.conllu ## Dataset Creation ### Curation Rationale [N/A] ### Source Data - [UD_Catalan-AnCora](https://github.com/UniversalDependencies/UD_Catalan-AnCora) #### Initial Data Collection and Normalization The original annotation was done in a constituency framework as part of the [AnCora project](http://clic.ub.edu/corpus/) at the University of Barcelona. It was converted to dependencies by the [Universal Dependencies team](https://universaldependencies.org/) and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies. For more information on the AnCora project, visit the [AnCora site](http://clic.ub.edu/corpus/). To learn about the Universal Dependencies, visit the webpage [https://universaldependencies.org](https://universaldependencies.org) #### Who are the source language producers? For more information on the AnCora corpus and its sources, visit the [AnCora site](http://clic.ub.edu/corpus/). ### Annotations #### Annotation process For more information on the first AnCora annotation, visit the [AnCora site](http://clic.ub.edu/corpus/). #### Who are the annotators? For more information on the AnCora annotation team, visit the [AnCora site](http://clic.ub.edu/corpus/). ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset This dataset contributes to the development of language models in Catalan, a low-resource language. ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators ### Licensing Information This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>. ### Citation Information The following paper must be cited when using this corpus: Taulé, M., M.A. Martí, M. Recasens (2008) 'Ancora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco). To cite the Universal Dependencies project: Rueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium.
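As a companion to the ten-field word-line format described above, a minimal parsing sketch in plain Python (no external CoNLL-U library assumed) that handles word lines, blank-line sentence boundaries, and comment lines:

```python
CONLLU_FIELDS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS",
                 "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

def parse_conllu(text):
    """Yield sentences as lists of {field: value} dicts."""
    sentence = []
    for line in text.splitlines():
        if line.startswith("#"):      # comment line
            continue
        if not line.strip():          # blank line marks a sentence boundary
            if sentence:
                yield sentence
                sentence = []
            continue
        cols = line.split("\t")       # ten tab-separated fields
        sentence.append(dict(zip(CONLLU_FIELDS, cols)))
    if sentence:
        yield sentence
```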
true
# Dataset Card for Weakly supervised AG News Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description ### Dataset Summary AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July, 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc), information retrieval (ranking, search, etc), xml, data compression, data streaming, and any other non-commercial activity. For more information, please refer to the link http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html. The Weakly supervised AG News Dataset was created by Team 44 of the FSDL 2022 course solely to experiment with weak supervision techniques. It was assumed that only the labels of the original test set and 20% of the training set were available. The labels in the training set were obtained by creating weak labels with labeling functions (LFs) and denoising them with Snorkel's label model. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `text`: a string feature - `label`: a classification label, with possible values including World (0), Sports (1), Business (2), Sci/Tech (3). ### Data Splits - Training set with probabilistic labels from weak supervision: 37340 - Unlabeled data: 58660 - Validation set: 24000 - Test set: 7600 ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to Xiang Zhang (xiang.zhang@nyu.edu) for adding this dataset to the HF Dataset Hub.
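A rough sketch of the weak-labeling pipeline described in the summary above. The keyword rule and the tiny `df_train` DataFrame are illustrative placeholders, since the labeling functions actually used by Team 44 are not published in this card:

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, WORLD, SPORTS, BUSINESS, SCITECH = -1, 0, 1, 2, 3

@labeling_function()
def lf_sports_keywords(x):
    # Toy keyword rule; real LFs would encode many such heuristics.
    return SPORTS if "coach" in x.text.lower() else ABSTAIN

# Placeholder training frame with a `text` column.
df_train = pd.DataFrame({"text": ["The coach praised the team.",
                                  "Markets rallied on Tuesday."]})

applier = PandasLFApplier([lf_sports_keywords])
L_train = applier.apply(df_train)          # one weak vote per LF per example

# Denoise the weak votes into probabilistic labels.
label_model = LabelModel(cardinality=4, verbose=True)
label_model.fit(L_train, n_epochs=500, seed=42)
probs = label_model.predict_proba(L_train)
```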
true
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for AISegment.cn - Matting Human datasets ## Table of Contents - [Dataset Card for AISegment.cn - Matting Human datasets](#dataset-card-for-aisegmentcn---matting-human-datasets) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Structure](#dataset-structure) - [Licensing Information](#licensing-information) ## Dataset Description Quoting the [dataset's github](https://github.com/aisegmentcn/matting_human_datasets) (translated by Apple Translator): > This dataset is currently the largest portrait matting dataset, containing 34,427 images and corresponding matting results. > The data set was marked by the high quality of Beijing Play Star Convergence Technology Co. Ltd., and the portrait soft segmentation model trained using this data set has been commercialized. > The original images in the dataset are from `Flickr`, `Baidu`, and `Taobao`. After face detection and area cropping, a half-length portrait of 600\*800 was generated. > The clip_img directory is a half-length portrait image in the format jpg; the matting directory is the corresponding matting file (convenient to confirm the matting quality), the format is png, you should first extract the alpha map from the png image before training. - **Repository:** [aisegmentcn/matting_human_datasets](https://github.com/aisegmentcn/matting_human_datasets) ## Dataset Structure ```text └── data/ ├── clip_img/ │ └── {group-id}/ │ └── clip_{subgroup-id}/ │ └── {group-id}-{img-id}.jpg └── matting/ └── {group-id}/ └── matting_{subgroup-id}/ └── {group-id}-{img-id}.png ``` The input `data/clip_img/1803151818/clip_00000000/1803151818-00000003.jpg` matches the label `data/matting/1803151818/matting_00000000/1803151818-00000003.png` ### Licensing Information See authors [Github](https://github.com/aisegmentcn/matting_human_datasets)
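A minimal sketch of the alpha-map extraction step recommended above, assuming the matting PNG carries the matte in its alpha channel; the path is the example pair named in the card:

```python
import numpy as np
from PIL import Image

matte_path = "data/matting/1803151818/matting_00000000/1803151818-00000003.png"

matte = Image.open(matte_path).convert("RGBA")
alpha = np.array(matte)[:, :, 3]            # H x W, uint8 in [0, 255]
mask = (alpha / 255.0).astype(np.float32)   # normalized matte for training
```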
true
# AutoTrain Dataset for project: github-emotion-surprise ## Dataset Description Dataset used in the paper: Imran et al., ["Data Augmentation for Improving Emotion Recognition in Software Engineering Communication"](https://arxiv.org/abs/2208.05573), ASE-2022. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "feat_id": 704844644, "text": "This change doesn't affect anything but makes the code more clear. If you look at the line about, `currentUrlTree` is set to `urlAfterRedirects`.", "feat_Anger": 0, "feat_Love": 0, "feat_Fear": 0, "feat_Joy": 1, "feat_Sadness": 0, "target": 0 }, { "feat_id": 886568180, "text": "Thanks very much for your feedback [USER] Your point is totally fair. My intention was to highlight that camelCase or dash-case class names are perfectly fine to use in Angular templates. Most people, especially beginners, do not know that and end up using the `ngClass` directive. Do you think that rewording the alert towards that direction would make sense?", "feat_Anger": 0, "feat_Love": 1, "feat_Fear": 0, "feat_Joy": 0, "feat_Sadness": 0, "target": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "feat_id": "Value(dtype='int64', id=None)", "text": "Value(dtype='string', id=None)", "feat_Anger": "Value(dtype='int64', id=None)", "feat_Love": "Value(dtype='int64', id=None)", "feat_Fear": "Value(dtype='int64', id=None)", "feat_Joy": "Value(dtype='int64', id=None)", "feat_Sadness": "Value(dtype='int64', id=None)", "target": "ClassLabel(num_classes=2, names=['0', '1'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 1600 | | valid | 400 |
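The fields above encode one binary indicator per emotion, plus a binary `target` that, as the project name suggests, presumably marks the surprise label. A small sketch collecting them into a single multi-hot vector; the row dict mirrors the first sample above:

```python
EMOTION_FIELDS = ["feat_Anger", "feat_Love", "feat_Fear", "feat_Joy", "feat_Sadness"]

def to_multihot(row):
    """Return the five emotion indicators followed by the (assumed) surprise target."""
    return [row[f] for f in EMOTION_FIELDS] + [row["target"]]

row = {
    "feat_Anger": 0, "feat_Love": 0, "feat_Fear": 0,
    "feat_Joy": 1, "feat_Sadness": 0, "target": 0,
}
print(to_multihot(row))  # [0, 0, 0, 1, 0, 0]
```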
false
# Dataset Card for Lipogram-e ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage**: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio - **Repository**: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio - **Paper**: Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio - **Leaderboard**: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio - **Point of Contact**: https://www.linkedin.com/in/allen-roush-27721011b/ ### Dataset Summary ![Gadsby](https://upload.wikimedia.org/wikipedia/commons/1/1d/Gadsby_%28book_cover%29.jpg) ![Eunoia](https://upload.wikimedia.org/wikipedia/en/1/12/Eunoia_%28book%29.png) ![A Void](https://images-na.ssl-images-amazon.com/images/S/compressed.photo.goodreads.com/books/1388699493i/28294.jpg) This is a dataset of three English books that do not contain the letter "e" in them. This dataset includes all of "Gadsby" by Ernest Vincent Wright, all of "A Void" by Georges Perec, and almost all of "Eunoia" by Christian Bok (except for the single chapter that uses the letter "e" in it). This dataset is contributed as part of a paper titled "Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio" to appear at COLING 2022. The works within this dataset are examples of lipograms, works in which a letter or string is systematically omitted. Lipograms are an example of hard-constrained writing. ### Supported Tasks and Leaderboards The main task for this dataset is Constrained Text Generation, but all types of language modeling are suitable. ### Languages English ## Dataset Structure ### Data Instances Each book is extracted directly from the available PDF or EPUB documents, converted to txt using pandoc. ### Data Fields Text. The name of each work appears before the work starts and again at the end, so the books can be trivially split again if necessary. ### Data Splits None given. The approach taken in the paper is to extract the final 20% of each book and concatenate these together (a sketch of this split follows below). This may not be the ideal way to do a train/test split, but I couldn't think of a better one. I did not believe random sampling was appropriate, but I could be wrong. 
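A sketch of the split just described; the character-level cut point is an assumption, since the exact unit (characters, lines, or tokens) is not specified:

```python
def split_books(books, test_fraction=0.2):
    """Hold out the final fraction of each book and concatenate the tails."""
    train_parts, test_parts = [], []
    for text in books:  # books: list of full book strings
        cut = int(len(text) * (1 - test_fraction))
        train_parts.append(text[:cut])
        test_parts.append(text[cut:])
    return "".join(train_parts), "".join(test_parts)
```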
## Dataset Creation ### Curation Rationale One way to extract text that doesn't use the letter "e" would be to computationally parse through large existing datasets for blocks or sentences which don't have the letter "e" in them. Unfortunately, this is extremely unlikely to lead to coherent or meaningful text. Doing so over increasingly large blocks or spans is likely to result in fewer and fewer examples. While the preparation of such a dataset would be fascinating in its own right, it is more interesting from the perspective of fine-tuning language models to have large-scale prose narratives which fulfill the given constraint. This constraint of omitting the letter "e" is attractive because several book-length works exist which do this. ### Source Data #### Initial Data Collection and Normalization Project Gutenberg #### Who are the source language producers? Ernest Vincent Wright Georges Perec Christian Bok ### Annotations #### Annotation process None #### Who are the annotators? n/a ### Personal and Sensitive Information None ## Considerations for Using the Data There may be conversion artifacts. I noticed 3 cases of the letter "e" being hallucinated by the PDF conversion of "A Void" that I had to fix manually. The conversion was reading special characters as the letter "e"; the errors were not due to the authors making mistakes themselves. This implies that at least a few OCR errors exist. ### Social Impact of Dataset These books have existed for a while now, so it's unlikely that this dataset will have a dramatic social impact. ### Discussion of Biases This dataset is 100% biased against the letter "e". There may be biases present in the contents of these works. It's recommended to read the books before using this in any non-research application to verify that they are not problematic. ### Other Known Limitations It's possible that more works exist but were not well known enough for the authors to find and include them. Finding such works would be grounds for an iteration of this dataset (e.g. a version 1.1 would be released). The goal of this project is to eventually encompass all book-length English-language "e" lipograms. ## Additional Information n/a ### Dataset Curators Allen Roush ### Licensing Information MIT ### Citation Information TBA ### Contributions Thanks to [@Hellisotherpeople](https://github.com/Hellisotherpeople) for adding this dataset.
false
# Dataset Card for "lmqg/qg_frquad" ***IMPORTANT***: This is a dummy dataset for [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad). The original FRQuAD requires to fill a form (https://fquad.illuin.tech/) to get the data, and our lmqg/qg_frquad follows FQuAD's license. If you need lmqg/qg_frquad, please first request the access to FQuAD on their website https://fquad.illuin.tech/ . Once you obtain the access, we will add you to our lmqg group so that you can access https://huggingface.co/datasets/lmqg/qg_frquad. Leave a comment to the [discussion page](https://huggingface.co/datasets/lmqg/qg_frquad_dummy/discussions/1) to request access to the `lmqg/qg_frquad` after being granted FQuAD access! ## Dataset Description - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/) ### Dataset Summary This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in ["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992). This is a modified version of [FQuAD](https://huggingface.co/datasets/fquad) for question generation (QG) task. Since the original dataset only contains training/validation set, we manually sample test set from training set, which has no overlap in terms of the paragraph with the training set. ***IMPORTANT NOTE:*** The license of this dataset belongs to [FQuAD](https://fquad.illuin.tech/), so please check the guideline there and request the right to access the dataset [here](https://fquad.illuin.tech/) promptly if you use the datset. ### Supported Tasks and Leaderboards * `question-generation`: The dataset is assumed to be used to train a model for question generation. Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail). ### Languages French (fr) ## Dataset Structure An example of 'train' looks as follows. ``` { 'answer': '16 janvier 1377', 'question': 'Quand est-ce que Grégoire XI arrive à Rome ?', 'sentence': "Le pape poursuit son voyage jusqu'à Rome en passant par Corneto où il parvient le 6 décembre 1376, puis il arrive à Rome le 16 janvier 1377 en remontant le Tibre.", 'paragraph': "Quant à Catherine, elle part par voie terrestre en passant par Saint-Tropez, Varazze, puis Gênes. C'est dans cette dernière ville que, selon la Legenda minore, elle aurait de nouveau rencontré Grégoire XI. Le pape poursuit son voyage jusqu'à Rome en passant par Corneto où il parvient le 6 décembre 1376, puis il arrive à Rome le 16 janvier 1377 en remontant le Tibre.", 'sentence_answer': "Le pape poursuit son voyage jusqu'à Rome en passant par Corneto où il parvient le 6 décembre 1376, puis il arrive à Rome le <hl> 16 janvier 1377 <hl> en remontant le Tibre.", 'paragraph_answer': "Quant à Catherine, elle part par voie terrestre en passant par Saint-Tropez, Varazze, puis Gênes. C'est dans cette dernière ville que, selon la Legenda minore, elle aurait de nouveau rencontré Grégoire XI. 
Le pape poursuit son voyage jusqu'à Rome en passant par Corneto où il parvient le 6 décembre 1376, puis il arrive à Rome le <hl> 16 janvier 1377 <hl> en remontant le Tibre.", 'paragraph_sentence': "Quant à Catherine, elle part par voie terrestre en passant par Saint-Tropez, Varazze, puis Gênes. C'est dans cette dernière ville que, selon la Legenda minore, elle aurait de nouveau rencontré Grégoire XI. <hl> Le pape poursuit son voyage jusqu'à Rome en passant par Corneto où il parvient le 6 décembre 1376, puis il arrive à Rome le 16 janvier 1377 en remontant le Tibre. <hl>" } ``` The data fields are the same among all splits. - `question`: a `string` feature. - `paragraph`: a `string` feature. - `answer`: a `string` feature. - `sentence`: a `string` feature. - `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`. - `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`. - `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`. Each of `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` feature is assumed to be used to train a question generation model, but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and `paragraph_sentence` feature is for sentence-aware question generation. ## Data Splits |train|validation|test | |----:|---------:|----:| |17543| 3188 |3188 | ## Citation Information ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration: {A} {U}nified {B}enchmark and {E}valuation", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
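The `<hl>` highlighting in the fields above can be illustrated with a small helper; the actual preprocessing lives in the linked lm-question-generation repository, so this is only a sketch of the field construction:

```python
def highlight_span(text, span):
    """Wrap the first occurrence of `span` in <hl> tokens, as in `paragraph_answer`."""
    start = text.find(span)
    if start == -1:
        raise ValueError("span not found in text")
    end = start + len(span)
    return f"{text[:start]}<hl> {span} <hl>{text[end:]}"

sentence = "il arrive à Rome le 16 janvier 1377 en remontant le Tibre."
print(highlight_span(sentence, "16 janvier 1377"))
# il arrive à Rome le <hl> 16 janvier 1377 <hl> en remontant le Tibre.
```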
true
# AutoTrain Dataset for project: ashwin_sentiment140dataset ## Dataset Description This dataset has been automatically processed by AutoTrain for project ashwin_sentiment140dataset. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "@JordainFTW i didnt watch them BUT CALEB PLAYS NAZI ZOMBIES TOOOOOO!!!!!!!!!! OMG OMG OMG! HE IS MY BESTFREIND! what do u needa tell me?", "target": 1 }, { "text": "@Jennymac22 too much info! good for you hun. I'm pleased for you. ", "target": 1 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=2, names=['0', '4'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 2399 | | valid | 601 |
true
# Dataset Card for Auditor Sentiment
false
# AutoTrain Dataset for project: oveja31 ## Dataset Description This dataset has been automatically processed by AutoTrain for project oveja31. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "image": "<1424x1424 RGB PIL image>", "target": 0 }, { "image": "<1627x1627 RGB PIL image>", "target": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "image": "Image(decode=True, id=None)", "target": "ClassLabel(num_classes=1, names=['oveja'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follow: | Split name | Num samples | | ------------ | ------------------- | | train | 4 | | valid | 1 |
false
# Dataset Card for Lipogram-e ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage**: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio - **Repository**: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio - **Paper**: Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio - **Leaderboard**: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio - **Point of Contact**: https://www.linkedin.com/in/allen-roush-27721011b/ ### Dataset Summary ![Gadsby](https://www.gutenberg.org/cache/epub/6936/pg6936.cover.medium.jpg) This is a dataset of English books written using only one-syllable words. At this time, the dataset only contains "Robinson Crusoe — in Words of One Syllable" by Lucy Aikin and Daniel Defoe. This dataset is contributed as part of a paper titled "Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio" to appear at COLING 2022. This dataset does not appear in the paper itself, but was gathered as a candidate constrained text generation dataset. ### Supported Tasks and Leaderboards The main task for this dataset is Constrained Text Generation, but all types of language modeling are suitable. ### Languages English ## Dataset Structure ### Data Instances Each book is extracted directly from the available PDF or EPUB documents, converted to txt using pandoc. ### Data Fields Text. The name of each work appears before the work starts and again at the end, so the books can be trivially split again if necessary. ### Data Splits None given. The approach taken in the paper is to extract the final 20% of each book and concatenate these together. This may not be the ideal way to do a train/test split, but I couldn't think of a better one. I did not believe random sampling was appropriate, but I could be wrong. ## Dataset Creation ### Curation Rationale There are several books which claim to be written using only one-syllable words. A list of them can be found here: https://diyhomeschooler.com/2017/01/25/classics-in-words-of-one-syllable-free-ebooks/ Unfortunately, after careful human inspection, it appears that only one of these works reliably maintains the one-syllable constraint through the whole text. Outside of proper names, I cannot spot or computationally find a single example of a more-than-one-syllable word in this whole work (a sketch of such a check follows below). 
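The computational check mentioned above can be approximated with a rough vowel-group heuristic; this is only an approximation (it over-counts silent-e words, and proper names must be filtered out separately):

```python
import re

def syllable_count(word):
    """Rough heuristic: count maximal runs of vowels (including y)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flag_polysyllables(text):
    """Return words the heuristic scores above one syllable."""
    return [w for w in re.findall(r"[A-Za-z]+", text) if syllable_count(w) > 1]

print(flag_polysyllables("He had a small boat to fish from"))  # []
```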
### Source Data Robinson Crusoe — in Words of One Syllable by Lucy Aikin and Daniel Defoe #### Initial Data Collection and Normalization Project Gutenberg #### Who are the source language producers? Lucy Aikin and Daniel Defoe ### Annotations #### Annotation process None #### Who are the annotators? n/a ### Personal and Sensitive Information None ## Considerations for Using the Data There may be OCR conversion artifacts. ### Social Impact of Dataset These books have existed for a while now, so it's unlikely that this dataset will have a dramatic social impact. ### Discussion of Biases The only biases possible relate to the contents of Robinson Crusoe, or to the possibility of the authors having changed Robinson Crusoe in some problematic way when rewriting it in one-syllable words. This is unlikely, as the work was aimed at children. ### Other Known Limitations It's possible that more works exist but were not well known enough for the authors to find and include them. Finding such works would be grounds for an iteration of this dataset (e.g. a version 1.1 would be released). The goal of this project is to eventually encompass all book-length English-language works that do not use more than one syllable in any of their words (except for names). ## Additional Information n/a ### Dataset Curators Allen Roush ### Licensing Information MIT ### Citation Information TBA ### Contributions Thanks to [@Hellisotherpeople](https://github.com/Hellisotherpeople) for adding this dataset.
false
true
# Dataset Card for Lex Fridman Podcasts Dataset This dataset is sourced from Andrej Karpathy's [Lexicap website](https://karpathy.ai/lexicap/), which contains English transcripts of Lex Fridman's wonderful podcast episodes. The transcripts were generated using OpenAI's large-sized [Whisper model](https://github.com/openai/whisper).
true
# AutoTrain Dataset for project: fake-news ## Dataset Description This dataset has been automatically processed by AutoTrain for project fake-news. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "feat_author": "Brett Macdonald", "feat_published": "2016-10-28T00:58:00.000+03:00", "feat_title": "breaking hillary just lost the black vote trump is going all the way to the white house", "text": "dean james americas freedom fighters \nlast week the pentagon issued a defense department directive that allows department of defense dd personnel to carry firearms and employ deadly force while performing official duties \nthe defense department has been working on changing the gunfree zones on domestic military basis for several years in light of the deadly shootings at military sites in recent years \nmilitarycom reports that the directive also provides detailed guidance to the services for permitting soldiers sailors airmen marines and coast guard personnel to carry privately owned firearms on dod property it authorizes commanders and aboveto grant permission to dod personnel requesting to carry a privately owned firearm concealed or open carry on dod property for a personal protection purpose not related to performance of an official duty or status \nthe directive also makes clear that dod will consider further changes to grant standard authorizations for other dod personnel who are trained in the scaled use of force or who have been previously qualified to use a governmentissued firearm to carry a firearm in the performance of official duties on dod property this would allow dod with certain combat training to carry firearms without going through the additional step of making application with a commander \nkim smith at conservative tribune notes that the policy was a response to an nrabacked provision in the national defense authorization act that required the defense department to allow more service members to carry firearms on base \nit is a good first step in that it recognizes personal protection is a valid issue for service members but there are many roadblocks in the way of making that option available nra spokeswoman jennifer baker told the washington free beacon \nthose wishing to apply for permission to carry a firearm must be at least years old and meet all federal state and local laws the directive said \nit would appear that the pentagon saw no problems with implementing a policy for which presidentelect donald trump has expressed support \npresidentelect donald trump ran on removing gunfree zones from military bases on july breitbart news reported that trump pledged to end the gunfree scenarios for us troops by mandating that soldiers remain armed and on alert at our military bases \nthe immediate institution of this directive probably left president barack obama incensed but he undoubtedly realized that there was nothing he could do to prevent its implementation in a couple of months anyway and thats good news because it works to ensure the safety of our troops which should always be a priority \nlet us know what you think about this in the comments below \ngod bless", "feat_language": "english", "feat_site_url": "americasfreedomfighters.com", "feat_main_img_url": "http://www.americasfreedomfighters.com/wp-content/uploads/2016/10/22-1.jpg", "feat_type": "bs", "target": 0, "feat_title_without_stopwords": "breaking hillary lost black vote trump going way white house", 
"feat_text_without_stopwords": "dean james americas freedom fighters last week pentagon issued defense department directive allows department defense dd personnel carry firearms employ deadly force performing official duties defense department working changing gunfree zones domestic military basis several years light deadly shootings military sites recent years militarycom reports directive also provides detailed guidance services permitting soldiers sailors airmen marines coast guard personnel carry privately owned firearms dod property authorizes commanders aboveto grant permission dod personnel requesting carry privately owned firearm concealed open carry dod property personal protection purpose related performance official duty status directive also makes clear dod consider changes grant standard authorizations dod personnel trained scaled use force previously qualified use governmentissued firearm carry firearm performance official duties dod property would allow dod certain combat training carry firearms without going additional step making application commander kim smith conservative tribune notes policy response nrabacked provision national defense authorization act required defense department allow service members carry firearms base good first step recognizes personal protection valid issue service members many roadblocks way making option available nra spokeswoman jennifer baker told washington free beacon wishing apply permission carry firearm must least years old meet federal state local laws directive said would appear pentagon saw problems implementing policy presidentelect donald trump expressed support presidentelect donald trump ran removing gunfree zones military bases july breitbart news reported trump pledged end gunfree scenarios us troops mandating soldiers remain armed alert military bases immediate institution directive probably left president barack obama incensed undoubtedly realized nothing could prevent implementation couple months anyway thats good news works ensure safety troops always priority let us know think comments god bless", "feat_hasImage": 1.0 }, { "feat_author": "Joel Ross Taylor", "feat_published": "2016-10-26T22:46:37.443+03:00", "feat_title": "no title", "text": "announcement \nthe wrh server continues to be under intense attack by hillarys tantrum squad \nbut the site keeps bouncing back so if during the day you cannot connect wait a minute or two and try again thank you for your patience it is obvious the bad guys are in a state of total panic to act like this thought for the day we seek peace knowing that peace is the climate of freedom dwight d eisenhower your random dhs monitored phrase of the day dera \npaid advertising at what really happened may not represent the views and opinions of this website and its contributors no endorsement of products and services advertised is either expressed or implied \nhillary the spy updated info \nlet us start with an historical fact treason and betrayal by the highest levels is a common feature of history whether it is judas vs jesus brutus vs julius caesar benedict arnold the rosenbergs jonathan pollard aldrich ames robert hanssen it is just a fact of life it does happen \nback in when bill clinton was running for reelection he authorized the transfer of highly sensitive technology to china this technology had military applications and allowed china to close the gap in missile performance with the united states the transfers were opposed and severely criticized by the defense department \nat the same 
time bill clinton was transferring this technology to china huge donations began to pour into his reelection campaign from the us companies allowed to sell the technology to china and from american citizens of chinese descent the fact that they were us citizens allowed them to donate to political campaigns but it later emerged that they were acting as conduits for cash coming in from asian sources including chinese intelligence agencies the scandal eventually became known as chinagate \njohn huang \na close associate of indonesian industrialist james riady huang initially was appointed deputy secretary of commerce in by however he moved to the democratic national committee where he generated hundreds of thousands of dollars in illegal contributions from foreign sources huang later pleaded guilty to one felony count of campaign finance violations \ncharlie trie \nlike john huang trie raised hundreds of thousands of dollars in illegal contributions from foreign sources to democratic campaign entities he was a regular white house visitor and arranged meetings of foreign operators with clinton including one who was a chinese arms dealer his contribution to clintons legal defense fund was returned after it was found to have been largely funded by asian interests trie was convicted of violating campaign finance laws in \none of tries main sources of cash was chinese billionaire ng lap seng according to a senate report ng lap seng had connections to the chinese government seng was arrested in over an unrelated bribery case but this gave investigators the opportunity to question seng about the chinagate scandal former united nations general assembly president john ashe was also caught in the bribery case and was about to testify to the links between the clintons and seng when he was found dead that very morning initially reported as having died from a heart attack johns throat had obviously been crushed at that point the official story changed to him accidentally dropping a barbell on his own throat \nng lap seng with the clintons \njohnny chung \ngave more than to the democratic national committee prior to the campaign but it was returned after officials learned it came from illegal foreign sources chung later told a special senate committee investigating clinton campaign fundraising that of his contributions came from individuals in chinese intelligence chung pleaded guilty to bank fraud tax evasion and campaign finance violations \nchinagate documented by judicial watch was uncovered by judicial watch founder larry klayman technology companies allegedly made donations of millions of dollars to various democratic party entities including president bill clintons reelection campaign in return for permission to sell hightech secrets to china bernard schwartz and his loral space communication ltd later allegedly helped china to identify the cause of a rocket failure thereby advancing chinas missile program and threatening us national security according to records \nthis establishes a history of the clintons treating us secrets as their own personal property and selling them to raise money for campaigns \nis history repeating itself it appears so \nlet us consider a private email server with weak security at least one known totally open access point no encryption at all and outside the control and monitoring systems of the us government on which are parked many of the nations most closely guarded secrets as well as those of the united nations and other foreign governments it is already established 
that hillarys email was hacked one hacker named guccifer provided copies of emails to russia today which published them", "feat_language": "english", "feat_site_url": "westernjournalism.com", "feat_main_img_url": "http://static.westernjournalism.com/wp-content/uploads/2016/10/earnest-obama.jpg", "feat_type": "bias", "target": 1, "feat_title_without_stopwords": "title", "feat_text_without_stopwords": "maggie hassan left kelly ayotte hassan declares victory us senate race ayotte paul feelynew hampshire union leader update gov maggie hassan declared shes new hampshires us senate race unseating republican sen kelly ayotteduring hastilycalled press conference outside state house hassan said shes ahead enough votes survive returns outstanding towns lefti proud stand next united states senator new hampshire hassan said cheers large group supporters led congresswoman annie kuster hassans husband tomthe twoterm governor said hadnt spoken ayotteits clear maintained lead race hassan saidsen ayotte issued brief statement hassans event concede deferred secretary state bill gardners final resultsthis closely contested race beginning look forward results announced secretary state ensuring every vote counted race received historic level interest ayotte saidhassan said called congratulate govelect chris sununu newfields republican vowed work together smooth transition power states corner officewith percent vote counted hassan led ayotte nashua republican votes much less percent two voting precincts left reporta recount statewide race seems like real possibility margin small enough ayotte pay earlier story follows concord republican incumbent sen kelly ayotte told supporters early wednesday feeling really upbeat chances one closely watched expensive us senate races country wasnt ready claim victory democratic challenger gov maggie hassan earn return washington representing granite stateat ayotte took podium grappone conference center concord address supporters victory party dead heat hassan percent percent votes votes percent precincts state reportingjoe excited see tonight said ayotte feel really upbeat tonightayotte went thank supporters next gov sununuwe know hard worked grateful humbled fact would believe us right upbeat race believe strongly fact want every vote come talk every vote matters every person matters stategov hassan said race close call campaign maintained vote lead according numbers compiled staffwe still small sustainable lead saidhassan told crowd number smaller towns yet report numbers confident lead would hold campaign said numbers show hassan vote ayottes percent vote campaign said numbers include results big communities associated press yet count like salem derry lebanon portsmouth cities manchester nashua concord included hassan numbersthe governor headed home night urged supporters go home get sleepelection day marked end long campaign cycle granite state kicked nine months ago presidential primaries nine months ago didnt let final ballots cast around pm tuesdaythe ayottehassan contest expensive political race ever new hampshire million spent took center stage cycle alongside presidential race republican nominee donald trump democratic nominee hillary clinton cementing new hampshires status battleground state four electoral votes grabs race one half dozen around us closely watched tuesday outcome likely playing part deciding republicans retain control senate democrats regain majority lost two years agoit great night republicans new hampshire across country said nh gop chair jennifer 
horn new hampshire know republicans stand together republicans fight together win",
    "feat_hasImage": 1.0
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "feat_author": "Value(dtype='string', id=None)",
  "feat_published": "Value(dtype='string', id=None)",
  "feat_title": "Value(dtype='string', id=None)",
  "text": "Value(dtype='string', id=None)",
  "feat_language": "Value(dtype='string', id=None)",
  "feat_site_url": "Value(dtype='string', id=None)",
  "feat_main_img_url": "Value(dtype='string', id=None)",
  "feat_type": "Value(dtype='string', id=None)",
  "target": "ClassLabel(num_classes=2, names=['Fake', 'Real'], id=None)",
  "feat_title_without_stopwords": "Value(dtype='string', id=None)",
  "feat_text_without_stopwords": "Value(dtype='string', id=None)",
  "feat_hasImage": "Value(dtype='float64', id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 1639        |
| valid      | 411         |
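As a quick-start sketch, the processed data can be loaded with 🤗 `datasets` (the repository id below is a placeholder for this AutoTrain project's dataset repository):

```python
from datasets import load_dataset

# Placeholder repository id; substitute this project's actual dataset id.
ds = load_dataset("user/autotrain-data-fake-news", split="train")

example = ds[0]
# `target` is a ClassLabel: 0 -> 'Fake', 1 -> 'Real'.
print(ds.features["target"].int2str(example["target"]), example["text"][:100])
```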
false
# Dataset Card for Lex Fridman Podcast Transcripts ## Table of Contents - [Dataset Card for Lex Fridman Podcast Transcripts](#dataset-card-for-lex-fridman-podcast-transcripts) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://karpathy.ai/lexicap/ - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** [@drewdresser](https://twitter.com/drewdresser) ### Dataset Summary These are transcripts from the Lex Fridman podcast. The podcast is hosted by Lex Fridman, a computer scientist at MIT. The podcast is a mix of interviews with researchers in AI and other fields, and discussions of current events in AI. The transcripts are generated using [OpenAI Whisper](https://github.com/openai/whisper), then made available on [Karpathy AI](https://karpathy.ai/lexicap/). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances ~325 ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
false
# Dataset Card for OLM September 2022 Wikipedia Pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from a September 2022 Wikipedia snapshot.
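A minimal streaming sketch with 🤗 `datasets` (the repository id below is an assumption; substitute this repository's actual id):

```python
from datasets import load_dataset

# Assumed repository id for this snapshot; adjust as needed.
ds = load_dataset("olm/olm-wikipedia-20220920", split="train", streaming=True)
print(next(iter(ds)))
```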
false
# Dataset Card for whisper-gpt ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
Korean Wikipedia (20221001 dump) split into sentences using kss (backend=mecab)

- 549262 articles, 4724064 sentences
- Sentences are excluded when Korean makes up 50% or less of the text or when they contain 10 or fewer Korean characters
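A sketch of the splitting and filtering described above, using kss with the mecab backend (the exact filter implementation shown here is an assumption):

```python
import kss

def hangul_count(s: str) -> int:
    # Count characters in the Hangul syllables block.
    return sum(1 for ch in s if "가" <= ch <= "힣")

text = "위키백과는 우리 모두가 함께 만들어 가는 자유 백과사전입니다. 누구나 참여할 수 있습니다."
for sentence in kss.split_sentences(text, backend="mecab"):
    # Keep sentences that are more than 50% Korean and contain more than 10 Korean characters.
    if hangul_count(sentence) / max(len(sentence), 1) > 0.5 and hangul_count(sentence) > 10:
        print(sentence)
```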
true
# Dataset Card for [Dataset Name]

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Top news headlines in finance from BBC News.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

English

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

Sentiment label: using a threshold of 0, headlines with a sentiment score below 0 are labeled negative (0) and those above 0 are labeled positive (1); see the sketch at the end of this card.

[More Information Needed]

### Data Splits

The dataset is split 0.9/0.1 between train and held-out data.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
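A sketch of the labeling rule described in the Data Fields section:

```python
def sentiment_label(score: float) -> int:
    # Scores below 0 map to negative (0); scores above 0 map to positive (1).
    return 1 if score > 0 else 0

assert sentiment_label(-0.42) == 0
assert sentiment_label(0.17) == 1
```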
false
# Dataset Card for "meddocan"

## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
    - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
    - [Who are the source language producers?](#who-are-the-source-language-producers)
  - [Annotations](#annotations)
    - [Annotation process](#annotation-process)
    - [Who are the annotators?](#who-are-the-annotators)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://temu.bsc.es/meddocan/index.php/datasets/](https://temu.bsc.es/meddocan/index.php/datasets/)
- **Repository:** [https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN](https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN)
- **Paper:** [http://ceur-ws.org/Vol-2421/MEDDOCAN_overview.pdf](http://ceur-ws.org/Vol-2421/MEDDOCAN_overview.pdf)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

A personal upload of the SPACCC_MEDDOCAN corpus. The tokenization is done with the help of a custom [spaCy](https://spacy.io/) pipeline (see the illustrative sketch at the end of this card).

### Supported Tasks and Leaderboards

Named Entity Recognition

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Data Fields

The data fields are the same among all splits.

### Data Splits

| name     |train|validation|test|
|----------|----:|---------:|---:|
| meddocan |10312|      5268|5155|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

From the [SPACCC_MEDDOCAN: Spanish Clinical Case Corpus - Medical Document Anonymization](https://github.com/PlanTL-GOB-ES/SPACCC_MEDDOCAN) page:

> This work is licensed under a Creative Commons Attribution 4.0 International License.
>
> You are free to: Share — copy and redistribute the material in any medium or format Adapt — remix, transform, and build upon the material for any purpose, even commercially. Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
>
> For more information, please see https://creativecommons.org/licenses/by/4.0/

### Citation Information

```
@inproceedings{Marimon2019AutomaticDO,
  title={Automatic De-identification of Medical Texts in Spanish: the MEDDOCAN Track, Corpus, Guidelines, Methods and Evaluation of Results},
  author={Montserrat Marimon and Aitor Gonzalez-Agirre and Ander Intxaurrondo and Heidy Rodriguez and Jose Lopez Martin and Marta Villegas and Martin Krallinger},
  booktitle={IberLEF@SEPLN},
  year={2019}
}
```

### Contributions

Thanks to [@GuiGel](https://github.com/GuiGel) for adding this dataset.
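As an illustration of the spaCy-based tokenization mentioned in the summary (the corpus uses a custom pipeline; the stock Spanish model below is only a stand-in):

```python
import spacy

# Stand-in for the custom pipeline; requires `python -m spacy download es_core_news_sm`.
nlp = spacy.load("es_core_news_sm")
doc = nlp("Paciente de 45 años que ingresa en el servicio de urgencias.")
print([token.text for token in doc])
```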
true
This dataset is a quick-and-dirty benchmark for predicting ratings across different domains and on different rating scales based on text. It pulls in a bunch of rating datasets, takes at most 1000 instances from each and combines them into a big dataset. Requires the `kaggle` library to be installed, and kaggle API keys passed through environment variables or in ~/.kaggle/kaggle.json. See [the Kaggle docs](https://www.kaggle.com/docs/api#authentication).
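A sketch of the Kaggle authentication step (credential values are placeholders):

```python
import os

# Equivalent to placing credentials in ~/.kaggle/kaggle.json.
os.environ["KAGGLE_USERNAME"] = "your-kaggle-username"
os.environ["KAGGLE_KEY"] = "your-kaggle-api-key"

import kaggle  # authenticates on import; raises if no credentials are found
```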
true
Cleaned-up version of the Rotten Tomatoes critic reviews dataset. The original is obtained from Kaggle: https://www.kaggle.com/datasets/stefanoleone992/rotten-tomatoes-movies-and-critic-reviews-dataset

Data has been scraped from the publicly available website https://www.rottentomatoes.com as of 2020-10-31.

The clean-up process drops anything without both a review and a rating, and standardises the ratings onto several integer, ordinal scales.

Requires the `kaggle` library to be installed, and Kaggle API keys passed through environment variables or in ~/.kaggle/kaggle.json. See [the Kaggle docs](https://www.kaggle.com/docs/api#authentication).

A processed version is available at https://huggingface.co/datasets/frankier/processed_multiscale_rt_critics
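An illustrative sketch of this kind of rating standardisation (the real processing lives in the repository behind the processed version; this sketch collapses everything onto a single 0-10 scale, whereas the dataset keeps several scales):

```python
def standardise(rating: str) -> int | None:
    """Map a fractional critic rating such as '3/5' or '3.5/4' onto a 0-10 integer scale."""
    if "/" not in rating:
        return None  # letter grades and other formats need their own handling
    numerator, denominator = rating.split("/")
    try:
        return round(10 * float(numerator) / float(denominator))
    except (ValueError, ZeroDivisionError):
        return None

print(standardise("3/5"))    # 6
print(standardise("3.5/4"))  # 9
```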
false
# Dataset Card for [Dataset Name]

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** Exercises ModifiedOrangeSumm-Abstract
- **Repository:** krm/modified-orangeSum
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This is a small experiment, resulting from the addition of some personal data to OrangeSum Abstract.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]
false
# Dataset Card for BrWaC

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Source Data](#source-data)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [BrWaC homepage](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC)
- **Repository:** [BrWaC repository](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC)
- **Paper:** [The brWaC Corpus: A New Open Resource for Brazilian Portuguese](https://www.aclweb.org/anthology/L18-1686/)
- **Point of Contact:** [Jorge A. Wagner Filho](mailto:jawfilho@inf.ufrgs.br)

### Dataset Summary

The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework, which was made public for research purposes. The current corpus version, released in January 2017, is composed of 3.53 million documents, 2.68 billion tokens and 5.79 million types. Please note that this resource is available solely for academic research purposes, and you agree not to use it for any commercial applications. Manually download it at https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC

This is a tiny version of the entire dataset, intended for educational purposes. Please refer to https://github.com/the-good-fellas/xlm-roberta-pt-br

### Supported Tasks and Leaderboards

Initially meant for the fill-mask task (see the sketch at the end of this card).

### Languages

Brazilian Portuguese

## Dataset Creation

### Personal and Sensitive Information

All data were extracted from public sites.

### Licensing Information

MIT

### Citation Information

```
@inproceedings{wagner2018brwac,
  title={The brwac corpus: A new open resource for brazilian portuguese},
  author={Wagner Filho, Jorge A and Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline},
  booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year={2018}
}
```

### Contributions

Thanks to [@the-good-fellas](https://github.com/the-good-fellas) for adding this dataset in HF format.
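A minimal fill-mask usage sketch (the stock XLM-R checkpoint below is only a stand-in for the Brazilian Portuguese model trained in the linked repository):

```python
from transformers import pipeline

# Stand-in checkpoint; see the linked repository for the actual pt-BR model.
fill_mask = pipeline("fill-mask", model="xlm-roberta-base")
print(fill_mask("Brasília é a <mask> do Brasil.")[0]["token_str"])
```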
false
# AutoTrain Dataset for project: beccacp

## Dataset Description

This dataset has been automatically processed by AutoTrain for project beccacp.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<1600x838 RGB PIL image>",
    "target": 1
  },
  {
    "image": "<1200x628 RGB PIL image>",
    "target": 1
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(num_classes=2, names=['Becca', 'Lucy'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 9           |
| valid      | 4           |
false
# Dataset Card for Skateboarding tricks Dataset used to train [Text to skateboarding image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning). For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL jpeg, and `text` is the accompanying text caption.
false
# Dataset Card for OLM August 2022 Wikipedia Pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from an August 2022 Wikipedia snapshot.
false
# Dataset Card for OLM October 2022 Wikipedia Pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from an October 2022 Wikipedia snapshot.
false
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `train`, `validation` and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used: - __query__: The `target` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==9` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7790 | 0.4487 | 0.3438 | 0.4800 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.7856 | 0.4424 | 0.3534 | 0.4913 | Retrieval results on the `test` set: N/A. Test set is blind so we do not have any queries.
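The "max" top-k strategy can be sketched as follows, computed from the original Cochrane data (the `abstract` list field is an assumption about that dataset's schema):

```python
from datasets import load_dataset

cochrane = load_dataset("allenai/mslr2022", "cochrane")
# k = the maximum number of source documents observed across examples;
# per the description above, this works out to k == 9.
k = max(len(ex["abstract"]) for split in cochrane.values() for ex in split)
print(k)
```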
false
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `train`, `validation` and `test` splits have been replaced by a __dense__ retriever. The retrieval pipeline used: - __query__: The `background` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: [`facebook/contriever-msmarco`](https://huggingface.co/facebook/contriever-msmarco) via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==17` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.4764 | 0.2395 | 0.2271 | 0.2418 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.4364 | 0.2125 | 0.2131 | 0.2074 | Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.4481 | 0.2224 | 0.2254 | 0.2100 |
true
# Repo

Github Repo: [thamognya/TBertNLI](https://github.com/thamognya/TBertNLI), specifically the [src/data directory](https://github.com/thamognya/TBertNLI/tree/master/src/data).

# Sample

```
   premise                                             hypothesis                          label
0  this church choir sings to the masses as they ...  the church is filled with song      0
1  this church choir sings to the masses as they ...  a choir singing at a baseball game  2
2  a woman with a green headscarf blue shirt and ...  the woman is young                  1
3  a woman with a green headscarf blue shirt and ...  the woman is very happy             0
4  a woman with a green headscarf blue shirt and ...  the woman has been shot             2
```

# Datasets Origin

As of now, the marked datasets have been used to make this dataset; the other ones are TODO:

- [x] SNLI
- [x] MultiNLI
- [ ] SuperGLUE
- [ ] FEVER
- [ ] WIKI-FACTCHECK
- [x] ANLI
- [ ] more from huggingface

# Reasons

Just for finetuning of NLI models; purely made for NLI (not zero-shot classification).
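A sketch of assembling checked-off sources with 🤗 `datasets` (ANLI is omitted for brevity; label conventions should be verified before mixing):

```python
from datasets import load_dataset, concatenate_datasets

snli = load_dataset("snli", split="train")
mnli = load_dataset("multi_nli", split="train")

# Reduce MNLI to the shared premise/hypothesis/label schema before concatenating.
mnli = mnli.remove_columns(
    [c for c in mnli.column_names if c not in ("premise", "hypothesis", "label")]
)
combined = concatenate_datasets([snli, mnli])
# Drop examples without a gold label (label == -1).
combined = combined.filter(lambda ex: ex["label"] != -1)
print(combined)
```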
false
# Dataset Card for [Gitcoin ODS Hackathon GR15]

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Creation](#dataset-creation)
  - [Source Data](#source-data)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://gitcoin.co/issue/29389
- **Repository:** https://github.com/poupou-web3/GC-ODS-Sybil
- **Point of Contact:** https://discord.com/channels/562828676480237578/1024788324826763284

### Dataset Summary

This data set was created in the context of the first [Gitcoin Open Data Science Hackathon](https://go.gitcoin.co/blog/open-data-science-hackathon). It contains all the transactions on the Ethereum and Polygon chains of the wallets that contributed to Grant 15 of the Gitcoin grants program. It was created in order to find patterns in the transactions of potential Sybil attackers by exploring their on-chain activity.

## Dataset Creation

### Source Data

The wallet addresses from Grant 15 were extracted from the data put together by the Gitcoin DAO: [GR_15_DATA](https://drive.google.com/drive/folders/17OdrV7SA0I56aDMwqxB6jMwoY3tjSf5w). The data was produced using the [Etherscan API](https://etherscan.io/) and the [PolygonScan API](https://polygonscan.com/), with scripts made available later in the [repo](https://github.com/poupou-web3/GC-ODS-Sybil). An address contributing to [GR_15_DATA](https://drive.google.com/drive/folders/17OdrV7SA0I56aDMwqxB6jMwoY3tjSf5w) with no transactions found on a chain will not appear in the gathered data.

**Careful: the transaction data only contains "normal" transactions, as described by the API provider.**

## Dataset Structure

### Data Instances

There are 4 CSV files:

- 2 for transactions: one for the Ethereum transactions and one for the Polygon transactions.
- 2 for features: one for the Ethereum transactions and one for the Polygon transactions.

### Data Fields

As provided by the [Etherscan API](https://etherscan.io/) and the [PolygonScan API](https://polygonscan.com/). A column `address` was added for easier manipulation and to have all the transactions of all addresses in the same file. It is an unsupervised machine-learning task; there is no target column.

Most of the features have been extracted using [tsfresh](https://tsfresh.readthedocs.io/en/latest/). The code is available in the GitHub [repo](https://github.com/poupou-web3/GC-ODS-Sybil); it allows reproducing the extraction from the two transaction CSVs. Column names are assigned by tsfresh; each feature's detailed definition can be found in its documentation. Following are the descriptions of the features not explained by tsfresh (a sketch of computing them appears at the end of this card):

- countUniqueInteracted: the number of unique addresses with which the wallet address has interacted.
- countTx: the total number of transactions.
- ratioUniqueInteracted: countUniqueInteracted / countTx
- outgoing: the number of outgoing transactions.
- outgoingRatio: outgoing / countTx

## Considerations for Using the Data

### Social Impact of Dataset

The creation of the data set may help in fraud detection and defence in public goods funding.
## Additional Information

### Licensing Information

MIT

### Citation Information

Please cite this data set if you use it, especially in the hackathon context.

### Contributions

Thanks to [@poupou-web3](https://github.com/poupou-web3) for adding this dataset.
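An illustrative sketch of computing the hand-crafted features listed in the Data Fields section, assuming a per-wallet transactions dataframe with `address`, `from` and `to` columns (the schema is an assumption):

```python
import pandas as pd

def wallet_features(tx: pd.DataFrame) -> pd.Series:
    wallet = tx["address"].iloc[0]
    count_tx = len(tx)
    # Unique counterparties the wallet has interacted with.
    counterparties = pd.concat([tx["from"], tx["to"]]).nunique()
    outgoing = int((tx["from"] == wallet).sum())
    return pd.Series({
        "countTx": count_tx,
        "countUniqueInteracted": counterparties,
        "ratioUniqueInteracted": counterparties / count_tx,
        "outgoing": outgoing,
        "outgoingRatio": outgoing / count_tx,
    })
```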
false
# Dataset Card for Pokémon BLIP captions with English and Chinese

Dataset used to train a Pokémon text-to-image model, adding a Chinese column to [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).

BLIP generated captions for Pokémon images from the Few Shot Pokémon dataset introduced by Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis (FastGAN). Original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.

For each row the dataset contains `image`, `en_text` (caption in English) and `zh_text` (caption in Chinese) keys. `image` is a varying size PIL jpeg, and the text fields are the accompanying text captions. Only a train split is provided.

The Chinese captions are translated by [DeepL](https://www.deepl.com/translator).
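A minimal loading sketch (the repository id below is a placeholder; substitute this repository's actual id):

```python
from datasets import load_dataset

# Placeholder id; substitute this repository's actual id.
ds = load_dataset("user/pokemon-blip-captions-en-zh", split="train")
item = ds[0]
print(item["en_text"])  # English caption
print(item["zh_text"])  # Chinese caption
item["image"]           # a PIL image
```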
true
# Dataset Card for GLUE ## Table of Contents - [Dataset Card for GLUE](#dataset-card-for-glue) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [ax](#ax) - [cola](#cola) - [mnli](#mnli) - [mnli_matched](#mnli_matched) - [mnli_mismatched](#mnli_mismatched) - [mrpc](#mrpc) - [qnli](#qnli) - [qqp](#qqp) - [rte](#rte) - [sst2](#sst2) - [stsb](#stsb) - [wnli](#wnli) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [ax](#ax-1) - [cola](#cola-1) - [mnli](#mnli-1) - [mnli_matched](#mnli_matched-1) - [mnli_mismatched](#mnli_mismatched-1) - [mrpc](#mrpc-1) - [qnli](#qnli-1) - [qqp](#qqp-1) - [rte](#rte-1) - [sst2](#sst2-1) - [stsb](#stsb-1) - [wnli](#wnli-1) - [Data Fields](#data-fields) - [ax](#ax-2) - [cola](#cola-2) - [mnli](#mnli-2) - [mnli_matched](#mnli_matched-2) - [mnli_mismatched](#mnli_mismatched-2) - [mrpc](#mrpc-2) - [qnli](#qnli-2) - [qqp](#qqp-2) - [rte](#rte-2) - [sst2](#sst2-2) - [stsb](#stsb-2) - [wnli](#wnli-2) - [Data Splits](#data-splits) - [ax](#ax-3) - [cola](#cola-3) - [mnli](#mnli-3) - [mnli_matched](#mnli_matched-3) - [mnli_mismatched](#mnli_mismatched-3) - [mrpc](#mrpc-3) - [qnli](#qnli-3) - [qqp](#qqp-3) - [rte](#rte-3) - [sst2](#sst2-3) - [stsb](#stsb-3) - [wnli](#wnli-3) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 955.33 MB - **Size of the generated dataset:** 229.68 MB - **Total amount of disk used:** 1185.01 MB ### Dataset Summary GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/) is a collection of resources for training, evaluating, and analyzing natural language understanding systems. ### Supported Tasks and Leaderboards The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). 
It comprises the following tasks:

#### ax

A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.

#### cola

The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.

#### mnli

The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the RTE authors, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.

#### mnli_matched

The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.

#### mnli_mismatched

The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.

#### mrpc

The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.

#### qnli

The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.

#### qqp

The Quora Question Pairs2 dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.

#### rte

The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
#### sst2

The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.

#### stsb

The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.

#### wnli

The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).

### Languages

The language data in GLUE is in English (BCP-47 `en`).

## Dataset Structure

### Data Instances

#### ax

- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB

An example of 'test' looks as follows.
```
{
  "premise": "The cat sat on the mat.",
  "hypothesis": "The cat did not sit on the mat.",
  "label": -1,
  "idx": 0
}
```

#### cola

- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB

An example of 'train' looks as follows.
```
{
  "sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
  "label": 1,
  "idx": 0
}
```

#### mnli

- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB

An example of 'train' looks as follows.
```
{
  "premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
  "hypothesis": "Product and geography are what make cream skimming work.",
  "label": 1,
  "idx": 0
}
```

#### mnli_matched

- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB

An example of 'test' looks as follows.
```
{
  "premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
  "hypothesis": "Hierbas is a name worth looking out for.",
  "label": -1,
  "idx": 0
}
```

#### mnli_mismatched

- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB

An example of 'test' looks as follows.
```
{
  "premise": "What have you decided, what are you going to do?",
  "hypothesis": "So what's your decision?",
  "label": -1,
  "idx": 0
}
```

#### mrpc

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qqp

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### rte

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### sst2

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### stsb

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### wnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Data Fields

The data fields are the same among all splits.

#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.

#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: a `int32` feature.

#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.

#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.

#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: a `int32` feature.
#### mrpc

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qqp

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### rte

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### sst2

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### stsb

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### wnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Data Splits

#### ax

|   |test|
|---|---:|
|ax |1104|

#### cola

|    |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551|      1043|1063|

#### mnli

|    |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702|              9815|                 9832|        9796|           9847|

#### mnli_matched

|            |validation|test|
|------------|---------:|---:|
|mnli_matched|      9815|9796|

#### mnli_mismatched

|               |validation|test|
|---------------|---------:|---:|
|mnli_mismatched|      9832|9847|

#### mrpc

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### qqp

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### rte

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### sst2

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### stsb

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### wnli

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@article{warstadt2018neural,
  title={Neural Network Acceptability Judgments},
  author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
  journal={arXiv preprint arXiv:1805.12471},
  year={2018}
}
@inproceedings{wang2019glue,
  title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
  author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
  note={In the Proceedings of ICLR.},
  year={2019}
}

Note that each GLUE dataset has its own citation. Please see the source to see the correct citation for each contained dataset.
```

### Contributions

Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
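Each task is exposed as a separate configuration of the `glue` dataset; a minimal loading sketch:

```python
from datasets import load_dataset

cola = load_dataset("glue", "cola")
print(cola["train"][0])
# {'sentence': "Our friends won't buy this analysis, let alone the next one we propose.",
#  'label': 1, 'idx': 0}
```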
false
# Dataset Card for MyoQuant SDH Data

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Data Instances and Splits](#data-instances-and-splits)
- [Dataset Creation and Annotations](#dataset-creation-and-annotations)
  - [Source Data and annotation process](#source-data-and-annotation-process)
  - [Who are the annotators ?](#who-are-the-annotators)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases and Limitations](#discussion-of-biases-and-limitations)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [The Team Behind this Dataset](#the-team-behind-this-dataset)
  - [Partners](#partners)

## Dataset Description

- **Homepage:** https://github.com/lambda-science/MyoQuant
- **Repository:** https://huggingface.co/corentinm7/MyoQuant-SDH-Model
- **Paper:** Yet To Come
- **Leaderboard:** N/A
- **Point of Contact:** [**Corentin Meyer**, 3rd year PhD Student in the CSTB Team, ICube — CNRS — Unistra](https://cmeyer.fr) email: <corentin.meyer@etu.unistra.fr>

### Dataset Summary

<p align="center">
<img src="https://i.imgur.com/mzALgZL.png" alt="MyoQuant Banner" style="border-radius: 25px;" />
</p>

This dataset contains images of individual muscle fibers used to train the [MyoQuant](https://github.com/lambda-science/MyoQuant) SDH model. The goal of these data is to train a tool to classify SDH-stained muscle fibers depending on the presence of mitochondria repartition anomalies, a pathological feature useful for diagnosis and classification in patients with congenital myopathies.

## Dataset Structure

### Data Instances and Splits

A total of 16787 single muscle fiber images are in the dataset, split into three sets: train, validation and test. See the table for the exact count of images in each category:

|         | Train (72%) | Validation (8%) | Test (20%) | TOTAL       |
|---------|-------------|-----------------|------------|-------------|
| control | 9165        | 1019            | 2546       | 12730 (76%) |
| sick    | 2920        | 325             | 812        | 4057 (24%)  |
| TOTAL   | 12085       | 1344            | 3358       | 16787       |

## Dataset Creation and Annotations

### Source Data and annotation process

To create this dataset of single muscle fiber images, whole slide images of mouse muscle fibers with SDH staining were taken from WT mice (1), BIN1 KO mice (10) and mutated DNM2 mice (7). Cells contained within these slides were manually counted, labeled and classified into two categories: control (no anomaly) or sick (mitochondria anomaly) by two experts/annotators. Then all single muscle fiber images were extracted from the slides using CellPose to detect each individual cell's boundaries, resulting in 16787 images from 18 whole slide images.

### Who are the annotators?
All data in this dataset were generated and manually annotated by two experts:

- [**Quentin GIRAUD, PhD Student**](https://twitter.com/GiraudGiraud20) @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <quentin.giraud@igbmc.fr>
- **Charlotte GINESTE, Post-Doc** @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <charlotte.gineste@igbmc.fr>

A second pass of verification was done by:

- **Bertrand VERNAY, Platform Leader** @ [Light Microscopy Facility, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/plateformes-technologiques/photonic-microscopy), 1 rue Laurent Fries, 67404 Illkirch, France <bertrand.vernay@igbmc.fr>

### Personal and Sensitive Information

All image data come from mice; there is no personal or sensitive information in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The aim of this dataset is to improve congenital myopathy diagnosis by providing tools to automatically quantify specific pathogenic features in muscle fiber histology images.

### Discussion of Biases and Limitations

This dataset has several limitations (non-exhaustive list):

- The images are from mice and thus might not be ideal to represent the actual mechanisms in human muscle.
- The images come only from two mouse models with mutations in two genes (BIN1, DNM2), while congenital myopathies can be caused by mutations in more than 35 genes.
- Only mitochondria anomalies were considered when classifying cells as "sick"; other anomalies were not considered, so control cells might present other anomalies (such as what are called "cores" in congenital myopathies, for example).

## Additional Information

### Licensing Information

This dataset is under the GNU AFFERO GENERAL PUBLIC LICENSE Version 3, to ensure that what's open source stays open source and available to the community.

### Citation Information

The MyoQuant publication with model and data is yet to come.
## The Team Behind this Dataset

**The creator, uploader and main maintainer of this dataset, the associated model and MyoQuant is:**

- **[Corentin Meyer, 3rd year PhD Student in the CSTB Team, ICube — CNRS — Unistra](https://cmeyer.fr) Email: <corentin.meyer@etu.unistra.fr> Github: [@lambda-science](https://github.com/lambda-science)**

Special thanks to the experts that created the data for this dataset and for all the time they spent counting cells:

- **Quentin GIRAUD, PhD Student** @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <quentin.giraud@igbmc.fr>
- **Charlotte GINESTE, Post-Doc** @ [Department Translational Medicine, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/recherche/teams/pathophysiology-of-neuromuscular-diseases), 1 rue Laurent Fries, 67404 Illkirch, France <charlotte.gineste@igbmc.fr>

Last but not least, thanks to Bertrand Vernay for being at the origin of this project:

- **Bertrand VERNAY, Platform Leader** @ [Light Microscopy Facility, IGBMC, CNRS UMR 7104](https://www.igbmc.fr/en/plateformes-technologiques/photonic-microscopy), 1 rue Laurent Fries, 67404 Illkirch, France <bertrand.vernay@igbmc.fr>

## Partners

<p align="center">
<img src="https://i.imgur.com/m5OGthE.png" alt="Partner Banner" style="border-radius: 25px;" />
</p>

MyoQuant-SDH-Data was born from the collaboration between the [CSTB Team @ ICube](https://cstb.icube.unistra.fr/en/index.php/Home) led by Julie D. Thompson, the [Morphological Unit of the Institute of Myology of Paris](https://www.institut-myologie.org/en/recherche-2/neuromuscular-investigation-center/morphological-unit/) led by Teresinha Evangelista, the [imagery platform MyoImage of the Center of Research in Myology](https://recherche-myologie.fr/technologies/myoimage/) led by Bruno Cadot, [the photonic microscopy platform of the IGBMC](https://www.igbmc.fr/en/plateformes-technologiques/photonic-microscopy) led by Bertrand Vernay and the [Pathophysiology of neuromuscular diseases team @ IGBMC](https://www.igbmc.fr/en/igbmc/a-propos-de-ligbmc/directory/jocelyn-laporte) led by Jocelyn Laporte.
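A minimal loading sketch (the repository id is assumed from the card's naming; adjust if it differs):

```python
from datasets import load_dataset

# Assumed id based on the dataset name; verify against this repository.
ds = load_dataset("corentinm7/MyoQuant-SDH-Data")
print(ds)                    # train / validation / test splits
print(ds["train"].features)  # image plus a control/sick class label
```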
false
# PyCoder

This repository contains the dataset for the paper [Syntax-Aware On-the-Fly Code Completion](https://arxiv.org/abs/2211.04673).

Sample code to run the model can be found in `assets/notebooks/inference.ipynb` in our GitHub repository: https://github.com/awsm-research/pycoder.

PyCoder is an automatic code completion model that leverages a Multi-Task Training technique (MTT) to cooperatively learn the code prediction task and the type prediction task. For the type prediction task, we propose to leverage the standard Python token type information (e.g., String, Number, Name, Keyword), which is readily available and lightweight, instead of the AST information, which requires the source code to be parsable for extraction and therefore limits the ability to perform on-the-fly code completion (see Section 2.3 in our paper).

More information can be found in our paper.

If you use our code or PyCoder, please cite our paper.

<pre><code>@article{takerngsaksiri2022syntax,
  title={Syntax-Aware On-the-Fly Code Completion},
  author={Takerngsaksiri, Wannita and Tantithamthavorn, Chakkrit and Li, Yuan-Fang},
  journal={arXiv preprint arXiv:2211.04673},
  year={2022}
}</code></pre>
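The standard Python token types mentioned above are the ones exposed by Python's built-in tokenizer. A quick illustration with the stdlib `tokenize` module (this is only a sketch of where such type information comes from, not PyCoder's own pipeline):

```python
import io
import tokenize

code = "x = 'hello' + str(42)"

# Each token carries a type (NAME, OP, STRING, NUMBER, ...) alongside its text.
for tok in tokenize.generate_tokens(io.StringIO(code).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
# NAME 'x', OP '=', STRING "'hello'", OP '+', NAME 'str', OP '(', NUMBER '42', ...
```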
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for "RickAndMorty-HorizontalMirror-blip-captions"
false
# Dataset Card for EUWikipedias: A dataset of Wikipedias in the EU languages

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)

### Dataset Summary

A Wikipedia dataset containing cleaned articles in the 24 EU languages. The datasets are built from the Wikipedia dump (https://dumps.wikimedia.org/) with one split per language. Each example contains the content of one full Wikipedia article, cleaned to strip markdown and unwanted sections (references, etc.).

### Supported Tasks and Leaderboards

The dataset supports the fill-mask task.

### Languages

The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv

## Dataset Structure

It is structured in the following format: {date}/{language}_{shard}.jsonl.xz

At the moment only the date '20221120' is supported.

Use the dataset like this:
```python
from datasets import load_dataset

dataset = load_dataset('joelito/EU_Wikipedias', date="20221120", language="de", split='train', streaming=True)
```

### Data Instances

The file format is jsonl.xz and there is one split available (`train`).
| Source | Size (MB) | Words | Documents | Words/Document | |:-------------|------------:|-----------:|------------:|-----------------:| | 20221120.all | 86034 | 9506846949 | 26481379 | 359 | | 20221120.bg | 1261 | 88138772 | 285876 | 308 | | 20221120.cs | 1904 | 189580185 | 513851 | 368 | | 20221120.da | 679 | 74546410 | 286864 | 259 | | 20221120.de | 11761 | 1191919523 | 2740891 | 434 | | 20221120.el | 1531 | 103504078 | 215046 | 481 | | 20221120.en | 26685 | 3192209334 | 6575634 | 485 | | 20221120.es | 6636 | 801322400 | 1583597 | 506 | | 20221120.et | 538 | 48618507 | 231609 | 209 | | 20221120.fi | 1391 | 115779646 | 542134 | 213 | | 20221120.fr | 9703 | 1140823165 | 2472002 | 461 | | 20221120.ga | 72 | 8025297 | 57808 | 138 | | 20221120.hr | 555 | 58853753 | 198746 | 296 | | 20221120.hu | 1855 | 167732810 | 515777 | 325 | | 20221120.it | 5999 | 687745355 | 1782242 | 385 | | 20221120.lt | 409 | 37572513 | 203233 | 184 | | 20221120.lv | 269 | 25091547 | 116740 | 214 | | 20221120.mt | 29 | 2867779 | 5030 | 570 | | 20221120.nl | 3208 | 355031186 | 2107071 | 168 | | 20221120.pl | 3608 | 349900622 | 1543442 | 226 | | 20221120.pt | 3315 | 389786026 | 1095808 | 355 | | 20221120.ro | 1017 | 111455336 | 434935 | 256 | | 20221120.sk | 506 | 49612232 | 238439 | 208 | | 20221120.sl | 543 | 58858041 | 178472 | 329 | | 20221120.sv | 2560 | 257872432 | 2556132 | 100 | ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation This dataset has been created by downloading the wikipedias using [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia) for the 24 EU languages. For more information about the creation of the dataset please refer to prepare_wikipedias.py ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` TODO add citation ``` ### Contributions Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
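Since there is one configuration per language and date, iterating over several EU languages means loading each configuration separately. A minimal sketch using the documented `date` and `language` parameters (the language codes come from the table above):

```python
from datasets import load_dataset

for lang in ["de", "fr", "it"]:  # any of the 24 language codes listed above
    ds = load_dataset("joelito/EU_Wikipedias", date="20221120", language=lang,
                      split="train", streaming=True)
    article = next(iter(ds))  # streaming avoids downloading the full shard set
    print(lang, list(article.keys()))
```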
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
true
# Dataset Card for Counterfactually Augmented SNLI

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)

## Dataset Description

- **Repository:** [Learning the Difference that Makes a Difference with Counterfactually-Augmented Data](https://github.com/acmi-lab/counterfactually-augmented-data)
- **Paper:** [Learning the Difference that Makes a Difference with Counterfactually-Augmented Data](https://openreview.net/forum?id=Sklgs0NFvr)
- **Point of Contact:** [Sagnik Ray Choudhury](mailto:sagnikrayc@gmail.com)

### Dataset Summary

The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE). In the ICLR 2020 paper [Learning the Difference that Makes a Difference with Counterfactually-Augmented Data](https://openreview.net/forum?id=Sklgs0NFvr), Kaushik et al. provided a dataset with counterfactual perturbations on the SNLI and IMDB data. This repository contains the original and counterfactual perturbations for the SNLI data, generated after processing the original data from [here](https://github.com/acmi-lab/counterfactually-augmented-data).

### Languages

The language in the dataset is English as spoken by users of the website Flickr and as spoken by crowdworkers from Amazon Mechanical Turk. The BCP-47 code for English is en.

## Dataset Structure

### Data Instances

For each instance, there is:
- a string for the premise,
- a string for the hypothesis,
- a label: (entailment, contradiction, neutral)
- a type: this tells whether the data point is the original SNLI data point or a counterfactual perturbation.
- an idx: the ids correspond to the original ids in the SNLI data.
For example, if the original SNLI instance was `4626192243.jpg#3r1e`, there will be 5 data points as follows:

```json lines
{
  "idx": "4626192243.jpg#3r1e-orig",
  "premise": "A man with a beard is talking on the cellphone and standing next to someone who is lying down on the street.",
  "hypothesis": "A man is prone on the street while another man stands next to him.",
  "label": "entailment",
  "type": "original"
}
{
  "idx": "4626192243.jpg#3r1e-cf-0",
  "premise": "A man with a beard is talking on the cellphone and standing next to someone who is lying down on the street.",
  "hypothesis": "A man is talking to his wife on the cellphone.",
  "label": "neutral",
  "type": "cf"
}
{
  "idx": "4626192243.jpg#3r1e-cf-1",
  "premise": "A man with a beard is talking on the cellphone and standing next to someone who is on the street.",
  "hypothesis": "A man is prone on the street while another man stands next to him.",
  "label": "neutral",
  "type": "cf"
}
{
  "idx": "4626192243.jpg#3r1e-cf-2",
  "premise": "A man with a beard is talking on the cellphone and standing next to someone who is sitting on the street.",
  "hypothesis": "A man is prone on the street while another man stands next to him.",
  "label": "contradiction",
  "type": "cf"
}
{
  "idx": "4626192243.jpg#3r1e-cf-3",
  "premise": "A man with a beard is talking on the cellphone and standing next to someone who is lying down on the street.",
  "hypothesis": "A man is alone on the street.",
  "label": "contradiction",
  "type": "cf"
}
```

### Data Splits

Following SNLI, this dataset also has 3 splits: _train_, _validation_, and _test_. The original paper says this:

```
RP and RH, each comprised of 3332 pairs in train, 400 in validation, and 800 in test, leading to a total of 6664 pairs in train, 800 in validation, and 1600 in test in the revised dataset.
```

This means for _train_, there are 1666 original SNLI instances, and each has 4 counterfactual perturbations (from premise and hypothesis edits), leading to a total of 1666 * 5 = 8330 _train_ data points in this dataset. Similarly, _validation_ and _test_ have 200 and 400 original SNLI instances respectively, and consequently 1,000 and 2,000 instances in total.

| Dataset Split | Number of Instances in Split |
|---------------|------------------------------|
| Train         | 8,330                        |
| Validation    | 1,000                        |
| Test          | 2,000                        |
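Because every original instance shares an `idx` prefix with its four perturbations, the counterfactual groups can be reassembled after loading. A small sketch assuming the rows are shaped like the examples above (the Hub path is a placeholder, not this dataset's actual id):

```python
from collections import defaultdict
from datasets import load_dataset

# Placeholder path; replace with this dataset's actual id on the Hub.
ds = load_dataset("<repo-id>", split="validation")

groups = defaultdict(list)
for row in ds:
    # "4626192243.jpg#3r1e-cf-0" and "4626192243.jpg#3r1e-orig"
    # both map to the base id "4626192243.jpg#3r1e".
    base = row["idx"].rsplit("-", 1)[0].removesuffix("-cf")
    groups[base].append(row)

# Each group should now hold 1 original + 4 counterfactual rows.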
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for "tweetyface" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [GitHub](https://github.com/ml-projects-kiel/OpenCampus-ApplicationofTransformers) ### Dataset Summary Dataset containing Tweets from prominent Twitter Users. The dataset has been created utilizing a crawler for the Twitter API. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English, German ## Dataset Structure ### Data Instances #### english - **Size of downloaded dataset files:** 4.77 MB - **Size of the generated dataset:** 5.92 MB - **Total amount of disk used:** 4.77 MB #### german - **Size of downloaded dataset files:** 2.58 MB - **Size of the generated dataset:** 3.10 MB - **Total amount of disk used:** 2.59 MB An example of 'validation' looks as follows. ``` { "text": "@SpaceX @Space_Station About twice as much useful mass to orbit as rest of Earth combined", "label": elonmusk, "idx": 1001283 } ``` ### Data Fields The data fields are the same among all splits and languages. - `text`: a `string` feature. - `label`: a classification label - `idx`: an `int64` feature. ### Data Splits | name | train | validation | | ------- | ----: | ---------: | | english | 27857 | 6965 | | german | 10254 | 2564 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed]
false
# Dataset Card for "lmqg/qa_squad" ## Dataset Description - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/) - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/) ### Dataset Summary This is the SQuAD v1 dataset with the train/validatio/test split used in [qg_squad](https://huggingface.co/datasets/lmqg/qg_squad). ### Supported Tasks and Leaderboards * `question-answering` ### Languages English (en) ## Dataset Structure ### Data Fields The data fields are the same among all splits. #### plain_text - `id`: a `string` feature of id - `title`: a `string` feature of title of the paragraph - `context`: a `string` feature of paragraph - `question`: a `string` feature of question - `answers`: a `json` feature of answers ### Data Splits |train |validation|test | |--------:|---------:|-------:| | 75,722| 10,570| 11,877| ## Citation Information ``` @article{2016arXiv160605250R, author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy}, title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}", journal = {arXiv e-prints}, year = 2016, eid = {arXiv:1606.05250}, pages = {arXiv:1606.05250}, archivePrefix = {arXiv}, eprint = {1606.05250}, } ```
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
true
# Utilising Weak Supervision to Create S3D: A Sarcasm Annotated Dataset

This is the repository for the S3D dataset published at EMNLP 2022. The dataset can help build sarcasm detection models.

# S3D Summary

The S3D dataset is our silver-standard dataset of 100,000 tweets labelled for sarcasm using weak supervision by our **BERTweet-sarcasm-combined** model. The tweet texts can be retrieved via the Twitter API so that they can be used for other experiments. S3D contains 38879 tweets labelled as sarcastic and 61211 tweets labelled as not sarcastic.

# Data Fields

- Tweet ID: the ID of the labelled tweet
- Label: a label denoting whether a given tweet is sarcastic

# Data Splits

- Train: 70,000
- Valid: 15,000
- Test: 15,000
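Since the dataset ships tweet IDs rather than tweet text, the tweets have to be hydrated through the Twitter API before training. A rough sketch with the `tweepy` client (the bearer token is a placeholder; batch lookups are capped at 100 IDs per request):

```python
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

def hydrate(tweet_ids):
    """Fetch tweet texts for a batch of up to 100 tweet IDs."""
    response = client.get_tweets(ids=tweet_ids)
    # Deleted or protected tweets are simply absent from response.data.
    return {tweet.id: tweet.text for tweet in (response.data or [])}
```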
false
# Dataset Card for Quran audio

## Content

* 7 Imam full Quran recitations: 7 × 6236 wav files
  - a CSV contains the text info for an 11k subset of short wav files
* Tarteel.io user dataset: ~25k wav files
  - a CSV contains the text info for an 18k subset of the accepted-quality user recordings
true
# Dataset Card for CONDA

## Table of Contents
- [Dataset Description](#dataset-description)
- [Abstract](#dataset-summary)
- [Leaderboards](#leaderboards)
- [Evaluation Metrics](#evaluation-metrics)
- [Languages](#languages)
- [Video](#video)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [CONDA](https://github.com/usydnlp/CONDA)
- **Paper:** [CONDA: a CONtextual Dual-Annotated dataset for in-game toxicity understanding and detection](https://arxiv.org/abs/2106.06213)
- **Point of Contact:** [Caren Han](mailto:caren.han@sydney.edu.au)

## Dataset Summary

Traditional toxicity detection models have focused on the single utterance level without deeper understanding of context. We introduce CONDA, a new dataset for in-game toxic language detection enabling joint intent classification and slot filling analysis, which is the core task of Natural Language Understanding (NLU). The dataset consists of 45K utterances from 12K conversations from the chat logs of 1.9K completed Dota 2 matches. We propose a robust dual semantic-level toxicity framework, which handles utterance and token-level patterns, and rich contextual chatting history. Accompanying the dataset is a thorough in-game toxicity analysis, which provides comprehensive understanding of context at utterance, token, and dual levels. Inspired by NLU, we also apply its metrics to the toxicity detection tasks for assessing toxicity and game-specific aspects. We evaluate strong NLU models on CONDA, providing fine-grained results for different intent classes and slot classes. Furthermore, we examine the coverage of toxicity nature in our dataset by comparing it with other toxicity datasets.

## Leaderboards

The Codalab leaderboard can be found at: https://codalab.lisn.upsaclay.fr/competitions/7827

### Evaluation Metrics

**JSA** (Joint Semantic Accuracy) is used for ranking. An utterance is deemed correctly analysed only if both the utterance-level label and all the token-level labels (including O) are correctly predicted. In addition, the F1 scores of the **utterance-level** E(xplicit) and I(mplicit) classes and the **token-level** T(oxicity), D(ota-specific), and S(game Slang) classes are shown on the leaderboard (but not used as the ranking metric).

## Languages

English

## Video

Please enjoy a video presentation covering the main points from our paper:

<p align="center">

[![ACL_video](https://img.youtube.com/vi/qRCPSSUuf18/0.jpg)](https://www.youtube.com/watch?v=qRCPSSUuf18)

</p>

## Citation Information

```
@inproceedings{weld-etal-2021-conda,
    title = "{CONDA}: a {CON}textual Dual-Annotated dataset for in-game toxicity understanding and detection",
    author = "Weld, Henry and
      Huang, Guanghao and
      Lee, Jean and
      Zhang, Tongshu and
      Wang, Kunze and
      Guo, Xinghong and
      Long, Siqu and
      Poon, Josiah and
      Han, Caren",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.213",
    doi = "10.18653/v1/2021.findings-acl.213",
    pages = "2406--2416",
}
```
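For reference, the JSA ranking metric described above can be computed in a few lines of Python. A sketch under the assumption that predictions and gold labels are given as (utterance_label, token_labels) pairs:

```python
def joint_semantic_accuracy(gold, pred):
    """gold/pred: lists of (utterance_label, [token_labels]) tuples.

    An utterance counts as correct only if its utterance-level label
    and every token-level label (including O) match exactly.
    """
    correct = sum(
        1
        for (g_utt, g_toks), (p_utt, p_toks) in zip(gold, pred)
        if g_utt == p_utt and g_toks == p_toks
    )
    return correct / len(gold)
```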
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
Dataset of captioned spectrograms (text describing the sound).
false
# Dataset Card for cSQuAD1

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

A contrast set generated from the eval set of SQuAD. Questions and answers were modified to help detect dataset artifacts. This dataset only contains a validation set, which should only be used to evaluate a model.

### Supported Tasks

Question Answering (SQuAD).

### Languages

English

## Dataset Structure

### Data Instances

The dataset contains 100 instances.

### Data Fields

| Field        | Description                                      |
|--------------|--------------------------------------------------|
| id           | Id of the document containing the context        |
| title        | Title of the document                            |
| context      | The context of the question                      |
| question     | The question to answer                           |
| answers      | A list of possible answers from the context      |
| answer_start | The index in the context where the answer starts |

### Data Splits

A single `eval` split is provided.

## Dataset Creation

The dataset was created by modifying a sample of 100 examples from the SQuAD test split.

## Additional Information

### Licensing Information

Apache 2.0 license

### Citation Information

TODO: add citations
true
# Dataset Card for [Stackoverflow Post Questions]

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Contributions](#contributions)

## Dataset Description

Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process is the prioritization of the question. The classification scale usually consists of 4 values (P0, P1, P2, and P3), with different meanings across every participant in the industry. On the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions on SO are usually annotated and curated by thousands of people, providing metadata about the quality of the question. This dataset aims to provide an accurate prioritization for programming questions.

### Dataset Summary

The dataset contains the title and body of Stack Overflow questions and a label value (0, 1, 2, 3) that was calculated using thresholds defined by SO badges.

### Languages

English

## Dataset Structure

- `title`: string
- `body`: string
- `label`: int

### Data Splits

The split is 40/40/20, where classes have been balanced to be around the same size.

## Dataset Creation

The dataset was extracted and labeled with the following query in BigQuery:

```
SELECT
  title,
  body,
  CASE
    WHEN score >= 100 OR favorite_count >= 100 OR view_count >= 10000 THEN 0
    WHEN score >= 25 OR favorite_count >= 25 OR view_count >= 2500 THEN 1
    WHEN score >= 10 OR favorite_count >= 10 OR view_count >= 1000 THEN 2
    ELSE 3
  END AS label
FROM `bigquery-public-data`.stackoverflow.posts_questions
```

### Source Data

The data was extracted from the BigQuery public dataset: `bigquery-public-data.stackoverflow.posts_questions`

#### Initial Data Collection and Normalization

The original dataset contained high class imbalance:

| label           | count      |
|-----------------|-----------:|
| 0               |     977424 |
| 1               |    2401534 |
| 2               |    3418179 |
| 3               |   16222990 |
| **Grand Total** |   23020127 |

The data was sampled from each class to have around the same number of records in every class.

### Contributions

Thanks to [@pacofvf](https://github.com/pacofvf) for adding this dataset.
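The same thresholds can be mirrored in Python, e.g. to label fresh questions pulled from the Stack Exchange API. A sketch of the `CASE` logic above:

```python
def priority_label(score: int, favorite_count: int, view_count: int) -> int:
    """Mirror of the BigQuery CASE expression used to build the dataset."""
    if score >= 100 or favorite_count >= 100 or view_count >= 10_000:
        return 0
    if score >= 25 or favorite_count >= 25 or view_count >= 2_500:
        return 1
    if score >= 10 or favorite_count >= 10 or view_count >= 1_000:
        return 2
    return 3
```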
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for Nail Biting Classification

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://huggingface.co/datasets/alecsharpie/nailbiting_classification](https://huggingface.co/datasets/alecsharpie/nailbiting_classification)
- **Repository:** [https://github.com/alecsharpie/nomo_nailbiting](https://github.com/alecsharpie/nomo_nailbiting)
- **Point of Contact:** [alecsharpie@gmail.com](mailto:alecsharpie@gmail.com)

### Dataset Summary

A binary image dataset for classifying nail biting. Images are cropped to show only the mouth area. The dataset contains edge cases such as drinking water, talking on the phone, scratching the chin, etc., all in the "no biting" category.

## Dataset Structure

### Data Instances

- 7147 images
- 14879790 bytes total
- 12332617 bytes download

### Data Fields

128 x 64 (w x h, pixels), black and white.

Labels:
- '0': biting
- '1': no_biting

### Data Splits

- train: 6629 (11965737 bytes)
- test: 1471 (2914053 bytes)

## Dataset Creation

### Curation Rationale

I wanted to create a notification system to help me stop biting my nails. It needed to contain lots of possible no-biting scenarios, e.g. talking on the phone.

### Source Data

#### Initial Data Collection and Normalization

The data was scraped from stock image sites, and photos of myself were taken with my webcam. MTCNN (https://github.com/ipazc/mtcnn) was then used to crop the images down to show only the mouth area. The images were then converted to a black-and-white colour scheme.

### Annotations

#### Annotation process

During the scraping process images were labelled with a description, which I then manually sanity-checked. I labelled the ones of me manually.

#### Who are the annotators?

Alec Sharp

## Considerations for Using the Data

### Discussion of Biases & Limitations

I tried to make the dataset diverse in terms of age and skin tone. However, this dataset contains a large number of images of one subject (me), so it is biased towards lower-quality webcam pictures of a white male with a short beard.

### Dataset Curators

Alec Sharp

### Licensing Information

MIT

### Contributions

Thanks to [@alecsharpie](https://github.com/alecsharpie) for adding this dataset.
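A rough sketch of the preprocessing described under "Initial Data Collection and Normalization", using MTCNN's facial keypoints to crop a mouth region and produce a 128x64 grayscale image. The margin values are assumptions for illustration, not the exact ones used to build the dataset:

```python
import numpy as np
from mtcnn import MTCNN
from PIL import Image

detector = MTCNN()

def crop_mouth(image: np.ndarray) -> Image.Image:
    """Crop a 2:1 region around the mouth and return a 128x64 grayscale image.

    `image` is an RGB numpy array; bounds checking is omitted in this sketch.
    """
    face = detector.detect_faces(image)[0]
    left = face["keypoints"]["mouth_left"]    # (x, y)
    right = face["keypoints"]["mouth_right"]  # (x, y)
    cx, cy = (left[0] + right[0]) // 2, (left[1] + right[1]) // 2
    half_w = max(right[0] - left[0], 32)  # assumed margin around the mouth
    crop = image[cy - half_w // 2 : cy + half_w // 2, cx - half_w : cx + half_w]
    return Image.fromarray(crop).convert("L").resize((128, 64))
```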
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# Dataset Card for lipo

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage: https://moleculenet.org/**
- **Repository: https://github.com/deepchem/deepchem/tree/master**
- **Paper: https://arxiv.org/abs/1703.00564**

### Dataset Summary

`lipo` is a dataset included in [MoleculeNet](https://moleculenet.org/). It contains experimental results for the octanol/water distribution coefficient (logD at pH 7.4).

## Dataset Structure

### Data Fields

Each split contains
* `smiles`: the [SMILES](https://en.wikipedia.org/wiki/Simplified_molecular-input_line-entry_system) representation of a molecule
* `selfies`: the [SELFIES](https://github.com/aspuru-guzik-group/selfies) representation of a molecule
* `target`: the octanol/water distribution coefficient (logD at pH 7.4)

### Data Splits

The dataset is split into an 80/10/10 train/valid/test split using a scaffold split.

### Source Data

#### Initial Data Collection and Normalization

The data was originally generated by the Pande Group at Stanford.

### Licensing Information

This dataset was originally released under an MIT license.

### Citation Information

```
@misc{https://doi.org/10.48550/arxiv.1703.00564,
  doi = {10.48550/ARXIV.1703.00564},
  url = {https://arxiv.org/abs/1703.00564},
  author = {Wu, Zhenqin and Ramsundar, Bharath and Feinberg, Evan N. and Gomes, Joseph and Geniesse, Caleb and Pappu, Aneesh S. and Leswing, Karl and Pande, Vijay},
  keywords = {Machine Learning (cs.LG), Chemical Physics (physics.chem-ph), Machine Learning (stat.ML), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Physical sciences, FOS: Physical sciences},
  title = {MoleculeNet: A Benchmark for Molecular Machine Learning},
  publisher = {arXiv},
  year = {2017},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

### Contributions

Thanks to [@zanussbaum](https://github.com/zanussbaum) for adding this dataset.
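Since each record carries both string representations, converting between them with the `selfies` package looks roughly like this (a sketch; the example molecule is arbitrary):

```python
import selfies as sf

smiles = "CCO"  # ethanol, as an example
encoded = sf.encoder(smiles)   # SMILES -> SELFIES
decoded = sf.decoder(encoded)  # SELFIES -> SMILES (round-trips the molecule)
print(encoded, decoded)
```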
true
# NLU Evaluation Data - English and German

A labeled English **and German** language multi-domain dataset (21 domains) with 25K user utterances for human-robot interaction. This dataset is collected and annotated for evaluating NLU services and platforms. The detailed paper on this dataset can be found at arXiv.org: [Benchmarking Natural Language Understanding Services for building Conversational Agents](https://arxiv.org/abs/1903.05566)

The dataset builds on the annotated data of the [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data) repository. We have added an additional column (`answer_de`) by translating the texts in the `answer` column into German. The translation was made with [DeepL](https://www.deepl.com/translator).

## Labels

The columns `scenario` and `intent` can be used for classification tasks. However, we recommend using even more fine-grained labels. For this purpose, a new label can be derived by concatenating `scenario` and `intent`. For example, this would turn "alarm" and "set" into "alarm_set".

## Dataset Quirks

The original dataset contains some `NaN` values in the `answer` column. This means that there are also `NaN` values in the translations (`answer_de` column). These rows should be filtered. The dataset also contains duplicate values.

## Copyright

Copyright (c) the authors of [xliuhw/NLU-Evaluation-Data](https://github.com/xliuhw/NLU-Evaluation-Data)\
Copyright (c) 2022 [Philip May](https://may.la/)

All data is released under the [Creative Commons Attribution 4.0 International License (CC BY 4.0)](http://creativecommons.org/licenses/by/4.0/).
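Putting the recommendations from "Labels" and "Dataset Quirks" together looks roughly like this with pandas (the file path is a placeholder; the column names follow the text above):

```python
import pandas as pd

# Placeholder path; load the data however it is distributed.
df = pd.read_csv("nlu_evaluation_data.csv")

# Drop NaN answers/translations and duplicate rows, as recommended above.
df = df.dropna(subset=["answer", "answer_de"]).drop_duplicates()

# Derive the fine-grained label, e.g. "alarm" + "set" -> "alarm_set".
df["label"] = df["scenario"] + "_" + df["intent"]
```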
false
# WIFI RSSI Indoor Positioning Dataset

A reliable and comprehensive public WiFi fingerprinting database for researchers to implement and compare indoor localization methods. The database contains RSSI information from 6 APs collected on different days with the support of an autonomous robot.

We use an autonomous robot to collect the WiFi fingerprint data. Our 3-wheel robot has multiple sensors including a wheel odometer, an inertial measurement unit (IMU), a LIDAR, sonar sensors and a color and depth (RGB-D) camera. The robot can navigate to a target location to collect WiFi fingerprints automatically. The localization accuracy of the robot is 0.07 m ± 0.02 m.

The dimension of the area is 21 m × 16 m. It has three long corridors. There are six APs; five of them provide two distinct MAC addresses, for the 2.4- and 5-GHz communication channels respectively, while the remaining one operates only on the 2.4-GHz frequency. One router can also provide CSI information.

# Data Format

X Position (m), Y Position (m), RSSI Feature 1 (dBm), RSSI Feature 2 (dBm), RSSI Feature 3 (dBm), RSSI Feature 4 (dBm), ...
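Given the row layout above, a fingerprint file can be split into coordinates and RSSI features along these lines (a sketch; the file name and comma delimiter are assumptions):

```python
import numpy as np

# Hypothetical file name; each row: x, y, then one RSSI value per AP feature.
data = np.loadtxt("fingerprints.csv", delimiter=",")
positions = data[:, :2]      # X/Y position in metres
rssi_features = data[:, 2:]  # RSSI readings in dBm
```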
false
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
false
# stacked-xsum-1024

A "stacked" version of `xsum`:

1. Original Dataset: a copy of the base dataset.
2. Stacked Rows: the original dataset is processed by stacking rows based on certain criteria:
   - Maximum Input Length: the maximum length for input sequences is 1024 tokens in the longt5 model tokenizer.
   - Maximum Output Length: the maximum length for output sequences is also 1024 tokens in the longt5 model tokenizer.
3. Special Token: the dataset utilizes the `[NEXT_CONCEPT]` token to indicate a new topic **within** the same summary. It is recommended to explicitly add this special token to your model's tokenizer before training, ensuring that it is recognized and processed correctly during downstream usage (see the sketch below).

## updates

- dec 3: upload initial version
- dec 4: upload v2 with basic data quality fixes (i.e. the `is_stacked` column)
- dec 5 0500: upload v3, which has pre-randomised order and drops duplicate document+summary rows

## stats

![stats](https://i.imgur.com/TyyDthT.png)

## dataset details

See the repo `.log` file for more details.

train input:

```python
[2022-12-05 01:05:17] INFO:root:INPUTS - basic stats - train
[2022-12-05 01:05:17] INFO:root:{'num_columns': 5, 'num_rows': 204045, 'num_unique_target': 203107, 'num_unique_text': 203846, 'summary - average chars': 125.46, 'summary - average tokens': 30.383719277610332, 'text input - average chars': 2202.42, 'text input - average tokens': 523.9222230390355}
```

stacked train:

```python
[2022-12-05 04:47:01] INFO:root:stacked 181719 rows, 22326 rows were ineligible
[2022-12-05 04:47:02] INFO:root:dropped 64825 duplicate rows, 320939 rows remain
[2022-12-05 04:47:02] INFO:root:shuffling output with seed 323
[2022-12-05 04:47:03] INFO:root:STACKED - basic stats - train
[2022-12-05 04:47:04] INFO:root:{'num_columns': 6, 'num_rows': 320939, 'num_unique_chapters': 320840, 'num_unique_summaries': 320101, 'summary - average chars': 199.89, 'summary - average tokens': 46.29925001324239, 'text input - average chars': 2629.19, 'text input - average tokens': 621.541532814647}
```

## Citation

If you find this useful in your work, please consider citing us.

```
@misc {stacked_summaries_2023,
    author       = { {Stacked Summaries: Karim Foda and Peter Szemraj} },
    title        = { stacked-xsum-1024 (Revision 2d47220) },
    year         = 2023,
    url          = { https://huggingface.co/datasets/stacked-summaries/stacked-xsum-1024 },
    doi          = { 10.57967/hf/0390 },
    publisher    = { Hugging Face }
}
```
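A minimal sketch of adding the `[NEXT_CONCEPT]` special token before fine-tuning, as recommended above; the long-t5 checkpoint below is an assumed example, not a requirement of this dataset:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("stacked-summaries/stacked-xsum-1024")

# Example checkpoint; use whichever model you plan to fine-tune.
tokenizer = AutoTokenizer.from_pretrained("google/long-t5-tglobal-base")
tokenizer.add_special_tokens({"additional_special_tokens": ["[NEXT_CONCEPT]"]})

# Remember to resize the model's embeddings after adding tokens:
# model.resize_token_embeddings(len(tokenizer))
```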
false
Over 20,000 512x512 mel spectrograms of 5-second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion, along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.

```
x_res = 512
y_res = 512
sample_rate = 22050
n_fft = 2048
hop_length = 512
```
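A minimal sketch of computing a mel spectrogram with the parameters above using librosa; this is an illustrative approximation rather than the linked repository's exact conversion code, and the input file name is an assumption:

```python
import librosa
import numpy as np

# Load a 5-second clip at the dataset's sample rate.
y, sr = librosa.load("clip.wav", sr=22050, duration=5.0)

# n_mels=512 matches the vertical resolution (y_res) of the spectrograms.
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=512)
S_db = librosa.power_to_db(S, ref=np.max)
print(S_db.shape)  # (n_mels, n_frames)
```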
false
# Dataset Card for Deezer ego nets

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
  - [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
  - [Data Properties](#data-properties)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
- **[Homepage](https://snap.stanford.edu/data/deezer_ego_nets.html)**
- **Paper:** (see citation)

### Dataset Summary
The Deezer ego nets dataset contains ego-nets of Eastern European users collected from the music streaming service Deezer in February 2020. Nodes are users and edges are mutual follower relationships.

### Supported Tasks and Leaderboards
The related task is binary classification: predicting the gender of the ego node in the graph.

## External Use

### PyGeometric
To load in PyGeometric, do the following:

```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed).
# Unpack each graph dict into Data, converting list fields to tensors.
dataset_pg_list = [
    Data(**{k: torch.tensor(v) if isinstance(v, list) else v for k, v in graph.items()})
    for graph in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure

### Data Fields

Each row of a given file is a graph, with:
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `y` (list: #labels): contains the label(s) to predict
- `num_nodes` (int): number of nodes of the graph

### Data Splits
This data is not split, and should be used with cross-validation. It comes from the PyGeometric version of the dataset.

## Additional Information

### Licensing Information
The dataset has been released under the GPL-3.0 license.

### Citation Information
See also [github](https://github.com/benedekrozemberczki/karateclub).
```
@inproceedings{karateclub,
  title = {{Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs}},
  author = {Benedek Rozemberczki and Oliver Kiss and Rik Sarkar},
  year = {2020},
  pages = {3125–3132},
  booktitle = {Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM '20)},
  organization = {ACM},
}
```
false
# Dataset Card for MNIST

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
  - [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
  - [Data Properties](#data-properties)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:** (see citation)

### Dataset Summary
The `MNIST` dataset consists of 55000 images in 10 classes, represented as graphs. It is derived from the MNIST computer vision dataset.

### Supported Tasks and Leaderboards
`MNIST` should be used for multiclass graph classification.

## External Use

### PyGeometric
To load in PyGeometric, do the following:

```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed).
# Unpack each graph dict into Data, converting list fields to tensors.
dataset_pg_list = [
    Data(**{k: torch.tensor(v) if isinstance(v, list) else v for k, v in graph.items()})
    for graph in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure

### Data Properties

| property | value |
|---|---|
| #graphs | 55,000 |
| average #nodes | 70.6 |
| average #edges | 564.5 |

### Data Fields

Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): node features
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the label(s) to predict
- `num_nodes` (int): number of nodes of the graph
- `pos` (list: 2 x #node): positional information of each node

### Data Splits
This data comes pre-split, following the PyGeometric version of the dataset.

## Additional Information

### Licensing Information
The dataset has been released under the MIT license.

### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
  author     = {Vijay Prakash Dwivedi and Chaitanya K. Joshi and Thomas Laurent and Yoshua Bengio and Xavier Bresson},
  title      = {Benchmarking Graph Neural Networks},
  journal    = {CoRR},
  volume     = {abs/2003.00982},
  year       = {2020},
  url        = {https://arxiv.org/abs/2003.00982},
  eprinttype = {arXiv},
  eprint     = {2003.00982},
  timestamp  = {Sat, 23 Jan 2021 01:14:30 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
false
# Dataset Card for CIFAR10

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
  - [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
  - [Data Properties](#data-properties)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:** (see citation)

### Dataset Summary
The `CIFAR10` dataset consists of 45000 images in 10 classes, represented as graphs.

### Supported Tasks and Leaderboards
`CIFAR10` should be used for multiclass graph classification.

## External Use

### PyGeometric
To load in PyGeometric, do the following:

```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed).
# Unpack each graph dict into Data, converting list fields to tensors.
dataset_pg_list = [
    Data(**{k: torch.tensor(v) if isinstance(v, list) else v for k, v in graph.items()})
    for graph in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure

### Data Properties

| property | value |
|---|---|
| #graphs | 45,000 |
| average #nodes | 117.6 |
| average #edges | 941.2 |

### Data Fields

Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): node features
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the label(s) to predict
- `num_nodes` (int): number of nodes of the graph
- `pos` (list: 2 x #node): positional information of each node

### Data Splits
This data comes pre-split, following the PyGeometric version of the dataset.

## Additional Information

### Licensing Information
The dataset has been released under the MIT license.

### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
  author     = {Vijay Prakash Dwivedi and Chaitanya K. Joshi and Thomas Laurent and Yoshua Bengio and Xavier Bresson},
  title      = {Benchmarking Graph Neural Networks},
  journal    = {CoRR},
  volume     = {abs/2003.00982},
  year       = {2020},
  url        = {https://arxiv.org/abs/2003.00982},
  eprinttype = {arXiv},
  eprint     = {2003.00982},
  timestamp  = {Sat, 23 Jan 2021 01:14:30 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
false
# Dataset Card for CSL

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
  - [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
  - [Data Properties](#data-properties)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
- **[Homepage](https://github.com/graphdeeplearning/benchmarking-gnns)**
- **Paper:** (see citation)

### Dataset Summary
The CSL dataset is a synthetic dataset used to test GNN expressivity.

### Supported Tasks and Leaderboards
`CSL` should be used for binary graph classification, predicting whether graphs are isomorphic or not.

## External Use

### PyGeometric
To load in PyGeometric, do the following:

```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/<mydataset>")
# For the train set (replace by valid or test as needed).
# Unpack each graph dict into Data, converting list fields to tensors.
dataset_pg_list = [
    Data(**{k: torch.tensor(v) if isinstance(v, list) else v for k, v in graph.items()})
    for graph in dataset_hf["train"]
]
dataset_pg = DataLoader(dataset_pg_list)
```

## Dataset Structure

### Data Properties

| property | value |
|---|---|
| #graphs | 150 |
| average #nodes | 41.0 |
| average #edges | 164.0 |

### Data Fields

Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): node features
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the label(s) to predict
- `num_nodes` (int): number of nodes of the graph

### Data Splits
This data comes pre-split, following the PyGeometric version of the dataset.

## Additional Information

### Licensing Information
The dataset has been released under the MIT license.

### Citation Information
```
@article{DBLP:journals/corr/abs-2003-00982,
  author     = {Vijay Prakash Dwivedi and Chaitanya K. Joshi and Thomas Laurent and Yoshua Bengio and Xavier Bresson},
  title      = {Benchmarking Graph Neural Networks},
  journal    = {CoRR},
  volume     = {abs/2003.00982},
  year       = {2020},
  url        = {https://arxiv.org/abs/2003.00982},
  eprinttype = {arXiv},
  eprint     = {2003.00982},
  timestamp  = {Sat, 23 Jan 2021 01:14:30 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2003-00982.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
false
# Dataset Card for "Kor_Jpn_Translation_Dataset" ### Dataset Summary AI-Hub에서 제공하는 한국어-일본어 번역 말뭉치 데이터(https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=127)를 사용하기 쉽게 정제했습니다. - 제공처 : AI-Hub(https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=127) - 제목 : 한국어-일본어 문화 분야 이중 말뭉치 - 구축분야 : 문화재/향토/K-Food, K-POP(한류)/대중문화_공연 콘텐츠, IT/컴퓨터/모바일, 금융/증시, 사회/노동/복지, 교육, 특허/기술, 자동차 - 구축량 : 150만 문장쌍 - 응용분야 : 언어모델, 자동번역 - 언어 : 원시어-한국어, 목적어-일본어 ### Supported Tasks and Leaderboards - Translation ### Languages - Kor - Jpan ## Dataset Structure features: - name: KOR dtype: string - name: JPN dtype: string splits: - name: train num_bytes: 294787449 num_examples: 840000 - name: val num_bytes: 88406929 num_examples: 252000 - name: test num_bytes: 37964427 num_examples: 108000 download_size: 289307354 dataset_size: 421158805 ### Data Splits splits: - name: train num_bytes: 294787449 num_examples: 840000 - name: val num_bytes: 88406929 num_examples: 252000 - name: test num_bytes: 37964427 num_examples: 108000 ### Contributions [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
false
## Dataset Description
- **Repository:** https://github.com/shuyanzhou/docprompting
- **Paper:** [DocPrompting: Generating Code by Retrieving the Docs](https://arxiv.org/pdf/2207.05987.pdf)

### Dataset Summary
This is the natural language to bash generation dataset we harvested from the English subset of [`tldr`](https://github.com/tldr-pages/tldr).
We split the dataset by bash commands: every command in the dev and test set is held out from the training set.

### Supported Tasks and Leaderboards
This dataset is used to evaluate code generation.

### Languages
English - Bash

## Dataset Structure
```python
dataset = load_dataset("neulab/tldr")
DatasetDict({
    train: Dataset({
        features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
        num_rows: 6414
    })
    test: Dataset({
        features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
        num_rows: 928
    })
    validation: Dataset({
        features: ['question_id', 'nl', 'cmd', 'oracle_man', 'cmd_name', 'tldr_cmd_name', 'manual_exist', 'matching_info'],
        num_rows: 1845
    })
})

code_docs = load_dataset("neulab/docprompting-conala", "docs")
DatasetDict({
    train: Dataset({
        features: ['doc_id', 'doc_content'],
        num_rows: 439064
    })
})
```

### Data Fields
train/dev/test:
- `nl`: the natural language intent
- `cmd`: the reference code snippet
- `question_id`: the unique id of a question
- `oracle_man`: the `doc_id` of the functions used in the reference code snippet. The corresponding contents are in the `docs` split
- `cmd_name`: the bash command of this code snippet
- `tldr_cmd_name`: the bash command used in the tldr github repo. `cmd_name` and `tldr_cmd_name` can differ due to naming differences
- `manual_exist`: whether the manual exists on https://manned.org
- `matching_info`: each code snippet has multiple tokens; this field gives the detailed reference-doc matching for each token

docs:
- `doc_id`: the id of a doc
- `doc_content`: the content of the doc

## Dataset Creation
The dataset was curated from [`tldr`](https://github.com/tldr-pages/tldr). The project aims to provide frequently used bash commands with natural language intents. For more details, please check the repo.

### Citation Information
```
@article{zhou2022doccoder,
  title={DocCoder: Generating Code by Retrieving and Reading Docs},
  author={Zhou, Shuyan and Alon, Uri and Xu, Frank F and Jiang, Zhengbao and Neubig, Graham},
  journal={arXiv preprint arXiv:2207.05987},
  year={2022}
}
```
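A minimal sketch of joining a training example with its oracle documentation, using the loading code above; building the full in-memory lookup is for illustration only and may be memory-hungry:

```python
from datasets import load_dataset

data = load_dataset("neulab/tldr", split="train")
docs = load_dataset("neulab/docprompting-conala", "docs", split="train")

# Map doc_id -> doc_content so oracle_man ids can be resolved.
doc_lookup = {d["doc_id"]: d["doc_content"] for d in docs}

sample = data[0]
print(sample["nl"])   # natural language intent
print(sample["cmd"])  # reference bash snippet
oracle_docs = [doc_lookup[doc_id] for doc_id in sample["oracle_man"]]
```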
true
# Dataset Card for "financial_news_sentiment_mixte_with_phrasebank_75" This is a customized version of the phrasebank dataset in which I kept only sentences validated by at least 75% annotators. In addition I added ~2000 articles of Canadian news where sentiment was validated manually. The dataset also include a column topic which contains one of the following value: * acquisition * other * quaterly financial release * appointment to new position * dividend * corporate update * drillings results * conference * share repurchase program * grant of stocks This was generated automatically using a zero-shot classification model and **was not** reviewed manually. ## References Original dataset is available here: [https://huggingface.co/datasets/financial_phrasebank]
false
<div align="center"> <img width="640" alt="keremberke/blood-cell-object-detection" src="https://huggingface.co/datasets/keremberke/blood-cell-object-detection/resolve/main/thumbnail.jpg"> </div> ### Dataset Labels ``` ['platelets', 'rbc', 'wbc'] ``` ### Number of Images ```json {'train': 255, 'test': 36, 'valid': 73} ``` ### How to Use - Install [datasets](https://pypi.org/project/datasets/): ```bash pip install datasets ``` - Load the dataset: ```python from datasets import load_dataset ds = load_dataset("keremberke/blood-cell-object-detection", name="full") example = ds['train'][0] ``` ### Roboflow Dataset Page [https://universe.roboflow.com/team-roboflow/blood-cell-detection-1ekwu/dataset/3](https://universe.roboflow.com/team-roboflow/blood-cell-detection-1ekwu/dataset/3?ref=roboflow2huggingface) ### Citation ``` @misc{ blood-cell-detection-1ekwu_dataset, title = { Blood Cell Detection Dataset }, type = { Open Source Dataset }, author = { Team Roboflow }, howpublished = { \\url{ https://universe.roboflow.com/team-roboflow/blood-cell-detection-1ekwu } }, url = { https://universe.roboflow.com/team-roboflow/blood-cell-detection-1ekwu }, journal = { Roboflow Universe }, publisher = { Roboflow }, year = { 2022 }, month = { nov }, note = { visited on 2023-01-18 }, } ``` ### License Public Domain ### Dataset Summary This dataset was exported via roboflow.com on November 4, 2022 at 7:46 PM GMT Roboflow is an end-to-end computer vision platform that helps you * collaborate with your team on computer vision projects * collect & organize images * understand unstructured image data * annotate, and create datasets * export, train, and deploy computer vision models * use active learning to improve your dataset over time It includes 364 images. Cells are annotated in COCO format. The following pre-processing was applied to each image: * Auto-orientation of pixel data (with EXIF-orientation stripping) * Resize to 416x416 (Stretch) No image augmentation techniques were applied.
false
### Roboflow Dataset Page
https://universe.roboflow.com/smoke-detection/smoke100-uwe4t/dataset/4

### Dataset Labels

```
['smoke']
```

### Citation

```
@misc{ smoke100-uwe4t_dataset,
    title = { Smoke100 Dataset },
    type = { Open Source Dataset },
    author = { Smoke Detection },
    howpublished = { \\url{ https://universe.roboflow.com/smoke-detection/smoke100-uwe4t } },
    url = { https://universe.roboflow.com/smoke-detection/smoke100-uwe4t },
    journal = { Roboflow Universe },
    publisher = { Roboflow },
    year = { 2022 },
    month = { dec },
    note = { visited on 2023-01-02 },
}
```

### License
CC BY 4.0

### Dataset Summary
This dataset was exported via roboflow.ai on March 17, 2022 at 3:42 PM GMT.

It includes 21578 images. Smoke instances are annotated in COCO format.

The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)

No image augmentation techniques were applied.
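A minimal loading sketch; the card does not state a Hugging Face repo id for this export, so the path below is a hypothetical placeholder:

```python
from datasets import load_dataset

# "<user>/smoke100" is a hypothetical repo id; replace with the actual path.
ds = load_dataset("<user>/smoke100")
example = ds["train"][0]  # split names depend on the export
print(example.keys())
```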
false
# Dataset Card for `clueweb09/it` The `clueweb09/it` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/it). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=27,250,729 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clueweb09_it', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
false
# Dataset Card for `clueweb09/pt` The `clueweb09/pt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb09#clueweb09/pt). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=37,578,858 ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clueweb09_pt', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
false
# Dataset Card for `clueweb12` The `clueweb12` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/clueweb12#clueweb12). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=733,019,372 This dataset is used by: [`clueweb12_touche-2020-task-2`](https://huggingface.co/datasets/irds/clueweb12_touche-2020-task-2), [`clueweb12_touche-2021-task-2`](https://huggingface.co/datasets/irds/clueweb12_touche-2021-task-2) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/clueweb12', 'docs') for record in docs: record # {'doc_id': ..., 'url': ..., 'date': ..., 'http_headers': ..., 'body': ..., 'body_content_type': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.
false
# Dataset Card for `lotte/technology/test` The `lotte/technology/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/lotte#lotte/technology/test). # Data This dataset provides: - `docs` (documents, i.e., the corpus); count=638,509 This dataset is used by: [`lotte_technology_test_forum`](https://huggingface.co/datasets/irds/lotte_technology_test_forum), [`lotte_technology_test_search`](https://huggingface.co/datasets/irds/lotte_technology_test_search) ## Usage ```python from datasets import load_dataset docs = load_dataset('irds/lotte_technology_test', 'docs') for record in docs: record # {'doc_id': ..., 'text': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Santhanam2021ColBERTv2, title = "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction", author = "Keshav Santhanam and Omar Khattab and Jon Saad-Falcon and Christopher Potts and Matei Zaharia", journal= "arXiv preprint arXiv:2112.01488", year = "2021", url = "https://arxiv.org/abs/2112.01488" } ```
false
# Dataset Card for `mmarco/v2/pt/train` The `mmarco/v2/pt/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package. For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/v2/pt/train). # Data This dataset provides: - `queries` (i.e., topics); count=808,731 - `qrels`: (relevance assessments); count=532,761 - `docpairs`; count=39,780,811 - For `docs`, use [`irds/mmarco_v2_pt`](https://huggingface.co/datasets/irds/mmarco_v2_pt) ## Usage ```python from datasets import load_dataset queries = load_dataset('irds/mmarco_v2_pt_train', 'queries') for record in queries: record # {'query_id': ..., 'text': ...} qrels = load_dataset('irds/mmarco_v2_pt_train', 'qrels') for record in qrels: record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...} docpairs = load_dataset('irds/mmarco_v2_pt_train', 'docpairs') for record in docpairs: record # {'query_id': ..., 'doc_id_a': ..., 'doc_id_b': ...} ``` Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format. ## Citation Information ``` @article{Bonifacio2021MMarco, title={{mMARCO}: A Multilingual Version of {MS MARCO} Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Israel Campiotti and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, journal={arXiv:2108.13897} } ```
false
# Dataset Card for `nyt/trec-core-2017`

The `nyt/trec-core-2017` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/trec-core-2017).

# Data

This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=30,030
- For `docs`, use [`irds/nyt`](https://huggingface.co/datasets/irds/nyt)

## Usage

```python
from datasets import load_dataset

queries = load_dataset('irds/nyt_trec-core-2017', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}

qrels = load_dataset('irds/nyt_trec-core-2017', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```

Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the data in 🤗 Dataset format.

## Citation Information

```
@inproceedings{Allan2017TrecCore,
  author = {James Allan and Donna Harman and Evangelos Kanoulas and Dan Li and Christophe Van Gysel and Ellen Voorhees},
  title = {TREC 2017 Common Core Track Overview},
  booktitle = {TREC},
  year = {2017}
}
@article{Sandhaus2008Nyt,
  title={The new york times annotated corpus},
  author={Sandhaus, Evan},
  journal={Linguistic Data Consortium, Philadelphia},
  volume={6},
  number={12},
  pages={e26752},
  year={2008}
}
```
false
This dataset contains 10 images of the Asterix and Obelix cartoon characters, taken from the internet.
false