Each record in this listing has the following fields (ranges are the dataset-viewer column statistics):

- `id` (string, 2-115 chars): dataset identifier
- `lastModified` (string, always 24 chars): ISO 8601 timestamp
- `tags` (list)
- `author` (string, 2-42 chars)
- `description` (string, 0-68.7k chars)
- `citation` (string, 0-10.7k chars)
- `cardData` (null)
- `likes` (int64, 0-3.55k)
- `downloads` (int64, 0-10.1M)
- `card` (string, 0-1.01M chars): the raw dataset-card markdown
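As a sketch, one record in this listing can be pictured as a Python dict. The values below are illustrative, taken from the taesiri/imagenet-hard-4K entry further down; the field names follow the schema above.

```python
# Illustrative sketch of one record in this listing (values taken from the
# taesiri/imagenet-hard-4K entry below; `card` content abbreviated).
record = {
    "id": "taesiri/imagenet-hard-4K",
    "lastModified": "2023-06-11T00:37:29.000Z",  # always 24 characters
    "tags": ["license:mit", "region:us"],
    "author": "taesiri",
    "description": None,
    "citation": None,
    "cardData": None,
    "likes": 2,
    "downloads": 3,
    "card": "---\nlicense: mit\n---\n# Dataset Card ...",  # raw markdown
}

print(len(record["lastModified"]))  # → 24
```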
## tarungupta83/MidJourney_v5_Prompt_dataset

- Last modified: 2023-05-21T14:46:19.000Z
- Tags: `license:apache-2.0`, `region:us`
- Author: tarungupta83
- Likes: 10, Downloads: 3

Card:

---
license: apache-2.0
---

This dataset contains raw prompts from Midjourney v5.

Total records: 4,245,117

Sample data:

| AuthorID | Author | Date | Content | Attachments | Reactions |
| --- | --- | --- | --- | --- | --- |
| 936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | benjamin frankling with rayban sunglasses reflecting a usa flag walking on a side of penguin, whit... | [Link](https://cdn.discordapp.com/attachments/933565701162168371/1098276830525538494/vanDyke_benjamin_frank...) | |
| 936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | Street vendor robot in 80's Poland, meat market, fruit stall, communist style, real photo, real ph... | [Link](https://cdn.discordapp.com/attachments/933565701162168371/1098276841426526290/alepasztet_Street_vend...) | |
| 936929561302675456 | Midjourney Bot#9282 | 04/20/2023 12:00 AM | one of the guys is looking at another man , in the style of kris knight, realistic, detailed rende... | [Link](https://cdn.discordapp.com/attachments/933565701162168371/1098276845394333818/iflwlou_one_of_the_guy...) | |

You can clean the data with the help of the Data Clean notebook provided with the dataset.
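As a minimal sketch of the kind of cleaning the card's Data Clean notebook performs (this is a hypothetical helper, not the notebook's code), raw prompt records can be deduplicated and stripped of empty entries. Field names follow the sample table above; the records here are toy data.

```python
# Hypothetical cleaning sketch, NOT the dataset's Data Clean notebook.
# Field names ("Content" etc.) follow the sample table in the card.
records = [
    {"AuthorID": "936929561302675456", "Author": "Midjourney Bot#9282",
     "Content": "Street vendor robot in 80's Poland, meat market"},
    {"AuthorID": "936929561302675456", "Author": "Midjourney Bot#9282",
     "Content": "Street vendor robot in 80's Poland, meat market"},  # dup
    {"AuthorID": "936929561302675456", "Author": "Midjourney Bot#9282",
     "Content": ""},  # empty prompt
]

def clean_prompts(records):
    """Return unique, non-empty prompt texts, preserving first-seen order."""
    seen, out = set(), []
    for r in records:
        text = r["Content"].strip()
        if text and text not in seen:
            seen.add(text)
            out.append(text)
    return out

print(clean_prompts(records))
# → ["Street vendor robot in 80's Poland, meat market"]
```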
## taesiri/imagenet-hard-4K

- Last modified: 2023-06-11T00:37:29.000Z
- Tags: `task_categories:image-classification`, `size_categories:10K<n<100K`, `language:en`, `license:mit`, `OOD`, `ImageNet`, `Out Of Distribution`, `arxiv:2304.05538`, `region:us`
- Author: taesiri
- Likes: 2, Downloads: 3

Card:

---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: image
      dtype: image
    - name: label
      sequence: int64
    - name: origin
      dtype: string
    - name: english_label
      sequence: string
  splits:
    - name: validation
      num_bytes: 70959420455.86
      num_examples: 10980
  download_size: 66129324319
  dataset_size: 70959420455.86
license: mit
task_categories:
  - image-classification
language:
  - en
tags:
  - OOD
  - ImageNet
  - Out Of Distribution
pretty_name: ImageNet-Hard-4K
size_categories:
  - 10K<n<100K
---

# Dataset Card for "ImageNet-Hard-4K"

[Project Page](https://taesiri.github.io/ZoomIsAllYouNeed/) - [Paper](https://arxiv.org/abs/2304.05538) - [Github](https://github.com/taesiri/ZoomIsAllYouNeed)

**ImageNet-Hard-4K** is the 4K version of the original [**ImageNet-Hard**](https://huggingface.co/datasets/taesiri/imagenet-hard) dataset, a benchmark comprising 10,980 images collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet). This dataset poses a significant challenge to state-of-the-art vision models because merely zooming in often fails to improve their ability to classify images correctly. As a result, even the most advanced models, such as `CLIP-ViT-L/14@336px`, struggle to perform well on this dataset, achieving a mere `2.02%` accuracy.

## Upscaling Procedure

We employed [GigaGAN](https://mingukkang.github.io/GigaGAN/) to upscale each image from the original ImageNet-Hard dataset to a resolution of 4K.
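The card only says images were upscaled "to 4K" without specifying the exact geometry. As a minimal sketch of the resolution bookkeeping (not the authors' GigaGAN pipeline), one plausible assumption is scaling the longer side to 3840 px while preserving aspect ratio:

```python
# Resolution-bookkeeping sketch only; the actual upscaling used GigaGAN.
# ASSUMPTION: "4K" means the longer side becomes 3840 px, aspect preserved.
def target_size(width, height, long_side=3840):
    scale = long_side / max(width, height)
    return round(width * scale), round(height * scale)

# The sample instance shown later in the card is 575x409:
print(target_size(575, 409))  # → (3840, 2731)
```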
### Dataset Distribution

![Dataset Distribution](https://taesiri.github.io/ZoomIsAllYouNeed/static/svg/imagenet_hard_distribution.svg)

### Classifiers Performance

| Model | Accuracy |
| ------------------- | -------- |
| AlexNet | 7.08 |
| VGG-16 | 11.32 |
| ResNet-18 | 10.42 |
| ResNet-50 | 13.93 |
| ViT-B/32 | 18.12 |
| EfficientNet-B0 | 12.94 |
| EfficientNet-B7 | 18.67 |
| EfficientNet-L2-Ns | 28.42 |
| CLIP-ViT-L/14@224px | 1.81 |
| CLIP-ViT-L/14@336px | 1.88 |
| OpenCLIP-ViT-bigG-14 | 14.33 |
| OpenCLIP-ViT-L-14 | 13.04 |

**Evaluation Code**

* CLIP: <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Prompt_Engineering_for_ImageNet_Hard.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
* Other models: <a target="_blank" href="https://colab.research.google.com/github/taesiri/ZoomIsAllYouNeed/blob/main/src/ImageNet_Hard/Benchmark_ImageNet_Hard.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

## Supported Tasks

- `image-classification`: The objective of this task is to classify an image into one or more classes, selected from 1000 ImageNet categories (allowing for multiple ground-truth labels per image).

## Languages

The `english_label` field values are in English.

## Dataset Structure

### Data Instances

An example looks like this:

```python
{
    'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=575x409 at 0x7F09456B53A0>,
    'label': [0],
    'origin': 'imagenet_sketch',
    'english_label': ['tench']
}
```

### Data Fields

The data instances have the following fields:

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`) the image file is automatically decoded. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, i.e.
`dataset[0]["image"]` should always be preferred over `dataset["image"][0]`.

- `label`: A `List[int]` containing the ground-truth class ids.
- `origin`: A string containing the name of the source dataset.
- `english_label`: A `List[str]` containing the English labels for the ground-truth classes.

<details> <summary> Click here to see the full list of ImageNet class label mappings: </summary> |id|Class| |--|-----| |0 | tench, Tinca tinca| |1 | goldfish, Carassius auratus| |2 | great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias| |3 | tiger shark, Galeocerdo cuvieri| |4 | hammerhead, hammerhead shark| |5 | electric ray, crampfish, numbfish, torpedo| |6 | stingray| |7 | cock| |8 | hen| |9 | ostrich, Struthio camelus| |10 | brambling, Fringilla montifringilla| |11 | goldfinch, Carduelis carduelis| |12 | house finch, linnet, Carpodacus mexicanus| |13 | junco, snowbird| |14 | indigo bunting, indigo finch, indigo bird, Passerina cyanea| |15 | robin, American robin, Turdus migratorius| |16 | bulbul| |17 | jay| |18 | magpie| |19 | chickadee| |20 | water ouzel, dipper| |21 | kite| |22 | bald eagle, American eagle, Haliaeetus leucocephalus| |23 | vulture| |24 | great grey owl, great gray owl, Strix nebulosa| |25 | European fire salamander, Salamandra salamandra| |26 | common newt, Triturus vulgaris| |27 | eft| |28 | spotted salamander, Ambystoma maculatum| |29 | axolotl, mud puppy, Ambystoma mexicanum| |30 | bullfrog, Rana catesbeiana| |31 | tree frog, tree-frog| |32 | tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui| |33 | loggerhead, loggerhead turtle, Caretta caretta| |34 | leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea| |35 | mud turtle| |36 | terrapin| |37 | box turtle, box tortoise| |38 | banded gecko| |39 | common iguana, iguana, Iguana iguana| |40 | American chameleon, anole, Anolis carolinensis| |41 | whiptail, whiptail lizard| |42 | agama| |43 | frilled lizard, Chlamydosaurus kingi| |44 | alligator lizard| |45 | 
Gila monster, Heloderma suspectum| |46 | green lizard, Lacerta viridis| |47 | African chameleon, Chamaeleo chamaeleon| |48 | Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis| |49 | African crocodile, Nile crocodile, Crocodylus niloticus| |50 | American alligator, Alligator mississipiensis| |51 | triceratops| |52 | thunder snake, worm snake, Carphophis amoenus| |53 | ringneck snake, ring-necked snake, ring snake| |54 | hognose snake, puff adder, sand viper| |55 | green snake, grass snake| |56 | king snake, kingsnake| |57 | garter snake, grass snake| |58 | water snake| |59 | vine snake| |60 | night snake, Hypsiglena torquata| |61 | boa constrictor, Constrictor constrictor| |62 | rock python, rock snake, Python sebae| |63 | Indian cobra, Naja naja| |64 | green mamba| |65 | sea snake| |66 | horned viper, cerastes, sand viper, horned asp, Cerastes cornutus| |67 | diamondback, diamondback rattlesnake, Crotalus adamanteus| |68 | sidewinder, horned rattlesnake, Crotalus cerastes| |69 | trilobite| |70 | harvestman, daddy longlegs, Phalangium opilio| |71 | scorpion| |72 | black and gold garden spider, Argiope aurantia| |73 | barn spider, Araneus cavaticus| |74 | garden spider, Aranea diademata| |75 | black widow, Latrodectus mactans| |76 | tarantula| |77 | wolf spider, hunting spider| |78 | tick| |79 | centipede| |80 | black grouse| |81 | ptarmigan| |82 | ruffed grouse, partridge, Bonasa umbellus| |83 | prairie chicken, prairie grouse, prairie fowl| |84 | peacock| |85 | quail| |86 | partridge| |87 | African grey, African gray, Psittacus erithacus| |88 | macaw| |89 | sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita| |90 | lorikeet| |91 | coucal| |92 | bee eater| |93 | hornbill| |94 | hummingbird| |95 | jacamar| |96 | toucan| |97 | drake| |98 | red-breasted merganser, Mergus serrator| |99 | goose| |100 | black swan, Cygnus atratus| |101 | tusker| |102 | echidna, spiny anteater, anteater| |103 | platypus, duckbill, duckbilled platypus, 
duck-billed platypus, Ornithorhynchus anatinus| |104 | wallaby, brush kangaroo| |105 | koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus| |106 | wombat| |107 | jellyfish| |108 | sea anemone, anemone| |109 | brain coral| |110 | flatworm, platyhelminth| |111 | nematode, nematode worm, roundworm| |112 | conch| |113 | snail| |114 | slug| |115 | sea slug, nudibranch| |116 | chiton, coat-of-mail shell, sea cradle, polyplacophore| |117 | chambered nautilus, pearly nautilus, nautilus| |118 | Dungeness crab, Cancer magister| |119 | rock crab, Cancer irroratus| |120 | fiddler crab| |121 | king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica| |122 | American lobster, Northern lobster, Maine lobster, Homarus americanus| |123 | spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish| |124 | crayfish, crawfish, crawdad, crawdaddy| |125 | hermit crab| |126 | isopod| |127 | white stork, Ciconia ciconia| |128 | black stork, Ciconia nigra| |129 | spoonbill| |130 | flamingo| |131 | little blue heron, Egretta caerulea| |132 | American egret, great white heron, Egretta albus| |133 | bittern| |134 | crane| |135 | limpkin, Aramus pictus| |136 | European gallinule, Porphyrio porphyrio| |137 | American coot, marsh hen, mud hen, water hen, Fulica americana| |138 | bustard| |139 | ruddy turnstone, Arenaria interpres| |140 | red-backed sandpiper, dunlin, Erolia alpina| |141 | redshank, Tringa totanus| |142 | dowitcher| |143 | oystercatcher, oyster catcher| |144 | pelican| |145 | king penguin, Aptenodytes patagonica| |146 | albatross, mollymawk| |147 | grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus| |148 | killer whale, killer, orca, grampus, sea wolf, Orcinus orca| |149 | dugong, Dugong dugon| |150 | sea lion| |151 | Chihuahua| |152 | Japanese spaniel| |153 | Maltese dog, Maltese terrier, Maltese| |154 | Pekinese, Pekingese, Peke| |155 | Shih-Tzu| |156 | Blenheim spaniel| |157 | papillon| 
|158 | toy terrier| |159 | Rhodesian ridgeback| |160 | Afghan hound, Afghan| |161 | basset, basset hound| |162 | beagle| |163 | bloodhound, sleuthhound| |164 | bluetick| |165 | black-and-tan coonhound| |166 | Walker hound, Walker foxhound| |167 | English foxhound| |168 | redbone| |169 | borzoi, Russian wolfhound| |170 | Irish wolfhound| |171 | Italian greyhound| |172 | whippet| |173 | Ibizan hound, Ibizan Podenco| |174 | Norwegian elkhound, elkhound| |175 | otterhound, otter hound| |176 | Saluki, gazelle hound| |177 | Scottish deerhound, deerhound| |178 | Weimaraner| |179 | Staffordshire bullterrier, Staffordshire bull terrier| |180 | American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier| |181 | Bedlington terrier| |182 | Border terrier| |183 | Kerry blue terrier| |184 | Irish terrier| |185 | Norfolk terrier| |186 | Norwich terrier| |187 | Yorkshire terrier| |188 | wire-haired fox terrier| |189 | Lakeland terrier| |190 | Sealyham terrier, Sealyham| |191 | Airedale, Airedale terrier| |192 | cairn, cairn terrier| |193 | Australian terrier| |194 | Dandie Dinmont, Dandie Dinmont terrier| |195 | Boston bull, Boston terrier| |196 | miniature schnauzer| |197 | giant schnauzer| |198 | standard schnauzer| |199 | Scotch terrier, Scottish terrier, Scottie| |200 | Tibetan terrier, chrysanthemum dog| |201 | silky terrier, Sydney silky| |202 | soft-coated wheaten terrier| |203 | West Highland white terrier| |204 | Lhasa, Lhasa apso| |205 | flat-coated retriever| |206 | curly-coated retriever| |207 | golden retriever| |208 | Labrador retriever| |209 | Chesapeake Bay retriever| |210 | German short-haired pointer| |211 | vizsla, Hungarian pointer| |212 | English setter| |213 | Irish setter, red setter| |214 | Gordon setter| |215 | Brittany spaniel| |216 | clumber, clumber spaniel| |217 | English springer, English springer spaniel| |218 | Welsh springer spaniel| |219 | cocker spaniel, English cocker spaniel, cocker| |220 | Sussex spaniel| 
|221 | Irish water spaniel| |222 | kuvasz| |223 | schipperke| |224 | groenendael| |225 | malinois| |226 | briard| |227 | kelpie| |228 | komondor| |229 | Old English sheepdog, bobtail| |230 | Shetland sheepdog, Shetland sheep dog, Shetland| |231 | collie| |232 | Border collie| |233 | Bouvier des Flandres, Bouviers des Flandres| |234 | Rottweiler| |235 | German shepherd, German shepherd dog, German police dog, alsatian| |236 | Doberman, Doberman pinscher| |237 | miniature pinscher| |238 | Greater Swiss Mountain dog| |239 | Bernese mountain dog| |240 | Appenzeller| |241 | EntleBucher| |242 | boxer| |243 | bull mastiff| |244 | Tibetan mastiff| |245 | French bulldog| |246 | Great Dane| |247 | Saint Bernard, St Bernard| |248 | Eskimo dog, husky| |249 | malamute, malemute, Alaskan malamute| |250 | Siberian husky| |251 | dalmatian, coach dog, carriage dog| |252 | affenpinscher, monkey pinscher, monkey dog| |253 | basenji| |254 | pug, pug-dog| |255 | Leonberg| |256 | Newfoundland, Newfoundland dog| |257 | Great Pyrenees| |258 | Samoyed, Samoyede| |259 | Pomeranian| |260 | chow, chow chow| |261 | keeshond| |262 | Brabancon griffon| |263 | Pembroke, Pembroke Welsh corgi| |264 | Cardigan, Cardigan Welsh corgi| |265 | toy poodle| |266 | miniature poodle| |267 | standard poodle| |268 | Mexican hairless| |269 | timber wolf, grey wolf, gray wolf, Canis lupus| |270 | white wolf, Arctic wolf, Canis lupus tundrarum| |271 | red wolf, maned wolf, Canis rufus, Canis niger| |272 | coyote, prairie wolf, brush wolf, Canis latrans| |273 | dingo, warrigal, warragal, Canis dingo| |274 | dhole, Cuon alpinus| |275 | African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus| |276 | hyena, hyaena| |277 | red fox, Vulpes vulpes| |278 | kit fox, Vulpes macrotis| |279 | Arctic fox, white fox, Alopex lagopus| |280 | grey fox, gray fox, Urocyon cinereoargenteus| |281 | tabby, tabby cat| |282 | tiger cat| |283 | Persian cat| |284 | Siamese cat, Siamese| |285 | Egyptian cat| |286 | cougar, puma, 
catamount, mountain lion, painter, panther, Felis concolor| |287 | lynx, catamount| |288 | leopard, Panthera pardus| |289 | snow leopard, ounce, Panthera uncia| |290 | jaguar, panther, Panthera onca, Felis onca| |291 | lion, king of beasts, Panthera leo| |292 | tiger, Panthera tigris| |293 | cheetah, chetah, Acinonyx jubatus| |294 | brown bear, bruin, Ursus arctos| |295 | American black bear, black bear, Ursus americanus, Euarctos americanus| |296 | ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus| |297 | sloth bear, Melursus ursinus, Ursus ursinus| |298 | mongoose| |299 | meerkat, mierkat| |300 | tiger beetle| |301 | ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle| |302 | ground beetle, carabid beetle| |303 | long-horned beetle, longicorn, longicorn beetle| |304 | leaf beetle, chrysomelid| |305 | dung beetle| |306 | rhinoceros beetle| |307 | weevil| |308 | fly| |309 | bee| |310 | ant, emmet, pismire| |311 | grasshopper, hopper| |312 | cricket| |313 | walking stick, walkingstick, stick insect| |314 | cockroach, roach| |315 | mantis, mantid| |316 | cicada, cicala| |317 | leafhopper| |318 | lacewing, lacewing fly| |319 | dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk| |320 | damselfly| |321 | admiral| |322 | ringlet, ringlet butterfly| |323 | monarch, monarch butterfly, milkweed butterfly, Danaus plexippus| |324 | cabbage butterfly| |325 | sulphur butterfly, sulfur butterfly| |326 | lycaenid, lycaenid butterfly| |327 | starfish, sea star| |328 | sea urchin| |329 | sea cucumber, holothurian| |330 | wood rabbit, cottontail, cottontail rabbit| |331 | hare| |332 | Angora, Angora rabbit| |333 | hamster| |334 | porcupine, hedgehog| |335 | fox squirrel, eastern fox squirrel, Sciurus niger| |336 | marmot| |337 | beaver| |338 | guinea pig, Cavia cobaya| |339 | sorrel| |340 | zebra| |341 | hog, pig, grunter, squealer, Sus scrofa| |342 | wild boar, boar, Sus scrofa| |343 | warthog| 
|344 | hippopotamus, hippo, river horse, Hippopotamus amphibius| |345 | ox| |346 | water buffalo, water ox, Asiatic buffalo, Bubalus bubalis| |347 | bison| |348 | ram, tup| |349 | bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis| |350 | ibex, Capra ibex| |351 | hartebeest| |352 | impala, Aepyceros melampus| |353 | gazelle| |354 | Arabian camel, dromedary, Camelus dromedarius| |355 | llama| |356 | weasel| |357 | mink| |358 | polecat, fitch, foulmart, foumart, Mustela putorius| |359 | black-footed ferret, ferret, Mustela nigripes| |360 | otter| |361 | skunk, polecat, wood pussy| |362 | badger| |363 | armadillo| |364 | three-toed sloth, ai, Bradypus tridactylus| |365 | orangutan, orang, orangutang, Pongo pygmaeus| |366 | gorilla, Gorilla gorilla| |367 | chimpanzee, chimp, Pan troglodytes| |368 | gibbon, Hylobates lar| |369 | siamang, Hylobates syndactylus, Symphalangus syndactylus| |370 | guenon, guenon monkey| |371 | patas, hussar monkey, Erythrocebus patas| |372 | baboon| |373 | macaque| |374 | langur| |375 | colobus, colobus monkey| |376 | proboscis monkey, Nasalis larvatus| |377 | marmoset| |378 | capuchin, ringtail, Cebus capucinus| |379 | howler monkey, howler| |380 | titi, titi monkey| |381 | spider monkey, Ateles geoffroyi| |382 | squirrel monkey, Saimiri sciureus| |383 | Madagascar cat, ring-tailed lemur, Lemur catta| |384 | indri, indris, Indri indri, Indri brevicaudatus| |385 | Indian elephant, Elephas maximus| |386 | African elephant, Loxodonta africana| |387 | lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens| |388 | giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca| |389 | barracouta, snoek| |390 | eel| |391 | coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch| |392 | rock beauty, Holocanthus tricolor| |393 | anemone fish| |394 | sturgeon| |395 | gar, garfish, garpike, billfish, Lepisosteus osseus| |396 | lionfish| |397 | puffer, pufferfish, blowfish, 
globefish| |398 | abacus| |399 | abaya| |400 | academic gown, academic robe, judge's robe| |401 | accordion, piano accordion, squeeze box| |402 | acoustic guitar| |403 | aircraft carrier, carrier, flattop, attack aircraft carrier| |404 | airliner| |405 | airship, dirigible| |406 | altar| |407 | ambulance| |408 | amphibian, amphibious vehicle| |409 | analog clock| |410 | apiary, bee house| |411 | apron| |412 | ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin| |413 | assault rifle, assault gun| |414 | backpack, back pack, knapsack, packsack, rucksack, haversack| |415 | bakery, bakeshop, bakehouse| |416 | balance beam, beam| |417 | balloon| |418 | ballpoint, ballpoint pen, ballpen, Biro| |419 | Band Aid| |420 | banjo| |421 | bannister, banister, balustrade, balusters, handrail| |422 | barbell| |423 | barber chair| |424 | barbershop| |425 | barn| |426 | barometer| |427 | barrel, cask| |428 | barrow, garden cart, lawn cart, wheelbarrow| |429 | baseball| |430 | basketball| |431 | bassinet| |432 | bassoon| |433 | bathing cap, swimming cap| |434 | bath towel| |435 | bathtub, bathing tub, bath, tub| |436 | beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon| |437 | beacon, lighthouse, beacon light, pharos| |438 | beaker| |439 | bearskin, busby, shako| |440 | beer bottle| |441 | beer glass| |442 | bell cote, bell cot| |443 | bib| |444 | bicycle-built-for-two, tandem bicycle, tandem| |445 | bikini, two-piece| |446 | binder, ring-binder| |447 | binoculars, field glasses, opera glasses| |448 | birdhouse| |449 | boathouse| |450 | bobsled, bobsleigh, bob| |451 | bolo tie, bolo, bola tie, bola| |452 | bonnet, poke bonnet| |453 | bookcase| |454 | bookshop, bookstore, bookstall| |455 | bottlecap| |456 | bow| |457 | bow tie, bow-tie, bowtie| |458 | brass, memorial tablet, plaque| |459 | brassiere, bra, bandeau| |460 | breakwater, groin, groyne, mole, bulwark, seawall, jetty| |461 | breastplate, aegis, 
egis| |462 | broom| |463 | bucket, pail| |464 | buckle| |465 | bulletproof vest| |466 | bullet train, bullet| |467 | butcher shop, meat market| |468 | cab, hack, taxi, taxicab| |469 | caldron, cauldron| |470 | candle, taper, wax light| |471 | cannon| |472 | canoe| |473 | can opener, tin opener| |474 | cardigan| |475 | car mirror| |476 | carousel, carrousel, merry-go-round, roundabout, whirligig| |477 | carpenter's kit, tool kit| |478 | carton| |479 | car wheel| |480 | cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM| |481 | cassette| |482 | cassette player| |483 | castle| |484 | catamaran| |485 | CD player| |486 | cello, violoncello| |487 | cellular telephone, cellular phone, cellphone, cell, mobile phone| |488 | chain| |489 | chainlink fence| |490 | chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour| |491 | chain saw, chainsaw| |492 | chest| |493 | chiffonier, commode| |494 | chime, bell, gong| |495 | china cabinet, china closet| |496 | Christmas stocking| |497 | church, church building| |498 | cinema, movie theater, movie theatre, movie house, picture palace| |499 | cleaver, meat cleaver, chopper| |500 | cliff dwelling| |501 | cloak| |502 | clog, geta, patten, sabot| |503 | cocktail shaker| |504 | coffee mug| |505 | coffeepot| |506 | coil, spiral, volute, whorl, helix| |507 | combination lock| |508 | computer keyboard, keypad| |509 | confectionery, confectionary, candy store| |510 | container ship, containership, container vessel| |511 | convertible| |512 | corkscrew, bottle screw| |513 | cornet, horn, trumpet, trump| |514 | cowboy boot| |515 | cowboy hat, ten-gallon hat| |516 | cradle| |517 | crane_1| |518 | crash helmet| |519 | crate| |520 | crib, cot| |521 | Crock Pot| |522 | croquet ball| |523 | crutch| |524 | cuirass| |525 | dam, dike, dyke| |526 | desk| |527 | desktop computer| |528 | dial telephone, dial phone| |529 | diaper, nappy, napkin| |530 | digital 
clock| |531 | digital watch| |532 | dining table, board| |533 | dishrag, dishcloth| |534 | dishwasher, dish washer, dishwashing machine| |535 | disk brake, disc brake| |536 | dock, dockage, docking facility| |537 | dogsled, dog sled, dog sleigh| |538 | dome| |539 | doormat, welcome mat| |540 | drilling platform, offshore rig| |541 | drum, membranophone, tympan| |542 | drumstick| |543 | dumbbell| |544 | Dutch oven| |545 | electric fan, blower| |546 | electric guitar| |547 | electric locomotive| |548 | entertainment center| |549 | envelope| |550 | espresso maker| |551 | face powder| |552 | feather boa, boa| |553 | file, file cabinet, filing cabinet| |554 | fireboat| |555 | fire engine, fire truck| |556 | fire screen, fireguard| |557 | flagpole, flagstaff| |558 | flute, transverse flute| |559 | folding chair| |560 | football helmet| |561 | forklift| |562 | fountain| |563 | fountain pen| |564 | four-poster| |565 | freight car| |566 | French horn, horn| |567 | frying pan, frypan, skillet| |568 | fur coat| |569 | garbage truck, dustcart| |570 | gasmask, respirator, gas helmet| |571 | gas pump, gasoline pump, petrol pump, island dispenser| |572 | goblet| |573 | go-kart| |574 | golf ball| |575 | golfcart, golf cart| |576 | gondola| |577 | gong, tam-tam| |578 | gown| |579 | grand piano, grand| |580 | greenhouse, nursery, glasshouse| |581 | grille, radiator grille| |582 | grocery store, grocery, food market, market| |583 | guillotine| |584 | hair slide| |585 | hair spray| |586 | half track| |587 | hammer| |588 | hamper| |589 | hand blower, blow dryer, blow drier, hair dryer, hair drier| |590 | hand-held computer, hand-held microcomputer| |591 | handkerchief, hankie, hanky, hankey| |592 | hard disc, hard disk, fixed disk| |593 | harmonica, mouth organ, harp, mouth harp| |594 | harp| |595 | harvester, reaper| |596 | hatchet| |597 | holster| |598 | home theater, home theatre| |599 | honeycomb| |600 | hook, claw| |601 | hoopskirt, crinoline| |602 | horizontal bar, high bar| |603 
| horse cart, horse-cart| |604 | hourglass| |605 | iPod| |606 | iron, smoothing iron| |607 | jack-o'-lantern| |608 | jean, blue jean, denim| |609 | jeep, landrover| |610 | jersey, T-shirt, tee shirt| |611 | jigsaw puzzle| |612 | jinrikisha, ricksha, rickshaw| |613 | joystick| |614 | kimono| |615 | knee pad| |616 | knot| |617 | lab coat, laboratory coat| |618 | ladle| |619 | lampshade, lamp shade| |620 | laptop, laptop computer| |621 | lawn mower, mower| |622 | lens cap, lens cover| |623 | letter opener, paper knife, paperknife| |624 | library| |625 | lifeboat| |626 | lighter, light, igniter, ignitor| |627 | limousine, limo| |628 | liner, ocean liner| |629 | lipstick, lip rouge| |630 | Loafer| |631 | lotion| |632 | loudspeaker, speaker, speaker unit, loudspeaker system, speaker system| |633 | loupe, jeweler's loupe| |634 | lumbermill, sawmill| |635 | magnetic compass| |636 | mailbag, postbag| |637 | mailbox, letter box| |638 | maillot| |639 | maillot, tank suit| |640 | manhole cover| |641 | maraca| |642 | marimba, xylophone| |643 | mask| |644 | matchstick| |645 | maypole| |646 | maze, labyrinth| |647 | measuring cup| |648 | medicine chest, medicine cabinet| |649 | megalith, megalithic structure| |650 | microphone, mike| |651 | microwave, microwave oven| |652 | military uniform| |653 | milk can| |654 | minibus| |655 | miniskirt, mini| |656 | minivan| |657 | missile| |658 | mitten| |659 | mixing bowl| |660 | mobile home, manufactured home| |661 | Model T| |662 | modem| |663 | monastery| |664 | monitor| |665 | moped| |666 | mortar| |667 | mortarboard| |668 | mosque| |669 | mosquito net| |670 | motor scooter, scooter| |671 | mountain bike, all-terrain bike, off-roader| |672 | mountain tent| |673 | mouse, computer mouse| |674 | mousetrap| |675 | moving van| |676 | muzzle| |677 | nail| |678 | neck brace| |679 | necklace| |680 | nipple| |681 | notebook, notebook computer| |682 | obelisk| |683 | oboe, hautboy, hautbois| |684 | ocarina, sweet potato| |685 | odometer, 
hodometer, mileometer, milometer| |686 | oil filter| |687 | organ, pipe organ| |688 | oscilloscope, scope, cathode-ray oscilloscope, CRO| |689 | overskirt| |690 | oxcart| |691 | oxygen mask| |692 | packet| |693 | paddle, boat paddle| |694 | paddlewheel, paddle wheel| |695 | padlock| |696 | paintbrush| |697 | pajama, pyjama, pj's, jammies| |698 | palace| |699 | panpipe, pandean pipe, syrinx| |700 | paper towel| |701 | parachute, chute| |702 | parallel bars, bars| |703 | park bench| |704 | parking meter| |705 | passenger car, coach, carriage| |706 | patio, terrace| |707 | pay-phone, pay-station| |708 | pedestal, plinth, footstall| |709 | pencil box, pencil case| |710 | pencil sharpener| |711 | perfume, essence| |712 | Petri dish| |713 | photocopier| |714 | pick, plectrum, plectron| |715 | pickelhaube| |716 | picket fence, paling| |717 | pickup, pickup truck| |718 | pier| |719 | piggy bank, penny bank| |720 | pill bottle| |721 | pillow| |722 | ping-pong ball| |723 | pinwheel| |724 | pirate, pirate ship| |725 | pitcher, ewer| |726 | plane, carpenter's plane, woodworking plane| |727 | planetarium| |728 | plastic bag| |729 | plate rack| |730 | plow, plough| |731 | plunger, plumber's helper| |732 | Polaroid camera, Polaroid Land camera| |733 | pole| |734 | police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria| |735 | poncho| |736 | pool table, billiard table, snooker table| |737 | pop bottle, soda bottle| |738 | pot, flowerpot| |739 | potter's wheel| |740 | power drill| |741 | prayer rug, prayer mat| |742 | printer| |743 | prison, prison house| |744 | projectile, missile| |745 | projector| |746 | puck, hockey puck| |747 | punching bag, punch bag, punching ball, punchball| |748 | purse| |749 | quill, quill pen| |750 | quilt, comforter, comfort, puff| |751 | racer, race car, racing car| |752 | racket, racquet| |753 | radiator| |754 | radio, wireless| |755 | radio telescope, radio reflector| |756 | rain barrel| |757 | recreational vehicle, RV, R.V.| |758 | 
reel| |759 | reflex camera| |760 | refrigerator, icebox| |761 | remote control, remote| |762 | restaurant, eating house, eating place, eatery| |763 | revolver, six-gun, six-shooter| |764 | rifle| |765 | rocking chair, rocker| |766 | rotisserie| |767 | rubber eraser, rubber, pencil eraser| |768 | rugby ball| |769 | rule, ruler| |770 | running shoe| |771 | safe| |772 | safety pin| |773 | saltshaker, salt shaker| |774 | sandal| |775 | sarong| |776 | sax, saxophone| |777 | scabbard| |778 | scale, weighing machine| |779 | school bus| |780 | schooner| |781 | scoreboard| |782 | screen, CRT screen| |783 | screw| |784 | screwdriver| |785 | seat belt, seatbelt| |786 | sewing machine| |787 | shield, buckler| |788 | shoe shop, shoe-shop, shoe store| |789 | shoji| |790 | shopping basket| |791 | shopping cart| |792 | shovel| |793 | shower cap| |794 | shower curtain| |795 | ski| |796 | ski mask| |797 | sleeping bag| |798 | slide rule, slipstick| |799 | sliding door| |800 | slot, one-armed bandit| |801 | snorkel| |802 | snowmobile| |803 | snowplow, snowplough| |804 | soap dispenser| |805 | soccer ball| |806 | sock| |807 | solar dish, solar collector, solar furnace| |808 | sombrero| |809 | soup bowl| |810 | space bar| |811 | space heater| |812 | space shuttle| |813 | spatula| |814 | speedboat| |815 | spider web, spider's web| |816 | spindle| |817 | sports car, sport car| |818 | spotlight, spot| |819 | stage| |820 | steam locomotive| |821 | steel arch bridge| |822 | steel drum| |823 | stethoscope| |824 | stole| |825 | stone wall| |826 | stopwatch, stop watch| |827 | stove| |828 | strainer| |829 | streetcar, tram, tramcar, trolley, trolley car| |830 | stretcher| |831 | studio couch, day bed| |832 | stupa, tope| |833 | submarine, pigboat, sub, U-boat| |834 | suit, suit of clothes| |835 | sundial| |836 | sunglass| |837 | sunglasses, dark glasses, shades| |838 | sunscreen, sunblock, sun blocker| |839 | suspension bridge| |840 | swab, swob, mop| |841 | sweatshirt| |842 | swimming trunks, 
bathing trunks| |843 | swing| |844 | switch, electric switch, electrical switch| |845 | syringe| |846 | table lamp| |847 | tank, army tank, armored combat vehicle, armoured combat vehicle| |848 | tape player| |849 | teapot| |850 | teddy, teddy bear| |851 | television, television system| |852 | tennis ball| |853 | thatch, thatched roof| |854 | theater curtain, theatre curtain| |855 | thimble| |856 | thresher, thrasher, threshing machine| |857 | throne| |858 | tile roof| |859 | toaster| |860 | tobacco shop, tobacconist shop, tobacconist| |861 | toilet seat| |862 | torch| |863 | totem pole| |864 | tow truck, tow car, wrecker| |865 | toyshop| |866 | tractor| |867 | trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi| |868 | tray| |869 | trench coat| |870 | tricycle, trike, velocipede| |871 | trimaran| |872 | tripod| |873 | triumphal arch| |874 | trolleybus, trolley coach, trackless trolley| |875 | trombone| |876 | tub, vat| |877 | turnstile| |878 | typewriter keyboard| |879 | umbrella| |880 | unicycle, monocycle| |881 | upright, upright piano| |882 | vacuum, vacuum cleaner| |883 | vase| |884 | vault| |885 | velvet| |886 | vending machine| |887 | vestment| |888 | viaduct| |889 | violin, fiddle| |890 | volleyball| |891 | waffle iron| |892 | wall clock| |893 | wallet, billfold, notecase, pocketbook| |894 | wardrobe, closet, press| |895 | warplane, military plane| |896 | washbasin, handbasin, washbowl, lavabo, wash-hand basin| |897 | washer, automatic washer, washing machine| |898 | water bottle| |899 | water jug| |900 | water tower| |901 | whiskey jug| |902 | whistle| |903 | wig| |904 | window screen| |905 | window shade| |906 | Windsor tie| |907 | wine bottle| |908 | wing| |909 | wok| |910 | wooden spoon| |911 | wool, woolen, woollen| |912 | worm fence, snake fence, snake-rail fence, Virginia fence| |913 | wreck| |914 | yawl| |915 | yurt| |916 | web site, website, internet site, site| |917 | comic book| |918 | crossword puzzle, crossword| |919 | 
street sign| |920 | traffic light, traffic signal, stoplight| |921 | book jacket, dust cover, dust jacket, dust wrapper| |922 | menu| |923 | plate| |924 | guacamole| |925 | consomme| |926 | hot pot, hotpot| |927 | trifle| |928 | ice cream, icecream| |929 | ice lolly, lolly, lollipop, popsicle| |930 | French loaf| |931 | bagel, beigel| |932 | pretzel| |933 | cheeseburger| |934 | hotdog, hot dog, red hot| |935 | mashed potato| |936 | head cabbage| |937 | broccoli| |938 | cauliflower| |939 | zucchini, courgette| |940 | spaghetti squash| |941 | acorn squash| |942 | butternut squash| |943 | cucumber, cuke| |944 | artichoke, globe artichoke| |945 | bell pepper| |946 | cardoon| |947 | mushroom| |948 | Granny Smith| |949 | strawberry| |950 | orange| |951 | lemon| |952 | fig| |953 | pineapple, ananas| |954 | banana| |955 | jackfruit, jak, jack| |956 | custard apple| |957 | pomegranate| |958 | hay| |959 | carbonara| |960 | chocolate sauce, chocolate syrup| |961 | dough| |962 | meat loaf, meatloaf| |963 | pizza, pizza pie| |964 | potpie| |965 | burrito| |966 | red wine| |967 | espresso| |968 | cup| |969 | eggnog| |970 | alp| |971 | bubble| |972 | cliff, drop, drop-off| |973 | coral reef| |974 | geyser| |975 | lakeside, lakeshore| |976 | promontory, headland, head, foreland| |977 | sandbar, sand bar| |978 | seashore, coast, seacoast, sea-coast| |979 | valley, vale| |980 | volcano| |981 | ballplayer, baseball player| |982 | groom, bridegroom| |983 | scuba diver| |984 | rapeseed| |985 | daisy| |986 | yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum| |987 | corn| |988 | acorn| |989 | hip, rose hip, rosehip| |990 | buckeye, horse chestnut, conker| |991 | coral fungus| |992 | agaric| |993 | gyromitra| |994 | stinkhorn, carrion fungus| |995 | earthstar| |996 | hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa| |997 | bolete| |998 | ear, spike, capitulum| |999 | toilet tissue, toilet paper, bathroom tissue| </details> 
### Data Splits This dataset is a validation-only set. ## Dataset Creation ### Source Data This dataset is sourced from ImageNet, ImageNet-ReaL, ImageNet-V2, ImageNet-A, ImageNet-C, ImageNet-R, ImageNet-Sketch, and ObjectNet. ## Citation Information ``` @article{taesiri2023zoom, title={ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification}, author={Taesiri, Mohammad Reza and Nguyen, Giang and Habchi, Sarra and Bezemer, Cor-Paul and Nguyen, Anh}, journal={arXiv preprint arXiv:2304.05538}, year={2023} } ```
mlfoundations/datacomp_large
2023-08-21T21:44:50.000Z
[ "license:cc-by-4.0", "region:us" ]
mlfoundations
null
null
null
2
3
--- license: cc-by-4.0 --- ## DataComp Large Pool This repository contains metadata files for the large pool of DataComp. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp). We distribute the image url-text samples and metadata under a standard Creative Common CC-BY-4.0 license. The individual images are under their own copyrights. ## Terms and Conditions We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which covers their dataset library. Specifically, any content you download, access or use from our index, is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to, liabilities related to image downloading and storage.
hf-internal-testing/tourism-monthly-batch
2023-05-22T09:39:39.000Z
[ "license:cc", "region:us" ]
hf-internal-testing
null
null
null
1
3
--- license: cc ---
ttxy/sts
2023-05-22T11:02:07.000Z
[ "task_categories:text-classification", "language:code", "license:bsd", "sts", "region:us" ]
ttxy
null
null
null
0
3
--- language: - code pretty_name: "semantic text similarity" tags: - sts license: "bsd" task_categories: - text-classification --- sts 2012-2016 datasets
mlfoundations/datacomp_small
2023-08-21T21:44:22.000Z
[ "license:cc-by-4.0", "region:us" ]
mlfoundations
null
null
null
1
3
--- license: cc-by-4.0 --- ## DataComp Small Pool This repository contains metadata files for the small pool of DataComp. For details on how to use the metadata, please visit [our website](https://www.datacomp.ai/) and our [github repository](https://github.com/mlfoundations/datacomp). We distribute the image url-text samples and metadata under a standard Creative Common CC-BY-4.0 license. The individual images are under their own copyrights. ## Terms and Conditions We have terms of service that are similar to those adopted by HuggingFace (https://huggingface.co/terms-of-service), which covers their dataset library. Specifically, any content you download, access or use from our index, is at your own risk and subject to the terms of service or copyright limitations accompanying such content. The image url-text index, which is a research artifact, is provided as is. By using said index, you assume all risks, including but not limited to, liabilities related to image downloading and storage.
coeuslearning/yelp_review_full
2023-05-23T05:18:23.000Z
[ "task_categories:conversational", "task_categories:text2text-generation", "task_categories:question-answering", "size_categories:100K<n<1M", "language:en", "region:us" ]
coeuslearning
The Yelp reviews dataset consists of reviews from Yelp. It is extracted from the Yelp Dataset Challenge 2015 data. The Yelp reviews full star dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the above dataset. It is first used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).
@inproceedings{zhang2015character, title={Character-level convolutional networks for text classification}, author={Zhang, Xiang and Zhao, Junbo and LeCun, Yann}, booktitle={Advances in neural information processing systems}, pages={649--657}, year={2015} }
null
0
3
--- dataset_info: features: - name: label dtype: class_label: names: '0': 1 star '1': 2 star '2': 3 stars '3': 4 stars '4': 5 stars - name: text dtype: string splits: - name: train num_bytes: 483811554 num_examples: 650000 - name: test num_bytes: 37271188 num_examples: 50000 download_size: 322952367 dataset_size: 521082742 task_categories: - conversational - text2text-generation - question-answering language: - en pretty_name: coeusyelp size_categories: - 100K<n<1M --- # Dataset Card for "yelp_review_full" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
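As a hedged illustration of the `class_label` feature declared in the dataset_info above (the star names are copied from that block; the helper function is our own sketch, not part of the dataset), mapping the integer labels back to their names looks like this:

```python
# Illustrative sketch only: the label names below are copied from the
# class_label definition in the dataset_info above; the helper function
# name is our own, not part of the dataset.
STAR_NAMES = ["1 star", "2 star", "3 stars", "4 stars", "5 stars"]

def label_to_stars(label: int) -> str:
    """Map an integer class label (0-4) to its star-rating name."""
    if not 0 <= label < len(STAR_NAMES):
        raise ValueError(f"label out of range: {label}")
    return STAR_NAMES[label]

print(label_to_stars(0))  # -> 1 star
print(label_to_stars(4))  # -> 5 stars
```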
GeorgeGuo/detect
2023-05-23T07:15:59.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:zh", "license:apache-2.0", "music", "region:us" ]
GeorgeGuo
null
null
null
0
3
--- license: apache-2.0 task_categories: - text-classification language: - zh tags: - music size_categories: - 10K<n<100K --- This is a dataset for testing.
zhanghanchong/css
2023-07-24T07:51:45.000Z
[ "task_categories:text2text-generation", "size_categories:1K<n<10K", "language:zh", "license:cc-by-4.0", "arxiv:2305.15891", "region:us" ]
zhanghanchong
null
\
null
1
3
--- task_categories: - text2text-generation language: - zh size_categories: - 1K<n<10K license: cc-by-4.0 --- # Dataset Description - **Repository:** https://github.com/X-LANCE/medical-dataset - **Paper:** https://arxiv.org/abs/2305.15891 # Dataset Summary CSS is a large-scale cross-schema Chinese text-to-SQL medical dataset. # Dataset Splits ### Example-based Split * **train**: 3472 question/SQL pairs * **dev**: 434 question/SQL pairs * **test**: 434 question/SQL pairs ### Template-based Split * **train**: 3470 question/SQL pairs * **dev**: 430 question/SQL pairs * **test**: 440 question/SQL pairs ### Schema-based Split * **train**: 18550 question/SQL pairs * **dev**: 8150 question/SQL pairs * **test**: 6920 question/SQL pairs # Citation Information @misc{zhang2023css, title={CSS: A Large-scale Cross-schema Chinese Text-to-SQL Medical Dataset}, author={Hanchong Zhang and Jieyu Li and Lu Chen and Ruisheng Cao and Yunyan Zhang and Yu Huang and Yefeng Zheng and Kai Yu}, year={2023}, eprint={2305.15891}, archivePrefix={arXiv}, primaryClass={cs.CL} }
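As a quick sanity check on the split sizes listed above (a sketch; the dictionary keys here are our own labels, not loader configuration names), the example- and template-based splits both total 4340 question/SQL pairs, while the schema-based split is much larger:

```python
# (train, dev, test) sizes copied from the split listing above;
# the keys are our own labels, not dataset loader config names.
splits = {
    "example-based": (3472, 434, 434),
    "template-based": (3470, 430, 440),
    "schema-based": (18550, 8150, 6920),
}

totals = {name: sum(sizes) for name, sizes in splits.items()}
for name, (train, dev, test) in splits.items():
    total = totals[name]
    print(f"{name}: {total} pairs, train fraction {train / total:.0%}")
```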
Eitanli/github-issues
2023-05-24T10:57:04.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:openrail", "code", "region:us" ]
Eitanli
null
null
null
0
3
--- dataset_info: features: - name: url dtype: string - name: repository_url dtype: string - name: labels_url dtype: string - name: comments_url dtype: string - name: events_url dtype: string - name: html_url dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: number dtype: int64 - name: title dtype: string - name: user struct: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: labels list: - name: id dtype: int64 - name: node_id dtype: string - name: url dtype: string - name: name dtype: string - name: color dtype: string - name: default dtype: bool - name: description dtype: string - name: state dtype: string - name: locked dtype: bool - name: assignee struct: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: assignees list: - name: login dtype: string - name: id dtype: int64 - name: node_id dtype: string - name: avatar_url dtype: string - name: gravatar_id dtype: 
string - name: url dtype: string - name: html_url dtype: string - name: followers_url dtype: string - name: following_url dtype: string - name: gists_url dtype: string - name: starred_url dtype: string - name: subscriptions_url dtype: string - name: organizations_url dtype: string - name: repos_url dtype: string - name: events_url dtype: string - name: received_events_url dtype: string - name: type dtype: string - name: site_admin dtype: bool - name: milestone dtype: 'null' - name: comments sequence: string - name: created_at dtype: timestamp[s] - name: updated_at dtype: timestamp[s] - name: closed_at dtype: timestamp[s] - name: author_association dtype: string - name: active_lock_reason dtype: 'null' - name: draft dtype: bool - name: pull_request struct: - name: url dtype: string - name: html_url dtype: string - name: diff_url dtype: string - name: patch_url dtype: string - name: merged_at dtype: timestamp[s] - name: body dtype: string - name: reactions struct: - name: url dtype: string - name: total_count dtype: int64 - name: '+1' dtype: int64 - name: '-1' dtype: int64 - name: laugh dtype: int64 - name: hooray dtype: int64 - name: confused dtype: int64 - name: heart dtype: int64 - name: rocket dtype: int64 - name: eyes dtype: int64 - name: timeline_url dtype: string - name: performed_via_github_app dtype: 'null' - name: state_reason dtype: string - name: is_pull_request dtype: bool splits: - name: train num_bytes: 2600208 num_examples: 215 download_size: 683347 dataset_size: 2600208 license: openrail task_categories: - text-classification language: - en tags: - code pretty_name: github_issues size_categories: - 1K<n<10K --- # Dataset Card for "github-issues" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
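The flattened `dataset_info` above describes a deeply nested record (user, labels, assignees, and reactions structs). A minimal sketch of navigating one such record, with field names taken from the schema but all sample values invented for illustration:

```python
# Field names follow the dataset_info schema above; all values are
# invented for illustration only.
issue = {
    "number": 42,
    "title": "Example issue",
    "is_pull_request": False,
    "comments": ["first comment", "second comment"],
    "reactions": {"total_count": 3, "+1": 2, "-1": 0, "heart": 1},
}

def summarize(record: dict) -> str:
    """Build a one-line summary from a record shaped like the schema."""
    kind = "PR" if record["is_pull_request"] else "issue"
    return (
        f"#{record['number']} ({kind}): {record['title']}, "
        f"{len(record['comments'])} comments, "
        f"{record['reactions']['total_count']} reactions"
    )

print(summarize(issue))  # -> #42 (issue): Example issue, 2 comments, 3 reactions
```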
mmenendezg/raw_pneumonia_x_ray
2023-07-13T16:53:15.000Z
[ "region:us" ]
mmenendezg
null
null
null
0
3
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': normal '1': pneumonia splits: - name: train num_bytes: 3197295656.864 num_examples: 5232 - name: test num_bytes: 111133345.0 num_examples: 624 download_size: 1263131638 dataset_size: 3308429001.864 --- # Dataset Card for "raw_pneumonia_x_ray" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
carlesoctav/en-id-parallel-sentences
2023-05-25T04:20:44.000Z
[ "region:us" ]
carlesoctav
null
null
null
0
3
--- dataset_info: features: - name: text_en dtype: string - name: text_id dtype: string splits: - name: msmarcoquery num_bytes: 41010003 num_examples: 500000 - name: combinedtech num_bytes: 44901963 num_examples: 276659 - name: msmarcocollection num_bytes: 351086941 num_examples: 500000 - name: TED2020 num_bytes: 32590228 num_examples: 163319 - name: Tatoeba num_bytes: 797670 num_examples: 10543 - name: NeuLabTedTalks num_bytes: 19440416 num_examples: 94224 - name: QED num_bytes: 40115874 num_examples: 274581 - name: tico19 num_bytes: 959990 num_examples: 3071 download_size: 282831590 dataset_size: 530903085 --- # Dataset Card for "en-id-parallel-sentences" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
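The byte and example counts listed above can be cross-checked directly; the following sketch (numbers copied verbatim from the dataset_info) recomputes the grand totals and a rough average record size per split. The per-split sums do reproduce the listed `dataset_size` of 530903085 bytes:

```python
# (bytes, examples) per split, copied from the dataset_info above.
splits = {
    "msmarcoquery": (41010003, 500000),
    "combinedtech": (44901963, 276659),
    "msmarcocollection": (351086941, 500000),
    "TED2020": (32590228, 163319),
    "Tatoeba": (797670, 10543),
    "NeuLabTedTalks": (19440416, 94224),
    "QED": (40115874, 274581),
    "tico19": (959990, 3071),
}

total_bytes = sum(b for b, _ in splits.values())
total_examples = sum(n for _, n in splits.values())
avg_bytes = {name: b / n for name, (b, n) in splits.items()}

print(total_bytes)      # matches the listed dataset_size: 530903085
print(total_examples)
print(round(avg_bytes["Tatoeba"]))  # Tatoeba pairs are short sentences
```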
tasksource/tracie
2023-05-31T08:26:23.000Z
[ "task_categories:text-classification", "language:en", "license:apache-2.0", "nli", "region:us" ]
tasksource
null
null
null
0
3
--- license: apache-2.0 task_categories: - text-classification language: - en tags: - nli --- https://github.com/allenai/aristo-leaderboard/tree/master/tracie/data ``` @inproceedings{ZRNKSR21, author = {Ben Zhou and Kyle Richardson and Qiang Ning and Tushar Khot and Ashish Sabharwal and Dan Roth}, title = {Temporal Reasoning on Implicit Events from Distant Supervision}, booktitle = {NAACL}, year = {2021}, } ```
ibm/clinic150-sur
2023-05-30T11:22:19.000Z
[ "task_categories:text-classification", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|clinic150", "language:en", "license:mit", "arxiv:1911.03118", "arxiv:2305.17750", "region:us" ]
ibm
null
null
null
0
3
--- license: mit annotations_creators: other language_creators: other language: en multilinguality: monolingual size_categories: 100K<n<1M source_datasets: extended|clinic150 task_categories: - text-classification pretty_name: Clinic150-SUR --- dataset_info: features: - name: intent dtype: string - name: user_utterance dtype: string - name: origin dtype: string # Dataset Card for "clinic150-SUR" ### Dataset Summary The Clinic150-SUR dataset is a novel and augmented dataset designed to simulate natural human behavior during interactions with customer service-like centers. Extending the [Clinic150 dataset](https://aclanthology.org/D19-1131/), it incorporates two augmentation techniques, including IBM's [LAMBADA](https://arxiv.org/abs/1911.03118) and [Parrot](https://github.com/PrithivirajDamodaran/Parrot_Paraphraser) models and carefully curated duplicated utterances. This dataset aims to provide a more comprehensive and realistic representation of customer service interactions, facilitating the development and evaluation of robust and efficient dialogue systems. Key Features: - Augmentation with IBM's [LAMBADA Model](https://arxiv.org/abs/1911.03118): The Clinic150-SUR dataset leverages IBM's LAMBADA model, a language generation model trained on a large corpus of text, to augment the original dataset. This augmentation process enhances the diversity and complexity of the dialogue data, allowing for a broader range of interactions. - Integration of [Parrot](https://github.com/PrithivirajDamodaran/Parrot_Paraphraser) Model: In addition to the LAMBADA model, the Clinic150-SUR dataset also incorporates the Parrot model, providing a variety of paraphrases. By integrating Parrot, the dataset achieves more variations of existing utterances. - Duplicated Utterances: The dataset includes carefully curated duplicated utterances to mimic real-world scenarios where users rephrase or repeat commonly asked queries. 
This feature adds variability to the data, reflecting the natural tendencies of human interactions, and enables dialogue systems to handle such instances better. - [Clinic150](https://aclanthology.org/D19-1131/) as the Foundation: The Clinic150-SUR dataset is built upon the Clinic150 dataset, which originally consisted of 150 in-domain intent classes and 150 human utterances for each intent. By utilizing this foundation, the augmented dataset retains the in-domain expertise while better reflecting the nature of user requests towards a dialog system. ### Data Instances #### clinic150-SUR - **Size of downloaded dataset file:** 29 MB ### Data Fields #### clinic150-SUR - `intent`: a `string` feature. - `user_utterance`: a `string` feature. - `origin`: a `string` feature ('original', 'lambada', 'parrot'). ### Citation Information ``` @inproceedings{rabinovich2022reliable, title={Reliable and Interpretable Drift Detection in Streams of Short Texts}, author={Rabinovich, Ella and Vetzler, Matan and Ackerman, Samuel and Anaby-Tavor, Ateret}, booktitle = "Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics (industry track)", publisher = "Association for Computational Linguistics", year={2023}, url={https://arxiv.org/abs/2305.17750} } ``` ### Contributions Thanks to [Matan Vetzler](https://www.linkedin.com/in/matanvetzler/), [Ella Rabinovich](https://www.linkedin.com/in/ella-rabinovich-7b9a06/) for adding this dataset.
mnbvcx/XFUND-LiLT
2023-05-25T12:06:17.000Z
[ "region:us" ]
mnbvcx
https://github.com/doc-analysis/XFUND
@inproceedings{xu-etal-2022-xfund, title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding", author = "Xu, Yiheng and Lv, Tengchao and Cui, Lei and Wang, Guoxin and Lu, Yijuan and Florencio, Dinei and Zhang, Cha and Wei, Furu", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.253", doi = "10.18653/v1/2022.findings-acl.253", pages = "3214--3224", abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.", }
null
0
3
Entry not found
julia-lukasiewicz-pater/GPT-wiki-intro-features
2023-06-11T14:41:17.000Z
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:en", "license:cc", "region:us" ]
julia-lukasiewicz-pater
null
null
null
1
3
--- license: cc task_categories: - text-classification language: - en size_categories: - 100K<n<1M --- # Small-GPT-wiki-intro-features dataset This dataset is based on [aadityaubhat/GPT-wiki-intro](https://huggingface.co/datasets/aadityaubhat/GPT-wiki-intro). It contains 150k short texts from Wikipedia (label 0) and corresponding texts generated by ChatGPT (label 1) (together 300k texts). For each text, various complexity measures were calculated, including readability, lexical diversity, etc. It can be used for text classification or analysis of linguistic features of human-generated and ChatGPT-generated texts. For a smaller version, check out [julia-lukasiewicz-pater/small-GPT-wiki-intro-features](https://huggingface.co/datasets/julia-lukasiewicz-pater/small-GPT-wiki-intro-features). ## Dataset structure Features were calculated using various Python libraries, namely NLTK, [readability-metrics](https://pypi.org/project/py-readability-metrics/), [lexical-diversity](https://pypi.org/project/lexical-diversity/), and [TextDescriptives](https://hlasse.github.io/TextDescriptives/). 
The list of all features and their corresponding sources can be found below: | Column | Description | | ------ | ----------- | | text | human- or ChatGPT-generated text; taken from aadityaubhat/GPT-wiki-intro | | normalized_bigram_entropy | bigram entropy normalized with estimated maximum entropy; nltk | | mean_word_length | mean word length; nltk | | mean_sent_length | mean sentence length; nltk | | fog | Gunning-Fog; readability-metrics | | ari | Automated Readability Index; readability-metrics | | dale_chall | Dale Chall Readability; readability-metrics | | hdd | Hypergeometric Distribution; lexical-diversity | | mtld | Measure of lexical textual diversity; lexical-diversity | | mattr | Moving average type-token ratio; lexical-diversity | | number_of_ADJ | proportion of adjectives per word; nltk | | number_of_ADP | proportion of adpositions per word; nltk | | number_of_ADV | proportion of adverbs per word; nltk | | number_of_CONJ | proportion of conjunctions per word; nltk | | number_of_DET | proportion of determiners per word; nltk | | number_of_NOUN | proportion of nouns per word; nltk | | number_of_NUM | proportion of numerals per word; nltk | | number_of_PRT | proportion of particles per word; nltk | | number_of_PRON | proportion of pronouns per word; nltk | | number_of_VERB | proportion of verbs per word; nltk | | number_of_DOT | proportion of punctuation marks per word; nltk | | number_of_X | proportion of POS tag 'Other' per word; nltk | | class | binary class, 0 stands for Wikipedia, 1 stands for ChatGPT | | spacy_perplexity | text perplexity; TextDescriptives | | entropy | text entropy; TextDescriptives | | automated_readability_index | Automated Readability Index; TextDescriptives | | per_word_spacy_perplexity | text perplexity per word; TextDescriptives | | dependency_distance_mean | mean distance from each token to their dependent; TextDescriptives | | dependency_distance_std | standard deviation of distance from each token to their dependent; 
TextDescriptives | | first_order_coherence | cosine similarity between consecutive sentences; TextDescriptives | | second_order_coherence | cosine similarity between sentences that are two sentences apart; TextDescriptives | | smog | SMOG; TextDescriptives | | prop_adjacent_dependency_relation_mean | mean proportion adjacent dependency relations; TextDescriptives | | prop_adjacent_dependency_relation_std | standard deviation of proportion adjacent dependency relations; TextDescriptives | | syllables_per_token_mean | mean of syllables per token; TextDescriptives | | syllables_per_token_median | median of syllables per token; TextDescriptives | | token_length_std | standard deviation of token length; TextDescriptives | | token_length_median | median of token length; TextDescriptives | | sentence_length_median | median of sentence length; TextDescriptives | | syllables_per_token_std | standard deviation of syllables per token; TextDescriptives | | proportion_unique_tokens | proportion of unique tokens; TextDescriptives | | top_ngram_chr_fraction_3 | fraction of characters in a document which are contained within the top n-grams, for a specified n-gram range; TextDescriptives | | top_ngram_chr_fraction_2 | fraction of characters in a document which are contained within the top n-grams, for a specified n-gram range; TextDescriptives | | top_ngram_chr_fraction_4 | fraction of characters in a document which are contained within the top n-grams, for a specified n-gram range; TextDescriptives | | proportion_bullet_points | proportion of lines in the document that start with a bullet point; TextDescriptives | | flesch_reading_ease | Flesch Reading Ease; TextDescriptives | | flesch_kincaid_grade | Flesch Kincaid grade; TextDescriptives | | gunning_fog | Gunning-Fog; TextDescriptives | | coleman_liau_index | Coleman-Liau Index; TextDescriptives | | oov_ratio | out-of-vocabulary ratio; TextDescriptives | ## Code Code that was used to generate this dataset can be found on [Github](https://github.com/julia-lukasiewicz-pater/gpt-wiki-features/tree/main).
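Several of the columns above are simple ratios. As a plain-Python approximation of `proportion_unique_tokens` and `mean_word_length` (a sketch only: the dataset's actual values come from spaCy/NLTK tokenization, so numbers computed this way will differ slightly):

```python
import re

def _tokens(text: str) -> list:
    # Crude whitespace/punctuation tokenizer; the original features use
    # NLTK/spaCy tokenization, so this is only an approximation.
    return re.findall(r"[\w']+", text.lower())

def proportion_unique_tokens(text: str) -> float:
    tokens = _tokens(text)
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def mean_word_length(text: str) -> float:
    tokens = _tokens(text)
    return sum(len(t) for t in tokens) / len(tokens) if tokens else 0.0

sample = "the cat sat on the mat"
print(proportion_unique_tokens(sample))  # 5 unique tokens out of 6
print(mean_word_length(sample))
```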
mssongit/KorfinQA
2023-05-26T00:48:15.000Z
[ "task_categories:question-answering", "language:ko", "license:mit", "finance", "region:us" ]
mssongit
null
null
null
0
3
--- license: mit task_categories: - question-answering language: - ko tags: - finance --- ## Korean translation of FinQA Question and Answer columns, 6252 rows in total.
starfishmedical/webGPT_x_dolly
2023-05-30T19:47:30.000Z
[ "task_categories:question-answering", "size_categories:10K<n<100K", "license:cc-by-sa-3.0", "region:us" ]
starfishmedical
null
null
null
1
3
--- license: cc-by-sa-3.0 task_categories: - question-answering size_categories: - 10K<n<100K --- This dataset contains a selection of Q&A-related tasks gathered and cleaned from the webGPT_comparisons set and the databricks-dolly-15k set. Unicode escapes were explicitly removed, and wikipedia citations in the "output" were stripped through regex to hopefully help any end-product model ignore these artifacts within their input context. This data is formatted for use in the alpaca instruction format, however the instruction, input, and output columns are kept separate in the raw data to allow for other configurations. The data has been filtered so that every entry is less than our chosen truncation length of 1024 (LLaMA-style) tokens with the format: ``` """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Input: {inputt} ### Response: {output}""" ``` <h3>webGPT</h3> This set was filtered from the webGPT_comparisons data by taking any Q&A option that was positively or neutrally-rated by humans (e.g. "score" >= 0). This might not provide the ideal answer, but this dataset was assembled specifically for extractive Q&A with less regard for how humans feel about the results. This selection comprises 23826 of the total entries in the data. <h3>Dolly</h3> The dolly data was selected primarily to focus on closed-qa tasks. For this purpose, only entries in the "closed-qa", "information_extraction", "summarization", "classification", and "creative_writing" were used. While not all of these include a context, they were judged to help flesh out the training set. This selection comprises 5362 of the total entries in the data.
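The prompt template quoted above flattens badly in this card. As a sketch of filling it (line breaks restored per the usual Alpaca layout, which the card appears to follow; the `inputt` key is the card's own spelling, presumably chosen to avoid shadowing Python's built-in `input`; all sample values are invented):

```python
# Template text from the card above; the line breaks are our
# reconstruction of the standard Alpaca layout. The `inputt` key is the
# card's own spelling, likely to avoid shadowing the built-in `input`.
TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{inputt}\n\n"
    "### Response:\n{output}"
)

def format_example(instruction: str, inputt: str, output: str) -> str:
    """Render one training example in the Alpaca instruction format."""
    return TEMPLATE.format(instruction=instruction, inputt=inputt, output=output)

prompt = format_example(
    instruction="Answer the question using only the given context.",
    inputt="Context: Water boils at 100 degrees Celsius at sea level.",
    output="At sea level, water boils at 100 degrees Celsius.",
)
print(prompt)
```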
ccmusic-database/instrument_timbre_eval
2023-10-03T17:11:21.000Z
[ "task_categories:audio-classification", "size_categories:n<1K", "language:zh", "language:en", "license:mit", "music", "art", "region:us" ]
ccmusic-database
This database contains subjective timbre evaluation scores of 16 subjective timbre evaluation terms (such as bright, dark, raspy) on 37 Chinese national and 24 Non-Chinese instruments, given by 14 participants in a subjective evaluation experiment. Furthermore, 10 reports on spectrum analysis of 10 instruments are also included.
@dataset{zhaorui_liu_2021_5676893, author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li}, title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}}, month = nov, year = 2021, publisher = {Zenodo}, version = {1.1}, doi = {10.5281/zenodo.5676893}, url = {https://doi.org/10.5281/zenodo.5676893} }
null
1
3
--- license: mit task_categories: - audio-classification language: - zh - en tags: - music - art pretty_name: Musical Instruments Timbre Evaluation Database size_categories: - n<1K --- # Dataset Card for Chinese Musical Instruments Timbre Evaluation Database ## Dataset Description - **Homepage:** <https://ccmusic-database.github.io> - **Repository:** <https://huggingface.co/datasets/ccmusic-database/CMITE> - **Paper:** <https://doi.org/10.5281/zenodo.5676893> - **Leaderboard:** <https://ccmusic-database.github.io/team.html> - **Point of Contact:** N/A ### Dataset Summary This database contains subjective timbre evaluation scores of 16 subjective timbre evaluation terms (such as bright, dark, raspy) on 37 Chinese national and 24 non-Chinese instruments, given by 14 participants in a subjective evaluation experiment. ### Supported Tasks and Leaderboards Musical Instruments Timbre Evaluation ### Languages Chinese, English ## Dataset Structure ### Data Instances .zip(.wav), .csv ### Data Fields Traditional instruments ### Data Splits Chinese, Non-Chinese ## Dataset Creation ### Curation Rationale Lack of a dataset for musical instruments timbre evaluation ### Source Data #### Initial Data Collection and Normalization Zhaorui Liu, Monan Zhou #### Who are the source language producers? Students from CCMUSIC ### Annotations #### Annotation process Subjective timbre evaluation scores of 16 subjective timbre evaluation terms (such as bright, dark, raspy) on 37 Chinese national and 24 Non-Chinese instruments given by 14 participants in a subjective evaluation experiment #### Who are the annotators? Students from CCMUSIC ### Personal and Sensitive Information None ## Considerations for Using the Data ### Social Impact of Dataset Promoting the development of AI in the music industry ### Discussion of Biases Only for traditional instruments ### Other Known Limitations Limited data ## Additional Information ### Dataset Curators Zijin Li ### Evaluation [Jiang, Wei et al. 
“Analysis and Modeling of Timbre Perception Features of Chinese Musical Instruments.” 2019 IEEE/ACIS 18th International Conference on Computer and Information Science (ICIS) (2019): 191-195.](https://ieeexplore.ieee.org/document/8940168) [Yiliang, J. et al. (2020) ‘Analysis of Chinese Musical Instrument Timbre Based on Objective Features’, Journal of Fudan University(Natural Science), pp. 346-353+359. doi:10.15943/j.cnki.fdxb-jns.2020.03.014.](https://kns.cnki.net/kcms/detail/detail.aspx?dbcode=CJFD&dbname=CJFDLAST2020&filename=FDXB202003014&uniplatform=NZKPT&v=85qLeLUyrDt%25mmd2Btak%25mmd2BN90N7vYZSv%25mmd2BVc1EfPmaYcvpvrgY1XkL215gYG4J%25mmd2FD09viR0w) ### Licensing Information ``` MIT License Copyright (c) CCMUSIC Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
``` ### Citation Information ``` @dataset{zhaorui_liu_2021_5676893, author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li}, title = {CCMUSIC DATABASE: Music Data Sharing Platform for Computational Musicology Research}, month = {nov}, year = {2021}, publisher = {Zenodo}, version = {1.1}, doi = {10.5281/zenodo.5676893}, url = {https://doi.org/10.5281/zenodo.5676893} } ``` ### Contributions Provide a dataset for musical instruments timbre evaluation
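The per-participant scores described above can be averaged per (instrument, term) pair in a few lines. A minimal sketch; the record layout below is hypothetical and the actual CSV column names may differ:

```python
from collections import defaultdict

# Hypothetical rows: (instrument, term, participant_id, score)
rows = [
    ("erhu", "bright", 1, 6.0),
    ("erhu", "bright", 2, 7.0),
    ("erhu", "raspy", 1, 3.0),
    ("erhu", "raspy", 2, 5.0),
]

# Collect every participant's score for each (instrument, term) pair
totals = defaultdict(list)
for instrument, term, _participant, score in rows:
    totals[(instrument, term)].append(score)

# Average over participants (14 in the real data)
mean_scores = {key: sum(v) / len(v) for key, v in totals.items()}
print(mean_scores[("erhu", "bright")])  # 6.5
```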
d0rj/conv_ai_3_ru
2023-05-28T11:49:49.000Z
[ "task_categories:conversational", "task_categories:text-classification", "task_ids:text-scoring", "annotations_creators:crowdsourced", "language_creators:translated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:conv_ai_3", "language:ru", "license:unknown", "eval...
d0rj
null
null
null
0
3
--- annotations_creators: - crowdsourced language_creators: - translated language: - ru license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - conv_ai_3 task_categories: - conversational - text-classification task_ids: - text-scoring paperswithcode_id: null pretty_name: conv_ai_3 (ru) tags: - evaluating-dialogue-systems dataset_info: features: - name: topic_id dtype: int32 - name: initial_request dtype: string - name: topic_desc dtype: string - name: clarification_need dtype: int32 - name: facet_id dtype: string - name: facet_desc dtype: string - name: question_id dtype: string - name: question dtype: string - name: answer dtype: string config_name: conv_ai_3 splits: - name: train num_examples: 9176 - name: validation num_examples: 2313 --- # Dataset Card for d0rj/conv_ai_3_ru ## Dataset Description - **Homepage:** https://github.com/aliannejadi/ClariQ - **Repository:** https://github.com/aliannejadi/ClariQ - **Paper:** https://arxiv.org/abs/2009.11352 ### Dataset Summary This is a translated version of the [conv_ai_3](https://huggingface.co/datasets/conv_ai_3) dataset into Russian. ### Languages Russian (translated from English). ## Dataset Structure ### Data Fields - `topic_id`: the ID of the topic (`initial_request`). - `initial_request`: the query (text) that initiates the conversation. - `topic_desc`: a full description of the topic as it appears in the TREC Web Track data. - `clarification_need`: a label from 1 to 4, indicating how much a topic needs to be clarified. If an `initial_request` is self-contained and would not need any clarification, the label would be 1, while if an `initial_request` is absolutely ambiguous, making it impossible for a search engine to guess the user's right intent before clarification, the label would be 4. - `facet_id`: the ID of the facet. - `facet_desc`: a full description of the facet (information need) as it appears in the TREC Web Track data.
- `question_id`: the ID of the question. - `question`: a clarifying question that the system can pose to the user for the current topic and facet. - `answer`: an answer to the clarifying question, assuming that the user is in the context of the current row (i.e., the user's initial query is `initial_request`, their information need is `facet_desc`, and `question` has been posed to the user). ### Citation Information ``` @misc{aliannejadi2020convai3, title={ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ)}, author={Mohammad Aliannejadi and Julia Kiseleva and Aleksandr Chuklin and Jeff Dalton and Mikhail Burtsev}, year={2020}, eprint={2009.11352}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
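Given the field definitions above, one common preprocessing step is selecting only the topics whose initial request is ambiguous enough to warrant a clarifying question. A minimal sketch over hand-made records mirroring the schema (not real data):

```python
# Hand-made records shaped like the dataset's fields
records = [
    {"topic_id": 1, "initial_request": "query one", "clarification_need": 1},
    {"topic_id": 2, "initial_request": "query two", "clarification_need": 4},
    {"topic_id": 3, "initial_request": "query three", "clarification_need": 3},
]

# Labels 3-4 indicate the request is ambiguous and needs clarification
ambiguous = [r["topic_id"] for r in records if r["clarification_need"] >= 3]
print(ambiguous)  # [2, 3]
```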
jondurbin/airoboros-gpt4
2023-06-22T15:00:49.000Z
[ "license:cc-by-nc-4.0", "region:us" ]
jondurbin
null
null
null
13
3
--- license: cc-by-nc-4.0 --- The data was generated by GPT-4 and is therefore subject to the OpenAI Terms of Service. The tool used to generate the data, [airoboros](https://github.com/jondurbin/airoboros), is Apache-2.0 licensed. Specific areas of focus for this training data: * trivia * math * nonsensical math * coding * closed context question answering * closed context question answering, with multiple contexts to choose from as confounding factors * writing * multiple choice ### Usage and License Notices All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because: - the base model is LLaMA, which has its own special research license - the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI So, to reiterate: this model (and datasets) cannot be used commercially.
AtlasUnified/Code-Instruct-Sets
2023-05-28T19:37:36.000Z
[ "license:mit", "region:us" ]
AtlasUnified
null
null
null
5
3
--- license: mit ---
rvashurin/wikidata_rubq
2023-05-29T10:36:41.000Z
[ "region:us" ]
rvashurin
HuggingFace wrapper for https://github.com/vladislavneon/RuBQ dataset
null
null
0
3
# wikidata-rubq-hf Hugging Face dataset wrapper for the Wikidata-RuBQ 2.0 dataset ### Usage WIP
tasksource/HYPO-L
2023-05-31T11:34:17.000Z
[ "task_categories:text-classification", "language:en", "license:mit", "hyperbola", "exageration", "region:us" ]
tasksource
null
null
null
0
3
--- license: mit task_categories: - text-classification language: - en tags: - hyperbola - exageration --- https://github.com/yunx-z/MOVER ``` @inproceedings{zhang-wan-2022-mover, title = "{MOVER}: Mask, Over-generate and Rank for Hyperbole Generation", author = "Zhang, Yunxiang and Wan, Xiaojun", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.440", doi = "10.18653/v1/2022.naacl-main.440", pages = "6018--6030", abstract = "Despite being a common figure of speech, hyperbole is under-researched in Figurative Language Processing. In this paper, we tackle the challenging task of hyperbole generation to transfer a literal sentence into its hyperbolic paraphrase. To address the lack of available hyperbolic sentences, we construct HYPO-XL, the first large-scale English hyperbole corpus containing 17,862 hyperbolic sentences in a non-trivial way. Based on our corpus, we propose an unsupervised method for hyperbole generation that does not require parallel literal-hyperbole pairs. During training, we fine-tune BART to infill masked hyperbolic spans of sentences from HYPO-XL. During inference, we mask part of an input literal sentence and over-generate multiple possible hyperbolic versions. Then a BERT-based ranker selects the best candidate by hyperbolicity and paraphrase quality. Automatic and human evaluation results show that our model is effective at generating hyperbolic paraphrase sentences and outperforms several baseline systems.", } ```
simoneteglia/europarl_for_language_detection_10k
2023-06-02T16:11:23.000Z
[ "region:us" ]
simoneteglia
null
null
null
0
3
Entry not found
valurank/Emotion_headline
2023-08-27T21:27:49.000Z
[ "task_categories:text-classification", "task_ids:multi-label-classification", "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
valurank
null
null
null
0
3
--- license: - other language: - en multilinguality: - monolingual task_categories: - text-classification task_ids: - multi-label-classification --- # Dataset Card for Emotion_headline ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Source Data](#source-data) ## Dataset Description 29,000 news headlines classified into 13 different labels, namely: "Playful", "Infuriating", "Sentimental", "Cynical", "Depressing", "Awe-inspiring", "Patriotic", "Begrudging", "Educational", "Hopeful", "Sarcastic", "Disrespectful", "Disparaging" ## Languages The text in the dataset is in English ## Dataset Structure The dataset consists of 14 columns: Headline, plus 13 columns representing the labels listed above. The Headline column contains the news headlines, and each label column indicates whether or not the headline belongs to that label ## Source Data The dataset is collected from the database of otherweb.com
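Since each headline's labels are stored as 13 separate binary columns, converting a row to a list of active label names is a one-liner. A minimal sketch, assuming the label columns hold 0/1 values:

```python
LABELS = [
    "Playful", "Infuriating", "Sentimental", "Cynical", "Depressing",
    "Awe-inspiring", "Patriotic", "Begrudging", "Educational", "Hopeful",
    "Sarcastic", "Disrespectful", "Disparaging",
]

def active_labels(row: dict) -> list:
    """Return the names of the label columns set to 1 for one headline row."""
    return [name for name in LABELS if row.get(name) == 1]

# Hypothetical row; the real headlines come from otherweb.com
row = {"Headline": "Local team stuns rivals", "Playful": 1, "Hopeful": 1, "Cynical": 0}
print(active_labels(row))  # ['Playful', 'Hopeful']
```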
GIZ/sector_data
2023-05-31T16:03:36.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "climate", "policy", "region:us" ]
GIZ
null
null
null
0
3
--- license: apache-2.0 task_categories: - text-classification language: - en size_categories: - 10K<n<100K tags: - climate - policy --- This dataset is curated by the [GIZ Data Service Center](https://www.giz.de/expertise/html/63018.html) for **multi-label sector classification** of a given text. The source data comes from [Climatewatchdata](https://www.climatewatchdata.org/data-explorer/historical-emissions?historical-emissions-data-sources=climate-watch&historical-emissions-gases=all-ghg&historical-emissions-regions=All%20Selected&historical-emissions-sectors=total-including-lucf%2Ctotal-including-lucf&page=1) and Tracs (GIZ). Specifications - Dataset size: ~10k - Average text length: 50 words - Language: English Sectors included: <pre><b>Agriculture, Buildings, Coastal Zone, Disaster Risk Management (DRM), Economy-wide, Energy, Environment, Health, Industries, LULUCF/Forestry, Social Development, Transport, Urban, Waste, Water</b> </pre> Due to imbalanced sector representation (True category), additional columns are added to group the sectors by how frequently they occur. - set0: [Agriculture, Energy, LULUCF/Forestry, Water, Environment] `count > 2000` - set1: [Social Development, Transport, Urban, Economy-wide, Disaster Risk Management (DRM)] `2000 > count > 1000` - set2: [Coastal Zone, Buildings, Health, Waste, Industries] `count < 1000`
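For multi-label training, the sector names typically have to be turned into a fixed-length multi-hot vector. A minimal sketch over the 15 sectors listed above (the encoding itself is an assumption, not part of the released data):

```python
SECTORS = [
    "Agriculture", "Buildings", "Coastal Zone", "Disaster Risk Management (DRM)",
    "Economy-wide", "Energy", "Environment", "Health", "Industries",
    "LULUCF/Forestry", "Social Development", "Transport", "Urban", "Waste", "Water",
]
INDEX = {name: i for i, name in enumerate(SECTORS)}

def multi_hot(labels):
    """Encode a list of sector names as a 15-dimensional 0/1 vector."""
    vec = [0] * len(SECTORS)
    for name in labels:
        vec[INDEX[name]] = 1
    return vec

vec = multi_hot(["Energy", "Water"])
print(sum(vec), vec[INDEX["Energy"]])  # 2 1
```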
rcds/swiss_leading_decision_summarization
2023-07-20T07:38:30.000Z
[ "task_categories:summarization", "annotations_creators:machine-generated", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:de", "language:fr", "language:it", "license:cc-by-sa-4.0", "arxiv:2306.09237", "r...
rcds
This dataset contains court decisions for the swiss ruling summarization task.
@InProceedings{huggingface:dataset, title = {A great new dataset}, author={huggingface, Inc. }, year={2020} }
null
3
3
--- license: cc-by-sa-4.0 annotations_creators: - machine-generated language: - de - fr - it language_creators: - expert-generated multilinguality: - multilingual pretty_name: Leading Decision Summarization size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization --- # Dataset Card for Leading Decision Summarization ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains the text and summary for Swiss leading decisions. ### Supported Tasks and Leaderboards ### Languages Switzerland has four official languages; three of them (German, French, and Italian) are represented in this dataset. The decisions are written by the judges and clerks in the language of the proceedings.
| Language | Subset | Number of Documents| |------------|------------|--------------------| | German | **de** | 12K | | French | **fr** | 5K | | Italian | **it** | 835 | ## Dataset Structure ### Data Fields - decision_id: unique identifier for the decision - header: a short header for the decision - regeste: the summary of the leading decision - text: the main text of the leading decision - law_area: area of law of the decision - law_sub_area: sub-area of law of the decision - language: language of the decision - year: year of the decision - court: court of the decision - chamber: chamber of the decision - canton: canton of the decision - region: region of the decision ### Data Instances [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The original data are published by the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML. #### Who are the source language producers? The decisions are written by the judges and clerks in the language of the proceedings. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html.
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf) © Swiss Federal Supreme Court, 2002-2022 The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf ### Citation Information Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237) ``` @misc{rasiah2023scale, title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation}, author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus}, year={2023}, eprint={2306.09237}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [Joel Niklaus](https://niklaus.ai) for adding this dataset.
BlueSunflower/chess_games_base
2023-05-31T15:47:38.000Z
[ "region:us" ]
BlueSunflower
null
null
null
1
3
--- license: apache-2.0 --- # Dataset of chess games, made for the purpose of training language models on them Two files: data_stockfish_262k.tar.gz - 262,000 games generated by Stockfish self-play lichess.tar.gz - a sample of 3.5M games from Lichess with unfinished games filtered out, all converted to one format Each archive contains two files: train.jsonl test.jsonl
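Each archive unpacks to JSON Lines files, so the games can be streamed one record per line. A minimal sketch; the `moves` key below is a hypothetical field name, since the card does not document the per-record schema:

```python
import json
from io import StringIO

# Stand-in for open("train.jsonl"); the "moves" key is an assumption
fake_file = StringIO(
    '{"moves": "1. e4 e5 2. Nf3"}\n'
    '{"moves": "1. d4 d5 2. c4"}\n'
)

# Parse one JSON object per non-empty line
games = [json.loads(line) for line in fake_file if line.strip()]
print(len(games))  # 2
```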
tomaarsen/conllpp
2023-06-01T11:38:05.000Z
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|conll2003", "language:en", "license:unknown", "region:us" ]
tomaarsen
CoNLLpp is a corrected version of the CoNLL2003 NER dataset where labels of 5.38% of the sentences in the test set have been manually corrected. The training set and development set are included for completeness. For more details see https://www.aclweb.org/anthology/D19-1519/ and https://github.com/ZihanWangKi/CrossWeigh
@inproceedings{wang2019crossweigh, title={CrossWeigh: Training Named Entity Tagger from Imperfect Annotations}, author={Wang, Zihan and Shang, Jingbo and Liu, Liyuan and Lu, Lihao and Liu, Jiacheng and Han, Jiawei}, booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)}, pages={5157--5166}, year={2019} }
null
0
3
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - extended|conll2003 task_categories: - token-classification task_ids: - named-entity-recognition paperswithcode_id: conll pretty_name: CoNLL++ train-eval-index: - config: conllpp task: token-classification task_id: entity_extraction splits: train_split: train eval_split: test col_mapping: tokens: tokens ner_tags: tags metrics: - type: seqeval name: seqeval dataset_info: features: - name: id dtype: string - name: tokens sequence: string - name: pos_tags sequence: class_label: names: 0: '"' 1: '''''' 2: '#' 3: $ 4: ( 5: ) 6: ',' 7: . 8: ':' 9: '``' 10: CC 11: CD 12: DT 13: EX 14: FW 15: IN 16: JJ 17: JJR 18: JJS 19: LS 20: MD 21: NN 22: NNP 23: NNPS 24: NNS 25: NN|SYM 26: PDT 27: POS 28: PRP 29: PRP$ 30: RB 31: RBR 32: RBS 33: RP 34: SYM 35: TO 36: UH 37: VB 38: VBD 39: VBG 40: VBN 41: VBP 42: VBZ 43: WDT 44: WP 45: WP$ 46: WRB - name: chunk_tags sequence: class_label: names: 0: O 1: B-ADJP 2: I-ADJP 3: B-ADVP 4: I-ADVP 5: B-CONJP 6: I-CONJP 7: B-INTJ 8: I-INTJ 9: B-LST 10: I-LST 11: B-NP 12: I-NP 13: B-PP 14: I-PP 15: B-PRT 16: I-PRT 17: B-SBAR 18: I-SBAR 19: B-UCP 20: I-UCP 21: B-VP 22: I-VP - name: ner_tags sequence: class_label: names: 0: O 1: B-PER 2: I-PER 3: B-ORG 4: I-ORG 5: B-LOC 6: I-LOC 7: B-MISC 8: I-MISC config_name: conllpp splits: - name: train num_bytes: 6931393 num_examples: 14041 - name: validation num_bytes: 1739247 num_examples: 3250 - name: test num_bytes: 1582078 num_examples: 3453 download_size: 4859600 dataset_size: 10252718 --- # Dataset Card for "conllpp" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) 
- [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/ZihanWangKi/CrossWeigh) - **Repository:** [Github](https://github.com/ZihanWangKi/CrossWeigh) - **Paper:** [Aclweb](https://www.aclweb.org/anthology/D19-1519) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary CoNLLpp is a corrected version of the CoNLL2003 NER dataset where labels of 5.38% of the sentences in the test set have been manually corrected. The training set and development set from CoNLL2003 are included for completeness. One correction on the test set, for example, is: ``` { "tokens": ["SOCCER", "-", "JAPAN", "GET", "LUCKY", "WIN", ",", "CHINA", "IN", "SURPRISE", "DEFEAT", "."], "original_ner_tags_in_conll2003": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-PER", "O", "O", "O", "O"], "corrected_ner_tags_in_conllpp": ["O", "O", "B-LOC", "O", "O", "O", "O", "B-LOC", "O", "O", "O", "O"], } ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances #### conllpp - **Size of downloaded dataset files:** 4.85 MB - **Size of the generated dataset:** 10.26 MB - **Total amount of disk used:** 15.11 MB An example of 'train' looks as follows.
``` This example was too long and was cropped: { "id": "0", "document_id": 1, "sentence_id": 3, "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."] "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7], "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0], } ``` ### Data Fields The data fields are the same among all splits. #### conllpp - `id`: a `string` feature. - `document_id`: an `int32` feature tracking which document the sample is from. - `sentence_id`: an `int32` feature tracking which sentence in this document the sample is from. - `tokens`: a `list` of `string` features. - `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(` (4). - `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4). - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4). ### Data Splits | name |train|validation|test| |---------|----:|---------:|---:| |conll2003|14041| 3250|3453| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @inproceedings{wang2019crossweigh, title={CrossWeigh: Training Named Entity Tagger from Imperfect Annotations}, author={Wang, Zihan and Shang, Jingbo and Liu, Liyuan and Lu, Lihao and Liu, Jiacheng and Han, Jiawei}, booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)}, pages={5157--5166}, year={2019} } ``` ### Contributions Thanks to [@ZihanWangKi](https://github.com/ZihanWangKi) for adding this dataset.
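The integer `ner_tags` can be decoded back to label strings with the class list given in the Data Fields section. Applied to the corrected test-set sample shown earlier in the card:

```python
# Label names in index order, as listed in the dataset_info for ner_tags
NER_NAMES = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC"]

tokens = ["SOCCER", "-", "JAPAN", "GET", "LUCKY", "WIN", ",",
          "CHINA", "IN", "SURPRISE", "DEFEAT", "."]
ner_tags = [0, 0, 5, 0, 0, 0, 0, 5, 0, 0, 0, 0]  # corrected CoNLL++ labels (B-LOC = 5)

decoded = [NER_NAMES[t] for t in ner_tags]
print([tok for tok, tag in zip(tokens, decoded) if tag != "O"])  # ['JAPAN', 'CHINA']
```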
Dhika/defect_rail
2023-06-02T05:39:14.000Z
[ "license:unknown", "region:us" ]
Dhika
null
null
null
0
3
--- license: unknown ---
tasksource/PLANE-ood
2023-06-02T11:40:29.000Z
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:en", "license:cc-by-2.0", "region:us" ]
tasksource
null
null
null
0
3
--- license: cc-by-2.0 task_categories: - text-classification language: - en size_categories: - 100K<n<1M dataset_info: features: - name: seq dtype: string - name: label dtype: string - name: Adj_Class dtype: string - name: Adj dtype: string - name: Nn dtype: string - name: Hy dtype: string splits: - name: train num_bytes: 26047744 num_examples: 300132 - name: test num_bytes: 874524 num_examples: 10080 download_size: 4721262 dataset_size: 26922268 --- Preprocessed from https://huggingface.co/datasets/lorenzoscottb/PLANE-ood/ ```python
import pandas as pd
from datasets import Dataset, DatasetDict

df = pd.read_json('https://huggingface.co/datasets/lorenzoscottb/PLANE-ood/resolve/main/PLANE_trntst-OoV_inftype-all.json')
f = lambda df: pd.DataFrame(list(zip(*[df[c] for c in df.index])), columns=df.index)
ds = DatasetDict()
for split in ['train', 'test']:
    dfs = pd.concat([f(df[c]) for c in df.columns if split in c.lower()]).reset_index(drop=True)
    dfs['label'] = dfs['label'].map(lambda x: {1: 'entailment', 0: 'not-entailment'}[x])
    ds[split] = Dataset.from_pandas(dfs, preserve_index=False)
ds.push_to_hub('tasksource/PLANE-ood')
``` # PLANE Out-of-Distribution Sets PLANE (phrase-level adjective-noun entailment) is a benchmark to test models on fine-grained compositional inference. The current dataset contains five sampled splits, used in the supervised experiments of [Bertolini et al., 22](https://aclanthology.org/2022.coling-1.359/). ### Features Each entry has 6 features: `seq, label, Adj_Class, Adj, Nn, Hy` - `seq`: the test sequence - `label`: the ground truth (1: entailment, 0: not-entailment) - `Adj_Class`: the class of the sequence's adjective (I: intersective, S: subsective, O: intensional) - `Adj`: the adjective of the sequence - `Nn`: the noun - `Hy`: the noun's hypernym Each sample in `seq` can take one of three forms (or inference types, in the paper): - An *Adjective-Noun* is a *Noun* (e.g. A red car is a car) - An *Adjective-Noun* is a *Hypernym(Noun)* (e.g. A red car is a vehicle) - An *Adjective-Noun* is an *Adjective-Hypernym(Noun)* (e.g.
A red car is a red vehicle) Please note that, as specified in the paper, the ground truth is automatically assigned based on the linguistic rule that governs the interaction between each adjective class and inference type – see the paper for more detail. ### Cite If you use PLANE for your work, please cite the main COLING 2022 paper. ``` @inproceedings{bertolini-etal-2022-testing, title = "Testing Large Language Models on Compositionality and Inference with Phrase-Level Adjective-Noun Entailment", author = "Bertolini, Lorenzo and Weeds, Julie and Weir, David", booktitle = "Proceedings of the 29th International Conference on Computational Linguistics", month = oct, year = "2022", address = "Gyeongju, Republic of Korea", publisher = "International Committee on Computational Linguistics", url = "https://aclanthology.org/2022.coling-1.359", pages = "4084--4100", } ```
liyucheng/arxiv-march-2023
2023-06-02T17:59:35.000Z
[ "region:us" ]
liyucheng
null
null
null
0
3
--- dataset_info: features: - name: entry_id dtype: string - name: published dtype: string - name: title dtype: string - name: authors sequence: string - name: primary_category dtype: string - name: categories sequence: string - name: text dtype: string splits: - name: train num_bytes: 20816482 num_examples: 500 download_size: 10224538 dataset_size: 20816482 --- # Dataset Card for "arxiv-march-2023" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Kamtera/ParsiGoo
2023-06-11T09:21:29.000Z
[ "task_categories:text-to-speech", "task_categories:other", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:fa", "license:cc0-1.0", "region:us" ]
Kamtera
null
null
null
1
3
--- license: - cc0-1.0 description: A Persian multispeaker dataset for text-to-speech purposes. homepage: https://example.com/parsigoo keywords: - text-to-speech - Persian - multispeaker language: fa multilinguality: monolingual name: parsi_goo pretty_name: ParsiGoo size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-to-speech - other task_ids: [] --- # ParsiGoo Dataset Card This is a Persian multispeaker dataset for text-to-speech purposes. The dataset includes the following speakers: - ariana_Male2 - moujeze_Female1 - ariana_Male1 - ariana_Female1 ## Technical details #### Non-speech parts at the beginning and the end have been trimmed #### Sample rate: 22050 Hz #### Durations: ``` |> ariana_Male2 0:46:36.908685 |> edge_Dilara 0:54:31.448820 |> moujeze_Female1 0:29:24.339590 |> ariana_Male1 0:55:41.996847 |> ariana_Female1 0:53:38.396217 |> edge_Farid 0:53:11.961018 ``` ## Dataset Information - **Name:** ParsiGoo - **Description:** A Persian multispeaker dataset for text-to-speech purposes. - **Homepage:** https://github.com/karim23657/ParsGoo - **License:** CC BY-SA 4.0 ## Speaker info - ariana_Male2 - moujeze_Female1 - ariana_Male1 - ariana_Female1
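The per-speaker durations above use an `H:MM:SS.ffffff` format, so totals per subset can be computed with a small parser. A minimal sketch over two of the listed speakers:

```python
def to_seconds(stamp: str) -> float:
    """Convert an 'H:MM:SS.ffffff' duration string to seconds."""
    hours, minutes, seconds = stamp.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

# Durations copied from the table above
durations = {
    "ariana_Male2": "0:46:36.908685",
    "moujeze_Female1": "0:29:24.339590",
}

total = sum(to_seconds(v) for v in durations.values())
print(round(total / 60, 1))  # → 76.0 minutes
```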
DeviL1337/face_recognition_52
2023-06-04T11:02:51.000Z
[ "region:us" ]
DeviL1337
null
null
null
0
3
Entry not found
kaist-ai/Flan-Collection_subset
2023-06-05T09:08:58.000Z
[ "task_categories:text-generation", "task_categories:text-classification", "size_categories:1M<n<10M", "language:en", "license:cc-by-4.0", "arxiv:2305.14045", "region:us" ]
kaist-ai
null
null
null
2
3
--- license: cc-by-4.0 task_categories: - text-generation - text-classification language: - en size_categories: - 1M<n<10M --- # Dataset Card for Flan-Collection_subset ## Dataset Description - **Homepage:** https://github.com/kaistAI/CoT-Collection - **Repository:** https://github.com/kaistAI/CoT-Collection - **Paper:** https://arxiv.org/abs/2305.14045 - **Point of Contact:** sejune@lklab.io ### Dataset Summary This dataset is a subset of the Flan Collection, released as part of the CoT Collection project (see the repository and paper linked above). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits | name | train | |-------------------|------:| |CoT-Collection|1837928| ## Additional Information ### Citation Information ``` @article{kim2023cot, title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning}, author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon}, journal={arXiv preprint arXiv:2305.14045}, year={2023} } ```
bogdancazan/wikilarge-text-simplification
2023-06-06T17:49:49.000Z
[ "region:us" ]
bogdancazan
null
null
null
0
3
Entry not found
asoria/urls
2023-06-06T20:12:55.000Z
[ "task_categories:image-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "size_categories:1K<n<10K", "source_datasets:original", "license:apache-2.0", "region:us" ]
asoria
null
null
null
0
3
--- annotations_creators: - crowdsourced language_creators: - crowdsourced language: [] license: - apache-2.0 multilinguality: [] size_categories: - 1K<n<10K source_datasets: - original task_categories: - image-classification task_ids: [] pretty_name: Wildfire image classification dataset collected using images from web searches. --- # Dataset Card for OpenFire ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://pyronear.org/pyro-vision/datasets.html#openfire - **Repository:** https://github.com/pyronear/pyro-vision - **Point of Contact:** Pyronear <https://pyronear.org/en/> ### Dataset Summary OpenFire is an image classification dataset for wildfire detection, collected from web searches. ### Supported Tasks and Leaderboards - `image-classification`: The dataset can be used to train a model for Image Classification. 
### Languages English ## Dataset Structure ### Data Instances A data point comprises an image URL and its binary label. ``` { 'image_url': 'https://cdn-s-www.ledauphine.com/images/13C08274-6BA6-4577-B3A0-1E6C1B2A573C/FB1200/photo-1338240831.jpg', 'is_wildfire': true, } ``` ### Data Fields - `image_url`: the download URL of the image. - `is_wildfire`: a boolean value specifying whether there is an ongoing wildfire in the image. ### Data Splits The data is split into training and validation sets. The training set contains 7143 images and the validation set 792 images. ## Dataset Creation ### Curation Rationale The curators state that current wildfire classification datasets typically contain close-up shots of wildfires, with limited variation in weather conditions, luminosity and backgrounds, making it difficult to assess real-world performance. They argue that these dataset limitations have partially contributed to the failure of some algorithms to cope with sun flares, foggy/cloudy weather conditions and small scales. ### Source Data #### Initial Data Collection and Normalization OpenFire was collected using images publicly indexed by the search engine DuckDuckGo using multiple relevant queries. The images were then manually cleaned to remove errors. ### Annotations #### Annotation process Each web search query was designed to yield a single label (with wildfire or without), and additional human verification was used to remove errors. #### Who are the annotators? François-Guillaume Fernandez ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators François-Guillaume Fernandez ### Licensing Information [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). 
### Citation Information ``` @software{Pyronear_PyroVision_2019, title={Pyrovision: wildfire early detection}, author={Pyronear contributors}, year={2019}, month={October}, publisher = {GitHub}, howpublished = {\url{https://github.com/pyronear/pyro-vision}} } ```
Salama1429/tarteel-ai-everyayah-Quran
2023-06-07T14:17:32.000Z
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "license:mit", "region:us" ]
Salama1429
null
null
null
1
3
--- pretty_name: Tarteel AI - EveryAyah Dataset dataset_info: features: - name: audio dtype: audio - name: duration dtype: float64 - name: text dtype: string - name: reciter dtype: string splits: - name: train num_bytes: 262627688145.3 num_examples: 187785 - name: test num_bytes: 25156009734.72 num_examples: 23473 - name: validation num_bytes: 23426886730.218 num_examples: 23474 download_size: 117190597305 dataset_size: 311210584610.23804 annotations_creators: - expert-generated language_creators: - crowdsourced language: - ar license: - mit multilinguality: - monolingual paperswithcode_id: tarteel-everyayah size_categories: - 100K<n<1M source_datasets: - original task_categories: - automatic-speech-recognition task_ids: [] train-eval-index: - config: clean task: automatic-speech-recognition task_id: speech_recognition splits: train_split: train eval_split: test validation_split: validation col_mapping: audio: audio text: text reciter: text metrics: - type: wer name: WER - type: cer name: CER --- ﷽ # Dataset Card for Tarteel AI's EveryAyah Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) 
- [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Tarteel AI](https://www.tarteel.ai/) - **Repository:** [Needs More Information] - **Point of Contact:** [Mohamed Saad Ibn Seddik](mailto:ms.ibnseddik@tarteel.ai) ### Dataset Summary This dataset is a collection of Quranic verses and their transcriptions, with diacritization, by different reciters. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages The audio is in Arabic. ## Dataset Structure ### Data Instances A typical data point comprises the audio file `audio` and its transcription `text`. The `duration` is given in seconds, and `reciter` names the reciter. An example from the dataset is: ``` { 'audio': { 'path': None, 'array': array([ 0. , 0. , 0. , ..., -0.00057983, -0.00085449, -0.00061035]), 'sampling_rate': 16000 }, 'duration': 6.478375, 'text': 'بِسْمِ اللَّهِ الرَّحْمَنِ الرَّحِيمِ', 'reciter': 'abdulsamad' } ``` ### Length - Training: 2,985,111.26 seconds (≈ 49,751.85 minutes ≈ 829.20 hours) - Validation: 372,720.43 seconds (≈ 6,212.01 minutes ≈ 103.53 hours) - Test: 375,509.97 seconds (≈ 6,258.50 minutes ≈ 104.31 hours) ### Data Fields - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. 
- text: The transcription of the audio file. - duration: The duration of the audio file. - reciter: The reciter of the verses. ### Data Splits | | Train | Test | Validation | | ----- | ----- | ---- | ---------- | | dataset | 187785 | 23473 | 23474 | ### reciters - reciters_count: 36 - reciters: {'abdul_basit', 'abdullah_basfar', 'abdullah_matroud', 'abdulsamad', 'abdurrahmaan_as-sudais', 'abu_bakr_ash-shaatree', 'ahmed_ibn_ali_al_ajamy', 'ahmed_neana', 'akram_alalaqimy', 'alafasy', 'ali_hajjaj_alsuesy', 'aziz_alili', 'fares_abbad', 'ghamadi', 'hani_rifai', 'husary', 'karim_mansoori', 'khaalid_abdullaah_al-qahtaanee', 'khalefa_al_tunaiji', 'maher_al_muaiqly', 'mahmoud_ali_al_banna', 'menshawi', 'minshawi', 'mohammad_al_tablaway', 'muhammad_abdulkareem', 'muhammad_ayyoub', 'muhammad_jibreel', 'muhsin_al_qasim', 'mustafa_ismail', 'nasser_alqatami', 'parhizgar', 'sahl_yassin', 'salaah_abdulrahman_bukhatir', 'saood_ash-shuraym', 'yaser_salamah', 'yasser_ad-dussary'} ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators ### Licensing Information [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` ``` ### Contributions This dataset was created by:
Dufferent/OKD-CL
2023-06-07T14:18:07.000Z
[ "task_categories:image-classification", "size_categories:10K<n<100K", "language:zh", "language:en", "license:gpl-2.0", "region:us" ]
Dufferent
null
null
null
0
3
--- license: gpl-2.0 task_categories: - image-classification language: - zh - en size_categories: - 10K<n<100K --- # Dataset Card for OKD-CL ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
nicholasKluge/toxic-aira-dataset
2023-06-28T16:15:24.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:pt", "language:en", "license:apache-2.0", "toxicity", "harm", "region:us" ]
nicholasKluge
null
null
null
0
3
--- license: apache-2.0 task_categories: - text-classification language: - pt - en tags: - toxicity - harm pretty_name: Toxic-Aira Dataset size_categories: - 10K<n<100K dataset_info: features: - name: non_toxic_response dtype: string - name: toxic_response dtype: string splits: - name: portuguese num_bytes: 5544823 num_examples: 16730 - name: english num_bytes: 19577715 num_examples: 41843 download_size: 14914871 dataset_size: 25122538 --- # Dataset (`Toxic-Aira Dataset`) ### Overview This dataset contains a collection of harmful/toxic and harmless/non-toxic conversations and messages. All demonstrations are separated into two classes (`non_toxic_response` and `toxic_response`). This dataset was created from the Anthropic [helpful-harmless-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset, the AllenAI [prosocial-dialog](https://huggingface.co/datasets/allenai/prosocial-dialog) dataset, the [real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts) dataset (also from AllenAI), and the [Toxic Comment Classification](https://github.com/tianqwang/Toxic-Comment-Classification-Challenge) dataset. The Portuguese version has translated copies from the above-mentioned datasets ([helpful-harmless-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), [prosocial-dialog](https://huggingface.co/datasets/allenai/prosocial-dialog), [real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts)), plus the [hatecheck-portuguese](https://huggingface.co/datasets/Paul/hatecheck-portuguese) and [told-br](https://huggingface.co/datasets/told-br) datasets. ### Dataset Details - **Dataset Name:** Toxic-Aira Dataset - **Language:** Portuguese and English - **Total Size:** Over 58,000 demonstrations (Portuguese 16,730, English 41,843) - **Sources:** - Anthropic [helpful-harmless-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset. 
- AllenAI [prosocial-dialog](https://huggingface.co/datasets/allenai/prosocial-dialog) dataset. - AllenAI [real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts) dataset. - The [Toxic Comment Classification](https://github.com/tianqwang/Toxic-Comment-Classification-Challenge) dataset. - The [hatecheck-portuguese](https://huggingface.co/datasets/Paul/hatecheck-portuguese) dataset. - The [told-br](https://huggingface.co/datasets/told-br) dataset. ### Contents ⚠️ THE EXAMPLES IN THIS DATASET CONTAIN TOXIC/OFFENSIVE LANGUAGE ⚠️ The dataset consists of data frames with the following columns: - **non_toxic_response:** Text evaluated as non-toxic. - **toxic_response:** Text evaluated as toxic. ```python { "non_toxic_response": "I prefer to see it as the study of change.", "toxic_response": "So to my secret santa in the words of Jessie Pinkman, THANKS BITCH", } ``` All demonstrations are shorter than 350 tokens (measured using the `BERT` tokenizer). ### Use Cases The `Toxic-Aira Dataset` can be utilized to train models to detect harmful/toxic text. ## How to use Available splits are `portuguese` and `english`. ```python from datasets import load_dataset dataset = load_dataset("nicholasKluge/toxic-aira-dataset") ``` ### Dataset License The `Toxic-Aira Dataset` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details. ### Disclaimer This dataset is provided as is, without any warranty or guarantee of its accuracy or suitability for any purpose. The creators and contributors of this dataset are not liable for any damages or losses arising from its use. Please review and comply with the licenses and terms of the original datasets before use.
coding-assistant-custom/mini-code-corpus
2023-06-08T02:05:04.000Z
[ "region:us" ]
coding-assistant-custom
null
null
null
1
3
--- dataset_info: features: - name: reponame dtype: string - name: filepath dtype: string - name: content dtype: string splits: - name: train num_bytes: 475641 num_examples: 139 download_size: 151005 dataset_size: 475641 --- # Dataset Card for "mini-code-corpus" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cmu-lti/cobracorpus
2023-06-26T17:20:21.000Z
[ "task_categories:text-generation", "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:openrail", "arxiv:2306.01985", "arxiv:2203.09509", "region:us" ]
cmu-lti
null
null
null
0
3
--- license: openrail task_categories: - text-generation - text-classification language: - en pretty_name: COBRA🐍 size_categories: - 10K<n<100K --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage: https://cobra.xuhuiz.com/** - **Paper: https://arxiv.org/abs/2306.01985** ### Dataset Summary This dataset contains COBRACORPUS and COBRACORPUS-counterfactual, introduced in this [paper](https://arxiv.org/abs/2306.01985). ### Data Splits * `advContexts_explanations.csv` is `COBRACorpus-CF` * `toxigen_explanations.csv` is the full `COBRACorpus` * `toxigen_explanations_train.csv` is the training split of `COBRACorpus` * `toxigen_explanations_val.csv` is the validation split of `COBRACorpus` ### Data Entries For `COBRACorpus`, the relevant entries in the `csv` files are *`situationalContext (string)`, `speakerIdentity (string)`, `listenerIdentity (string)`, `statement (string)`, `intent (string)`, `targetGroup (string)`, `relevantPowerDynamics (string)`, `implication (string)`, `targetGroupEmotionalReaction (string)`, `targetGroupCognitiveReaction (string)`, `offensiveness (string)`*. Please refer to the [paper](https://arxiv.org/abs/2306.01985) for specific explanations of these entries. The *`examples`* entry is the few-shot prompt that we used to generate explanations. All other entries are from the [ToxiGen](https://arxiv.org/abs/2203.09509) dataset; they are not directly relevant to this work, but we leave them there as metadata in case they are useful for future work. ### Citation Information If you find this dataset useful, please cite: ``` @inproceedings{zhou2023cobra, title = {COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements}, author = {Zhou, Xuhui and Zhu, Hao and Yerukola, Akhila and Davidson, Thomas and D. Hwang, Jena and Swayamdipta, Swabha and Sap, Maarten}, year = {2023}, booktitle = {Findings of ACL} } ```
Multimodal-Fatima/VQAv2_minival_validation
2023-06-09T02:25:25.000Z
[ "region:us" ]
Multimodal-Fatima
null
null
null
0
3
--- dataset_info: features: - name: question_type dtype: string - name: multiple_choice_answer dtype: string - name: answers sequence: string - name: answers_original list: - name: answer dtype: string - name: answer_confidence dtype: string - name: answer_id dtype: int64 - name: id_image dtype: int64 - name: answer_type dtype: string - name: question_id dtype: int64 - name: question dtype: string - name: image dtype: image - name: clip_tags_ViT_L_14 sequence: string - name: blip_caption dtype: string - name: LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14 sequence: string - name: DETA_detections_deta_swin_large_o365_coco_classes list: - name: attribute dtype: string - name: box sequence: float32 - name: label dtype: string - name: location dtype: string - name: ratio dtype: float32 - name: size dtype: string - name: tag dtype: string - name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14 list: - name: attribute dtype: string - name: box sequence: float64 - name: label dtype: string - name: location dtype: string - name: ratio dtype: float64 - name: size dtype: string - name: tag dtype: string - name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption list: - name: attribute dtype: string - name: box sequence: float64 - name: caption dtype: string - name: label dtype: string - name: location dtype: string - name: ratio dtype: float64 - name: size dtype: string - name: tag dtype: string - name: id dtype: int64 - name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module list: - name: attribute dtype: string - name: box sequence: float64 - name: caption dtype: string - name: captions_module sequence: string - name: label dtype: string - name: location dtype: string - name: ratio dtype: float64 - name: size dtype: string - name: tag dtype: string - name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_without_filtering list: - name: attribute dtype: string - name: box sequence: 
float64 - name: caption dtype: string - name: captions_module sequence: string - name: label dtype: string - name: location dtype: string - name: ratio dtype: float64 - name: size dtype: string - name: tag dtype: string - name: DETA_detections_deta_swin_large_o365_clip_ViT_L_14_blip_caption_caption_module_random list: - name: attribute dtype: string - name: box sequence: float64 - name: caption dtype: string - name: captions_module sequence: string - name: captions_module_filter sequence: string - name: label dtype: string - name: location dtype: string - name: ratio dtype: float64 - name: size dtype: string - name: tag dtype: string - name: clip_tags_LAION_ViT_H_14_2B sequence: string - name: LLM_Description_gpt3_downstream_tasks_visual_genome_LAION-ViT-H-14-2B sequence: string - name: Attributes_ViT_L_14_descriptors_text_davinci_003_full sequence: string - name: clip_tags_ViT_L_14_wo_openai sequence: string - name: clip_tags_ViT_L_14_with_openai sequence: string - name: clip_tags_LAION_ViT_H_14_2B_wo_openai sequence: string - name: clip_tags_LAION_ViT_H_14_2B_with_openai sequence: string - name: clip_tags_LAION_ViT_bigG_14_2B_wo_openai sequence: string - name: clip_tags_LAION_ViT_bigG_14_2B_with_openai sequence: string - name: Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full sequence: string - name: Attributes_LAION_ViT_bigG_14_2B_descriptors_text_davinci_003_full sequence: string - name: clip_tags_ViT_B_16_with_openai sequence: string - name: blip_caption_beam_5_Salesforce_blip2_flan_t5_xxl dtype: string - name: DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_ list: - name: attribute dtype: string - name: box sequence: float64 - name: captions_all_patches sequence: string - name: label dtype: string - name: location dtype: string - name: ratio dtype: float64 - name: size dtype: string - name: tag dtype: string - name: 
DETA_detections_deta_swin_large_o365_coco_classes_caption_all_patches_Salesforce_blip_image_captioning_large_clean list: - name: attribute dtype: string - name: box sequence: float64 - name: captions_all_patches sequence: string - name: label dtype: string - name: location dtype: string - name: ratio dtype: float64 - name: size dtype: string - name: tag dtype: string - name: blip_caption_Salesforce_blip_image_captioning_large_intensive sequence: string - name: blip_caption_Salesforce_blip_image_captioning_base_intensive sequence: string splits: - name: validation num_bytes: 10757838822.0 num_examples: 25994 download_size: 2788131849 dataset_size: 10757838822.0 --- # Dataset Card for "VQAv2_minival_validation_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tasksource/scone
2023-06-08T08:58:32.000Z
[ "task_categories:text-classification", "task_ids:natural-language-inference", "license:cc0-1.0", "arxiv:2305.19426", "region:us" ]
tasksource
null
null
null
0
3
--- license: cc0-1.0 task_ids: - natural-language-inference task_categories: - text-classification dataset_info: features: - name: sentence1_edited dtype: string - name: sentence2_edited dtype: string - name: gold_label_edited dtype: string splits: - name: train num_bytes: 694572 num_examples: 5010 - name: test num_bytes: 149006 num_examples: 1000 download_size: 114079 dataset_size: 843578 --- https://github.com/selenashe/ScoNe NLI subset, original part (excluding one-scope) ``` @misc{she2023scone, title={ScoNe: Benchmarking Negation Reasoning in Language Models With Fine-Tuning and In-Context Learning}, author={Jingyuan Selena She and Christopher Potts and Samuel R. Bowman and Atticus Geiger}, year={2023}, eprint={2305.19426}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
tum-nlp/sexism-socialmedia-balanced
2023-06-08T11:56:54.000Z
[ "license:cc-by-sa-4.0", "region:us" ]
tum-nlp
null
null
null
1
3
--- license: cc-by-sa-4.0 ---
davanstrien/on_the_books_example
2023-06-08T13:41:08.000Z
[ "task_categories:text-classification", "language:en", "license:cc-by-3.0", "lam", "legal", "region:us" ]
davanstrien
null
null
null
0
3
--- license: cc-by-3.0 task_categories: - text-classification language: - en tags: - lam - legal pretty_name: On the Books training data --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
AlekseyKorshuk/gpt4-llm-cleaned-chatml
2023-07-24T20:21:19.000Z
[ "region:us" ]
AlekseyKorshuk
null
null
null
1
3
--- dataset_info: features: - name: conversation list: - name: content dtype: string - name: do_train dtype: bool - name: role dtype: string splits: - name: train num_bytes: 39157218 num_examples: 54568 download_size: 21310829 dataset_size: 39157218 --- # Dataset Card for "gpt4-llm-cleaned-chatml" Data preprocessing pipeline: https://github.com/AlekseyKorshuk/chat-data-pipeline
rlacombe/ICCS
2023-06-11T16:54:10.000Z
[ "task_categories:zero-shot-classification", "task_categories:text-classification", "task_categories:feature-extraction", "size_categories:1K<n<10K", "language:en", "license:mit", "climate", "region:us" ]
rlacombe
null
null
null
1
3
--- license: mit task_categories: - zero-shot-classification - text-classification - feature-extraction language: - en tags: - climate pretty_name: ICCS (IPCC Confidence in Climate Statements) size_categories: - 1K<n<10K --- # IPCC Confidence in Climate Statements _What do LLMs know about climate? Let's find out!_ ## ICCS Dataset We introduce the **ICCS dataset (IPCC Confidence in Climate Statements)**, a novel, curated, expert-labeled, natural-language dataset of 8094 statements extracted or paraphrased from the IPCC Assessment Report 6: [Working Group I report](https://www.ipcc.ch/report/ar6/wg1/), [Working Group II report](https://www.ipcc.ch/report/ar6/wg2/), and [Working Group III report](https://www.ipcc.ch/report/ar6/wg3/), respectively. Each statement is labeled with its IPCC report source, the page number in the report PDF, and its confidence level (`low`, `medium`, `high`, or `very high`) as assessed by IPCC climate scientists based on available evidence and agreement among their peers. ## Confidence Labels The authors of the United Nations International Panel on Climate Change (IPCC) reports have developed a structured framework to communicate the confidence and uncertainty levels of statements regarding our knowledge of climate change ([Mastrandrea, 2010](https://link.springer.com/article/10.1007/s10584-011-0178-6)). Our dataset leverages this distinctive and consistent approach to labelling uncertainty across topics, disciplines, and report chapters, to help NLP and climate communication researchers evaluate how well LLMs can assess human expert confidence in a set of climate science statements from the IPCC reports. 
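Because the four retained confidence labels form an ordered scale, a model's over- or under-statement of confidence can be measured as a signed distance between ordinal ranks. A minimal sketch (the label ordering comes from this card; the function name and sign convention are illustrative assumptions, not part of the dataset or its evaluation code):

```python
# The four confidence levels present in ICCS, from least to most certain.
CONFIDENCE_ORDER = ["low", "medium", "high", "very high"]

def confidence_error(predicted: str, gold: str) -> int:
    """Signed ordinal distance between predicted and gold labels.

    Positive values mean the model overstates confidence relative to
    the IPCC authors; negative values mean it understates it.
    """
    return CONFIDENCE_ORDER.index(predicted) - CONFIDENCE_ORDER.index(gold)

print(confidence_error("high", "medium"))    # 1: one level overconfident
print(confidence_error("low", "very high"))  # -3: maximally underconfident
```

Averaging this signed error over a test split would expose the systematic overconfidence on `low` and `medium` labels discussed below, which plain accuracy hides.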
![IPCC](https://github.com/rlacombe/Climate-LLMs/blob/main/ipcc-scales.png?raw=true) Source: [IPCC AR6 Working Group I report](https://www.ipcc.ch/report/ar6/wg1/) ## Dataset Construction To construct the dataset, we retrieved the complete raw text from each of the three IPCC report PDFs that are available online using the open-source library [pypdf2](https://pypi.org/project/PyPDF2/). We then normalized the whitespace, tokenized the text into sentences using [NLTK](https://www.nltk.org/), and used regex search to filter for complete sentences including a parenthetical confidence label at the end of the statement, of the form _sentence (low|medium|high|very high confidence)_. The final ICCS dataset contains 8094 labeled sentences. From the full 8094 labeled sentences, we further selected **300 statements to form a smaller and more tractable test dataset**. We performed a random selection of sentences within each report and confidence category, with the following objectives: - Making the test set distribution representative of the confidence class distribution in the overall train set and within each report; - Making the breakdown between source reports representative of the number of statements from each report; - Making sure the test set contains at least 5 sentences from each class and from each source, to ensure our results are statistically robust. Then, we manually reviewed and cleaned each sentence in the test set to provide a fairer assessment of model capacity. 
- We removed 26 extraneous references to figures, call-outs, boxes, footnotes, or subscript typos (`CO 2`); - We split 19 compound statements with conflicting confidence sub-labels, and removed 6 extraneous mid-sentence labels of the same category as the end-of-sentence label; - We added light context to 23 sentences, and replaced 5 sentences with others when they were meaningless outside of a longer paragraph; - We removed qualifiers at the beginning of 29 sentences to avoid biasing classification (e.g. 'But...', 'In summary...', 'However...'). **The remaining 7794 sentences not allocated to the test split form our train split.** Of note: while the IPCC reports use a 5-level scale for confidence, almost no `very low confidence` statements make it through the peer review process to the final reports, such that no statement of the form _sentence (very low confidence)_ was retrievable. Therefore, we chose to build our dataset with only statements labeled as `low`, `medium`, `high` and `very high` confidence. ## Code Download The code to reproduce dataset collection and our LLM benchmarking experiments is [released on GitHub](https://github.com/rlacombe/Climate-LLMs). ## Paper We use this dataset to evaluate how recent LLMs fare at classifying the scientific confidence associated with each statement in a statistically representative, carefully constructed test split of the dataset. We show that `gpt3.5-turbo` and `gpt4` assess the correct confidence level with reasonable accuracy even in the zero-shot setting; but that, along with other language models we tested, they consistently overstate the certainty level associated with low and medium confidence labels. Models generally perform better on reports before their knowledge cutoff, and demonstrate intuitive classifications on a baseline of non-climate statements. 
However, we caution that it is still not fully clear why these models perform well, and whether they may be picking up on linguistic cues within the climate statements rather than drawing only on prior exposure to climate knowledge and/or the IPCC reports. Our results have implications for climate communications and the use of generative language models in knowledge retrieval systems. We hope the ICCS dataset provides the NLP and climate science communities with a valuable tool with which to evaluate and improve model performance in this critical domain of human knowledge. Pre-print upcoming.
tasksource/apt
2023-08-10T13:42:21.000Z
[ "task_categories:text-classification", "task_ids:semantic-similarity-classification", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "task_ids:multi-input-text-classification", "language:en", "license:unknown", "region:us" ]
tasksource
null
null
null
0
3
--- task_categories: - text-classification language: - en task_ids: - semantic-similarity-classification - semantic-similarity-scoring - text-scoring - multi-input-text-classification license: unknown --- https://github.com/Advancing-Machine-Human-Reasoning-Lab/apt ``` @inproceedings{nighojkar-licato-2021-improving, title = "Improving Paraphrase Detection with the Adversarial Paraphrasing Task", author = "Nighojkar, Animesh and Licato, John", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.552", doi = "10.18653/v1/2021.acl-long.552", pages = "7106--7116", } ```
vietgpt/OSCAR-2301
2023-06-13T05:01:02.000Z
[ "region:us" ]
vietgpt
null
null
null
0
3
--- dataset_info: features: - name: id dtype: string - name: text dtype: string - name: url dtype: string - name: date dtype: string - name: perplexity dtype: float64 splits: - name: train num_bytes: 27907176803.480194 num_examples: 2918898 download_size: 10901340719 dataset_size: 27907176803.480194 --- # Dataset Card for "OSCAR-2301" Num tokens: 4,478,799,252
tathagataraha/ficle
2023-07-18T11:00:53.000Z
[ "task_categories:token-classification", "task_categories:text-classification", "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "license:gpl-3.0", "span", "explanation", "arxiv:2306.08872", "region:us" ]
tathagataraha
null
null
null
2
3
--- dataset_info: features: - name: Claim dtype: string - name: Context dtype: string - name: Source dtype: string - name: Source Indices dtype: string - name: Relation dtype: string - name: Relation Indices dtype: string - name: Target dtype: string - name: Target Indices dtype: string - name: Inconsistent Claim Component dtype: string - name: Inconsistent Context-Span dtype: string - name: Inconsistent Context-Span Indices dtype: string - name: Inconsistency Type dtype: string - name: Fine-grained Inconsistent Entity-Type dtype: string - name: Coarse Inconsistent Entity-Type dtype: string splits: - name: train num_bytes: 2657091 num_examples: 6443 - name: validation num_bytes: 333142 num_examples: 806 - name: test num_bytes: 332484 num_examples: 806 download_size: 1784422 dataset_size: 3322717 task_categories: - token-classification - text-classification - text-generation language: - en pretty_name: FICLE size_categories: - 1K<n<10K license: gpl-3.0 tags: - span - explanation --- # FICLE Dataset The dataset can be loaded and utilized through the following: ```python from datasets import load_dataset ficle_data = load_dataset("tathagataraha/ficle") ``` # Dataset card for FICLE ## Dataset Description * **GitHub Repo:** https://github.com/blitzprecision/FICLE * **Paper:** * **Point of Contact:** ### Dataset Summary The FICLE dataset is a derivative of the FEVER dataset, which is a collection of 185,445 claims generated by modifying sentences obtained from Wikipedia. These claims were then verified without knowledge of the original sentences they were derived from. Each sample in the FEVER dataset consists of a claim sentence, a context sentence extracted from a Wikipedia URL as evidence, and a type label indicating whether the claim is supported, refuted, or lacks sufficient information. ### Languages The FICLE Dataset contains only English. 
## Dataset Structure ### Data Fields * `Claim (string)`: A statement or proposition relating to the consistency or inconsistency of certain facts or information. * `Context (string)`: The surrounding information or background against which the claim is being evaluated or compared. It provides additional details or evidence that can support or challenge the claim. * `Source (string)`: It is the linguistic chunk containing the entity lying to the left of the main verb/relating chunk. * `Source Indices (string)`: Source indices refer to the specific indices or positions within the source string that indicate the location of the relevant information. * `Relation (string)`: It is the linguistic chunk containing the verb/relation at the core of the identified inconsistency. * `Relation Indices (string)`: Relation indices indicate the specific indices or positions within the relation string that highlight the location of the relevant information. * `Target (string)`: It is the linguistic chunk containing the entity lying to the right of the main verb/relating chunk. * `Target Indices (string)`: Target indices represent the specific indices or positions within the target string that indicate the location of the relevant information. * `Inconsistent Claim Component (string)`: The inconsistent claim component refers to a specific linguistic chunk within the claim that is identified as inconsistent with the context. It helps identify which part of the claim triple is problematic in terms of its alignment with the surrounding information. * `Inconsistent Context-Span (string)`: A span or portion marked within the context sentence that is found to be inconsistent with the claim. It highlights a discrepancy or contradiction between the information in the claim and the corresponding context. * `Inconsistent Context-Span Indices (string)`: The specific indices or location within the context sentence that indicate the inconsistent span. 
* `Inconsistency Type (string)`: The category or type of inconsistency identified in the claim and context. * `Fine-grained Inconsistent Entity-Type (string)`: The specific detailed category or type of entity causing the inconsistency within the claim or context. It provides a more granular classification of the entity associated with the inconsistency. * `Coarse Inconsistent Entity-Type (string)`: The broader or general category or type of entity causing the inconsistency within the claim or context. It provides a higher-level classification of the entity associated with the inconsistency. ### Data Splits The FICLE dataset comprises a total of 8,055 samples in the English language, each representing different instances of inconsistencies. These inconsistencies are categorized into five types: Taxonomic Relations (4,842 samples), Negation (1,630 samples), Set Based (642 samples), Gradable (526 samples), and Simple (415 samples). Within the dataset, there are six possible components that contribute to the inconsistencies found in the claim sentences. These components are distributed as follows: Target-Head (3,960 samples), Target-Modifier (1,529 samples), Relation-Head (951 samples), Relation-Modifier (1,534 samples), Source-Head (45 samples), and Source-Modifier (36 samples). The dataset is split into `train`, `validation`, and `test`. * `train`: 6.44k rows * `validation`: 806 rows * `test`: 806 rows ## Dataset Creation ### Curation Rationale We propose a linguistically enriched dataset to help detect inconsistencies and explain them. To this end, the broad requirements are to locate where the inconsistency is present between a claim and a context and to have a classification scheme for better explainability. ### Data Collection and Preprocessing The FICLE dataset is derived from the FEVER dataset, using the following processing steps. 
FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia, which were subsequently verified without knowledge of the sentences they were derived from. Every sample in the FEVER dataset contains the claim sentence, evidence (or context) sentence from a Wikipedia URL, and a type label (‘supports’, ‘refutes’, or ‘not enough info’). Out of these, we leverage only the samples with the ‘refutes’ label to build our dataset. ### Annotations You can see the annotation guidelines [here](https://github.com/blitzprecision/FICLE/blob/main/ficle_annotation_guidelines.pdf). In order to provide detailed explanations for inconsistencies, extensive annotations were conducted for each sample in the FICLE dataset. The annotation process involved two iterations, with each iteration focusing on different aspects of the dataset. In the first iteration, the annotations were primarily "syntactic-oriented." These annotations included identifying the inconsistent claim fact triple, marking inconsistent context spans, and categorizing the six possible inconsistent claim components. The second iteration of annotations concentrated on "semantic-oriented" aspects. Annotators labeled semantic fields for each sample, such as the type of inconsistency, coarse inconsistent entity types, and fine-grained inconsistent entity types. This stage aimed to capture the semantic nuances and provide a deeper understanding of the inconsistencies present in the dataset. The annotation process was carried out by a group of four annotators, two of whom are also authors of the dataset. The annotators possess a strong command of the English language and hold Bachelor's degrees in Computer Science, specializing in computational linguistics. Their expertise in the field ensured accurate and reliable annotations. The annotators' ages range from 20 to 22 years, indicating their familiarity with contemporary language usage and computational linguistic concepts. 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Citation Information ``` @misc{raha2023neural, title={Neural models for Factual Inconsistency Classification with Explanations}, author={Tathagata Raha and Mukund Choudhary and Abhinav Menon and Harshit Gupta and KV Aditya Srivatsa and Manish Gupta and Vasudeva Varma}, year={2023}, eprint={2306.08872}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contact
hsali/librespeech_dev_clean
2023-06-11T20:01:34.000Z
[ "region:us" ]
hsali
null
null
null
0
3
--- dataset_info: features: - name: input_values sequence: float32 - name: labels dtype: int64 splits: - name: train num_bytes: 641600432 num_examples: 2703 download_size: 566257946 dataset_size: 641600432 --- # Dataset Card for "librespeech_dev_clean" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ChristophSchuhmann/LAION-Aesthetics-HQ-captions-4.5plus
2023-06-12T07:11:53.000Z
[ "license:apache-2.0", "region:us" ]
ChristophSchuhmann
null
null
null
3
3
--- license: apache-2.0 --- This is a subset of LAION-Aesthetics 4.5+ with 71.67M samples, all of which have HQ captions. This subset could be useful for tuning text-to-image or image-captioning models. The texts were filtered to have more than 50 characters and a KenLM score of <=600, with this model: https://huggingface.co/siddhesh1793/kenlm/tree/main/the_pile_books3 (trained on books3) 71,669,883 samples
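As a rough sketch of the described filter: the real pipeline scores text with the linked books3-trained KenLM model, while here the scorer is passed in as a function (the `dummy_score` stand-in and its captions are invented for demonstration), so only the length/score thresholds are shown:

```python
def filter_captions(captions, score_fn, min_len=50, max_score=600):
    # Keep captions longer than `min_len` characters whose KenLM-style
    # score is at most `max_score`, per the filtering described above.
    return [c for c in captions if len(c) > min_len and score_fn(c) <= max_score]

# Stand-in scorer for demonstration; a real run would wrap the KenLM model.
def dummy_score(text):
    return 500 if "photograph" in text else 900

kept = filter_captions(
    [
        "A detailed photograph of a mountain lake at sunrise, with mist drifting over the water.",
        "img_0231.jpg thumbnail",
        "A photograph.",
    ],
    dummy_score,
)
```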
marianna13/zlib-books-1k-500k
2023-06-12T12:24:08.000Z
[ "region:us" ]
marianna13
null
null
null
0
3
Entry not found
tti-bias/professions-v2
2023-08-21T14:11:59.000Z
[ "language:en", "license:cc-by-sa-4.0", "arxiv:2301.13188", "region:us" ]
tti-bias
null
null
null
0
3
--- dataset_info: features: - name: profession dtype: string - name: model dtype: string - name: 'no' dtype: int32 - name: seed dtype: int32 - name: image dtype: image splits: - name: train num_bytes: 8338679567.447 num_examples: 253719 download_size: 8917804015 dataset_size: 8338679567.447 license: cc-by-sa-4.0 language: - en --- # Dataset Card for professions-v2 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases and Limitations](#discussion-of-biases-and-limitations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://huggingface.co/spaces/tti-bias/stable-bias - **Repository:** Forthcoming - **Paper:** Forthcoming - **Leaderboard:** - **Point of Contact:** Forthcoming ### Dataset Summary <span style="color:blue" font-size:smaller><b>🏗️ WORK IN PROGRESS</b></span> <span style="color:red" font-size:smaller><b>⚠️ DISCLAIMER: The images in this dataset were generated by text-to-image systems and may depict offensive stereotypes or contain explicit content.</b></span> The *Professions* dataset is a collection of computer-generated images generated using Text-to-Image (TTI) systems. 
In order to generate a diverse set of prompts to evaluate the system outputs’ variation across dimensions of interest, we use the pattern **`Photo portrait of a [X]`**, where X spans the professional attribute that we focus our analysis on, i.e. the name of the profession. For the professions, we rely on a list of 146 occupations taken from the U.S. Bureau of Labor Statistics, and they are: ```python ['accountant', 'aerospace_engineer', 'aide', 'air_conditioning_installer', 'architect', 'artist', 'author', 'baker', 'bartender', 'bus_driver', 'butcher', 'career_counselor', 'carpenter', 'carpet_installer', 'cashier', 'ceo', 'childcare_worker', 'civil_engineer', 'claims_appraiser', 'cleaner', 'clergy', 'clerk', 'coach', 'community_manager', 'compliance_officer', 'computer_programmer', 'computer_support_specialist', 'computer_systems_analyst', 'construction_worker', 'cook', 'correctional_officer', 'courier', 'credit_counselor', 'customer_service_representative', 'data_entry_keyer', 'dental_assistant', 'dental_hygienist', 'dentist', 'designer', 'detective', 'director', 'dishwasher', 'dispatcher', 'doctor', 'drywall_installer', 'electrical_engineer', 'electrician', 'engineer', 'event_planner', 'executive_assistant', 'facilities_manager', 'farmer', 'fast_food_worker', 'file_clerk', 'financial_advisor', 'financial_analyst', 'financial_manager', 'firefighter', 'fitness_instructor', 'graphic_designer', 'groundskeeper', 'hairdresser', 'head_cook', 'health_technician', 'host', 'hostess', 'industrial_engineer', 'insurance_agent', 'interior_designer', 'interviewer', 'inventory_clerk', 'it_specialist', 'jailer', 'janitor', 'laboratory_technician', 'language_pathologist', 'lawyer', 'librarian', 'logistician', 'machinery_mechanic', 'machinist', 'maid', 'manager', 'manicurist', 'market_research_analyst', 'marketing_manager', 'massage_therapist', 'mechanic', 'mechanical_engineer', 'medical_records_specialist', 'mental_health_counselor', 'metal_worker', 'mover', 'musician', 
'network_administrator', 'nurse', 'nursing_assistant', 'nutritionist', 'occupational_therapist', 'office_clerk', 'office_worker', 'painter', 'paralegal', 'payroll_clerk', 'pharmacist', 'pharmacy_technician', 'photographer', 'physical_therapist', 'pilot', 'plane_mechanic', 'plumber', 'police_officer', 'postal_worker', 'printing_press_operator', 'producer', 'psychologist', 'public_relations_specialist', 'purchasing_agent', 'radiologic_technician', 'real_estate_broker', 'receptionist', 'repair_worker', 'roofer', 'sales_manager', 'salesperson', 'school_bus_driver', 'scientist', 'security_guard', 'sheet_metal_worker', 'singer', 'social_assistant', 'social_worker', 'software_developer', 'stocker', 'stubborn', 'supervisor', 'taxi_driver', 'teacher', 'teaching_assistant', 'teller', 'therapist', 'tractor_operator', 'truck_driver', 'tutor', 'underwriter', 'veterinarian', 'waiter', 'waitress', 'welder', 'wholesale_buyer', 'writer'] ``` Every prompt is used to generate images from the following models: ```python ['22h-vintedois-diffusion-v0-1', 'CompVis-stable-diffusion-v1-4', 'Lykon-DreamShaper', 'SG161222-Realistic_Vision_V1.4', 'andite-anything-v4.0', 'andite-pastel-mix', 'dreamlike-art-dreamlike-photoreal-2.0', 'hakurei-waifu-diffusion', 'plasmo-vox2', 'prompthero-openjourney', 'prompthero-openjourney-v4', 'runwayml-stable-diffusion-v1-5', 'stabilityai-stable-diffusion-2', 'stabilityai-stable-diffusion-2-1-base', 'wavymulder-Analog-Diffusion'] ``` ### Supported Tasks This dataset can be used to evaluate the output space of TTI systems, particularly against the backdrop of societal representativeness. ### Languages The prompts that generated the images are all in US-English. 
## Dataset Structure The dataset is stored in `parquet` format and contains 253,719 rows, which can be loaded like so: ```python from datasets import load_dataset dataset = load_dataset("tti-bias/professions-v2", split="train") ``` ### Data Fields Each row corresponds to the output of a TTI system and looks as follows: ```python { 'profession': 'dentist', 'model': 'dreamlike-art-dreamlike-photoreal-2.0', 'no': 1, 'seed': 838979725, 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=512x512> } ``` ### Data Splits All the data is contained within the `train` split; there are no separate validation or test splits. ## Dataset Creation ### Curation Rationale This dataset was created to explore the output characteristics of TTI systems from the vantage point of societal characteristics of interest. ### Source Data #### Initial Data Collection and Normalization The data was generated using the `DiffusionPipeline` from Hugging Face: ```python from diffusers import DiffusionPipeline import torch pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) images = pipeline(prompt="Photo portrait of a bus driver at work", num_images_per_prompt=9).images ``` ### Personal and Sensitive Information Generative models trained on large datasets have been shown to memorize part of their training sets (see e.g. [(Carlini et al. 2023)](https://arxiv.org/abs/2301.13188)) and the people generated could theoretically bear resemblance to real people. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases and Limitations At this point in time, the data is limited to images generated using English prompts and a set of professions sourced from the U.S. Bureau of Labor Statistics (BLS), which also provides us with additional information such as the demographic characteristics and salaries of each profession. 
While this data can also be leveraged in interesting analyses, it is currently limited to the North American context. ## Additional Information ### Licensing Information The dataset is licensed under the Creative Commons [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license. ### Citation Information If you use this dataset in your own work, please consider citing: ```bibtex @article{stable-bias-authors-2023, author = {Anonymous Authors}, title = {Stable Bias: Analyzing Societal Representations in Diffusion Models}, year = {2023}, } ```
tasksource/Boardgame-QA
2023-06-14T07:38:39.000Z
[ "license:cc-by-4.0", "arxiv:2306.07934", "region:us" ]
tasksource
null
null
null
1
3
--- license: cc-by-4.0 dataset_info: features: - name: proof dtype: string - name: example dtype: string - name: label dtype: string - name: rules dtype: string - name: preferences dtype: string - name: theory dtype: string - name: goal dtype: string - name: facts dtype: string - name: config dtype: string splits: - name: test num_bytes: 54209160 num_examples: 15000 - name: train num_bytes: 55055604 num_examples: 15000 - name: valid num_bytes: 27317650 num_examples: 7500 download_size: 34032485 dataset_size: 136582414 --- https://arxiv.org/pdf/2306.07934.pdf
yyu/amazon-attrprompt
2023-09-13T20:58:10.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "product", "review", "topic_classification", "arxiv:2306.15895", "region:us" ]
yyu
null
null
null
0
3
--- license: apache-2.0 task_categories: - text-classification language: - en tags: - product - review - topic_classification size_categories: - 10K<n<100K --- This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). Check the paper: https://arxiv.org/abs/2306.15895 for details. - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`: The original test set. - `simprompt.jsonl`: The training data generated by the simple prompt. - `attrprompt.jsonl`: The training data generated by the attributed prompt. Please check our original paper for details. Moreover, we provide the generated dataset using LLM as follows: - `regen.jsonl`: The training data generated by [ReGen](https://github.com/yueyu1030/ReGen). - `regen_llm_augmented.jsonl`: The training data generated by ReGen, with the subtopics generated by the LLM. - `progen.jsonl`: The training data generated by [ProGen](https://github.com/hkunlp/progen). Please cite the original paper if you use this dataset for your study. Thanks! ``` @inproceedings{blitzer2007biographies, title={Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification}, author={Blitzer, John and Dredze, Mark and Pereira, Fernando}, booktitle={Proceedings of the 45th annual meeting of the association of computational linguistics}, pages={440--447}, year={2007} } @article{yu2023large, title={Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias}, author={Yu, Yue and Zhuang, Yuchen and Zhang, Jieyu and Meng, Yu and Ratner, Alexander and Krishna, Ranjay and Shen, Jiaming and Zhang, Chao}, journal={arXiv preprint arXiv:2306.15895}, year={2023} } ```
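For convenience, a hedged sketch of reading the JSON-lines files listed above. Only one-JSON-object-per-line framing is assumed; the `text`/`label` field names in the sample records are illustrative, not guaranteed by the dataset:

```python
import json

def parse_jsonl(text):
    # One JSON object per line; blank lines are skipped.
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Illustrative records only -- actual field names may differ.
sample = (
    '{"text": "Great blender, very quiet.", "label": 3}\n'
    '{"text": "Battery died after a week.", "label": 7}\n'
)
records = parse_jsonl(sample)
```

The same helper applies to any of the `*.jsonl` splits once read from disk (e.g. `parse_jsonl(open("train.jsonl", encoding="utf-8").read())`).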
yyu/reddit-attrprompt
2023-09-13T20:56:10.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "social_media", "arxiv:2306.15895", "arxiv:2104.07081", "region:us" ]
yyu
null
null
null
0
3
--- license: apache-2.0 task_categories: - text-classification language: - en tags: - social_media size_categories: - 10K<n<100K --- This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). Check out the paper https://arxiv.org/abs/2306.15895 for details. - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`: The original test set. - `simprompt.jsonl`: The training data generated by the simple prompt. - `attrprompt.jsonl`: The training data generated by the attributed prompt. Please cite the original paper if you use this dataset for your study. Thanks! ``` @article{geigle:2021:arxiv, author = {Gregor Geigle and Nils Reimers and Andreas R{\"u}ckl{\'e} and Iryna Gurevych}, title = {TWEAC: Transformer with Extendable QA Agent Classifiers}, journal = {arXiv preprint}, volume = {abs/2104.07081}, year = {2021}, url = {http://arxiv.org/abs/2104.07081}, archivePrefix = {arXiv}, eprint = {2104.07081} } @article{yu2023large, title={Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias}, author={Yu, Yue and Zhuang, Yuchen and Zhang, Jieyu and Meng, Yu and Ratner, Alexander and Krishna, Ranjay and Shen, Jiaming and Zhang, Chao}, journal={arXiv preprint arXiv:2306.15895}, year={2023} } ```
yyu/stackexchange-attrprompt
2023-09-13T20:56:38.000Z
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:apache-2.0", "social_media", "stackexchange", "arxiv:2306.15895", "region:us" ]
yyu
null
null
null
0
3
--- license: apache-2.0 task_categories: - text-classification language: - en tags: - social_media - stackexchange size_categories: - 10K<n<100K --- This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). Check out the paper https://arxiv.org/abs/2306.15895 for details. - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`: The original test set. - `simprompt.jsonl`: The training data generated by the simple prompt. - `attrprompt.jsonl`: The training data generated by the attributed prompt.
yyu/yelp-attrprompt
2023-08-22T08:26:22.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "sentiment", "restaurant_review", "region:us" ]
yyu
null
null
null
0
3
--- license: apache-2.0 task_categories: - text-classification language: - en tags: - sentiment - restaurant_review size_categories: - 1K<n<10K version: - v1 --- This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`: The original test set. - `simprompt.jsonl`: The training data generated by the simple prompt. - `attrprompt.jsonl`: The training data generated by the attributed prompt.
yyu/SST-2-attrprompt
2023-08-22T08:26:47.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "sentiment", "movie_review", "region:us" ]
yyu
null
null
null
0
3
--- license: apache-2.0 task_categories: - text-classification language: - en tags: - sentiment - movie_review size_categories: - 1K<n<10K --- This is the data used in the paper [Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias](https://github.com/yueyu1030/AttrPrompt). - `label.txt`: the label name for each class - `train.jsonl`: The original training set. - `valid.jsonl`: The original validation set. - `test.jsonl`: The original test set. - `simprompt.jsonl`: The training data generated by the simple prompt. - `attrprompt.jsonl`: The training data generated by the attributed prompt.
sdmattpotter/pandassdcctest
2023-06-15T17:04:24.000Z
[ "task_categories:text-classification", "size_categories:100K<n<1M", "language:en", "license:mit", "politics", "local government", "region:us" ]
sdmattpotter
null
null
null
0
3
--- dataset_info: features: - name: ITEMNO. dtype: string - name: O dtype: string - name: '00000' dtype: float64 - name: Motion/Second dtype: string - name: VOTE dtype: string - name: Recorder dtype: string - name: link dtype: string - name: action dtype: string - name: descript dtype: string - name: kind dtype: string - name: DateTimeDate dtype: timestamp[ns] - name: embeds sequence: float64 - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 8933567 num_examples: 704 download_size: 6645047 dataset_size: 8933567 license: mit task_categories: - text-classification language: - en tags: - politics - local government pretty_name: sdcc size_categories: - 100K<n<1M --- # Dataset Card for "pandassdcctest" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AnaChikashua/handwriting
2023-06-15T21:30:44.000Z
[ "task_categories:image-classification", "language:ka", "region:us" ]
AnaChikashua
null
null
null
0
3
--- task_categories: - image-classification language: - ka --- # Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
imageomics/KABR
2023-08-21T20:50:15.000Z
[ "task_categories:video-classification", "size_categories:1M<n<10M", "license:cc0-1.0", "zebra", "giraffe", "plains zebra", "Grevy's zebra", "video", "animal behavior", "behavior recognition", "annotation", "annotated video", "conservation", "drone", "UAV", "imbalanced", "Kenya", "M...
imageomics
null
null
null
3
3
--- license: cc0-1.0 task_categories: - video-classification tags: - zebra - giraffe - plains zebra - Grevy's zebra - video - animal behavior - behavior recognition - annotation - annotated video - conservation - drone - UAV - imbalanced - Kenya - Mpala Research Centre pretty_name: >- KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition from Drone Videos size_categories: - 1M<n<10M --- # Dataset Card for KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition from Drone Videos ## Dataset Description - **Homepage:** https://dirtmaxim.github.io/kabr/ - **Repository:** https://github.com/dirtmaxim/kabr-tools - **Paper:** [Coming Soon] - **Leaderboard:** - **Point of Contact:** ### Dataset Summary We present a novel high-quality dataset for animal behavior recognition from drone videos. The dataset is focused on Kenyan wildlife and contains behaviors of giraffes, plains zebras, and Grevy's zebras. The dataset consists of more than 10 hours of annotated videos, and it includes eight different classes, encompassing seven types of animal behavior and an additional category for occluded instances. In the annotation process for this dataset, a team of 10 people was involved, with an expert zoologist overseeing the process. Each behavior was labeled based on its distinctive features, using a standardized set of criteria to ensure consistency and accuracy across the annotations. The dataset was collected using drones that flew over the animals in the [Mpala Research Centre](https://mpala.org/) in Kenya, providing high-quality video footage of the animal's natural behaviors. The drone footage is captured at a resolution of 5472 x 3078 pixels, and the videos were recorded at a frame rate of 29.97 frames per second. <!--This dataset card aims to be a base template for new datasets. 
It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).--> ### Supported Tasks and Leaderboards The results of our evaluation using I3D, SlowFast, and X3D architectures are given in the table below. For each one, the model was trained for 120 epochs with a batch size of 5. For more information on these results, see our paper (coming soon). | Method | All | Giraffes | Plains Zebras | Grevy’s Zebras | | ---- | ---- | ---- | ---- | ---- | | I3D (16x5) | 53.41 | 61.82 | 58.75 | 46.73 | | SlowFast (16x5, 4x5) | 52.92 | 61.15 | 60.60 | 47.42 | | X3D (16x5) | 61.9 | 65.1 | 63.11 | 51.16 | ### Languages English ## Dataset Structure Under `KABR/dataset/image/`, the data has been archived into `.zip` files, which are split into 2GB files. These must be recombined and extracted. After cloning and navigating into the repository, you can use the following commands to do the reconstruction: ```bash cd KABR/dataset/image/ cat giraffes_part_* > giraffes.zip md5sum giraffes.zip # Compare this to what's shown with `cat giraffes_md5.txt` unzip giraffes.zip rm -rf giraffes_part_* # Similarly for `zebras_grevys_part_*` and `zebras_plains_part_*` ``` Alternatively, there is a download script, `download.py`, which downloads the entire dataset in its established format without requiring one to clone the repository (cloning requires _at least_ double the dataset's size in storage). To proceed with this approach, download `download.py` to the system where you want to access the data. 
Then, in the same directory as the script, run the following to begin the download: ``` pip install requests python download.py ``` This script then downloads all the files present in the repository (without making a clone of the `.git` directory, etc.), concatenates the part files to their ZIP archives, verifies the MD5 checksums, extracts, and cleans up so that the folder structure, as described below, is present. Note that it will require approximately 116GB of free space to complete this process, though the final dataset will only take about 61GB of disk space (the script removes the extra files after checking the download was successful). The KABR dataset follows the Charades format: ``` KABR /dataset /image /video_1 /image_1.jpg /image_2.jpg ... /image_n.jpg /video_2 /image_1.jpg /image_2.jpg ... /image_n.jpg ... /video_n /image_1.jpg /image_2.jpg /image_3.jpg ... /image_n.jpg /annotation /classes.json /train.csv /val.csv ``` The dataset can be directly loaded and processed by the [SlowFast](https://github.com/facebookresearch/SlowFast) framework. **Informational Files** * `KABR/configs`: examples of SlowFast framework configs. * `KABR/annotation/distribution.xlsx`: distribution of classes for all videos. **Scripts:** * `image2video.py`: Encode image sequences into the original video. * For example, `[image/G0067.1, image/G0067.2, ..., image/G0067.24]` will be encoded into `video/G0067.mp4`. * `image2visual.py`: Encode image sequences into the original video with corresponding annotations. * For example, `[image/G0067.1, image/G0067.2, ..., image/G0067.24]` will be encoded into `visual/G0067.mp4`. ### Data Instances **Naming:** Within the image folder, the `video_n` folders are named as follows (X indicates a number): * G0XXX.X - Giraffes * ZP0XXX.X - Plains Zebras * ZG0XXX.X - Grevy's Zebras * Within each of these folders the images are simply `X.jpg`. **Note:** The dataset consists of a total of 1,139,893 frames captured from drone videos. 
There are 488,638 frames of Grevy's zebras, 492,507 frames of plains zebras, and 158,748 frames of giraffes. ### Data Fields There are 14,764 unique behavioral sequences in the dataset. These consist of eight distinct behaviors: - Walk - Trot - Run: animal is moving at a canter or gallop - Graze: animal is eating grass or other vegetation - Browse: animal is eating trees or bushes - Head Up: animal is looking around or observing its surroundings - Auto-Groom: animal is grooming itself (licking, scratching, or rubbing) - Occluded: animal is not fully visible ### Data Splits Training and validation sets are indicated by their respective CSV files (`train.csv` and `val.csv`), located within the `annotation` folder. ## Dataset Creation ### Curation Rationale We present a novel high-quality dataset for animal behavior recognition from drone videos. The dataset is focused on Kenyan wildlife and contains behaviors of giraffes, plains zebras, and Grevy's zebras. The dataset consists of more than 10 hours of annotated videos, and it includes eight different classes, encompassing seven types of animal behavior and an additional category for occluded instances. In the annotation process for this dataset, a team of 10 people was involved, with an expert zoologist overseeing the process. Each behavior was labeled based on its distinctive features, using a standardized set of criteria to ensure consistency and accuracy across the annotations. The dataset was collected using drones that flew over the animals in the [Mpala Research Centre](https://mpala.org/) in Kenya, providing high-quality video footage of the animals' natural behaviors. We believe that this dataset will be a valuable resource for researchers working on animal behavior recognition, as it provides a diverse and high-quality set of annotated videos that can be used for evaluating deep learning models.
Additionally, the dataset can be used to study the behavior patterns of Kenyan animals and can help to inform conservation efforts and wildlife management strategies. <!-- [To be added:] --> We provide a detailed description of the dataset and its annotation process, along with some initial experiments on the dataset using conventional deep learning models. The results demonstrate the effectiveness of the dataset for animal behavior recognition and highlight the potential for further research in this area. ### Source Data #### Initial Data Collection and Normalization Data was collected from 6 January 2023 through 21 January 2023 at the [Mpala Research Centre](https://mpala.org/) in Kenya under a Nacosti research license. We used DJI Mavic 2S drones equipped with cameras to record 5.4K resolution videos (5472 x 3078 pixels) from varying altitudes and distances of 10 to 50 meters from the animals (distance was determined by circumstances and safety regulations). Mini-scenes were extracted from these videos to reduce the impact of drone movement and facilitate human annotation. Animals were detected in frame using YOLOv8, then the SORT tracking algorithm was applied to follow their movement. A 400 by 300 pixel window, centered on the animal, was then extracted; this is the mini-scene. <!-- #### Who are the source language producers? [More Information Needed] --> ### Annotations #### Annotation process In the annotation process for this dataset, a team of 10 people was involved, with an expert zoologist overseeing the process. Each behavior was labeled based on its distinctive features, using a standardized set of criteria to ensure consistency and accuracy across the annotations. <!-- #### Who are the annotators? [More Information Needed] --> ### Personal and Sensitive Information Though there are endangered species included in this data, exact locations are not provided and their safety is assured by their location within the preserve. 
## Considerations for Using the Data <!-- ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] --> ### Other Known Limitations This data exhibits a long-tailed distribution due to the natural variation in frequency of the observed behaviors. ## Additional Information ### Authors * Maksim Kholiavchenko (Rensselaer Polytechnic Institute) - ORCID: 0000-0001-6757-1957 * Jenna Kline (The Ohio State University) * Michelle Ramirez (The Ohio State University) * Sam Stevens (The Ohio State University) * Alec Sheets (The Ohio State University) - ORCID: 0000-0002-3737-1484 * Reshma Ramesh Babu (The Ohio State University) - ORCID: 0000-0002-2517-5347 * Namrata Banerji (The Ohio State University) - ORCID: 0000-0001-6813-0010 * Elizabeth Campolongo (Imageomics Institute) - ORCID: 0000-0003-0846-2413 * Nina Van Tiel (Eidgenössische Technische Hochschule Zürich) - ORCID: 0000-0001-6393-5629 * Jackson Miliko (Mpala Research Centre) * Eduardo Bessa (Universidade de Brasília) - ORCID: 0000-0003-0606-5860 * Tanya Berger-Wolf (The Ohio State University) - ORCID: 0000-0001-7610-1412 * Daniel Rubenstein (Princeton University) - ORCID: 0000-0001-9049-5219 * Charles Stewart (Rensselaer Polytechnic Institute) ### Licensing Information This dataset is dedicated to the public domain for the benefit of scientific pursuits. We ask that you cite the dataset <!--- and journal paper --> using the below citation <!-- s --> if you make use of it in your research. ### Citation Information ``` @misc{KABR_Data, author = {Maksim Kholiavchenko and others}, title = {KABR: In-Situ Dataset for Kenyan Animal Behavior Recognition from Drone Videos}, year = {2023}, url = {https://huggingface.co/datasets/imageomics/KABR}, doi = {10.57967/hf/1010}, publisher = {Hugging Face} } ``` ### Contributions The [Imageomics Institute](https://imageomics.org) is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) Institute program under [Award #2118240](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2118240) (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning).
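As a small addendum to the naming scheme in the Data Instances section above, the species of a mini-scene can be recovered from its folder name. This is a sketch based only on the documented prefixes (G, ZP, ZG), not part of the official `kabr-tools`:

```python
# Map a KABR video folder name to its species, using the documented
# prefixes: G -> giraffe, ZP -> plains zebra, ZG -> Grevy's zebra.
# The two-letter prefixes must be checked before the bare "G".
PREFIXES = {"ZG": "Grevy's zebra", "ZP": "plains zebra", "G": "giraffe"}

def species_of(folder: str) -> str:
    for prefix, name in PREFIXES.items():
        if folder.startswith(prefix):
            return name
    raise ValueError(f"unrecognized folder name: {folder!r}")

print(species_of("G0067.1"))   # giraffe
print(species_of("ZP0012.3"))  # plains zebra
```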
tianyang/repobench-p
2023-07-19T06:13:35.000Z
[ "task_categories:text-retrieval", "task_categories:text-generation", "task_ids:document-retrieval", "language_creators:found", "multilinguality:multilingual", "source_datasets:original", "language:code", "license:cc-by-nc-nd-4.0", "code", "arxiv:2306.03091", "region:us" ]
tianyang
RepoBench is a dataset that benchmarks repository-level code auto-completion systems. RepoBench-P denotes RepoBench for pipeline, which is a subtask of RepoBench that includes both relevant code retrieval and next-line code prediction.
@misc{liu2023repobench, title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems}, author={Tianyang Liu and Canwen Xu and Julian McAuley}, year={2023}, eprint={2306.03091}, archivePrefix={arXiv}, primaryClass={cs.CL} }
null
1
3
--- language_creators: - found language: - code license: - cc-by-nc-nd-4.0 multilinguality: - multilingual pretty_name: RepoBench-Pipeline source_datasets: - original task_categories: - text-retrieval - text-generation task_ids: - document-retrieval tags: - code --- # Dataset Card for RepoBench-P ## Dataset Description - **Homepage:** https://github.com/Leolty/repobench - **Paper:** https://arxiv.org/abs/2306.03091 ## Dataset Summary **RepoBench-P (Pipeline)** is a subtask of **RepoBench**([GitHub](https://github.com/Leolty/repobench), [arXiv](https://arxiv.org/abs/2306.03091)), combining the retrieval and code completion tasks. Specifically, the retrieval task is used to retrieve the most relevant code snippet first, and then the code completion task is performed with the retrieved code snippet as cross-file context for next-line prediction, which mirrors the complex real-world scenarios that a practical auto-completion system would face. ## Settings - `cff`: short for cross_file_first, indicating the cross-file module in the next line is first used in the current file. - `cfr`: short for cross_file_random, indicating the cross-file module in the next line is not first used in the current file. - `if`: short for in_file, indicating the next line does not contain any cross-file module. ## Supported Languages - `python` and `java` ## Loading Data For example, to load the `python` dataset, you can provide the `split` argument to choose a specific setting. ```python from datasets import load_dataset dataset = load_dataset("tianyang/repobench-p", "python", split="cff") ``` > Note: The `split` argument is optional. If not provided, the entire dataset will be loaded.
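Once loaded, each record follows the structure documented in the next section. As a hedged sketch (assuming `gold_snippet_index` is a 0-based index into `context`, with -1 for the `in_file` setting), the gold cross-file snippet can be pulled out like this:

```python
def gold_snippet(example: dict):
    """Return the gold cross-file snippet of a RepoBench-P record,
    or None for the in_file setting (gold_snippet_index == -1)."""
    idx = example["gold_snippet_index"]
    if idx == -1:
        return None
    return example["context"][idx]["snippet"]

# Tiny stand-in record mirroring the documented fields:
example = {
    "context": [
        {"snippet": "def helper_a(): ..."},
        {"snippet": "def helper_b(): ..."},
        {"snippet": "def helper_c(): ..."},
    ],
    "gold_snippet_index": 2,
}
print(gold_snippet(example))  # def helper_c(): ...
```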
## Dataset Structure ```json { "repo_name": "repository name of the data point", "file_path": "path/to/current_file", "context": [ { "path": "path/to/cross_file_1", "identifier": "identifier of the cross-file module", "snippet": "the code snippet of the cross-file module", "tokenized_snippet": "tokenized code snippet of the cross-file module" }, // ... { "path": "path/to/cross_file_k", "identifier": "identifier of the cross-file module", "snippet": "the code snippet of the cross-file module", "tokenized_snippet": "tokenized code snippet of the cross-file module" }, ], "import_statement": "all import statements in current file", "code": "the code for next-line prediction", "next_line": "the next line of the code", "gold_snippet_index": 2 // NOTE: Only for "cross_file_first" and "cross_file_random" settings, for "in_file" setting, we set it to -1. } ``` ## Licensing Information CC BY-NC-ND 4.0 ## Citation Information ```bibtex @misc{liu2023repobench, title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems}, author={Tianyang Liu and Canwen Xu and Julian McAuley}, year={2023}, eprint={2306.03091}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Contributions Thanks to [@Leolty](https://github.com/Leolty) for adding this dataset.
jondurbin/airoboros-gpt4-1.2-alpaca-eval
2023-06-16T10:21:04.000Z
[ "region:us" ]
jondurbin
null
null
null
1
3
Entry not found
ai-habitat/ReplicaCAD_baked_lighting
2023-07-12T01:05:03.000Z
[ "license:cc-by-4.0", "arxiv:2106.14405", "region:us" ]
ai-habitat
null
null
null
0
3
--- license: cc-by-4.0 viewer: false --- # What is ReplicaCAD? [**Visit the ReplicaCAD Homepage**](https://aihabitat.org/datasets/replica_cad/) The ReplicaCAD dataset is an artist recreation of the scanned “FRL apartment” variations from the [Replica](https://github.com/facebookresearch/Replica-Dataset) dataset. This dataset is intended for use in the [Habitat simulator](https://github.com/facebookresearch/habitat-sim) for embodied in-home interaction tasks such as object re-arrangement. All materials are licensed under the [Creative Commons Attribution 4.0 International (CC BY 4.0) Public License](https://creativecommons.org/licenses/by/4.0/). ## Dataset Contents: We provide two dataset downloads with different properties: one suited to interactive simulation and the other for photorealistic visualization. Note: Both downloadable datasets contain 84 of the 105 variations described in the paper and shown in the video, with the remaining 21 scenes (1 macro variation and associated 20 micro variations) withheld as a test set for challenge evaluation. ### ReplicaCAD Interactive: [Get ReplicaCAD Interactive here](https://huggingface.co/datasets/ai-habitat/ReplicaCAD_dataset) [132MB] Intended for use with a PBR shader. Contains 1 empty scene and 6 re-creations of the scanned “FRL apartment” variations staged with both large furniture and small objects and ready for dynamic simulation in [Habitat-sim](https://github.com/facebookresearch/habitat-sim). Also included are 84 (of 105) artist authored re-arrangements of large furniture (fully static placements except articulations) organized into 5 macro variations (as different tenants may organize the same apartment) each with an additional 20 micro variations (with a few pieces of furniture moved/swapped).
### ReplicaCAD with baked lighting (this repository): [525MB] Contains the same 84 (of 105) artist authored re-arrangements of large furniture described in ReplicaCAD Interactive with synthetic global illumination baked into the textures for more photo-realistic visualization. All articulated furniture is included with baked lighting textures, but all other furniture is static. - 84 stage (i.e., static background) assets with baked light color textures - 6 URDF assets with baked light color textures - 1 SceneDataset configuration file which aggregates all config and asset paths for one-line import in Habitat. - .navmesh files (in navmeshes/ directory) for every scene computed for an agent with 0.3m radius (e.g. appropriate for a Fetch robot base) and additional .navmesh files (in navmeshes_default/ directory) computed with Habitat default agent parameters for optional use. - 84 Habitat SceneDataset configuration files defining the placement of the articulated objects within the stages. Also includes global receptacle metadata in world space equivalent to local receptacle metadata from ReplicaCAD interactive. References the Fetch tuned NavMeshes. --- Citing ReplicaCAD --- Using ReplicaCAD in your research? Please cite the following paper: [arxiv](https://arxiv.org/abs/2106.14405) ``` @inproceedings{szot2021habitat, title = {Habitat 2.0: Training Home Assistants to Rearrange their Habitat}, author = {Andrew Szot and Alex Clegg and Eric Undersander and Erik Wijmans and Yili Zhao and John Turner and Noah Maestre and Mustafa Mukadam and Devendra Chaplot and Oleksandr Maksymets and Aaron Gokaslan and Vladimir Vondrus and Sameer Dharur and Franziska Meier and Wojciech Galuba and Angel Chang and Zsolt Kira and Vladlen Koltun and Jitendra Malik and Manolis Savva and Dhruv Batra}, booktitle = {Advances in Neural Information Processing Systems (NeurIPS)}, year = {2021} } ```
khellific/anidb-series-embeddings
2023-07-18T04:19:51.000Z
[ "license:cc-by-nc-sa-4.0", "region:us" ]
khellific
null
null
null
0
3
--- license: cc-by-nc-sa-4.0 --- Embeddings of AniDB's title database, intended to be used in conjunction with https://github.com/khell/anidb-semantic-search-api.
Cainiao-AI/LaDe-D
2023-06-22T15:00:16.000Z
[ "size_categories:10M<n<100M", "license:apache-2.0", "Spatial-Temporal", "Graph", "Logistic", "Last-mile Delivery", "region:us" ]
Cainiao-AI
null
null
null
0
3
--- license: apache-2.0 tags: - Spatial-Temporal - Graph - Logistic - Last-mile Delivery size_categories: - 10M<n<100M dataset_info: features: - name: order_id dtype: int64 - name: region_id dtype: int64 - name: city dtype: string - name: courier_id dtype: int64 - name: lng dtype: float64 - name: lat dtype: float64 - name: aoi_id dtype: int64 - name: aoi_type dtype: int64 - name: accept_time dtype: string - name: accept_gps_time dtype: string - name: accept_gps_lng dtype: float64 - name: accept_gps_lat dtype: float64 - name: delivery_time dtype: string - name: delivery_gps_time dtype: string - name: delivery_gps_lng dtype: float64 - name: delivery_gps_lat dtype: float64 - name: ds dtype: int64 splits: - name: delivery_jl num_bytes: 5568309 num_examples: 31415 - name: delivery_cq num_bytes: 168574531 num_examples: 931351 - name: delivery_yt num_bytes: 36796326 num_examples: 206431 - name: delivery_sh num_bytes: 267095520 num_examples: 1483864 - name: delivery_hz num_bytes: 335088000 num_examples: 1861600 download_size: 290229555 dataset_size: 813122686 --- # 1. About Dataset **LaDe** is a publicly available last-mile delivery dataset with millions of packages from industry. It has three unique characteristics: (1) Large-scale. It involves 10,677k packages of 21k couriers over 6 months of real-world operation. (2) Comprehensive information, it offers original package information, such as its location and time requirements, as well as task-event information, which records when and where the courier is while events such as task-accept and task-finish events happen. (3) Diversity: the dataset includes data from various scenarios, such as package pick-up and delivery, and from multiple cities, each with its unique spatio-temporal patterns due to their distinct characteristics such as populations. If you use this dataset for your research, please cite this paper: {xxx} # 2. 
Download [LaDe](https://huggingface.co/datasets/Cainiao-AI/LaDe) is composed of two subdatasets: i) [LaDe-D](https://huggingface.co/datasets/Cainiao-AI/LaDe-D), which comes from the package delivery scenario. ii) [LaDe-P](https://huggingface.co/datasets/Cainiao-AI/LaDe-P), which comes from the package pickup scenario. To facilitate the utilization of the dataset, each sub-dataset is presented in CSV format. LaDe-D is the first subdataset from [LaDe](https://huggingface.co/datasets/Cainiao-AI/LaDe). LaDe can be used for research purposes. Before you download the dataset, please read these terms. See also the [Code link](https://github.com/wenhaomin/LaDe). Then put the data into "./data/raw/". The structure of "./data/raw/" should be like: ``` * ./data/raw/ * delivery * delivery_sh.csv * ... ``` LaDe-D contains 5 files, with each representing the data from a specific city; the details of each city can be found in the following table. | City | Description | |------------|----------------------------------------------------------------------------------------------| | Shanghai | One of the most prosperous cities in China, with a large number of orders per day. | | Hangzhou | A big city with well-developed online e-commerce and a large number of orders per day. | | Chongqing | A big city with complicated road conditions in China, with a large number of orders. | | Jilin | A mid-sized city in China, with a small number of orders each day. | | Yantai | A small city in China, with a small number of orders every day. | # 3. Description Below are the detailed fields of LaDe-D.
| Data field | Description | Unit/format | |-----------------------|--------------------------------------|---------------| | **Package information** | | | | package_id | Unique identifier of each package | Id | | **Stop information** | | | | lng/lat | Coordinates of each stop | Float | | city | City | String | | region_id | Id of the region | Id | | aoi_id | Id of the AOI | Id | | aoi_type | Type of the AOI | Categorical | | **Courier Information** | | | | courier_id | Id of the courier | Id | | **Task-event Information**| | | | accept_time | The time when the courier accepts the task | Time | | accept_gps_time | The time of the GPS point whose time is the closest to accept time | Time | | accept_gps_lng/accept_gps_lat | Coordinates when the courier accepts the task | Float | | delivery_time | The time when the courier finishes delivering the task | Time | | delivery_gps_time | The time of the GPS point whose time is the closest to the delivery time | Time | | delivery_gps_lng/delivery_gps_lat | Coordinates when the courier finishes the task | Float | | **Context information** | | | | ds | The date of the package delivery | Date | # 4. Leaderboard Below we show the performance of different methods in Shanghai. ## 4.1 Route Prediction Experimental results of route prediction. We use bold and underlined fonts to denote the best and runner-up model, respectively.
| Method | HR@3 | KRC | LSD | ED | |--------------|--------------|--------------|-------------|-------------| | TimeGreedy | 57.65 | 31.81 | 5.54 | 2.15 | | DistanceGreedy | 60.77 | 39.81 | 5.54 | 2.15 | | OR-Tools | 66.21 | 47.60 | 4.40 | 1.81 | | LightGBM | 73.76 | 55.71 | 3.01 | 1.84 | | FDNET | 73.27 ± 0.47 | 53.80 ± 0.58 | 3.30 ± 0.04 | 1.84 ± 0.01 | | DeepRoute | 74.68 ± 0.07 | 56.60 ± 0.16 | 2.98 ± 0.01 | 1.79 ± 0.01 | | Graph2Route | 74.84 ± 0.15 | 56.99 ± 0.52 | 2.86 ± 0.02 | 1.77 ± 0.01 | ## 4.2 Estimated Time of Arrival Prediction | Method | MAE | RMSE | ACC@30 | | ------ |--------------|--------------|-------------| | LightGBM | 30.99 | 35.04 | 0.59 | | SPEED | 23.75 | 27.86 | 0.73 | | KNN | 36.00 | 31.89 | 0.58 | | MLP | 21.54 ± 2.20 | 25.05 ± 2.46 | 0.79 ± 0.04 | | FDNET | 18.47 ± 0.25 | 21.44 ± 0.28 | 0.84 ± 0.01 | ## 4.3 Spatio-temporal Graph Forecasting | Method | MAE | RMSE | |-------|-------------|-------------| | HA | 4.63 | 9.91 | | DCRNN | 3.69 ± 0.09 | 7.08 ± 0.12 | | STGCN | 3.04 ± 0.02 | 6.42 ± 0.05 | | GWNET | 3.16 ± 0.06 | 6.56 ± 0.11 | | ASTGCN | 3.12 ± 0.06 | 6.48 ± 0.14 | | MTGNN | 3.13 ± 0.04 | 6.51 ± 0.13 | | AGCRN | 3.93 ± 0.03 | 7.99 ± 0.08 | | STGNCDE | 3.74 ± 0.15 | 7.27 ± 0.16 | # 5. Citation To cite this repository: ```shell @software{pytorchgithub, author = {xx}, title = {xx}, url = {xx}, version = {0.6.x}, year = {2021}, } ```
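Beyond the raw fields above, a common derived quantity is the accept-to-delivery duration per package. The sketch below assumes `accept_time`/`delivery_time` are `YYYY-MM-DD HH:MM:SS` strings (the field table only labels them as Time), so adjust the format string to match the actual CSV:

```python
from datetime import datetime

def delivery_minutes(accept_time: str, delivery_time: str) -> float:
    """Minutes between task accept and task finish for one package.
    Assumes 'YYYY-MM-DD HH:MM:SS' timestamps (format not stated on the card)."""
    fmt = "%Y-%m-%d %H:%M:%S"
    delta = datetime.strptime(delivery_time, fmt) - datetime.strptime(accept_time, fmt)
    return delta.total_seconds() / 60

print(delivery_minutes("2023-01-05 08:30:00", "2023-01-05 10:00:00"))  # 90.0
```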
marmolpen3/sla-obligations-rights
2023-08-17T19:43:12.000Z
[ "task_categories:text-classification", "size_categories:n<1K", "language:en", "license:cc-by-nc-nd-4.0", "legal", "region:us" ]
marmolpen3
null
null
null
0
3
--- license: cc-by-nc-nd-4.0 task_categories: - text-classification language: - en tags: - legal pretty_name: Initial datasets on obligations and rights size_categories: - n<1K ---
SamsungSAILMontreal/deepnets1m
2023-06-19T17:13:34.000Z
[ "task_categories:graph-ml", "task_categories:image-classification", "size_categories:1M<n<10M", "license:mit", "arxiv:2110.13100", "region:us" ]
SamsungSAILMontreal
null
null
null
0
3
--- license: mit task_categories: - graph-ml - image-classification pretty_name: DeepNets size_categories: - 1M<n<10M --- This is a copy of the **DeepNets-1M** dataset originally released at https://github.com/facebookresearch/ppuda under the MIT license. The dataset presents diverse computational graphs (1M training and 1402 evaluation) of neural network architectures used in image classification. See detailed description at https://paperswithcode.com/dataset/deepnets-1m and in the [Parameter Prediction for Unseen Deep Architectures](https://arxiv.org/abs/2110.13100) paper. There are four files in this dataset: - deepnets1m_eval.hdf5; # 16 MB (md5: 1f5641329271583ad068f43e1521517e) - deepnets1m_meta.tar.gz; # 35 MB (md5: a42b6f513da6bbe493fc16a30d6d4e3e), run `tar -xf deepnets1m_meta.tar.gz` to unpack it before running any code reading the dataset - deepnets1m_search.hdf5; # 1.3 GB (md5: 0a93f4b4e3b729ea71eb383f78ea9b53) - deepnets1m_train.hdf5; # 10.3 GB (md5: 90bbe84bb1da0d76cdc06d5ff84fa23d) <img src="https://production-media.paperswithcode.com/datasets/0dbd44d8-19d9-495d-918a-b0db80facaf3.png" alt="" width="600"/> ## Citation If you use this dataset, please cite it as: ``` @inproceedings{knyazev2021parameter, title={Parameter Prediction for Unseen Deep Architectures}, author={Knyazev, Boris and Drozdzal, Michal and Taylor, Graham W and Romero-Soriano, Adriana}, booktitle={Advances in Neural Information Processing Systems}, year={2021} } ```
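To compare a downloaded file against the MD5 checksums listed above, a streaming check avoids loading the larger splits into memory. A minimal sketch (file paths assumed relative to your download directory):

```python
import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through MD5 in 1 MB chunks so even the 10.3 GB
    train split can be verified without loading it into memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# e.g. md5sum("deepnets1m_eval.hdf5") should match
# "1f5641329271583ad068f43e1521517e" from the list above.
```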
liezeleinstein/test3
2023-06-20T02:40:09.000Z
[ "license:apache-2.0", "region:us" ]
liezeleinstein
null
null
null
0
3
--- license: apache-2.0 ---
akomma/uss-ratings-dataset
2023-06-20T20:18:47.000Z
[ "task_categories:text-classification", "task_categories:zero-shot-classification", "task_categories:conversational", "size_categories:10K<n<100K", "language:en", "license:mit", "arxiv:2105.03748", "region:us" ]
akomma
null
null
null
0
3
--- license: mit task_categories: - text-classification - zero-shot-classification - conversational language: - en pretty_name: uss-ratings-dataset size_categories: - 10K<n<100K --- ### Dataset Description - **Homepage:** https://github.com/sunnweiwei/user-satisfaction-simulation - **Repository:** https://github.com/sunnweiwei/user-satisfaction-simulation - **Paper:** https://arxiv.org/pdf/2105.03748.pdf - **View records using Datasette:** [datasette-link](https://lite.datasette.io/?parquet=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Fakomma%2Fuss-ratings-dataset%2Fresolve%2Fmain%2Fuss-ratings-dataset-datasette.parquet#/data/uss-ratings-dataset-datasette) ### Dataset Summary - Dialogs Quality Dataset - With both turn-level and dialog-level ratings provided on a scale of 1 to 5 by human annotators. - Each task has been annotated by multiple annotators. - Contains annotated dialogs from 4 different datasets (SGD, MultiWoz, ReDial, CCPE) - Total 34358 turns from 3500 dialogs |Dataset|Dialogs|Turns | |-------|------:|-----:| |SGD | 1000 | 11833| |MWOZ | 1000 | 10553| |Redial | 1000 | 6792 | |CCPE | 500 | 5180 | ### Column Definitions |Column |Type |Example Value |Description | |-------------------|-------|-------------------------|-----------------------------------------------| |split | str | CCPE;MWOZ;SGD;Redial | dataset name | |session_idx | int | 1 | dialog identifier | |turn_idx | int | 1 | turn identifier within a dialog | |tree_idx | int | 1 | tree identifier within a turn (is all 1s here)| |system | str | Do you like movies | system message | |user | str | No I don't like | user message | |turn_scores | list | [3; 2; 2] | list of turn-level quality scores from different human annotations| |mean_turn_rating | float | 2.33 | mean of turn-level annotator scores | |mode_turn_rating | int | 2 | mode of turn-level annotator scores | |dialog_scores | list | [3; 3; 3] | list of dialog-level quality scores from different human annotations| |mean_dialog_rating | float | 
3.00 | mean of dialog-level annotator scores | |mode_dialog_rating | int | 3 | mode of dialog-level annotator scores |
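For reference, the mean/mode rating columns can be reproduced from the raw score lists. A small sketch, assuming the scores are stored as semicolon-separated strings as in the example values above:

```python
import statistics

def aggregate_scores(score_str: str):
    """Parse a score list like "[3; 2; 2]" (the turn_scores / dialog_scores
    format) and return (mean rounded to 2 decimals, mode)."""
    scores = [int(s) for s in score_str.strip("[]").split(";")]
    return round(statistics.fmean(scores), 2), statistics.mode(scores)

print(aggregate_scores("[3; 2; 2]"))  # (2.33, 2)
```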
IDEA-CCNL/Ziya-Finetune-Small
2023-06-20T08:15:16.000Z
[ "size_categories:1K<n<10K", "language:zh", "language:en", "license:apache-2.0", "region:us" ]
IDEA-CCNL
null
null
null
3
3
--- license: apache-2.0 language: - zh - en pretty_name: Ziya-Finetune-Small size_categories: - 1K<n<10K --- # 姜子牙微调示例数据集 ### 数据介绍 Dataset Summary 用于Ziya模型的微调示例数据集 The Ziya-Finetune-Small is an example dataset designed to finetune Ziya models. ### 语言 Languages 中文 & 英文 Chinese & English ### 数据示例 Data Instances 单轮示例 Single Turn ```json {"prompt": ["给出关于如何在家里做冰淇淋的五条简单步骤。\\n\n"], "output": ["1. 准备冰淇淋机和冰淇淋材料(鲜奶油、糖、香草提取物等)。\n2. 将所有材料放入混合物中拌匀,然后将混合物放入冰淇淋机中。\n3. 打开冰淇淋机,开始混合和冷藏混合物,直到混合物变成冰淇淋。\n4. 将冰淇淋转移到冰箱中,冷藏至少1个小时以加强口感并使其更好吃。\n5. 取出冰淇淋,享用!"]} ``` 多轮示例 Multiple Turn ``` {"prompt": ["你好,MindBot。请问你对人工智能在建筑行业中的应用有什么看法吗?", "谢谢你的解答。那么,你认为人工智能技术将如何影响我的职业?", "哦,这样啊。那么,你认为有哪些技能是我需要掌握的,以适应这种变化呢?", "好的,我会努力学习的。谢谢你的建议。"], "output": ["你好。人工智能在建筑行业中具有广泛的应用前景。例如,在建筑设计中应用人工智能技术可以帮助建筑师优化建筑结构,减少构件使用量,提高建筑的可持续性。同时,人工智能技术还可以在施工过程中实现自动化和智能化,减少人力投入,提高工作效率,提高施工质量。此外,人工智能技术还可以在建筑物运行和维护中进行诊断和预测,降低维护成本,延长建筑寿命。", "钢筋骨架工是建筑行业中重要的从业人员之一。随着人工智能技术的不断发展,钢筋骨架工这一职业也将受到影响。例如,在施工过程中,机器人钢筋机的应用将会逐渐普及,这将会极大地降低人力投入,提高施工效率。同时,在建筑结构的设计方面,人工智能技术可以帮助建筑师优化建筑结构,减少构件使用量,这也将会对钢筋骨架工的工作产生影响。", "随着人工智能技术的不断发展,建筑行业对从业人员的要求也将会发生变化。钢筋骨架工需要具备更多的技能和知识,以适应这种变化。例如,你需要掌握机器人钢筋机的操作和维护知识,以确保施工效率和质量。同时,你还需要了解建筑结构设计和施工自动化等方面的知识,以适应人工智能技术的应用。", "不客气。相信你一定能够通过努力学习和不断提高自己的能力,适应这种变化并成为建筑行业中的杰出人才。祝你工作顺利!"]} ``` ### 数据字段 Data Fields - prompt: list[str] - output: list[str] ### 引用 Citation ``` @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ```
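As a usage sketch (not from the original card), the parallel `prompt`/`output` lists of a multi-turn record can be interleaved into an ordered conversation:

```python
def to_turns(record: dict):
    """Interleave the parallel prompt/output lists of a Ziya record
    into (role, text) turns in conversation order."""
    turns = []
    for prompt, output in zip(record["prompt"], record["output"]):
        turns.append(("user", prompt))
        turns.append(("assistant", output))
    return turns

record = {"prompt": ["Hi", "Thanks"], "output": ["Hello!", "You're welcome."]}
print(to_turns(record))
```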
datalama/question_pair
2023-06-21T06:22:28.000Z
[ "license:mit", "region:us" ]
datalama
null
null
null
0
3
--- license: mit dataset_info: features: - name: id dtype: int64 - name: question1 dtype: string - name: question2 dtype: string - name: label dtype: class_label: names: '0': 다른 질문 '1': 같은 질문 splits: - name: train num_bytes: 634021 num_examples: 6888 - name: test num_bytes: 62628 num_examples: 688 download_size: 403049 dataset_size: 696649 --- ## Dataset Description A version of the Question_pair dataset open-sourced by songys, uploaded after minor data modifications. For the original dataset and a detailed description, see the repo below. - **Repository: https://github.com/songys/Question_pair** **Changes** - Renamed the `is_duplicate` field to `label`. - Renamed `test_id` in the test set to `id`. - Inverted the original 0/1 labels: - as-is - {"같은 질문" (same question): 0, "다른 질문" (different question): 1} - to-be - {"같은 질문": 1, "다른 질문": 0} - Kept only the fields 'id', 'question1', 'question2', and 'label'. ## Dataset Structure ``` DatasetDict({ train: Dataset({ features: ['id', 'question1', 'question2', 'label'], num_rows: 6888 }) test: Dataset({ features: ['id', 'question1', 'question2', 'label'], num_rows: 688 }) }) ```
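Since the labels here are inverted relative to the upstream Question_pair data, aligning the two codings is a one-liner. A sketch, assuming the upstream field is the original 0/1 `is_duplicate` coding:

```python
def to_new_label(old_label: int) -> int:
    """Convert the original coding (same=0, different=1) to this
    dataset's coding (same=1, different=0)."""
    return 1 - old_label

print(to_new_label(0))  # 1  (old "same question" becomes label 1)
```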
nyanko7/coco-hosted
2023-06-22T13:16:06.000Z
[ "license:openrail", "region:us" ]
nyanko7
MS COCO is a large-scale object detection, segmentation, and captioning dataset. COCO has several features: Object segmentation, Recognition in context, Superpixel stuff segmentation, 330K images (>200K labeled), 1.5 million object instances, 80 object categories, 91 stuff categories, 5 captions per image, 250,000 people with keypoints.
@article{DBLP:journals/corr/LinMBHPRDZ14, author = {Tsung{-}Yi Lin and Michael Maire and Serge J. Belongie and Lubomir D. Bourdev and Ross B. Girshick and James Hays and Pietro Perona and Deva Ramanan and Piotr Doll{\'{a}}r and C. Lawrence Zitnick}, title = {Microsoft {COCO:} Common Objects in Context}, journal = {CoRR}, volume = {abs/1405.0312}, year = {2014}, url = {http://arxiv.org/abs/1405.0312}, eprinttype = {arXiv}, eprint = {1405.0312}, timestamp = {Mon, 13 Aug 2018 16:48:13 +0200}, biburl = {https://dblp.org/rec/journals/corr/LinMBHPRDZ14.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
null
0
3
--- license: openrail --- Usage: ``` from datasets import load_dataset coco_dataset = load_dataset("nyanko7/coco-hosted") ``` Each instance has the following structure: ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile>, 'filepath': 'COCO_val2014_000000522418.jpg', 'sentids': [681330, 686718, 688839, 693159, 693204], 'filename': 'COCO_val2014_000000522418.jpg', 'imgid': 1, 'split': 'restval', 'sentences': { 'tokens': ['a', 'woman', 'wearing', 'a', 'net', 'on', 'her', 'head', 'cutting', 'a', 'cake'], 'raw': 'A woman wearing a net on her head cutting a cake. ', 'imgid': 1, 'sentid': 681330 }, 'cocoid': 522418 } ```
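A small sketch working with the instance structure above — rebuilding a caption string from its token list (a lossy detokenization; case and punctuation follow the tokens rather than the `raw` field):

```python
# Stand-in instance mirroring the documented structure:
instance = {
    "sentences": {
        "tokens": ["a", "woman", "wearing", "a", "net", "on",
                   "her", "head", "cutting", "a", "cake"],
        "raw": "A woman wearing a net on her head cutting a cake. ",
        "sentid": 681330,
    },
}

def detokenize(sentence: dict) -> str:
    """Join the lowercased token list back into a plain caption."""
    return " ".join(sentence["tokens"])

print(detokenize(instance["sentences"]))
# a woman wearing a net on her head cutting a cake
```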
foxy-steve/monash_uea_ucr_tser
2023-06-23T17:07:18.000Z
[ "task_categories:time-series-forecasting", "size_categories:10K<n<100K", "language:en", "license:gpl-3.0", "arxiv:2006.10996", "region:us" ]
foxy-steve
null
null
null
0
3
--- dataset_info: features: - name: start dtype: timestamp[s] - name: feat_static_cat dtype: uint64 - name: to_predict dtype: float32 - name: timeseries sequence: sequence: float32 - name: item_id dtype: string splits: - name: train num_bytes: 1325820 num_examples: 95 - name: test num_bytes: 586152 num_examples: 42 download_size: 1020749 dataset_size: 1911972 license: gpl-3.0 task_categories: - time-series-forecasting language: - en pretty_name: Appliances Energy Regression Dataset size_categories: - 10K<n<100K --- # Dataset Card for Time Series Extrinsic Regression ## Dataset Description - **Homepage:** [Time Series Extrinsic Regression Repository](http://tseregression.org/) - **Repository:** [GitHub code repository](https://github.com/ChangWeiTan/TS-Extrinsic-Regression/tree/master), [Raw data repository](https://zenodo.org/record/3902651) - **Paper:** [Monash University, UEA, UCR Time Series Extrinsic Regression Archive](https://arxiv.org/abs/2006.10996) - **Leaderboard:** [Baseline results](http://tseregression.org/#results) - **Point of Contact:** [Stephen Fox](gh@stephenjfox.com) ### Dataset Summary A collection of datasets from Monash, UEA, and UCR supporting research into Time Series Extrinsic Regression (TSER), a regression task whose aim is to learn the relationship between *a time series and a continuous scalar variable*. This task is closely related to time series classification, where a single categorical variable is learned. Please read the [paper](https://arxiv.org/abs/2006.10996) for more details. If you use the results or code, please cite the paper **"Chang Wei Tan, Christoph Bergmeir, Francois Petitjean, Geoffrey I. Webb, Time Series Extrinsic Regression: Predicting numeric values from time series data"**. (Full BibTex citation can be found at the end of this card.) (This card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).)
### Supported Tasks and Leaderboards [More Information Needed] ### Languages ## Dataset Structure ### Data Instances A sample from the training set of Appliances Energy (a multivariate time series dataset) is provided. The following is a single record from that dataset: ```python {'start': Timestamp('2016-02-28 17:00:00'), 'feat_static_cat': 0, 'to_predict': 19.38, 'timeseries': array([[21.29 , 21.29 , 21.29 , ..., 21.79 , 21.79 , 21.79 ], [31.66666667, 31.92666667, 32.06 , ..., 33.66 , 33.7 , 33.56666667], [19.89 , 19.82333333, 19.79 , ..., 19.79 , 19.79 , 19.79 ], ..., [ 7. , 6.83333333, 6.66666667, ..., 5. , 5. , 5. ], [40. , 40. , 40. , ..., 40. , 40. , 40. ], [-4.2 , -4.16666667, -4.13333333, ..., -4.3 , -4.16666667, -4.03333333]]), 'item_id': 'item_000'} ``` ### Data Fields This format was loosely adapted from [the Gluon format](https://ts.gluon.ai/stable/getting_started/concepts.html) and [the HF convention](https://github.com/huggingface/notebooks/blob/main/examples/time_series_datasets.ipynb) also seen in the recent [series](https://huggingface.co/blog/time-series-transformers) of [Time Series Transformer notebooks](https://github.com/huggingface/notebooks/blob/main/examples/time-series-transformers.ipynb) - `start`: a datetime of the first entry of each time series in the data record - `feat_static_cat`: the original identifier given to this record - `timeseries`: the timeseries itself - `to_predict`: continuous variable to predict - `item_id`: an identifier given to each record (for e.g. group-by style aggregations) The `timeseries` field will be a single array in the univariate forecasting scenario, and a 2-D array in the multivariate scenario. The `to_predict` will be a single number in most cases, or an array in a few instances (noted in the table above **TODO**). ### Data Splits Train and test are temporally split (i.e. "train" is the past and "test" is the future) 70/30 whenever possible, though some datasets have more particular splits. 
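A minimal sketch of working with the record layout described above, assuming `numpy` is available; the array shape and values below are placeholders for illustration, not the dataset's true dimensions:

```python
import numpy as np

# Placeholder record mirroring the fields described above; the array
# contents and shape are made up for illustration.
record = {
    "start": "2016-02-28 17:00:00",
    "feat_static_cat": 0,
    "to_predict": 19.38,               # the continuous scalar regression target
    "timeseries": np.zeros((24, 144)), # multivariate: (n_channels, n_timesteps)
    "item_id": "item_000",
}

x = np.asarray(record["timeseries"], dtype=np.float32)
n_channels, n_timesteps = x.shape      # 2-D only in the multivariate case
y = float(record["to_predict"])
```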
For details, see [the paper](https://arxiv.org/abs/2006.10996) and the particular dataset you are interested in. In our porting to HF Hub, we made as few changes as possible. ## Dataset Creation While I (Stephen) did not create the original dataset, I took the initiative to put the data on Hugging Face Hub. **Any grievances with the dataset should first and foremost be directed to me**. ### Curation Rationale To facilitate the evaluation of global forecasting models that are predicting a single-point estimate in the future. All datasets in the repository are intended for research purposes and to evaluate the performance of new TSER algorithms. ### Source Data #### Initial Data Collection and Normalization The origins of each dataset are articulated in [the paper](https://link.springer.com/article/10.1007/s10618-021-00745-9). Minimal preprocessing was applied to the datasets, as they are still in their `sktime`-compatible `.ts` format. (As far as Stephen is aware.) #### Who are the source language producers? The data comes from the datasets listed in the paper and in the table on [the website](http://tseregression.org/#results) ### Annotations #### Annotation process Please see [the paper](https://link.springer.com/article/10.1007/s10618-021-00745-9) for the annotation aggregation process #### Who are the annotators? 
The annotation comes from the datasets listed in the paper and in the table on [the website](http://tseregression.org/#results) ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators - [Chang Wei Tan](https://changweitan.com/) - [Anthony Bagnall](https://www.uea.ac.uk/computing/people/profile/anthony-bagnall) - [Christoph Bergmeir](https://research.monash.edu/en/persons/christoph-bergmeir) - [Daniel Schmidt](https://research.monash.edu/en/persons/daniel-schmidt) - [Eamonn Keogh](http://www.cs.ucr.edu/~eamonn/) - [François Petitjean](https://www.francois-petitjean.com/) - [Geoff Webb](http://i.giwebb.com/) ### Licensing Information [GNU General Public License (GPL) 3](https://www.gnu.org/licenses/gpl-3.0.en.html) ### Citation Information ```tex @article{ Tan2020TSER, title={Time Series Extrinsic Regression}, author={Tan, Chang Wei and Bergmeir, Christoph and Petitjean, Francois and Webb, Geoffrey I}, journal={Data Mining and Knowledge Discovery}, pages={1--29}, year={2021}, publisher={Springer}, doi={https://doi.org/10.1007/s10618-021-00745-9} } ``` ### Contributions [More Information Needed]
tasksource/mtop
2023-06-23T12:49:07.000Z
[ "task_categories:text-classification", "multilinguality:multilingual", "license:cc-by-sa-4.0", "multilingual", "intent", "arxiv:2008.09335", "region:us" ]
tasksource
@article{li2020mtop, title={MTOP: A comprehensive multilingual task-oriented semantic parsing benchmark}, author={Li, Haoran and Arora, Abhinav and Chen, Shuohui and Gupta, Anchit and Gupta, Sonal and Mehdad, Yashar}, journal={arXiv preprint arXiv:2008.09335}, year={2020} }
null
0
3
--- license: cc-by-sa-4.0 task_categories: - text-classification tags: - multilingual - intent multilinguality: - multilingual --- https://arxiv.org/pdf/2008.09335.pdf ``` @article{li2020mtop, title={MTOP: A comprehensive multilingual task-oriented semantic parsing benchmark}, author={Li, Haoran and Arora, Abhinav and Chen, Shuohui and Gupta, Anchit and Gupta, Sonal and Mehdad, Yashar}, journal={arXiv preprint arXiv:2008.09335}, year={2020} } ```
DEplain/DEplain-web-sent
2023-06-23T14:43:38.000Z
[ "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:translation", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|DEplain-web-doc", "language:de", ...
DEplain
null
null
null
0
3
--- annotations_creators: - expert-generated language: - de language_creators: - expert-generated license: - other multilinguality: - translation - monolingual pretty_name: DEplain-web-sent size_categories: - 1K<n<10K source_datasets: - extended|DEplain-web-doc tags: - sentence simplification - web-text - plain language - easy-to-read language task_categories: - text2text-generation task_ids: - text-simplification --- # DEplain-web-sent: A corpus for German Sentence Simplification DEplain-web-sent is a subcorpus of DEplain [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) for the evaluation of sentence simplification. The corpus consists of 1846 sentence pairs from 147 parallel documents crawled from the web in standard German and plain German (or easy-to-read German). All documents are either published under an open license, or the copyright holders gave us permission to share the data. Human annotators aligned the 147 documents of the test set sentence-wise to build a corpus for sentence simplification. For the document-level version of this corpus, please see [https://huggingface.co/datasets/DEplain/DEplain-web-doc](https://huggingface.co/datasets/DEplain/DEplain-web-doc). Due to the small number of sentence pairs, we only provide a test set for the evaluation of text simplification models. If you are interested in a larger corpus, please check our paper and the provided web crawler and alignment methods to extend the corpus. You can find this data here: [https://github.com/rstodden/DEPlain/](https://github.com/rstodden/DEPlain/tree/main/E__Sentence-level_Corpus/DEplain-web-sent/auto/open). If you use the automatically aligned data, please use it cautiously, as the alignment quality might be error-prone. 
# Dataset Card for DEplain-web-sent ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [DEplain-web GitHub repository](https://github.com/rstodden/DEPlain) - **Paper:** ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939) - **Point of Contact:** [Regina Stodden](regina.stodden@hhu.de) ### Dataset Summary [DEplain-web](https://github.com/rstodden/DEPlain) [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) is a dataset for the evaluation of sentence and document simplification in German. All texts of this dataset are scraped from the web. All documents were licensed under an open license. The simple-complex sentence pairs are manually aligned. This dataset only contains a test set. 
For additional training and development data, please scrape more data from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification) and align the sentences of the documents automatically using, for example, [MASSalign](https://github.com/ghpaetzold/massalign) by [Paetzold et al. (2017)](https://www.aclweb.org/anthology/I17-3001/). ### Supported Tasks and Leaderboards The dataset supports the evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf). ### Languages The texts in this dataset are written in German (de-de). The texts are in German plain language variants, e.g., plain language (Einfache Sprache) or easy-to-read language (Leichte Sprache). ### Domains The texts are from 6 different domains: fictional texts (literature and fairy tales), bible texts, health-related texts, texts for language learners, texts for accessibility, and public administration texts. ## Dataset Structure ### Data Access - The dataset is licensed with different open licenses dependent on the subcorpora. ### Data Instances - `document-simplification` configuration: an instance consists of an original document and one reference simplification. - `sentence-simplification` configuration: an instance consists of an original sentence and one manually aligned reference simplification. Please see [https://huggingface.co/datasets/DEplain/DEplain-web-sent](https://huggingface.co/datasets/DEplain/DEplain-web-sent). - `sentence-wise alignment` configuration: an instance consists of original and simplified documents and manually aligned sentence pairs. 
In contrast to the sentence-simplification configurations, this configuration contains also sentence pairs in which the original and the simplified sentences are exactly the same. Please see [https://github.com/rstodden/DEPlain](https://github.com/rstodden/DEPlain/tree/main/C__Alignment_Algorithms) ### Data Fields | data field | data field description | |-------------------------------------------------|-------------------------------------------------------------------------------------------------------| | `original` | an original text from the source dataset | | `simplification` | a simplified text from the source dataset | | `pair_id` | document pair id | | `complex_document_id ` (on doc-level) | id of complex document (-1) | | `simple_document_id ` (on doc-level) | id of simple document (-0) | | `original_id ` (on sent-level) | id of sentence(s) of the original text | | `simplification_id ` (on sent-level) | id of sentence(s) of the simplified text | | `domain ` | text domain of the document pair | | `corpus ` | subcorpus name | | `simple_url ` | origin URL of the simplified document | | `complex_url ` | origin URL of the simplified document | | `simple_level ` or `language_level_simple ` | required CEFR language level to understand the simplified document | | `complex_level ` or `language_level_original ` | required CEFR language level to understand the original document | | `simple_location_html ` | location on hard disk where the HTML file of the simple document is stored | | `complex_location_html ` | location on hard disk where the HTML file of the original document is stored | | `simple_location_txt ` | location on hard disk where the content extracted from the HTML file of the simple document is stored | | `complex_location_txt ` | location on hard disk where the content extracted from the HTML file of the simple document is stored | | `alignment_location ` | location on hard disk where the alignment is stored | | `simple_author ` | author (or copyright 
owner) of the simplified document | | `complex_author ` | author (or copyright owner) of the original document | | `simple_title ` | title of the simplified document | | `complex_title ` | title of the original document | | `license ` | license of the data | | `last_access ` or `access_date` | date of data origin or date when the HTML files were downloaded | | `rater` | id of the rater who annotated the sentence pair | | `alignment` | type of alignment, e.g., 1:1, 1:n, n:1 or n:m | ### Data Splits DEplain-web contains a training set, a development set and a test set. The dataset was split based on the license of the data. All manually-aligned sentence pairs with an open license are part of the test set. The document-level test set also contains only the documents which are manually aligned. For the document-level train and dev sets, the documents which are not manually aligned or not publicly available are used. For the sentence level, the alignment pairs can be produced by automatic alignments (see [Stodden et al., 2023](https://arxiv.org/abs/2305.18939)). 
Document-level: | | Train | Dev | Test | Total | |-------------------------|-------|-----|------|-------| | DEplain-web-manual-open | - | - | 147 | 147 | | DEplain-web-auto-open | 199 | 50 | - | 279 | | DEplain-web-auto-closed | 288 | 72 | - | 360 | | in total | 487 | 122 | 147 | 756 | Sentence-level: | | Train | Dev | Test | Total | |-------------------------|-------|-----|------|-------| | DEplain-web-manual-open | - | - | 1846 | 1846 | | DEplain-web-auto-open | 514 | 138 | - | 652 | | DEplain-web-auto-closed | 767 | 175 | - | 942 | | in total | 1281 | 313 | 1846 | 3440 | | **subcorpus** | **simple** | **complex** | **domain** | **description** | **\# doc.** | |----------------------------------|------------------|------------------|------------------|-------------------------------------------------------------------------------|------------------| | **EinfacheBücher** | Plain German | Standard German / Old German | fiction | Books in plain German | 15 | | **EinfacheBücherPassanten** | Plain German | Standard German / Old German | fiction | Books in plain German | 4 | | **ApothekenUmschau** | Plain German | Standard German | health | Health magazine in which diseases are explained in plain German | 71 | | **BZFE** | Plain German | Standard German | health | Information of the German Federal Agency for Food on good nutrition | 18 | | **Alumniportal** | Plain German | Plain German | language learner | Texts related to Germany and German traditions written for language learners. 
| 137 | | **Lebenshilfe** | Easy-to-read German | Standard German | accessibility | | 49 | | **Bibel** | Easy-to-read German | Standard German | bible | Bible texts in easy-to-read German | 221 | | **NDR-Märchen** | Easy-to-read German | Standard German / Old German | fiction | Fairytales in easy-to-read German | 10 | | **EinfachTeilhaben** | Easy-to-read German | Standard German | accessibility | | 67 | | **StadtHamburg** | Easy-to-read German | Standard German | public authority | Information of and regarding the German city Hamburg | 79 | | **StadtKöln** | Easy-to-read German | Standard German | public authority | Information of and regarding the German city Cologne | 85 | : Documents per Domain in DEplain-web. | domain | avg. | std. | interpretation | \# sents | \# docs | |------------------|---------------|---------------|-------------------------|-------------------|------------------| | bible | 0.7011 | 0.31 | moderate | 6903 | 3 | | fiction | 0.6131 | 0.39 | moderate | 23289 | 3 | | health | 0.5147 | 0.28 | weak | 13736 | 6 | | language learner | 0.9149 | 0.17 | almost perfect | 18493 | 65 | | all | 0.8505 | 0.23 | strong | 87645 | 87 | : Inter-Annotator-Agreement per Domain in DEplain-web-manual. | operation | documents | percentage | |-----------|-------------|------------| | rephrase | 863 | 11.73 | | deletion | 3050 | 41.47 | | addition | 1572 | 21.37 | | identical | 887 | 12.06 | | fusion | 110 | 1.5 | | merge | 77 | 1.05 | | split | 796 | 10.82 | | in total | 7355 | 100 | : Information regarding Simplification Operations in DEplain-web-manual. ## Dataset Creation ### Curation Rationale Current German text simplification datasets are limited in their size or are only automatically evaluated. We provide a manually aligned corpus to boost text simplification research in German. 
### Source Data #### Initial Data Collection and Normalization The parallel documents were scraped from the web using a [web scraper for text simplification data](https://github.com/rstodden/data_collection_german_simplification). The texts of the documents were manually simplified by professional translators. The data was split into sentences using a German model of SpaCy. Two German native speakers have manually aligned the sentence pairs by using the text simplification annotation tool [TS-ANNO](https://github.com/rstodden/TS_annotation_tool) by [Stodden & Kallmeyer (2022)](https://aclanthology.org/2022.acl-demo.14/). #### Who are the source language producers? The texts of the documents were manually simplified by professional translators. For an extensive list of the scraped URLs, see Table 10 in [Stodden et al. (2023)](https://arxiv.org/abs/2305.18939). ### Annotations #### Annotation process The instructions given to the annotators are available [here](https://github.com/rstodden/TS_annotation_tool/tree/master/annotation_schema). #### Who are the annotators? The annotators are two German native speakers, who are trained in linguistics. Both were compensated with at least the minimum wage of their country of residence. They are not part of any target group of text simplification. ### Personal and Sensitive Information No sensitive data. ## Considerations for Using the Data ### Social Impact of Dataset Many people do not understand texts due to their complexity. With automatic text simplification methods, the texts can be simplified for them. Our new training data can benefit the training of TS models. ### Discussion of Biases No bias is known. ### Other Known Limitations The dataset is provided under different open licenses depending on the license of each website where the data is scraped from. Please check the dataset license for additional information. 
## Additional Information ### Dataset Curators DEplain-web was developed by researchers at the Heinrich-Heine-University Düsseldorf, Germany. This research is part of the PhD-program ``Online Participation'', supported by the North Rhine-Westphalian (German) funding scheme ``Forschungskolleg''. ### Licensing Information The corpus includes the following licenses: CC-BY-SA-3, CC-BY-4, and CC-BY-NC-ND-4. The corpus also includes a "save_use_share" license; for these documents the data provider permitted us to share the data for research purposes. ### Citation Information ``` @inproceedings{stodden-etal-2023-deplain, title = "{DE}-plain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification", author = "Stodden, Regina and Momen, Omar and Kallmeyer, Laura", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", notes = "preprint: https://arxiv.org/abs/2305.18939", } ``` This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r) and [Yacine Jernite](https://github.com/yjernite).
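As an aside on the `alignment` data field described earlier (1:1, 1:n, n:1, n:m), the following is a rough sketch (not the corpus code) of how such a label can be derived from the number of sentences on each side of a pair:

```python
def alignment_type(n_original: int, n_simplified: int) -> str:
    """Derive an alignment label like '1:1', '1:n', 'n:1' or 'n:m'
    from the number of original and simplified sentences in a pair."""
    left = "1" if n_original == 1 else "n"
    if n_simplified == 1:
        right = "1"
    else:
        # use 'm' on the right only when both sides have multiple sentences
        right = "m" if left == "n" else "n"
    return f"{left}:{right}"
```

For example, a split (one original sentence simplified into three) would be labeled "1:n", and a merge of two originals into one simplification "n:1".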
tasksource/multilingual-zero-shot-label-nli
2023-06-23T14:49:53.000Z
[ "task_categories:zero-shot-classification", "task_categories:text-classification", "task_ids:natural-language-inference", "multilinguality:multilingual", "license:other", "region:us" ]
tasksource
null
null
null
0
3
--- dataset_info: features: - name: labels dtype: class_label: names: '0': entailment '1': neutral '2': contradiction - name: premise dtype: string - name: hypothesis dtype: string - name: task dtype: string splits: - name: train num_bytes: 185352754.0 num_examples: 878967 - name: test num_bytes: 1775890.0 num_examples: 9400 - name: validation num_bytes: 1817480.0 num_examples: 9400 download_size: 104413879 dataset_size: 188946124.0 license: other task_categories: - zero-shot-classification - text-classification task_ids: - natural-language-inference multilinguality: - multilingual --- [mtasksource](https://github.com/sileod/tasksource) classification tasks recast as natural language inference. This dataset is intended to improve label understanding in [zero-shot classification HF pipelines](https://huggingface.co/docs/transformers/main/main_classes/pipelines#transformers.ZeroShotClassificationPipeline ). Inputs that are text pairs are separated by a newline (\n). ```python from transformers import pipeline classifier = pipeline(model="sileod/mdeberta-v3-base-tasksource-nli") classifier( "I have a problem with my iphone that needs to be resolved asap!!", candidate_labels=["urgent", "not urgent", "phone", "tablet", "computer"], ) ``` [mdeberta-v3-base-tasksource-nli](https://huggingface.co/sileod/mdeberta-v3-base-tasksource-nli) will include `label-nli` in its training mix (a relatively small portion, to keep the model general; note that NLI models work for label-like zero-shot classification without task-specific supervision, see https://aclanthology.org/D19-1404.pdf). ``` @article{sileo2023tasksource, title={tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation}, author={Sileo, Damien}, year={2023} } ```
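A hedged sketch of the label-to-NLI recasting idea described above; the hypothesis template here is illustrative, not necessarily the exact one used to build the dataset:

```python
def recast(text: str, gold_label: str, candidate_label: str) -> dict:
    """Turn one classification example into an NLI pair: the input text
    becomes the premise and a candidate label is verbalized as a
    hypothesis; entailment holds iff the candidate is the gold label."""
    return {
        "premise": text,
        "hypothesis": f"This example is {candidate_label}.",
        "labels": "entailment" if candidate_label == gold_label else "contradiction",
    }

pair = recast("I have a problem with my iphone!", "urgent", "urgent")
```

At inference time, a zero-shot pipeline scores one such pair per candidate label and picks the label whose hypothesis is most entailed.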
Xenova/quickdraw-small
2023-06-25T17:08:02.000Z
[ "region:us" ]
Xenova
null
null
null
1
3
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': aircraft carrier '1': airplane '2': alarm clock '3': ambulance '4': angel '5': animal migration '6': ant '7': anvil '8': apple '9': arm '10': asparagus '11': axe '12': backpack '13': banana '14': bandage '15': barn '16': baseball bat '17': baseball '18': basket '19': basketball '20': bat '21': bathtub '22': beach '23': bear '24': beard '25': bed '26': bee '27': belt '28': bench '29': bicycle '30': binoculars '31': bird '32': birthday cake '33': blackberry '34': blueberry '35': book '36': boomerang '37': bottlecap '38': bowtie '39': bracelet '40': brain '41': bread '42': bridge '43': broccoli '44': broom '45': bucket '46': bulldozer '47': bus '48': bush '49': butterfly '50': cactus '51': cake '52': calculator '53': calendar '54': camel '55': camera '56': camouflage '57': campfire '58': candle '59': cannon '60': canoe '61': car '62': carrot '63': castle '64': cat '65': ceiling fan '66': cell phone '67': cello '68': chair '69': chandelier '70': church '71': circle '72': clarinet '73': clock '74': cloud '75': coffee cup '76': compass '77': computer '78': cookie '79': cooler '80': couch '81': cow '82': crab '83': crayon '84': crocodile '85': crown '86': cruise ship '87': cup '88': diamond '89': dishwasher '90': diving board '91': dog '92': dolphin '93': donut '94': door '95': dragon '96': dresser '97': drill '98': drums '99': duck '100': dumbbell '101': ear '102': elbow '103': elephant '104': envelope '105': eraser '106': eye '107': eyeglasses '108': face '109': fan '110': feather '111': fence '112': finger '113': fire hydrant '114': fireplace '115': firetruck '116': fish '117': flamingo '118': flashlight '119': flip flops '120': floor lamp '121': flower '122': flying saucer '123': foot '124': fork '125': frog '126': frying pan '127': garden hose '128': garden '129': giraffe '130': goatee '131': golf club '132': grapes '133': grass '134': guitar '135': hamburger '136': 
hammer '137': hand '138': harp '139': hat '140': headphones '141': hedgehog '142': helicopter '143': helmet '144': hexagon '145': hockey puck '146': hockey stick '147': horse '148': hospital '149': hot air balloon '150': hot dog '151': hot tub '152': hourglass '153': house plant '154': house '155': hurricane '156': ice cream '157': jacket '158': jail '159': kangaroo '160': key '161': keyboard '162': knee '163': knife '164': ladder '165': lantern '166': laptop '167': leaf '168': leg '169': light bulb '170': lighter '171': lighthouse '172': lightning '173': line '174': lion '175': lipstick '176': lobster '177': lollipop '178': mailbox '179': map '180': marker '181': matches '182': megaphone '183': mermaid '184': microphone '185': microwave '186': monkey '187': moon '188': mosquito '189': motorbike '190': mountain '191': mouse '192': moustache '193': mouth '194': mug '195': mushroom '196': nail '197': necklace '198': nose '199': ocean '200': octagon '201': octopus '202': onion '203': oven '204': owl '205': paint can '206': paintbrush '207': palm tree '208': panda '209': pants '210': paper clip '211': parachute '212': parrot '213': passport '214': peanut '215': pear '216': peas '217': pencil '218': penguin '219': piano '220': pickup truck '221': picture frame '222': pig '223': pillow '224': pineapple '225': pizza '226': pliers '227': police car '228': pond '229': pool '230': popsicle '231': postcard '232': potato '233': power outlet '234': purse '235': rabbit '236': raccoon '237': radio '238': rain '239': rainbow '240': rake '241': remote control '242': rhinoceros '243': rifle '244': river '245': roller coaster '246': rollerskates '247': sailboat '248': sandwich '249': saw '250': saxophone '251': school bus '252': scissors '253': scorpion '254': screwdriver '255': sea turtle '256': see saw '257': shark '258': sheep '259': shoe '260': shorts '261': shovel '262': sink '263': skateboard '264': skull '265': skyscraper '266': sleeping bag '267': smiley face '268': snail 
'269': snake '270': snorkel '271': snowflake '272': snowman '273': soccer ball '274': sock '275': speedboat '276': spider '277': spoon '278': spreadsheet '279': square '280': squiggle '281': squirrel '282': stairs '283': star '284': steak '285': stereo '286': stethoscope '287': stitches '288': stop sign '289': stove '290': strawberry '291': streetlight '292': string bean '293': submarine '294': suitcase '295': sun '296': swan '297': sweater '298': swing set '299': sword '300': syringe '301': t-shirt '302': table '303': teapot '304': teddy-bear '305': telephone '306': television '307': tennis racquet '308': tent '309': The Eiffel Tower '310': The Great Wall of China '311': The Mona Lisa '312': tiger '313': toaster '314': toe '315': toilet '316': tooth '317': toothbrush '318': toothpaste '319': tornado '320': tractor '321': traffic light '322': train '323': tree '324': triangle '325': trombone '326': truck '327': trumpet '328': umbrella '329': underwear '330': van '331': vase '332': violin '333': washing machine '334': watermelon '335': waterslide '336': whale '337': wheel '338': windmill '339': wine bottle '340': wine glass '341': wristwatch '342': yoga '343': zebra '344': zigzag splits: - name: train num_bytes: 1759529671.8579164 num_examples: 4500000 - name: valid num_bytes: 97751648.4365509 num_examples: 250000 - name: test num_bytes: 97751648.4365509 num_examples: 250000 download_size: 1902583033 dataset_size: 1955032968.731018 --- # Dataset Card for "quickdraw-small" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yuhsinchan/nmsqa_seg
2023-06-25T18:11:59.000Z
[ "region:us" ]
yuhsinchan
null
null
null
0
3
--- dataset_info: features: - name: context_code sequence: int16 - name: context_cnt sequence: int16 - name: question_code sequence: int16 - name: question_cnt sequence: int16 - name: start_idx dtype: int64 - name: end_idx dtype: int64 splits: - name: train num_bytes: 159406324 num_examples: 87075 - name: dev num_bytes: 19749204 num_examples: 10493 download_size: 56905169 dataset_size: 179155528 --- # Dataset Card for "nmsqa_seg" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
FreedomIntelligence/alpaca-gpt4-chinese
2023-08-06T08:08:09.000Z
[ "region:us" ]
FreedomIntelligence
null
null
null
3
3
This dataset is used in research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT).
globis-university/aozorabunko-clean
2023-07-27T07:31:42.000Z
[ "task_categories:text-generation", "task_categories:text-classification", "size_categories:10K<n<100K", "language:ja", "license:cc-by-4.0", "region:us" ]
globis-university
null
null
null
4
3
--- license: cc-by-4.0 task_categories: - text-generation - text-classification language: - ja size_categories: - 10K<n<100K --- # Overview This dataset provides a convenient and user-friendly format of data from [Aozora Bunko (青空文庫)](https://www.aozora.gr.jp/), a website that compiles public-domain books in Japan, ideal for Machine Learning applications. # Methodology The code to reproduce this dataset is made available on GitHub: [globis-org/aozorabunko-extractor](https://github.com/globis-org/aozorabunko-extractor). ## 1. Data collection We first downloaded the [CSV file that lists all works](https://www.aozora.gr.jp/index_pages/person_all.html). The information extracted from this CSV is incorporated into the `meta` field. Next, we filtered out any books not categorized as public domain. We retrieved the main text of each book corresponding to every row in the CSV and incorporated it into the `text` field. ## 2. Deduplication We removed entries where the `図書カードURL` (Library card URL) in this CSV did not match the `作品ID` (Work ID) and `人物ID` (Person ID). In addition, entries with text identical to previously encountered text were discarded. ## 3. Cleaning The data in the `text` field was then cleaned in the following sequence: 1. Convert new lines to `\n` 2. Remove headers 3. Remove footnotes and add them to the `footnote` field 4. Remove ruby (phonetic guides) 5. Convert specific characters, such as external characters and iteration marks, into standard Unicode characters 6. Convert inserted notes into regular parenthetical text 7. Remove any remaining markup 8. Remove leading and trailing new lines and horizontal rules # Tips If you prefer to employ only modern Japanese, you can filter entries with: `row["meta"]["文字遣い種別"] == "新字新仮名"`. # License CC BY 4.0
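As a concrete sketch of the tip above, the modern-Japanese filter can be written as a plain predicate over rows. The sample rows below are illustrative stand-ins, not real entries from the corpus; with the real dataset the same predicate can be passed to `Dataset.filter` from the `datasets` library.

```python
# Sketch of the modern-Japanese filter described in the Tips section above.
# The rows here are illustrative stand-ins for real entries; with the real
# dataset you would do something like:
#   from datasets import load_dataset
#   ds = load_dataset("globis-university/aozorabunko-clean")
#   modern = ds["train"].filter(is_modern)

def is_modern(row):
    """Keep only works written with modern script and kana usage (新字新仮名)."""
    return row["meta"]["文字遣い種別"] == "新字新仮名"

rows = [
    {"text": "吾輩は猫である。", "meta": {"文字遣い種別": "新字新仮名"}},
    {"text": "舞姫", "meta": {"文字遣い種別": "旧字旧仮名"}},
]
modern = [r for r in rows if is_modern(r)]
```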
Whab/deepfake
2023-06-27T08:13:05.000Z
[ "region:us" ]
Whab
null
null
null
0
3
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': Fake '1': Real splits: - name: train num_bytes: 1553838685.12 num_examples: 179430 download_size: 1677949725 dataset_size: 1553838685.12 --- # Dataset Card for "deepfake" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AdiOO7/data
2023-06-27T10:58:56.000Z
[ "task_categories:text-classification", "size_categories:n<1K", "language:en", "code", "region:us" ]
AdiOO7
null
null
null
0
3
--- task_categories: - text-classification language: - en tags: - code pretty_name: Ticket-Category size_categories: - n<1K ---
usmiva/bg_ner_bsnlp
2023-06-27T19:53:03.000Z
[ "task_categories:token-classification", "language:bg", "license:apache-2.0", "region:us" ]
usmiva
null
null
null
0
3
--- license: apache-2.0 task_categories: - token-classification language: - bg --- # Dataset Card for Bulgarian Named Entity Recognition. The initial dataset is taken from the Balto-Slavic NLP shared task and further transformed into a format appropriate for token classification. The instances are shuffled and split into train and test sets. ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset was initially created for the BSNLP Shared Task 2019 and reported in the conference paper "The Second Cross-Lingual Challenge on Recognition, Normalization, Classification, and Linking of Named Entities across Slavic Languages". It was further improved in "Reconstructing NER Corpora: a Case Study on Bulgarian" and finally transformed into a CSV format appropriate for token classification on Hugging Face. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits train, test ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information @inproceedings{piskorski-etal-2019-second, title = "The Second Cross-Lingual Challenge on Recognition, Normalization, Classification, and Linking of Named Entities across {S}lavic Languages", author = "Piskorski, Jakub and Laskova, Laska and Marci{\'n}czuk, Micha{\l} and Pivovarova, Lidia and P{\v{r}}ib{\'a}{\v{n}}, Pavel and Steinberger, Josef and Yangarber, Roman", booktitle = "Proceedings of the 7th Workshop on Balto-Slavic Natural Language Processing", month = aug, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W19-3709", pages = "63--74" } @inproceedings{marinova-etal-2020-reconstructing, title = "Reconstructing {NER} Corpora: a Case Study on {B}ulgarian", author = "Marinova, Iva and Laskova, Laska and Osenova, Petya and Simov, Kiril and Popov, Alexander", booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2020.lrec-1.571", pages = "4647--4652", abstract = "The paper reports on the usage of deep learning methods for improving a Named Entity Recognition (NER) training corpus and for predicting and annotating new types in a test corpus. We show how the annotations in a type-based corpus of named entities (NE) were populated as occurrences within it, thus ensuring density of the training information. 
A deep learning model was adopted for discovering inconsistencies in the initial annotation and for learning new NE types. The evaluation results get improved after data curation, randomization and deduplication.", language = "English", ISBN = "979-10-95546-34-4", } ### Contributions [More Information Needed]
OpenLeecher/GPT4-10k
2023-06-28T11:53:45.000Z
[ "task_categories:text-generation", "size_categories:n<1K", "language:en", "license:apache-2.0", "region:us" ]
OpenLeecher
null
null
null
23
3
--- license: apache-2.0 task_categories: - text-generation language: - en size_categories: - n<1K --- The goal of this dataset was to siphon as much money as possible from a 20-dollar subscription that I forgot to cancel. Enjoy. --- 100 diverse GPT4 conversations. Features Coding, Debugging, Storytelling, Spatial Thinking, Logical Thinking, Chemistry, Physics, and a conversation or two about Biology and Law. ![Stats](https://gcdnb.pbrd.co/images/q4eVuliNyrWU.png?o=1) ![Costs](https://gcdnb.pbrd.co/images/TWrhEzoC5YmJ.png?o=1)
lyogavin/Anima33B_rlhf_belle_eval_1k
2023-06-28T00:24:01.000Z
[ "region:us" ]
lyogavin
null
null
null
2
3
--- dataset_info: features: - name: question dtype: string - name: std_answer dtype: string - name: class dtype: string - name: anima_answer dtype: string - name: anima_answer_extraced dtype: string - name: inputPrompt dtype: string - name: gpt_output dtype: string - name: gpt_output_score dtype: float64 - name: chosen dtype: string - name: rejected dtype: string - name: chosen_token_len dtype: int64 - name: rejected_token_len dtype: int64 splits: - name: train num_bytes: 2972300.1 num_examples: 700 - name: test num_bytes: 1273842.9 num_examples: 300 download_size: 2384211 dataset_size: 4246143.0 --- # Dataset Card for "Anima33B_rlhf_belle_eval_1k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
RyokoExtra/TvTroper
2023-06-29T06:38:12.000Z
[ "task_categories:text-classification", "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "training", "text", "region:us" ]
RyokoExtra
null
null
null
2
3
--- license: apache-2.0 language: - en tags: - training - text task_categories: - text-classification - text-generation pretty_name: TvTroper size_categories: - 100K<n<1M --- # Dataset Card for TvTroper *TvTroper is a raw public dataset of TvTropes.org pages.* ## Dataset Description - **Homepage:** (TODO) - **Repository:** N/A - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** KaraKaraWitch ### Dataset Summary TvTroper is a raw dataset dump consisting of text from at most 651,522 wiki pages (excluding namespaces and date-grouped pages) from tvtropes.org. ### Supported Tasks and Leaderboards This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes. - text-classification - text-generation ### Languages - English ## Dataset Structure The data is stored in jsonl files that have been compressed into a 20 GB .zip archive. ### Data Instances ```json ["https://tvtropes.org/pmwiki/pmwiki.php/HaruhiSuzumiya/TropesJToN","<!DOCTYPE html>\n\t<html>\n\t\t<head lang=\"en\">\n...<TRUNCATED>"] ``` ### Data Fields There are only 2 fields in each list: the URL and the retrieved content. The retrieved content may contain errors. If the page does not exist, the 404 error page is scraped. One specific URL, `https://tvtropes.org/pmwiki/pmwiki.php/JustForFun/RedirectLoop`, endlessly redirects to itself. As such, we have used the following html as a placeholder for such occurrences: ```html <!DOCTYPE html><html><head lang=\"en\"><title>Error: URL Exceeds maximum allowed redirects.</title></head><body class=\"\"><div>Error: URL Exceeds maximum allowed redirects.</div></body></html> ``` URLs may not match the final URL from which the page was retrieved, as redirects may have been followed while scraping. #### Q-Score Distribution Not Applicable ### Data Splits The jsonl files are split by their namespaces. 
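Since each jsonl line is a two-element JSON array of `[url, html]`, a line can be unpacked directly with the standard `json` module. The URL and HTML below are made-up placeholders mirroring the format shown above, not real scraped content.

```python
import json

# Each line of the jsonl files is a two-element JSON array: [url, html].
# This line is a made-up placeholder mirroring the format documented above.
line = '["https://tvtropes.org/pmwiki/pmwiki.php/Main/Example", "<!DOCTYPE html>\\n<html><body>stub</body></html>"]'

url, html = json.loads(line)

# Detect the redirect-loop placeholder page described above.
is_redirect_error = "Error: URL Exceeds maximum allowed redirects." in html
```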
## Dataset Creation ### Curation Rationale We have curated TvTropes.org as it serves as one of the best resources for common themes, narrative devices, and character archetypes that shape our various stories around the world. ### Source Data #### Initial Data Collection and Normalization None. No normalization is performed as this is a raw dump of the dataset. #### Who are the source language producers? The editors/users of TvTropes.org ### Annotations #### Annotation process No annotations are present. #### Who are the annotators? No human annotators. ### Personal and Sensitive Information We are certain there is no PII included in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content. It may also be useful for other languages depending on your language model. ### Discussion of Biases This dataset mainly contains TV tropes used in media. ### Other Known Limitations N/A ## Additional Information ### Dataset Curators KaraKaraWitch ### Licensing Information Apache 2.0, for all parts of which KaraKaraWitch may be considered the author. All other material is distributed under fair use principles. Ronsor Labs is additionally allowed to relicense the dataset as long as it has gone through processing. ### Citation Information ``` @misc{tvtroper, title = {TvTroper: Tropes & Others.}, author = {KaraKaraWitch}, year = {2023}, howpublished = {\url{https://huggingface.co/datasets/RyokoExtra/TvTroper}}, } ``` ### Name Etymology N/A ### Contributions - [@KaraKaraWitch (Twitter)](https://twitter.com/KaraKaraWitch) for gathering this dataset.
mrzlab630/trading-candles
2023-06-28T12:05:52.000Z
[ "region:us" ]
mrzlab630
null
null
null
7
3
Entry not found
tasksource/leandojo
2023-06-28T17:46:34.000Z
[ "license:cc-by-2.0", "region:us" ]
tasksource
null
null
null
1
3
--- license: cc-by-2.0 --- https://github.com/lean-dojo/LeanDojo ``` @article{yang2023leandojo, title={{LeanDojo}: Theorem Proving with Retrieval-Augmented Language Models}, author={Yang, Kaiyu and Swope, Aidan and Gu, Alex and Chalamala, Rahul and Song, Peiyang and Yu, Shixing and Godil, Saad and Prenger, Ryan and Anandkumar, Anima}, journal={arXiv preprint arXiv:2306.15626}, year={2023} } ```
Lycoris53/Japanese-Amitaro-VITS-TTS-Voice-data
2023-06-30T02:32:45.000Z
[ "license:apache-2.0", "region:us" ]
Lycoris53
null
null
null
1
3
--- license: apache-2.0 --- # Japanese-Amitaro-VITS-TTS-Voice-data Annotated Japanese voice data for VITS TTS training. All credit goes to Amitaro: [あみたろの声素材工房](https://amitaro.net) - Annotated JSON data: amitaro_with_kana.json - Annotated txt data: amitaro_train.txt - Python file: amitaro_html_parse.py Due to direct-link restrictions from the creator, the wav files can be found at [Amitaro Voice Lab.](https://amitaro.net/voice/voice_dl/) (press the link in the あみたろの声素材工房・圧縮ファイル置き場 section to download)
JacquesVlaming/Questions_Answers
2023-06-30T11:08:49.000Z
[ "region:us" ]
JacquesVlaming
null
null
null
0
3
--- dataset_info: features: - name: question dtype: string - name: context dtype: string - name: answer dtype: string splits: - name: train num_bytes: 921084 num_examples: 976 - name: validation num_bytes: 111135 num_examples: 108 download_size: 221671 dataset_size: 1032219 --- # Dataset Card for "Questions_Answers" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
aboli-marathe/WEDGE
2023-07-01T02:33:54.000Z
[ "task_categories:image-classification", "task_categories:object-detection", "size_categories:n<1K", "license:cc-by-nc-sa-4.0", "code", "arxiv:2305.07528", "region:us" ]
aboli-marathe
null
null
null
1
3
--- license: cc-by-nc-sa-4.0 task_categories: - image-classification - object-detection tags: - code pretty_name: WEDGE size_categories: - n<1K --- Dataset available at: https://github.com/Infernolia/WEDGE Dataset Website: https://infernolia.github.io/WEDGE/ Pre-print: https://arxiv.org/abs/2305.07528
FangyuLei/tatqa
2023-07-01T03:01:45.000Z
[ "license:bsd-3-clause", "region:us" ]
FangyuLei
null
null
null
1
3
--- license: bsd-3-clause ---
Veucci/lyric-to-3genre
2023-07-04T14:10:50.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:cc-by-nc-4.0", "music", "region:us" ]
Veucci
null
null
null
1
3
--- license: cc-by-nc-4.0 size_categories: - 1K<n<10K task_categories: - text-classification language: - en tags: - music --- # Song Lyrics Dataset ## Description This dataset contains a collection of song lyrics from various artists and genres in English. It is intended to be used for research, analysis, and other non-commercial purposes. ## Dataset Details The dataset is organized in a tabular format with the following columns: - `Genre` (int): Genre of the lyrics - `Lyrics` (str): The lyrics of the song. - Pop: 979 rows - Rock: 995 rows - Hip-Hop: 1040 rows ## Usage Feel free to use this dataset for non-commercial purposes such as academic research, natural language processing tasks, sentiment analysis, or personal projects. You are allowed to analyze, modify, and derive insights from the dataset. If you use this dataset in your work, we kindly request that you provide attribution by citing this repository or linking back to it. ## License This dataset is released under the Creative Commons Attribution-NonCommercial license. This means that you are not allowed to use the dataset for commercial purposes. For detailed information about the license, please refer to the [LICENSE](./LICENSE) file. ## Contact If you have any questions, suggestions, or concerns regarding this dataset, please feel free to reach out via email at [efe.ozkan732@gmail.com](mailto:efe.ozkan732@gmail.com). Happy exploring and analyzing the world of song lyrics!
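A minimal sketch of consuming the two columns described above for a classification task. Note that the card does not document which integer id corresponds to which genre, so the id-to-name mapping below is purely an assumption for illustration, as are the placeholder rows.

```python
# Hypothetical rows mirroring the `Genre` (int) / `Lyrics` (str) columns.
# The id-to-name mapping here is an ASSUMPTION, not documented by the dataset.
ID2GENRE = {0: "Pop", 1: "Rock", 2: "Hip-Hop"}

rows = [
    {"Genre": 0, "Lyrics": "placeholder pop lyric"},
    {"Genre": 2, "Lyrics": "placeholder hip-hop lyric"},
]

# Split into parallel text/label lists, the shape most classifiers expect.
texts = [r["Lyrics"] for r in rows]
labels = [ID2GENRE[r["Genre"]] for r in rows]
```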
AhmedBou/clinical_terms_synonyms
2023-07-21T15:18:20.000Z
[ "task_categories:text-classification", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "medical", "region:us" ]
AhmedBou
null
null
null
0
3
--- license: apache-2.0 task_categories: - text-classification language: - en tags: - medical size_categories: - 1K<n<10K --- A dataset consisting of 359 clinical trial terms, each accompanied by a list of synonyms.
pengxiang01/test
2023-08-16T07:57:20.000Z
[ "task_categories:tabular-to-text", "task_categories:table-to-text", "task_categories:multiple-choice", "task_categories:text-retrieval", "task_categories:time-series-forecasting", "task_categories:visual-question-answering", "task_categories:question-answering", "task_categories:zero-shot-image-classi...
pengxiang01
null
null
null
0
3
--- task_categories: - tabular-to-text - table-to-text - multiple-choice - text-retrieval - time-series-forecasting - visual-question-answering - question-answering - zero-shot-image-classification - depth-estimation language: - ab - ak - ar license: bsl-1.0 tags: - biology - code - medical pretty_name: sdfsad size_categories: - 10K<n<100K --- aasdfsdf