Dataset columns:
- license: string (2-30 chars)
- tags: string (2-513 chars)
- is_nc: bool (1 class)
- readme_section: string (201-597k chars)
- hash: string (32 chars)
other
['generated_from_trainer']
false
distilroberta-hatespeech

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3619
- Acc: 0.8423
7a036898502eba9a64b21d686cc8aa19
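The reported Acc is plain classification accuracy: the fraction of evaluation examples whose argmax logit matches the gold label. A minimal sketch in plain Python, with made-up two-class logits (the values below are purely illustrative, not from this model):

```python
def accuracy(logits, labels):
    """Fraction of rows whose argmax index matches the label."""
    correct = sum(
        1 for row, y in zip(logits, labels)
        if max(range(len(row)), key=row.__getitem__) == y
    )
    return correct / len(labels)

# toy logits for a 2-class (e.g. hate / not-hate) head
logits = [[0.2, 1.3], [2.1, 0.4], [0.9, 1.0], [1.5, 0.1]]
labels = [1, 0, 0, 0]
print(accuracy(logits, labels))  # 3 of 4 correct -> 0.75
```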
other
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 20
- mixed_precision_training: Native AMP
77fd611a075a91595163085554f5fe1b
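The `linear` scheduler with 16 warmup steps ramps the learning rate from 0 up to 2e-05 over the first 16 optimizer steps, then decays it linearly back to 0 by the final step. A minimal sketch of that shape (mirroring the behavior of Hugging Face's `get_linear_schedule_with_warmup`, not the library code itself; `total_steps=20105` is taken from the last row of the training results, not from the stated 20 epochs):

```python
def linear_warmup_lr(step, base_lr=2e-05, warmup_steps=16, total_steps=20105):
    """Linear warmup followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_warmup_lr(8))      # mid-warmup: 1e-05
print(linear_warmup_lr(16))     # peak: 2e-05
print(linear_warmup_lr(20105))  # final step: 0.0
```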
other
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Acc |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3096 | 1.0 | 4021 | 0.3375 | 0.8540 |
| 0.3711 | 2.0 | 8042 | 0.3305 | 0.8574 |
| 0.322 | 3.0 | 12063 | 0.3398 | 0.8534 |
| 0.3197 | 4.0 | 16084 | 0.3444 | 0.8504 |
| 0.3332 | 5.0 | 20105 | 0.3619 | 0.8423 |
199bf68189d83f06e3123ec4bb80e8cd
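The Step column is self-consistent: each epoch adds 4021 optimizer steps, and with train_batch_size 32 (assuming one step per batch, with a possibly partial final batch) that implies a training set of roughly 128.7k examples:

```python
import math

steps_per_epoch = 4021
batch_size = 32

# cumulative steps at the end of epochs 1..5, matching the Step column
print([steps_per_epoch * e for e in range(1, 6)])  # [4021, 8042, 12063, 16084, 20105]

# dataset sizes n for which ceil(n / batch_size) == 4021
lo = (steps_per_epoch - 1) * batch_size + 1   # 128641
hi = steps_per_epoch * batch_size             # 128672
assert math.ceil(lo / batch_size) == math.ceil(hi / batch_size) == steps_per_epoch
print(lo, hi)
```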
afl-3.0
[]
false
Model Description

We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters.

| Model | Description | Recommended Application |
| ----------- | ----------- | ----------- |
| rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) |
| rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge-intensive tasks, information extraction tasks, factual checker |
| rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) |
| rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction |
| rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains |
| rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction |
| rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | General text classification |
| rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, common sense reasoning |
| rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning |
| rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification |
| rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering |
| rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling |
| **rst-gaokao-writing-11b** | **Trained with example essays from past Gaokao-English exams and grammar error correction signals** | **Essay writing, story generation, grammar error correction and other text generation tasks** |
d8a93df79af0950e0dfdac68fc0e0329
afl-3.0
[]
false
**Art & Eros (Jan 06, 2023)** *https://civitai.com/models/3950/art-and-eros-aeros-a-tribute-to-beauty* ***Tags:*** artistic, armorgirl, knollingcase, nude, photograpyfantasy, sci fi, fantasy, punk, photorealistic, synthwave, cyber punk, nsfw, post apocalyptic, porn, hassan, portraits, elden ring style, girl, woman, realistic, apocalypse, cyborg, science fiction, dreamlikeart, dreamlike, mod ***Trigger Words:*** elden ring style, modelshoot style, dreamlikeart, postapocalypse, analog style, knollingcase, swpunk, synthwave, cyborgdiffusion ***Example 1:*** wide shot, a (muscular:1.1) (((naked))) girl, young [[[Gal Gadot]]], (small head), by Alphonse Mucha, (Mandy Jurgens), Granblue Fantasy, Greg Rutkowski, detailed face, detailed belly, PERFECT (((gorgeous FACE))), highly detailed, INTRICATE ***Example 2:*** a curvy (((naked))) girl as a (skimpy) futuristic battle armor, [MELISSA BENOIST], [Emma Watson], cinematic lighting, toned abs, perfect large breasts, thick ass ***Example 3:*** wide angle closeup close up nude portrait of a Dutch Appealing cute woman with hollywood hair wearing 30s medieval style Poncho , thong bikini , nude big tits, Defiance look, gesture motion on side in St Petersburg street, exposed hairy pussy, working as Public Relations Officer , outside Shoe Store with a Art Hoe mood, in front of a Disease man , Fill Light , Olive,Mulberry , sidelit, analog 85mm sharp focus, d750 hdr photo by Ying Tang , Engaging , focus on the eyes, rim halo light, 8k canon RAW, art photography, cold blue lighting, Microburst golden hour, hard light, movie still from Braveheart , knollingcase by Simon Bisley --------------------------------------------------------------- **Dreamlike Photoreal (Jan 04, 2023)** *https://civitai.com/models/3811/dreamlike-photoreal-20* ***Tags:*** photorealistic ***Example 1:*** lookin high quality studio photo of slim [(hs20yo19:1.13):0.6] [(hstei:0.1)::0.4] with (blonde bun hair :1.3) [(eyeshadows, smokyeyes, heavy clubbing 
makeup:1.35):0.3] , person smiling (21yo:0.1) (sitting with spread legs in a locker room:1.3), (visible perky nipples:1.3) (hs20yo9:1.17) (cleavage, big breasts :1.25)in ( (short cotton tshirt:1.4) and denim shorts:1.1), studio lighting, smiling fitness model (defined abs :1) Nikon, 8k, 1080p, 40mm, photoshop ***Example 2:*** photo, higly detailed, 8k, pretty woman making selfie, table in cafe outroom, wide angle, morning, colored, happy crowd around, paparaci, wind, dynamic scene, cinema like ***Example 3:*** (extremely detailed CG unity 8k wallpaper), young swedish woman, soft lighting, detailed face, concept art, digital painting, looking into camera. photorealistic, photorealism, greg rutkowski, trending on artstation, upper waist photo by Annie Leibovitz, film, studio lighting, detailed skin, ultra realistic, bokeh, sharp features, unreal engine cinematic smooth, intricate detail --------------------------------------------------------------- **Grapefruit (version 3)** *https://civitai.com/models/2583/grapefruit-hentai-model* ***Tags:*** anime, nsfw, hentai ***Example 1:*** masterpiece, 1girl, solo, animal ears, long hair, beach, red eyes, black hair, nude, large breasts, tongue, from above, choker, paw pose, cum, ***Example 2:*** (masterpiece), best quality, detailed, looking at viewer, ((nude) robotic girl sitting:1.3), mechanical, (cyberpunk city in background), beret, orange eyes, silver long hair, sigma 135mm lens, cowboy shot, medium breasts, night, from above, ***Example 3:*** masterpiece, best quality, detailed, 1girl, blonde hair, braid, sweets, candies, chocolates, cozy, warm, bangs, (messy room:1.2), light pink eyes, books, medium breasts, witch, [[spread legs]], thighhighs, lying, topless, pussy, --------------------------------------------------------------- ***GuoFeng (version 2)*** *https://civitai.com/models/8470/guofeng2* ***tags:*** style, anime, character, girl, woman, cartoon, realistic, 3d, chinese,game character, chinese dress ***Example 
1:*** best quality, masterpiece, highres, young girl, china dress,Beautiful face, earrings, hair ornament, upper body, orange eyes, long black hair, solo, light smile, ***Example 2:*** (Masterpiece), (Extremely detailed CG Unity 8k wallpaper), Best Quality, (Original Character Painting), ((cowboy shot)),(Solo), 1 Girl, (Medium Tits), (cleavage),((Brunetize)), Sweeping Bangs, (Extremely Delicate Beautiful), (Beautiful and Detailed Eye Description), (Beautiful and Detailed Facial Depiction), Standing, ((Embroidery)), ((Dao Robe)), Delicate Clothes Slipping Off Shoulders, Hair Accessories, Gemstone Necklaces, Delicate Faces, Look at the audience, ***Example 3:*** (best quality),((masterpiece)),(highres), original, (extremely detailed 8K wallpaper), overexposure,1girl,(medium breasts),(an extremely delicate and beautiful),(Beautiful and detailed eye description),(Beautiful and detailed facial depiction),(upper body),earrings,necklace,snow,snowflakes, bangs,Ice crystal,winter **Openjourney (version 1)** *https://civitai.com/models/86/openjourney-aka-midjourney-v4* ***Tags:*** style, midjourney ***Trigger Words:*** mdjrny-v4 style ***Example 1:*** OpenJourney 3 d goddess close - up profile portrait with ram skull. beautiful intricately detailed japanese crow kitsune mask and clasical japanese kimono. 
betta fish, jellyfish phoenix, bio luminescent, plasma, ice, water, wind, creature, artwork by tooth wu and wlop and beeple and greg rutkowski , mdjrny-v4 style ***Example 2:*** [[Barbara Palvin]], Alicia Vikander, Cyberpunk-rock, Flight Jacket, skimpy outfit, cool colorful dieselpunk, flower punk, atompunk, Ink Dropped in water, splatter drippings, frosted tips hair, lots of chains, spikes on a jacket, pulp Manga, cinematic lighting, in the style of Gediminas Pranckevicius, Moebius, (((PERFECT FACE))), ((PERFECT big BREAST)), (thick ass), highly detailed, (INTRICATE), (((detailed face))), ((detailed breast)), (detailed nipple), mdjrny-v4 style ***Example 3:*** 1984 big brother, cinematic, artstation, 8k, extremely detailed, dark color palette, detailed, hyperrealism, postprocessing, 8k, octane render, de-noise, blender render --------------------------------------------------------------- **PFG (version 2)** *https://civitai.com/models/1227/pfg* ***Tags:*** hental, porn, women ***Example 1:*** Nude girl!!! holding a cat, studio lighting!! trending on artstation 3d. 8k quality super realistic illustration by Wayne Barlowe and Gustave Dore lineart!!!!! of the character! full body shot!!!! 
hdr painting concept Art Zbrush Threu Bokowina popovnuk macro lens flare lights cute detailed photorealistic cinematic photography 35mm camera wide ***Example 2:*** threesome, nude, sweaty, big tits, facial, blond ***Example 3:*** (insanely detailed, bloom:1.5), ((solo)), (highest quality, Alessandro Casagrande, Greg Rutkowski, Sally Mann, concept art, 4k), (colourful), (high sharpness), ((detailed pupils)), red eyes, ((painting:1.1)), (digital painting:1.1), detailed face and eyes,Masterpiece, best quality, highly detailed photo:1, 8k, detailed face,photorealistic, (black Hair,ponytail hair cut, ecstatic:1.1),(18yo woman:1),By jeremy mann, by sandra chevrier, by maciej kuciara, ((Large Breast)), sharp, ((perfect body)), realistic, real shadow, 3d, ((black jacket)), black leather pants, (black sexy obsessive bra), ((full body)), ((cyberpunk night city background)), (by Michelangelo) --------------------------------------------------------------- **Protogen (version x5.8)** *https://civitai.com/models/3666/protogen-x34-photorealism-official-release* ***Trigger Words:*** modelshoot style, analog style, mdjrny-v4 style, nousr robot ***Example 1:*** modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, english medieval witch, black silk vale, pale skin, black silk robe, black cat, necromancy magic, sexy, medieval era, photorealistic painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski ***Example 2:*** actress Emilia Clarke naked, Daenerys Targaryen, HDR, super realistic, hyper realistic, hyper detailed, highly detailed, 4k, 8k, DSLR, photo, photo realistic, wide angle camera, slim girl, skinny girl, cute girl, a 15 year old girl, ((((teenage girl)))), ((small breast)), flat chest, ((tiny breasts)), 
((small tits)), ((little boobs)), ((ass|pussy)), girl in center, asian girl, (((one girl))), (((solo girl))), pink hair, (((girl wearing black leather suit))), ((legs spread)), (((girl sitting on motorcycle | red kawasaki ninja))), ((((motorcycle with dragon (drawing|print|design))))), white garage environment, ((fingerless leather bicycle gloves)), ((golden snake bracelets on her arms)), ((((Glamor Shot)))), ((((Golden Hour)))), ((Color Grading)), ((Bokeh)) ***Example 3:*** 64k uhd, sharp gamma, dan mumford colors, digital extremely detailed painting Artstation coral tentacles - By alexander jansson smeared watermelon pattern floating in the air thick liquid creative - 4k uhd, hyper detailed, ((steampunk)), lovecraft colors, epic composition, octane render, Metal Hellsinger style ***Example 4:*** full shot body photo of the most beautiful artwork in the world featuring bikini model, ((((small breasts))), ((((naked)))), ((((small boobs)))), smiling, freckles, sexy, High Detail, Sharp focus, dramatic, photorealistic, ultra sharp, ultra hd, hyper realistic, ultra realistic, no underwear, no bikini, no pants, no shorts, completely naked, wide open legs, showing pussy, (((holding her tits with both hands))), beautiful hands, beautiful fingers, fine fingers --------------------------------------------------------------- **RealEldenApocalypse_AnalogSexKnoll_4CandyPureSimp+FEET (version 1)** *https://civitai.com/models/1654/realeldenapocalypseanalogsexknoll4candypuresimpfeet* ***Tags:*** artistic, nude, science fiction, portraits, elden ring style, fantasy, photorealistic, nsfw, post apocalyptic, hassan, girl, woman, realistic, photography, knollingcase, sci fi, apocalypse ***Trigger Words:*** elden ring style, postapocalypse, knollingcase, analog style, bf ***Example 1:*** professional medium shot photo of skinny provoking nymphette with (curly pixie haircut) (ruby red hair) snub nose with detailed facial features and model eyes hanging out at meadow, soft shadows 
***Example 2:*** elden ring style, bf, Professional Photo, ((Front Shot)), ((Full Body)), (Clothed), ((wearing skimpy fantasy Maid To Tease White and Black Lace Apron with Ruffle details attached elastic garders adjustable criss cross back straps and ribbon waist tie), (Young Female:1.2), ((DARK ELF)), (dark grey skin), (standing), [grim dark:cyberpunk:0.75], (in A magical kingdom where everything is perfect and everyone is happy), Legs slightly bent, Curvy Fit body type, Medium breasts, (Puffy Nipples), (pokies), Neutral Expression, Shaved Pubic Hair, Small labia, tight pussy, (Perfect Large Ass), Perfect face, detailed eyes, succubus, (horns on head), ((magical glowing tattoos)), (((blood and dirt on clothes and skin))), distressed, Supple Skin Pores, (Dark scarlet colored hair), wet, depth of field, cinematic lighting, photographed on a Canon EOS-1D X Mark III, 50mm Sigma, ISO 100, (highly detailed:1.2), photorealism, HDR 4k, cinematic film still from the Lord of The Rings, Masterpiece ***Example 3:*** a young woman sitting and spreading her legs, full_body_shot, closeup, nsfw, sweaty, pussy, nipples, cinematic, detailed face, realistic face, photo realistic, elden ring style, knollingcase, analog style, bf ***Example 4:*** professional photo of a nude woman lying on her back on a bed with legs spread, full body, medium breast, smiling, highly detailed, 8k resolution --------------------------------------------------------------- **WoopWoop-Photo (version 1.2)** *https://civitai.com/models/4041/woopwoop-photo* ***Tags:*** men, realistic, photography, women, photorealistic, nsfw, hentai, hardcore, porn, anatomical, gay, penis, anatomy, realism, semi-realistic, hyperrealism, vagina, homoerotic, homosexual, lesbian, lgbtqia+, lgbtq, lgbt, queer, genderqueer ***Example 1:*** (((photographic, photo, photogenic, rule of thirds, dramatic lighting))), ((sexy)), (detailed face, detailed nose) (((mature woman))) ((thickthick)) (((wearing tank top, spaghetti straps))), 
((freckles)), long curly messy brown hair, ((collar or choker)), ((smirk)), ((tattoo)) ***Example 2:*** (((photographic, photo, photogenic, rule of thirds, candle lighting))), ((beautiful)), (detailed face, detailed nose) (((mature woman))) ((brown skin)) ((thick)) (((wearing summer dress))), , medium curly messy brown,brunette hair, ((collar or choker)), ((smile)), ((tattoo)) ***Example 3:*** (((photographic, photo, photogenic, rule of thirds, moody lighting))), ((face only)) ((beautiful)), (detailed face, detailed nose) (((mature woman))) on beach ((black skin)) ((thick)) (((wearing summer dress))), , short wavy brushed red,ginger hair, ((collar or choker)), ((smile)), ((tattoo)) --------------------------------------------------------------- ***Project Photo Beta 2.0 LITE (version 2)*** *https://civitai.com/models/5160/project-photo-beta-20-lite* ***Tags:*** photography, photograph, photorealistic ***Trigger Words:*** (lightroom) red:34% blue:53% green:43% filmgrain_minimal, texture:+25%, clarity: +40%, Contrast:+4%, shadows: +11% , sharpen:70% ***Example 1:*** Portrait of teen boy with blue hair and with cute face, North Pole Snow Vibe, perfect composition, hyperrealistic, super detailed, 8k, high quality, trending art, trending on artstation, sharp focus, studio photo, intricate details, highly detailed, by greg rutkowski (lightroom) red:68% blue:41% green:37% filmgrain_minimal, texture:+25%, clarity: +40%, Contrast:+15%, shadows: +11% , sharpen:100% ***Example 2:*** professional portrait photography, 85mm lens, gothic woman with red hair, centered, scenic background, perfect composition, golden ratio, hyperrealistic, photorealism, super detailed, 32k, high quality, trending on artstation, sharp focus, studio lighting, intricate details, hyperdetailed photography by greg rutkowski, dino tomic, (lightroom) red:68% blue:41% green:37% filmgrain_minimal, texture:+25%, clarity: +40%, Contrast:+15%, shadows: +11% , sharpen:100% ***Example 3:*** portrait 1girl, 
arms_behind_back, breasts, dress, hair_over_one_eye, jewelry, lips, medium_breasts, navel, necklace, pink_hair, realistic, short_hair, solo, (SEMI-SILHOUETTE light:1.1), (raytracing:1.1), (cryengine:1.1), (skin detail:1.1),(photrealistic:1.1) --------------------------------------------------------------- ***Project Unreal Engine 5 (version 2)*** *https://civitai.com/models/4752/project-unreal-engine-5* ***Tags:*** portraits, 3d, ultra realistic, real person ***Example 1:*** (1 girl) < intricate stunning highly detailed girl by artgerm and edouard bisson, pale eyes, long blonde hair, portrait, soft studio lighting, ultra realistic gold filigree detailed bodice, photorealistic, octane render, unreal engine, hyper detailed, volumetric lighting, hdr, octane render, 4k, 8K (skin defect: very few) (freckled face: very few) (Birthmark: 0,3) (greasy hair: 0,2) (clothes wrinkling: 0,5) (body scrub: 0,4) (perfect eyes: 1,0) (eyes size: 1,0) (lipsticked mouth: 1,5) (boobs size big) (age 25) (long hair minimum) (make up medium) (face skinny) (realistic fingers) (little nose) (NO TEXT) (attentive facial expression) (left and right hands five fingers) ***Example 2:*** portrait pale pink haired goddess, wearing byzantine gown | fantasy, hyper-detailed, accurate anatomy, symmetrical facial features, sharp focus, volumetric lighting, 16k | karol bak, yoshitaka amano, tom bagshaw, aurora, zbrush cel-shaded, cgsociety | ethereal beautiful astral vaporwave storybook illustration, dark fantasy ***Example 3:*** masterpiece portrait of Rei Ayanami \(evangelion\), evangelion \(Hideaki\), caustics, textile shading, high resolution illustration, red eyes, feminine, no pupils, blue hair, short hair, japanese school uniform, loafers, detailed school, japanese school hallway, japanese modern school in Tokyo, soft light, black stockings, torn stockings, indoors, wooden floor, hallway, at night, neon lights --------------------------------------------------------------- ***Openjourney (version 
1)*** *https://civitai.com/models/86/openjourney-aka-midjourney-v4* ***tags:*** style, midjourney ***Example 1:*** [[Barbara Palvin]], Alicia Vikander, Cyberpunk-rock, Flight Jacket, skimpy outfit, cool colorful dieselpunk, flower punk, atompunk, Ink Dropped in water, splatter drippings, frosted tips hair, lots of chains, spikes on a jacket, pulp Manga, cinematic lighting, in the style of Gediminas Pranckevicius, Moebius, (((PERFECT FACE))), ((PERFECT big BREAST)), (thick ass), highly detailed, (INTRICATE), (((detailed face))), ((detailed breast)), (detailed nipple), mdjrny-v4 style ***Example 2:*** OpenJourney 3 d goddess close - up profile portrait with ram skull. beautiful intricately detailed japanese crow kitsune mask and clasical japanese kimono. betta fish, jellyfish phoenix, bio luminescent, plasma, ice, water, wind, creature, artwork by tooth wu and wlop and beeple and greg rutkowski , mdjrny-v4 style ***Example 3:*** mdjrny-v4 style of an oil painting of a flower (dragon skull:1.1) as vase on a table with a white cloth on it and a white tablecloth, (flying skull moths:1.1), impressionist painting, vivid, painting by (Leonid Afremov:1.2), Patrice Murciano --------------------------------------------------------------- ***Realistic Vision (version 1.3)*** *https://civitai.com/models/4201/realistic-vision-v13* ***tags:*** character, realistic, photorealistic, nsfw, anatomical, semi-realistic, cgi ***Trigger Words:*** analog style, modelshoot style, nsfw, nudity ***Example 1:*** girl, (pale skin:0.1), techwear, city, (detailed skin:1.4), realistic, film grain, natural light ***Example 2:*** RAW photo, a wide shot photo of 21 y.o woman in swimsuit clothes, long haircut, pale skin, slim body, ((full body)), background is grassy meadow, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 ***Example 3:*** RAW photo, a close up portrait photo of Natasha Romanoff in string bikini clothes, redhair,long hair, pale skin, 
background is new york, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3 --------------------------------------------------------------- ***Kenshi (version 01)*** *https://civitai.com/models/3850/kenshi* ***Tags:*** anime, semi-realistic, bochen, nixeu, guweiz, wlop ***Example 1:*** (midriff), middle_finger, (makima:1.2) close up, (makima_eye:1.2), (glowing_eye:1.2), scary, detailed_eyes, ((face_focus, zoomed_face, zoomed_in, bokeh, underwater, water, wet_surface,)) ((face_portrait:1.2)), (duality_style:0.3), (line_style:1), (minimal_gradient:0.7), (nixeu_basic2:0.7), (nixeu_extra:0.7),(nixeu_soft:0.7),(nixeu_white:0.7), (dark_fantasy:1), (flame_surge_style:1), ((sitting on a throne)), bloody, evil, dark,moody, spooky background, villian, colorful, beautiful, braided, (chains), somber expression, looking down, dark energy, colorful, vibrant colors, portal to another world, red nail polish, side view, ultra realistic, intricate details, elegant, hyper realistic, tonemapping, hyperfocus, sharp focus, hyperdetailed,intricated detail, shiny, realism, [colorful], [volumetric lighting],photorealistic realistic, luxurious, close_up, 8k, detailed, unreal engine 5, ray tracing, 8k, cinematic, depth of field, octane render,realistic lighting, cinematic lighting, small gold particles, best_quality, big smile, oversized jacket, good_anatomy, highly detailed, fit, ultra realistic, highres, superb, 8k wallpaper, extremely detailed, intricate, limited palette, ,smile, (freckles:0.5), small details, ultra detailed, close fists, c_cup, confused, fragance rose, red, black, white, gold, Decapitation of the damned, 3d,3ds, unreal engine 5, volumetric lighting, realistic, realistic lighting, cinematic, 4k, cinematic lighting, 8k, depth of field 3d, 3ds, masterpiece, perfect, award-winning,hyper-detailed, photorealistic, ultra realistic, realistic light, unity, hard lighting, intricate details, stop motion, hyperfocus, tonemapping, sharp 
focus, hyper detailed, scary, zoom out ***Example 2:*** masterpiece, best quality, ultra-detailed, illustration, random, island, tropical, clear skies, blue water, sandy beaches, palm trees, exotic flowers, lush vegetation, diverse wildlife, seabirds, boats, ships, waterfalls, canyons, cliffs, caves, ancient ruins, detailed background, a mix of different elements, surreal, dreamlike, abstract, a blend of different landscapes, cultures, architecture, nature, elements of fantasy, science fiction, mystery, depth, dimension, light and shadows, ***Example 3:*** ((female:1.2)), 💥☠️🔮✨, beatiful young woman, human, v-shaped chin, (perfectly symmetrical face), ((villanous facial expression)), ((cyberpunk:1.0, retowave:0.8 colorful:1.2 outfit)), blurred environment background, neon energy halo in her back, (perfectly shaped eyes:0.8), dark black hair, tied hair, pale skin, portrait, digital art, concept art, post processed, dynamic lighting, (painted by bochen and wlop, stylized by nixeu and greg rutkowski), trend on pixiv, perfect composition, cinematic, moody, rule of thirds, majestic, detailed, sharp details, sharp focus, perfect anatomy, shiny, masterpiece, award-winning photography, fine-tuning face, masterpiece ***Example 4:*** sam yang, 1girl, (jellyfish hair:1.5), (peach hair:1.1), (flattop:1.4), hair clip, covered nipples, puffy nipples, raglan top, jeans, detailed_eyes, spoken_heart, arms behind back, large breasts, <lora:samdoesartsSamYang_normal:0.95> ***Example 5:*** (male:1.2), adult face, symmetrical face, sharp eyes, orange eyes, long yellow orange hair, man with unique power, dream power, (wearing a blue cloak), (glowing_eye: 1.1), alone, energy around him (anime_style:1.1), (semi-style:1.0), (pixel-style:0.2), (detailed) (Face_focus:1.2), Close up shot, upper body shot, posing, looking forward, --------------------------------------------------------------- ***ChilloutMix (version fp32)*** *https://civitai.com/models/6424/chilloutmix* ***Example 1:*** 
parameters best quality, ultra high res, (photorealistic:1.35),(Korean:1.1) ,ultra-detailed,incredibly detailed,(an extremely delicate and beautiful),detailed cg 8k wallpaper,(nsfw:1.4641),POV, (half naked hanfu:1.8), (realistic humid skin:1.2),(solo:1.4), (1girl:1.1),(hanfugirl:1.6),(open clothes:1.4), (off shoulder:1.1), (looking at viewer:1.331), (large breasts:1.71),(clear fingers:1.5), (shiny skin:1.41), armlet, bangle, anklet, black hair, blunt bangs, parted bangs, high ponytail, hair rings, half updo, braided bun, (widow's peak:1.21), hair ornament, earrings,(Standing in the water:1.331), (parted lips:1.1), (eyelashes:1.1), (happy:1.6), (depth of field:1.1), lens flare, (chromatic aberration:1.1), (caustics:1.1), in summer, (water:1.331), branch, (beautiful detailed sky:1.331), (flower on liquid:1.331),white clothes,Mouth slightly open, beautiful detailed eyes,(scattered luminous petals:1.331), (style-keta:0.78), (qrx:0.51),gbf ***Example 2:*** (head to toe:1.4), a fantasy blonde princess in lingerie, doggystyle, legs, thighs, white skin, slender, 18 years old, looking at viewer, 1girl, princess, hair ornament, jewelry, necklace, bracelet, cleavage, gold bra, gold panties, gold thighhighs, lot of jewelry, inside a castle background, erotic pose, candles, navel, midriff, red curtains, beautiful, round face, ***Example 3:*** (masterpiece:1.0), (best quality:1.4), (ultra highres:1.2), (photorealistic:1.4), (8k, RAW photo:1.2), (soft focus:1.4), 1 young girl, (18yo:1.3), (sharp focus:1.4), (Japanese:0.7), (russian:1.1), detailed beautiful face, black hair, (detailed maid crothes:1.4), (lace choker:1.2), beautiful white shiny humid skin ***Example 4:*** (masterpiece:1.0), (best quality:1.4), (ultra highres:1.2), (delicate illustration:1.4), (renaissance art:1.4), (8k, RAW photo:1.2), (soft focus:1.4), 1 young girl, (18yo:1.3), (sharp focus:1.4), (Japanese:1.0), (korean:0.7), detailed beautiful face, black hair, (detailed maid crothes:1.4), (lace choker:1.2), 
beautiful white shiny humid skin ***Example 5:*** 4k, high-res, masterpiece, best quality, ((Hasselblad photography)), (Korean K-pop idol), finely detailed skin, ((pale white skin)), sharp focus, (cinematic lighting), collarbone, (overcast tone), overcast whitebalance, morning, soft lighting, narrow waist, dynamic angle, [:(detailed face:1.2):0.2], (PureErosFace_V1), armpit crease, lewd pose, natural breasts, snowy white skin, winter clothings, groin, thigh gap, slender, ((highleg bikini)), scarf, beret, thongs, ((sagging breasts)) ***Example 6:*** Perfect full body photo of a 16yo cute girl,(Elf) fairy,cute hairstyle,(Sexy wet (Epic fantasy gorgeous dress) translucent beautyfull intricacy clothing decorative pattern details multicolor gown),cute delicate face,symmetrical leg,large breasts,sex happy,hairy wet pussy cum dildo,pale skin pores,hoop earrings ***Example 7:*** european girl, best quality, ultra high res, (photorealistic:1.4), autumn, street, stilettos, long grey coat, stockings, panties, perfect body, small breasts, nipples, (blond short hair:1), ((puffy eyes)), happy, full body --------------------------------------------------------------- ***Uber Realistic Porn Merge (URPM) (version 1.2)*** *https://civitai.com/models/2661/uber-realistic-porn-merge-urpm* ***Tags:*** portraits, character, girl, woman, realistic, photography, person, women, fantasy, photorealistic, merge, nsfw, sexy, blend, sex, hardcore, porn, nude, pussy, lewd ***Example 1:*** wide angle pussy and ass, (woman) porn, tight (asshole), natural boobs, big tits ***Example 2:*** a hot frightened helpless, screaming young woman riding dick of a creepy monster, (((penis penetrating asshole))), (focus on asshole), (detailed dandruff penis), fucked hard, ((detailed facial features)), very detailed face , wide-angle, (full body), digital art, high contrast dynamic lighting, horror fantasy, intricate detail, sharp focus, masterpiece, anatomical details, full body shot, 8k , ultra wide angle 
***Example 3:*** 20 year old k-idol, 1 girl, 1 man, boyfriend, (sharp focus:1.4), (smile:1.1), (realistic humid skin:1.4), (beautiful face:1.1), detailed eyes, detailed face, (small breasts:1), (curvy body:0.8), (long black ponytail hair:1.2), bangs, black eyes, depth of field, nude, naked, best quality, ultra high res, (photorealistic:1.4), (aegyo sal:1), ((puffy eyes)), full body, ((legs spread on cock)), ((super wet skin)), (moaning), horny, pussy, ((Sexual intercourse)), ((sex)), ((fucked by man)), ((POV from below)), ((Sexual penetration)), ((vast cum on woman's legs)), ((vast cum on woman's pussy)), ((5 fingers)), hetero, ((1girl above 1man)), ((1man below 1girl)), (((cowgirl position))), (straddling), luxury hotel, ((suite room)), bed, side lighting, high contrast ***Example 4:*** 20 year old k-idol, 1 girl, 1 man, boyfriend, (sharp focus:1.4), (smile:1.1), (realistic humid skin:1.4), (beautiful face:1.1), detailed eyes, detailed face, (small breasts:1), (curvy body:0.8), (long black ponytail hair:1.2), bangs, black eyes, depth of field, nude, naked, best quality, ultra high res, (photorealistic:1.4), (aegyo sal:1), ((puffy eyes)), full body, ((legs spread on cock)), ((super wet skin)), (moaning), horny, pussy, ((Sexual intercourse)), ((sex)), ((fucked by man)), ((POV from below)), ((Sexual penetration)), ((vast cum on woman's legs)), ((vast cum on woman's pussy)), ((5 fingers)), hetero, ((1girl above 1man)), ((1man below 1girl)), (((cowgirl position))), (straddling), luxury hotel, ((suite room)), bed, side lighting, high contrast, sexy lingeries ***Example 5:*** (iphone shot), (uncovered Nipples:1.4), (perfect face), (pretty face), ((indonesian hijab)), (white_skin), (style-glass:1.1)), indonesian girl with hijab showing wet pussy to camera, looking to camera, no bra, no panties, nipples through material, shadowed eyes, Intricate, High Detail, Sharp focus, porn, jakarta, monas, thamrin, bundaran_HI, transjakarta, stasiun, gojek ***Example 6:*** ((best 
quality)), ((ultra res)), ((photorealistic:1.4)), (intricate details), 19 years old, blonde hair, perfect face, make up:1.5, light on face, face detail, ***Example 7:*** RAW photo, ((chromatic aberration)), ((caustic)), ((detailed face)),nude woman posing for a picture in front of a window with her hand up, smiling, hairy pussy, trending on ArtStation Pixiv, high detail, sharp focus, smooth,aesthetic ,8k uhd, dslr, soft lighting, high quality, film grain ***Example 8:*** A blonde punk woman with stitches on her face stands in a dark urban setting, holding a liquor bottle. She is dressed in tattered punk clothes and has a cheerful expression. The street lights highlight her unique appearance in a medium shot. The image conveys a sense of individuality, rebellion, and carefree joy. body blend by the light. --------------------------------------------------------------- ---------------------------------------------------------------
abc0ed5bb42bc85cc5761e35d8fc6d1c
apache-2.0
['image-classification', 'timm']
false
Model card for levit_conv_128s.fb_dist_in1k
A LeViT image classification model using convolutional mode (nn.Conv2d and nn.BatchNorm2d, per the `levit_conv` prefix). Pretrained on ImageNet-1k using distillation by the paper authors.
32a200c5ea7c11b96585afb425645546
apache-2.0
['image-classification', 'timm']
false
Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 7.8
  - GMACs: 0.3
  - Activations (M): 1.9
  - Image size: 224 x 224
- **Papers:**
  - LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference: https://arxiv.org/abs/2104.01136
- **Original:** https://github.com/facebookresearch/LeViT
- **Dataset:** ImageNet-1k
ddcc76a45049ebd7db096cfdeb430eb7
apache-2.0
['image-classification', 'timm']
false
Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('levit_conv_128s.fb_dist_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
969faed70d368fd4d96001c650ecb2dc
apache-2.0
['image-classification', 'timm']
false
Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'levit_conv_128s.fb_dist_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
```
90c6f1ffc403acd502a3df59a1c3708b
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'levit_conv_128s.fb_dist_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))

for o in output:
    # print shape of each feature map in output
    print(o.shape)
```
c9b3487bbbba9e849991f0fe110a8bea
apache-2.0
['summarization', 'generated_from_trainer']
false
t5-small-finetuned-cnn
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set:
- Loss: 1.8436
- Rouge1: 33.2082
- Rouge2: 16.798
- Rougel: 28.9573
- Rougelsum: 31.1044
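The ROUGE-1 score above is the F-measure of unigram overlap between a candidate summary and a reference. A minimal sketch of that computation, assuming plain whitespace tokenization (the official `rouge_score` package that produces the numbers above additionally applies stemming and covers ROUGE-2/L):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    # clipped unigram overlap between candidate and reference
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Scores are in [0, 1] here; the card reports them scaled by 100.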
912ae81416d3028f43a79bc417adb9c0
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.3793 | 1.0 | 359 | 1.8885 | 33.0321 | 16.7798 | 28.9367 | 30.9509 |
| 2.1432 | 2.0 | 718 | 1.8481 | 33.1559 | 16.8557 | 29.015 | 31.1122 |
| 2.0571 | 3.0 | 1077 | 1.8391 | 32.99 | 16.716 | 28.8118 | 30.9178 |
| 2.0001 | 4.0 | 1436 | 1.8357 | 33.0543 | 16.6731 | 28.8375 | 30.9604 |
| 1.9609 | 5.0 | 1795 | 1.8437 | 33.1019 | 16.7576 | 28.8669 | 31.001 |
| 1.925 | 6.0 | 2154 | 1.8402 | 33.1388 | 16.7539 | 28.8887 | 31.0262 |
| 1.9036 | 7.0 | 2513 | 1.8423 | 33.1825 | 16.759 | 28.9154 | 31.0656 |
| 1.8821 | 8.0 | 2872 | 1.8436 | 33.2082 | 16.798 | 28.9573 | 31.1044 |
eff20f98b884de430a3e20e6099e1406
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
8a614a04346255b3fae6cf6c2f844b7f
apache-2.0
['generated_from_trainer']
false
correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5794
- Precision: 0.0094
- Recall: 0.0147
- F1: 0.0115
- Accuracy: 0.7156
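The Precision/Recall/F1 above are standard token-classification metrics. A minimal sketch of how they follow from true-positive, false-positive, and false-negative counts (the entity-level counting itself is typically done by a library such as seqeval; that is an assumption, not stated in the card):

```python
def prf(tp: int, fp: int, fn: int):
    # precision: fraction of predicted entities that are correct
    precision = tp / (tp + fp) if tp + fp else 0.0
    # recall: fraction of gold entities that were found
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1: harmonic mean of the two
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Near-zero F1 with ~0.72 accuracy, as reported above, is typical when the model predicts the majority "O" tag almost everywhere.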
a36b50ad32957eaa2974f3c19a02c0a3
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
d9eb747810dafccdd7d3a0a0a3858b61
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6319 | 0.08 | 0.0312 | 0.0449 | 0.6753 |
| No log | 2.0 | 20 | 0.6265 | 0.0364 | 0.0312 | 0.0336 | 0.6764 |
| No log | 3.0 | 30 | 0.6216 | 0.0351 | 0.0312 | 0.0331 | 0.6762 |
| No log | 4.0 | 40 | 0.6193 | 0.0274 | 0.0312 | 0.0292 | 0.6759 |
| No log | 5.0 | 50 | 0.6183 | 0.0222 | 0.0312 | 0.0260 | 0.6773 |
5074d9da68e435f562294141e8f03738
creativeml-openrail-m
['text-to-image']
false
Final Fantasy XIV Part One
Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model.
You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew
If you want to support the EARTH & DUSK media projects monthly, and not just AI: https://www.patreon.com/earthndusk
fntsy1 (use that in your prompt)
d5a50a89bb9ecbc2116ec6c26dc24042
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Chinese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 zh-CN dataset. It achieves the following results on the evaluation set:
- Loss: 0.3946
- Wer: 72.3626
e448e862ba1488668e0d58df796328f5
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5179 | 2.02 | 1000 | 0.3333 | 72.9831 |
| 0.1273 | 4.04 | 2000 | 0.3562 | 73.9621 |
| 0.0163 | 6.06 | 3000 | 0.3790 | 73.9708 |
| 0.004 | 8.07 | 4000 | 0.3946 | 72.3626 |
| 0.025 | 11.0 | 5000 | 0.4019 | 72.6772 |
8457c66ae0c9a9c4008a61ab4457485f
apache-2.0
[]
false
distilbert-base-en-sw-cased
We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages.
Our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
1ddbb9637e0519a606ac51bb440e0125
apache-2.0
[]
false
How to use
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-sw-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-sw-cased")
```
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
4415eace8e9e896a192f6bd46c9a04eb
apache-2.0
['generated_from_trainer']
false
bert-fa-base-uncased-finetune_on_hoshfa
This model is a fine-tuned version of [HooshvareLab/bert-fa-base-uncased](https://huggingface.co/HooshvareLab/bert-fa-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.5274
043ca499ef0af8bd27b17fb4a329ebbd
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
b3ddf1d72305457c3d02c7ac695694c4
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3643 | 1.0 | 1604 | 2.1323 |
| 1.5142 | 2.0 | 3208 | 2.1392 |
| 0.8834 | 3.0 | 4812 | 2.5274 |
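Since this is a masked-language-modeling fine-tune, the cross-entropy losses in the table can be read as perplexities via exp(loss); a minimal sketch using the final eval loss reported above:

```python
import math

def perplexity(cross_entropy_loss: float) -> float:
    # perplexity is the exponential of the mean cross-entropy loss
    return math.exp(cross_entropy_loss)

# final eval loss from the table above
final_ppl = perplexity(2.5274)
```

The rising eval loss from epoch 1 to 3 (2.13 to 2.53) corresponds to perplexity worsening from roughly 8.4 to 12.5, a sign of overfitting.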
ff51c0e8ca6953b7e41efba1c9188dfa
apache-2.0
['generated_from_trainer']
false
Examples of "green patents" titles: - "A method for recycling waste" - score: 0.714 - "A method of reducing pollution" - score: 0.786 - "An apparatus to improve environmental aspects" - score: 0.570 - "A method to improve waste management" - score: 0.813 - "A device to use renewable energy sources" - score: 0.98 - "A technology for efficient electrical power generation"- score: 0.975 - "A method for the production of fuel of non-fossil origin" - score: 0.975 - "Biofuels from waste" - score: 0.88 - "A combustion technology with mitigation potential" - score: 0.947 - "A device to capture greenhouse gases" - score: 0.871 - "A method to reduce the greenhouse effect" - score: 0.887 - "A device to improve the climate" - score: 0.650 - "A device to stop climate change" - score: 0.55
2557622aa89263cf90aafeef944b537d
apache-2.0
['generated_from_trainer']
false
Examples of the model's limitations
- "A method to avoid trash" - score: 0.165
- "A method to reduce trash" - score: 0.333
- "A method to burn the Amazonas" - score: 0.501
- "A method to burn wood" - score: 0.408
- "Green plastics" - score: 0.126
- "Greta Thunberg" - score: 0.313 (How dare you, model?); BUT: "A method of using Greta Thunberg to stop climate change" - score: 0.715

Examples were inspired by https://www.epo.org/news-events/in-focus/classification/classification.html
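The scores above read as a single "greenness" probability; a minimal sketch of turning such scores into labels with a 0.5 cutoff (the threshold is an assumption, not stated in the card), using examples listed above:

```python
def is_green(score: float, threshold: float = 0.5) -> bool:
    # assumption: the reported score is the probability of the "green" class
    return score >= threshold

# scores copied from the example lists above
examples = {
    "A method for recycling waste": 0.714,
    "A method to burn the Amazonas": 0.501,  # limitation: barely crosses the threshold
    "Green plastics": 0.126,                 # limitation: missed despite sounding green
}
labels = {title: is_green(score) for title, score in examples.items()}
```

This makes the limitation concrete: "A method to burn the Amazonas" would be labeled green at this threshold.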
68f767bc006f9c7cc7844c8d97d4fd07
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-greenpatent
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [green patent dataset](https://huggingface.co/datasets/cwinkler/green_patents). The green patent dataset was split into 70% training data and 30% test data (using `.train_test_split(test_size=0.3)`).
The model achieves the following results on the evaluation set:
- Loss: 0.3148
- Accuracy: 0.8776
- F1: 0.8770
15de3e374cb71a942322be39a7035ca6
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4342 | 1.0 | 101 | 0.3256 | 0.8721 | 0.8712 |
| 0.3229 | 2.0 | 202 | 0.3148 | 0.8776 | 0.8770 |
795c66f3b29f1eb267a7c7f03fa37619
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-russian-demo-kaggle
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.9997
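The Wer reported above is word error rate: the word-level Levenshtein distance between hypothesis and reference, divided by the reference word count. A minimal sketch (the Trainer typically computes this via a library such as jiwer; that is an assumption):

```python
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[j] holds the edit distance between ref[:i] and hyp[:j]
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev_diag, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            prev_diag, d[j] = d[j], min(d[j] + 1,         # deletion
                                        d[j - 1] + 1,     # insertion
                                        prev_diag + cost) # substitution
    return d[-1] / len(ref)
```

A WER of 0.9997, as above, means virtually every reference word is wrong: together with the infinite loss, a sign the run diverged.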
d921c3c34e91ee2de6e2a67d928ad5df
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
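The linear scheduler with 1000 warmup steps ramps the learning rate up and then decays it linearly to zero, and the total train batch size is the per-device batch times the accumulation steps. A minimal sketch (total_steps = 14500 is inferred from the results table and is an assumption):

```python
def linear_warmup_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=14500):
    # linear warmup to base_lr, then linear decay to zero
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# effective batch size: per-device batch x gradient accumulation steps
total_train_batch_size = 12 * 2  # = 24, as listed above
```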
c0b7dd5926f00024d72ae98764f72943
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0102 | 1.03 | 500 | inf | 0.9997 |
| 0.0068 | 2.06 | 1000 | inf | 0.9997 |
| 0.0 | 3.09 | 1500 | inf | 0.9997 |
| 0.0313 | 4.12 | 2000 | inf | 0.9997 |
| 0.0 | 5.15 | 2500 | inf | 0.9997 |
| 0.0052 | 6.19 | 3000 | inf | 0.9997 |
| 0.0287 | 7.22 | 3500 | inf | 0.9997 |
| 0.0 | 8.25 | 4000 | inf | 0.9997 |
| 0.01 | 9.28 | 4500 | inf | 0.9997 |
| 0.0 | 10.31 | 5000 | inf | 0.9997 |
| 0.3919 | 11.34 | 5500 | inf | 0.9997 |
| 0.0 | 12.37 | 6000 | inf | 0.9997 |
| 0.0 | 13.4 | 6500 | inf | 0.9997 |
| 0.0 | 14.43 | 7000 | inf | 0.9997 |
| 0.6422 | 15.46 | 7500 | inf | 0.9997 |
| 0.0 | 16.49 | 8000 | inf | 0.9997 |
| 0.0 | 17.53 | 8500 | inf | 0.9997 |
| 0.0 | 18.56 | 9000 | inf | 0.9997 |
| 0.0 | 19.59 | 9500 | inf | 0.9997 |
| 0.0 | 20.62 | 10000 | inf | 0.9997 |
| 0.0427 | 21.65 | 10500 | inf | 0.9997 |
| 0.0 | 22.68 | 11000 | inf | 0.9997 |
| 0.0 | 23.71 | 11500 | inf | 0.9997 |
| 0.0 | 24.74 | 12000 | inf | 0.9997 |
| 0.0091 | 25.77 | 12500 | inf | 0.9997 |
| 0.1243 | 26.8 | 13000 | inf | 0.9997 |
| 0.0 | 27.83 | 13500 | inf | 0.9997 |
| 0.0 | 28.87 | 14000 | inf | 0.9997 |
| 0.0 | 29.9 | 14500 | inf | 0.9997 |
28e78b679748ada9d7501c1f9ac9e07d
apache-2.0
['generated_from_trainer']
false
bert-finetuned-pos
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0580
- Precision: 0.9348
- Recall: 0.9502
- F1: 0.9424
- Accuracy: 0.9868
1e07e33b024b321acb59fe772bf5be1d
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0875 | 1.0 | 1756 | 0.0680 | 0.9158 | 0.9352 | 0.9254 | 0.9826 |
| 0.0321 | 2.0 | 3512 | 0.0611 | 0.9289 | 0.9448 | 0.9368 | 0.9856 |
| 0.0222 | 3.0 | 5268 | 0.0580 | 0.9348 | 0.9502 | 0.9424 | 0.9868 |
2dd015dfa1414c9706f150c27e0eeffa
creativeml-openrail-m
['coreml', 'stable-diffusion', 'text-to-image']
false
Healy's Anime Blend V1.7:
Source(s): [CivitAI](https://civitai.com/models/1400/healys-anime-blend)
This is a blend of some anime models mixed with "realistic" stuff to get a look I've been trying to accomplish for a while. I'm pretty happy with what it outputs, but judge that for yourself.
I can't for the life of me remember what I put into this model. I take no credit whatsoever; I just smashed rocks together like a caveman and the outcome somehow worked.
It can create NSFW stuff too, I think, but I've noticed the outcomes remain pretty tolerable with "cleavage" in the negative prompts.
0ad954bc56d58e2848ef078f8c08009d
apache-2.0
['translation']
false
fiu-fiu
* source group: Finno-Ugrian languages
* target group: Finno-Ugrian languages
* OPUS readme: [fiu-fiu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-fiu/README.md)
* model: transformer
* source language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* target language(s): est fin fkv_Latn hun izh krl liv_Latn vep vro
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID)
* download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.zip)
* test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.test.txt)
* test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.eval.txt)
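As noted above, inputs must begin with a sentence-initial `>>id<<` target-language token; a minimal sketch of preparing a source sentence (the helper name is illustrative, not part of the released model):

```python
def add_target_token(sentence: str, target_lang: str) -> str:
    # the model expects a sentence-initial token of the form >>id<<,
    # where id is one of the target language IDs listed above (est, fin, ...)
    return f">>{target_lang}<< {sentence}"

src = add_target_token("Hyvää huomenta!", "est")
```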
22c16b1b24f3ca27e72a0e439fbfdcec
apache-2.0
['translation']
false
Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.est-est.est.est | 2.0 | 0.252 |
| Tatoeba-test.est-fin.est.fin | 51.0 | 0.704 |
| Tatoeba-test.est-fkv.est.fkv | 1.1 | 0.211 |
| Tatoeba-test.est-vep.est.vep | 3.1 | 0.272 |
| Tatoeba-test.fin-est.fin.est | 55.2 | 0.722 |
| Tatoeba-test.fin-fkv.fin.fkv | 1.6 | 0.207 |
| Tatoeba-test.fin-hun.fin.hun | 42.4 | 0.663 |
| Tatoeba-test.fin-izh.fin.izh | 12.9 | 0.509 |
| Tatoeba-test.fin-krl.fin.krl | 4.6 | 0.292 |
| Tatoeba-test.fkv-est.fkv.est | 2.4 | 0.148 |
| Tatoeba-test.fkv-fin.fkv.fin | 15.1 | 0.427 |
| Tatoeba-test.fkv-liv.fkv.liv | 1.2 | 0.261 |
| Tatoeba-test.fkv-vep.fkv.vep | 1.2 | 0.233 |
| Tatoeba-test.hun-fin.hun.fin | 47.8 | 0.681 |
| Tatoeba-test.izh-fin.izh.fin | 24.0 | 0.615 |
| Tatoeba-test.izh-krl.izh.krl | 1.8 | 0.114 |
| Tatoeba-test.krl-fin.krl.fin | 13.6 | 0.407 |
| Tatoeba-test.krl-izh.krl.izh | 2.7 | 0.096 |
| Tatoeba-test.liv-fkv.liv.fkv | 1.2 | 0.164 |
| Tatoeba-test.liv-vep.liv.vep | 3.4 | 0.181 |
| Tatoeba-test.multi.multi | 36.7 | 0.581 |
| Tatoeba-test.vep-est.vep.est | 3.4 | 0.251 |
| Tatoeba-test.vep-fkv.vep.fkv | 1.2 | 0.215 |
| Tatoeba-test.vep-liv.vep.liv | 3.4 | 0.179 |
1794e4ef375f375ec611029def33b9fd
apache-2.0
['translation']
false
System Info:
- hf_name: fiu-fiu
- source_languages: fiu
- target_languages: fiu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fiu-fiu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['se', 'fi', 'hu', 'et', 'fiu']
- src_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- tgt_constituents: {'izh', 'mdf', 'vep', 'vro', 'sme', 'myv', 'fkv_Latn', 'krl', 'fin', 'hun', 'kpv', 'udm', 'liv_Latn', 'est', 'mhr', 'sma'}
- src_multilingual: True
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/fiu-fiu/opus-2020-07-26.test.txt
- src_alpha3: fiu
- tgt_alpha3: fiu
- short_pair: fiu-fiu
- chrF2_score: 0.581
- bleu: 36.7
- brevity_penalty: 0.981
- ref_len: 19444.0
- src_name: Finno-Ugrian languages
- tgt_name: Finno-Ugrian languages
- train_date: 2020-07-26
- src_alpha2: fiu
- tgt_alpha2: fiu
- prefer_old: False
- long_pair: fiu-fiu
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
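The `brevity_penalty: 0.981` listed above is BLEU's brevity penalty, which discounts hypotheses that are shorter than the reference; a minimal sketch:

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    # BLEU brevity penalty: 1 if the hypothesis is at least as long as the
    # reference, otherwise exp(1 - ref_len / hyp_len)
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

A BP of 0.981 against ref_len 19444 means the system output was only slightly shorter than the references overall.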
9fa10981d2c1e5f0b4256f2747fdee66
apache-2.0
['automatic-speech-recognition', 'nl']
false
exp_w2v2t_nl_xls-r_s831
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
87dce927fb04c80d906efe9778ec698c
apache-2.0
['generated_from_trainer']
false
roberta-base-bne-finetuned-recores2
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 8.9761
- Accuracy: 0.3113
e7f64c41c0bfc48db841fe75b3d4a85e
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
2ad1526e7c22746249e3e3dc41b2615c
apache-2.0
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6094 | 1.0 | 1047 | 1.6094 | 0.2259 |
| 1.6094 | 2.0 | 2094 | 1.6094 | 0.2121 |
| 1.6094 | 3.0 | 3141 | 1.6094 | 0.2314 |
| 1.6094 | 4.0 | 4188 | 1.6094 | 0.1956 |
| 1.6094 | 5.0 | 5235 | 1.6094 | 0.2121 |
| 1.6121 | 6.0 | 6282 | 1.6094 | 0.1818 |
| 1.6094 | 7.0 | 7329 | 1.6094 | 0.2259 |
| 1.6092 | 8.0 | 8376 | 1.6094 | 0.1736 |
| 1.6094 | 9.0 | 9423 | 1.6094 | 0.1956 |
| 1.6094 | 10.0 | 10470 | 1.6094 | 0.1736 |
| 1.6094 | 11.0 | 11517 | 1.6094 | 0.1983 |
| 1.6094 | 12.0 | 12564 | 1.6094 | 0.2176 |
| 1.6094 | 13.0 | 13611 | 1.6094 | 0.1928 |
| 1.6096 | 14.0 | 14658 | 1.6094 | 0.1846 |
| 1.6145 | 15.0 | 15705 | 1.6094 | 0.2066 |
| 1.6094 | 16.0 | 16752 | 1.6022 | 0.2121 |
| 1.8471 | 17.0 | 17799 | 1.6101 | 0.1763 |
| 2.8148 | 18.0 | 18846 | 2.7585 | 0.2452 |
| 2.5445 | 19.0 | 19893 | 2.4576 | 0.2920 |
| 1.9972 | 20.0 | 20940 | 3.6002 | 0.2865 |
| 1.9844 | 21.0 | 21987 | 5.3809 | 0.3168 |
| 2.849 | 22.0 | 23034 | 7.2230 | 0.3140 |
| 1.4208 | 23.0 | 24081 | 8.0602 | 0.2975 |
| 0.4045 | 24.0 | 25128 | 8.2947 | 0.3058 |
| 0.3052 | 25.0 | 26175 | 8.9761 | 0.3113 |
518e5d0d28b5cb6d41e3b7438a26c75d
mit
['generated_from_trainer']
false
roberta-base-finetuned-cola
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.5732
- Matthews Correlation: 0.6495
e175e8f38692057e6feca48778d95043
mit
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- total_eval_batch_size: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- training precision: Mixed Precision
2fd6e7e9b7b35e42ada126dca9e850d5
mit
['generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5211 | 1.0 | 534 | 0.4031 | 0.5599 |
| 0.3739 | 2.0 | 1068 | 0.4688 | 0.5713 |
| 0.0697 | 3.0 | 1602 | 0.4988 | 0.6070 |
| 0.0712 | 4.0 | 2136 | 0.5596 | 0.6221 |
| 0.0955 | 5.0 | 2670 | 0.5732 | 0.6495 |
2866cbb35a45371b78bb8146ddf7dec9
apache-2.0
['image-classification', 'generated_from_trainer']
false
modeversion2_m7_e8
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem7 dataset. It achieves the following results on the evaluation set:
- Loss: 0.1060
- Accuracy: 0.9761
2258badb34ca8cf9a761ba9c24aec511
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
bdc654d4e1968a2e3377769550d9e68f
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.0231 | 0.06 | 100 | 3.8568 | 0.1883 |
| 3.3863 | 0.12 | 200 | 3.2510 | 0.2596 |
| 2.6187 | 0.18 | 300 | 2.6243 | 0.3882 |
| 2.3097 | 0.23 | 400 | 2.2189 | 0.4527 |
| 1.9016 | 0.29 | 500 | 1.9495 | 0.5244 |
| 1.7478 | 0.35 | 600 | 1.6609 | 0.6091 |
| 1.2345 | 0.41 | 700 | 1.4335 | 0.6426 |
| 1.4129 | 0.47 | 800 | 1.3001 | 0.6752 |
| 1.1722 | 0.53 | 900 | 1.2030 | 0.6785 |
| 1.0808 | 0.59 | 1000 | 1.0051 | 0.7273 |
| 0.8814 | 0.64 | 1100 | 1.0715 | 0.7063 |
| 0.9831 | 0.7 | 1200 | 0.9283 | 0.7334 |
| 0.8118 | 0.76 | 1300 | 0.8525 | 0.7631 |
| 0.7203 | 0.82 | 1400 | 0.7849 | 0.7756 |
| 0.8881 | 0.88 | 1500 | 0.8786 | 0.7487 |
| 0.6407 | 0.94 | 1600 | 0.6896 | 0.8000 |
| 0.7574 | 1.0 | 1700 | 0.7314 | 0.7754 |
| 0.6063 | 1.06 | 1800 | 0.6312 | 0.8068 |
| 0.4797 | 1.11 | 1900 | 0.5792 | 0.8296 |
| 0.4973 | 1.17 | 2000 | 0.5846 | 0.8221 |
| 0.4432 | 1.23 | 2100 | 0.7057 | 0.7905 |
| 0.5518 | 1.29 | 2200 | 0.5621 | 0.8304 |
| 0.3256 | 1.35 | 2300 | 0.5890 | 0.8143 |
| 0.4284 | 1.41 | 2400 | 0.5204 | 0.8485 |
| 0.3702 | 1.47 | 2500 | 0.5699 | 0.8256 |
| 0.2858 | 1.52 | 2600 | 0.5815 | 0.8287 |
| 0.3706 | 1.58 | 2700 | 0.4615 | 0.8571 |
| 0.3484 | 1.64 | 2800 | 0.4812 | 0.8518 |
| 0.2865 | 1.7 | 2900 | 0.4285 | 0.8638 |
| 0.4474 | 1.76 | 3000 | 0.5217 | 0.8377 |
| 0.2101 | 1.82 | 3100 | 0.4478 | 0.8589 |
| 0.3545 | 1.88 | 3200 | 0.4444 | 0.8612 |
| 0.2728 | 1.93 | 3300 | 0.4213 | 0.8645 |
| 0.3525 | 1.99 | 3400 | 0.3551 | 0.8848 |
| 0.0936 | 2.05 | 3500 | 0.4074 | 0.8748 |
| 0.2118 | 2.11 | 3600 | 0.4089 | 0.8812 |
| 0.2744 | 2.17 | 3700 | 0.3534 | 0.8894 |
| 0.211 | 2.23 | 3800 | 0.4422 | 0.8599 |
| 0.1684 | 2.29 | 3900 | 0.3705 | 0.8858 |
| 0.1885 | 2.34 | 4000 | 0.3651 | 0.8862 |
| 0.249 | 2.4 | 4100 | 0.4234 | 0.8687 |
| 0.1485 | 2.46 | 4200 | 0.3784 | 0.8798 |
| 0.1188 | 2.52 | 4300 | 0.3589 | 0.8873 |
| 0.1274 | 2.58 | 4400 | 0.3570 | 0.8917 |
| 0.2206 | 2.64 | 4500 | 0.3377 | 0.8920 |
| 0.1287 | 2.7 | 4600 | 0.3170 | 0.9023 |
| 0.1805 | 2.75 | 4700 | 0.3469 | 0.8934 |
| 0.1505 | 2.81 | 4800 | 0.4258 | 0.8757 |
| 0.1592 | 2.87 | 4900 | 0.3415 | 0.8948 |
| 0.1297 | 2.93 | 5000 | 0.3168 | 0.9028 |
| 0.1284 | 2.99 | 5100 | 0.3060 | 0.9089 |
| 0.0833 | 3.05 | 5200 | 0.2610 | 0.9207 |
| 0.0334 | 3.11 | 5300 | 0.2766 | 0.9197 |
| 0.0847 | 3.17 | 5400 | 0.3366 | 0.9016 |
| 0.1112 | 3.22 | 5500 | 0.3098 | 0.9079 |
| 0.0477 | 3.28 | 5600 | 0.3385 | 0.9041 |
| 0.0419 | 3.34 | 5700 | 0.2944 | 0.9139 |
| 0.0827 | 3.4 | 5800 | 0.2715 | 0.9239 |
| 0.0659 | 3.46 | 5900 | 0.2695 | 0.9230 |
| 0.0244 | 3.52 | 6000 | 0.3050 | 0.9147 |
| 0.0883 | 3.58 | 6100 | 0.2862 | 0.9203 |
| 0.0527 | 3.63 | 6200 | 0.2383 | 0.9319 |
| 0.0828 | 3.69 | 6300 | 0.2984 | 0.9182 |
| 0.0678 | 3.75 | 6400 | 0.2135 | 0.9436 |
| 0.0492 | 3.81 | 6500 | 0.2605 | 0.9296 |
| 0.0374 | 3.87 | 6600 | 0.2192 | 0.9380 |
| 0.1846 | 3.93 | 6700 | 0.2804 | 0.9187 |
| 0.0557 | 3.99 | 6800 | 0.2599 | 0.9253 |
| 0.0127 | 4.04 | 6900 | 0.2412 | 0.9336 |
| 0.0203 | 4.1 | 7000 | 0.2214 | 0.9415 |
| 0.0272 | 4.16 | 7100 | 0.2322 | 0.9356 |
| 0.066 | 4.22 | 7200 | 0.2643 | 0.9325 |
| 0.0628 | 4.28 | 7300 | 0.2170 | 0.9406 |
| 0.0108 | 4.34 | 7400 | 0.2388 | 0.9405 |
| 0.026 | 4.4 | 7500 | 0.2533 | 0.9372 |
| 0.0401 | 4.45 | 7600 | 0.2407 | 0.9358 |
| 0.0493 | 4.51 | 7700 | 0.2213 | 0.9415 |
| 0.0951 | 4.57 | 7800 | 0.3016 | 0.9237 |
| 0.0017 | 4.63 | 7900 | 0.2183 | 0.9448 |
| 0.0561 | 4.69 | 8000 | 0.1962 | 0.9492 |
| 0.0063 | 4.75 | 8100 | 0.1868 | 0.9522 |
| 0.0054 | 4.81 | 8200 | 0.2068 | 0.9459 |
| 0.0519 | 4.87 | 8300 | 0.2141 | 0.9429 |
| 0.027 | 4.92 | 8400 | 0.2138 | 0.9438 |
| 0.0034 | 4.98 | 8500 | 0.1774 | 0.9529 |
| 0.0096 | 5.04 | 8600 | 0.1778 | 0.9512 |
| 0.0011 | 5.1 | 8700 | 0.1854 | 0.9512 |
| 0.0195 | 5.16 | 8800 | 0.1914 | 0.9483 |
| 0.0245 | 5.22 | 8900 | 0.2156 | 0.9471 |
| 0.0055 | 5.28 | 9000 | 0.1640 | 0.9574 |
| 0.0166 | 5.33 | 9100 | 0.1770 | 0.9568 |
| 0.0217 | 5.39 | 9200 | 0.2011 | 0.9479 |
| 0.0017 | 5.45 | 9300 | 0.2210 | 0.9462 |
| 0.0161 | 5.51 | 9400 | 0.1510 | 0.9621 |
| 0.0193 | 5.57 | 9500 | 0.1643 | 0.9586 |
| 0.0121 | 5.63 | 9600 | 0.1716 | 0.9535 |
| 0.0146 | 5.69 | 9700 | 0.1720 | 0.9554 |
| 0.0071 | 5.74 | 9800 | 0.1831 | 0.9541 |
| 0.0018 | 5.8 | 9900 | 0.2076 | 0.9485 |
| 0.0007 | 5.86 | 10000 | 0.1636 | 0.9599 |
| 0.0005 | 5.92 | 10100 | 0.1625 | 0.9602 |
| 0.0277 | 5.98 | 10200 | 0.1874 | 0.9546 |
| 0.0005 | 6.04 | 10300 | 0.1790 | 0.9579 |
| 0.0012 | 6.1 | 10400 | 0.1840 | 0.9544 |
| 0.0431 | 6.15 | 10500 | 0.1571 | 0.9628 |
| 0.0332 | 6.21 | 10600 | 0.1599 | 0.9591 |
| 0.0014 | 6.27 | 10700 | 0.1493 | 0.9632 |
| 0.0014 | 6.33 | 10800 | 0.1366 | 0.9661 |
| 0.0006 | 6.39 | 10900 | 0.1582 | 0.9609 |
| 0.0005 | 6.45 | 11000 | 0.1704 | 0.9589 |
| 0.0004 | 6.51 | 11100 | 0.1376 | 0.9671 |
| 0.0755 | 6.57 | 11200 | 0.1375 | 0.9654 |
| 0.0002 | 6.62 | 11300 | 0.1361 | 0.9661 |
| 0.0006 | 6.68 | 11400 | 0.1323 | 0.9675 |
| 0.0009 | 6.74 | 11500 | 0.1239 | 0.9692 |
| 0.0004 | 6.8 | 11600 | 0.1514 | 0.9631 |
| 0.0002 | 6.86 | 11700 | 0.1386 | 0.9664 |
| 0.0004 | 6.92 | 11800 | 0.1368 | 0.9659 |
| 0.0004 | 6.98 | 11900 | 0.1276 | 0.9684 |
| 0.0002 | 7.03 | 12000 | 0.1171 | 0.9712 |
| 0.0002 | 7.09 | 12100 | 0.1142 | 0.9711 |
| 0.0001 | 7.15 | 12200 | 0.1183 | 0.9727 |
| 0.0002 | 7.21 | 12300 | 0.1167 | 0.9732 |
| 0.0002 | 7.27 | 12400 | 0.1143 | 0.9737 |
| 0.0001 | 7.33 | 12500 | 0.1129 | 0.9737 |
| 0.0002 | 7.39 | 12600 | 0.1116 | 0.9742 |
| 0.0002 | 7.44 | 12700 | 0.1126 | 0.9745 |
| 0.0002 | 7.5 | 12800 | 0.1111 | 0.9748 |
| 0.0002 | 7.56 | 12900 | 0.1102 | 0.9747 |
| 0.0001 | 7.62 | 13000 | 0.1094 | 0.9747 |
| 0.0001 | 7.68 | 13100 | 0.1086 | 0.9742 |
| 0.0001 | 7.74 | 13200 | 0.1079 | 0.9748 |
| 0.0002 | 7.8 | 13300 | 0.1062 | 0.9754 |
| 0.0002 | 7.85 | 13400 | 0.1068 | 0.9757 |
| 0.0001 | 7.91 | 13500 | 0.1061 | 0.9762 |
| 0.0001 | 7.97 | 13600 | 0.1060 | 0.9761 |
fe32c6b90b07160306e7f9b750ab2f0a
apache-2.0
[]
false
BERT large model (cased) whole word masking
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in [this repository](https://github.com/google-research/bert). This model is cased: it makes a difference between english and English.
Unlike other BERT models, this model was trained with a new technique: Whole Word Masking. In this case, all of the tokens corresponding to a word are masked at once. The overall masking rate remains the same, and the training is otherwise identical -- each masked WordPiece token is still predicted independently.
Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by the Hugging Face team.
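Whole Word Masking as described above can be sketched as follows: WordPiece continuation tokens start with `##`, so a word spans a token plus its following `##` pieces, and every piece of a selected word is masked together (this is an illustrative sketch, not the original BERT data pipeline, which also caps the number of masked tokens per sequence):

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, mask_token="[MASK]", seed=0):
    rng = random.Random(seed)
    # group token indices into whole words: "##" pieces attach to the previous word
    words, out = [], list(tokens)
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    # mask all pieces of each selected word at once
    for word in words:
        if rng.random() < mask_prob:
            for i in word:
                out[i] = mask_token
    return out
```

With `mask_prob=1.0`, `["play", "##ing", "chess"]` becomes three `[MASK]` tokens, never a half-masked word.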
8319256c330342105e507f41205f135f
apache-2.0
[]
false
Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
This model has the following configuration:
- 24 layers
- 1024 hidden dimension
- 16 attention heads
- 336M parameters
1ededa60d15fbd32211a02f902b928a9
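The whole-word-masking variant changes only how the 15% of masked positions are chosen: complete words are selected, and every WordPiece of a selected word is masked together. A rough, self-contained sketch of that selection (an illustrative heuristic, not the official implementation; the tokens below are made up):

```python
import random

def whole_word_mask(tokens, mask_rate=0.15, seed=0):
    """Sketch of whole word masking: WordPiece continuations start with '##',
    so a word is a head token plus its following '##' pieces. Whole words are
    masked until roughly `mask_rate` of the tokens are covered."""
    rng = random.Random(seed)
    # Group token indices into words: a new word starts at any non-'##' token.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    budget = max(1, int(round(len(tokens) * mask_rate)))
    rng.shuffle(words)
    masked = list(tokens)
    covered = 0
    for word in words:
        if covered >= budget:
            break
        for i in word:  # mask every piece of the chosen word at once
            masked[i] = "[MASK]"
        covered += len(word)
    return masked

out = whole_word_mask(["the", "phil", "##am", "##mon", "played", "well"], mask_rate=0.34)
```

Prediction is still made per WordPiece token; only the masking step differs from the original BERT objective.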
apache-2.0
[]
false
How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking') >>> unmasker("Hello I'm a [MASK] model.") [ { "sequence":"[CLS] Hello I'm a fashion model. [SEP]", "score":0.1474294513463974, "token":4633, "token_str":"fashion" }, { "sequence":"[CLS] Hello I'm a magazine model. [SEP]", "score":0.05430116504430771, "token":2435, "token_str":"magazine" }, { "sequence":"[CLS] Hello I'm a male model. [SEP]", "score":0.039395421743392944, "token":2581, "token_str":"male" }, { "sequence":"[CLS] Hello I'm a former model. [SEP]", "score":0.036936815828084946, "token":1393, "token_str":"former" }, { "sequence":"[CLS] Hello I'm a professional model. [SEP]", "score":0.03663451969623566, "token":1848, "token_str":"professional" } ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking') model = BertModel.from_pretrained("bert-large-cased-whole-word-masking") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('bert-large-cased-whole-word-masking') model = TFBertModel.from_pretrained("bert-large-cased-whole-word-masking") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ```
afd3361adec08c70a3c558db511f2d81
apache-2.0
[]
false
Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='bert-large-cased-whole-word-masking') >>> unmasker("The man worked as a [MASK].") [ { "sequence":"[CLS] The man worked as a carpenter. [SEP]", "score":0.09021259099245071, "token":25169, "token_str":"carpenter" }, { "sequence":"[CLS] The man worked as a cook. [SEP]", "score":0.08125395327806473, "token":9834, "token_str":"cook" }, { "sequence":"[CLS] The man worked as a mechanic. [SEP]", "score":0.07524766772985458, "token":19459, "token_str":"mechanic" }, { "sequence":"[CLS] The man worked as a waiter. [SEP]", "score":0.07397029548883438, "token":17989, "token_str":"waiter" }, { "sequence":"[CLS] The man worked as a guard. [SEP]", "score":0.05848982185125351, "token":3542, "token_str":"guard" } ] >>> unmasker("The woman worked as a [MASK].") [ { "sequence":"[CLS] The woman worked as a maid. [SEP]", "score":0.19436432421207428, "token":13487, "token_str":"maid" }, { "sequence":"[CLS] The woman worked as a waitress. [SEP]", "score":0.16161060333251953, "token":15098, "token_str":"waitress" }, { "sequence":"[CLS] The woman worked as a nurse. [SEP]", "score":0.14942803978919983, "token":7439, "token_str":"nurse" }, { "sequence":"[CLS] The woman worked as a secretary. [SEP]", "score":0.10373266786336899, "token":4848, "token_str":"secretary" }, { "sequence":"[CLS] The woman worked as a cook. [SEP]", "score":0.06384387612342834, "token":9834, "token_str":"cook" } ] ``` This bias will also affect all fine-tuned versions of this model.
4fa8932917e2b6500bde8b9671e48eae
apache-2.0
[]
false
Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy ---------------------------------------- | :-------------: | :----------------: BERT-Large, Cased (Whole Word Masking) | 92.9/86.7 | 86.46
f85c64b4d221b9d2918ead36eeed6365
apache-2.0
[]
false
BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1810-04805, author = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding}, journal = {CoRR}, volume = {abs/1810.04805}, year = {2018}, url = {http://arxiv.org/abs/1810.04805}, archivePrefix = {arXiv}, eprint = {1810.04805}, timestamp = {Tue, 30 Oct 2018 20:39:56 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
c028e8d10eb540170104ce66664d5bdc
cc
[]
false
This model, trained on SD-1.5, provides different styles of layered paper art Trigger word: scherenschnitt papercut Prompt example: layering paper art, 75mm photography of a scherenschnitt papercut, the christmas crib scene in the stable with ox mule and adoration of kings, artist's work, detailed, (white) paper, (navyblue) paper, (color) paper, christmas, backlight effect, harmonic shapes, winter landscape, cute, romantic xmas, in focus, 8k, a bit underexposed, 3d effect, unreal engine, blender render, ((symmetrie)), abstraction, HD, family christmas in switzerland, in layering paper art, paper cut, paper folding Negative prompt: text, writing, logo, signature, tree Settings Steps: 50, Sampler: DPM fast, CFG scale: 14, Seed: 2147632306, Size: 704x512, Model hash: 78e2aaa9, Variation seed: 362561481, Variation seed strength: 0.4
a4f365bb1cc725e3132dc48b5bc3de7b
['mit']
['BERT', 'MNLI', 'NLI', 'transformer', 'pre-training']
false
The following model is a PyTorch pre-trained model obtained by converting the Tensorflow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert). This is one of the smaller pre-trained BERT variants, together with [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny), [bert-mini](https://huggingface.co/prajjwal1/bert-mini) and [bert-small](https://huggingface.co/prajjwal1/bert-small). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arXiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are intended to be fine-tuned on a downstream task. If you use the model, please consider citing both papers: ``` @misc{bhargava2021generalization, title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics}, author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers}, year={2021}, eprint={2110.01518}, archivePrefix={arXiv}, primaryClass={cs.CL} } @article{DBLP:journals/corr/abs-1908-08962, author = {Iulia Turc and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova}, title = {Well-Read Students Learn Better: The Impact of Student Initialization on Knowledge Distillation}, journal = {CoRR}, volume = {abs/1908.08962}, year = {2019}, url = {http://arxiv.org/abs/1908.08962}, eprinttype = {arXiv}, eprint = {1908.08962}, timestamp = {Thu, 29 Aug 2019 16:32:34 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Config of this model: - `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium) Other models to check out: - `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny) - `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini) - `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small) Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli). Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
2db89cf7a92c4f8a8656f81213532f5b
mit
['huggan', 'gan']
false
Model description This model is a [Pix2Pix](https://arxiv.org/abs/1611.07004) model trained on the [huggan/maps](https://huggingface.co/datasets/huggan/maps) dataset. The goal for the model is to turn a satellite map into a geographic map à la Google Maps, and the other way around. The model was trained using the [example script](https://github.com/huggingface/community-events/tree/main/huggan/pytorch/pix2pix) provided by HuggingFace as part of the [HugGAN sprint](https://github.com/huggingface/community-events/tree/main/huggan).
815a484d1a3adcc3d770e4acb4f666b4
mit
['huggan', 'gan']
false
How to use (the original snippet left `transform` undefined; a preprocessing pipeline is added below -- 256x256 input normalized to [-1, 1] is assumed, matching the pix2pix training script's defaults):

```python
from huggan.pytorch.pix2pix.modeling_pix2pix import GeneratorUNet
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

# Assumed preprocessing: resize to 256x256 and normalize to [-1, 1];
# adjust if your checkpoint was trained with different settings.
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

image = Image.open("...")
generator = GeneratorUNet.from_pretrained("huggan/pix2pix-maps")
pixel_values = transform(image).unsqueeze(0)
output = generator(pixel_values)
save_image(output, 'output.png', normalize=True)
```
fd7f27a5413b91c5a67788780ff354fd
mit
['huggan', 'gan']
false
BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/IsolaZZE16, author = {Phillip Isola and Jun{-}Yan Zhu and Tinghui Zhou and Alexei A. Efros}, title = {Image-to-Image Translation with Conditional Adversarial Networks}, journal = {CoRR}, volume = {abs/1611.07004}, year = {2016}, url = {http://arxiv.org/abs/1611.07004}, eprinttype = {arXiv}, eprint = {1611.07004}, timestamp = {Mon, 13 Aug 2018 16:49:05 +0200}, biburl = {https://dblp.org/rec/journals/corr/IsolaZZE16.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
7f030ba9ad8b49572cd15ecc8947d3d4
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4495 - Rouge1: 28.6501 - Rouge2: 7.9821 - Rougel: 22.5657 - Rougelsum: 22.579 - Gen Len: 18.819
89aab3f62b4fdc58254ab475ce9b8c19
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.6832 | 1.0 | 25506 | 2.4495 | 28.6501 | 7.9821 | 22.5657 | 22.579 | 18.819 |
984e88966bfa6b7d3b3d1bd1240782a8
apache-2.0
['bert', 'qqp', 'glue', 'torchdistill']
false
`bert-base-uncased` fine-tuned on QQP dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb). The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/ce/bert_base_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **77.9**.
65b1be34265c57d88de122cb688cf2b2
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2t_en_vp-nl_s169 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
372f3b7b3c270edb9c5592b2c2408284
apache-2.0
['generated_from_keras_callback']
false
nandysoham/Dell-theme-finetuned-overfinetuned This model is a fine-tuned version of [nandysoham/distilbert-base-uncased-finetuned-squad](https://huggingface.co/nandysoham/distilbert-base-uncased-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4305 - Train End Logits Accuracy: 0.7857 - Train Start Logits Accuracy: 0.8006 - Validation Loss: 2.3316 - Validation End Logits Accuracy: 0.1647 - Validation Start Logits Accuracy: 0.2118 - Epoch: 9
74daa275ebdedb7320f8852ceda72784
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 210, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32
e66a823cb4a4917c8ea8fe18fb8dc7ea
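With `power: 1.0` and `cycle: False`, the PolynomialDecay schedule above is simply a linear ramp from 2e-05 down to 0 over 210 steps. A minimal re-implementation of the formula, for intuition (a sketch of the Keras behaviour, not the library code itself):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=210, power=1.0):
    """Keras-style PolynomialDecay without cycling: the step is clipped to
    decay_steps, then the rate interpolates from initial_lr down to end_lr."""
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr

assert polynomial_decay(0) == 2e-05      # start of training
assert polynomial_decay(105) == 1e-05    # halfway: half the rate (power=1.0)
assert polynomial_decay(210) == 0.0      # fully decayed
assert polynomial_decay(500) == 0.0      # clipped past decay_steps
```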
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.5691 | 0.5179 | 0.5119 | 1.2093 | 0.4588 | 0.4588 | 0 | | 0.9333 | 0.6101 | 0.5833 | 1.2828 | 0.3176 | 0.3647 | 1 | | 0.7924 | 0.6042 | 0.5982 | 1.4627 | 0.2824 | 0.2824 | 2 | | 0.6858 | 0.6905 | 0.6786 | 1.5630 | 0.3059 | 0.2941 | 3 | | 0.6562 | 0.6518 | 0.6815 | 1.7647 | 0.2235 | 0.2118 | 4 | | 0.5996 | 0.7054 | 0.6994 | 2.0109 | 0.2118 | 0.2471 | 5 | | 0.5277 | 0.7440 | 0.7589 | 2.1286 | 0.1765 | 0.2000 | 6 | | 0.4810 | 0.7679 | 0.7798 | 2.2263 | 0.1529 | 0.2000 | 7 | | 0.4488 | 0.8036 | 0.7887 | 2.2999 | 0.1529 | 0.1882 | 8 | | 0.4305 | 0.7857 | 0.8006 | 2.3316 | 0.1647 | 0.2118 | 9 |
669266b2c6854d0464cddc8d19e5e979
apache-2.0
['text-classification', 'generated_from_trainer']
false
distilroberta-base-mrpc-glue-oscar-salas9 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset. It achieves the following results on the evaluation set: - Loss: 0.3999 - Accuracy: 0.8705
e044d06f0c35d8b2853391eb9a7111cb
apache-2.0
['generated_from_trainer']
false
t5-base-pointer-adv-mtop This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the mtop dataset. It achieves the following results on the evaluation set: - Loss: 0.1281 - Exact Match: 0.7105
68994d7cdabd67eea6680a9bd542db37
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Exact Match | |:-------------:|:-----:|:----:|:---------------:|:-----------:| | 1.7704 | 1.09 | 200 | 0.3664 | 0.1315 | | 1.9751 | 2.17 | 400 | 0.2091 | 0.3400 | | 1.0019 | 3.26 | 600 | 0.1453 | 0.4586 | | 1.313 | 4.35 | 800 | 0.1313 | 0.5065 | | 0.6593 | 5.43 | 1000 | 0.1281 | 0.5266 | | 0.3216 | 6.52 | 1200 | 0.1317 | 0.5253 | | 0.4614 | 7.61 | 1400 | 0.1508 | 0.5262 | | 0.3577 | 8.69 | 1600 | 0.1422 | 0.5360 | | 0.3748 | 9.78 | 1800 | 0.1419 | 0.5459 | | 0.2422 | 10.87 | 2000 | 0.1603 | 0.5356 | | 0.4443 | 11.96 | 2200 | 0.1526 | 0.5472 | | 0.2671 | 13.04 | 2400 | 0.1606 | 0.5481 | | 0.227 | 14.13 | 2600 | 0.1774 | 0.5441 | | 0.2053 | 15.22 | 2800 | 0.1752 | 0.5441 | | 0.1517 | 16.3 | 3000 | 0.1770 | 0.5481 |
cc384c41a6a5e8faaf65fcbb2e1c8acf
apache-2.0
['generated_from_trainer']
false
fin_sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5401 - Accuracy: 0.7840
4a69e23b9c1b420c73347dd7aacfac95
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 0.5401 | 0.7840 |
40bae72034af76fce45e36c6faa90d1a
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
manda2 Dreambooth model trained by tehqikness with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
8173c93c172817b590d59e336613c7be
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
fastbooth-jsjessy-950 Dreambooth model trained by eicu with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: ![0](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_10.jpg) ![1](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_2.jpg) ![2](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_11.jpg) ![3](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_7.jpg) ![4](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_12.jpg) ![5](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_8.jpg) ![6](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_1.jpg) ![7](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_9.jpg) ![8](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_4.jpg) ![9](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_5.jpg) ![10](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_6.jpg) ![11](https://huggingface.co/eicu/fastbooth-jsjessy-950/resolve/main/sample_images/jsjessy_3.jpg)
fda5bb398a5269e2de1d55c33efa6c6d
apache-2.0
['translation']
false
itc-itc * source group: Italic languages * target group: Italic languages * OPUS readme: [itc-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-itc/README.md) * model: transformer * source language(s): arg ast bjn cat cos egl fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Grek lat_Latn lij lld_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm_Latn * target language(s): arg ast bjn cat cos egl fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Grek lat_Latn lij lld_Latn lmo mwl oci pap pcd pms por roh ron scn spa srd vec wln zsm_Latn * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-07.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.zip) * test set translations: [opus-2020-07-07.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.test.txt) * test set scores: [opus-2020-07-07.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.eval.txt)
07e49899b4813996b9edba748e3ab38f
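Note the requirement above: inputs must begin with a sentence-initial `>>id<<` target-language token. A minimal sketch of preparing a source sentence (language IDs such as `ita`, `spa`, or `por` come from the target language list above):

```python
def add_target_token(text, target_lang):
    """Multilingual OPUS models select the output language via a
    sentence-initial `>>id<<` token; everything else is plain text."""
    return f">>{target_lang}<< {text}"

# e.g. translate a Spanish sentence into Italian
src = add_target_token("Una frase de ejemplo.", "ita")
# src == ">>ita<< Una frase de ejemplo."
```

The prefixed string is then tokenized and translated as usual; without the token, a many-to-many model has no way to know which target language is wanted.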
apache-2.0
['translation']
false
Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.arg-fra.arg.fra | 40.8 | 0.501 | | Tatoeba-test.arg-spa.arg.spa | 59.9 | 0.739 | | Tatoeba-test.ast-fra.ast.fra | 45.4 | 0.628 | | Tatoeba-test.ast-por.ast.por | 100.0 | 1.000 | | Tatoeba-test.ast-spa.ast.spa | 46.8 | 0.636 | | Tatoeba-test.cat-fra.cat.fra | 51.6 | 0.689 | | Tatoeba-test.cat-ita.cat.ita | 49.2 | 0.699 | | Tatoeba-test.cat-por.cat.por | 48.0 | 0.688 | | Tatoeba-test.cat-ron.cat.ron | 35.4 | 0.719 | | Tatoeba-test.cat-spa.cat.spa | 69.0 | 0.826 | | Tatoeba-test.cos-fra.cos.fra | 22.3 | 0.383 | | Tatoeba-test.cos-pms.cos.pms | 3.4 | 0.199 | | Tatoeba-test.egl-fra.egl.fra | 9.5 | 0.283 | | Tatoeba-test.egl-ita.egl.ita | 3.0 | 0.206 | | Tatoeba-test.egl-spa.egl.spa | 3.7 | 0.194 | | Tatoeba-test.fra-arg.fra.arg | 3.8 | 0.090 | | Tatoeba-test.fra-ast.fra.ast | 25.9 | 0.457 | | Tatoeba-test.fra-cat.fra.cat | 42.2 | 0.637 | | Tatoeba-test.fra-cos.fra.cos | 3.3 | 0.185 | | Tatoeba-test.fra-egl.fra.egl | 2.2 | 0.120 | | Tatoeba-test.fra-frm.fra.frm | 1.0 | 0.191 | | Tatoeba-test.fra-gcf.fra.gcf | 0.2 | 0.099 | | Tatoeba-test.fra-glg.fra.glg | 40.5 | 0.625 | | Tatoeba-test.fra-hat.fra.hat | 22.6 | 0.472 | | Tatoeba-test.fra-ita.fra.ita | 46.7 | 0.679 | | Tatoeba-test.fra-lad.fra.lad | 15.9 | 0.345 | | Tatoeba-test.fra-lat.fra.lat | 2.9 | 0.247 | | Tatoeba-test.fra-lij.fra.lij | 1.0 | 0.201 | | Tatoeba-test.fra-lld.fra.lld | 1.1 | 0.257 | | Tatoeba-test.fra-lmo.fra.lmo | 1.2 | 0.241 | | Tatoeba-test.fra-msa.fra.msa | 0.4 | 0.111 | | Tatoeba-test.fra-oci.fra.oci | 7.3 | 0.322 | | Tatoeba-test.fra-pap.fra.pap | 69.8 | 0.912 | | Tatoeba-test.fra-pcd.fra.pcd | 0.6 | 0.144 | | Tatoeba-test.fra-pms.fra.pms | 1.0 | 0.181 | | Tatoeba-test.fra-por.fra.por | 39.7 | 0.619 | | Tatoeba-test.fra-roh.fra.roh | 5.7 | 0.286 | | Tatoeba-test.fra-ron.fra.ron | 36.4 | 0.591 | | Tatoeba-test.fra-scn.fra.scn | 2.1 | 0.101 | | Tatoeba-test.fra-spa.fra.spa | 47.5 | 0.670 | | 
Tatoeba-test.fra-srd.fra.srd | 2.8 | 0.306 | | Tatoeba-test.fra-vec.fra.vec | 3.0 | 0.345 | | Tatoeba-test.fra-wln.fra.wln | 3.5 | 0.212 | | Tatoeba-test.frm-fra.frm.fra | 11.4 | 0.472 | | Tatoeba-test.gcf-fra.gcf.fra | 7.1 | 0.267 | | Tatoeba-test.gcf-lad.gcf.lad | 0.0 | 0.170 | | Tatoeba-test.gcf-por.gcf.por | 0.0 | 0.230 | | Tatoeba-test.gcf-spa.gcf.spa | 13.4 | 0.314 | | Tatoeba-test.glg-fra.glg.fra | 54.7 | 0.702 | | Tatoeba-test.glg-ita.glg.ita | 40.1 | 0.661 | | Tatoeba-test.glg-por.glg.por | 57.6 | 0.748 | | Tatoeba-test.glg-spa.glg.spa | 70.0 | 0.817 | | Tatoeba-test.hat-fra.hat.fra | 14.2 | 0.419 | | Tatoeba-test.hat-spa.hat.spa | 17.9 | 0.449 | | Tatoeba-test.ita-cat.ita.cat | 51.0 | 0.693 | | Tatoeba-test.ita-egl.ita.egl | 1.1 | 0.114 | | Tatoeba-test.ita-fra.ita.fra | 58.2 | 0.727 | | Tatoeba-test.ita-glg.ita.glg | 41.7 | 0.652 | | Tatoeba-test.ita-lad.ita.lad | 17.5 | 0.419 | | Tatoeba-test.ita-lat.ita.lat | 7.1 | 0.294 | | Tatoeba-test.ita-lij.ita.lij | 1.0 | 0.208 | | Tatoeba-test.ita-msa.ita.msa | 0.9 | 0.115 | | Tatoeba-test.ita-oci.ita.oci | 12.3 | 0.378 | | Tatoeba-test.ita-pms.ita.pms | 1.6 | 0.182 | | Tatoeba-test.ita-por.ita.por | 44.8 | 0.665 | | Tatoeba-test.ita-ron.ita.ron | 43.3 | 0.653 | | Tatoeba-test.ita-spa.ita.spa | 56.6 | 0.733 | | Tatoeba-test.ita-vec.ita.vec | 2.0 | 0.187 | | Tatoeba-test.lad-fra.lad.fra | 30.4 | 0.458 | | Tatoeba-test.lad-gcf.lad.gcf | 0.0 | 0.163 | | Tatoeba-test.lad-ita.lad.ita | 12.3 | 0.426 | | Tatoeba-test.lad-lat.lad.lat | 1.6 | 0.178 | | Tatoeba-test.lad-por.lad.por | 8.8 | 0.394 | | Tatoeba-test.lad-ron.lad.ron | 78.3 | 0.717 | | Tatoeba-test.lad-spa.lad.spa | 28.3 | 0.531 | | Tatoeba-test.lat-fra.lat.fra | 9.4 | 0.300 | | Tatoeba-test.lat-ita.lat.ita | 20.0 | 0.421 | | Tatoeba-test.lat-lad.lat.lad | 3.8 | 0.173 | | Tatoeba-test.lat-por.lat.por | 13.0 | 0.354 | | Tatoeba-test.lat-ron.lat.ron | 14.0 | 0.358 | | Tatoeba-test.lat-spa.lat.spa | 21.8 | 0.436 | | Tatoeba-test.lij-fra.lij.fra | 13.8 | 0.346 | | 
Tatoeba-test.lij-ita.lij.ita | 14.7 | 0.442 | | Tatoeba-test.lld-fra.lld.fra | 18.8 | 0.428 | | Tatoeba-test.lld-spa.lld.spa | 11.1 | 0.377 | | Tatoeba-test.lmo-fra.lmo.fra | 11.0 | 0.329 | | Tatoeba-test.msa-fra.msa.fra | 0.8 | 0.129 | | Tatoeba-test.msa-ita.msa.ita | 1.1 | 0.138 | | Tatoeba-test.msa-msa.msa.msa | 19.1 | 0.453 | | Tatoeba-test.msa-pap.msa.pap | 0.0 | 0.037 | | Tatoeba-test.msa-por.msa.por | 2.4 | 0.155 | | Tatoeba-test.msa-ron.msa.ron | 1.2 | 0.129 | | Tatoeba-test.msa-spa.msa.spa | 1.0 | 0.139 | | Tatoeba-test.multi.multi | 40.8 | 0.599 | | Tatoeba-test.mwl-por.mwl.por | 35.4 | 0.561 | | Tatoeba-test.oci-fra.oci.fra | 24.5 | 0.467 | | Tatoeba-test.oci-ita.oci.ita | 23.3 | 0.493 | | Tatoeba-test.oci-spa.oci.spa | 26.1 | 0.505 | | Tatoeba-test.pap-fra.pap.fra | 31.0 | 0.629 | | Tatoeba-test.pap-msa.pap.msa | 0.0 | 0.051 | | Tatoeba-test.pcd-fra.pcd.fra | 13.8 | 0.381 | | Tatoeba-test.pcd-spa.pcd.spa | 2.6 | 0.227 | | Tatoeba-test.pms-cos.pms.cos | 3.4 | 0.217 | | Tatoeba-test.pms-fra.pms.fra | 13.4 | 0.347 | | Tatoeba-test.pms-ita.pms.ita | 13.0 | 0.373 | | Tatoeba-test.pms-spa.pms.spa | 13.1 | 0.374 | | Tatoeba-test.por-ast.por.ast | 100.0 | 1.000 | | Tatoeba-test.por-cat.por.cat | 45.1 | 0.673 | | Tatoeba-test.por-fra.por.fra | 52.5 | 0.698 | | Tatoeba-test.por-gcf.por.gcf | 16.0 | 0.128 | | Tatoeba-test.por-glg.por.glg | 57.5 | 0.750 | | Tatoeba-test.por-ita.por.ita | 50.1 | 0.710 | | Tatoeba-test.por-lad.por.lad | 15.7 | 0.341 | | Tatoeba-test.por-lat.por.lat | 11.1 | 0.362 | | Tatoeba-test.por-msa.por.msa | 2.4 | 0.136 | | Tatoeba-test.por-mwl.por.mwl | 30.5 | 0.559 | | Tatoeba-test.por-roh.por.roh | 0.0 | 0.132 | | Tatoeba-test.por-ron.por.ron | 40.0 | 0.632 | | Tatoeba-test.por-spa.por.spa | 58.6 | 0.756 | | Tatoeba-test.roh-fra.roh.fra | 23.1 | 0.564 | | Tatoeba-test.roh-por.roh.por | 21.4 | 0.347 | | Tatoeba-test.roh-spa.roh.spa | 19.8 | 0.489 | | Tatoeba-test.ron-cat.ron.cat | 59.5 | 0.854 | | Tatoeba-test.ron-fra.ron.fra | 47.4 | 0.647 | 
| Tatoeba-test.ron-ita.ron.ita | 45.7 | 0.683 | | Tatoeba-test.ron-lad.ron.lad | 44.2 | 0.712 | | Tatoeba-test.ron-lat.ron.lat | 14.8 | 0.449 | | Tatoeba-test.ron-msa.ron.msa | 1.2 | 0.098 | | Tatoeba-test.ron-por.ron.por | 42.7 | 0.650 | | Tatoeba-test.ron-spa.ron.spa | 50.4 | 0.686 | | Tatoeba-test.scn-fra.scn.fra | 2.4 | 0.180 | | Tatoeba-test.scn-spa.scn.spa | 5.1 | 0.212 | | Tatoeba-test.spa-arg.spa.arg | 10.8 | 0.267 | | Tatoeba-test.spa-ast.spa.ast | 24.6 | 0.514 | | Tatoeba-test.spa-cat.spa.cat | 61.6 | 0.783 | | Tatoeba-test.spa-egl.spa.egl | 2.2 | 0.106 | | Tatoeba-test.spa-fra.spa.fra | 51.1 | 0.683 | | Tatoeba-test.spa-gcf.spa.gcf | 7.8 | 0.067 | | Tatoeba-test.spa-glg.spa.glg | 62.8 | 0.776 | | Tatoeba-test.spa-hat.spa.hat | 16.6 | 0.398 | | Tatoeba-test.spa-ita.spa.ita | 51.8 | 0.718 | | Tatoeba-test.spa-lad.spa.lad | 14.6 | 0.393 | | Tatoeba-test.spa-lat.spa.lat | 21.5 | 0.486 | | Tatoeba-test.spa-lld.spa.lld | 2.0 | 0.222 | | Tatoeba-test.spa-msa.spa.msa | 0.8 | 0.113 | | Tatoeba-test.spa-oci.spa.oci | 10.3 | 0.377 | | Tatoeba-test.spa-pcd.spa.pcd | 0.9 | 0.115 | | Tatoeba-test.spa-pms.spa.pms | 1.5 | 0.194 | | Tatoeba-test.spa-por.spa.por | 49.4 | 0.698 | | Tatoeba-test.spa-roh.spa.roh | 4.6 | 0.261 | | Tatoeba-test.spa-ron.spa.ron | 39.1 | 0.618 | | Tatoeba-test.spa-scn.spa.scn | 2.0 | 0.113 | | Tatoeba-test.spa-wln.spa.wln | 8.7 | 0.295 | | Tatoeba-test.srd-fra.srd.fra | 6.7 | 0.369 | | Tatoeba-test.vec-fra.vec.fra | 59.9 | 0.608 | | Tatoeba-test.vec-ita.vec.ita | 14.2 | 0.405 | | Tatoeba-test.wln-fra.wln.fra | 8.9 | 0.344 | | Tatoeba-test.wln-spa.wln.spa | 9.6 | 0.298 |
25b5fd4a141d8201532fb0de8c12ca95
apache-2.0
['translation']
false
System Info: - hf_name: itc-itc - source_languages: itc - target_languages: itc - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/itc-itc/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc'] - src_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'} - tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'} - src_multilingual: True - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/itc-itc/opus-2020-07-07.test.txt - src_alpha3: itc - tgt_alpha3: itc - short_pair: itc-itc - chrF2_score: 0.599 - bleu: 40.8 - brevity_penalty: 0.968 - ref_len: 77448.0 - src_name: Italic languages - tgt_name: Italic languages - train_date: 2020-07-07 - src_alpha2: itc - tgt_alpha2: itc - prefer_old: False - long_pair: itc-itc - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
6596dfbaff8be593e02ea52052299fe5
cc-by-4.0
[]
false
crosloengual-bert-si-nli CroSloEngual BERT model finetuned on the SI-NLI dataset for Slovene natural language inference. Fine-tuned in a classic sequence-pair classification setting on the official training/validation/test split for 10 epochs, using validation set accuracy for model selection. Optimized using the AdamW optimizer (learning rate 2e-5) and cross-entropy loss, with batch size `82` (selected based on the available GPU memory) and maximum sequence length `107` (the 99th percentile of lengths in the training set). Achieves the following metrics: - best validation accuracy: `0.660` - test accuracy: `0.673`
523bc812b00837745c6a6c1beb6b60b0
mit
[]
false
aavegotchi on Stable Diffusion This is the `<aave-gotchi>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<aave-gotchi> 0](https://huggingface.co/sd-concepts-library/aavegotchi/resolve/main/concept_images/2.jpeg) ![<aave-gotchi> 1](https://huggingface.co/sd-concepts-library/aavegotchi/resolve/main/concept_images/0.jpeg) ![<aave-gotchi> 2](https://huggingface.co/sd-concepts-library/aavegotchi/resolve/main/concept_images/1.jpeg) ![<aave-gotchi> 3](https://huggingface.co/sd-concepts-library/aavegotchi/resolve/main/concept_images/4.jpeg) ![<aave-gotchi> 4](https://huggingface.co/sd-concepts-library/aavegotchi/resolve/main/concept_images/3.jpeg)
d75eecca103fe438f08d0598f1752a2a
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-en-to-regex This model is a fine-tuned version of [rymaju/t5-small-finetuned-en-to-regex](https://huggingface.co/rymaju/t5-small-finetuned-en-to-regex) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0032 - Bleu: 12.1984 - Gen Len: 16.7502
db945331b08dc6d9aa5da4f6cc5b39b2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 0.0092 | 1.0 | 6188 | 0.0043 | 12.1984 | 16.7522 | | 0.0069 | 2.0 | 12376 | 0.0040 | 12.2039 | 16.7502 | | 0.0056 | 3.0 | 18564 | 0.0034 | 12.2091 | 16.7483 | | 0.0048 | 4.0 | 24752 | 0.0035 | 12.2103 | 16.7502 | | 0.0049 | 5.0 | 30940 | 0.0035 | 12.1984 | 16.7502 | | 0.0046 | 6.0 | 37128 | 0.0033 | 12.1984 | 16.7502 | | 0.0046 | 7.0 | 43316 | 0.0035 | 12.1984 | 16.7502 | | 0.0046 | 8.0 | 49504 | 0.0032 | 12.1984 | 16.7502 | | 0.0042 | 9.0 | 55692 | 0.0032 | 12.1984 | 16.7502 | | 0.0043 | 10.0 | 61880 | 0.0032 | 12.1984 | 16.7502 |
fcc2d7b6696083b5d4362d108d8178d8
mit
[]
false
Description This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022). We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime:A Study of Racism Detection in Spanish" (NEATClasS 2022) We applied 6 different ground-truth estimation methods, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models: | method | epoch 1 | epoch 2 | epoch 3 | epoch 4 | |--- |--- |--- |--- |--- | | raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) | | m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) | | m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) | | regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | 
[regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) | | w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) | | w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) | This model is `m-vote-nonstrict-epoch-4`
ae7e34ce151b4fd481dd958bb9656160
mit
[]
false
Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = 'm-vote-nonstrict-epoch-4'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
texts = [
    'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
    'Es que los judíos controlan el mundo'
]
print(pipe(texts))
```
1df9c59feaec5d82cea1b7502109bdb6
apache-2.0
[]
false
distilbert-base-en-zh-hi-cased We are sharing smaller versions of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) that handle a custom number of languages. Our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
8dc9471fefd45bedc9f9548f6b0700ac
apache-2.0
[]
false
How to use

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Geotrend/distilbert-base-en-zh-hi-cased")
model = AutoModel.from_pretrained("Geotrend/distilbert-base-en-zh-hi-cased")
```

To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
4a77ece9155210e230d46660ff662993
creativeml-openrail-m
['text-to-image']
false
noggles_fastdb_4800 on Stable Diffusion via Dreambooth, trained with the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
d412c1723763be156110bb951cc0cd03
creativeml-openrail-m
['text-to-image']
false
Model by alxdfy This is the Stable Diffusion model fine-tuned on the noggles_fastdb_4800 concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt(s)`: **test.png** You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb). You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Sample pictures of this concept: test.png ![test.png 0](https://huggingface.co/alxdfy/noggles-fastdb-4800/resolve/main/concept_images/test.png)
7d3b7602c7b0f4bd0efccb6f3bec5ddc
mit
[]
false
Description A fine-tuned regression model that assigns a functioning level to Dutch sentences describing respiration functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about respiration functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model.
9edb95df8d46e8b0ae6f09a532f66bf7
mit
[]
false
Functioning levels

Level | Meaning
---|---
4 | No problem with respiration, and/or respiratory rate is normal (EWS: 9-20).
3 | Shortness of breath in exercise (saturation &ge;90), and/or respiratory rate is slightly increased (EWS: 21-30).
2 | Shortness of breath in rest (saturation &ge;90), and/or respiratory rate is fairly increased (EWS: 31-35).
1 | Needs oxygen at rest or during exercise (saturation &lt;90), and/or respiratory rate &gt;35.
0 | Mechanical ventilation is needed.

The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model.
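Because the regression head is unbounded, downstream code may want to clip (and optionally round) raw predictions onto the 0-4 scale. This post-processing is not part of the released model; the sketch below is an illustrative assumption:

```python
def to_level(raw_score: float, clip: bool = True, round_to_int: bool = False) -> float:
    """Map a raw regression output to the 0-4 functioning-level scale.

    Clipping and rounding are optional post-processing choices made here
    for illustration; the model card does not prescribe them.
    """
    level = raw_score
    if clip:
        level = max(0.0, min(4.0, level))  # keep within the defined scale
    if round_to_int:
        level = float(round(level))  # snap to the nearest discrete level
    return level

print(to_level(4.2))                       # out-of-scale prediction clipped to 4.0
print(to_level(2.26, round_to_int=True))   # nearest discrete level: 2.0
```

Whether to round or keep the continuous score depends on the application; the continuous value preserves information about borderline cases.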
b8698e6f9f55c5bfa44537ea32ce8dab
mit
[]
false
Intended uses and limitations - The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records). - The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled.
5a20b52a2cf18d62cb1cd76525f2f0e2
mit
[]
false
How to use To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:

```
import numpy as np
from simpletransformers.classification import ClassificationModel

model = ClassificationModel(
    'roberta',
    'CLTL/icf-levels-adm',
    use_cuda=False,
)
example = 'Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.'
_, raw_outputs = model.predict([example])
predictions = np.squeeze(raw_outputs)
```

The prediction on the example is:
```
2.26
```
The raw outputs look like this:
```
[[2.26074648]]
```
f4f87304861c500ea67f8f6e4a98bd9d
mit
[]
false
Training data - The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released. - The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines).
aaed733241db3cafaa424947f470e020
mit
[]
false
Evaluation results The evaluation is done on the sentence level (the classification unit) and on the note level (the aggregated unit, which is meaningful for healthcare professionals).

|  | Sentence-level | Note-level |
|---|---|---|
| mean absolute error | 0.48 | 0.37 |
| mean squared error | 0.55 | 0.34 |
| root mean squared error | 0.74 | 0.58 |
fb243445537cf693402767220e490056
mit
[]
false
sd-concepts-library/uma-meme on Stable Diffusion This is the `<uma-object-full>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<uma-object-full> 0](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_7_.jpg) ![<uma-object-full> 1](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/28.jpg) ![<uma-object-full> 2](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_11_.jpg) ![<uma-object-full> 3](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_12_.jpg) ![<uma-object-full> 4](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_1_.png) ![<uma-object-full> 5](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/22.jpg) ![<uma-object-full> 6](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/10.jpg) ![<uma-object-full> 7](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/KakaoTalk_20220904_015246222.jpg) ![<uma-object-full> 8](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/50.jpg) ![<uma-object-full> 9](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed.png) 
![<uma-object-full> 10](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_6_.jpg) ![<uma-object-full> 11](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/21.jpg) ![<uma-object-full> 12](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/FbCVln9WIAA74Z2.png) ![<uma-object-full> 13](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/file.jpg) ![<uma-object-full> 14](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/tt0.png) ![<uma-object-full> 15](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/31.jpg) ![<uma-object-full> 16](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed-1.jpg) ![<uma-object-full> 17](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed.jpg) ![<uma-object-full> 18](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_5_.jpg) ![<uma-object-full> 19](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/3-30-25.png) ![<uma-object-full> 20](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/Fb-Pk97aMAIgbYr.png) ![<uma-object-full> 21](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/2.jpg) ![<uma-object-full> 22](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_2_.png) ![<uma-object-full> 23](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/6.jpg) ![<uma-object-full> 
24](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_1_.jpg) ![<uma-object-full> 25](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/FZoyWUcXwAE3k2K.png) ![<uma-object-full> 26](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_4_.jpg) ![<uma-object-full> 27](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/2022-09-14_13-02-28.png) ![<uma-object-full> 28](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/16.jpg) ![<uma-object-full> 29](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_9_.jpg) ![<uma-object-full> 30](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_10_.jpg) ![<uma-object-full> 31](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/4.jpg) ![<uma-object-full> 32](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_3_.jpg) ![<uma-object-full> 33](https://huggingface.co/sd-concepts-library/sd-concepts-library-uma-meme/resolve/main/concept_images/unnamed_8_.jpg)
2af32cfe0ba1b558aa9ee70f2103d26a
afl-3.0
[]
false
scores: [-2.9463, -2.9463]
```
<strong>Cite us:</strong>
```
@article{rau2022role,
  title={The Role of Complex NLP in Transformers for Text Ranking?},
  author={Rau, David and Kamps, Jaap},
  journal={arXiv preprint arXiv:2207.02522},
  year={2022}
}
```
1e7b29cc1d0393f9d93a8dec96807557
mit
['generated_from_trainer']
false
roberta-base-finetuned-ner This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0497 - Precision: 0.9510 - Recall: 0.9602 - F1: 0.9556 - Accuracy: 0.9892
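A token-classification model trained on CoNLL-2003 predicts BIO tags per token, which downstream code typically groups into entity spans. A minimal sketch of that grouping step (this helper is illustrative and not part of the model or its training code):

```python
def bio_to_spans(tokens, tags):
    """Group BIO tags (e.g. B-PER, I-PER, O) into (entity_type, text) spans."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [token])  # start a new entity
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)      # continue the current entity
        else:
            if current:
                spans.append(current)
            current = None                # O tag or malformed continuation
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

print(bio_to_spans(
    ["EU", "rejects", "German", "call", "Angela", "Merkel"],
    ["B-ORG", "O", "B-MISC", "O", "B-PER", "I-PER"],
))  # → [('ORG', 'EU'), ('MISC', 'German'), ('PER', 'Angela Merkel')]
```

When using the Transformers `pipeline` for token classification, a similar grouping is available via its aggregation options, but the logic above shows what happens underneath.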
3687d0a8e303e76e2e54cdc2eded989f
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2066        | 1.0   | 878  | 0.0699          | 0.9226    | 0.9294 | 0.9260 | 0.9828   |
| 0.0486        | 2.0   | 1756 | 0.0569          | 0.9465    | 0.9549 | 0.9507 | 0.9878   |
| 0.0254        | 3.0   | 2634 | 0.0497          | 0.9510    | 0.9602 | 0.9556 | 0.9892   |
02d506d152cafc199436888fdd2788ce
mit
['conversational']
false
DialoGPT Trained on the Speech of a Game Character

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua")
```
f0da05e6642cc0023867c4db5fffc767
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples-DM This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3248 - Accuracy: 0.8667 - F1: 0.8734
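The F1 reported above is the harmonic mean of precision and recall. For reference, a small helper computing binary F1 from prediction counts (the counts in the example are made up for illustration; this is not the evaluation code used for the model):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Binary F1 from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts, chosen only to demonstrate the formula
print(round(f1_score(80, 10, 15), 4))
```

Note that F1 can differ noticeably from accuracy when the classes are imbalanced, which is why model cards often report both.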
57ccbfbf09af6110985712ea64f051e9
gpl-3.0
['object-detection', 'yolo', 'autogenerated-modelcard']
false
Model Description <!-- Provide a longer summary of what this model is. --> YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance. - **Developed by:** [More Information Needed] - **Shared by [Optional]:** [@nateraw](https://hf.co/nateraw) - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Related Models:** [yolov6s](https://hf.co/nateraw/yolov6s), [yolov6n](https://hf.co/nateraw/yolov6n) - **Parent Model:** N/A - **Resources for more information:** The [official GitHub Repository](https://github.com/meituan/YOLOv6)
cc602d6c7f8513cd1cfa6896a07d0de8
gpl-3.0
['object-detection', 'yolo', 'autogenerated-modelcard']
false
Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This model often classifies objects incorrectly, especially when applied to videos. It does not handle crowds very well.
9160201debb29c2b757314f19c667f92
gpl-3.0
['object-detection', 'yolo', 'autogenerated-modelcard']
false
Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
1dec7232a2c43b5ec1f1ef87d706ef8e