license stringlengths 2 30 | tags stringlengths 2 513 | is_nc bool 1 class | readme_section stringlengths 201 597k | hash stringlengths 32 32 |
|---|---|---|---|---|
apache-2.0 | ['automatic-speech-recognition', 'it'] | false | exp_w2v2t_it_unispeech-sat_s306 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 0a37729972a09928858a5c4aa16670be |
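The card above requires 16 kHz input. A minimal sketch of resampling a waveform before inference, using naive linear interpolation with NumPy (the clip below is synthetic; in practice a proper resampler such as the ones in torchaudio or librosa is preferable to plain interpolation):

```python
import numpy as np

def resample_linear(audio: np.ndarray, orig_sr: int, target_sr: int = 16000) -> np.ndarray:
    """Resample a 1-D waveform via linear interpolation (illustrative only)."""
    n_out = int(round(len(audio) * target_sr / orig_sr))
    x_old = np.linspace(0.0, 1.0, num=len(audio), endpoint=False)
    x_new = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
    return np.interp(x_new, x_old, audio)

# Synthetic 1-second clip at 44.1 kHz, brought down to the required 16 kHz
clip = np.random.randn(44100)
resampled = resample_linear(clip, orig_sr=44100)
print(len(resampled))  # 16000
```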
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xlsr-hausa2-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2993 - Wer: 0.4826 | 8699aaa94829272af9104003429a63ca |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.6e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 13 - gradient_accumulation_steps: 3 - total_train_batch_size: 36 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 50 - mixed_precision_training: Native AMP | a758ae0df1937feb2b27978fb3ba6989 |
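The `total_train_batch_size` of 36 listed above follows from the per-device batch size and gradient accumulation; a quick arithmetic check:

```python
train_batch_size = 12
gradient_accumulation_steps = 3

# With gradient accumulation, each optimizer step effectively sees the
# product of the per-device batch size and the accumulation steps.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 36
```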
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.1549 | 12.5 | 400 | 2.7289 | 1.0 | | 2.0566 | 25.0 | 800 | 0.4582 | 0.6768 | | 0.4423 | 37.5 | 1200 | 0.3037 | 0.5138 | | 0.2991 | 50.0 | 1600 | 0.2993 | 0.4826 | | 6c4d7ca3f9ba9c72d95fd2237b416918 |
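The Wer column above is word error rate: word-level edit distance divided by the number of reference words. A minimal self-contained implementation (the example sentences are made up for illustration):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of four reference words -> WER 0.25
print(wer("ina son karanta littafi", "ina so karanta littafi"))  # 0.25
```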
apache-2.0 | ['translation'] | false | opus-mt-kqn-sv * source languages: kqn * target languages: sv * OPUS readme: [kqn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/kqn-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/kqn-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/kqn-sv/opus-2020-01-09.eval.txt) | 1d8d01f523236ea104d7be4dc9e92755 |
mit | [] | false | Hate Speech Classifier for Social Media Content in English Language A monolingual model for hate speech classification of social media content in English language. The model was trained on 103190 YouTube comments and tested on an independent test set of 20554 YouTube comments. It is based on English BERT base pre-trained language model. | e45f4fbf4406e94bb51ec16e050ad908 |
creativeml-openrail-m | ['stable-diffusion', 'text-to-image', 'image-to-image'] | false | Abstract Animation Sprite Sheets An experimental Dreambooth model trained on individual frames of looping 3D animations that were then laid out on a 4x4 grid. Generates sprite sheets that can create very interesting abstract animations. Use the token **AbstrAnm spritesheet**. Size must be set at 512x512 or your outputs may not work properly. **Example prompt:** <i>AbstrAnm spritesheet, animation of a red glowing orb in the sky, highly detailed, fog, atmosphere, glow, sprites, animated, abstract</i> <br> **Negative prompt:** <i>high contrast, text, overlay</i> <br> Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 8 Feel free to experiment with other types of prompts and/or model merges.  You can also upscale it 4x to produce 512x512 animations. Used SD Upscale from AUTOMATIC1111's web UI to add more sharpness and detail.  Discovered it's actually quite flexible and could even animate less abstract concepts.  
**Prompt 1:** <i>AbstrAnm spritesheet, animation of magical swirling clouds in the clear blue sky, floating in crystal clear water, circular, sunny, timelapse, lens flare, nature, 35mm lens shot, photorealistic, sprites, animated, art by Greg Rutkowski</i> <br> **Negative prompt:** <i>text, overlay, abstract, boring, empty, barren, simple background</i> <br> Steps: 25, Sampler: DPM++ 2S a, CFG scale: 10 **Prompt 2:** <i>AbstrAnm spritesheet, animation of a beautiful flower blowing in the wind, serene, pink, sunny, timelapse, lens flare, nature, 35mm lens shot, photorealistic, sprites, animated, art by Greg Rutkowski</i> **Negative prompt:** <i>text, overlay, abstract, boring, empty, barren, simple background</i> <br> Steps: 25, Sampler: DPM++ 2S a, CFG scale: 10 Some issues with this model: - May not loop seamlessly - Tends to be too noisy - Sprites aren't usually perfect squares - Small size and short animation (could experiment with training on larger resolutions in the future) | a1637be8cca374fcbc2a69b64168cfdb |
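A 512x512 sheet on a 4x4 grid yields sixteen 128x128 frames. A sketch of slicing such a sheet into an animation sequence, using a zero array in place of a generated image:

```python
import numpy as np

def split_spritesheet(sheet: np.ndarray, rows: int = 4, cols: int = 4) -> list:
    """Cut an (H, W, C) sprite sheet into row-major frames."""
    h, w = sheet.shape[0] // rows, sheet.shape[1] // cols
    return [
        sheet[r * h:(r + 1) * h, c * w:(c + 1) * w]
        for r in range(rows)
        for c in range(cols)
    ]

sheet = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for a generated sheet
frames = split_spritesheet(sheet)
print(len(frames), frames[0].shape)  # 16 (128, 128, 3)
```

The frame list can then be written out as a GIF or video to preview the loop.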
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-base-timit-demo-colab_3 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.1942 - Wer: 1.0 | 98106099566ef218fae231278c793f47 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP | d484bbc44b9cceab3d8682b04d4bcc53 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 4.2975 | 3.52 | 500 | 3.1771 | 1.0 | | 3.1468 | 7.04 | 1000 | 3.1917 | 1.0 | | 3.147 | 10.56 | 1500 | 3.1784 | 1.0 | | 3.1467 | 14.08 | 2000 | 3.1850 | 1.0 | | 3.1446 | 17.61 | 2500 | 3.2022 | 1.0 | | 3.1445 | 21.13 | 3000 | 3.2196 | 1.0 | | 3.1445 | 24.65 | 3500 | 3.2003 | 1.0 | | 3.1443 | 28.17 | 4000 | 3.1942 | 1.0 | | 3e4fdd0ba679303e6dbf2b0907858b12 |
apache-2.0 | ['generated_from_trainer'] | false | Fine_Tuning_XLSR_300M_testing_6_model This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2263 - Wer: 1.0 | 7bcff8c031d8658e556a11aae1f50599 |
apache-2.0 | ['translation'] | false | opus-mt-de-nso * source languages: de * target languages: nso * OPUS readme: [de-nso](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-nso/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-nso/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-nso/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-nso/opus-2020-01-20.eval.txt) | 02bcc221ecea99a3de4dc998cdd8c73a |
mit | ['kenlm', 'perplexity', 'n-gram', 'kneser-ney', 'bigscience'] | false | KenLM models This repo contains several KenLM models trained on different tokenized datasets and languages. KenLM models are probabilistic n-gram language models. One use case of these models is fast perplexity estimation for [filtering or sampling large datasets](https://huggingface.co/bertin-project/bertin-roberta-base-spanish). For example, one could use a KenLM model trained on French Wikipedia to run inference on a large dataset and filter out samples that are very unlikely to appear on Wikipedia (high perplexity), or very simple, non-informative sentences that might appear repeatedly (low perplexity). At the root of this repo you will find different directories named after the dataset the models were trained on (e.g. `wikipedia`, `oscar`). Within each directory, you will find several models trained on different language subsets of the dataset (e.g. `en (English)`, `es (Spanish)`, `fr (French)`). For each language you will find three different files: * `{language}.arpa.bin`: The trained KenLM model binary * `{language}.sp.model`: The trained SentencePiece model used for tokenization * `{language}.sp.vocab`: The vocabulary file for the SentencePiece model The models have been trained using some of the preprocessing steps from [cc_net](https://github.com/facebookresearch/cc_net), in particular replacing numbers with zeros and normalizing punctuation. It is therefore important to keep the default values of the `lower_case`, `remove_accents`, `normalize_numbers` and `punctuation` parameters when using the pre-trained models, so that the same preprocessing steps are replicated at inference time. | 4603bb8abf4898730081f2a49cebac6d |
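The two preprocessing steps mentioned above (digits replaced with zeros, punctuation normalized) can be sketched as follows; the actual rules in cc_net are more extensive, so treat this as an approximation:

```python
import re

# A small subset of the punctuation mappings; cc_net's table is much larger.
PUNCT_MAP = {"\u201c": '"', "\u201d": '"', "\u2019": "'", "\u2026": "..."}

def normalize(line: str) -> str:
    """Approximate cc_net-style normalization: map punctuation, zero out digits."""
    for src, tgt in PUNCT_MAP.items():
        line = line.replace(src, tgt)
    return re.sub(r"\d", "0", line)

print(normalize("In 1969, \u201cApollo 11\u201d landed\u2026"))  # In 0000, "Apollo 00" landed...
```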
mit | ['kenlm', 'perplexity', 'n-gram', 'kneser-ney', 'bigscience'] | false | 46793.5 (high perplexity, since the sentence is colloquial and contains grammar mistakes) ``` In the example above we see that, since Wikipedia is a collection of encyclopedic articles, a KenLM model trained on it will naturally give lower perplexity scores to sentences with formal language and no grammar mistakes than colloquial sentences with grammar mistakes. | cea1255d4259fe7b9c0a2314a6ce75af |
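Scores like the one above follow KenLM's convention of reporting a total log10 probability for the sentence; perplexity is then recovered by normalizing over the token count. A sketch with hypothetical numbers:

```python
def perplexity(log10_prob: float, n_tokens: int) -> float:
    """Perplexity from a total log10 probability, as KenLM reports it."""
    return 10.0 ** (-log10_prob / n_tokens)

# A hypothetical 10-token sentence scored at -20 total log10 probability:
print(perplexity(-20.0, 10))  # 100.0
```

Filtering a corpus then reduces to thresholding this value per line: keep sentences whose perplexity falls inside a chosen band.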
apache-2.0 | ['mobile', 'vison', 'image-classification'] | false | Model Details <!-- Give an overview of your model, the relevant research paper, who trained it, etc. --> EfficientFormer-L7, developed by [Snap Research](https://github.com/snap-research), is one of three EfficientFormer models. The EfficientFormer models were released as part of an effort to prove that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance. This checkpoint of EfficientFormer-L7 was trained for 300 epochs. - Developed by: Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren - Language(s): English - License: This model is licensed under the apache-2.0 license - Resources for more information: - [Research Paper](https://arxiv.org/abs/2206.01191) - [GitHub Repo](https://github.com/snap-research/EfficientFormer/) | c6be22d9e3f72990a9967b93453229b0 |
apache-2.0 | ['mobile', 'vison', 'image-classification'] | false | Direct Use This model can be used for image classification and semantic segmentation. On mobile devices (the model was tested on iPhone 12), the CoreML checkpoints will perform these tasks with low latency. | e7ed785f6da4418aa034022fa5d96bdd |
apache-2.0 | ['mobile', 'vison', 'image-classification'] | false | Limitations and Biases Though most designs in EfficientFormer are general-purpose, e.g. the dimension-consistent design and the 4D block with CONV-BN fusion, the actual speed of EfficientFormer may vary on other platforms. For instance, if GeLU is not well supported while HardSwish is efficiently implemented on specific hardware and compilers, the operator may need to be modified accordingly. The proposed latency-driven slimming is simple and fast; however, better results may be achieved if search cost is not a concern and an enumeration-based brute-force search is performed. Since the model was trained on ImageNet-1K, the [biases embedded in that dataset](https://huggingface.co/datasets/imagenet-1k) may be reflected in the model. | 0daa9580222bc6f47dbcde9e3462584a |
apache-2.0 | ['mobile', 'vison', 'image-classification'] | false | Citation Information ```bibtex @article{li2022efficientformer, title={EfficientFormer: Vision Transformers at MobileNet Speed}, author={Li, Yanyu and Yuan, Geng and Wen, Yang and Hu, Eric and Evangelidis, Georgios and Tulyakov, Sergey and Wang, Yanzhi and Ren, Jian}, journal={arXiv preprint arXiv:2206.01191}, year={2022} } ``` | cab32a5b07322aaad98eb9c74d8015da |
creativeml-openrail-m | [] | false | Model mixes Custom models created by combining different models together. You can and should influence the style of these models by mentioning the keywords of the artists included at a sufficiently high weight:\ For example (m_wlop illustration style:1.3) | 310174885ed4ad87f907f8401e6425fe |
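The `(keyword:1.3)` form above is A1111-style attention weighting. A toy parser for the single-group form (real prompt parsers also handle nesting, escapes, and bracket de-emphasis):

```python
import re

def parse_weighted(prompt: str):
    """Extract a (text:weight) group; bare text gets weight 1.0."""
    m = re.fullmatch(r"\((.+):([\d.]+)\)", prompt.strip())
    if m:
        return m.group(1), float(m.group(2))
    return prompt.strip(), 1.0

print(parse_weighted("(m_wlop illustration style:1.3)"))  # ('m_wlop illustration style', 1.3)
```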
creativeml-openrail-m | [] | false | diffmix ★ Similar to anymix, but using add difference for the first-level merges. Specifics have been forgotten; Guweiz and Greg might be included, if I recall correctly, in addition to the models included in anymix. | bbee5dc7d98ee49731da31a8bdd5946a |
creativeml-openrail-m | [] | false | megamix Weighted sum merge between all of my models at equal proportions, including both waifu diffusion and anything v3 versions of the same model. Artists included are Wlop (m_wlop), Nixeu (m_nixeu), RossDraws (m_ross), Cutesexyrobutts (m_robutts), Guweiz (m_guweiz) and Grzegorz Rutkowski (m_greg). | 623e785715ebeb90df32718d7f1610aa |
creativeml-openrail-m | [] | false | model_0 : - smooth.safetensors model_1 : diffmix.safetensors base_alpha : 0.8 output_file: S:\Library\Files\Tools\Super SD 2.0\models\Stable-diffusion\1-different.ckpt weights : 0,0,0,0,0,0,0,0,0,0,0,0,0.85,0.05,0.02,0.01,0.01,0.02,0.05,0.1,0.2,0.4,0.6,0.8,1 skip ids : 0 : 0:None, 1:Skip, 2:Reset | e0628393f0e7063c8628a7c8ad2caab0 |
creativeml-openrail-m | [] | false | model_0 : 1-different.ckpt model_1 : smooth-diff.ckpt base_alpha : 0.1 output_file: S:\Library\Files\Tools\Super SD 2.0\models\Stable-diffusion\2-different.ckpt weights : 0,0,0,0,0,0,0,0,0,0,0,0,0.2,0.15,0.25,0.5,0.7,0.8,0.6,0.2,0.05,0.01,0,0,0 skip ids : 0 : 0:None, 1:Skip, 2:Reset | feb8cb26f08d40c9850912a3515a8090 |
creativeml-openrail-m | [] | false | model_0 : 2-different.ckpt model_1 : protogenX53Photorealism_10.safetensors base_alpha : 0.1 output_file: S:\Library\Files\Tools\Super SD 2.0\models\Stable-diffusion\3-different.ckpt weights : 0.2,0.2,0.2,0.2,0.25,0.25,0.3,0.4,0.4,0.3,0.2,0.1,0.2,0,0,0,0,0,0,0,0,0,0,0,0 skip ids : 0 : 0:None, 1:Skip, 2:Reset | 6143ca5055b79a9f34ed4387e825f3f2 |
creativeml-openrail-m | [] | false | model_0 : 3-different.ckpt model_1 : protogenV22Anime_22.safetensors base_alpha : 0.1 output_file: S:\Library\Files\Tools\Super SD 2.0\models\Stable-diffusion\4-different.ckpt weights : 0.75,0.5,0.3,0.15,0.08,0.04,0.02,0.01,0.01,0.01,0.01,0.01,0.1,0,0,0,0,0,0,0,0,0,0,0,0 skip ids : 0 : 0:None, 1:Skip, 2:Reset | 0a71224002af41d919aa2334a635ca9a |
creativeml-openrail-m | [] | false | model_0 : 4-different.ckpt model_1 : hd-ross.ckpt base_alpha : 0.1 output_file: S:\Library\Files\Tools\Super SD 2.0\models\Stable-diffusion\different-v1.ckpt weights : 0,0,0,0,0,0.1,0.21,0.28,0.3,0.26,0.18,0.1,0.05,0.1,0.18,0.22,0.23,0.2,0.12,0,0,0,0,0,0 skip ids : 0 : 0:None, 1:Skip, 2:Reset | c6c0b58d0c01c5b08dca8a044f9fad1b |
creativeml-openrail-m | [] | false | model_0 : different-v1.ckpt model_1 : anymix-hardlight.ckpt base_alpha : 0.2 output_file: S:\Library\Files\Tools\Super SD 2.0\models\Stable-diffusion\different-v1-x.ckpt weights : 0.05,0.12,0.19,0.2,0.17,0.12,0.06,0.05,0.07,0.08,0.11,0.15,0.25,0.25,0.18,0.11,0.05,0.08,0.12,0.14,0.15,0.13,0.11,0.09,0.1 skip ids : 0 : 0:None, 1:Skip, 2:Reset | a33ac832dfcaeade59fe7ce21d95ec8d |
creativeml-openrail-m | [] | false | model_0 : different-v1-x.ckpt model_1 : AbyssOrangeMix2_nsfw.safetensors base_alpha : 0.1 output_file: S:\Library\Files\Tools\Super SD 2.0\models\Stable-diffusion\different-v3-c.ckpt weights : 0.5,0.4,0.3,0.2,0.2,0.2,0.2,0.2,0.25,0.3,0.35,0.4,0.45,0.4,0.35,0.3,0.25,0.2,0.15,0.1,0.05,0,0,0,0 skip ids : 0 : 0:None, 1:Skip, 2:Reset ``` | f545dbbd837a4f6169fc1ae07190808e |
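The recipes above use a block-weighted merge: one interpolation weight per UNet block (the 25 comma-separated values) plus `base_alpha` for the remaining tensors. The core operation can be sketched on dummy state dicts; the key naming below is purely illustrative, not the actual Stable Diffusion checkpoint layout:

```python
import numpy as np

def merge_block_weighted(theta_0, theta_1, block_weights, base_alpha):
    """Per-tensor linear interpolation: out = (1 - w) * t0 + w * t1."""
    merged = {}
    for name, t0 in theta_0.items():
        t1 = theta_1[name]
        if name.startswith("unet.block_"):
            # UNet tensors pick the weight of the block they belong to.
            w = block_weights[int(name.split("_")[1].split(".")[0])]
        else:
            # Everything else (e.g. text encoder) uses base_alpha.
            w = base_alpha
        merged[name] = (1.0 - w) * t0 + w * t1
    return merged

# Dummy state dicts with 25 "blocks" plus one text-encoder tensor.
rng = np.random.default_rng(0)
t0 = {f"unet.block_{i}.weight": rng.normal(size=4) for i in range(25)}
t0["text_encoder.weight"] = rng.normal(size=4)
t1 = {k: rng.normal(size=4) for k in t0}
out = merge_block_weighted(t0, t1, block_weights=[0.0] * 12 + [0.85] + [0.1] * 12, base_alpha=0.8)
```

A weight of 0 for a block keeps `model_0` unchanged there, which is why the recipes zero out long runs of blocks to transfer only part of a model.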
creativeml-openrail-m | [] | false | Links to models https://huggingface.co/SirVeggie/wlop\ https://huggingface.co/SirVeggie/nixeu\ https://huggingface.co/SirVeggie/ross_draws\ https://huggingface.co/SirVeggie/cutesexyrobutts\ https://huggingface.co/SirVeggie/guweiz\ https://huggingface.co/SirVeggie/greg_rutkowski https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release\ https://huggingface.co/darkstorm2150/Protogen_x5.3_Official_Release\ https://huggingface.co/WarriorMama777/OrangeMixs | ccb7fd92606390905b4f4f05660cb640 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers'] | false | Please enable Hires. fix when using it. Replicant is built by merging several fine-tuned WD1.4 models and photorealistic SD2.0 models, and works with Danbooru tags. I trained 4 models to merge and prepared several LoRA models for tuning. As with SD1.x, merging individually trained models gives better quality than training many concepts at once. This model is a workflow test and is not good enough yet. WD1.4 seems to vary greatly in quality with/without Hires. fix. In Replicant, the difference in quality is more noticeable because of the detailed drawings, so I recommend enabling Hires. fix. | df282854492abc8c5e861e6abd2761d9 |
creativeml-openrail-m | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers'] | false | Example Denoising strength 0.6 is a bit large. I like 0.57 better. The optimal CFG Scale value should also be examined. Hands often multiply. When this happens, increase the value of "extra hands".  ((masterpiece, best quality)), 1girl, flower, solo, dress, holding, sky, cloud, hat, outdoors, bangs, bouquet, rose, expressionless, blush, pink hair, flower field, red flower, pink eyes, white dress, looking at viewer, midium hair, holding flower, small breasts, red rose, holding bouquet, sun hat, white headwear, depth of field Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2), inaccurate eyes, extra digit,(extra arms:1.2), extra hands, fewer digits ,long body, cropped, jpeg artifacts, signature, watermark, username, blurry, empty eyes Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 576x384, Denoising strength: 0.6, Hires upscale: 2, Hires upscaler: Latent  ((masterpiece, best quality)), 1girl, skirt, shoes, solo, jacket, holding, alley, sitting, can, sneakers, hood, bag, hoodie, squatting, bangs, shirt, black hair, black skirt, short hair, white jacket, looking away, white footwear, full body, red eyes, long sleeves, open jacket, open clothes, holding can, Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2), inaccurate eyes, extra digit,(extra arms:1.2), extra legs, extra hands, fewer digits , long body, cropped, jpeg artifacts, signature, watermark, username, blurry, empty eyes,drinking Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 576x384, Denoising strength: 0.6, Hires upscale: 2, Hires upscaler: Latent  ((masterpiece, best quality)), 1girl, blood, solo, wings, halo, dress, socks, angel, long hair, shoes, standing, ribbon, long hair, blue eyes, angel wings, blood on clothes, white hair, full body, white wings, black footwear, white dress, feathered wings, 
white sock, white background, long sleeves, simple background, Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2), inaccurate eyes, extra digit,(extra arms:1.2), extra legs, extra hands, fewer digits , long body, cropped, jpeg artifacts, signature, watermark, username, blurry, empty eyes Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 384x576, Denoising strength: 0.57, Hires upscale: 2, Hires upscaler: Latent  ((masterpiece, best quality)), 1girl, car, solo, shorts, jacket, bangs, sitting, shirt, shoes, hairclip, socks, sneakers, denim, sidelocks, motor vehicle, long hair, ground vehicle,brown hair, looking at viewer, white shirt, black jacket, long sleeves, sports car, vehicle focus, aqua eyes, white socks, blue shorts, open clothes, black footwear, denim shorts, open jacket Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2), inaccurate eyes, extra digit, (extra arms:1.2), extra hands, fewer digits ,long body, cropped, jpeg artifacts, signature, watermark, username, blurry, empty eyes Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 384x576, Denoising strength: 0.6, Hires upscale: 2, Hires upscaler: Latent  ((masterpiece, best quality)), 1girl, solo, twintails, lollipop, smile, ahoge, hairclip, bow, holding, ribbon, frills, blush, shirt, :d, stuffed toy, pink hair, stuffed animal, red nails, hair ornament, open mouth, looking at viewer, stuffed bunny, nail polish, short sleeves, object hug, puffy sleeves, hair between eyes, upper body, light blue eyes, puffy short sleeves, holding stuffed toy, hair bow, white bow, doll hug, hair ribbon, streaked hair, white shirt Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2), inaccurate eyes, extra digit, (extra arms:1.2), extra hands, fewer digits ,long body, cropped, jpeg artifacts, signature, watermark, username, blurry, empty eyes Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 
512x512, Denoising strength: 0.57, Hires upscale: 2, Hires upscaler: Latent  ((masterpiece, best quality)), 1girl, solo, tail, barefoot, skirt, sleeping, lying, grass, shirt, outdoors, socks, flower, long hair, on side, animal ears, blonde hair, cat tail, closed eyes, blue skirt, white shirt, cat ears, school uniform, dappled sunlight, short sleeves, bare legs, closed mouth, full body, pleated skirt Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2), inaccurate eyes, extra digit, (extra arms:1.2), extra hands, fewer digits ,long body, cropped, jpeg artifacts, signature, watermark, username, blurry, empty eyes Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 576x384, Denoising strength: 0.6, Hires upscale: 2, Hires upscaler: Latent  ((masterpiece, best quality)), 1girl, car, building, gun, weapon, outdoors, solo, military, day, city, standing, serious, pants, rifle, holding, jacket, motor vehicle, ground vehicle, brown hair, assault rifle, long hair, vehicle focus, holding gun, holding weapon, black footwear, military vehicle, full body, depth of field, Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2), inaccurate eyes, extra digit, (extra arms:1.2), extra hands, fewer digits ,long body, cropped, jpeg artifacts, signature, watermark, username, blurry, empty eyes Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 576x384, Denoising strength: 0.6, Hires upscale: 2, Hires upscaler: Latent | 786c758fc0ae6d5e025c9003c777ff9d |
apache-2.0 | ['generated_from_trainer'] | false | distilroberta-base-wikitextepoch_50 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6360 | 840c7ebe54d97448c25df53e06da5774 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 | 514ae77af26492334b19fc031aef2bd5 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 1.9729 | 1.0 | 2145 | 1.7725 | | 1.9158 | 2.0 | 4290 | 1.7521 | | 1.8479 | 3.0 | 6435 | 1.7376 | | 1.8081 | 4.0 | 8580 | 1.7272 | | 1.7966 | 5.0 | 10725 | 1.7018 | | 1.7284 | 6.0 | 12870 | 1.7010 | | 1.7198 | 7.0 | 15015 | 1.6868 | | 1.6985 | 8.0 | 17160 | 1.6879 | | 1.6712 | 9.0 | 19305 | 1.6930 | | 1.6489 | 10.0 | 21450 | 1.6594 | | 1.6643 | 11.0 | 23595 | 1.6856 | | 1.6215 | 12.0 | 25740 | 1.6816 | | 1.6125 | 13.0 | 27885 | 1.6714 | | 1.5936 | 14.0 | 30030 | 1.6760 | | 1.5745 | 15.0 | 32175 | 1.6660 | | 1.572 | 16.0 | 34320 | 1.6690 | | 1.5614 | 17.0 | 36465 | 1.6807 | | 1.558 | 18.0 | 38610 | 1.6711 | | 1.5305 | 19.0 | 40755 | 1.6446 | | 1.5021 | 20.0 | 42900 | 1.6573 | | 1.4923 | 21.0 | 45045 | 1.6648 | | 1.5086 | 22.0 | 47190 | 1.6757 | | 1.4895 | 23.0 | 49335 | 1.6525 | | 1.4918 | 24.0 | 51480 | 1.6577 | | 1.4642 | 25.0 | 53625 | 1.6633 | | 1.4604 | 26.0 | 55770 | 1.6462 | | 1.4644 | 27.0 | 57915 | 1.6509 | | 1.4633 | 28.0 | 60060 | 1.6417 | | 1.4188 | 29.0 | 62205 | 1.6519 | | 1.4066 | 30.0 | 64350 | 1.6363 | | 1.409 | 31.0 | 66495 | 1.6419 | | 1.4029 | 32.0 | 68640 | 1.6510 | | 1.4013 | 33.0 | 70785 | 1.6522 | | 1.3939 | 34.0 | 72930 | 1.6498 | | 1.3648 | 35.0 | 75075 | 1.6423 | | 1.3682 | 36.0 | 77220 | 1.6504 | | 1.3603 | 37.0 | 79365 | 1.6511 | | 1.3621 | 38.0 | 81510 | 1.6533 | | 1.3783 | 39.0 | 83655 | 1.6426 | | 1.3707 | 40.0 | 85800 | 1.6542 | | 1.3628 | 41.0 | 87945 | 1.6671 | | 1.3359 | 42.0 | 90090 | 1.6394 | | 1.3433 | 43.0 | 92235 | 1.6409 | | 1.3525 | 44.0 | 94380 | 1.6366 | | 1.3312 | 45.0 | 96525 | 1.6408 | | 1.3389 | 46.0 | 98670 | 1.6225 | | 1.3323 | 47.0 | 100815 | 1.6309 | | 1.3294 | 48.0 | 102960 | 1.6151 | | 1.3356 | 49.0 | 105105 | 1.6374 | | 1.3285 | 50.0 | 107250 | 1.6360 | | 6a39f9dbf9ee68fb9511a06a78f95df3 |
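For a masked language model like this, the validation loss above is a mean cross-entropy in nats, so its exponential gives a (pseudo-)perplexity:

```python
import math

final_eval_loss = 1.6360  # final validation loss from the table above
pseudo_perplexity = math.exp(final_eval_loss)
print(round(pseudo_perplexity, 2))  # 5.13
```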
apache-2.0 | ['Sound Classification', 'CNN14'] | false | <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> | 496d2ce0d9cb7eca888e5711e0f24ec1 |
apache-2.0 | ['Sound Classification', 'CNN14'] | false | CNN14 Trained on VGGSound Dataset with SimCLR and Fine-Tuned on ESC50 This repository provides all the necessary tools to perform audio classification with the [CNN14 model](https://arxiv.org/abs/1912.10211), implemented with SpeechBrain. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The encoder is first trained with SimCLR on the VGGSound dataset, and then fine-tuned on ESC50 folds 1, 2, 3. | Release | Classification Accuracy Valid | Classification Accuracy Test | |:-------------:|:--------------:|:--------------:| | 26-11-22 | 90% | 82% | | 2cd0d1f4464f94b1b7ec657c3452bbf0 |
apache-2.0 | ['Sound Classification', 'CNN14'] | false | Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). | 3fb830bc731f670aff24e249a2bd6dcd |
apache-2.0 | ['Sound Classification', 'CNN14'] | false | Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` | 3f02699a3293db8166924ac7a6af3bef |
apache-2.0 | ['Sound Classification', 'CNN14'] | false | Referencing This Pretrained Model The encoder is originally trained for our [paper](https://arxiv.org/pdf/2205.07390.pdf). You can reference our paper if you use this model for your research. ```bibtex @inproceedings{wang2022CRL, title={Learning Representations for New Sound Classes With Continual Self-Supervised Learning}, author={Zhepei Wang, Cem Subakan, Xilin Jiang, Junkai Wu, Efthymios Tzinis, Mirco Ravanelli, Paris Smaragdis}, year={2022}, booktitle={Accepted to IEEE Signal Processing Letters} } ``` | 00c4bdf7f7b6919f5f611e62aceadd2b |
apache-2.0 | ['generated_from_trainer'] | false | bert-base-uncased-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0905 - Precision: 0.9068 - Recall: 0.9200 - F1: 0.9133 - Accuracy: 0.9787 | 8a0e9e872c00a128339e680c92928258 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1266 | 1.0 | 1123 | 0.0952 | 0.8939 | 0.8869 | 0.8904 | 0.9742 | | 0.0741 | 2.0 | 2246 | 0.0866 | 0.8936 | 0.9247 | 0.9089 | 0.9774 | | 0.0496 | 3.0 | 3369 | 0.0905 | 0.9068 | 0.9200 | 0.9133 | 0.9787 | | 63116bc1edb0de3c759c4e715bddabc8 |
apache-2.0 | ['translation'] | false | eng-itc * source group: English * target group: Italic languages * OPUS readme: [eng-itc](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md) * model: transformer * source language(s): eng * target language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lat_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.eval.txt) | 7cbba8ae92757ca3b8748bb394a9b72d |
apache-2.0 | ['translation'] | false | Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2016-enro-engron.eng.ron | 27.1 | 0.565 | | newsdiscussdev2015-enfr-engfra.eng.fra | 29.9 | 0.574 | | newsdiscusstest2015-enfr-engfra.eng.fra | 35.3 | 0.609 | | newssyscomb2009-engfra.eng.fra | 27.7 | 0.567 | | newssyscomb2009-engita.eng.ita | 28.6 | 0.586 | | newssyscomb2009-engspa.eng.spa | 29.8 | 0.569 | | news-test2008-engfra.eng.fra | 25.0 | 0.536 | | news-test2008-engspa.eng.spa | 27.1 | 0.548 | | newstest2009-engfra.eng.fra | 26.7 | 0.557 | | newstest2009-engita.eng.ita | 28.9 | 0.583 | | newstest2009-engspa.eng.spa | 28.9 | 0.567 | | newstest2010-engfra.eng.fra | 29.6 | 0.574 | | newstest2010-engspa.eng.spa | 33.8 | 0.598 | | newstest2011-engfra.eng.fra | 30.9 | 0.590 | | newstest2011-engspa.eng.spa | 34.8 | 0.598 | | newstest2012-engfra.eng.fra | 29.1 | 0.574 | | newstest2012-engspa.eng.spa | 34.9 | 0.600 | | newstest2013-engfra.eng.fra | 30.1 | 0.567 | | newstest2013-engspa.eng.spa | 31.8 | 0.576 | | newstest2016-enro-engron.eng.ron | 25.9 | 0.548 | | Tatoeba-test.eng-arg.eng.arg | 1.6 | 0.120 | | Tatoeba-test.eng-ast.eng.ast | 17.2 | 0.389 | | Tatoeba-test.eng-cat.eng.cat | 47.6 | 0.668 | | Tatoeba-test.eng-cos.eng.cos | 4.3 | 0.287 | | Tatoeba-test.eng-egl.eng.egl | 0.9 | 0.101 | | Tatoeba-test.eng-ext.eng.ext | 8.7 | 0.287 | | Tatoeba-test.eng-fra.eng.fra | 44.9 | 0.635 | | Tatoeba-test.eng-frm.eng.frm | 1.0 | 0.225 | | Tatoeba-test.eng-gcf.eng.gcf | 0.7 | 0.115 | | Tatoeba-test.eng-glg.eng.glg | 44.9 | 0.648 | | Tatoeba-test.eng-hat.eng.hat | 30.9 | 0.533 | | Tatoeba-test.eng-ita.eng.ita | 45.4 | 0.673 | | Tatoeba-test.eng-lad.eng.lad | 5.6 | 0.279 | | Tatoeba-test.eng-lat.eng.lat | 12.1 | 0.380 | | Tatoeba-test.eng-lij.eng.lij | 1.4 | 0.183 | | Tatoeba-test.eng-lld.eng.lld | 0.5 | 0.199 | | Tatoeba-test.eng-lmo.eng.lmo | 0.7 | 0.187 | | Tatoeba-test.eng-mfe.eng.mfe | 83.6 | 0.909 | | Tatoeba-test.eng-msa.eng.msa | 31.3 | 
0.549 | | Tatoeba-test.eng.multi | 38.0 | 0.588 | | Tatoeba-test.eng-mwl.eng.mwl | 2.7 | 0.322 | | Tatoeba-test.eng-oci.eng.oci | 8.2 | 0.293 | | Tatoeba-test.eng-pap.eng.pap | 46.7 | 0.663 | | Tatoeba-test.eng-pms.eng.pms | 2.1 | 0.194 | | Tatoeba-test.eng-por.eng.por | 41.2 | 0.635 | | Tatoeba-test.eng-roh.eng.roh | 2.6 | 0.237 | | Tatoeba-test.eng-ron.eng.ron | 40.6 | 0.632 | | Tatoeba-test.eng-scn.eng.scn | 1.6 | 0.181 | | Tatoeba-test.eng-spa.eng.spa | 49.5 | 0.685 | | Tatoeba-test.eng-vec.eng.vec | 1.6 | 0.223 | | Tatoeba-test.eng-wln.eng.wln | 7.1 | 0.250 | | 5fdc8590a20cfaa3c317e17cd147c5c7 |
apache-2.0 | ['translation'] | false | System Info: - hf_name: eng-itc - source_languages: eng - target_languages: itc - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-itc/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'sc', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'itc'] - src_constituents: {'eng'} - tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'bjn', 'lmo', 'mwl', 'lij', 'lat_Latn', 'lad_Latn', 'pcd', 'lat_Grek', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'srd', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'} - src_multilingual: False - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-itc/opus2m-2020-08-01.test.txt - src_alpha3: eng - tgt_alpha3: itc - short_pair: en-itc - chrF2_score: 0.588 - bleu: 38.0 - brevity_penalty: 0.9670000000000001 - ref_len: 73951.0 - src_name: English - tgt_name: Italic languages - train_date: 2020-08-01 - src_alpha2: en - tgt_alpha2: itc - prefer_old: False - long_pair: eng-itc - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41 | e7a9d1b90d0e80938f055a5c05530c7a |
apache-2.0 | ['translation'] | false | tur-aze * source group: Turkish * target group: Azerbaijani * OPUS readme: [tur-aze](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-aze/README.md) * model: transformer-align * source language(s): tur * target language(s): aze_Latn * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.eval.txt) | df69521607868894ce0c65aecf88cb1c |
apache-2.0 | ['translation'] | false | System Info: - hf_name: tur-aze - source_languages: tur - target_languages: aze - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/tur-aze/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['tr', 'az'] - src_constituents: {'tur'} - tgt_constituents: {'aze_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/tur-aze/opus-2020-06-16.test.txt - src_alpha3: tur - tgt_alpha3: aze - short_pair: tr-az - chrF2_score: 0.551 - bleu: 27.7 - brevity_penalty: 1.0 - ref_len: 5436.0 - src_name: Turkish - tgt_name: Azerbaijani - train_date: 2020-06-16 - src_alpha2: tr - tgt_alpha2: az - prefer_old: False - long_pair: tur-aze - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41 | eeac422e0a759d0151513923a2cb357a |
mit | ['huggan', 'gan'] | false | Training data Describe the data you used to train the model. If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with a description of the pre-training data. | 1f81348968346d14df1b6c302fdf59fa |
apache-2.0 | ['generated_from_keras_callback'] | false | Sounak/bert-large-finetuned This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.7634 - Validation Loss: 1.6843 - Epoch: 0 | 32c3c73c7d42def5d4d9584c7cc8644c |
apache-2.0 | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 157, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 | e50a1160176bb050cbab191878805c62 |
mit | ['automatic-speech-recognition', 'generated_from_trainer'] | false | Model description We pre-trained a wav2vec 2.0 base model on 842h of unlabelled Luxembourgish speech collected from [RTL.lu](https://www.rtl.lu/). The model was then fine-tuned on 4h of labelled Luxembourgish speech from the same domain. Additionally, we rescored the output transcriptions with a 5-gram language model trained on text corpora from RTL.lu and the Luxembourgish parliament. | 71135b5a6959ece4c4711cc8fa39b627 |
mit | ['automatic-speech-recognition', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 3 - eval_batch_size: 3 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50.0 - mixed_precision_training: Native AMP | bc6e51dd7c334af33658e5095ddd286f |
mit | ['automatic-speech-recognition', 'generated_from_trainer'] | false | Citation This model is a result of our paper `IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS` submitted to the [IEEE SLT 2022 workshop](https://slt2022.org/) ``` @misc{lb-wav2vec2, author = {Nguyen, Le Minh and Nayak, Shekhar and Coler, Matt}, keywords = {Luxembourgish, multilingual speech recognition, language modelling, wav2vec 2.0 XLSR-53, under-resourced language}, title = {IMPROVING LUXEMBOURGISH SPEECH RECOGNITION WITH CROSS-LINGUAL SPEECH REPRESENTATIONS}, year = {2022}, copyright = {2023 IEEE} } ``` | 51ae7cd97af009ad9f3a767379eb82c1 |
apache-2.0 | ['generated_from_trainer'] | false | finetuned_token_itr0_3e-05_all_16_02_2022-20_12_04 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1620 - Precision: 0.3509 - Recall: 0.3793 - F1: 0.3646 - Accuracy: 0.9468 | 29d6997e3cfb8a5016ceca11a7814c9d |
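The F1 figure in these cards is the harmonic mean of the reported precision and recall; a quick sketch (the helper name is mine) for sanity-checking the reported metrics:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# Eval metrics reported above: precision 0.3509, recall 0.3793 -> F1 ~ 0.3646
f1 = f1_score(0.3509, 0.3793)
```

Recomputing from the rounded precision/recall values reproduces the card's F1 to within rounding error.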
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 | 6fa0725d8311999604e7dd8a40e49364 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 38 | 0.2997 | 0.1125 | 0.2057 | 0.1454 | 0.8669 | | No log | 2.0 | 76 | 0.2620 | 0.1928 | 0.2849 | 0.2300 | 0.8899 | | No log | 3.0 | 114 | 0.2497 | 0.1923 | 0.2906 | 0.2314 | 0.8918 | | No log | 4.0 | 152 | 0.2474 | 0.1819 | 0.3377 | 0.2365 | 0.8905 | | No log | 5.0 | 190 | 0.2418 | 0.2128 | 0.3264 | 0.2576 | 0.8997 | | 93231dbedd92419a57e53c5cdd605161 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1560 | 09bf22fa34eb75eb0c1697195b543e75 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2252 | 1.0 | 5533 | 1.1671 | | 0.9494 | 2.0 | 11066 | 1.1279 | | 0.7696 | 3.0 | 16599 | 1.1560 | | 20c11a75b3325a6eb27d26a47b57de90 |
apache-2.0 | ['generated_from_trainer'] | false | mt5-small-finetuned-tradition-zh This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xlsum dataset. It achieves the following results on the evaluation set: - Loss: 2.9218 - Rouge1: 5.7806 - Rouge2: 1.266 - Rougel: 5.761 - Rougelsum: 5.7833 | a9475f2085738d018f6281e27199535f |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 | fa0fb4dfe34a717fbc37bf2e1a8dad7b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:| | 4.542 | 1.0 | 2336 | 3.1979 | 4.8334 | 1.025 | 4.8142 | 4.8326 | | 3.7542 | 2.0 | 4672 | 3.0662 | 5.2155 | 1.0978 | 5.2025 | 5.2158 | | 3.5706 | 3.0 | 7008 | 3.0070 | 5.5471 | 1.3397 | 5.5386 | 5.5391 | | 3.4668 | 4.0 | 9344 | 2.9537 | 5.5865 | 1.1558 | 5.5816 | 5.5964 | | 3.4082 | 5.0 | 11680 | 2.9391 | 5.8061 | 1.3462 | 5.7944 | 5.812 | | 3.375 | 6.0 | 14016 | 2.9218 | 5.7806 | 1.266 | 5.761 | 5.7833 | | 1af8510b7e3506d436893f2b66fc13f3 |
mit | ['generated_from_trainer'] | false | hopeful_newton This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets. 
| f1b939d1a994b028e4abb7ee7ff599df |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 3147 - mixed_precision_training: Native AMP | ab12f1bbd22754f45760b29d9ef9000d |
mit | ['generated_from_trainer'] | false | Full config {'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True, 'skip_tokens': 1649999872}, 'generation': {'every_n_steps': 32, 'force_call_on': [25177], 
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}], 'scorer_config': {}}, 'kl_gpt3_callback': {'every_n_steps': 32, 'force_call_on': [25177], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90', 'value_head_config': {'is_detached': False}}, 'path_or_name': 'tomekkorbak/nervous_wozniak'}, 'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 512, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'hopeful_newton', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 3346, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1649999872, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} | 01caf5502bb598f42d281abffd118eb6 |
apache-2.0 | [] | false | c4multilingual) corpus, covering 101 languages: Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu. **Note**: mT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: [mC4](https://www.tensorflow.org/datasets/catalog/c4 | b413ebb73e1b869c06400fa27adbd18c |
apache-2.0 | [] | false | c4multilingual) Other Community Checkpoints: [here](https://huggingface.co/models?search=mt5) Paper: [mT5: A massively multilingual pre-trained text-to-text transformer](https://arxiv.org/abs/2010.11934) Authors: *Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel* | e7395769be3629b64071141263cef082 |
apache-2.0 | [] | false | Abstract The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We describe the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. All of the code and model checkpoints used in this work are publicly available. | 89d78efb23475ee078b4c25511a07697 |
apache-2.0 | ['generated_from_trainer'] | false | mobilebert_sa_GLUE_Experiment_logit_kd_data_aug_cola_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.7034 - Matthews Correlation: 0.1046 | 3687a01142be1575287c87faa53713b9 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:-----:|:---------------:|:--------------------:| | 0.6386 | 1.0 | 1669 | 0.7034 | 0.1046 | | 0.5613 | 2.0 | 3338 | 0.7201 | 0.0912 | | 0.535 | 3.0 | 5007 | 0.7257 | 0.1111 | | 0.5023 | 4.0 | 6676 | 0.7109 | 0.1655 | | 0.4569 | 5.0 | 8345 | 0.7769 | 0.1762 | | 0.4162 | 6.0 | 10014 | 0.7752 | 0.1431 | | 7084dea8b7b704e0c6aa739f5932bb71 |
apache-2.0 | ['automatic-speech-recognition', 'nl'] | false | exp_w2v2t_nl_vp-sv_s607 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 2d9222a2b46af76ac901452087b992ce |
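Several of these cards mention the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool and require 16 kHz input. A minimal sketch of checking the sample rate and transcribing; the repo id `jonatasgrosman/exp_w2v2t_nl_vp-sv_s607` is an assumption based on the card's title:

```python
TARGET_SAMPLE_RATE = 16_000  # these cards require 16 kHz speech input

def needs_resampling(sample_rate: int) -> bool:
    """Return True when audio must be resampled before transcription."""
    return sample_rate != TARGET_SAMPLE_RATE

def transcribe(audio_paths):
    # Lazy import: requires `pip install huggingsound` and network access.
    from huggingsound import SpeechRecognitionModel
    model = SpeechRecognitionModel(
        "jonatasgrosman/exp_w2v2t_nl_vp-sv_s607")  # assumed Hub repo id
    return model.transcribe(audio_paths)
```

Audio at any other rate (e.g. 44.1 kHz CD audio) should be resampled to 16 kHz first, otherwise transcription quality degrades badly.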
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-owndata This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2515 - Wer: 0.3212 | 053394e90212ec825a69bd67dea39f6b |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 5 - mixed_precision_training: Native AMP | 7b01f8f4eb41a3973c683d27e3c3041b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.262 | 0.36 | 100 | 3.4482 | 0.9832 | | 3.0032 | 0.72 | 200 | 2.9441 | 0.9832 | | 2.9141 | 1.08 | 300 | 2.9393 | 0.9832 | | 2.8585 | 1.44 | 400 | 2.8848 | 0.9627 | | 2.2837 | 1.8 | 500 | 2.1732 | 1.0111 | | 0.9834 | 2.16 | 600 | 0.8765 | 0.7345 | | 0.7288 | 2.52 | 700 | 0.5741 | 0.5641 | | 0.5521 | 2.88 | 800 | 0.3937 | 0.4467 | | 0.3751 | 3.24 | 900 | 0.3484 | 0.4112 | | 0.3733 | 3.6 | 1000 | 0.2964 | 0.3912 | | 0.2443 | 3.96 | 1100 | 0.2673 | 0.3446 | | 0.2667 | 4.32 | 1200 | 0.2657 | 0.3357 | | 0.2237 | 4.68 | 1300 | 0.2515 | 0.3212 | | 4b0f8afa2c87b717ed31d2eb642789bb |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-MLM This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2156 - Accuracy: 0.5252 | 61c416e4062e946365021f731e56cff7 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 | 14a90f803a2d6c0f06efea5eebbd137b |
apache-2.0 | ['generated_from_trainer'] | false | bert-finetuned-ner-80percent This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5462 - Precision: 0.8116 - Recall: 0.8408 - F1: 0.8260 - Accuracy: 0.9238 | c40e036878e7a0e6b454ccd8aec943a1 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 60 | 0.5514 | 0.7966 | 0.8348 | 0.8152 | 0.9170 | | No log | 2.0 | 120 | 0.5718 | 0.8020 | 0.8333 | 0.8174 | 0.9184 | | No log | 3.0 | 180 | 0.5462 | 0.8116 | 0.8408 | 0.8260 | 0.9238 | | 12ea7ac0e72a630b49c5cb1a3f24337c |
apache-2.0 | ['automatic-speech-recognition', 'en'] | false | exp_w2v2r_en_xls-r_gender_male-10_female-0_s287 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 9efcffbb31f4471355d19c5a3b179592 |
apache-2.0 | ['deep-narrow'] | false | T5-Efficient-SMALL-NL8 (Deep-Narrow version) T5-Efficient-SMALL-NL8 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally be more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. 
To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. | 2b9757735678ce3646431c777c05d29b |
apache-2.0 | ['deep-narrow'] | false | Details model architecture This model checkpoint - **t5-efficient-small-nl8** - is of model type **Small** with the following variations: - **nl** is **8** It has **75.21** million parameters and thus requires *ca.* **300.84 MB** of memory in full precision (*fp32*) or **150.42 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | | abd88715e34156a7e32336c8a8f90bf4 |
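The memory figures in the card follow directly from the parameter count: millions of parameters times bytes per parameter equals megabytes. A small sketch (function name is mine) reproducing the card's numbers:

```python
def memory_mb(params_millions: float, bytes_per_param: int) -> float:
    """Approximate checkpoint memory: parameters x bytes per parameter, in MB."""
    # params_millions * 1e6 params * bytes / 1e6 bytes-per-MB == params_millions * bytes
    return params_millions * bytes_per_param

# t5-efficient-small-nl8: 75.21 M parameters
fp32_mb = memory_mb(75.21, 4)  # full precision (fp32): 300.84 MB
fp16_mb = memory_mb(75.21, 2)  # half precision (fp16/bf16): 150.42 MB
```

This counts weights only; optimizer state and activations add substantially more during training.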
apache-2.0 | ['translation'] | false | opus-mt-es-bzs * source languages: es * target languages: bzs * OPUS readme: [es-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-bzs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-bzs/opus-2020-01-16.eval.txt) | 35bb30d41e67ebd6cd4aa848232103a2 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2187 - Accuracy: 0.924 - F1: 0.9241 | 82937e3791cf97dc74b45519f223a40b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8161 | 1.0 | 250 | 0.3112 | 0.9135 | 0.9102 | | 0.2468 | 2.0 | 500 | 0.2187 | 0.924 | 0.9241 | | b78a4606d302ad1b4a72850db7af91b0 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2174 - Accuracy: 0.927 - F1: 0.9271 | 4ee1a55f48ffae29f9ab6f8968de789a |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8148 | 1.0 | 250 | 0.3148 | 0.9 | 0.8967 | | 0.2487 | 2.0 | 500 | 0.2174 | 0.927 | 0.9271 | | f83a175e68d66b06c92af7a3a9f870a7 |
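For the emotion classifiers above, inference typically goes through the `transformers` text-classification pipeline. A hedged sketch; the repo id placeholder must be replaced with the actual fine-tuned checkpoint, and `top_label` is an illustrative helper of mine:

```python
def top_label(scores: dict) -> str:
    """Pick the highest-scoring label from a {label: score} mapping."""
    return max(scores, key=scores.get)

def classify(texts):
    # Lazy import: requires `transformers` plus a backend such as torch.
    from transformers import pipeline
    clf = pipeline(
        "text-classification",
        model="<user>/distilbert-base-uncased-finetuned-emotion")  # substitute the real repo id
    return clf(texts)
```

The pipeline returns one `{"label": ..., "score": ...}` dict per input text; `top_label` is only needed if you request all class scores.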
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xls-r-300m-vietnamese-cv11.0-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.6392 - Wer: 0.4792 | 2c37102e7a71d1f49d3918806f329c0d |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP | 11d30b1c3ca6064081a1058ff6efd64b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 10.0365 | 4.55 | 400 | 3.4508 | 0.9984 | | 2.5036 | 9.09 | 800 | 1.0268 | 0.6972 | | 0.5974 | 13.64 | 1200 | 0.7071 | 0.5492 | | 0.3221 | 18.18 | 1600 | 0.6401 | 0.5071 | | 0.2046 | 22.73 | 2000 | 0.6154 | 0.4871 | | 0.1445 | 27.27 | 2400 | 0.6392 | 0.4792 | | 119c24dd80f62c4cccdcfe49ca31ec07 |
apache-2.0 | ['automatic-speech-recognition', 'et'] | false | exp_w2v2t_et_hubert_s390 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | df3f4ee2debfe0ee1632fc02f6515500 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Whisper_large_Shona This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the google/fleurs sn_zw dataset. It achieves the following results on the evaluation set: - Loss: 0.9189 - Wer: 37.5 | 0ffb16c05df2e9f109d5c5ecc27b7492 |
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - training_steps: 1000 | 12817b9f5d40bdb76c4b7d8ae836b6f4 |
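The `total_train_batch_size` reported in these hyperparameter lists is the product of the per-device batch size, the gradient accumulation steps, and (for distributed runs) the device count. A sketch of that arithmetic (function name is mine):

```python
def effective_batch_size(per_device: int, grad_accum: int, num_devices: int = 1) -> int:
    """total_train_batch_size = per-device batch x accumulation steps x devices."""
    return per_device * grad_accum * num_devices

# The run above: 8 per device, 2 accumulation steps, 4 GPUs -> 64
total = effective_batch_size(8, 2, 4)
```

The same formula with `num_devices=1` reproduces the single-GPU runs elsewhere in this section (e.g. 16 x 2 = 32).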
apache-2.0 | ['whisper-event', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0005 | 41.64 | 500 | 0.8784 | 37.525 | | 0.0003 | 83.32 | 1000 | 0.9189 | 37.5 | | 710e4dd59f4fa294f9c654baf5aeb2c0 |
apache-2.0 | ['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event'] | false | This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset. It achieves the following results on the evaluation set: - Loss: 0.7805 - Wer: 0.4340 | 645a5bb9b197eab3294e8462e6ac5fa1 |
apache-2.0 | ['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 8000 - mixed_precision_training: Native AMP | f439ba4ea9eb512d9a78b7cb8f0f3b93 |
apache-2.0 | ['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.36 | 400 | 1.9130 | 0.9244 | | 5.0013 | 2.71 | 800 | 0.7789 | 0.5944 | | 0.6544 | 4.07 | 1200 | 0.7298 | 0.5852 | | 0.4021 | 5.42 | 1600 | 0.6978 | 0.5667 | | 0.3003 | 6.78 | 2000 | 0.6764 | 0.5382 | | 0.3003 | 8.14 | 2400 | 0.7249 | 0.5463 | | 0.2345 | 9.49 | 2800 | 0.7280 | 0.5124 | | 0.1993 | 10.85 | 3200 | 0.7289 | 0.4690 | | 0.1617 | 12.2 | 3600 | 0.7431 | 0.4733 | | 0.1432 | 13.56 | 4000 | 0.7448 | 0.4733 | | 0.1432 | 14.92 | 4400 | 0.7746 | 0.4485 | | 0.1172 | 16.27 | 4800 | 0.7589 | 0.4742 | | 0.1035 | 17.63 | 5200 | 0.7539 | 0.4353 | | 0.0956 | 18.98 | 5600 | 0.7648 | 0.4495 | | 0.0845 | 20.34 | 6000 | 0.7877 | 0.4719 | | 0.0845 | 21.69 | 6400 | 0.7884 | 0.4434 | | 0.0761 | 23.05 | 6800 | 0.7796 | 0.4386 | | 0.0634 | 24.41 | 7200 | 0.7729 | 0.4306 | | 0.0571 | 25.76 | 7600 | 0.7826 | 0.4298 | | 0.0508 | 27.12 | 8000 | 0.7805 | 0.4340 | | e0a424666d159098c23901862c703596 |
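The `Wer` column in these training tables is word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal self-contained sketch (the example sentences are invented; libraries such as `jiwer` provide a tested implementation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution in three words
```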
apache-2.0 | ['generated_from_trainer'] | false | sentiment-analysis-twitter This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the new_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.4579 - Accuracy: 0.7965 | 3af69055109c03502ec0091a3b7e8249 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 | 31c8e29e9991596b220ed6ff415073b0 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5315 | 1.0 | 157 | 0.4517 | 0.788 | | 0.388 | 2.0 | 314 | 0.4416 | 0.8 | | 0.3307 | 3.0 | 471 | 0.4579 | 0.7965 | | ff18188da3c7e95b36a13d92252268b0 |
mit | ['generated_from_keras_callback'] | false | nandysoham16/Warsaw_Pact-clustered This model is a fine-tuned version of [nandysoham16/12-clustered_aug](https://huggingface.co/nandysoham16/12-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0828 - Train End Logits Accuracy: 0.9792 - Train Start Logits Accuracy: 0.9826 - Validation Loss: 2.2175 - Validation End Logits Accuracy: 0.0 - Validation Start Logits Accuracy: 0.0 - Epoch: 0 | 435a9155ed12949c16993bdc33bcb5e4 |
mit | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 18, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 | 6bc4a648126f06a8cab733c8235a21d8 |
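The Keras `PolynomialDecay` config above reduces to `lr(step) = end + (initial - end) * (1 - step/decay_steps)^power`; with `power=1.0` and `cycle=False` that is a straight line from 2e-05 down to 0 over 18 steps. A sketch reproducing the schedule (this follows the documented formula, not the Keras implementation itself):

```python
def polynomial_decay(step: int,
                     initial_learning_rate: float = 2e-05,
                     decay_steps: int = 18,
                     end_learning_rate: float = 0.0,
                     power: float = 1.0) -> float:
    """Learning rate at `step` for a non-cycling polynomial decay schedule."""
    step = min(step, decay_steps)  # clamp: lr holds at the end value afterwards
    fraction = 1.0 - step / decay_steps
    return (initial_learning_rate - end_learning_rate) * fraction ** power \
        + end_learning_rate

print(polynomial_decay(0))   # 2e-05
print(polynomial_decay(9))   # halfway through: 1e-05
print(polynomial_decay(18))  # 0.0
```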
mit | ['generated_from_keras_callback'] | false | Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.0828 | 0.9792 | 0.9826 | 2.2175 | 0.0 | 0.0 | 0 | | c06ec7380e689fb418d403c34d9d7a3a |