license (string, 2–30 chars) | tags (string, 2–513 chars) | is_nc (bool) | readme_section (string, 201–597k chars) | hash (string, 32 chars) |
|---|---|---|---|---|
apache-2.0 | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 | 5c1b492272ad51b7ca85db304d814e17 |
apache-2.0 | ['generated_from_trainer'] | false | bart-large-finetuned-large This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6397 - Rouge1: 88.2870 - Rouge2: 26.4705 - Rougel: 88.1924 - Rougelsum: 88.3415 - Gen Len: 6.0323 | 7dc31f59a795bd7ea2aa1debd1c088e2 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP | 0b652df0e446199bcfadeaf4722eaf44 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 121 | 0.8676 | 67.7680 | 19.6386 | 67.5697 | 67.5758 | 6.2774 | | No log | 2.0 | 242 | 0.6661 | 73.6309 | 21.6079 | 73.2496 | 73.5335 | 5.3957 | | No log | 3.0 | 363 | 0.6649 | 82.6362 | 21.4663 | 82.3944 | 82.6107 | 5.6624 | | No log | 4.0 | 484 | 0.6598 | 86.4811 | 25.3580 | 86.1949 | 86.3580 | 5.7914 | | 0.5135 | 5.0 | 605 | 0.8032 | 86.0334 | 25.1510 | 85.8895 | 85.9038 | 6.5634 | | 0.5135 | 6.0 | 726 | 0.6981 | 88.0139 | 25.6152 | 87.9025 | 87.9932 | 6.3591 | | 0.5135 | 7.0 | 847 | 0.6991 | 88.7421 | 25.6469 | 88.5959 | 88.7255 | 6.3376 | | 0.5135 | 8.0 | 968 | 0.5995 | 88.9180 | 26.9917 | 88.6984 | 88.8878 | 5.8538 | | 0.1613 | 9.0 | 1089 | 0.5973 | 88.5923 | 26.7081 | 88.4593 | 88.6287 | 5.8387 | | 0.1613 | 10.0 | 1210 | 0.6397 | 88.2870 | 26.4705 | 88.1924 | 88.3415 | 6.0323 | | ba01c7cfab5018024948b67fe2370ac7 |
apache-2.0 | ['automatic-speech-recognition', 'ar'] | false | exp_w2v2t_ar_r-wav2vec2_s779 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (ar)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 6e63d52097fc59aea6232b2ddc98c738 |
apache-2.0 | ['generated_from_keras_callback'] | false | juancopi81/course-bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0547 - Epoch: 0 | c4c08659b3533a8dcab88f7c70d59657 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5546, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 | 8bb50ee11ee13f734f49396484cdd93b |
cc0-1.0 | ['audio', 'automatic-speech-recognition', 'speech', 'hf-asr-leaderboard', 'sv'] | false | KBLab's wav2vec 2.0 large VoxRex Swedish (C) with 4-gram model Training of the acoustic model is the work of KBLab. See [VoxRex-C](https://huggingface.co/KBLab/wav2vec2-large-voxrex-swedish) for more details. This repo extends the acoustic model with a social media 4-gram language model for boosted performance. | 9443bfdbfd35373a0a26b8f5d7472de1 |
cc0-1.0 | ['audio', 'automatic-speech-recognition', 'speech', 'hf-asr-leaderboard', 'sv'] | false | Model description VoxRex-C is extended with a 4-gram language model estimated from a subset extracted from [The Swedish Culturomics Gigaword Corpus](https://spraakbanken.gu.se/resurser/gigaword) from Språkbanken. The subset contains 40M words from the social media genre between 2010 and 2015. | 1c8680000ea0ea4e335f6baf8860117f |
cc0-1.0 | ['audio', 'automatic-speech-recognition', 'speech', 'hf-asr-leaderboard', 'sv'] | false | Load the model. Using GPU if available model_name = 'viktor-enzell/wav2vec2-large-voxrex-swedish-4gram' device = 0 if torch.cuda.is_available() else -1 pipe = pipeline(model=model_name, device=device) | a5afdb075f7ea0b14d5cd88931d01731 |
cc0-1.0 | ['audio', 'automatic-speech-recognition', 'speech', 'hf-asr-leaderboard', 'sv'] | false | More verbose usage example with audio pre-processing Example of transcribing 1% of the Common Voice test split. The model expects 16kHz audio, so audio with another sampling rate is resampled to 16kHz. ```python from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM from datasets import load_dataset import torch import torchaudio.functional as F | 36f90a097bcc0979dd252f574d948334 |
cc0-1.0 | ['audio', 'automatic-speech-recognition', 'speech', 'hf-asr-leaderboard', 'sv'] | false | Import model and processor. Using GPU if available model_name = 'viktor-enzell/wav2vec2-large-voxrex-swedish-4gram' device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device); processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_name) | 7e13812a424efcd581ff81536321c3cb |
cc0-1.0 | ['audio', 'automatic-speech-recognition', 'speech', 'hf-asr-leaderboard', 'sv'] | false | Convert speech file to array and downsample to 16 kHz def speech_file_to_array(sample): sampling_rate = sample['audio']['sampling_rate'] sample['speech'] = F.resample(torch.tensor(sample['audio']['array']), sampling_rate, 16_000) return sample common_voice = common_voice.map(speech_file_to_array) | cdb0fda7826181a4fc2fd3f647f61b1c |
cc0-1.0 | ['audio', 'automatic-speech-recognition', 'speech', 'hf-asr-leaderboard', 'sv'] | false | Run inference inputs = processor(common_voice['speech'], sampling_rate=16_000, return_tensors='pt', padding=True).to(device) with torch.no_grad(): logits = model(**inputs).logits transcripts = processor.batch_decode(logits.cpu().numpy()).text ``` | 425335f6717e2b315c62d8c19f77dc51 |
cc0-1.0 | ['audio', 'automatic-speech-recognition', 'speech', 'hf-asr-leaderboard', 'sv'] | false | Training procedure Text data for the n-gram model is pre-processed by removing characters not part of the wav2vec 2.0 vocabulary and uppercasing all characters. After pre-processing and storing each text sample on a new line in a text file, a [KenLM](https://github.com/kpu/kenlm) model is estimated. See [this tutorial](https://huggingface.co/blog/wav2vec2-with-ngram) for more details. | f8b7c1d7202f066a00e28d019e4dcc49 |
apache-2.0 | ['image-classification', 'timm'] | false | Model card for maxxvitv2_rmlp_base_rw_224.sw_in12k A timm-specific MaxxViT-V2 (w/ MLP Log-CPB, a continuous log-coordinate relative position bias motivated by Swin-V2) image classification model. Trained in `timm` on ImageNet-12k (an 11821-class subset of the full ImageNet-22k) by Ross Wightman. | f69bbd8d0e833a9c7d6e2d95191da838 |
apache-2.0 | ['image-classification', 'timm'] | false | Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 127.2 - GMACs: 24.2 - Activations (M): 62.8 - Image size: 224 x 224 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883 - **Dataset:** ImageNet-12k | ffbc6b8f542f0d3c87bcbb9059888ab8 |
apache-2.0 | ['image-classification', 'timm'] | false | Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('maxxvitv2_rmlp_base_rw_224.sw_in12k', pretrained=True) model = model.eval() | 1d3c5f975955265f72966a2648ecdcc4 |
apache-2.0 | ['image-classification', 'timm'] | false | Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'maxxvitv2_rmlp_base_rw_224.sw_in12k', pretrained=True, features_only=True, ) model = model.eval() | 79fcd3ddcb008b37b3530dec90baf1cc |
apache-2.0 | ['image-classification', 'timm'] | false | Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'maxxvitv2_rmlp_base_rw_224.sw_in12k', pretrained=True, num_classes=0, | b62f0e1216f766950f2969360be3544d |
other | ['generated_from_trainer'] | false | finetuned-distilbert-news-article-catgorization This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the news_article_categorization dataset. It achieves the following results on the evaluation set: - Loss: 0.1548 - F1_score(weighted): 0.96 | 2485d3c5fe65228dff1b24cfd253f430 |
other | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-5 - train_batch_size: 3 - eval_batch_size: 3 - seed: 17 - optimizer: AdamW(lr=1e-5 and epsilon=1e-08) - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 0 - num_epochs: 2 | 592cf16ed85774f136a91f191519eb6a |
other | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Validation Loss | f1 score | |:-------------:|:-----:|:---------------: |:------:| | 0.6359 | 1.0 | 0.1739 | 0.9619 | | 0.1548 | 2.0 | 0.1898 | 0.9648 | | cd2424f4b38b9b1d8e5570a3b87e21e2 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xls-r-300m-kika5_my-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3860 - Wer: 0.3505 | b6ea989a559804da0d7c91f3d90cea49 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.0007 | 4.82 | 400 | 0.6696 | 0.8283 | | 0.2774 | 9.64 | 800 | 0.4231 | 0.5476 | | 0.1182 | 14.46 | 1200 | 0.4253 | 0.5102 | | 0.0859 | 19.28 | 1600 | 0.4600 | 0.4866 | | 0.0693 | 24.1 | 2000 | 0.4030 | 0.4533 | | 0.0611 | 28.92 | 2400 | 0.4189 | 0.4412 | | 0.0541 | 33.73 | 2800 | 0.4272 | 0.4380 | | 0.0478 | 38.55 | 3200 | 0.4537 | 0.4505 | | 0.0428 | 43.37 | 3600 | 0.4349 | 0.4181 | | 0.038 | 48.19 | 4000 | 0.4562 | 0.4199 | | 0.0345 | 53.01 | 4400 | 0.4209 | 0.4310 | | 0.0316 | 57.83 | 4800 | 0.4336 | 0.4058 | | 0.0288 | 62.65 | 5200 | 0.4004 | 0.3920 | | 0.025 | 67.47 | 5600 | 0.4115 | 0.3857 | | 0.0225 | 72.29 | 6000 | 0.4296 | 0.3948 | | 0.0182 | 77.11 | 6400 | 0.3963 | 0.3772 | | 0.0165 | 81.93 | 6800 | 0.3921 | 0.3687 | | 0.0152 | 86.75 | 7200 | 0.3969 | 0.3592 | | 0.0133 | 91.57 | 7600 | 0.3803 | 0.3527 | | 0.0118 | 96.39 | 8000 | 0.3860 | 0.3505 | | 6fce4dd4ab8964677794af0de403143d |
mit | ['generated_from_keras_callback'] | false | esm2_t12_35M_UR50D-finetuned-secondary-structure-classification This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4076 - Train Masked Accuracy: 0.8342 - Validation Loss: 0.4714 - Validation Masked Accuracy: 0.8060 - Epoch: 2 | 0db80a127c4f48d49c8750794acb8232 |
mit | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.0} - training_precision: float32 | 97c587fd2011caf183bcfdd813537092 |
mit | ['generated_from_keras_callback'] | false | Training results | Train Loss | Train Masked Accuracy | Validation Loss | Validation Masked Accuracy | Epoch | |:----------:|:---------------------:|:---------------:|:--------------------------:|:-----:| | 0.5874 | 0.7454 | 0.4908 | 0.7962 | 0 | | 0.4503 | 0.8156 | 0.4703 | 0.8043 | 1 | | 0.4076 | 0.8342 | 0.4714 | 0.8060 | 2 | | ba5b47c86500850af732af5211ab2e31 |
apache-2.0 | ['webgpt', 'regression', 'reward-model'] | false | Reward Model pretrained on openai/webgpt_comparison Reward model finetuned from an existing pretrained model. Things that align with the original papers * Overfits easily using rank loss * Small learning rate Different from the papers * Small models perform poorly due to lack of world knowledge; the validation accuracy doesn't even reach 60%. The OpenAI RM had 6B parameters. * Trained using an 80-20 train-validation split with torch AMP settings Other models I tried * bloomz-560m: the multilingual embedding isn't worth the training cost, since this dataset only contains English prompts * gpt2-large: not stable * gpt2-base: not stable | d69dc0352c7361dc8c6da8b8632eb402 |
apache-2.0 | ['webgpt', 'regression', 'reward-model'] | false | Performance on validation split | model | val acc | val loss (rank loss) | |---|---|---| | [roberta-base](https://huggingface.co/theblackcat102/roberta-base-webgpt-rm) | 56.21 | 0.71 | | [roberta-large](https://huggingface.co/theblackcat102/roberta-large-webgpt-rm) | 57.89 | 0.67 | | [electra-base](https://huggingface.co/theblackcat102/electra-base-webgpt-rm) | 57.02 | 0.70 | | [electra-large](https://huggingface.co/theblackcat102/electra-large-webgpt-rm) | 58.75 | 0.69 | Tensorboard logs are located under runs/ | 483aeb5f6aa2acc4f43daa571bc0953d |
apache-2.0 | ['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1500k'] | false | MultiBERTs, Intermediate Checkpoint - Seed 1, Step 1500k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model | c5ec01c50b8087878b98cb9c74ae336b |
apache-2.0 | ['multiberts', 'multiberts-seed_1', 'multiberts-seed_1-step_1500k'] | false | How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1500k') model = TFBertModel.from_pretrained("google/multiberts-seed_1-step_1500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_1-step_1500k') model = BertModel.from_pretrained("google/multiberts-seed_1-step_1500k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | 2894c852d28944d8d0bd2a3e463074ed |
apache-2.0 | ['generated_from_trainer'] | false | small-mlm-glue-qnli-target-glue-rte This model is a fine-tuned version of [muhtasham/small-mlm-glue-qnli](https://huggingface.co/muhtasham/small-mlm-glue-qnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3314 - Accuracy: 0.6101 | 4a997b99d90c966398663a1d9c0c966c |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4059 | 6.41 | 500 | 1.5081 | 0.6209 | | 0.0562 | 12.82 | 1000 | 2.5424 | 0.5921 | | 0.0258 | 19.23 | 1500 | 2.7425 | 0.6209 | | 0.0161 | 25.64 | 2000 | 3.3314 | 0.6101 | | d040b4132d94f3c36023ea33da3d39a4 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200 | 3dab31b5dd6608464114838b6b00b041 |
apache-2.0 | ['generated_from_trainer'] | false | bert-base-multilingual-cased-finetuned-multilingual-pos This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1999 - Precision: 0.9438 - Recall: 0.9438 - F1: 0.9438 - Accuracy: 0.9541 | 7ce75877f84721cd1e94feddee2690c4 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 | 0b80e847c5227721ff423fa81b2b3bbf |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.0385 | 0.29 | 100 | 0.4411 | 0.8523 | 0.8473 | 0.8498 | 0.8739 | | 0.3849 | 0.57 | 200 | 0.3275 | 0.8907 | 0.8913 | 0.8910 | 0.9103 | | 0.2976 | 0.86 | 300 | 0.2879 | 0.9034 | 0.9037 | 0.9036 | 0.9203 | | 0.2487 | 1.14 | 400 | 0.2599 | 0.9132 | 0.9115 | 0.9123 | 0.9285 | | 0.2027 | 1.43 | 500 | 0.2444 | 0.9224 | 0.9198 | 0.9211 | 0.9349 | | 0.1899 | 1.71 | 600 | 0.2287 | 0.9239 | 0.9246 | 0.9243 | 0.9378 | | 0.18 | 2.0 | 700 | 0.2184 | 0.9282 | 0.9297 | 0.9289 | 0.9418 | | 0.1351 | 2.29 | 800 | 0.2214 | 0.9297 | 0.9291 | 0.9294 | 0.9424 | | 0.134 | 2.57 | 900 | 0.2123 | 0.9337 | 0.9333 | 0.9335 | 0.9458 | | 0.1294 | 2.86 | 1000 | 0.1993 | 0.9359 | 0.9344 | 0.9352 | 0.9476 | | 0.1156 | 3.14 | 1100 | 0.2018 | 0.9377 | 0.9377 | 0.9377 | 0.9494 | | 0.1007 | 3.43 | 1200 | 0.2027 | 0.9375 | 0.9384 | 0.9380 | 0.9495 | | 0.0959 | 3.71 | 1300 | 0.1971 | 0.9387 | 0.9394 | 0.9390 | 0.9505 | | 0.0982 | 4.0 | 1400 | 0.1953 | 0.9408 | 0.9414 | 0.9411 | 0.9522 | | 0.0761 | 4.29 | 1500 | 0.1987 | 0.9404 | 0.9412 | 0.9408 | 0.9517 | | 0.0788 | 4.57 | 1600 | 0.1994 | 0.9405 | 0.9411 | 0.9408 | 0.9518 | | 0.0755 | 4.86 | 1700 | 0.2009 | 0.9413 | 0.9420 | 0.9417 | 0.9525 | | 0.0671 | 5.14 | 1800 | 0.2011 | 0.9421 | 0.9423 | 0.9422 | 0.9527 | | 0.0636 | 5.43 | 1900 | 0.2002 | 0.9428 | 0.9431 | 0.9430 | 0.9532 | | 0.0628 | 5.71 | 2000 | 0.1993 | 0.9422 | 0.9433 | 0.9428 | 0.9532 | | 0.0645 | 6.0 | 2100 | 0.1979 | 0.9434 | 0.9430 | 0.9432 | 0.9536 | | 0.0543 | 6.29 | 2200 | 0.2017 | 0.9427 | 0.9434 | 0.9430 | 0.9532 | | 0.0558 | 6.57 | 2300 | 0.1992 | 0.9427 | 0.9432 | 0.9430 | 0.9534 | | 0.0529 | 6.86 | 2400 | 0.1999 | 0.9438 | 0.9438 | 0.9438 | 0.9541 | | 585a45a733525bec3163e62f0cfb17e0 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-tagesschau-subcategories This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7723 - Accuracy: 0.7267 | 04c5e659544a6da865b94e5832275276 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.4 | 30 | 1.3433 | 0.5667 | | No log | 0.8 | 60 | 1.0861 | 0.6933 | | No log | 1.2 | 90 | 0.9395 | 0.7067 | | No log | 1.6 | 120 | 0.8647 | 0.68 | | No log | 2.0 | 150 | 0.8018 | 0.72 | | No log | 2.4 | 180 | 0.7723 | 0.7267 | | No log | 2.8 | 210 | 0.7616 | 0.72 | | No log | 3.2 | 240 | 0.7348 | 0.7067 | | No log | 3.6 | 270 | 0.7747 | 0.72 | | 64f455a18426f72f6cedf7bf3afc1a43 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased_swag_mqa This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 0.8556 - Accuracy: 0.6494 | 6a1d4c020adbd29808853fe4dc3c4eab |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 1 - mixed_precision_training: Native AMP | a55246ed44298ace7f565ab5ec61c92b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.9234 | 1.0 | 2000 | 0.8556 | 0.6494 | | fd2e7c8948349d9f6736e7ee1807241f |
apache-2.0 | [] | false | A 125M-parameter model, trained and fine-tuned from a pretrained model (GPT2-Spanish), using a 387 MB Galician dataset obtained from the Galician Wikipedia. Developed in the context of the **[Resolución do 22 de decembro de 2021 da Secretaría Xeral de Educación e Formación Profesional pola que se convocan premios para o desenvolvemento de proxectos de innovación tecnolóxica ou científica e proxectos de innovación didáctica no ámbito da formación profesional en centros públicos dependentes da Consellería de Cultura, Educación e Universidade](http://www.edu.xunta.gal/fp/sites/fp/files/pi2022__resolucion_de_convocatoria.pdf)**, under the name "*Creación dun modelo de linguaxe adestrado previamente mediante técnicas de autoatención para explorar arquitecturas que permitan o seu uso en solucións de procesamento da linguaxe natural en galego tanto na docencia como na contorna empresarial*" | 1ec4b8cd56906c53a5a612822744fd96 |
apache-2.0 | ['generated_from_trainer'] | false | sentiment-model-on-imdb-dataset This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3694 - Accuracy: 0.85 - F1: 0.8544 | e149f86d2beabee935f6af29d4c08073 |
apache-2.0 | ['generated_from_keras_callback'] | false | Haakf/allsides_right_text_headline_padded_overfit This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.8995 - Validation Loss: 1.7970 - Epoch: 19 | d63692ebf715611846dde5f74c4efb66 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -797, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 | 8575316d80c8ccc042e2298e1943fedd |
apache-2.0 | ['generated_from_keras_callback'] | false | Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.9722 | 1.8914 | 0 | | 1.9552 | 1.8628 | 1 | | 1.9303 | 1.8589 | 2 | | 1.9311 | 1.8490 | 3 | | 1.9168 | 1.8710 | 4 | | 1.8825 | 1.8630 | 5 | | 1.8841 | 1.8935 | 6 | | 1.8924 | 1.8301 | 7 | | 1.8940 | 1.8391 | 8 | | 1.9021 | 1.8450 | 9 | | 1.8821 | 1.8698 | 10 | | 1.8958 | 1.8886 | 11 | | 1.8891 | 1.8550 | 12 | | 1.8849 | 1.8777 | 13 | | 1.8809 | 1.8690 | 14 | | 1.8859 | 1.8723 | 15 | | 1.8932 | 1.8602 | 16 | | 1.9025 | 1.8583 | 17 | | 1.8853 | 1.7923 | 18 | | 1.8995 | 1.7970 | 19 | | 8018cf65a8575151aebed6e848a03f24 |
apache-2.0 | ['generated_from_trainer'] | false | paraphraser-german-mt5-small This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the paws-x (de) and tapaco (de) dataset. It achieves the following results on the evaluation set: - Loss: 1.7678 - Perplexity: 5.86 | 285fe91cd091458846a5921dd2446035 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 | bb3987ef9f9db4cdd6d02352865d8902 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.7064 | 0.05 | 2000 | 2.0731 | | 2.8673 | 0.11 | 4000 | 2.0420 | | 2.6133 | 0.16 | 6000 | 2.0080 | | 2.4563 | 0.21 | 8000 | 1.9556 | | 2.385 | 0.27 | 10000 | 1.9090 | | 2.3122 | 0.32 | 12000 | 1.9127 | | 2.2775 | 0.38 | 14000 | 1.8658 | | 2.2323 | 0.43 | 16000 | 1.8407 | | 2.17 | 0.48 | 18000 | 1.8342 | | 2.1672 | 0.54 | 20000 | 1.8328 | | 2.1488 | 0.59 | 22000 | 1.8071 | | 2.1026 | 0.64 | 24000 | 1.8328 | | 2.1036 | 0.7 | 26000 | 1.7979 | | 2.0854 | 0.75 | 28000 | 1.7895 | | 2.0594 | 0.81 | 30000 | 1.7944 | | 2.0793 | 0.86 | 32000 | 1.7726 | | 2.0661 | 0.91 | 34000 | 1.7762 | | 2.0722 | 0.97 | 36000 | 1.7714 | | 0224cdb3afcc7be05829e4049a407544 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7904 - Matthews Correlation: 0.5227 | 3627408c8c36c58729646e667ff0857d |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.528 | 1.0 | 535 | 0.5180 | 0.4003 | | 0.3508 | 2.0 | 1070 | 0.5120 | 0.5019 | | 0.2409 | 3.0 | 1605 | 0.6374 | 0.5128 | | 0.1806 | 4.0 | 2140 | 0.7904 | 0.5227 | | 0.1311 | 5.0 | 2675 | 0.8824 | 0.5227 | | c6c6a03a7c2af9b236a7263d2804e019 |
apache-2.0 | ['generated_from_trainer'] | false | finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4095 - Accuracy: 0.8263 - F1: 0.8865 | b7d0b8282594377059f538b9f11937df |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 | | No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 | | 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 | | 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 | | 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 | | 000ce0b69cedc125a45c956e497b05ab |
cc-by-4.0 | [] | false | Small-E-Czech Small-E-Czech is an [Electra](https://arxiv.org/abs/2003.10555)-small model pretrained on a Czech web corpus created at [Seznam.cz](https://www.seznam.cz/) and introduced in an [IAAI 2022 paper](https://arxiv.org/abs/2112.01810). Like other pretrained models, it should be finetuned on a downstream task of interest before use. At Seznam.cz, it has helped improve [web search ranking](https://blog.seznam.cz/2021/02/vyhledavani-pomoci-vyznamovych-vektoru/), query typo correction or clickbait titles detection. We release it under [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/) (i.e. allowing commercial use). To raise an issue, please visit our [github](https://github.com/seznam/small-e-czech). | 70167e47fda11706c2a4a822d70e9479 |
cc-by-4.0 | [] | false | How to use the discriminator in transformers ```python from transformers import ElectraForPreTraining, ElectraTokenizerFast import torch discriminator = ElectraForPreTraining.from_pretrained("Seznam/small-e-czech") tokenizer = ElectraTokenizerFast.from_pretrained("Seznam/small-e-czech") sentence = "Za hory, za doly, mé zlaté parohy" fake_sentence = "Za hory, za doly, kočka zlaté parohy" fake_sentence_tokens = ["[CLS]"] + tokenizer.tokenize(fake_sentence) + ["[SEP]"] fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") outputs = discriminator(fake_inputs) predictions = torch.nn.Sigmoid()(outputs[0]).cpu().detach().numpy() for token in fake_sentence_tokens: print("{:>7s}".format(token), end="") print() for prediction in predictions.squeeze(): print("{:7.1f}".format(prediction), end="") print() ``` In the output we can see the probabilities of particular tokens not belonging in the sentence (i.e. having been faked by the generator) according to the discriminator: ``` [CLS] za hory , za dol | 4cc96081fdfe418113f7fe2ad6594923 |
apache-2.0 | ['generated_from_trainer'] | false | all-roberta-large-v1-banking-5-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2920 - Accuracy: 0.3982 | d13ba9ce99a59268752f6374abb5793c |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.7211 | 1.0 | 1 | 2.5748 | 0.2301 | | 2.2722 | 2.0 | 2 | 2.4566 | 0.3009 | | 1.9185 | 3.0 | 3 | 2.3596 | 0.3805 | | 1.667 | 4.0 | 4 | 2.2920 | 0.3982 | | 1.4704 | 5.0 | 5 | 2.2565 | 0.3982 | | 3b5d814294f38f95d11c02b930ca0b05 |
apache-2.0 | ['CTC', 'pytorch', 'speechbrain', 'Transformer'] | false | wav2vec 2.0 with CTC/Attention trained on DVoice Swahili (No LM) This repository provides all the necessary tools to perform automatic speech recognition from an end-to-end system pretrained on a [DVoice-VoxLingua107](https://zenodo.org/record/6342622) Swahili dataset within SpeechBrain. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). | DVoice Release | Val. CER | Val. WER | Test CER | Test WER | |:-------------:|:---------------------------:| -----:| -----:| -----:| | v2.0 | 8.83 | 22.78 | 9.46 | 23.16 | | cdee6c07cf87288db5914e38988c660e |
apache-2.0 | ['CTC', 'pytorch', 'speechbrain', 'Transformer'] | false | Transcribing your own audio files (in Swahili) ```python from speechbrain.pretrained import EncoderASR asr_model = EncoderASR.from_hparams(source="aioxlabs/dvoice-swahili", savedir="pretrained_models/asr-wav2vec2-dvoice-sw") asr_model.transcribe_file('./the_path_to_your_audio_file') ``` | daf744d0c06764bf79c35877ed1bad85 |
apache-2.0 | ['generated_from_trainer'] | false | openai/whisper-small This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4429 - Wer: 52.7568 | e5a3e551aa6487f8f49cf895fd4f5ed8 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP | 458bedb387a18afcf0cc3fa59b0d0625 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.3629 | 1.03 | 1000 | 0.4917 | 53.1291 | | 0.289 | 2.06 | 2000 | 0.4747 | 61.3855 | | 0.2996 | 3.08 | 3000 | 0.4542 | 55.4692 | | 0.2331 | 4.11 | 4000 | 0.4353 | 51.4917 | | 0.1566 | 5.14 | 5000 | 0.4429 | 52.7568 | | d9408e51d9b6214aa95ec799c0631af6 |
apache-2.0 | ['generated_from_trainer', 'whisper-event'] | false | whisper-small-it This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1919 - Wer: 11.72 | 74d49d5083ac7cf2995a6aec9e7a4b86 |
apache-2.0 | ['generated_from_trainer', 'whisper-event'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP | db97639060338043653a6bc628b5ed6a |
apache-2.0 | ['generated_from_trainer', 'whisper-event'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1441 | 1.68 | 1000 | 0.1912 | 0.1256 | | 0.0653 | 3.36 | 2000 | 0.1845 | 0.1182 | | 0.0374 | 5.03 | 3000 | 0.1919 | 0.1172 | | 0.0238 | 6.71 | 4000 | 0.2069 | 0.1202 | | 0.0162 | 8.39 | 5000 | 0.2184 | 0.1223 | | 70f194b39cabf44b840634244737e037 |
apache-2.0 | ['deep-narrow'] | false | T5-Efficient-SMALL-DL12 (Deep-Narrow version) T5-Efficient-SMALL-DL12 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. | 22e38b246c0d50642fa123335e327cd1 |
apache-2.0 | ['deep-narrow'] | false | Details model architecture This model checkpoint - **t5-efficient-small-dl12** - is of model type **Small** with the following variations: - **dl** is **12** It has **85.7** million parameters and thus requires *ca.* **342.82 MB** of memory in full precision (*fp32*) or **171.41 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | | 85bc96163f4dfc2f43d635a0dd616103 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | noggles_v21_5900 Dreambooth model trained by alxdfy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept:  | 7c8aa3fe68fd0f16f4bc49155563bf80 |
mit | ['generated_from_trainer'] | false | finetune_deberta_small_model This model is a fine-tuned version of [nc33/finetune_deberta_small_model](https://huggingface.co/nc33/finetune_deberta_small_model) on the super_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.6788 - Accuracy: 0.8021 | 3b39e1ce59d003fc46d5f1975b42c59d |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3666 | 1.0 | 590 | 0.5625 | 0.8003 | | 0.2501 | 2.0 | 1180 | 0.6762 | 0.7976 | | 0.2343 | 3.0 | 1770 | 0.6788 | 0.8021 | | 2d13fdd3370270e93692181eae7cd0e1 |
mit | ['audio', 'music', 'generation', 'tensorflow'] | false | Musika Techno Model Pretrained Techno GAN model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation. Introduced in [this paper](https://arxiv.org/abs/2208.08706). | 50d1a780c4622e85c185a274175bfd38 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xls-r-300m-ja-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set: - Loss: 1.1407 - Wer: 0.2456 | c5a56da95ddfaffa6a5c102785b1b03a |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 637 | 5.3238 | 0.9663 | | No log | 2.0 | 1274 | 4.1785 | 0.7662 | | No log | 3.0 | 1911 | 2.3701 | 0.4983 | | No log | 4.0 | 2548 | 1.8443 | 0.4090 | | 6.5781 | 5.0 | 3185 | 1.4892 | 0.3363 | | 6.5781 | 6.0 | 3822 | 1.3229 | 0.2995 | | 6.5781 | 7.0 | 4459 | 1.2418 | 0.2814 | | 6.5781 | 8.0 | 5096 | 1.1928 | 0.2647 | | 1.0184 | 9.0 | 5733 | 1.1584 | 0.2520 | | 1.0184 | 10.0 | 6370 | 1.1407 | 0.2456 | | 8c06ada4447c876e756cb7935e511a4f |
mit | ['generated_from_trainer'] | false | gpt2_prefinetune_SARC_1epoch_withcontext This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7899 | 9178bf8bad48a81b923435b660c921f1 |
mit | [] | false | model by Unev3n This is the Stable Diffusion model fine-tuned on the spacecat concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks spacecat** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: | 3603d9a9bb7100853f7ff9062cd5180e |
apache-2.0 | ['translation'] | false | BART Translation model For further models, scripts and details, see [our repository](https://github.com/nytud/machine-translation) or [our demo site](https://juniper.nytud.hu/demo/nlp). - Source language: English - Target language: Hungarian - Pretrained on English WikiText-103 and Hungarian Wikipedia - Finetuned on subcorpora from OPUS - Segments: 56.837.602 | 9d305fcda561df952ea00fc8bc00ee29 |
apache-2.0 | ['translation'] | false | Results | Model | BLEU | chrF-3 | | ------------- | ------------- | ------------- | | Google en-hu | 25.30 | 54.08 | | **BART-base-enhu** | **34.38** | **58.88** | | Google hu-en | 34.48 | 59.59 | | **BART-base-huen** | **38.03** | **61.37** | | 319563fa8cc940978eb8b4f091d3a93f |
apache-2.0 | ['translation'] | false | Citation If you use this model, please cite the following paper: ``` @inproceedings {yang-bart, title = {{BARTerezzünk! Messze, messze, messze a világtól, - BART kísérleti modellek magyar nyelvre}}, booktitle = {XVIII. Magyar Számítógépes Nyelvészeti Konferencia}, year = {2022}, publisher = {Szegedi Tudományegyetem, Informatikai Intézet}, address = {Szeged, Magyarország}, author = {Yang, Zijian Győző}, pages = {15--29} } ``` | 54e770e6506ecc2e1c987db934ea86ae |
apache-2.0 | ['translation'] | false | opus-mt-tr-es * source languages: tr * target languages: es * OPUS readme: [tr-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tr-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.zip) * test set translations: [opus-2020-01-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.test.txt) * test set scores: [opus-2020-01-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tr-es/opus-2020-01-26.eval.txt) | 79e71001c0fd4d8cdbe8cd94863f4baf |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xls-r-300m-turkish-colab-2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4152 - Wer: 0.3686 | 5252b1741cae1c19de3ac485f072d27d |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP | b09c54b147110aa0c567b71efec4ff8a |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.9382 | 7.4 | 400 | 0.6296 | 0.7016 | | 0.2837 | 14.81 | 800 | 0.4440 | 0.5161 | | 0.1185 | 22.22 | 1200 | 0.4217 | 0.4007 | | 0.0701 | 29.62 | 1600 | 0.4152 | 0.3686 | | f4087ce1ff9e80ad3f70e8648cf4abc1 |
apache-2.0 | ['image-classification', 'huggingpics', 'generated_from_trainer'] | false | huggingpics-package-demo-2 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3761 - Acc: 0.9403 | 4baa72958d55eb6d4a84593fa02c014c |
apache-2.0 | ['image-classification', 'huggingpics', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 - mixed_precision_training: Native AMP | 42da376bb531317e972501043eaf6907 |
apache-2.0 | ['image-classification', 'huggingpics', 'generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Acc | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.0328 | 1.0 | 24 | 0.9442 | 0.7463 | | 0.8742 | 2.0 | 48 | 0.7099 | 0.9403 | | 0.6451 | 3.0 | 72 | 0.5050 | 0.9403 | | 0.508 | 4.0 | 96 | 0.3761 | 0.9403 | | 8599011afb9f4e5a5f6b927a2ae25ee1 |
mit | ['generated_from_trainer'] | false | roberta-large-unlabeled-labeled-gab-reddit-task-semeval2023-t10-210000sample This model is a fine-tuned version of [HPL/roberta-large-unlabeled-labeled-gab-reddit-task-semeval2023-t10-150000sample](https://huggingface.co/HPL/roberta-large-unlabeled-labeled-gab-reddit-task-semeval2023-t10-150000sample) on the None dataset. | f3277806017e8a9d5275a920871bd848 |
creativeml-openrail-m | [] | false | Stable Diffusion models (EMA-removed versions) from the AI image channel and elsewhere. Feel free to download. <font size="6">**Treebark**</font> This model was made by 나무껍질맛 in the arcalive AI channel. 1. Anything V.3, add-difference 0.2 (animefull prevgood - animesfw prevgood) 2. Add-difference 1 (Gape60 - Animefull) 3. U-net merge with BasilMix (1,0.9,0.7,0.5,0.3,0.1,1,1,1,1,1,1,0,0,0,0,0,0,0,0.1,0.3,0.5,0.7,0.9,1_base alpha 0) 4. Add-difference 0.2 (SXD v1.0 - SD1.5) Treebark leans more toward the **gape** style than other 2.5D-style models. <font size="6">**AniDosmix**</font> This model was made by DiaryOfSta in the arcalive AI channel. The original model is uploaded at https://civitai.com/models/6437/anidosmix and uploading this pruned fp16 version was permitted. AniDosmix is a 2.5D-style model, balanced for making people and **backgrounds**. | 51288ea88c9d1553fc6cf2f4441b215c |
apache-2.0 | ['generated_from_trainer'] | false | Article_500v1_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article500v1_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2058 - Precision: 0.6615 - Recall: 0.6746 - F1: 0.6680 - Accuracy: 0.9326 | 71ce34e6a94613bdcfb39a827af38aca |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 58 | 0.3029 | 0.3539 | 0.3790 | 0.3660 | 0.8967 | | No log | 2.0 | 116 | 0.2191 | 0.6223 | 0.6488 | 0.6353 | 0.9262 | | No log | 3.0 | 174 | 0.2058 | 0.6615 | 0.6746 | 0.6680 | 0.9326 | | 128d4b83ea4a0d351ed3521f974f2aef |
apache-2.0 | ['generated_from_trainer'] | false | finetuned_token_2e-05_all_16_02_2022-15_50_54 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1750 - Precision: 0.3286 - Recall: 0.3334 - F1: 0.3310 - Accuracy: 0.9447 | f9f61d17ff427d2d6e03bda35ddc1339 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 38 | 0.3355 | 0.0975 | 0.2358 | 0.1380 | 0.8361 | | No log | 2.0 | 76 | 0.3177 | 0.1359 | 0.2709 | 0.1810 | 0.8398 | | No log | 3.0 | 114 | 0.3000 | 0.1542 | 0.3043 | 0.2047 | 0.8471 | | No log | 4.0 | 152 | 0.3033 | 0.1589 | 0.3060 | 0.2091 | 0.8434 | | No log | 5.0 | 190 | 0.3029 | 0.1629 | 0.3110 | 0.2138 | 0.8447 | | ca30d1596a6a659a227b29e2bd1d8165 |
mit | [] | false | carlitos el mago on Stable Diffusion This is the `<carloscarbonell>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`:     | fa5c99a3e5de7cb7846bc1f45b8d3cdd |
mit | [] | false | ingmar-bergman on Stable Diffusion This is the `<ingmar-bergman>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:      | 3b6a0c72bf37de28207bf19ee8d8c2ac |
apache-2.0 | ['automatic-speech-recognition', 'uk'] | false | exp_w2v2t_uk_hubert_s878 Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | d13c920bd651542a05418658a1b41e15 |
mit | ['generated_from_trainer'] | false | roberta-base-finetuned-swag This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 0.5161 - Accuracy: 0.8266 | eaedb7d87c34ecaaf1cf1cc2eb9f2cde |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1273 | 1.0 | 2298 | 0.5415 | 0.7898 | | 0.2373 | 2.0 | 4596 | 0.4756 | 0.8175 | | 0.1788 | 3.0 | 6894 | 0.5161 | 0.8266 | | 085c78145775e3f497c61742742f5961 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-preprint_full This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3258 | 829720c589d05314c88827cd28785c2f |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7315 | 1.0 | 47 | 2.4462 | | 2.577 | 2.0 | 94 | 2.3715 | | 2.5386 | 3.0 | 141 | 2.3692 | | 15dc770767cd1410cf9c51b8e8610bc5 |
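The table above previews rows of a model-card dataset with the columns `license`, `tags`, `is_nc`, `readme_section`, and `hash`. As a minimal sketch of how such a dataset might be consumed — assuming it is published on the Hugging Face Hub under a hypothetical repo id `your-org/model-card-readmes` — the following loads it with the `datasets` library and filters to permissively licensed, non-NC rows:

```python
from datasets import load_dataset

# Hypothetical repo id; replace with the actual dataset location.
ds = load_dataset("your-org/model-card-readmes", split="train")

# Keep permissively licensed, non-NC rows with reasonably long README sections.
permissive = {"apache-2.0", "mit", "cc0-1.0", "cc-by-4.0"}
filtered = ds.filter(
    lambda row: row["license"] in permissive
    and not row["is_nc"]
    and len(row["readme_section"]) > 200
)

print(filtered)                              # dataset summary after filtering
print(filtered[0]["readme_section"][:200])   # peek at the first kept README section
```

The `filter` predicate simply mirrors the columns shown above; adjust the license allow-list and length threshold to whatever policy the downstream use case requires.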