license
stringlengths
2
30
tags
stringlengths
2
513
is_nc
bool
1 class
readme_section
stringlengths
201
597k
hash
stringlengths
32
32
apache-2.0
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
6b8ae12de7ccbf90581260cd4f0dc1ad
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 1.0265 | 1.0 | 122 | 1.0110 | 98.4608 |
| 0.9208 | 2.0 | 244 | 0.9148 | 88.3812 |
| 0.8169 | 3.0 | 366 | 0.8394 | 86.0500 |
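The Wer column above is the word error rate: the word-level edit distance between hypothesis and reference, divided by the reference length. A minimal pure-Python sketch of the metric (the training run itself used a library implementation, not this code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

A WER above 100% is possible when the hypothesis contains many insertions, which is why early epochs here sit near 98.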
8e36125cc27b08181552499f34e85ffe
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Russian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Russian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
79204cd741abb5eebe186b6639b28a98
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ru", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-russian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-russian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
e4bf7ed5892c3ca7f75bd32fbbc318dc
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on the Russian test data of Common Voice.

```python
import torch
import torchaudio
import urllib.request
import tarfile
import pandas as pd
from tqdm.auto import tqdm
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
```
d7b866ae163c472e0ed18abdc3eb6738
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
# Download the raw data instead of using HF datasets to save disk space
data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ru.tar.gz"
filestream = urllib.request.urlopen(data_url)
data_file = tarfile.open(fileobj=filestream, mode="r|gz")
data_file.extractall()

wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-russian")
model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-russian")
model.to("cuda")

cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/ru/test.tsv", sep='\t')
clips_path = "cv-corpus-6.1-2020-12-11/ru/clips/"

def clean_sentence(sent):
    sent = sent.lower()
```
af5f07d2f78c9d29bca1a11dcc80bfd2
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
```python
    # remove repeated spaces
    sent = " ".join(sent.split())
    return sent

targets = []
preds = []

for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]):
    row["sentence"] = clean_sentence(row["sentence"])
    speech_array, sampling_rate = torchaudio.load(clips_path + row["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
    row["speech"] = resampler(speech_array).squeeze().numpy()

    inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)

    targets.append(row["sentence"])
    preds.append(processor.batch_decode(pred_ids)[0])
```
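The `argmax` followed by `batch_decode` in the loop above performs greedy CTC decoding: consecutive repeated ids are collapsed and the CTC blank token is dropped before mapping ids back to characters. A minimal sketch of that collapse step (the blank id of 0 is an assumption for illustration; the real mapping comes from the processor's vocabulary):

```python
def ctc_greedy_collapse(ids, blank_id=0):
    """Greedy CTC decoding: collapse consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for i in ids:
        if i != prev:          # collapse consecutive repeats
            if i != blank_id:  # drop the CTC blank token
                out.append(i)
        prev = i
    return out
```

Note that a blank between two identical ids keeps them distinct, which is how CTC represents genuinely doubled characters.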
4e5e43d5a40a08c3ae20ef88f6da7b42
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-wikitext2

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.8268
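Since the reported loss is per-token cross-entropy, the corresponding perplexity can be recovered as `exp(loss)` (a derived figure, not one the card reports):

```python
import math

eval_loss = 1.8268          # evaluation loss reported above
perplexity = math.exp(eval_loss)  # ~6.21
```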
8cca638f1c3f92fa535d9a0a3fa065da
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 1.9963 |
| 2.0972 | 2.0 | 500 | 1.8649 |
| 2.0972 | 3.0 | 750 | 1.8268 |
18e147e0e5b6a583d2516f5bd111e318
cc-by-4.0
['roberta', 'roberta-base', 'token-classification', 'NER', 'named-entities', 'BIO', 'movies', 'DAPT']
false
Movie Roberta + Movies NER Task

Objective: This is Roberta Base + Movie DAPT --> trained for the NER task using the MIT Movie Dataset. https://huggingface.co/thatdramebaazguy/movie-roberta-base was used as the MovieRoberta.

```
model_name = "thatdramebaazguy/movie-roberta-MITmovieroberta-base-MITmovie"
pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="ner")
```
a7458fe14ed7e00d703e95b8498dc25a
cc-by-4.0
['roberta', 'roberta-base', 'token-classification', 'NER', 'named-entities', 'BIO', 'movies', 'DAPT']
false
Overview

**Language model:** roberta-base
**Language:** English
**Downstream-task:** NER
**Training data:** MIT Movie
**Eval data:** MIT Movie
**Infrastructure**: 2x Tesla v100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/movieR_NER_squad.sh)
cde1b927abdc1d5c5356751449724be2
cc-by-4.0
['roberta', 'roberta-base', 'token-classification', 'NER', 'named-entities', 'BIO', 'movies', 'DAPT']
false
Eval on MIT Movie
- epoch = 5.0
- eval_accuracy = 0.9472
- eval_f1 = 0.8876
- eval_loss = 0.2211
- eval_mem_cpu_alloc_delta = 3MB
- eval_mem_cpu_peaked_delta = 2MB
- eval_mem_gpu_alloc_delta = 0MB
- eval_mem_gpu_peaked_delta = 38MB
- eval_precision = 0.887
- eval_recall = 0.8881
- eval_runtime = 0:00:03.73
- eval_samples = 1955
- eval_samples_per_second = 523.095

Github Repo:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)

---
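The reported F1 is the harmonic mean of the precision and recall above; a quick consistency check:

```python
precision, recall = 0.887, 0.8881

# F1 = harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)  # ~0.8876, matching eval_f1 above
```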
401de2af61b1c31ad47b343478750aa6
mit
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
efca5cce0f9232c28eaf92ccbde6e9a5
mit
['generated_from_trainer']
false
wmt-mbart50-large-finetuned-en-to-pt

This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2510
- Bleu: 62.7011
- Gen Len: 19.224
2ffde8e961c8b2c11e72a595a293c17a
mit
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
8fdb5b284bafd36ab88e0176548fd7f4
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 1.6426 | 1.0 | 433 | 0.5323 | 4.484 | 10.5635 |
| 0.2571 | 2.0 | 866 | 0.1965 | 47.6449 | 19.164 |
| 0.1043 | 3.0 | 1299 | 0.1723 | 53.6231 | 19.1455 |
| 0.058 | 4.0 | 1732 | 0.1908 | 52.9831 | 18.5543 |
| 0.0382 | 5.0 | 2165 | 0.1801 | 58.4418 | 19.0808 |
| 0.0244 | 6.0 | 2598 | 0.2014 | 56.0197 | 20.0485 |
| 0.0195 | 7.0 | 3031 | 0.2029 | 56.7903 | 18.642 |
| 0.0138 | 8.0 | 3464 | 0.2015 | 57.6855 | 19.0 |
| 0.0126 | 9.0 | 3897 | 0.2095 | 58.5733 | 18.7644 |
| 0.0095 | 10.0 | 4330 | 0.1946 | 60.3165 | 19.6097 |
| 0.0067 | 11.0 | 4763 | 0.2094 | 60.2691 | 18.9561 |
| 0.0055 | 12.0 | 5196 | 0.2202 | 60.375 | 19.3025 |
| 0.0046 | 13.0 | 5629 | 0.2153 | 60.7254 | 19.0855 |
| 0.0035 | 14.0 | 6062 | 0.2239 | 61.458 | 19.0647 |
| 0.0054 | 15.0 | 6495 | 0.2250 | 61.5297 | 19.164 |
| 0.0025 | 16.0 | 6928 | 0.2458 | 61.263 | 19.0531 |
| 0.002 | 17.0 | 7361 | 0.2354 | 62.4404 | 19.2102 |
| 0.0015 | 18.0 | 7794 | 0.2403 | 62.0235 | 19.1293 |
| 0.0011 | 19.0 | 8227 | 0.2477 | 62.6301 | 19.2494 |
| 0.0009 | 20.0 | 8660 | 0.2510 | 62.7011 | 19.224 |
5f9f2b1b1c2430dae36263697a9b3869
mit
['generated_from_trainer']
false
predict-perception-xlmr-focus-object

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1927
- Rmse: 0.5495
- Rmse Focus::a Su un oggetto: 0.5495
- Mae: 0.4174
- Mae Focus::a Su un oggetto: 0.4174
- R2: 0.5721
- R2 Focus::a Su un oggetto: 0.5721
- Cos: 0.5652
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5518
- Rsa: nan
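The Rmse, Mae, and R2 figures are the standard regression metrics; minimal reference implementations (for illustration, not the card's actual evaluation code):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```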
0e6cb714b3e656f6897f3f9085cbf894
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un oggetto | Mae | Mae Focus::a Su un oggetto | R2 | R2 Focus::a Su un oggetto | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:-------:|:-------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0316 | 1.0 | 15 | 0.6428 | 1.0035 | 1.0035 | 0.8806 | 0.8806 | -0.4272 | -0.4272 | -0.4783 | 0.0 | 0.5 | 0.5302 | nan |
| 1.0005 | 2.0 | 30 | 0.4564 | 0.8456 | 0.8456 | 0.7078 | 0.7078 | -0.0134 | -0.0134 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.9519 | 3.0 | 45 | 0.4151 | 0.8063 | 0.8063 | 0.6797 | 0.6797 | 0.0784 | 0.0784 | 0.1304 | 0.0 | 0.5 | 0.4888 | nan |
| 0.92 | 4.0 | 60 | 0.3982 | 0.7898 | 0.7898 | 0.6516 | 0.6516 | 0.1159 | 0.1159 | 0.2174 | 0.0 | 0.5 | 0.5036 | nan |
| 0.8454 | 5.0 | 75 | 0.2739 | 0.6550 | 0.6550 | 0.5292 | 0.5292 | 0.3919 | 0.3919 | 0.6522 | 0.0 | 0.5 | 0.4160 | nan |
| 0.7247 | 6.0 | 90 | 0.2413 | 0.6148 | 0.6148 | 0.5347 | 0.5347 | 0.4642 | 0.4642 | 0.4783 | 0.0 | 0.5 | 0.3453 | nan |
| 0.6055 | 7.0 | 105 | 0.3109 | 0.6978 | 0.6978 | 0.6115 | 0.6115 | 0.3098 | 0.3098 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan |
| 0.5411 | 8.0 | 120 | 0.3932 | 0.7848 | 0.7848 | 0.6712 | 0.6712 | 0.1271 | 0.1271 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan |
| 0.4784 | 9.0 | 135 | 0.1316 | 0.4540 | 0.4540 | 0.3750 | 0.3750 | 0.7079 | 0.7079 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.4039 | 10.0 | 150 | 0.2219 | 0.5896 | 0.5896 | 0.4954 | 0.4954 | 0.5074 | 0.5074 | 0.5652 | 0.0 | 0.5 | 0.4838 | nan |
| 0.3415 | 11.0 | 165 | 0.1935 | 0.5505 | 0.5505 | 0.4443 | 0.4443 | 0.5704 | 0.5704 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.3369 | 12.0 | 180 | 0.2118 | 0.5761 | 0.5761 | 0.4554 | 0.4554 | 0.5296 | 0.5296 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.3083 | 13.0 | 195 | 0.1928 | 0.5496 | 0.5496 | 0.4368 | 0.4368 | 0.5718 | 0.5718 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2678 | 14.0 | 210 | 0.2205 | 0.5877 | 0.5877 | 0.4472 | 0.4472 | 0.5105 | 0.5105 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2199 | 15.0 | 225 | 0.2118 | 0.5760 | 0.5760 | 0.4689 | 0.4689 | 0.5297 | 0.5297 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2238 | 16.0 | 240 | 0.2461 | 0.6209 | 0.6209 | 0.5047 | 0.5047 | 0.4537 | 0.4537 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2233 | 17.0 | 255 | 0.2307 | 0.6011 | 0.6011 | 0.4618 | 0.4618 | 0.4879 | 0.4879 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1903 | 18.0 | 270 | 0.2207 | 0.5880 | 0.5880 | 0.4432 | 0.4432 | 0.5100 | 0.5100 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1714 | 19.0 | 285 | 0.2146 | 0.5798 | 0.5798 | 0.4368 | 0.4368 | 0.5236 | 0.5236 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1759 | 20.0 | 300 | 0.1745 | 0.5228 | 0.5228 | 0.4152 | 0.4152 | 0.6126 | 0.6126 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1505 | 21.0 | 315 | 0.1944 | 0.5519 | 0.5519 | 0.4170 | 0.4170 | 0.5684 | 0.5684 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1467 | 22.0 | 330 | 0.1802 | 0.5313 | 0.5313 | 0.3910 | 0.3910 | 0.5999 | 0.5999 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1441 | 23.0 | 345 | 0.2360 | 0.6081 | 0.6081 | 0.4755 | 0.4755 | 0.4760 | 0.4760 | 0.4783 | 0.0 | 0.5 | 0.4938 | nan |
| 0.1553 | 24.0 | 360 | 0.2129 | 0.5774 | 0.5774 | 0.4539 | 0.4539 | 0.5274 | 0.5274 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1163 | 25.0 | 375 | 0.1780 | 0.5281 | 0.5281 | 0.3952 | 0.3952 | 0.6048 | 0.6048 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1266 | 26.0 | 390 | 0.2163 | 0.5821 | 0.5821 | 0.4569 | 0.4569 | 0.5198 | 0.5198 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1416 | 27.0 | 405 | 0.1829 | 0.5352 | 0.5352 | 0.4082 | 0.4082 | 0.5939 | 0.5939 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1576 | 28.0 | 420 | 0.1930 | 0.5498 | 0.5498 | 0.4126 | 0.4126 | 0.5716 | 0.5716 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.118 | 29.0 | 435 | 0.2070 | 0.5694 | 0.5694 | 0.4378 | 0.4378 | 0.5405 | 0.5405 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1179 | 30.0 | 450 | 0.1927 | 0.5495 | 0.5495 | 0.4174 | 0.4174 | 0.5721 | 0.5721 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
be48472c3add1b8dc50b087350b72c55
apache-2.0
['translation', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8061 | 1.0 | 500 | 0.5023 |
| 0.6521 | 2.0 | 1000 | 0.3094 |
| 0.5033 | 3.0 | 1500 | 0.2751 |
ad8ea6eda7bcedb625917c3df7a16fc1
apache-2.0
['image-classification', 'timm']
false
Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 28.6
  - GMACs: 13.1
  - Activations (M): 39.5
  - Image size: 384 x 384
- **Papers:**
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k
f3ef871e98753ee5e6f94bb946bb0679
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('convnext_tiny.fb_in22k_ft_in1k_384', pretrained=True)
model = model.eval()
```
5744119d8e1e77f5e71c89b2cc1726c8
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_tiny.fb_in22k_ft_in1k_384',
    pretrained=True,
    features_only=True,
)
model = model.eval()
```
3788b445693d80fddb98a767dfa2f130
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_tiny.fb_in22k_ft_in1k_384',
    pretrained=True,
    num_classes=0,  # remove the classifier head
)
model = model.eval()
```
0146006a53b988db373b3da4aff2ce86
cc-by-4.0
['int8', 'Intel® Neural Compressor', 'PostTrainingStatic']
false
Post-training static quantization This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor). The original fp32 model comes from the fine-tuned model [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2). The calibration dataloader is the train dataloader. The default calibration sampling size 100 isn't divisible exactly by batch size 8, so the real sampling size is 104. The linear modules **roberta.encoder.layer.7.output.dense**, **roberta.encoder.layer.8.output.dense**, and **roberta.encoder.layer.9.output.dense** fall back to fp32 for less than 1% relative accuracy loss.
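As a rough illustration of what INT8 quantization does to a weight tensor, here is a generic symmetric per-tensor quantization sketch (an illustrative simplification, not Neural Compressor's actual algorithm, which also calibrates activation ranges):

```python
def quantize_int8(weights):
    """Symmetric per-tensor quantization: w ~= q * scale, q in [-128, 127]."""
    scale = max(abs(w) for w in weights) / 127
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float values."""
    return [v * scale for v in q]
```

Layers whose values quantize poorly (large relative error after this round-trip) are the ones a tool like Neural Compressor falls back to fp32, as described above.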
3aeb5aca3471cb544119405ee163708b
cc-by-4.0
['int8', 'Intel® Neural Compressor', 'PostTrainingStatic']
false
Load with optimum:

```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForQuestionAnswering

int8_model = IncQuantizedModelForQuestionAnswering.from_pretrained(
    'Intel/roberta-base-squad2-int8-static',
)
```
f7a0684a6086c4867f9628e08b6eba58
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_qqp_192

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set:
- Loss: 0.4568
- Accuracy: 0.7910
- F1: 0.7234
- Combined Score: 0.7572
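The combined score reported above is simply the unweighted mean of accuracy and F1; a quick consistency check:

```python
accuracy, f1 = 0.7910, 0.7234

# Combined Score = mean of accuracy and F1
combined = (accuracy + f1) / 2  # 0.7572
```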
14ceda07f66aa65eda995229396fc36a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.5339 | 1.0 | 1422 | 0.5031 | 0.7551 | 0.6484 | 0.7018 |
| 0.4835 | 2.0 | 2844 | 0.4866 | 0.7650 | 0.6504 | 0.7077 |
| 0.4587 | 3.0 | 4266 | 0.4792 | 0.7694 | 0.6422 | 0.7058 |
| 0.4369 | 4.0 | 5688 | 0.4851 | 0.7745 | 0.6716 | 0.7230 |
| 0.4155 | 5.0 | 7110 | 0.4705 | 0.7791 | 0.6970 | 0.7380 |
| 0.3961 | 6.0 | 8532 | 0.4633 | 0.7858 | 0.7093 | 0.7476 |
| 0.3772 | 7.0 | 9954 | 0.4572 | 0.7908 | 0.7176 | 0.7542 |
| 0.3593 | 8.0 | 11376 | 0.4568 | 0.7910 | 0.7234 | 0.7572 |
| 0.3422 | 9.0 | 12798 | 0.4661 | 0.7927 | 0.7227 | 0.7577 |
| 0.3265 | 10.0 | 14220 | 0.4596 | 0.7983 | 0.7290 | 0.7636 |
| 0.3119 | 11.0 | 15642 | 0.4635 | 0.7977 | 0.7255 | 0.7616 |
| 0.2961 | 12.0 | 17064 | 0.4857 | 0.8008 | 0.7309 | 0.7659 |
| 0.2831 | 13.0 | 18486 | 0.4987 | 0.8037 | 0.7314 | 0.7676 |
346c45d0e81c53c36677ef17c6bc776a
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set:
- Loss: 0.7809
- Matthews Correlation: 0.5286
3a77cade122545fd02dfe641d6a9ed0b
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5299 | 1.0 | 535 | 0.5040 | 0.4383 |
| 0.3472 | 2.0 | 1070 | 0.5284 | 0.4911 |
| 0.2333 | 3.0 | 1605 | 0.6633 | 0.5091 |
| 0.1733 | 4.0 | 2140 | 0.7809 | 0.5286 |
| 0.1255 | 5.0 | 2675 | 0.8894 | 0.5282 |
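Matthews correlation, CoLA's metric, is computed from the binary confusion matrix; a minimal reference implementation (for illustration, not the `datasets` metric the trainer actually used):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """MCC in [-1, 1]; 0 for a random or constant classifier."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom
```

Unlike plain accuracy, MCC stays near 0 for a classifier that always predicts the majority class, which matters on an unbalanced dataset like CoLA.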
78eeefc1eb170324e382fa6716acf3b0
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers']
false
Examples <img src="https://cdn.openart.ai/uploads/image_1675448197954_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1675411612740_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1675196635672_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1674581722334_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1674987795511_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1674932237434_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1673903295569_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1674064743430_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1673727870966_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1673979519921_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1675283643707_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1675277243663_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1675018609128_1024.jpg" style="max-width: 600px;" width="100%"/> More examples: https://openart.ai/@raudemer_enchanting_8k
1ea0ac846750b81c03e2bc6d96ca456b
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers']
false
rMadaArt UI: Requires AUTOMATIC1111 Stable Diffusion Webui --api (https://github.com/AUTOMATIC1111/stable-diffusion-webui) <img src="https://cdn.openart.ai/uploads/image_1675183856117_1024.jpg" style="max-width: 800px;" width="100%"/> https://www.youtube.com/watch?v=47OjMczhBpM&t=416s https://www.youtube.com/watch?v=o7hrptahjvI
3cbcef27cc1fe65c799e230d297547b6
other
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'art', 'artistic', 'diffusers']
false
Atmosphere and fine-tuning variations https://www.youtube.com/watch?v=M_0DRfESzks <img src="https://cdn.openart.ai/uploads/image_1675514080826_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1675515489063_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1675514369201_1024.jpg" style="max-width: 600px;" width="100%"/> <img src="https://cdn.openart.ai/uploads/image_1675524026464_1024.jpg" style="max-width: 600px;" width="100%"/>
11daa7dd73a0dcd6dc2c0b8f316cbcc6
apache-2.0
salesken
false
This model evaluates the wellformedness (non-fragment, grammatically correct) score of a sentence. The model is case-sensitive and penalises incorrect casing as well as grammar.

['She is presenting a paper tomorrow', 'she is presenting a paper tomorrow', 'She present paper today']
[[0.8917], [0.4270], [0.0134]]

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("salesken/query_wellformedness_score")
model = AutoModelForSequenceClassification.from_pretrained("salesken/query_wellformedness_score")

sentences = [' what was the reason for everyone to leave the company ',
             ' What was the reason behind everyone leaving the company ',
             ' why was everybody leaving the company ',
             ' what was the reason to everyone leave the company ',
             ' what be the reason for everyone to leave the company ',
             ' what was the reasons for everyone to leave the company ',
             ' what were the reasons for everyone to leave the company ']

features = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

model.eval()
with torch.no_grad():
    scores = model(**features).logits
print(scores)
```
91c0a4bcc39f41730b5df43f3f90f9a5
mit
['generated_from_keras_callback']
false
nlu_sherlock_model_20220220 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
cb9bdbc6fc6718fc215ddf8453919340
mit
['generated_from_keras_callback']
false
Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -955, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
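The serialized config above describes a linear warm-up to 2e-05 over 1,000 steps followed by polynomial (here linear, power=1.0) decay to 0. A plain-Python sketch of that schedule's shape (the decay horizon below is illustrative, since the serialized `decay_steps` value of -955 looks like an export artifact; the real Keras `WarmUp`/`PolynomialDecay` classes handle this internally):

```python
def lr_at(step, peak=2e-05, warmup_steps=1000, decay_steps=9550, end_lr=0.0, power=1.0):
    """Linear warm-up to `peak`, then polynomial decay toward `end_lr`."""
    if step < warmup_steps:
        return peak * step / warmup_steps
    progress = min(1.0, (step - warmup_steps) / decay_steps)
    return (peak - end_lr) * (1 - progress) ** power + end_lr
```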
30d55cf4de47e065ee922b76e93df6fe
cc-by-4.0
['questions and answers generation']
false
Model Card of `lmqg/bart-base-squad-qag` This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question & answer pair generation task on the [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (dataset_name: default) dataset via [`lmqg`](https://github.com/asahi417/lm-question-generation).
5bfff39364a00f94cd97716cdf5995e0
cc-by-4.0
['questions and answers generation']
false
Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
122522ef0777706c893a248f209453ec
cc-by-4.0
['questions and answers generation']
false
model prediction

```python
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/bart-base-squad-qag")
output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
0213e9e17e4f61edfee495827ce8b912
cc-by-4.0
['questions and answers generation']
false
Evaluation

- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-base-squad-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_squad.default.json)

| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 84.49 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedF1Score (MoverScore) | 57.46 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (BERTScore) | 85.64 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedPrecision (MoverScore) | 60.01 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (BERTScore) | 83.38 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
| QAAlignedRecall (MoverScore) | 55.26 | default | [lmqg/qag_squad](https://huggingface.co/datasets/lmqg/qag_squad) |
e8ebb3e0206f4aa0fc8ef5a4eaa242af
cc-by-4.0
['questions and answers generation']
false
Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_squad
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: None
- model: facebook/bart-base
- max_length: 512
- max_length_output: 256
- epoch: 2
- batch: 16
- lr: 1e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-squad-qag/raw/main/trainer_config.json).
e734b8e0287df36540305e5e6738b7f7
mit
['generated_from_trainer']
false
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
892f51e2b6f059a941e64663fec31095
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
all-mpnet-base-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
0e697a0f3a078d92d5c47a1d54faebf4
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-mpnet-base-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
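The embeddings returned above are typically compared with cosine similarity (sentence-transformers also ships a tensor version as `util.cos_sim`); a minimal pure-Python equivalent for illustration:

```python
import math

def cos_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```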
309fb83b0cc3df5f32a586ae60e07130
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-mpnet-base-v1) ------
c13f716d62dff6919661d3d8b3d1fa7b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Background The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset. We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8 devices, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
39d1ec10acbd9975d0e8bc83330f7093
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 128 word pieces is truncated.
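The sentence vector is produced by mean-pooling the token embeddings, masking out padding positions (mean pooling is this model's configured pooling mode; the sketch below on plain lists is illustrative, not the library's tensor implementation):

```python
def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, ignoring positions where the mask is 0."""
    dim = len(token_embeddings[0])
    total = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            count += 1
            for i, v in enumerate(vec):
                total[i] += v
    return [t / count for t in total]
```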
6aa2a779b222b9ef8e61ade90b69b6d7
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Pre-training We use the pretrained [`microsoft/mpnet-base`](https://huggingface.co/microsoft/mpnet-base). Please refer to the model card for more detailed information about the pre-training procedure.
3f0c1c9d37462cf6033de183e4f9281c
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Hyper parameters We trained our model on a TPU v3-8. We trained the model for 920k steps using a batch size of 512 (64 per TPU core), with a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
16f20f7466daae509e3709a4b3a95d75
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
| [COCO](https://cocodataset.org/
47d6b5fe17d283b33432fa176578b5d5
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | 
[Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,124,818,467** |
b178f4a991ce17628f587ca77f4c5071
apache-2.0
['automatic-speech-recognition', 'pt']
false
exp_w2v2t_pt_vp-nl_s833 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
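Since the model was fine-tuned with HuggingSound, transcription can be sketched with that library. The repository id below (under the HuggingSound author's namespace) and the audio path are assumptions; running this downloads the checkpoint.

```python
from huggingsound import SpeechRecognitionModel

# Repo id assumed from the model name; replace with the actual checkpoint path
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pt_vp-nl_s833")

# Placeholder path; HuggingSound resamples inputs to the required 16 kHz
audio_paths = ["/path/to/sample.mp3"]

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```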
85f79a06d43d4201704c044924b8a5f2
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.1328 - Accuracy: 0.9699
a312c8637d44c67b8de78409d22d56a9
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0
6135bea8a80a4cab6826a632d98f4d24
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Accuracy | Validation Loss | |:-------------:|:-----:|:----:|:--------:|:---------------:| | 0.49 | 1.0 | 65 | 0.9624 | 0.4050 | | 0.2769 | 2.0 | 130 | 0.9850 | 0.1862 | | 0.1441 | 3.0 | 195 | 0.9774 | 0.1554 | | 0.1661 | 4.0 | 260 | 0.9774 | 0.1333 | | 0.1754 | 5.0 | 325 | 0.9699 | 0.1328 |
5cc2ad0f45157f719271efbec3fa7776
apache-2.0
['translation', 'generated_from_trainer']
false
En-Nso_update2 This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-nso](https://huggingface.co/Helsinki-NLP/opus-mt-en-nso) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4199 - Bleu: 24.4776
29fc2db3c63eb041216acceffccd87d6
apache-2.0
['translation', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15
ef910fce52ad5109e5651f4faccc2dab
apache-2.0
['translation', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 3.6661 | 1.0 | 865 | 3.0081 | 17.6871 | | 2.7495 | 2.0 | 1730 | 2.7725 | 20.1475 | | 2.4533 | 3.0 | 2595 | 2.6433 | 22.5433 | | 2.3203 | 4.0 | 3460 | 2.5625 | 22.9963 | | 2.1356 | 5.0 | 4325 | 2.5190 | 23.5696 | | 2.0258 | 6.0 | 5190 | 2.4881 | 23.8367 | | 1.9481 | 7.0 | 6055 | 2.4641 | 24.0611 | | 1.8769 | 8.0 | 6920 | 2.4526 | 24.3214 | | 1.8211 | 9.0 | 7785 | 2.4392 | 24.5300 | | 1.7689 | 10.0 | 8650 | 2.4307 | 24.4627 | | 1.7314 | 11.0 | 9515 | 2.4254 | 24.4936 | | 1.7 | 12.0 | 10380 | 2.4243 | 24.4673 | | 1.6695 | 13.0 | 11245 | 2.4202 | 24.5613 | | 1.6562 | 14.0 | 12110 | 2.4200 | 24.4886 | | 1.6446 | 15.0 | 12975 | 2.4199 | 24.4711 |
20a7fe8e5b12f31ab9bc52f0ffefb3af
apache-2.0
['audio-classification', 'generated_from_trainer']
false
wav2vec2-base-ft-keyword-spotting This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0824 - Accuracy: 0.9826
b54c3b87ca692d51347215e381b031ff
apache-2.0
['audio-classification', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 - mixed_precision_training: Native AMP
f76b83e18a05253eeb507353817df9a2
apache-2.0
['audio-classification', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8972 | 1.0 | 399 | 0.7023 | 0.8174 | | 0.3274 | 2.0 | 798 | 0.1634 | 0.9773 | | 0.1993 | 3.0 | 1197 | 0.1048 | 0.9788 | | 0.1777 | 4.0 | 1596 | 0.0824 | 0.9826 | | 0.1527 | 5.0 | 1995 | 0.0812 | 0.9810 |
f29839450da1903e680840d923ccd63f
gpl-3.0
['generated_from_trainer']
false
Description - The dataset consists of 148 Filipino storytelling books, 5,005 total sentences, 45,792 total tokens, and 5,646 unique tokens. - This NER model only supports the Filipino language and does not cover proper nouns, verbs, adjectives, and adverbs at the moment - The input must undergo preprocessing; the preprocessing code will be uploaded to GitHub soon - To replicate the preprocessed input, use this example as a guide - Input: "May umaapoy na bahay" - Preprocessed Input: "apoy bahay"
8751173b9050ed4be69a4cccaed7a953
gpl-3.0
['generated_from_trainer']
false
bert-tagalog-base-uncased-ner-v1 This model is a fine-tuned version of [jcblaise/bert-tagalog-base-uncased](https://huggingface.co/jcblaise/bert-tagalog-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2824 - Precision: 0.9091 - Recall: 0.8988 - F1: 0.9039 - Accuracy: 0.9488
9d8f5b6b804991f170e4cedd4fb867ff
gpl-3.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 205 | 0.5311 | 0.6465 | 0.5458 | 0.5919 | 0.8387 | | No log | 2.0 | 410 | 0.3052 | 0.7736 | 0.7811 | 0.7774 | 0.9110 | | 0.4693 | 3.0 | 615 | 0.2531 | 0.8493 | 0.8363 | 0.8427 | 0.9319 | | 0.4693 | 4.0 | 820 | 0.2384 | 0.8755 | 0.8715 | 0.8735 | 0.9402 | | 0.064 | 5.0 | 1025 | 0.2671 | 0.8909 | 0.8823 | 0.8866 | 0.9435 | | 0.064 | 6.0 | 1230 | 0.2527 | 0.8864 | 0.8920 | 0.8892 | 0.9459 | | 0.064 | 7.0 | 1435 | 0.2708 | 0.9088 | 0.9011 | 0.9049 | 0.9491 | | 0.0111 | 8.0 | 1640 | 0.2733 | 0.8992 | 0.8977 | 0.8984 | 0.9490 | | 0.0111 | 9.0 | 1845 | 0.2765 | 0.8991 | 0.8965 | 0.8978 | 0.9485 | | 0.0037 | 10.0 | 2050 | 0.2824 | 0.9091 | 0.8988 | 0.9039 | 0.9488 |
c1324d5c5827a431f2b90043401440b0
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7758 - Accuracy: 0.92
1517ab36010c55210e0580edbe387faa
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.295 | 1.0 | 318 | 3.2908 | 0.7448 | | 2.6313 | 2.0 | 636 | 1.8779 | 0.8384 | | 1.5519 | 3.0 | 954 | 1.1600 | 0.8981 | | 1.0148 | 4.0 | 1272 | 0.8585 | 0.9123 | | 0.7974 | 5.0 | 1590 | 0.7758 | 0.92 |
313dd1e8f4ae8a3cf00fcdfc9b913450
creativeml-openrail-m
['text-to-image']
false
seif-1_5 Dreambooth model trained by HusseinHE with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: skseif (use that in your prompt)
298a8fef696b5e7770306b95eb9eccb9
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-mi This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8606
ff66bff250888eb742a8bf8926ee576f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1069 | 1.0 | 97 | 2.3524 | | 2.1677 | 2.0 | 194 | 1.9426 | | 1.9197 | 3.0 | 291 | 2.0536 |
c096c18f4eb3890272b9af7ab7517daf
mit
['azbert', 'pretraining', 'fill-mask']
false
About Here we share a pretrained BERT model that is aware of math tokens. The math tokens are treated specially and tokenized using [pya0](https://github.com/approach0/pya0), which adds only a very limited number of new tokens for LaTeX markup (the total vocabulary is just 31,061). This model was trained on 4 x 2 Tesla V100 GPUs with a total batch size of 64, on Math StackExchange data (2.7 million sentence pairs) for 7 epochs.
87c789d67423a833142083b95e64ef2e
mit
['azbert', 'pretraining', 'fill-mask']
false
Usage Download and try it out ```sh pip install pya0==0.3.2 wget https://vault.cs.uwaterloo.ca/s/gqstFZmWHCLGXe3/download -O ckpt.tar.gz mkdir -p ckpt tar xzf ckpt.tar.gz -C ckpt --strip-components=1 python test.py --test_file test.txt ```
cbad895a02021b8a45d947ed10d6c6d7
mit
['azbert', 'pretraining', 'fill-mask']
false
Test file format Modify the test examples in `test.txt` to play with it. The test file is tab-separated; the first column lists additional positions you want to mask in the right-side sentence (useful for masking tokens in math markups). A zero means no additional mask positions.
1532f8b1c398e13498c5b059dba9c47b
mit
['azbert', 'pretraining', 'fill-mask']
false
Upload to huggingface This repo is hosted on [Github](https://github.com/approach0/azbert), and only mirrored at [huggingface](https://huggingface.co/castorini/azbert-base). To upload to huggingface, use the `upload2hgf.sh` script. Before running this script, be sure to check: * checkpoints for the model and tokenizer are created under the `./ckpt` folder * the model contains all the files needed: `config.json` and `pytorch_model.bin` * the tokenizer contains all the files needed: `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, `vocab.txt` and `tokenizer.json` * there is no `tokenizer_file` field in `tokenizer_config.json` (sometimes it points to a local path under `~/.cache`) * `git-lfs` is installed * a git remote named `hgf` points to `https://huggingface.co/castorini/azbert-base`
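The last two checklist items can be set up as follows (a sketch; run from the repository root):

```shell
# Install the git-lfs hooks (required for the large model weights)
git lfs install

# Add a remote named `hgf` pointing at the huggingface mirror, then verify
git remote add hgf https://huggingface.co/castorini/azbert-base
git remote -v
```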
7e870dffeeb923dbbe11028220afdf63
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad-seed-42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4364
15d30002ab5211a168f6809da879e9b6
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.1937 | 1.0 | 8235 | 1.2350 | | 0.9256 | 2.0 | 16470 | 1.3129 | | 0.7489 | 3.0 | 24705 | 1.4364 |
cf48a4de86e26cc0c9df9d6ac2ef4e2a
apache-2.0
['translation']
false
opus-mt-fi-gil * source languages: fi * target languages: gil * OPUS readme: [fi-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-gil/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-gil/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-gil/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-gil/opus-2020-01-08.eval.txt)
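As with other OPUS-MT checkpoints, the converted model can be used through `transformers`. This is a sketch: the example source sentence is arbitrary, and inference downloads the checkpoint.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-gil"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a Finnish sentence ("Good morning") into Gilbertese
batch = tokenizer(["Hyvää huomenta"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```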
9d27f1a0f3e5a629c3e7115249babd22
mit
[]
false
Cat toy on Stable Diffusion This is the `<cat-toy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<cat-toy> 0](https://huggingface.co/sd-concepts-library/cat-toy/resolve/main/concept_images/3.jpeg) ![<cat-toy> 1](https://huggingface.co/sd-concepts-library/cat-toy/resolve/main/concept_images/0.jpeg) ![<cat-toy> 2](https://huggingface.co/sd-concepts-library/cat-toy/resolve/main/concept_images/1.jpeg) ![<cat-toy> 3](https://huggingface.co/sd-concepts-library/cat-toy/resolve/main/concept_images/2.jpeg)
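Besides the notebooks, recent `diffusers` versions can load the concept directly via `load_textual_inversion`. A sketch, assuming the v1-5 base model (the concept was trained for Stable Diffusion v1); running this downloads the weights and benefits from a GPU.

```python
from diffusers import StableDiffusionPipeline

# Base model is an assumption; any SD v1-compatible checkpoint should work
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# The placeholder token <cat-toy> now resolves to the learned embedding
image = pipe("a <cat-toy> on a beach", num_inference_steps=30).images[0]
image.save("cat_toy.png")
```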
70b1fb41d9c7ac795c9ac26e8822ffc1
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_5 This model is a fine-tuned version of [husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4](https://huggingface.co/husnu/wav2vec2-large-xls-r-300m-turkish-colab_common_voice-8_4) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3439 - Wer: 0.3634
eb858fa0112d4d8f1ed9a54a3e03eea9
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP
24dcd3811c1c8605de6d2e606db28ba4
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1243 | 0.51 | 400 | 0.4312 | 0.4202 | | 0.1956 | 1.02 | 800 | 0.4421 | 0.4498 | | 0.1816 | 1.53 | 1200 | 0.4012 | 0.4285 | | 0.1548 | 2.04 | 1600 | 0.3720 | 0.3845 | | 0.1171 | 2.55 | 2000 | 0.3439 | 0.3634 |
cea15870b18065edb275dda7497a5432
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-effectiveness-dagstuhl This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6418 - Accuracy: 0.6190
8168a75537b5d772decbcd8e70e67456
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 16 | 0.6729 | 0.5714 | | No log | 2.0 | 32 | 0.6418 | 0.6190 | | No log | 3.0 | 48 | 0.6719 | 0.5556 | | No log | 4.0 | 64 | 0.6386 | 0.6032 | | No log | 5.0 | 80 | 0.6559 | 0.5714 |
0ea754f7e8a30e56af506217dbd348d7
creativeml-openrail-m
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers', 'lora']
false
LoRA DreamBooth - simbatheoglion These are LoRA adaption weights for [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base). The weights were trained on the instance prompt "a photo of simbatheog" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. Test prompt: A photo of simbatheog in a bucket ![image_0](test_images/image_0.png) ![image_1](test_images/image_1.png) ![image_2](test_images/image_2.png) ![image_3](test_images/image_3.png)
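A sketch of loading these adaption weights on top of the frozen base model with `diffusers`. The LoRA repository id below is hypothetical (the card does not state it); running this downloads the base checkpoint and requires a CUDA GPU.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA attention weights over the frozen base UNet
pipe.unet.load_attn_procs("simbatheoglion/lora-dreambooth")  # hypothetical repo id

image = pipe("A photo of simbatheog in a bucket", num_inference_steps=25).images[0]
image.save("simbatheog.png")
```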
bec9aa2f8b86f216edc36f3765708831
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hy', 'hf-asr-leaderboard']
false
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - HY-AM dataset. It achieves the following results on the evaluation set: - Loss: **0.4521** - Wer: **0.5141** - Cer: **0.1100** - Wer+LM: **0.2756** - Cer+LM: **0.0866**
95bb563c72546a624144f89793176a5e
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hy', 'hf-asr-leaderboard']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: tristage - lr_scheduler_ratios: [0.1, 0.4, 0.5] - training_steps: 1400 - mixed_precision_training: Native AMP
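The `tristage` schedule with ratios `[0.1, 0.4, 0.5]` splits the 1400 training steps into warm-up, hold, and decay phases. A simplified sketch follows; the fairseq-style original decays exponentially and starts from a small initial rate, whereas linear ramps are used here purely for illustration.

```python
def tristage_lr(step, peak_lr=8e-05, total_steps=1400, ratios=(0.1, 0.4, 0.5)):
    warmup = int(ratios[0] * total_steps)  # 140 steps ramping up to peak_lr
    hold = int(ratios[1] * total_steps)    # 560 steps held at peak_lr
    decay = total_steps - warmup - hold    # 700 steps decaying back down
    if step < warmup:
        return peak_lr * step / warmup
    if step < warmup + hold:
        return peak_lr
    return peak_lr * max(0.0, (total_steps - step) / decay)
```

Under this sketch the rate peaks at step 140, stays flat until step 700, then falls to zero at step 1400.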
a7cca5ce591b3f7ebe24564d4605bb23
apache-2.0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'robust-speech-event', 'hy', 'hf-asr-leaderboard']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:------:|:----:|:---------------:|:------:|:------:| | 6.1298 | 19.87 | 100 | 3.1204 | 1.0 | 1.0 | | 2.7269 | 39.87 | 200 | 0.6200 | 0.7592 | 0.1755 | | 1.4643 | 59.87 | 300 | 0.4796 | 0.5921 | 0.1277 | | 1.1242 | 79.87 | 400 | 0.4637 | 0.5359 | 0.1145 | | 0.9592 | 99.87 | 500 | 0.4521 | 0.5141 | 0.1100 | | 0.8704 | 119.87 | 600 | 0.4736 | 0.4914 | 0.1045 | | 0.7908 | 139.87 | 700 | 0.5394 | 0.5250 | 0.1124 | | 0.7049 | 159.87 | 800 | 0.4822 | 0.4754 | 0.0985 | | 0.6299 | 179.87 | 900 | 0.4890 | 0.4809 | 0.1028 | | 0.5832 | 199.87 | 1000 | 0.5233 | 0.4813 | 0.1028 | | 0.5145 | 219.87 | 1100 | 0.5350 | 0.4781 | 0.0994 | | 0.4604 | 239.87 | 1200 | 0.5223 | 0.4715 | 0.0984 | | 0.4226 | 259.87 | 1300 | 0.5167 | 0.4625 | 0.0953 | | 0.3946 | 279.87 | 1400 | 0.5248 | 0.4614 | 0.0950 |
1f29e0a3fbe45f4a2b2a0118fd799ebb
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_stsb This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 1.1918 - Pearson: 0.1864 - Spearmanr: 0.1859 - Combined Score: 0.1862
455132772124ed261d9d5c4aa06eb656
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 1.7465 | 1.0 | 45 | 1.2026 | 0.0588 | 0.0666 | 0.0627 | | 1.079 | 2.0 | 90 | 1.4599 | 0.0595 | 0.0691 | 0.0643 | | 1.0784 | 3.0 | 135 | 1.2063 | 0.0611 | 0.0707 | 0.0659 | | 0.9943 | 4.0 | 180 | 1.3534 | 0.0730 | 0.0730 | 0.0730 | | 0.9523 | 5.0 | 225 | 1.3943 | 0.1080 | 0.1010 | 0.1045 | | 0.8379 | 6.0 | 270 | 1.1918 | 0.1864 | 0.1859 | 0.1862 | | 0.7217 | 7.0 | 315 | 1.2542 | 0.2080 | 0.2144 | 0.2112 | | 0.6304 | 8.0 | 360 | 1.2209 | 0.1920 | 0.1979 | 0.1950 | | 0.5573 | 9.0 | 405 | 1.2925 | 0.1881 | 0.1814 | 0.1847 | | 0.5048 | 10.0 | 450 | 1.3943 | 0.1731 | 0.1877 | 0.1804 | | 0.4754 | 11.0 | 495 | 1.3058 | 0.1845 | 0.1817 | 0.1831 |
b9d88e88f6adb1c9501ce69dd566a7a4
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'wildcard']
false
DreamBooth model for the pochita concept trained by Arch4ngel on the Arch4ngel/pochita_v2 dataset. This is a Stable Diffusion model fine-tuned on the pochita concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of pochita plushie** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
4cc54ff2a8d06c0b9a4b1ee4e4a65560
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2745 - Accuracy: 0.9346
4480aedab3804a6738c63249b4c507a3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1778 | 1.0 | 4210 | 0.3553 | 0.9060 | | 0.1257 | 2.0 | 8420 | 0.2745 | 0.9346 | | 0.0779 | 3.0 | 12630 | 0.3272 | 0.9300 | | 0.0655 | 4.0 | 16840 | 0.3412 | 0.9323 | | 0.0338 | 5.0 | 21050 | 0.3994 | 0.9300 |
25a6c99afed030ae6bb4617dcc300e21
apache-2.0
['translation']
false
opus-mt-fi-ilo * source languages: fi * target languages: ilo * OPUS readme: [fi-ilo](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-ilo/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-ilo/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ilo/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-ilo/opus-2020-01-08.eval.txt)
8543f70075293c511ecf027f26ed84d3
mit
['generated_from_trainer']
false
finetuned_gpt2-medium_sst2_negation0.0001_pretrainedTrue_epochs1 This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 2.8742
5515fe0b1c1d6187bae077665498ae1c
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cloud1-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0074 - Precision: 0.9714 - Recall: 0.9855 - F1: 0.9784 - Accuracy: 0.9972
69a42234986ceaceab104318ec6e0c88
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 166 | 0.0160 | 0.9653 | 0.9420 | 0.9535 | 0.9945 | | No log | 2.0 | 332 | 0.0089 | 0.9623 | 0.9855 | 0.9737 | 0.9965 | | No log | 3.0 | 498 | 0.0074 | 0.9714 | 0.9855 | 0.9784 | 0.9972 |
05326a07c1ff75066a6b06982226ae9c
mit
[]
false
Model miniALBERT is a recursive transformer model which uses cross-layer parameter sharing, embedding factorisation, and bottleneck adapters to achieve high parameter efficiency. Since miniALBERT is a compact model, it is trained using a layer-to-layer distillation technique, using the bert-base model as the teacher. Currently, this model is trained for one epoch on the English subset of Wikipedia. In terms of architecture, this model uses an embedding dimension of 128, a hidden size of 768, an MLP expansion rate of 4, and a reduction factor of 16 for bottleneck adapters. In general, this model uses 6 recursions and has a unique parameter count of 11 million parameters.
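The parameter saving from embedding factorisation can be checked with quick arithmetic. A sketch: the embedding and hidden dimensions come from the card, while the vocabulary size of 30,522 is the standard bert-base WordPiece value and is an assumption here.

```python
vocab_size = 30_522   # assumed bert-base WordPiece vocabulary
embed_dim = 128       # embedding dimension from the card
hidden_size = 768     # hidden size from the card

# Factorised: vocab -> 128-d lookup, then a 128 -> 768 projection
factorised = vocab_size * embed_dim + embed_dim * hidden_size
# Unfactorised: vocab -> 768-d lookup directly
full = vocab_size * hidden_size

print(f"factorised: {factorised:,} vs full: {full:,}")  # ~4.0M vs ~23.4M
```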
3553276f2daf8ca6fde66b5300a141da
mit
[]
false
For token classification, use the code below:

```Python
model = MiniAlbertForTokenClassification.from_pretrained("nlpie/miniALBERT-128")
```

In addition, for efficient fine-tuning using the pre-trained bottleneck adapters, use the code below:

```Python
model.trainAdaptersOnly()
```
901c245c8ec62a114f8fdb361e3b243f
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_qnli_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6564 - Accuracy: 0.6030
012ac9a001e6630795bfc371ab423d6e
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.679 | 1.0 | 410 | 0.6614 | 0.5938 | | 0.6496 | 2.0 | 820 | 0.6564 | 0.6030 | | 0.6268 | 3.0 | 1230 | 0.6635 | 0.5978 | | 0.6055 | 4.0 | 1640 | 0.6714 | 0.5933 | | 0.5836 | 5.0 | 2050 | 0.6964 | 0.5913 | | 0.5602 | 6.0 | 2460 | 0.7319 | 0.5832 | | 0.5385 | 7.0 | 2870 | 0.7653 | 0.5718 |
1642b8eb6f16836b9932795b803b5863
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 4 - mixed_precision_training: Native AMP
772909d371a78a98ac08c476c31a7ce9
mit
['generated_from_trainer']
false
bart-large-cnn-samsum-ElectrifAi_v3 This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8053 - Rouge1: 62.0348 - Rouge2: 41.9592 - Rougel: 49.1046 - Rougelsum: 59.4965 - Gen Len: 101.2747
db6194c56ef74006429461a16ab664bf