| column | dtype | length / values |
|:--|:--|:--|
| license | string | 2–30 chars |
| tags | string | 2–513 chars |
| is_nc | bool | 1 class |
| readme_section | string | 201–597k chars |
| hash | string | 32 chars |
mit
['sklearn', 'skops', 'tabular-classification']
false
Fragment of a scikit-learn HTML pipeline diagram (interactive widget markup stripped); the recoverable content is an `OneHotEncoder(handle_unknown=…)` step inside the pipeline.
dfe31ad144d95ad52c35dca2c56a6452
mit
['sklearn', 'skops', 'tabular-classification']
false
Fragment of a scikit-learn HTML pipeline diagram (interactive widget markup stripped); the recoverable content is a `LogisticRegression(class_weight=…)` step at the end of the pipeline.
772bff5aa002ff1e4de5103ccdb6f973
apache-2.0
['generated_from_trainer']
false
bert-finetuned-targetexpression This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7245 - Precision: 0.5780 - Recall: 0.5871 - F1: 0.5825 - Accuracy: 0.7560
5dc6191d6cb80adfa4b065981c7ce146
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 218 | 0.7744 | 0.4945 | 0.5491 | 0.5204 | 0.7232 | | No log | 2.0 | 436 | 0.7151 | 0.5794 | 0.5703 | 0.5748 | 0.7540 | | 0.7582 | 3.0 | 654 | 0.7245 | 0.5780 | 0.5871 | 0.5825 | 0.7560 |
645e2d1b7acb67ecf79ad2406e13c9fa
afl-3.0
[]
false
This model detects **abusive speech** in **English**. It is fine-tuned from the MuRIL model on an English abusive-speech dataset, with a learning rate of 2e-5. Training code can be found at this [url](https://github.com/hate-alert/IndicAbusive) LABEL_0 :-> Normal LABEL_1 :-> Abusive
aff78ec110f94a5caaae52d9315d8fd9
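A minimal usage sketch for the abusive-speech classifier described in the card above. The checkpoint id is not given there, so `<model-repo-id>` is a placeholder; the `text-classification` pipeline call is an assumption about how the model would typically be served.

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual checkpoint published with the IndicAbusive code.
classifier = pipeline("text-classification", model="<model-repo-id>")

# Per the card: LABEL_0 -> Normal, LABEL_1 -> Abusive.
print(classifier("You are a wonderful person."))
```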
apache-2.0
['generated_from_trainer']
false
albert-xxlarge-v2-finetuned-Poems This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1923
40122895629b74e3d0494ee3e4c2ca55
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-07 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP
84c29db3efdb9603bc332052e42ca3e7
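A sketch of how the hyperparameters listed above would map onto `transformers.TrainingArguments`; the `output_dir` and the `fp16` flag (for the "Native AMP" entry) are assumptions, the remaining values are copied from the list.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="albert-xxlarge-v2-finetuned-Poems",  # assumed output directory
    learning_rate=2e-07,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```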
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.482 | 1.0 | 19375 | 2.2959 | | 2.258 | 2.0 | 38750 | 2.2357 | | 2.2146 | 3.0 | 58125 | 2.2085 | | 2.1975 | 4.0 | 77500 | 2.1929 | | 2.1893 | 5.0 | 96875 | 2.1863 |
2cd504043f775c02044a62b0c94050c9
apache-2.0
['generated_from_trainer']
false
idpintents-key-value This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8276 - F1: 0.8849
e0bc2aaab90497e0755a0257fcb5632f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.8264 | 1.0 | 68 | 1.3672 | 0.7358 | | 1.3147 | 2.0 | 136 | 0.9310 | 0.8356 | | 1.0444 | 3.0 | 204 | 0.8276 | 0.8849 |
5b8cdb7dad8c21a8b5772c46fd30b4f0
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-classification This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 38.9115 - Mse: 38.9115 - Mae: 4.5330 - R2: 0.7802 - Accuracy: 0.1620 - Msev: 0.0257
0ac0056e97ce75a04685fe4f209f3b1c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy | Msev | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:------:|:--------:|:------:| | 12.4524 | 1.0 | 5215 | 43.9797 | 43.9797 | 4.8194 | 0.7515 | 0.1693 | 0.0227 | | 4.393 | 2.0 | 10430 | 39.2333 | 39.2333 | 4.6028 | 0.7783 | 0.1737 | 0.0255 | | 2.424 | 3.0 | 15645 | 41.3763 | 41.3763 | 4.6597 | 0.7662 | 0.1620 | 0.0242 | | 1.781 | 4.0 | 20860 | 39.4309 | 39.4309 | 4.5960 | 0.7772 | 0.1767 | 0.0254 | | 1.3608 | 5.0 | 26075 | 38.9115 | 38.9115 | 4.5330 | 0.7802 | 0.1620 | 0.0257 | | 1.2014 | 6.0 | 31290 | 39.7403 | 39.7403 | 4.5850 | 0.7755 | 0.1716 | 0.0252 | | 1.0742 | 7.0 | 36505 | 40.4495 | 40.4495 | 4.6133 | 0.7715 | 0.1685 | 0.0247 | | 0.837 | 8.0 | 41720 | 39.5864 | 39.5864 | 4.5630 | 0.7763 | 0.1620 | 0.0253 | | 0.8054 | 9.0 | 46935 | 39.9482 | 39.9482 | 4.5839 | 0.7743 | 0.1569 | 0.0250 | | 0.8085 | 10.0 | 52150 | 39.5685 | 39.5685 | 4.5669 | 0.7764 | 0.1573 | 0.0253 |
626fc33fa228448adb5dc0c134a07bfe
cc-by-4.0
['question generation']
false
Model Card of `research-backup/t5-large-subjqa-vanilla-restaurants-qg` This model is fine-tuned version of [t5-large](https://huggingface.co/t5-large) for question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: restaurants) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
c21735d78e02debd59d6b4200fdf0125
cc-by-4.0
['question generation']
false
Overview - **Language model:** [t5-large](https://huggingface.co/t5-large) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (restaurants) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
5537e842845f17c7e4ac7d3c324371f1
cc-by-4.0
['question generation']
false
model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "research-backup/t5-large-subjqa-vanilla-restaurants-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ```
e6b31ac8bef3fa21f5527c6ea9ce4d85
cc-by-4.0
['question generation']
false
Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-large-subjqa-vanilla-restaurants-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) | | Score | Type | Dataset | |:-----------|--------:|:------------|:-----------------------------------------------------------------| | BERTScore | 80.61 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 3.91 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 0.73 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 0 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 0 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 4.61 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 50.31 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 6.43 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
9ea6909089b0e6d837a3e82c7d7186bd
cc-by-4.0
['question generation']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: restaurants - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: t5-large - max_length: 512 - max_length_output: 32 - epoch: 1 - batch: 16 - lr: 1e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-large-subjqa-vanilla-restaurants-qg/raw/main/trainer_config.json).
1e92bb34b4fbe98bfb24f10ced400e39
mit
['summarization', 'headline-generation', 'text-generation']
false
t5-small for headline generation This model is a [t5-small](https://huggingface.co/t5-small) fine-tuned for headline generation using the [JulesBelveze/tldr_news](https://huggingface.co/datasets/JulesBelveze/tldr_news) dataset.
a54afb4a8b971d72aad207701d9f4a82
mit
['summarization', 'headline-generation', 'text-generation']
false
Using this model ```python import re from transformers import AutoTokenizer, T5ForConditionalGeneration WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) article_text = """US FCC commissioner Brendan Carr has asked Apple and Google to remove TikTok from their app stores. The video app is owned by Chinese company ByteDance. Carr claims that TikTok functions as a surveillance tool that harvests extensive amounts of personal and sensitive data from US citizens. TikTok says its data access approval process is overseen by a US-based security team and that data is only accessed on an as-needed basis under strict controls.""" model_name = "JulesBelveze/t5-small-headline-generator" tokenizer = AutoTokenizer.from_pretrained(model_name) model = T5ForConditionalGeneration.from_pretrained(model_name) input_ids = tokenizer( [WHITESPACE_HANDLER(article_text)], return_tensors="pt", padding="max_length", truncation=True, max_length=384 )["input_ids"] output_ids = model.generate( input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4 )[0] summary = tokenizer.decode( output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(summary) ```
b6330f1d0d68942cbd2706f591d341e1
creativeml-openrail-m
[]
false
LoRA created from: https://civitai.com/models/4462/aaacups by https://civitai.com/user/modelsforall Currently testing, on photorealistic models. Weights between 0.5 and 2.0 seem to give good results depending on the proportions of the starting model/img and desired amount of reduction. Try a higher value on something like URPMv1.2, lower for Realistic Vision V1.3. Going too high reduces skin texture and introduces artifacting.
48d3f44377cdaca222d1327f51fc2592
cc-by-sa-4.0
['japanese', 'wikipedia', 'question-answering', 'dependency-parsing']
false
Model Description This is a BERT model pretrained on Japanese Wikipedia texts for dependency-parsing (head-detection on long-unit-words) as question-answering, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when the word given as `question` occurs more than once in the context.
51f5aaf402bf51565230f2e73c028fbe
cc-by-sa-4.0
['japanese', 'wikipedia', 'question-answering', 'dependency-parsing']
false
How to Use ```py from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-wikipedia-ud-head") model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/bert-large-japanese-wikipedia-ud-head") qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False) print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵>が用いられている")) ``` or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/)) ```py class TransformersUD(object): def __init__(self,bert): import os from transformers import (AutoTokenizer,AutoModelForQuestionAnswering, AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline) self.tokenizer=AutoTokenizer.from_pretrained(bert) self.model=AutoModelForQuestionAnswering.from_pretrained(bert) x=AutoModelForTokenClassification.from_pretrained if os.path.isdir(bert): d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger")) else: from transformers.utils import cached_file c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json")) d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c) s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json")) t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s) self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer, aggregation_strategy="simple") self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer) def __call__(self,text): import numpy,torch,ufal.chu_liu_edmonds w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)] z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w) r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan) v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[] for i,t in enumerate(v): q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id] c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]]) b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c] with torch.no_grad(): d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]), token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b])) s,e=d.start_logits.tolist(),d.end_logits.tolist() for i in range(n): for j in range(n): m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1] h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] if [0 for i in h if i==0]!=[0]: i=([p for s,e,p in w]+["root"]).index("root") j=i+1 if i<n else numpy.nanargmax(m[:,0]) m[0:j,0]=m[j+1:,0]=numpy.nan h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0] u="
ed7e74242f44b72ae0d317a89edb4f5f
cc-by-sa-4.0
['japanese', 'wikipedia', 'question-answering', 'dependency-parsing']
false
text = "+text.replace("\n"," ")+"\n" for i,(s,e,p) in enumerate(w,1): p="root" if h[i]==0 else "dep" if p=="root" else p u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]), str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n" return u+"\n" nlp=TransformersUD("KoichiYasuoka/bert-large-japanese-wikipedia-ud-head") print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている")) ```
0487bc2b0bdb01cc11be8167d223df14
other
['vision', 'image-segmentation']
false
Mask2Former Mask2Former model trained on Cityscapes instance segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation ](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/). Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team.
42b87466ce3fce510bc5b3c241cf5548
other
['vision', 'image-segmentation']
false
load Mask2Former fine-tuned on Cityscapes instance segmentation processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-instance") model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-instance") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs)
bccd3c8db9703ec7ae50bd6943c3ee5a
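A self-contained sketch of the snippet above plus the post-processing step that the card's excerpt stops short of; the imports and the `post_process_instance_segmentation` call are assumptions about how the example continues, the checkpoint id is taken from the card.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

ckpt = "facebook/mask2former-swin-tiny-cityscapes-instance"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = Mask2FormerForUniversalSegmentation.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Turn the raw class/mask logits into a per-pixel instance map.
prediction = processor.post_process_instance_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(prediction["segmentation"].shape)  # (height, width) map of instance ids
print(prediction["segments_info"][:3])   # label id and score per detected instance
```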
apache-2.0
['generated_from_keras_callback']
false
cwan6830/bert-finetuned-ner This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0247 - Validation Loss: 0.0564 - Epoch: 2
2830605758d7f41f6de6e79a9030d1b4
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1226 | 0.0579 | 0 | | 0.0396 | 0.0514 | 1 | | 0.0247 | 0.0564 | 2 |
2a3b8c8eb69ab32785a4a6263efc7282
apache-2.0
['generated_from_trainer']
false
wav2vec2-xls-r-300m-mn-demo This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9633 - Wer: 0.5586
7f5193ba4c56c0ce4fd47bb6b10a2c9b
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.5564 | 6.77 | 400 | 2.8622 | 0.9998 | | 1.0005 | 13.55 | 800 | 0.9428 | 0.6614 | | 0.3018 | 20.34 | 1200 | 0.9611 | 0.5860 | | 0.1918 | 27.12 | 1600 | 0.9633 | 0.5586 |
c05f550d37423e978dd7aa310c700079
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s42 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
c687178c6f51cf27cde64d31f2a1f48f
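A minimal transcription sketch with the HuggingSound tool mentioned above. The hub id is inferred from the model name in the card (adjust it if the actual repository path differs) and the audio path is a placeholder.

```python
from huggingsound import SpeechRecognitionModel

# Assumed hub path built from the model name in the card.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_fr_xls-r_accent_france-5_belgium-5_s42")

# Input audio must be sampled at 16 kHz, as the card requires.
transcriptions = model.transcribe(["sample_16khz.wav"])
print(transcriptions[0]["transcription"])
```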
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
wav2vec2-live-japanese https://github.com/ttop32/wav2vec2-live-japanese-translator Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese hiragana using the - [common_voice](https://huggingface.co/datasets/common_voice) - [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut) - [CSS10](https://github.com/Kyubyong/css10) - [TEDxJP-10K](https://github.com/laboroai/TEDxJP-10K) - [JVS](https://sites.google.com/site/shinnosuketakamichi/research-topics/jvs_corpus) - [JSSS](https://sites.google.com/site/shinnosuketakamichi/research-topics/jsss_corpus)
5fa46298f6b924f772592d39be0c48de
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
usage import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor model = Wav2Vec2ForCTC.from_pretrained("ttop324/wav2vec2-live-japanese") processor = Wav2Vec2Processor.from_pretrained("ttop324/wav2vec2-live-japanese") test_dataset = load_dataset("common_voice", "ja", split="test")
ab96d367805ebe82b728200740fe412c
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = torchaudio.functional.resample(speech_array, sampling_rate, 16000)[0].numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ```
5583efd049475459dfb8fba155c66d40
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re import pykakasi import MeCab wer = load_metric("wer") cer = load_metric("cer") model = Wav2Vec2ForCTC.from_pretrained("ttop324/wav2vec2-live-japanese").to("cuda") processor = Wav2Vec2Processor.from_pretrained("ttop324/wav2vec2-live-japanese") test_dataset = load_dataset("common_voice", "ja", split="test") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\‘\”\�‘、。.!,・―─~「」『』\\\\※\[\]\{\}「」〇?…]' wakati = MeCab.Tagger("-Owakati") kakasi = pykakasi.kakasi() kakasi.setMode("J","H")
e99fc984378705d426f89dab9090c15e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
katakana to hiragana conv = kakasi.getConverter() FULLWIDTH_TO_HALFWIDTH = str.maketrans( ' 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!゛#$%&()*+、ー。/:;〈=〉?@[]^_‘{|}~', ' 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!"
7b62281b8d92c2f183c3d8f1d7856e88
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
$%&()*+,-./:;<=>?@[]^_`{|}~', ) def fullwidth_to_halfwidth(s): return s.translate(FULLWIDTH_TO_HALFWIDTH) def preprocessData(batch): batch["sentence"] = fullwidth_to_halfwidth(batch["sentence"]) batch["sentence"] = re.sub(chars_to_ignore_regex,' ', batch["sentence"]).lower()
ef839b3a2329ad8d90d4ac9dd32b6e4b
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
remove multiple space speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = torchaudio.functional.resample(speech_array, sampling_rate, 16000)[0].numpy() return batch test_dataset = test_dataset.map(preprocessData)
d80db57ad473fcd2c858d6936529d21d
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ```
e145ecdb35bc1a0008d70e23f1bfa912
apache-2.0
['generated_from_trainer']
false
bart-base-finetuned-xsum This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0837 - Rouge1: 53.7269 - Rouge2: 42.5336 - Rougel: 52.0499 - Rougelsum: 52.6213 - Gen Len: 15.0789
04147563bc31da23cffaa399315270fd
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 1.1829 | 1.0 | 4578 | 1.0837 | 53.7269 | 42.5336 | 52.0499 | 52.6213 | 15.0789 |
1874f25cb6cb979010ccb74e59449c12
mit
['generated_from_trainer']
false
roberta-base-offensive-lm-tapt This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0014 - eval_runtime: 16.317 - eval_samples_per_second: 61.286 - eval_steps_per_second: 1.961 - epoch: 0.72 - step: 1100
b5b3f309b5685b6723e42cde4612df64
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 16 - mixed_precision_training: Native AMP
a10c719eecdfc94ec703be783d97dc24
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0624 - Precision: 0.9323 - Recall: 0.9485 - F1: 0.9404 - Accuracy: 0.9859
37763aebdd38c18a2a94b94310273502
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.087 | 1.0 | 1756 | 0.0696 | 0.9183 | 0.9406 | 0.9293 | 0.9832 | | 0.0378 | 2.0 | 3512 | 0.0564 | 0.9355 | 0.9502 | 0.9428 | 0.9863 | | 0.0194 | 3.0 | 5268 | 0.0624 | 0.9323 | 0.9485 | 0.9404 | 0.9859 |
ca0e1bde78d6d1a1ce47c0da0bdfeb33
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
Wav2Vec2-Base-960h + 4-gram This model is identical to [Facebook's Wav2Vec2-Large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self), but is augmented with an English 4-gram. The `4-gram.arpa.gz` of [Librispeech's official ngrams](https://www.openslr.org/11) is used.
c4cdda1e4b9ea316642de838c1a02c09
apache-2.0
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard']
false
Evaluation This code snippet shows how to evaluate **patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram** on LibriSpeech's "clean" and "other" test data. ```python from datasets import load_dataset from transformers import AutoModelForCTC, AutoProcessor import torch from jiwer import wer model_id = "patrickvonplaten/wav2vec2-large-960h-lv60-self-4-gram" librispeech_eval = load_dataset("librispeech_asr", "other", split="test") model = AutoModelForCTC.from_pretrained(model_id).to("cuda") processor = AutoProcessor.from_pretrained(model_id) def map_to_pred(batch): inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt") inputs = {k: v.to("cuda") for k,v in inputs.items()} with torch.no_grad(): logits = model(**inputs).logits transcription = processor.batch_decode(logits.cpu().numpy()).text[0] batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, remove_columns=["audio"]) print(wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 1.84 | 3.71 |
cc249c30fccfcd066400ef43397f9189
apache-2.0
['generated_from_trainer']
false
wav2vec_trained This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0337 - Wer: 0.1042
dce19474eed083b8bb6070744bd49fd8
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.3849 | 2.21 | 500 | 2.9148 | 1.0 | | 1.9118 | 4.42 | 1000 | 0.9627 | 0.5833 | | 0.7596 | 6.64 | 1500 | 0.8953 | 0.3542 | | 0.4602 | 8.85 | 2000 | 0.3325 | 0.2083 | | 0.331 | 11.06 | 2500 | 0.3084 | 0.2083 | | 0.2474 | 13.27 | 3000 | 0.0960 | 0.1667 | | 0.1934 | 15.49 | 3500 | 0.1276 | 0.125 | | 0.156 | 17.7 | 4000 | 0.0605 | 0.0833 | | 0.1244 | 19.91 | 4500 | 0.0831 | 0.1458 | | 0.1006 | 22.12 | 5000 | 0.0560 | 0.125 | | 0.0827 | 24.34 | 5500 | 0.0395 | 0.0833 | | 0.0723 | 26.55 | 6000 | 0.0573 | 0.0833 | | 0.0606 | 28.76 | 6500 | 0.0337 | 0.1042 |
563e4eaf6695e51fe84c8f4e01245477
apache-2.0
['generated_from_trainer']
false
mt5-large-qasrl-es-p1-role This model is a fine-tuned version of [google/mt5-large](https://huggingface.co/google/mt5-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4259
c7030a1bf0b741d078ab447e651e3263
apache-2.0
['translation']
false
opus-mt-guw-fr * source languages: guw * target languages: fr * OPUS readme: [guw-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-fr/opus-2020-01-09.eval.txt)
31812813ddfd6a812b084f6a6858a41c
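A hedged usage sketch for the OPUS-MT model above via the `transformers` translation pipeline; the hub id `Helsinki-NLP/opus-mt-guw-fr` is assumed from the usual `Helsinki-NLP/opus-mt-<src>-<tgt>` naming convention, and the input string is a placeholder.

```python
from transformers import pipeline

# Assumed hub id; the card only gives the OPUS release name "opus-mt-guw-fr".
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-guw-fr")

src_text = "..."  # a Gun (guw) source sentence goes here
print(translator(src_text)[0]["translation_text"])
```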
apache-2.0
['automatic-speech-recognition', 'fa']
false
exp_w2v2t_fa_unispeech-ml_s195 Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
4697a915279d9c857e3870678097538d
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
mnli). MNLI dataset consists of pairs of sentences, a *premise* and a *hypothesis*. The task is to predict the relation between the premise and the hypothesis, which can be: - `entailment`: hypothesis follows from the premise, - `contradiction`: hypothesis contradicts the premise, - `neutral`: hypothesis and premise are unrelated. We finetune the model as a Causal Language Model (CLM): given a sequence of tokens, the task is to predict the next token. To achieve this, we create a stylised prompt string, following the approach of [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). ```shell mnli hypothesis: {hypothesis} premise: {premise} target: {class_label} <|endoftext|> ``` For example: ``` mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target: contradiction <|endoftext|> ```
6c6a1bee5615cf8aa20d203aabd9ff7a
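A small illustrative helper (not part of the card) that builds the stylised prompt string described above from one MNLI example; the function name is hypothetical.

```python
def to_mnli_prompt(hypothesis, premise, class_label=None):
    """Format an MNLI pair into the CLM prompt layout used for fine-tuning."""
    prompt = f"mnli hypothesis: {hypothesis} premise: {premise} target:"
    if class_label is not None:
        # Training examples append the gold label followed by the EOS token.
        prompt += f" {class_label} <|endoftext|>"
    return prompt

print(to_mnli_prompt(
    "Your contributions were of no help with our students' education.",
    "Your contribution helped make it possible for us to provide our students with a quality education.",
    "contradiction",
))
```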
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
Model description GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters. <figure> | Hyperparameter | Value | |----------------------|------------| | \\(n_{parameters}\\) | 6053381344 | | \\(n_{layers}\\) | 28&ast; | | \\(d_{model}\\) | 4096 | | \\(d_{ff}\\) | 16384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2048 | | \\(n_{vocab}\\) | 50257/50400&dagger; (same tokenizer as GPT-2/3) | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py
81fb2fdb2c315b27c2421cc9176540b8
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
L223) | <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self attention block.</p> <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure> The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B), our starting point for finetuning, is trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
8af186b2da3d8bad16026e7f83c1986e
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
Fine-tuning and validation data Fine tuning is done using the `train` split of the GLUE MNLI dataset and the performance is measured using the [validation_mismatched](https://huggingface.co/datasets/glue
f08a5b3a4900768b003e97c40d2f5aec
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
mnli_mismatched) split. `validation_mismatched` means validation examples are not derived from the same sources as those in the training set and therefore not closely resembling any of the examples seen at training time. Data splits for the mnli dataset are the following |train |validation_matched|validation_mismatched| |-----:|-----------------:|--------------------:| |392702| 9815| 9832|
efd23165b4547f61d1c3bccda4bee474
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
Fine-tuning procedure Fine tuned on a Graphcore IPU-POD64 using `popxl`. Prompt sentences are tokenized and packed together to form 1024 token sequences, following [HF packing algorithm](https://github.com/huggingface/transformers/blob/v4.20.1/examples/pytorch/language-modeling/run_clm.py). No padding is used. The packing process works in groups of 1000 examples and discards any remainder from each group that isn't a whole sequence. For the 392,702 training examples this gives a total of 17,762 sequences per epoch. Since the model is trained to predict the next token, labels are simply the input sequence shifted by one token. Given the training format, no extra care is needed to account for different sequences: the model does not need to know which sentence a token belongs to.
b321d5873c2a0d317e34deb47031dbac
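A rough sketch of the packing step described above, in the spirit of the HF `run_clm` `group_texts` helper: concatenate tokenized prompts, cut them into fixed 1024-token sequences, and drop each group's remainder. The function name and the `datasets.map` call are illustrative, not the exact Graphcore code.

```python
SEQ_LEN = 1024  # packed sequence length used in the card

def pack_group(batch):
    # Concatenate all tokenized examples in this group of 1000.
    concatenated = {k: sum(batch[k], []) for k in batch.keys()}
    total = (len(concatenated["input_ids"]) // SEQ_LEN) * SEQ_LEN  # discard the remainder
    packed = {
        k: [v[i:i + SEQ_LEN] for i in range(0, total, SEQ_LEN)]
        for k, v in concatenated.items()
    }
    # Next-token prediction: labels are the inputs (the one-token shift happens inside the model).
    packed["labels"] = packed["input_ids"].copy()
    return packed

# tokenized_dataset.map(pack_group, batched=True, batch_size=1000)
```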
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
Hyperparameters: - optimiser: AdamW (beta1: 0.9, beta2: 0.999, eps: 1e-6, weight decay: 0.0, learning rate: 5e-6) - learning rate schedule: warmup schedule (min: 1e-7, max: 5e-6, warmup proportion: 0.005995) - batch size: 128 - training steps: 300. Each epoch consists of ceil(17,762/128) steps, hence 300 steps are approximately 2 epochs.
bb7f301dc951206ef6679fefcf78a0d0
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
Performance The resulting model matches SOTA performance with 82.5% accuracy. ``` Total number of examples 9832 Number with badly formed result 0 Number with incorrect result 1725 Number with correct result 8107 [82.5%] example 0 = {'prompt_text': "mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target:", 'class_label': 'contradiction'} result = {'generated_text': ' contradiction'} First 10 generated_text and expected class_label results: 0: 'contradiction' contradiction 1: 'contradiction' contradiction 2: 'entailment' entailment 3: 'contradiction' contradiction 4: 'entailment' entailment 5: 'entailment' entailment 6: 'contradiction' contradiction 7: 'contradiction' contradiction 8: 'entailment' neutral 9: 'contradiction' contradiction ```
8457042537dcb4adbf3198312f196a9c
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
How to use The model can be easily loaded using AutoModelForCausalLM. You can use the pipeline API for text generation. ```python from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-j-6B') hf_model = AutoModelForCausalLM.from_pretrained("Graphcore/gptj-mnli", pad_token_id=tokenizer.eos_token_id) generator = pipeline('text-generation', model=hf_model, tokenizer=tokenizer) prompt = "mnli hypothesis: Your contributions were of no help with our students' education. " \ "premise: Your contribution helped make it possible for us to provide our students with a quality education. target:" out = generator(prompt, return_full_text=False, max_new_tokens=5, top_k=1)
04afdc215d036ea261ce7153eed57dd4
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
[{'generated_text': ' contradiction'}] ``` You can create prompt-like inputs starting from GLUE MNLI dataset using functions provided in the `data_utils.py` script. ```python from datasets import load_dataset from data_utils import form_text, split_text dataset = load_dataset('glue', 'mnli', split='validation_mismatched') dataset = dataset.map( form_text, remove_columns=['hypothesis', 'premise','label', 'idx'])
9bed2be188b1613320a3175754b2cba9
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
dataset[0] {'text': "mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target: contradiction<|endoftext|>"} dataset = dataset.map(split_text, remove_columns=['text'])
83ff05971c85cfe33fd6e1186a01fd78
apache-2.0
['pytorch', 'causal-lm', 'text-classification', 'text-generation']
false
dataset[0] {'prompt_text': "mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target:",
1a558ca2e62011230540f395699b739c
mit
['dialogue', 'russian']
false
This is a version of the [cointegrated/rut5-small](https://huggingface.co/cointegrated/rut5-small) model fine-tuned on some Russian dialogue data. It is not very smart and creative, but it is small and fast, and can serve as a fallback response generator for some chatbot or can be fine-tuned to imitate the style of someone. The input of the model is the previous dialogue utterances separated by `'\n\n'`, and the output is the next utterance. The model can be used as follows: ```
02d2f0e301f7f2331d5befb4a9f0be62
mit
['dialogue', 'russian']
false
!pip install transformers sentencepiece import torch from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small-chitchat") model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small-chitchat") text = 'Привет! Расскажи, как твои дела?' inputs = tokenizer(text, return_tensors='pt') with torch.no_grad(): hypotheses = model.generate( **inputs, do_sample=True, top_p=0.5, num_return_sequences=3, repetition_penalty=2.5, max_length=32, ) for h in hypotheses: print(tokenizer.decode(h, skip_special_tokens=True))
3d1adb71349df8382e4955f8e9fb08cb
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper small Luxembourgish This model is a fine-tuned version of [bofenghuang/whisper-small-cv11-german-punct](https://huggingface.co/bofenghuang/whisper-small-cv11-german-punct) on the google/fleurs lb_lu dataset. It achieves the following results on the evaluation set: - Loss: 1.1857 - Wer: 39.4990
9901b181f563e61844db20a620a82a65
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP
ff4e0a36de27c6ec60a2dbd0bcf074ef
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.0618 | 38.46 | 500 | 1.0104 | 43.2968 | | 0.0055 | 76.92 | 1000 | 1.0684 | 40.1288 | | 0.0024 | 115.38 | 1500 | 1.1056 | 40.9447 | | 0.0014 | 153.85 | 2000 | 1.1280 | 39.7615 | | 0.0013 | 192.31 | 2500 | 1.1415 | 39.9857 | | 0.0008 | 230.77 | 3000 | 1.1573 | 39.7996 | | 0.0006 | 269.23 | 3500 | 1.1682 | 40.0095 | | 0.0006 | 307.69 | 4000 | 1.1769 | 39.7233 | | 0.0005 | 346.15 | 4500 | 1.1826 | 39.5134 | | 0.0004 | 384.62 | 5000 | 1.1857 | 39.4990 |
b6ca7bacc6f7b8c3515af8334791cbe4
apache-2.0
['generated_from_trainer', 'robust-speech-event', 'hf-asr-leaderboard']
false
wav2vec2-large-xls-r-300m-hindi This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7049 - Wer: 0.3200
0c17d81ed19d8170705bec88b0ecf523
cc-by-4.0
['spacy', 'token-classification']
false
NER Model for 'Ministerratsprotokolle' | Feature | Description | | --- | --- | | **Name** | `de_MRP_NER` | | **Version** | `0.0.0` | | **spaCy** | `>=3.1.0,<3.2.0` | | **Default Pipeline** | `tok2vec`, `ner` | | **Components** | `tok2vec`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | `cc-by` | | **Author** | [Peter Andorfer]() |
b315fa2127db300d4716e3b6746105ed
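A minimal sketch of loading a packaged spaCy pipeline like the one above, assuming the `de_MRP_NER` package has been installed (e.g. from its released wheel); the example sentence is illustrative.

```python
import spacy

# Load the installed pipeline package and run NER on a sample sentence.
nlp = spacy.load("de_MRP_NER")
doc = nlp("Der Ministerrat beriet die Vorlage am 12. März in Wien.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```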
creativeml-openrail-m
['text-to-image']
false
KEITH Dreambooth model trained by duja1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: k123eith (use that on your prompt) ![k123eith 0](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%281%29.jpg)![k123eith 1](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%282%29.jpg)![k123eith 2](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%283%29.jpg)![k123eith 3](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%284%29.jpg)![k123eith 4](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%285%29.jpg)![k123eith 5](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%286%29.jpg)![k123eith 6](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%287%29.jpg)![k123eith 7](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%288%29.jpg)![k123eith 8](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%289%29.jpg)![k123eith 9](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2810%29.jpg)![k123eith 10](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2811%29.jpg)![k123eith 11](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2812%29.jpg)![k123eith 12](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2813%29.jpg)![k123eith 13](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2814%29.jpg)![k123eith 14](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2815%29.jpg)![k123eith 15](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2816%29.jpg)![k123eith 16](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2817%29.jpg)![k123eith 17](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2818%29.jpg)![k123eith 18](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2819%29.jpg)![k123eith 19](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2820%29.jpg)![k123eith 20](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2821%29.jpg)![k123eith 21](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2822%29.jpg)![k123eith 22](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2823%29.jpg)![k123eith 23](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2824%29.jpg)![k123eith 24](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2825%29.jpg)![k123eith 25](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2826%29.jpg)![k123eith 26](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2827%29.jpg)![k123eith 27](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2828%29.jpg)![k123eith 28](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2829%29.jpg)![k123eith 29](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2830%29.jpg)![k123eith 30](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2831%29.jpg)![k123eith 
31](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2832%29.jpg)![k123eith 32](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2833%29.jpg)![k123eith 33](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2834%29.jpg)![k123eith 34](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2835%29.jpg)![k123eith 35](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2836%29.jpg)![k123eith 36](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2837%29.jpg)![k123eith 37](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2838%29.jpg)![k123eith 38](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2839%29.jpg)![k123eith 39](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2840%29.jpg)![k123eith 40](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2841%29.jpg)![k123eith 41](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2842%29.jpg)![k123eith 42](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2843%29.jpg)![k123eith 43](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2844%29.jpg)![k123eith 44](https://huggingface.co/duja1/keith/resolve/main/concept_images/k123eith_%2845%29.jpg)
06a2cf24d4b14b37550e69e01c7a98af
apache-2.0
['generated_from_trainer']
false
my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 5.8153
3d3dd6d8940342b3a97f87d029677904
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 3 | 5.8866 | | No log | 2.0 | 6 | 5.8367 | | No log | 3.0 | 9 | 5.8153 |
60813024891b3b9c7352e73cfe34eefb
cc-by-sa-4.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-Bengali Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) Bengali using the [Bengali ASR training data set containing ~196K utterances](https://www.openslr.org/53/). When using this model, make sure that your speech input is sampled at 16kHz.
a9dabc449d79f188f73aff8a257c6f68
cc-by-sa-4.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage The dataset must be downloaded from [this website](https://www.openslr.org/53/) and preprocessed accordingly. For example, 1250 test samples have been chosen. ```python import pandas as pd test_dataset = pd.read_csv('utt_spk_text.tsv', sep='\t', header=None)[60000:61250] test_dataset.columns = ["audio_path", "__", "label"] test_dataset = test_dataset.drop("__", axis=1) def add_file_path(text): path = "data/" + text[:2] + "/" + text + '.flac' return path test_dataset['audio_path'] = test_dataset['audio_path'].map(lambda x: add_file_path(x)) ``` The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor processor = Wav2Vec2Processor.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali") model = Wav2Vec2ForCTC.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali") resampler = torchaudio.transforms.Resample(48_000, 16_000)
a1376066ab5c677e3ba51436a0cfe982
cc-by-sa-4.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["audio_path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["label"][:2]) ```
8775f8542b5f9dbb1f161bd7e72d79d7
cc-by-sa-4.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated as follows on the Bengali test data of OpenSLR. ```python import torch import torchaudio from datasets import load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali") model = Wav2Vec2ForCTC.from_pretrained("tanmoyio/wav2vec2-large-xlsr-bengali") model.to("cuda") resampler = torchaudio.transforms.Resample(48_000, 16_000)
3bd34e25ed9b1f95dceac4007b12e8ca
cc-by-sa-4.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["label"]).lower() speech_array, sampling_rate = torchaudio.load(batch["audio_path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn)
a81ba49d9c6a446fdec7a3df27b19546
cc-by-sa-4.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 88.58 %
e38d45e7a3226927b2d09a526f6324ea
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'pos']
false
Model description The **roberta-base-bne-capitel-pos** is a Part-of-speech-tagging (POS) model for the Spanish language fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained using the largest Spanish corpus known to date, with a total of 570GB of clean and deduplicated text, processed for this work, compiled from the web crawlings performed by the [National Library of Spain (Biblioteca Nacional de España)](http://www.bne.es/en/Inicio/index.html) from 2009 to 2019.
cb006e3b5f6aab143d4dd5a30ce99a18
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'pos']
false
Intended uses and limitations The **roberta-base-bne-capitel-pos** model can be used for Part-of-Speech (POS) tagging of Spanish text. The model is limited by its training dataset and may not generalize well for all use cases.
b7bacc624bc36d137c8b3268344da7f9
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'pos']
false
How to use Here is how to use this model: ```python from transformers import pipeline from pprint import pprint nlp = pipeline("token-classification", model="PlanTL-GOB-ES/roberta-base-bne-capitel-pos") example = "El alcalde de Vigo, Abel Caballero, ha comenzado a colocar las luces de Navidad en agosto." pos_results = nlp(example) pprint(pos_results) ```
80e3e4f808ce840adcf0fd3bc52ff798
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'pos']
false
Training procedure The model was trained with a batch size of 32 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
172cdaa40aa5e646f735927a539308b9
apache-2.0
['national library of spain', 'spanish', 'bne', 'capitel', 'pos']
false
Evaluation results We evaluated the **roberta-base-bne-capitel-pos** on the CAPITEL-POS test set against standard multilingual and monolingual baselines: | Model | CAPITEL-POS (F1) | | ------------|:----| | roberta-large-bne-capitel-pos | **98.56** | | roberta-base-bne-capitel-pos | 98.46 | | BETO | 98.36 | | mBERT | 98.39 | | BERTIN | 98.47 | | ELECTRA | 98.16 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-spanish).
a9e06a29f1a7643b82a19eb83379e133
apache-2.0
['translation', 'generated_from_trainer']
false
opus-finetuned-de-bar This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-fr](https://huggingface.co/Helsinki-NLP/opus-mt-de-fr) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1192 - Bleu: 11.2681 - Chrf: 62.8905 - Ter: 49.5446
ea44598b6377a6cc89f4d67ccbd17404
apache-2.0
['translation', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 6 - mixed_precision_training: Native AMP
9356fcf1d53db70da0091799f9dd6c86
apache-2.0
['generated_from_trainer']
false
swin-tiny-patch4-window7-224-finetuned-flower-classifier This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2362 - Accuracy: 0.9339
329774a620bf06646eeb59dcc1ad8d1c
apache-2.0
['generated_from_trainer']
false
Model description This model was created by importing the flower-photo dataset from Kaggle (https://www.kaggle.com/datasets/l3llff/flowers) into Google Colab. I then followed the image classification tutorial here: https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb, obtaining the following notebook: https://colab.research.google.com/drive/1bapCEz4vkDd16Ax9jb5oHGa85PeuyZVW?usp=sharing The possible flower classes are: 'common_daisy', 'rose', 'california_poppy', 'iris', 'astilbe', 'carnation', 'tulip', 'sunflower', 'coreopsis', 'magnolia', 'water_lily', 'bellflower', 'daffodil', 'calendula', 'dandelion', 'black_eyed_susan'
fafbbcec7505ed73ef9705c49e90d843
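A hedged inference sketch for the flower classifier described above; the hub repository id is a placeholder (only the local checkpoint name is given in the card) and the image path is illustrative.

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual hub path of the fine-tuned checkpoint.
classifier = pipeline(
    "image-classification",
    model="<user>/swin-tiny-patch4-window7-224-finetuned-flower-classifier",
)
print(classifier("path/to/flower.jpg")[:3])  # top predictions among the 16 flower classes
```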
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.365 | 0.99 | 110 | 0.2362 | 0.9339 |
deb7cf9f7f7a6f7d0cb6b0b0a0035314
apache-2.0
['image-classification', 'timm']
false
Model card for convnext_tiny.in12k_ft_in1k A ConvNeXt image classification model. Pretrained in `timm` on ImageNet-12k (an 11,821-class subset of the full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman. ImageNet-12k training was done on TPUs thanks to the support of the [TRC](https://sites.research.google/trc/about/) program. Fine-tuning was performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances.
7a210979914243935aa85042e6b335a3
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 28.6 - GMACs: 4.5 - Activations (M): 13.4 - Image size: 224 x 224 - **Papers:** - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - **Original:** https://github.com/rwightman/pytorch-image-models - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-12k
1588de71818c0d7c946781f2853c64c9
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('convnext_tiny.in12k_ft_in1k', pretrained=True)
model = model.eval()
```
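The snippet above appears to be cut off before the preprocessing and forward pass. A plausible continuation, following the usual `timm` inference flow (an assumption, not this card's verbatim code; requires a recent `timm` with `resolve_model_data_config`), would be:

```python
import torch

# Model-specific transforms (resize, crop, normalization) derived from the pretrained config.
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into a batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
print(top5_class_indices, top5_probabilities)
```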
a626b728fe97d5028b9898729eed648e
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_tiny.in12k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
```
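This snippet also seems truncated; with `features_only=True` the model returns a list of feature maps, so a plausible continuation (an assumption, mirroring the classification example above) is:

```python
# Model-specific transforms derived from the pretrained config.
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # list of feature maps, one per stage

for feature_map in output:
    print(feature_map.shape)  # e.g. a (1, C, H, W) tensor for each stage
```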
e2b1c153290d907a3f2f1fa098ff40a6
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'convnext_tiny.in12k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove the classifier head so the model returns pooled features
)
```
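The original snippet breaks off mid-call; with `num_classes=0` the model returns pooled features instead of logits, so a plausible continuation (an assumption, mirroring the two examples above) is:

```python
model = model.eval()

# Model-specific transforms derived from the pretrained config.
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

embedding = model(transforms(img).unsqueeze(0))  # pooled (1, num_features) embedding
print(embedding.shape)
```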
622d2817d4d37d0120c7fb08326375f1
apache-2.0
['generated_from_trainer']
false
mobilebert_add_GLUE_Experiment_logit_kd_rte_256

This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset. It achieves the following results on the evaluation set:
- Loss: 0.3914
- Accuracy: 0.5271
3c507818e6a70e09352758ef941e33da
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4089 | 1.0 | 20 | 0.3935 | 0.5271 |
| 0.4082 | 2.0 | 40 | 0.3914 | 0.5271 |
| 0.4076 | 3.0 | 60 | 0.3919 | 0.5271 |
| 0.4075 | 4.0 | 80 | 0.3927 | 0.5271 |
| 0.4074 | 5.0 | 100 | 0.3926 | 0.5271 |
| 0.407 | 6.0 | 120 | 0.3921 | 0.5271 |
| 0.4054 | 7.0 | 140 | 0.3944 | 0.5235 |
6e8b9b08f065578f1e6ca6d5aa9f457e
apache-2.0
['deep-narrow']
false
T5-Efficient-BASE-NL40 (Deep-Narrow version)

T5-Efficient-BASE-NL40 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper:

> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
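Since depth is the defining variation of this checkpoint, one quick way to confirm it is to read the model config. A minimal sketch follows; the repo id `google/t5-efficient-base-nl40` is assumed from the naming convention of the released T5-Efficient checkpoints.

```python
from transformers import T5Config

# Repo id assumed; adjust if the checkpoint is hosted elsewhere.
config = T5Config.from_pretrained("google/t5-efficient-base-nl40")
print(config.num_layers, config.num_decoder_layers)  # expected: 40 encoder and 40 decoder blocks
```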
b08c95c156b7e51197e65f3911f4e50e
apache-2.0
['deep-narrow']
false
Details model architecture

This model checkpoint - **t5-efficient-base-nl40** - is of model type **Base** with the following variations:
- **nl** is **40**

It has **685.53** million parameters and thus requires *ca.* **2742.11 MB** of memory in full precision (*fp32*) or **1371.05 MB** of memory in half precision (*fp16* or *bf16*).

A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh |
8a856f4c5d75f0b339c24bbf2773c3d1
mit
[]
false
Model description

This model is a distilled version of the [Indonesian BERT base model](https://huggingface.co/cahya/bert-base-indonesian-1.5G). This model is uncased. It is one of several language models that have been pre-trained with Indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc.) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers).
8a4e943aff8f3e607a734f26354b43e4
mit
[]
false
How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='cahya/distilbert-base-indonesian')
>>> unmasker("Ayahku sedang bekerja di sawah untuk [MASK] padi")

[
  {
    "sequence": "[CLS] ayahku sedang bekerja di sawah untuk menanam padi [SEP]",
    "score": 0.6853187084197998,
    "token": 12712,
    "token_str": "menanam"
  },
  {
    "sequence": "[CLS] ayahku sedang bekerja di sawah untuk bertani padi [SEP]",
    "score": 0.03739545866847038,
    "token": 15484,
    "token_str": "bertani"
  },
  {
    "sequence": "[CLS] ayahku sedang bekerja di sawah untuk memetik padi [SEP]",
    "score": 0.02742469497025013,
    "token": 30338,
    "token_str": "memetik"
  },
  {
    "sequence": "[CLS] ayahku sedang bekerja di sawah untuk penggilingan padi [SEP]",
    "score": 0.02214187942445278,
    "token": 28252,
    "token_str": "penggilingan"
  },
  {
    "sequence": "[CLS] ayahku sedang bekerja di sawah untuk tanam padi [SEP]",
    "score": 0.0185895636677742,
    "token": 11308,
    "token_str": "tanam"
  }
]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import DistilBertTokenizer, DistilBertModel

model_name = 'cahya/distilbert-base-indonesian'
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
model = DistilBertModel.from_pretrained(model_name)

text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in Tensorflow:

```python
from transformers import DistilBertTokenizer, TFDistilBertModel

model_name = 'cahya/distilbert-base-indonesian'
tokenizer = DistilBertTokenizer.from_pretrained(model_name)
model = TFDistilBertModel.from_pretrained(model_name)

text = "Silakan diganti dengan text apa saja."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
d1e8030b8709b06740a1767fbea37851