license: stringlengths 2–30
tags: stringlengths 2–513
is_nc: bool (1 class)
readme_section: stringlengths 201–597k
hash: stringlengths 32–32
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.3466
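A minimal usage sketch for a SQuAD v2 fine-tune like this one, via the `question-answering` pipeline. The repo id below is a placeholder (the card does not state the full Hub id), and the question/context are illustrative:

```python
from transformers import pipeline

# Placeholder repo id — substitute the uploader's actual Hub id for this fine-tune.
qa = pipeline("question-answering", model="distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the squad_v2 dataset.",
    handle_impossible_answer=True,  # squad_v2 contains unanswerable questions
)
print(result["answer"], result["score"])
```

Because the model was trained on squad_v2, passing `handle_impossible_answer=True` lets the pipeline return an empty answer when the context does not contain one.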
1ad7220be72bb75d68406da9c1a59519
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2739        | 1.0   | 4118  | 1.2801          |
| 1.0001        | 2.0   | 8236  | 1.2823          |
| 0.8484        | 3.0   | 12354 | 1.3466          |
fb65813cbabee334cf9dddb91dfa8e9b
apache-2.0
['automatic-speech-recognition', 'ja']
false
exp_w2v2t_ja_vp-fr_s368 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
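A short transcription sketch using the `automatic-speech-recognition` pipeline. The namespace `jonatasgrosman/` is assumed from the HuggingSound author linked above, and the audio file name is hypothetical:

```python
from transformers import pipeline

# Repo id assumed from the HuggingSound author's namespace; adjust if it differs.
asr = pipeline("automatic-speech-recognition", model="jonatasgrosman/exp_w2v2t_ja_vp-fr_s368")

# For file-path input the pipeline decodes and resamples the audio for you;
# raw numpy arrays must already be sampled at 16 kHz, as the card requires.
transcription = asr("speech_ja.wav")  # hypothetical local file
print(transcription["text"])
```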
1311c09b86fd36da7c182a8d9326b20d
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the wikitext-2 dataset. It achieves the following results on the evaluation set: - Loss: 3.6429
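A minimal generation sketch for this fine-tune. The repo id is a placeholder (the card does not state the full Hub id), and sampled output will vary run to run:

```python
from transformers import pipeline

# Placeholder repo id — substitute the uploader's actual Hub id for this fine-tune.
generator = pipeline("text-generation", model="distilgpt2-finetuned-wikitext2")

out = generator(
    "The history of natural language processing",
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
)
print(out[0]["generated_text"])
```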
babb74fd7a7ee566d6c7b61a09f3b8fe
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7607        | 1.0   | 2334 | 3.6664          |
| 3.6527        | 2.0   | 4668 | 3.6473          |
| 3.6015        | 3.0   | 7002 | 3.6429          |
418be9609bfb395b9863be606ced0451
other
['vision', 'image-segmentation']
false
SegFormer (b4-sized) model fine-tuned on ADE20k SegFormer model fine-tuned on ADE20k at resolution 512x512. It was introduced in the paper [SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203) by Xie et al. and first released in [this repository](https://github.com/NVlabs/SegFormer). Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
d98ced9ddf2d1ddc1548175432e0e7a0
other
['vision', 'image-segmentation']
false
How to use Here is how to use this model to segment an image from the COCO 2017 dataset into the 150 ADE20k classes:

```python
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation
from PIL import Image
import requests
import torch

feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b4-finetuned-ade-512-512")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)

# upsample the logits to the original image size and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled.argmax(dim=1)[0]
```
6a1e561ffd3db5e247f6856e58c3d283
apache-2.0
['translation']
false
afr-spa

* source group: Afrikaans
* target group: Spanish
* OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): spa
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt)
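A minimal translation sketch via the `translation` pipeline. The Hub id `Helsinki-NLP/opus-mt-af-es` is assumed from the Afrikaans→Spanish pair this card describes, and the sample sentence is illustrative:

```python
from transformers import pipeline

# Hub id assumed from the af->es OPUS-MT pair described in the card.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-af-es")

spanish = translator("Ek hou van boeke.")[0]["translation_text"]
print(spanish)
```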
15924fd0bee18b53d3e0c15354c69923
apache-2.0
['translation']
false
System Info:
- hf_name: afr-spa
- source_languages: afr
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'es']
- src_constituents: {'afr'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: spa
- short_pair: af-es
- chrF2_score: 0.68
- bleu: 49.9
- brevity_penalty: 1.0
- ref_len: 2783.0
- src_name: Afrikaans
- tgt_name: Spanish
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: es
- prefer_old: False
- long_pair: afr-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
9153ebf861c5b3d203427d280a4ae7b7
apache-2.0
['automatic-speech-recognition', 'pt']
false
exp_w2v2t_pt_vp-sv_s894 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
f49df6aaef3a0ff7cd82a7df8cf84465
apache-2.0
['generated_from_trainer']
false
Tagged_Uni_100v2_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni100v2_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.4048 - Precision: 0.2783 - Recall: 0.1589 - F1: 0.2023 - Accuracy: 0.8412
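A minimal sketch for running this NER fine-tune with the `token-classification` pipeline. The repo id is a placeholder (the card does not state the uploader's Hub namespace), and the sample sentence is illustrative:

```python
from transformers import pipeline

# Placeholder repo id — substitute the uploader's actual Hub id.
ner = pipeline(
    "token-classification",
    model="Tagged_Uni_100v2_NER_Model_3Epochs_AUGMENTED",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

entities = ner("George Washington lived in Virginia.")
for entity in entities:
    print(entity["entity_group"], entity["word"], entity["score"])
```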
7d5dd257895169fbd58f558ab87d0348
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 39   | 0.4802          | 0.3667    | 0.0784 | 0.1292 | 0.8125   |
| No log        | 2.0   | 78   | 0.4028          | 0.2745    | 0.1540 | 0.1973 | 0.8412   |
| No log        | 3.0   | 117  | 0.4048          | 0.2783    | 0.1589 | 0.2023 | 0.8412   |
3b50037cb7e5a8cef4cbd2c652116715
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
DreamBooth model for the panda concept trained by zhangshengdong. This is a Stable Diffusion model fine-tuned on the panda concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of panda animal** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
8c3bb2923b223848ed0bd4b71d12481f
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
Description This is a Stable Diffusion model fine-tuned on `animal-panda` images for the animal theme of the Hugging Face DreamBooth Hackathon, from the HF CN Community in collaboration with HeyWhale.
7958777fd4847bb40b18d74b3e3d8eb1
apache-2.0
['text2text-generation']
false
Model Details - **Model Description:** I couldn't find an existing model or algorithm for this task, so I built one myself. <br /> A BartForConditionalGeneration fine-tuning model for converting Korean text to numbers. <br /> - Dataset: [Korea aihub](https://aihub.or.kr/aihubdata/data/list.do?currMenu=115&topMenu=100&srchDataRealmCode=REALM002&srchDataTy=DATA004) <br /> I cannot release any of the fine-tuning data for private reasons. <br /> - Korea aihub data is available to Koreans only. <br /> Strictly speaking, the model was trained to translate orthographic transcription into phonetic transcription (following the ETRI transcription standard). <br /> - Since ten million can be written as either "1000만" or "10000000", the training data is crucial for this model, and results may differ depending on its distribution. <br /> - **Spacing between a numeral modifier and its bound noun can change the output markedly (쉰살, 쉰 살 -> 쉰살, 50살).** https://eretz2.tistory.com/34 <br /> Rather than committing to one spacing convention, we left the behavior to the training-data distribution (which is more common, 쉰 살 or 쉰살?). - **Developed by:** Yoo SungHyun(https://github.com/YooSungHyun) - **Language(s):** Korean - **License:** apache-2.0 - **Parent Model:** See the [kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) for more information about the pre-trained base model.
b054566017e12fbb4b9f7767271c4184
apache-2.0
['text2text-generation']
false
Evaluation Evaluation uses `evaluate-metric/bleu` and `evaluate-metric/rouge` from the Hugging Face `evaluate` library. <br /> [Training wandb URL](https://wandb.ai/bart_tadev/BartForConditionalGeneration/runs/14hyusvf?workspace=user-bart_tadev)
e7f1d584c3050f81cc55a3a44589e78a
apache-2.0
['text2text-generation']
false
How to Get Started With the Model

```python
from transformers.pipelines import Text2TextGenerationPipeline
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

texts = ["그러게 누가 여섯시까지 술을 마시래?"]
tokenizer = AutoTokenizer.from_pretrained("lIlBrother/ko-TextNumbarT")
model = AutoModelForSeq2SeqLM.from_pretrained("lIlBrother/ko-TextNumbarT")
seq2seqlm_pipeline = Text2TextGenerationPipeline(model=model, tokenizer=tokenizer)
kwargs = {
    "min_length": 0,
    "max_length": 1206,
    "num_beams": 100,
    "do_sample": False,
    "num_beam_groups": 1,
}
pred = seq2seqlm_pipeline(texts, **kwargs)
print(pred)
```
bce7647917e3f29cc7124274267bdfc2
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Tiny Pashto This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the google/fleurs ps_af dataset. It achieves the following results on the evaluation set: - Loss: 0.8714 - Wer: 60.0560
ed4f3348171d4ea675cef41ca007de52
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1300
- mixed_precision_training: Native AMP
da2812f8dce7aa61016a43cd4f7d263f
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.9153        | 2.5   | 100  | 1.0240          | 68.9864 |
| 0.6865        | 5.0   | 200  | 0.8968          | 61.7660 |
| 0.5474        | 7.5   | 300  | 0.8744          | 60.5554 |
| 0.4646        | 10.0  | 400  | 0.8710          | 60.0560 |
| 0.4557        | 12.5  | 500  | 0.8732          | 59.4658 |
| 0.3882        | 15.0  | 600  | 0.8819          | 59.0648 |
| 0.3346        | 17.5  | 700  | 0.9032          | 59.4809 |
| 0.2947        | 20.0  | 800  | 0.9144          | 59.7685 |
| 0.2724        | 22.5  | 900  | 0.9289          | 58.9815 |
| 0.2785        | 25.0  | 1000 | 0.9339          | 59.2010 |
| 0.2454        | 27.5  | 1100 | 0.9439          | 59.1934 |
| 0.2297        | 30.0  | 1200 | 0.9485          | 59.0421 |
| 0.2383        | 33.33 | 1300 | 0.9529          | 59.0799 |
805a0c12a2611f630591af37de0f43c0
apache-2.0
['generated_from_trainer']
false
Tagged_One_500v3_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v3_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.2659 - Precision: 0.6975 - Recall: 0.6782 - F1: 0.6877 - Accuracy: 0.9245
9926d359c999d5ca052473483d156ea9
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 175  | 0.2990          | 0.5405    | 0.4600 | 0.4970 | 0.9007   |
| No log        | 2.0   | 350  | 0.2789          | 0.6837    | 0.6236 | 0.6523 | 0.9157   |
| 0.1081        | 3.0   | 525  | 0.2659          | 0.6975    | 0.6782 | 0.6877 | 0.9245   |
6c037c7a9e7cfb558f289a23362143d6
apache-2.0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_1500k']
false
MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1500k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
6000b75b032e6e52b7614915c5e3bcdd
apache-2.0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_1500k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on Tensorflow:

```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1500k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

PyTorch version:

```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1500k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
789fc6c55c17a0df50fbf72339004401
apache-2.0
['pytorch', 'causal-lm']
false
- [✨Version v1✨](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1): August 25th, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1-half)*, at step 1M)
- [Version v1beta3](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3): July 22nd, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3-half)*, at step 850k)
- [Version v1beta2](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2): June 6th, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2-half)*, at step 616k)
- [Version v1beta1](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta1-half): April 28th, 2022 (*half-precision weights only*, at step 408k)
- <details><summary>All checkpoints</summary>

  - [Checkpoint 130k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/6c116e533a00db027bf0a2e0b5e06d3e0772e2d0)
  - [Checkpoint 275k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/20f424ebcc7c500d5328ed45a8c911a2a75583f1)
  - [Checkpoint 408k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/c51db24abee958efe83e52fddf6d19e5f065b818)
  - [Checkpoint 616k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/abafe00bfb03330e72a67ed9fc0958c7399f2181)
  - [Checkpoint 850k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/59d5064b65043f2ff2b2549e4a076854eec22b2e)
  - [Checkpoint 1M](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/153dab8ad6bc0bfe84346a58834992d46e83662a)

  </details>
0f46c0de5d580a1c5fc78f7c4001aa66
apache-2.0
['pytorch', 'causal-lm']
false
Model Description BERTIN-GPT-J-6B is a Spanish finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.

<figure>

| Hyperparameter       | Value      |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\)     | 28&ast;    |
| \\(d_{model}\\)      | 4096       |
| \\(d_{ff}\\)         | 16384      |
| \\(n_{heads}\\)      | 16         |
| \\(d_{head}\\)       | 256        |
| \\(n_{ctx}\\)        | 2048       |
| \\(n_{vocab}\\)      | 50257/50400&dagger; (same tokenizer as GPT-2/3) |
| Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions      | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py
0c59038a3f829b238683431341656c6b
apache-2.0
['pytorch', 'causal-lm']
false
Training procedure This model was finetuned for ~65 billion tokens (65,536,000,000) over 1,000,000 steps on a single TPU v3-8 VM. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. Training took roughly 6 months.
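The token and step counts above imply the per-step batch. A quick sanity check, assuming the 2048-token context length reported for GPT-J:

```python
total_tokens = 65_536_000_000  # ~65 billion tokens, as stated
steps = 1_000_000

# Tokens consumed per optimizer step.
tokens_per_step = total_tokens // steps
print(tokens_per_step)  # 65536

# With GPT-J's 2048-token context, that is 32 sequences per step.
context_length = 2048
sequences_per_step = tokens_per_step // context_length
print(sequences_per_step)  # 32
```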
c9ccdf168bc4d14fc28050d7008656f2
apache-2.0
['pytorch', 'causal-lm']
false
Intended Use and Limitations BERTIN-GPT-J-6B learns an inner representation of the Spanish language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for: generating text from a prompt.
9409b027604dd74cab08f601ca7c5ca3
apache-2.0
['pytorch', 'causal-lm']
false
How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("bertin-project/bertin-gpt-j-6B")
```
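The loading snippet above stops short of generation; a minimal sketch of generating Spanish text with the same repo id (the prompt is illustrative and sampled output will vary):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("bertin-project/bertin-gpt-j-6B")

prompt = "El sentido de la vida es"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```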
4f985c01643b6f491c70aed654c9b6f6
apache-2.0
['pytorch', 'causal-lm']
false
Limitations and Biases As with the original GPT-J model, the core functionality of BERTIN-GPT-J-6B is to take a string of text and predict the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting BERTIN-GPT-J-6B, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon BERTIN-GPT-J-6B to produce factually accurate output. The original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending, although some preliminary remarks are given in the [BERTIN paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/download/6403/3818). As with all language models, it is hard to predict in advance how BERTIN-GPT-J-6B will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
194a808c414ffdddbb50f93e8b3baa8a
apache-2.0
['pytorch', 'causal-lm']
false
BibTeX entry To cite this model:

```bibtex
@inproceedings{BERTIN-GPT,
    author = {Javier De la Rosa and Andres Fernández},
    editor = {Manuel Montes-y-Gómez and Julio Gonzalo and Francisco Rangel and Marco Casavantes and Miguel Ángel Álvarez-Carmona and Gemma Bel-Enguix and Hugo Jair Escalante and Larissa Freitas and Antonio Miranda-Escalada and Francisco Rodríguez-Sánchez and Aiala Rosá and Marco Antonio Sobrevilla-Cabezudo and Mariona Taulé and Rafael Valencia-García},
    title = {Zero-shot Reading Comprehension and Reasoning for Spanish with {BERTIN} {GPT-J-6B}},
    date = {2022-09},
    booktitle = {Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2022)},
    booktitleaddon = {Co-located with the Conference of the Spanish Society for Natural Language Processing (SEPLN 2022)},
    eventdate = {2022-09-20/2022-09-25},
    venue = {A Coru\~{n}a, Spain},
    publisher = {CEUR Workshop Proceedings},
}
```

To cite the data used to train it:

```bibtex
@article{BERTIN,
    author = {Javier De la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury},
    title = {{BERTIN}: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling},
    journal = {Procesamiento del Lenguaje Natural},
    volume = {68},
    number = {0},
    year = {2022},
    keywords = {},
    abstract = {The pre-training of large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl might contain enough noise to make this pretraining sub-optimal. In this work, we experiment with different sampling methods from the Spanish version of mC4, and present a novel data-centric technique which we name perplexity sampling that enables the pre-training of language models in roughly half the amount of steps and using one fifth of the data. The resulting models are comparable to the current state-of-the-art, and even achieve better results for certain tasks. Our work is proof of the versatility of Transformers, and paves the way for small teams to train their models on a limited budget.},
    issn = {1989-7553},
    url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6403},
    pages = {13--23}
}
```

If you use this model, we would love to hear about it! Reach out on twitter, GitHub, Discord, or shoot us an email.
fa4cd19ccaf3e5bed9d6b536c6cc4a65
apache-2.0
['pytorch', 'causal-lm']
false
Team - Javier de la Rosa ([versae](https://huggingface.co/versae)) - Eduardo González ([edugp](https://huggingface.co/edugp)) - Paulo Villegas ([paulo](https://huggingface.co/paulo)) - Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps)) - Manu Romero ([mrm8488](https://huggingface.co/)) - María Grandury ([mariagrandury](https://huggingface.co/))
c7693a7a1697982fdabb4dced3c8be81
apache-2.0
['pytorch', 'causal-lm']
false
Acknowledgements This project would not have been possible without compute generously provided by the National Library of Norway and Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha. And especially, thanks to [Stella Biderman](https://www.stellabiderman.com) for her general openness, and [Ben Wang](https://github.com/kingoflolz/mesh-transformer-jax) for the main codebase.
b08f83264703b4cc1ea678cd95a18aff
apache-2.0
['pytorch', 'causal-lm']
false
Disclaimer The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models be liable for any results arising from the use made by third parties of these models.
7e1cf533ff983cd471804e0e00c240b5
apache-2.0
['generated_from_trainer']
false
bigbird-base-health-fact This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on the health_fact dataset. It achieves the following results on the VALIDATION set:
- Overall Accuracy: 0.8228995057660626
- Macro F1: 0.6979224830442152
- False Accuracy: 0.8289473684210527
- Mixture Accuracy: 0.47560975609756095
- True Accuracy: 0.9332273449920508
- Unproven Accuracy: 0.4634146341463415

It achieves the following results on the TEST set:
- Overall Accuracy: 0.7948094079480941
- Macro F1: 0.6694031411935434
- Mixture Accuracy: 0.4975124378109453
- False Accuracy: 0.8092783505154639
- True Accuracy: 0.9148580968280468
- Unproven Accuracy: 0.4
dae540c2ee284806263e8ec5e1184c10
apache-2.0
['generated_from_trainer']
false
Model description Here is how you can use the model: ```python import torch from transformers import pipeline claim = "A mother revealed to her child in a letter after her death that she had just one eye because she had donated the other to him." text = "In April 2005, we spotted a tearjerker on the Internet about a mother who gave up one of her eyes to a son who had lost one of his at an early age. By February 2007 the item was circulating in e-mail in the following shortened version: My mom only had one eye. I hated her… She was such an embarrassment. She cooked for students and teachers to support the family. There was this one day during elementary school where my mom came to say hello to me. I was so embarrassed. How could she do this to me? I ignored her, threw her a hateful look and ran out. The next day at school one of my classmates said, “EEEE, your mom only has one eye!” I wanted to bury myself. I also wanted my mom to just disappear. I confronted her that day and said, “If you’re only gonna make me a laughing stock, why don’t you just die?” My mom did not respond… I didn’t even stop to think for a second about what I had said, because I was full of anger. I was oblivious to her feelings. I wanted out of that house, and have nothing to do with her. So I studied real hard, got a chance to go abroad to study. Then, I got married. I bought a house of my own. I had kids of my own. I was happy with my life, my kids and the comforts. Then one day, my Mother came to visit me. She hadn’t seen me in years and she didn’t even meet her grandchildren. When she stood by the door, my children laughed at her, and I yelled at her for coming over uninvited. I screamed at her, “How dare you come to my house and scare my children! GET OUT OF HERE! NOW!! !” And to this, my mother quietly answered, “Oh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. One day, a letter regarding a school reunion came to my house. 
So I lied to my wife that I was going on a business trip. After the reunion, I went to the old shack just out of curiosity. My neighbors said that she died. I did not shed a single tear. They handed me a letter that she had wanted me to have. My dearest son, I think of you all the time. I’m sorry that I came to your house and scared your children. I was so glad when I heard you were coming for the reunion. But I may not be able to even get out of bed to see you. I’m sorry that I was a constant embarrassment to you when you were growing up. You see……..when you were very little, you got into an accident, and lost your eye. As a mother, I couldn’t stand watching you having to grow up with one eye. So I gave you mine. I was so proud of my son who was seeing a whole new world for me, in my place, with that eye. With all my love to you, Your mother. In its earlier incarnation, the story identified by implication its location as Korea through statements made by both the mother and the son (the son’s “I left my mother and came to Seoul” and the mother’s “I won’t visit Seoul anymore”). It also supplied a reason for the son’s behavior when his mother arrived unexpectedly to visit him (“My little girl ran away, scared of my mom’s eye” and “I screamed at her, ‘How dare you come to my house and scare my daughter!'”). A further twist was provided in the original: rather than gaining the news of his mother’s death from neighbors (who hand him her letter), the son instead discovered the woman who bore him lying dead on the floor of what used to be his childhood home, her missive to him clutched in her lifeless hand: Give your parents roses while they are alive, not deadMY mom only had one eye. I hated her … she was such an embarrassment. My mom ran a small shop at a flea market. She collected little weeds and such to sell … anything for the money we needed she was such an embarrassment. There was this one day during elementary school … It was field day, and my mom came. 
I was so embarrassed. How could she do this to me? I threw her a hateful look and ran out. The next day at school … “your mom only has one eye?!? !” … And they taunted me. I wished that my mom would just disappear from this world so I said to my mom, “mom … Why don’t you have the other eye?! If you’re only going to make me a laughingstock, why don’t you just die?!! !” my mom did not respond … I guess I felt a little bad, but at the same time, it felt good to think that I had said what I’d wanted to say all this time… maybe it was because my mom hadn’t punished me, but I didn’t think that I had hurt her feelings very badly. That night… I woke up, and went to the kitchen to get a glass of water. My mom was crying there, so quietly, as if she was afraid that she might wake me. I took a look at her, and then turned away. Because of the thing I had said to her earlier, there was something pinching at me in the corner of my heart. Even so, I hated my mother who was crying out of her one eye. So I told myself that I would grow up and become successful. Because I hated my one-eyed mom and our desperate poverty… then I studied real hard. I left my mother and came to Seoul and studied, and got accepted in the Seoul University with all the confidence I had. Then, I got married. I bought a house of my own. Then I had kids, too… now I’m living happily as a successful man. I like it here because it’s a place that doesn’t remind me of my mom. This happiness was getting bigger and bigger, when… what?! Who’s this…it was my mother… still with her one eye. It felt as if the whole sky was falling apart on me. My little girl ran away, scared of my mom’s eye. And I asked her, “who are you? !” “I don’t know you!! !” as if trying to make that real. I screamed at her, “How dare you come to my house and scare my daughter!” “GET OUT OF HERE! NOW!! !” and to this, my mother quietly answered, “oh, I’m so sorry. I may have gotten the wrong address,” and she disappeared out of sight. 
Thank goodness… she doesn’t recognize me… I was quite relieved. I told myself that I wasn’t going to care, or think about this for the rest of my life. Then a wave of relief came upon me… One day, a letter regarding a school reunion came to my house. So, lying to my wife that I was going on a business trip, I went. After the reunion, I went down to the old shack, that I used to call a house… just out of curiosity there, I found my mother fallen on the cold ground. But I did not shed a single tear. She had a piece of paper in her hand…. it was a letter to me. My son… I think my life has been long enough now… And… I won’t visit Seoul anymore… but would it be too much to ask if I wanted you to come visit me once in a while? I miss you so much… and I was so glad when I heard you were coming for the reunion. But I decided not to go to the school. …for you… and I’m sorry that I only have one eye, and I was an embarrassment for you. You see, when you were very little, you got into an accident, and lost your eye. as a mom, I couldn’t stand watching you having to grow up with only one eye… so I gave you mine… I was so proud of my son that was seeing a whole new world for me, in my place, with that eye. I was never upset at you for anything you did… the couple times that you were angry with me, I thought to myself, ‘it’s because he loves me…’ my son. Oh, my son… I don’t want you to cry for me, because of my death. My son, I love you my son, I love you so much. With all modern medical technology, transplantation of the eyeball is still impossible. The optic nerve isn’t an ordinary nerve, but instead an inset running from the brain. Modern medicine isn’t able to “connect” an eyeball back to brain after an optic nerve has been severed, let alone transplant the eye from a different person. (The only exception is the cornea, the transparent part in front of the eye: corneas are transplanted to replace injured and opaque ones.) 
We won’t try to comment on whether any surgeon would accept an eye from a living donor for transplant into another — we’ll leave that to others who are far more knowledgeable about medical ethics and transplant procedures. But we will note that the plot device of a mother’s dramatic sacrifice for the sake of her child’s being revealed in a written communication delivered after her demise appears in another legend about maternal love: the 2008 tale about a woman who left a touching message on her cell phone even as life ebbed from her as she used her body to shield the tot during an earthquake. Giving up one’s own life for a loved one is central to a 2005 urban legend about a boy on a motorcycle who has his girlfriend hug him one last time and put on his helmet just before the crash that kills him and spares her. Returning to the “notes from the dead” theme is the 1995 story about a son who discovers only through a posthumous letter from his mother what their occasional dinner “dates” had meant to her. Another legend we’re familiar with features a meme used in the one-eyed mother story (the coming to light of the enduring love of the person who died for the completely unworthy person she’d lavished it on), but that one involves a terminally ill woman and her cheating husband. In it, an about-to-be-spurned wife begs the adulterous hoon she’d married to stick around for another 30 days and to carry her over the threshold of their home once every day of that month as her way of keeping him around long enough for her to kick the bucket and thus spare their son the knowledge that his parents were on the verge of divorce."
label = "false"

# `claim` (the claim string) is defined earlier in this example
import torch
from transformers import pipeline

device = 0 if torch.cuda.is_available() else -1
pl = pipeline("text-classification", model="nbroad/bigbird-base-health-fact", device=device)
input_text = claim + pl.tokenizer.sep_token + text
print(len(pl.tokenizer(input_text).input_ids))
c634239820348641913404b772c91b13
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 18
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
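As a rough illustration, the `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1` above ramps the learning rate up over the first 10% of steps, then decays it linearly to zero. A minimal sketch of that multiplier (not the exact `transformers` implementation; `total_steps` is hypothetical):

```python
def linear_warmup_multiplier(step, total_steps, warmup_ratio=0.1):
    """LR multiplier: linear warmup for the first warmup_ratio of steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# The peak learning rate (here 1e-05) is reached right at the end of warmup.
peak_lr = 1e-05 * linear_warmup_multiplier(100, 1000)
```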
84ac9dab585a285917a3a621a6898867
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Micro F1 | Macro F1 | False F1 | Mixture F1 | True F1 | Unproven F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:----------:|:-------:|:-----------:|
| 0.5563 | 1.0 | 1226 | 0.5020 | 0.7949 | 0.6062 | 0.7926 | 0.4591 | 0.8986 | 0.2745 |
| 0.5048 | 2.0 | 2452 | 0.4969 | 0.8180 | 0.6846 | 0.8202 | 0.4342 | 0.9126 | 0.5714 |
| 0.3454 | 3.0 | 3678 | 0.5864 | 0.8130 | 0.6874 | 0.8114 | 0.4557 | 0.9154 | 0.5672 |
ca23d7269dd35848dc914975e3418d84
mit
['generated_from_trainer']
false
deberta-base-mnli-finetuned-cola This model is a fine-tuned version of [microsoft/deberta-base-mnli](https://huggingface.co/microsoft/deberta-base-mnli) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8205 - Matthews Correlation: 0.6282
45aef838ee19c933e908575c1759ddb6
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4713 | 1.0 | 535 | 0.5110 | 0.5797 |
| 0.2678 | 2.0 | 1070 | 0.6648 | 0.5154 |
| 0.1811 | 3.0 | 1605 | 0.6681 | 0.6121 |
| 0.113 | 4.0 | 2140 | 0.8205 | 0.6282 |
| 0.0831 | 5.0 | 2675 | 1.0413 | 0.6057 |
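For reference, the Matthews correlation reported above is computed from the binary confusion matrix; a minimal sketch (returning 0.0 when the metric is undefined, as scikit-learn does):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom
```

A value of 1.0 means perfect agreement, 0.0 chance-level agreement, and -1.0 total disagreement.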
510bc8cea6d1a04ef46bb4ccf4335e63
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3822 - F1: 0.6771
7be561bfc05ff1d1413cd07fa53dd728
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1307 | 1.0 | 50 | 0.5745 | 0.4939 |
| 0.5178 | 2.0 | 100 | 0.4389 | 0.6472 |
| 0.3716 | 3.0 | 150 | 0.3822 | 0.6771 |
030acddab255883e08c970320f0fd923
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4386
e1170f0470987aa49c8adfa0a2f0a7b5
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8806 | 1.0 | 106 | 1.6240 |
| 1.8017 | 2.0 | 212 | 1.4700 |
| 1.5078 | 3.0 | 318 | 1.4257 |
| 1.3149 | 4.0 | 424 | 1.4073 |
| 1.1838 | 5.0 | 530 | 1.4386 |
cda76f5b0c289aba0fc334de0bbfe2dd
apache-2.0
['generated_from_trainer']
false
distilgpt2-finetuned-wikitexts This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6424
9566e956f8e2afa1b178a24f298c54ab
mit
[]
false
Model Description

A series of CLIP [ConvNeXt-Base](https://arxiv.org/abs/2201.03545) (w/ wide embed dim) models trained on subsets of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).

Goals:
* Explore an alternative to ViT and ResNet (w/ AttentionPooling) CLIP models that scales well with model size and image resolution

Firsts:
* First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-B/16 and RN50x4 models
* First released model weights exploring increase of augmentation + regularization for the image tower via adding (greater scale range of RRC, random erasing, stochastic depth)

The models utilize the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Base model (`convnext_base`) as the image tower, and the same text tower as the RN50x4 (depth 12, embed dim 640) model from OpenAI CLIP. The base models are trained at 256x256 image resolution and roughly match the RN50x4 models on FLOPs and activation counts. The models with `320` in the name are trained at 320x320.

All models in this series were trained for 13B samples and have ImageNet Zero-Shot top-1 of >= 70.8%. Comparing to ViT-B/16 at 34B samples seen with zero-shot of 70.2% (68.1% for 13B samples seen), this suggests the ConvNeXt architecture may be more sample efficient in this range of model scale. More experiments are needed to confirm.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_base_w.laion2b_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K) | LAION-2B | 256x256 | RRC (0.9, 1.0) | 70.8 |
| [convnext_base_w.laion2b_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.5 |
| [convnext_base_w.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w-laion_aesthetic-s13B-b82K) | LAION-A | 256x256 | RRC (0.9, 1.0) | 71.0 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K) | LAION-A | 320x320 | RRC (0.9, 1.0) | 71.7 |
| [convnext_base_w_320.laion_aesthetic_s13b_b82k_augreg](https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K-augreg) | LAION-A | 320x320 | RRC (0.33, 1.0), RE (0.35), SD (0.1) | 71.3 |

RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only

LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.

Model training done by Ross Wightman across both the [stability.ai](https://stability.ai/) cluster and the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer. See acknowledgements below.
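The zero-shot top-1 numbers above come from comparing an image embedding against text embeddings of class prompts and taking the most similar class. A minimal sketch of that scoring step on toy vectors (assumes the embeddings have already been computed by the image and text towers):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def zero_shot_top1(image_emb, class_embs):
    """Index of the class-prompt embedding most similar to the image embedding."""
    sims = [cosine(image_emb, c) for c in class_embs]
    return max(range(len(sims)), key=sims.__getitem__)
```

Zero-shot top-1 accuracy is then just the fraction of images whose predicted index matches the label.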
9d0cd13c0dfc79787f0a0e49bb61dde3
mit
[]
false
Citation

**BibTeX:**

```bibtex
@inproceedings{schuhmann2022laionb,
  title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
  author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022},
  url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```

OpenCLIP software

```bibtex
@software{ilharco_gabriel_2021_5143773,
  author    = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
  title     = {OpenCLIP},
  month     = jul,
  year      = 2021,
  note      = {If you use this software, please cite it as below.},
  publisher = {Zenodo},
  version   = {0.1},
  doi       = {10.5281/zenodo.5143773},
  url       = {https://doi.org/10.5281/zenodo.5143773}
}
```

OpenAI CLIP paper

```bibtex
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}
```

```bibtex
@Article{liu2022convnet,
  author  = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title   = {A ConvNet for the 2020s},
  journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
}
```

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
f56d9baf08b35a0e7d42eafa26227967
mit
['generated_from_trainer']
false
BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-NDD-NER

This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the ManpreetK/NDD_NER dataset. It achieves the following results on the evaluation set:
- Overall Precision: 0.6297
- Overall Recall: 0.7068
- Overall F1: 0.6660
- Overall Accuracy: 0.9044
- Loss: 0.3763
- Associated_Problem Precision/Recall/F1: 0.6316/0.5294/0.576
- Associated_Problem Number: 68
- Condition Precision/Recall/F1: 0.8052/0.8921/0.8464
- Condition Number: 139
- Intervention Precision/Recall/F1: 0.5159/0.6633/0.5804
- Intervention Number: 98
- Patient_Group Precision/Recall/F1: 0.5512/0.8046/0.6542
- Patient_Group Number: 87
- Test Precision/Recall/F1: 0.5882/0.4878/0.5333
- Test Number: 82
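The per-entity precision/recall/F1 figures above are span-level: a predicted entity counts as correct only if its span boundaries and label exactly match a gold entity (seqeval-style). A minimal sketch of that computation on toy span sets (the span tuples below are hypothetical):

```python
def span_prf1(gold, pred):
    """Precision/recall/F1 over exact (start, end, label) span matches."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)  # spans that match exactly in boundaries and label
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

A span with the right boundaries but the wrong label counts as both a false positive and a false negative, which is why entity-level F1 is usually well below token accuracy.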
22ea228322719e8358ff081b8071a1de
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Associated Problem Precision | Associated Problem Recall | Associated Problem F1 | Associated Problem Number | Condition Precision | Condition Recall | Condition F1 | Condition Number | Intervention Precision | Intervention Recall | Intervention F1 | Intervention Number | Patient Group Precision | Patient Group Recall | Patient Group F1 | Patient Group Number | Test Precision | Test Recall | Test F1 | Test Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1.4014 | 1.0 | 11 | 0.7804 | 0.0 | 0.0 | 0.0 | 68 | 0.0 | 0.0 | 0.0 | 139 | 0.0 | 0.0 | 0.0 | 98 | 0.0 | 0.0 | 0.0 | 87 | 0.0 | 0.0 | 0.0 | 82 | 0.0 | 0.0 | 0.0 | 0.7808 |
| 0.7625 | 2.0 | 22 | 0.5575 | 0.0 | 0.0 | 0.0 | 68 | 0.7468 | 0.8273 | 0.7850 | 139 | 0.3333 | 0.1429 | 0.2 | 98 | 0.7627 | 0.5172 | 0.6164 | 87 | 0.4286 | 0.1098 | 0.1748 | 82 | 0.6630 | 0.3861 | 0.488 | 0.8546 |
| 0.5152 | 3.0 | 33 | 0.4489 | 0.2222 | 0.0588 | 0.0930 | 68 | 0.7011 | 0.9281 | 0.7988 | 139 | 0.4674 | 0.4388 | 0.4526 | 98 | 0.5528 | 0.7816 | 0.6476 | 87 | 0.5758 | 0.4634 | 0.5135 | 82 | 0.5839 | 0.5949 | 0.5893 | 0.8820 |
| 0.3621 | 4.0 | 44 | 0.4020 | 0.2727 | 0.1324 | 0.1782 | 68 | 0.7716 | 0.8993 | 0.8306 | 139 | 0.4538 | 0.5510 | 0.4977 | 98 | 0.5752 | 0.7471 | 0.65 | 87 | 0.7059 | 0.4390 | 0.5414 | 82 | 0.6046 | 0.6097 | 0.6071 | 0.8900 |
| 0.252 | 5.0 | 55 | 0.3764 | 0.5 | 0.5588 | 0.5278 | 68 | 0.8219 | 0.8633 | 0.8421 | 139 | 0.5426 | 0.5204 | 0.5312 | 98 | 0.5610 | 0.7931 | 0.6571 | 87 | 0.5641 | 0.5366 | 0.55 | 82 | 0.6228 | 0.6793 | 0.6498 | 0.9014 |
| 0.1988 | 6.0 | 66 | 0.3839 | 0.4918 | 0.4412 | 0.4651 | 68 | 0.7590 | 0.9065 | 0.8262 | 139 | 0.4161 | 0.6327 | 0.5020 | 98 | 0.552 | 0.7931 | 0.6509 | 87 | 0.5606 | 0.4512 | 0.5 | 82 | 0.5714 | 0.6835 | 0.6225 | 0.8961 |
| 0.1623 | 7.0 | 77 | 0.3669 | 0.4941 | 0.6176 | 0.5490 | 68 | 0.8105 | 0.8921 | 0.8493 | 139 | 0.4667 | 0.6429 | 0.5408 | 98 | 0.5702 | 0.7931 | 0.6635 | 87 | 0.5634 | 0.4878 | 0.5229 | 82 | 0.5982 | 0.7131 | 0.6506 | 0.9020 |
| 0.1319 | 8.0 | 88 | 0.3763 | 0.6316 | 0.5294 | 0.576 | 68 | 0.8052 | 0.8921 | 0.8464 | 139 | 0.5159 | 0.6633 | 0.5804 | 98 | 0.5512 | 0.8046 | 0.6542 | 87 | 0.5882 | 0.4878 | 0.5333 | 82 | 0.6297 | 0.7068 | 0.6660 | 0.9044 |
| 0.117 | 9.0 | 99 | 0.3834 | 0.6481 | 0.5147 | 0.5738 | 68 | 0.8158 | 0.8921 | 0.8522 | 139 | 0.4923 | 0.6531 | 0.5614 | 98 | 0.5738 | 0.8046 | 0.6699 | 87 | 0.5909 | 0.4756 | 0.5270 | 82 | 0.6336 | 0.7004 | 0.6653 | 0.9030 |
| 0.1125 | 10.0 | 110 | 0.3854 | 0.5441 | 0.5441 | 0.5441 | 68 | 0.8170 | 0.8993 | 0.8562 | 139 | 0.4737 | 0.6429 | 0.5455 | 98 | 0.5635 | 0.8161 | 0.6667 | 87 | 0.5882 | 0.4878 | 0.5333 | 82 | 0.6131 | 0.7089 | 0.6575 | 0.9028 |
9414b4095b09ee93ca54c3078c1415ea
apache-2.0
['generated_from_trainer']
false
mBERT_all_ty_SQen_SQ20_1 This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5305
ce605c51b444f1ea5a91971cb129bf12
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.2992
- eval_accuracy: 0.9085
- eval_f1: 0.9069
- eval_runtime: 2.8799
- eval_samples_per_second: 694.475
- eval_steps_per_second: 11.112
- epoch: 1.0
- step: 250
dab75545d4b0cbfc5183228cc5340114
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-0.6-0.5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 3.5047 - Bleu: 5.2928 - Gen Len: 40.7094
bed7c52cbbfd79b5b9c123bd98d673b3
mit
[]
false
jetsetdreamcastcovers on Stable Diffusion This is the `<jet>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<jet> 0](https://huggingface.co/sd-concepts-library/jetsetdreamcastcovers/resolve/main/concept_images/0.jpeg) ![<jet> 1](https://huggingface.co/sd-concepts-library/jetsetdreamcastcovers/resolve/main/concept_images/2.jpeg) ![<jet> 2](https://huggingface.co/sd-concepts-library/jetsetdreamcastcovers/resolve/main/concept_images/1.jpeg)
d96c12764e9ac6f09fc52ac71ad03b8f
apache-2.0
['transformers']
false
This is a finetuned version of [RuRoBERTa-large](https://huggingface.co/sberbank-ai/ruRoberta-large) for the task of linguistic acceptability classification on the [RuCoLA](https://rucola-benchmark.com/) benchmark. The hyperparameters used for finetuning are as follows:
* 5 training epochs (with early stopping based on validation MCC)
* Peak learning rate: 1e-5, linear warmup for 10% of total training time
* Weight decay: 1e-4
* Batch size: 32
* Random seed: 5
* Optimizer: [torch.optim.AdamW](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html)
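The early stopping mentioned above (stop once validation MCC stops improving) reduces to a simple patience check over the per-epoch scores. A minimal, generic sketch (the patience value is hypothetical; the card does not state one):

```python
def should_stop(val_scores, patience=1):
    """True if the best validation score is more than `patience` epochs old (higher is better)."""
    if not val_scores:
        return False
    best_epoch = max(range(len(val_scores)), key=val_scores.__getitem__)
    return len(val_scores) - 1 - best_epoch >= patience
```

After each epoch, append the validation MCC and stop training as soon as `should_stop` returns True, keeping the checkpoint from the best epoch.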
4d956ee86e49a3f941843ea2d1a84225
apache-2.0
['translation']
false
cpp-eng

* source group: Creoles and pidgins, Portuguese-based
* target group: English
* OPUS readme: [cpp-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md)
* model: transformer
* source language(s): ind max_Latn min pap tmw_Latn zlm_Latn zsm_Latn
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.eval.txt)
69671f6804fc5a8d6c7363c63fda8d55
apache-2.0
['translation']
false
Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.msa-eng.msa.eng | 39.6 | 0.580 |
| Tatoeba-test.multi.eng | 39.7 | 0.580 |
| Tatoeba-test.pap-eng.pap.eng | 49.1 | 0.579 |
eb60fca14078bd38069f108e34b877b8
apache-2.0
['translation']
false
System Info:
- hf_name: cpp-eng
- source_languages: cpp
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/cpp-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['id', 'cpp', 'en']
- src_constituents: {'zsm_Latn', 'ind', 'pap', 'min', 'tmw_Latn', 'max_Latn', 'zlm_Latn'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/cpp-eng/opus2m-2020-07-31.test.txt
- src_alpha3: cpp
- tgt_alpha3: eng
- short_pair: cpp-en
- chrF2_score: 0.58
- bleu: 39.7
- brevity_penalty: 0.972
- ref_len: 37399.0
- src_name: Creoles and pidgins, Portuguese-based
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: cpp
- tgt_alpha2: en
- prefer_old: False
- long_pair: cpp-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
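The `brevity_penalty: 0.972` above is BLEU's length penalty, which discounts translations shorter than the reference; a minimal sketch of how it is computed:

```python
import math

def brevity_penalty(ref_len, hyp_len):
    """BLEU brevity penalty: 1.0 unless the hypothesis is shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1.0 - ref_len / hyp_len)
```

A value of 0.972 therefore indicates this model's output was only slightly shorter than the reference overall.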
89eb5dc0aa388ecf7b29e2e6246b246f
apache-2.0
['translation', 'generated_from_keras_callback']
false
tf-marian-finetuned-kde4-en-to-zh_TW This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7752 - Validation Loss: 0.9022 - Epoch: 2
fd835cc1a240e3dc314346dde634f364
apache-2.0
['translation', 'generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 11973, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
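With `power: 1.0` and `cycle: False`, the `PolynomialDecay` schedule above is just a linear ramp from the initial to the end learning rate over `decay_steps`. A minimal sketch of the formula (not the Keras implementation itself):

```python
def polynomial_decay(step, initial_lr=5e-05, end_lr=0.0, decay_steps=11973, power=1.0):
    """Learning rate after `step` optimizer steps under non-cycling polynomial decay."""
    step = min(step, decay_steps)  # the rate stays at end_lr past decay_steps
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * (frac ** power) + end_lr
```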
82ab077b04176399a01bc8f903773ec2
apache-2.0
['translation', 'generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.7752 | 0.9022 | 0 |
| 0.7749 | 0.9022 | 1 |
| 0.7752 | 0.9022 | 2 |
4227b8c945c8c46bc6fa3bedee033f06
apache-2.0
['automatic-speech-recognition', 'phongdtd/VinDataVLSP', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 40.0
- mixed_precision_training: Native AMP
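The batch-size arithmetic above (per-device batch size times number of devices, with no gradient accumulation) can be checked directly; the `grad_accum` parameter is included only for illustration:

```python
def total_batch_size(per_device, num_devices, grad_accum=1):
    """Effective global batch size for distributed training."""
    return per_device * num_devices * grad_accum

# Matches the card: train 4 x 2 = 8, eval 8 x 2 = 16.
```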
05e908b457be092db3546d6189150dc5
apache-2.0
['generated_from_trainer']
false
favsbot_filtersort_using_t5_summarization

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the filter_sort dataset. It achieves the following results on the evaluation set:
- Loss: 2.3327
- Rouge1: 15.7351
- Rouge2: 0.0
- Rougel: 13.4803
- Rougelsum: 13.5134
- Gen Len: 12.6667
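The Rouge1 score above measures unigram overlap between the generated and reference text; a minimal F1 sketch (the real `rouge_score` metric also applies tokenization/stemming preprocessing):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between a candidate and a reference summary."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

Rouge2 is the same computation over bigrams, and RougeL uses the longest common subsequence instead of n-gram counts.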
4dac262902e21d4295b922524c6d1658
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 5 | 3.8161 | 14.754 | 0.0 | 12.6197 | 12.6426 | 10.5 |
| 4.8789 | 2.0 | 10 | 3.6423 | 14.754 | 0.0 | 12.6197 | 12.6426 | 10.5 |
| 4.8789 | 3.0 | 15 | 3.4687 | 14.754 | 0.0 | 12.6197 | 12.6426 | 10.5 |
| 4.5407 | 4.0 | 20 | 3.3086 | 14.754 | 0.0 | 12.6197 | 12.6426 | 10.5 |
| 4.5407 | 5.0 | 25 | 3.1726 | 14.754 | 0.0 | 12.6197 | 12.6426 | 10.5 |
| 4.2216 | 6.0 | 30 | 3.0464 | 15.7792 | 0.0 | 13.5134 | 13.5411 | 12.6667 |
| 4.2216 | 7.0 | 35 | 2.9326 | 15.7792 | 0.0 | 13.5134 | 13.5411 | 12.6667 |
| 4.0021 | 8.0 | 40 | 2.8305 | 15.7792 | 0.0 | 13.5134 | 13.5411 | 12.6667 |
| 4.0021 | 9.0 | 45 | 2.7386 | 15.7792 | 0.0 | 13.5134 | 13.5411 | 12.6667 |
| 3.7634 | 10.0 | 50 | 2.6588 | 15.7792 | 0.0 | 13.5134 | 13.5411 | 12.6667 |
| 3.7634 | 11.0 | 55 | 2.5916 | 15.7792 | 0.0 | 13.5134 | 13.5411 | 12.6667 |
| 3.6224 | 12.0 | 60 | 2.5358 | 15.7792 | 0.0 | 13.5134 | 13.5411 | 12.6667 |
| 3.6224 | 13.0 | 65 | 2.4895 | 15.7792 | 0.0 | 13.5134 | 13.5411 | 12.6667 |
| 3.496 | 14.0 | 70 | 2.4486 | 15.7792 | 0.0 | 13.5134 | 13.5411 | 12.6667 |
| 3.496 | 15.0 | 75 | 2.4140 | 15.7792 | 0.0 | 13.5134 | 13.5411 | 12.6667 |
| 3.4157 | 16.0 | 80 | 2.3857 | 15.7351 | 0.0 | 13.4803 | 13.5134 | 12.6667 |
| 3.4157 | 17.0 | 85 | 2.3622 | 15.7351 | 0.0 | 13.4803 | 13.5134 | 12.6667 |
| 3.3964 | 18.0 | 90 | 2.3455 | 15.7351 | 0.0 | 13.4803 | 13.5134 | 12.6667 |
| 3.3964 | 19.0 | 95 | 2.3361 | 15.7351 | 0.0 | 13.4803 | 13.5134 | 12.6667 |
| 3.3502 | 20.0 | 100 | 2.3327 | 15.7351 | 0.0 | 13.4803 | 13.5134 | 12.6667 |
cf764eafdbc37f77e6c8130aa17122a1
apache-2.0
['generated_from_keras_callback']
false
nandysoham/20-clustered

This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.7888
- Train End Logits Accuracy: 0.7768
- Train Start Logits Accuracy: 0.7381
- Validation Loss: 0.9406
- Validation End Logits Accuracy: 0.7385
- Validation Start Logits Accuracy: 0.7043
- Epoch: 1
6472960f42592a060942216eb8231b9f
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 336, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
3a1b3ae3355d9ad9a1b928ad7b9f5968
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.1149 | 0.6935 | 0.6577 | 0.9268 | 0.7266 | 0.7073 | 0 |
| 0.7888 | 0.7768 | 0.7381 | 0.9406 | 0.7385 | 0.7043 | 1 |
dbdda55330b214a75415d9dfd6390f0b
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-t5-summarization

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set:
- Loss: 1.7613
- Rouge1: 24.5755
- Rouge2: 11.8424
- Rougel: 20.3031
- Rougelsum: 23.1867
- Gen Len: 18.9999
c3668e735a04153e0d0deebe92796efb
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
c0d768b05d475ab63d5e8bcdd904a9e6
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9891 | 1.0 | 17945 | 1.7981 | 24.382 | 11.7099 | 20.1707 | 23.0021 | 18.9998 |
| 1.9527 | 2.0 | 35890 | 1.7816 | 24.4884 | 11.7673 | 20.2698 | 23.1233 | 19.0 |
| 1.9421 | 3.0 | 53835 | 1.7728 | 24.5782 | 11.8401 | 20.3343 | 23.2033 | 18.9997 |
| 1.9298 | 4.0 | 71780 | 1.7677 | 24.566 | 11.8723 | 20.3296 | 23.1943 | 18.9999 |
| 1.9256 | 5.0 | 89725 | 1.7619 | 24.5662 | 11.8385 | 20.3265 | 23.2016 | 18.9999 |
| 1.9056 | 6.0 | 107670 | 1.7613 | 24.5755 | 11.8424 | 20.3031 | 23.1867 | 18.9999 |
0f1bf5a1dbc866504d1177453f1e5e8b
apache-2.0
['masked-lm']
false
AL-RoBERTa base model Pretrained model on Albanian language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case-sensitive: it makes a difference between tirana and Tirana.
c45636db712f528078c3066c72d9f37a
apache-2.0
['masked-lm']
false
Model description

RoBERTa is a transformers model pre-trained on a large corpus of text data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pre-trained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. This way, the model learns an inner representation of the Albanian language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the model as inputs.
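The MLM objective described above can be illustrated on a toy token list: randomly replace 15% of the tokens with a mask symbol and ask the model to recover them. A minimal sketch (the real procedure works on subword tokens and also sometimes keeps or randomly replaces the selected tokens instead of masking):

```python
import random

def mask_tokens(tokens, ratio=0.15, seed=0):
    """Replace roughly `ratio` of the tokens with '<mask>' (at least one)."""
    rng = random.Random(seed)
    k = max(1, round(len(tokens) * ratio))
    masked_idx = set(rng.sample(range(len(tokens)), k))
    return ["<mask>" if i in masked_idx else tok for i, tok in enumerate(tokens)]
```

During pre-training the loss is computed only on the masked positions, so the model must use the surrounding (bidirectional) context to fill them in.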
cc9bb05d1759f053a08770ec15967a89
apache-2.0
['masked-lm']
false
How to use

You can use this model directly with a pipeline for masked language modeling:

```python
from transformers import pipeline

unmasker = pipeline('fill-mask', model='macedonizer/al-roberta-base')
unmasker("Tirana është <mask> i Shqipërisë.")
```

```
[{'score': 0.9426872134208679, 'sequence': 'Tirana është kryeqyteti i Shqipërisë', 'token': 7901, 'token_str': ' kryeqyteti'},
 {'score': 0.03112833760678768, 'sequence': 'Tirana është kryeqytet i Shqipërisë', 'token': 7439, 'token_str': ' kryeqytet'},
 {'score': 0.0022084848023951054, 'sequence': 'Tirana është qytet i Shqipërisë', 'token': 2246, 'token_str': ' qytet'},
 {'score': 0.0016222079284489155, 'sequence': 'Tirana është qyteti i Shqipërisë', 'token': 2784, 'token_str': ' qyteti'},
 {'score': 0.0008979254635050893, 'sequence': 'Tirana është Kryeqytet i Shqipërisë', 'token': 37653, 'token_str': ' Kryeqytet'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained('macedonizer/al-roberta-base')
model = RobertaModel.from_pretrained('macedonizer/al-roberta-base')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
0852c53127ba2f267a5d4f1cc6d2c17c
apache-2.0
['generated_from_trainer']
false
tiny-mlm-snli-target-glue-sst2 This model is a fine-tuned version of [muhtasham/tiny-mlm-snli](https://huggingface.co/muhtasham/tiny-mlm-snli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4415 - Accuracy: 0.8234
4e9520343d0edebd464f07fcca9cdc17
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5925 | 0.24 | 500 | 0.4955 | 0.7741 |
| 0.4461 | 0.48 | 1000 | 0.4719 | 0.7856 |
| 0.3964 | 0.71 | 1500 | 0.4450 | 0.8016 |
| 0.375 | 0.95 | 2000 | 0.4547 | 0.7970 |
| 0.3361 | 1.19 | 2500 | 0.4403 | 0.8050 |
| 0.3163 | 1.43 | 3000 | 0.4422 | 0.8028 |
| 0.2995 | 1.66 | 3500 | 0.4388 | 0.8073 |
| 0.2931 | 1.9 | 4000 | 0.4413 | 0.8096 |
| 0.2741 | 2.14 | 4500 | 0.4884 | 0.8028 |
| 0.2555 | 2.38 | 5000 | 0.4415 | 0.8234 |
7cebb23ebbf85203293e913e3eb3c694
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/paraphrase-distilroberta-base-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
aea18b61952f161f9ea7a7b6c47e57dc
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/paraphrase-distilroberta-base-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
be4f1fc2f083ee78b499bbf7e2428b9c
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Load model from HuggingFace Hub ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-distilroberta-base-v1') model = AutoModel.from_pretrained('sentence-transformers/paraphrase-distilroberta-base-v1') ```
d89139b6f0de29d837b52edfde1fd4ac
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-distilroberta-base-v1)
90f70d1ece72be603fc3c7708cd0a529
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
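The Pooling layer above averages the token embeddings (`pooling_mode_mean_tokens`) into a single sentence vector, skipping padding positions. A minimal pure-Python sketch of masked mean pooling with toy numbers, not the library's implementation:

```python
def mean_pooling(token_embeddings, attention_mask):
    # Average token embeddings, ignoring padding positions (mask == 0),
    # mirroring the Pooling layer's pooling_mode_mean_tokens behaviour.
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, m in zip(token_embeddings, attention_mask):
        if m:
            count += 1
            for i, v in enumerate(vec):
                sums[i] += v
    return [s / count for s in sums]

# Three token vectors (2-dim for readability), the last one padding.
tokens = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
mask = [1, 1, 0]
print(mean_pooling(tokens, mask))  # → [2.0, 3.0]: only real tokens counted
```

The real layer does the same over 768-dimensional vectors with batched tensors.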
3d002b1599e0a3e49cdde341cb43eb91
apache-2.0
['generated_from_trainer', 'tex2log', 'log2tex', 'foc']
false
T5 (small) fine-tuned on Text2Log This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the Text2Log dataset. It achieves the following results on the evaluation set: - Loss: 0.0313
0bb69d71a2d7814bb43c9a3f7b80200c
apache-2.0
['generated_from_trainer', 'tex2log', 'log2tex', 'foc']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6
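With `lr_scheduler_type: linear`, the learning rate decays from 5e-05 toward zero over the run. A minimal sketch of that schedule, assuming no warmup (the trainer's default when none is listed):

```python
def linear_lr(step, total_steps, base_lr=5e-5):
    # Linear decay from base_lr at step 0 down to 0 at the final step,
    # assuming num_warmup_steps=0 (not stated in the card).
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 129966  # final step reported for the 6-epoch run
print(linear_lr(0, total))           # full base_lr at the start
print(linear_lr(total // 2, total))  # roughly half of base_lr mid-run
print(linear_lr(total, total))       # 0.0 at the end
```

The actual schedule comes from `transformers.get_linear_schedule_with_warmup`; this only illustrates the shape.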
fa2a62d9ca9efe1693ecbeeace1a2488
apache-2.0
['generated_from_trainer', 'tex2log', 'log2tex', 'foc']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 0.0749 | 1.0 | 21661 | 0.0509 | | 0.0564 | 2.0 | 43322 | 0.0396 | | 0.0494 | 3.0 | 64983 | 0.0353 | | 0.0425 | 4.0 | 86644 | 0.0332 | | 0.04 | 5.0 | 108305 | 0.0320 | | 0.0381 | 6.0 | 129966 | 0.0313 |
694f7914f2a1fc7984af3cae4d242a69
apache-2.0
['generated_from_trainer', 'tex2log', 'log2tex', 'foc']
false
Usage: ```py import torch from transformers import AutoTokenizer, T5ForConditionalGeneration MODEL_CKPT = "mrm8488/t5-small-finetuned-text2log" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = T5ForConditionalGeneration.from_pretrained(MODEL_CKPT).to(device) tokenizer = AutoTokenizer.from_pretrained(MODEL_CKPT) def translate(text): inputs = tokenizer(text, padding="longest", max_length=64, return_tensors="pt") input_ids = inputs.input_ids.to(device) attention_mask = inputs.attention_mask.to(device) output = model.generate(input_ids, attention_mask=attention_mask, early_stopping=False, max_length=64) return tokenizer.decode(output[0], skip_special_tokens=True) prompt_nl_to_fol = "translate to fol: " prompt_fol_to_nl = "translate to nl: " example_1 = "Every killer leaves something." example_2 = "all x1.(_woman(x1) -> exists x2.(_emotion(x2) & _experience(x1,x2)))" print(translate(prompt_nl_to_fol + example_1)) print(translate(prompt_fol_to_nl + example_2)) ```
5950bd263a524e9150691bdd7693b9f6
mit
['sentiment', 'bert']
false
German Sentiment Classification with Bert This model was trained for sentiment classification of German-language texts. To achieve the best results, all model inputs need to be preprocessed with the same procedure that was applied during training. To simplify the usage of the model, we provide a Python package that bundles the code needed for preprocessing and inference. The model uses Google's BERT architecture and was trained on 1.834 million German-language samples. The training data contains texts from various domains like Twitter, Facebook, and movie, app, and hotel reviews. You can find more information about the dataset and the training process in the [paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.202.pdf).
721c987c0e57c5a1d5a3efc6a1d3dde9
mit
['sentiment', 'bert']
false
Using the Python package To get started, install the package from [pypi](https://pypi.org/project/germansentiment/): ```bash pip install germansentiment ``` ```python from germansentiment import SentimentModel model = SentimentModel() texts = [ "Mit keinem guten Ergebniss","Das ist gar nicht mal so gut", "Total awesome!","nicht so schlecht wie erwartet", "Der Test verlief positiv.","Sie fährt ein grünes Auto."] result = model.predict_sentiment(texts) print(result) ``` The code above will output the following list: ```python ["negative","negative","positive","positive","neutral", "neutral"] ```
79df927ed7294055053f3bff3d93b774
mit
['sentiment', 'bert']
false
Output class probabilities ```python from germansentiment import SentimentModel model = SentimentModel() classes, probabilities = model.predict_sentiment(["das ist super"], output_probabilities = True) print(classes, probabilities) ``` ```python ['positive'] [[['positive', 0.9761366844177246], ['negative', 0.023540444672107697], ['neutral', 0.00032294404809363186]]] ```
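The class probabilities shown above come from applying softmax to the model's raw logits. A minimal pure-Python sketch with hypothetical logits for the [positive, negative, neutral] classes (the values are made up for illustration):

```python
import math

def softmax(logits):
    # Convert raw classifier logits into a probability distribution.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for [positive, negative, neutral].
probs = softmax([4.0, 0.3, -4.0])
print(probs)  # positive gets most of the mass; all three sum to 1
```

`predict_sentiment(..., output_probabilities=True)` returns exactly this kind of distribution, paired with class labels.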
917621065fc163a54e8fa1d3bb9c470f
mit
['sentiment', 'bert']
false
Model and Data If you are interested in code and data that was used to train this model please have a look at [this repository](https://github.com/oliverguhr/german-sentiment) and our [paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.202.pdf). Here is a table of the F1 scores that this model achieves on different datasets. Since we trained this model with a newer version of the transformer library, the results are slightly better than reported in the paper. | Dataset | F1 micro Score | | :----------------------------------------------------------- | -------------: | | [holidaycheck](https://github.com/oliverguhr/german-sentiment) | 0.9568 | | [scare](https://www.romanklinger.de/scare/) | 0.9418 | | [filmstarts](https://github.com/oliverguhr/german-sentiment) | 0.9021 | | [germeval](https://sites.google.com/view/germeval2017-absa/home) | 0.7536 | | [PotTS](https://www.aclweb.org/anthology/L16-1181/) | 0.6780 | | [emotions](https://github.com/oliverguhr/german-sentiment) | 0.9649 | | [sb10k](https://www.spinningbytes.com/resources/germansentiment/) | 0.7376 | | [Leipzig Wikipedia Corpus 2016](https://wortschatz.uni-leipzig.de/de/download/german) | 0.9967 | | all | 0.9639 |
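The table above reports micro-averaged F1, which pools true positives, false positives, and false negatives across all classes before computing precision and recall (unlike macro averaging, which averages per-class scores). A sketch with hypothetical per-class counts:

```python
def micro_f1(counts):
    # counts: list of (tp, fp, fn) tuples, one per class.
    # Micro-averaging sums the counts first, then computes P/R/F1 once.
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for the positive/negative/neutral classes.
print(micro_f1([(90, 5, 10), (80, 10, 5), (70, 5, 5)]))
```

Frequent classes therefore dominate a micro score, which suits the imbalanced review datasets listed above.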
00e79bce717219ece110e268b074f372
mit
['sentiment', 'bert']
false
Cite For feedback and questions contact me via mail or Twitter [@oliverguhr](https://twitter.com/oliverguhr). Please cite us if you found this useful: ``` @InProceedings{guhr-EtAl:2020:LREC, author = {Guhr, Oliver and Schumann, Anne-Kathrin and Bahrmann, Frank and Böhme, Hans Joachim}, title = {Training a Broad-Coverage German Sentiment Classification Model for Dialog Systems}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {1620--1625}, url = {https://www.aclweb.org/anthology/2020.lrec-1.202} } ```
e9a8fbcc9e7f1476144c6a87e928fde4
mit
['generated_from_trainer']
false
xlm-roberta-finetuned-hipe-tags-clara-1 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0799 - F1: 0.8282 - Precision: 0.8167 - Recall: 0.8400
575631c3b9b5a1a6898b4472997cf356
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:| | 0.2177 | 1.0 | 445 | 0.0858 | 0.7763 | 0.7568 | 0.7970 | | 0.0694 | 2.0 | 890 | 0.0810 | 0.8056 | 0.7974 | 0.8140 | | 0.038 | 3.0 | 1335 | 0.0799 | 0.8282 | 0.8167 | 0.8400 |
2a336d9b3c687d556cd236623801d853
apache-2.0
['automatic-speech-recognition', 'pt']
false
exp_w2v2t_pt_unispeech_s952 Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
efe9300a2533a2ca19406b92ffa2a4e0
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1280 - F1: 0.8819
aee2d19cd820a09795aaa024c5915776
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2947 | 1.0 | 715 | 0.1790 | 0.8235 | | 0.1499 | 2.0 | 1430 | 0.1365 | 0.8664 | | 0.0969 | 3.0 | 2145 | 0.1280 | 0.8819 |
50ee51753b61906e7ceff67b86f1fe65
apache-2.0
['generated_from_trainer']
false
bert-small-finetuned-finetuned-finer-longer10 This model is a fine-tuned version of [muhtasham/bert-small-finetuned-finer](https://huggingface.co/muhtasham/bert-small-finetuned-finer) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3791
4a64216c1e9a8a9bd2a99737133c87ba
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7
768981d344d16083f75142640463c4f7
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.5687 | 1.0 | 2433 | 1.5357 | | 1.5081 | 2.0 | 4866 | 1.4759 | | 1.4813 | 3.0 | 7299 | 1.4337 | | 1.4453 | 4.0 | 9732 | 1.4084 | | 1.4257 | 5.0 | 12165 | 1.3913 | | 1.4155 | 6.0 | 14598 | 1.3855 | | 1.4057 | 7.0 | 17031 | 1.3791 |
1a4b9a36e4c5eb439698144f7b407501
cc-by-sa-4.0
['generated_from_trainer']
false
fin2 This model is a fine-tuned version of [nlpaueb/sec-bert-base](https://huggingface.co/nlpaueb/sec-bert-base) on the fin dataset. It achieves the following results on the evaluation set: - Loss: 0.2405 - Precision: 0.9363 - Recall: 0.7610 - F1: 0.8396 - Accuracy: 0.9743
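F1 is the harmonic mean of precision and recall, so the reported metrics can be cross-checked against each other:

```python
def f1(precision, recall):
    # F1 = harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Sanity-check the final-epoch metrics reported above.
print(round(f1(0.9363, 0.7610), 4))  # → 0.8396, matching the reported F1
```

The gap between the high precision (0.9363) and lower recall (0.7610) pulls F1 well below accuracy, which is the expected behaviour of a harmonic mean.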
c8f4673e3e1370feddd306c2e882750d
cc-by-sa-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 129 | 0.2186 | 0.7980 | 0.6454 | 0.7137 | 0.9653 | | No log | 2.0 | 258 | 0.2109 | 0.9487 | 0.7371 | 0.8296 | 0.9734 | | No log | 3.0 | 387 | 0.2531 | 0.9746 | 0.7649 | 0.8571 | 0.9743 | | 0.1166 | 4.0 | 516 | 0.2345 | 0.9403 | 0.7530 | 0.8363 | 0.9741 | | 0.1166 | 5.0 | 645 | 0.2405 | 0.9363 | 0.7610 | 0.8396 | 0.9743 |
680eb87bcac8bd674320c8adec4e3d38
apache-2.0
['stanza', 'token-classification']
false
Stanza model for Polish (pl) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find more about it on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo Last updated 2022-10-12 03:03:50.924
f7e449ebb80a378ab2b9c7e519efe92a
apache-2.0
['translation']
false
ukr-tur * source group: Ukrainian * target group: Turkish * OPUS readme: [ukr-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ukr-tur/README.md) * model: transformer-align * source language(s): ukr * target language(s): tur * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ukr-tur/opus-2020-06-17.eval.txt)
d47692db0085799fb7817123bdde21e5