| license | tags | is_nc | readme_section | hash |
|---|---|---|---|---|
apache-2.0 | ['chinese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing'] | false | Model Description This is a BERT model pre-trained on Chinese Wikipedia texts (both simplified and traditional) for POS-tagging and dependency-parsing, derived from [chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech). | 29b9a31c76036f379e3a596c6b6be198 |
apache-2.0 | ['chinese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing'] | false | How to Use ```py from transformers import AutoTokenizer,AutoModelForTokenClassification tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos") model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/chinese-roberta-large-upos") ``` or ```py import esupar nlp=esupar.load("KoichiYasuoka/chinese-roberta-large-upos") ``` | 5f69a3f17944fac252ce34da5c52694d |
mit | ['translation', 'wmt21'] | false | WMT 21 En-X WMT 21 En-X is a 4.7B multilingual encoder-decoder (seq-to-seq) model trained for one-to-many multilingual translation. It was introduced in this [paper](https://arxiv.org/abs/2108.03265) and first released in [this](https://github.com/pytorch/fairseq/tree/main/examples/wmt21) repository. The model can directly translate English text into 7 other languages: Hausa (ha), Icelandic (is), Japanese (ja), Czech (cs), Russian (ru), Chinese (zh), German (de). To translate into a target language, the target language id must be forced as the first generated token: pass the `forced_bos_token_id` parameter to the `generate` method. *Note: `M2M100Tokenizer` depends on `sentencepiece`, so make sure to install it before running the example.* To install `sentencepiece` run `pip install sentencepiece` Since the model was trained with domain tags, you should prepend them to the input as well. * "wmtdata newsdomain": Use for sentences in the news domain * "wmtdata otherdomain": Use for sentences in all other domains ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("facebook/wmt21-dense-24-wide-en-x") tokenizer = AutoTokenizer.from_pretrained("facebook/wmt21-dense-24-wide-en-x") inputs = tokenizer("wmtdata newsdomain One model for many languages.", return_tensors="pt") generated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("de")) print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)) ``` | 833c16dd950db217728fbf58d250b4d1 |
mit | ['translation', 'wmt21'] | false | BibTeX entry and citation info ``` @inproceedings{tran2021facebook, title={Facebook AI’s WMT21 News Translation Task Submission}, author={Chau Tran and Shruti Bhosale and James Cross and Philipp Koehn and Sergey Edunov and Angela Fan}, booktitle={Proc. of WMT}, year={2021}, } ``` | 2a9d78180c0d4ff7fe5f61a3474d2295 |
mit | [] | false | Hate Speech Classifier for Social Media Content in Italian Language A monolingual model for hate speech classification of social media content in the Italian language. The model was trained on 119,670 YouTube comments and tested on an independent test set of 21,072 YouTube comments. It is based on the Italian AlBERTo pre-trained language model. | e79e6ebdc91fb55e9a0b154f4756698c |
apache-2.0 | ['object-detection', 'computer-vision', 'gan', 'animegan'] | false | BibTeX Entry and Citation Info ``` @InProceedings{wu2022animesr, author={Wu, Yanze and Wang, Xintao and Li, Gen and Shan, Ying}, title={AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos}, booktitle={Advances in Neural Information Processing Systems}, year={2022} } ``` | 5d922a0339405ab13e59cb5e34f0b630 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.6727 | d3455890bb8b3892c57e6e24a14f21f1 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 | 56bed69253da09d6584cd731e8a6225d |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5227 | 1.0 | 1107 | 2.0485 | | 1.7555 | 2.0 | 2214 | 1.7443 | | 1.4567 | 3.0 | 3321 | 1.6511 | | 1.2107 | 4.0 | 4428 | 1.6496 | | 1.083 | 5.0 | 5535 | 1.6727 | | d482a7c1ae400181a2ae8099af0c4080 |
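In the training-results row above, validation loss bottoms out at epoch 4 (1.6496) and rises again at epoch 5 (1.6727), a sign of mild overfitting. A minimal sketch of selecting the best checkpoint from such a table (values copied from the row above):

```python
# (epoch, validation_loss) pairs copied from the training-results table above
history = [(1, 2.0485), (2, 1.7443), (3, 1.6511), (4, 1.6496), (5, 1.6727)]

# pick the epoch with the lowest validation loss
best_epoch, best_loss = min(history, key=lambda pair: pair[1])
```

Note the final reported loss (1.6727) is from the last epoch, not the best one.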
creativeml-openrail-m | ['text-to-image'] | false | sd-album-covers Dreambooth model trained by shivi with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: Taylor Swift Red Album Cover (use that on your prompt) Santana Africa Speaks Album Cover (use that on your prompt) Beatles Abbey Road Album Cover (use that on your prompt) Led Zepellin Celebration Day album cover (use that on your prompt) Maroon5 band Overexposed music album cover (use that on your prompt) Metallica Harvester of Sorrow music album cover (use that on your prompt) Linkin Park band logo (use that on your prompt) | bea4b9ca5a2b9b3f3cd0d2c64bb6b7c7 |
cc-by-4.0 | ['espnet', 'audio', 'speech-recognition'] | false | Environments - date: `Sat Oct 22 14:55:21 EDT 2022` - python version: `3.8.6 (default, Dec 17 2020, 16:57:01) [GCC 10.2.0]` - espnet version: `espnet 202207` - pytorch version: `pytorch 1.8.1+cu102` - Git hash: `e534106b837ff6cdd29977a52983c022ff1afb0f` - Commit date: `Sun Sep 11 22:31:23 2022 -0400` | f7ecbdd8f814d689758e9f8af6380f62 |
cc-by-4.0 | ['espnet', 'audio', 'speech-recognition'] | false | WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_all_bpe6500_valid.loss.ave_asr_model_valid.acc.ave_3best/test_all|77809|1592160|70.5|26.1|3.4|3.4|32.9|97.0| | df375019ec4d5bd9c1ee27c01d61e9ac |
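The WER row above decomposes error into substitutions, deletions, and insertions (Err = Sub + Del + Ins: 26.1 + 3.4 + 3.4 = 32.9). A minimal word-error-rate implementation via edit distance, for reference (a sketch, not ESPnet's scorer):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words and first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, sub)
    return d[len(r)][len(h)] / len(r)
```

The CER and TER rows below use the same formula over characters and tokens respectively.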
cc-by-4.0 | ['espnet', 'audio', 'speech-recognition'] | false | CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_all_bpe6500_valid.loss.ave_asr_model_valid.acc.ave_3best/test_all|77809|10235271|92.2|4.7|3.1|2.6|10.4|97.0| | dff611b4a890d01281462ade533e7dbb |
cc-by-4.0 | ['espnet', 'audio', 'speech-recognition'] | false | TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_lm_lm_train_lm_all_bpe6500_valid.loss.ave_asr_model_valid.acc.ave_3best/test_all|77809|9622352|91.3|5.6|3.1|2.7|11.4|97.0| | fe1808e8ef7cc51becaa157338bf962d |
cc-by-sa-4.0 | ['asteroid', 'audio', 'DCCRNet', 'audio-to-audio', 'speech-enhancement'] | false | Asteroid model `JorisCos/DCCRNet_Libri1Mix_enhsignle_16k` Description: This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `enh_single` task of the Libri1Mix dataset. Training config: ```yml data: n_src: 1 sample_rate: 16000 segment: 3 task: enh_single train_dir: data/wav16k/min/train-360 valid_dir: data/wav16k/min/dev filterbank: stft_kernel_size: 400 stft_n_filters: 512 stft_stride: 100 masknet: architecture: DCCRN-CL n_src: 1 optim: lr: 0.001 optimizer: adam weight_decay: 1.0e-05 training: batch_size: 12 early_stop: true epochs: 200 gradient_clipping: 5 half_lr: true num_workers: 4 ``` Results: On Libri1Mix min test set : ```yml si_sdr: 13.329767398333798 si_sdr_imp: 9.879986092474098 sdr: 13.87279932997016 sdr_imp: 10.370136530757103 sir: Infinity sir_imp: NaN sar: 13.87279932997016 sar_imp: 10.370136530757103 stoi: 0.9140907015623948 stoi_imp: 0.11817087802185405 ``` License notice: This work "DCCRNet_Libri1Mix_enhsignle_16k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov, used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only). "DCCRNet_Libri1Mix_enhsignle_16k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | 79c5014c07e337304ca28bc9c1b0f3c2 |
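The `si_sdr` figure in the results block above is scale-invariant SDR: the estimate is projected onto the reference before computing the signal-to-error ratio, so rescaling the estimate does not change the score. A minimal NumPy sketch of the metric (an illustration, not Asteroid's implementation):

```python
import numpy as np

def si_sdr(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Scale-invariant SDR in dB."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # project the estimate onto the reference to isolate the "target" component
    s_target = (estimate @ reference) / (reference @ reference) * reference
    e_noise = estimate - s_target
    return float(10 * np.log10((s_target @ s_target) / (e_noise @ e_noise)))

rng = np.random.default_rng(0)
ref = rng.normal(size=16000)            # toy 1-second reference at 16 kHz
est = ref + 0.1 * rng.normal(size=16000)  # a noisy estimate of it
```

The `si_sdr_imp` numbers are the difference between the metric on the enhanced output and on the unprocessed mixture; `sir: Infinity` is expected for single-source enhancement, since there is no interfering speaker.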
mit | ['gpt_neo', 'code_synthesis'] | false | GPT-Neo-1.3B-APPS > **Please refer to our new [GitHub Wiki](https://github.com/ncoop57/gpt-code-clippy/wiki) which documents our efforts in detail in creating the open source version of GitHub Copilot** | 10cec2e388392a17b0ff6ba5bb124f91 |
mit | ['gpt_neo', 'code_synthesis'] | false | Training data The model is trained on the [Automated Programming Progress Standard (APPS) dataset](https://github.com/hendrycks/apps). The dataset consists of 10,000 coding problems in total, with 131,836 test cases for checking solutions and 232,444 ground-truth solutions written by humans. Problems can be complicated, as the average length of a problem is 293.2 words. The data are split evenly into training and test sets, with 5,000 problems each. This model is fine-tuned using most of the APPS dataset, including both the train and test splits, to explore the impact of this training task on model performance on other code synthesis evaluation metrics. A model fine-tuned on the train set only can be found [here](https://huggingface.co/flax-community/gpt-neo-125M-apps). | 9ef64a899fa0131245e64d4d6102a434 |
mit | ['gpt_neo', 'code_synthesis'] | false | Training procedure The training script used to train this model can be found [here](https://github.com/ncoop57/gpt-code-clippy/blob/camera-ready/training/run_clm_apps.py). Training is done for 5 epochs using the AdamW optimizer and a linear decay learning rate schedule with 800 warmup steps. To reproduce the training one can use this command with the above script: ```bash python run_clm_apps.py \ --output_dir $HOME/gpt-neo-1.3B-apps \ --model_name_or_path EleutherAI/gpt-neo-1.3B \ --dataset_name $HOME/gpt-code-clippy/data_processing/apps.py \ --dataset_config_name formatted \ --do_train --do_eval \ --block_size="1024" \ --per_device_train_batch_size="3" \ --per_device_eval_batch_size="3" \ --preprocessing_num_workers="16" \ --learning_rate="8e-5" \ --warmup_steps="800" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --weight_decay="0.1" \ --overwrite_output_dir \ --num_train_epochs="5" \ --logging_steps="50" \ --eval_steps="2000" \ --report_to="wandb" \ --dtype="bfloat16" \ --save_strategy epoch \ --gradient_accumulation_steps 1 \ ``` | a3a05f3ccc408ab4976d7c8407fbd446 |
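The schedule above (linear decay with 800 warmup steps at a peak of 8e-5) ramps the learning rate from 0 to its peak during warmup, then decays linearly to 0. A sketch of that shape; the total step count here is a placeholder, not a value taken from the card:

```python
def linear_schedule_with_warmup(step: int, peak_lr: float = 8e-5,
                                warmup_steps: int = 800,
                                total_steps: int = 10_000) -> float:
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```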
mit | ['gpt_neo', 'code_synthesis'] | false | How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" if torch.cuda.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps").to(device) tokenizer = AutoTokenizer.from_pretrained("flax-community/gpt-code-clippy-1.3B-apps") prompt = """ A function to greet user. Given a user name it should say hello def greet(name): ANSWER: """ input_ids = tokenizer(prompt, return_tensors='pt').input_ids.to(device) start = input_ids.size(1) out = model.generate(input_ids, do_sample=True, max_length=50, num_beams=2, early_stopping=True, eos_token_id=tokenizer.eos_token_id, ) print(tokenizer.decode(out[0][start:])) ``` | 29009f423743c433b6de6912798948f3 |
mit | ['gpt_neo', 'code_synthesis'] | false | Limitations and Biases The model is intended to be used for research purposes and comes with no guarantees of quality of generated code. The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374) from OpenAI has a good discussion on what the impact of a large language model trained on code could be. Therefore, some parts of their discussion are highlighted here as they pertain to this dataset and to models that may be trained from it, **along with some differences in views from the paper, particularly around legal implications**. 1. **Over-reliance:** This model may generate plausible solutions that appear correct but are not. Not properly evaluating the generated code may have negative consequences such as the introduction of bugs or security vulnerabilities. Therefore, it is important that users are aware of the limitations and potential negative consequences of using this language model. 2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software. 5. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting is different from that used in the APPS dataset. GPT-CC is a fine-tuned GPT-Neo and might have inherited biases and limitations from it. See [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M | 69a69e6ae4e23d043c70aa8fa76e93a9 |
mit | ['generated_from_keras_callback'] | false | Sushant45/Canadian_Armed_Forces-clustered This model is a fine-tuned version of [nandysoham16/0-clustered_aug](https://huggingface.co/nandysoham16/0-clustered_aug) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5757 - Train End Logits Accuracy: 0.8542 - Train Start Logits Accuracy: 0.8160 - Validation Loss: 0.4930 - Validation End Logits Accuracy: 1.0 - Validation Start Logits Accuracy: 0.4000 - Epoch: 0 | ed4e3d9c785c7423f6e7d0e1853838a7 |
mit | ['generated_from_keras_callback'] | false | Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 0.5757 | 0.8542 | 0.8160 | 0.4930 | 1.0 | 0.4000 | 0 | | 78f8ecfbfc7a2cb1730301922be0ec43 |
cc-by-sa-4.0 | ['asteroid', 'audio', 'DPTNet', 'audio-to-audio'] | false | Asteroid model `cankeles/DPTNet_WHAMR_enhsignle_16k` Description: This model was trained by M. Can Keleş using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `enh_single` task of the Libri1Mix dataset. Training config: ```yml data: mode: min nondefault_nsrc: null sample_rate: 16000 segment: 2.0 task: enh_single train_dir: wav16k/min/tr/ valid_dir: wav16k/min/cv/ filterbank: kernel_size: 16 n_filters: 64 stride: 8 main_args: exp_dir: exp/tmp help: null masknet: bidirectional: true chunk_size: 100 dropout: 0 ff_activation: relu ff_hid: 256 hop_size: 50 in_chan: 64 mask_act: sigmoid n_repeats: 2 n_src: 1 norm_type: gLN out_chan: 64 optim: lr: 0.001 optimizer: adam weight_decay: 1.0e-05 positional arguments: {} scheduler: d_model: 64 steps_per_epoch: 10000 training: batch_size: 4 early_stop: true epochs: 60 gradient_clipping: 5 half_lr: true num_workers: 4 ``` Results: On custom min test set : ```yml 'sar': 12.853384266251018, 'sar_imp': 8.950332361953906, 'sdr': 12.853384266251018, 'sdr_imp': 8.950332361953906, 'si_sdr': 12.247012621312548, 'si_sdr_imp': 8.429646186633407, 'sir': inf, 'sir_imp': nan, 'stoi': 0.9022338865380519, 'stoi_imp': 0.09735707619500522 ``` | 9fd0a0c520f0e94a7651c83d3f9da9be |
apache-2.0 | ['part-of-speech', 'token-classification'] | false | XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Western Armenian This model is part of our paper "Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages". Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details. | 41dc75bba400284676482f4d61a321c4 |
apache-2.0 | ['part-of-speech', 'token-classification'] | false | Usage ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hyw") model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hyw") ``` | 6ec5095b4b1a8f2f5f46a9ebe1b5336c |
apache-2.0 | ['generated_from_trainer'] | false | distilbart-podimo-data-eval-3 This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.3828 - Rouge1: 32.8203 - Rouge2: 7.8994 - Rougel: 18.9659 - Rougelsum: 29.4196 - Gen Len: 114.5264 | 34f706ab0d8cfb6f2995e928cd95e7cb |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 | e2cfdd65f93f4843cbe8a7c7cd0d15cb |
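With `train_batch_size: 1` and `gradient_accumulation_steps: 64`, gradients from 64 micro-batches are averaged before each optimizer step, which is what makes `total_train_batch_size: 64`. A toy NumPy check that accumulation reproduces the full-batch gradient (per-sample "gradients" here are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
per_sample_grads = rng.normal(size=(64, 3))  # one "gradient" per example, 3 parameters

# full-batch gradient: mean over all 64 examples at once
full_batch = per_sample_grads.mean(axis=0)

# accumulated gradient: 64 micro-batches of size 1, each contribution scaled by 1/64
accumulated = np.zeros(3)
for micro_batch in per_sample_grads.reshape(64, 1, 3):
    accumulated += micro_batch.mean(axis=0) / 64
```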
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:| | 3.9049 | 1.0 | 132 | 3.5343 | 30.2542 | 6.031 | 17.269 | 26.9847 | 113.7689 | | 3.4248 | 2.0 | 264 | 3.4055 | 31.6518 | 7.2786 | 18.2641 | 28.4006 | 114.6547 | | 3.1594 | 3.0 | 396 | 3.3579 | 32.0442 | 7.3554 | 18.3492 | 28.7615 | 113.7443 | | 2.9645 | 4.0 | 528 | 3.3445 | 32.0945 | 7.637 | 18.6289 | 28.899 | 115.5321 | | 2.8073 | 5.0 | 660 | 3.3470 | 32.7852 | 7.9597 | 19.2358 | 29.5057 | 108.3519 | | 2.685 | 6.0 | 792 | 3.3532 | 32.3775 | 7.661 | 18.6719 | 28.9282 | 117.1104 | | 2.5941 | 7.0 | 924 | 3.3711 | 32.6976 | 7.8917 | 19.069 | 29.3785 | 113.1943 | | 2.5267 | 8.0 | 1056 | 3.3828 | 32.8203 | 7.8994 | 18.9659 | 29.4196 | 114.5264 | | fff1eff15867bfe95ed35289d5323eba |
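Rouge1 in the table above is unigram-overlap F1 between the generated and reference summaries. A minimal sketch of the score on a toy pair (the real evaluation additionally applies stemming and aggregation over the dataset):

```python
from collections import Counter

def rouge1_f(reference: str, hypothesis: str) -> float:
    """Unigram-overlap F1 between a reference and a hypothesis."""
    ref_counts = Counter(reference.split())
    hyp_counts = Counter(hypothesis.split())
    overlap = sum((ref_counts & hyp_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```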
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP | 81c3a6109b5f3667990c60be5db27969 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-base-timit-small This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5361 - Wer: 0.3380 | bbdf1f10fcfebe3570b533846b1e1bb4 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.571 | 1.0 | 500 | 1.9252 | 1.0022 | | 0.8969 | 2.01 | 1000 | 0.5066 | 0.5292 | | 0.4326 | 3.01 | 1500 | 0.4523 | 0.4562 | | 0.2993 | 4.02 | 2000 | 0.4228 | 0.4202 | | 0.2335 | 5.02 | 2500 | 0.4252 | 0.4178 | | 0.2009 | 6.02 | 3000 | 0.4136 | 0.3910 | | 0.1552 | 7.03 | 3500 | 0.4747 | 0.3863 | | 0.1388 | 8.03 | 4000 | 0.4359 | 0.3859 | | 0.1226 | 9.04 | 4500 | 0.4367 | 0.3879 | | 0.1109 | 10.04 | 5000 | 0.4360 | 0.3760 | | 0.0991 | 11.04 | 5500 | 0.4899 | 0.3672 | | 0.0882 | 12.05 | 6000 | 0.4608 | 0.3653 | | 0.0792 | 13.05 | 6500 | 0.4882 | 0.3703 | | 0.0745 | 14.06 | 7000 | 0.4716 | 0.3625 | | 0.065 | 15.06 | 7500 | 0.4896 | 0.3651 | | 0.0596 | 16.06 | 8000 | 0.4831 | 0.3659 | | 0.0563 | 17.07 | 8500 | 0.5092 | 0.3585 | | 0.0536 | 18.07 | 9000 | 0.5376 | 0.3675 | | 0.0465 | 19.08 | 9500 | 0.5019 | 0.3534 | | 0.049 | 20.08 | 10000 | 0.4869 | 0.3723 | | 0.0423 | 21.08 | 10500 | 0.4947 | 0.3501 | | 0.0348 | 22.09 | 11000 | 0.5524 | 0.3453 | | 0.0315 | 23.09 | 11500 | 0.5369 | 0.3499 | | 0.0312 | 24.1 | 12000 | 0.5283 | 0.3519 | | 0.0258 | 25.1 | 12500 | 0.5202 | 0.3461 | | 0.0249 | 26.1 | 13000 | 0.5270 | 0.3449 | | 0.0236 | 27.11 | 13500 | 0.5388 | 0.3408 | | 0.0206 | 28.11 | 14000 | 0.5361 | 0.3388 | | 0.0224 | 29.12 | 14500 | 0.5361 | 0.3380 | | 02610d3c457ab238773e8be916f47378 |
apache-2.0 | ['Image Captioning'] | false | Model Description These are model weights originally provided by the authors of the paper [Text-Only Training for Image Captioning using Noise-Injected CLIP](https://arxiv.org/pdf/2211.00575.pdf). Their method aims to train CLIP with only text samples. Therefore they are injecting zero-mean Gaussian Noise into the text embeddings before decoding. In their words: *Specifically, we assume that the visual embedding corresponding to a text embedding lies somewhere within a ball of small radius around the text embedding (see Fig. 1). We would like all text embeddings in this ball to decode to the same caption, which should also correspond to the visual content mapped to this ball. We implement this intuition by adding zero-mean Gaussian noise of STD to the text embedding before decoding it.* The "Noise Level" of 0.025 is equivalent to the Noise Variance, which is the square of the STD. The reported metrics are results of a model with a Noise Variance of 0.016, which the authors unfortunately do not provide in their repository. | af47fbc22eded1b2d8de795d0c34ca63 |
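Concretely, the text embedding is perturbed with zero-mean Gaussian noise whose variance is the "noise level"; at 0.016 the standard deviation is √0.016 ≈ 0.1265. A small NumPy sketch of the injection step (the 512-dim embedding is a stand-in, not necessarily the model's actual size):

```python
import numpy as np

rng = np.random.default_rng(0)
noise_variance = 0.016                 # the variance the reported metrics correspond to
noise_std = noise_variance ** 0.5      # ≈ 0.1265

text_embedding = rng.normal(size=512)  # stand-in for a CLIP text embedding
noisy_embedding = text_embedding + rng.normal(0.0, noise_std, size=text_embedding.shape)
```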
apache-2.0 | ['generated_from_trainer'] | false | model_broadclass_onSet1 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9014 - 0 Precision: 0.5217 - 0 Recall: 1.0 - 0 F1-score: 0.6857 - 0 Support: 24 - 1 Precision: 1.0 - 1 Recall: 0.7692 - 1 F1-score: 0.8696 - 1 Support: 39 - 2 Precision: 1.0 - 2 Recall: 0.5652 - 2 F1-score: 0.7222 - 2 Support: 23 - 3 Precision: 1.0 - 3 Recall: 0.75 - 3 F1-score: 0.8571 - 3 Support: 12 - Accuracy: 0.7755 - Macro avg Precision: 0.8804 - Macro avg Recall: 0.7711 - Macro avg F1-score: 0.7837 - Macro avg Support: 98 - Weighted avg Precision: 0.8829 - Weighted avg Recall: 0.7755 - Weighted avg F1-score: 0.7884 - Weighted avg Support: 98 - Wer: 0.9368 - Mtrix: [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 9, 30, 0, 0], [2, 10, 0, 13, 0], [3, 3, 0, 0, 9]] | 841518d32fed43065797998e3b0e210a |
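The per-class and averaged metrics above all follow from the confusion matrix in `Mtrix` (its first row and column are class labels; the remaining rows are true classes, columns predicted). A NumPy sketch that reproduces the reported numbers:

```python
import numpy as np

# confusion matrix from the card: rows = true class, columns = predicted class
cm = np.array([[24,  0,  0, 0],
               [ 9, 30,  0, 0],
               [10,  0, 13, 0],
               [ 3,  0,  0, 9]])

tp = np.diag(cm)
support = cm.sum(axis=1)                 # examples per true class
precision = tp / cm.sum(axis=0)
recall = tp / support
accuracy = tp.sum() / cm.sum()
macro_precision = precision.mean()       # unweighted mean over classes
weighted_precision = (precision * support).sum() / support.sum()
```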
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | 0 Precision | 0 Recall | 0 F1-score | 0 Support | 1 Precision | 1 Recall | 1 F1-score | 1 Support | 2 Precision | 2 Recall | 2 F1-score | 2 Support | 3 Precision | 3 Recall | 3 F1-score | 3 Support | Accuracy | Macro avg Precision | Macro avg Recall | Macro avg F1-score | Macro avg Support | Weighted avg Precision | Weighted avg Recall | Weighted avg F1-score | Weighted avg Support | Wer | Mtrix | |:-------------:|:-----:|:----:|:---------------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:-----------:|:--------:|:----------:|:---------:|:--------:|:-------------------:|:----------------:|:------------------:|:-----------------:|:----------------------:|:-------------------:|:---------------------:|:--------------------:|:------:|:---------------------------------------------------------------------------------------:| | 2.395 | 4.16 | 100 | 2.2004 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] | | 2.2919 | 8.33 | 200 | 2.1576 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] | | 2.0987 | 12.49 | 300 | 2.0882 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] | | 1.9079 | 16.65 | 400 | 1.8619 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] | | 1.7168 | 20.82 | 500 | 1.6469 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] | | 1.551 | 24.98 | 600 | 1.6614 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] | | 1.6399 | 29.16 | 700 | 1.5818 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] | | 1.3329 | 33.33 | 800 | 1.2267 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] | | 1.1996 | 37.49 | 900 | 1.2143 | 0.2449 | 1.0 | 0.3934 | 24 | 0.0 | 0.0 | 0.0 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2449 | 0.0612 | 0.25 | 0.0984 | 98 | 0.0600 | 0.2449 | 0.0964 | 98 | 0.9879 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 39, 0, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] | | 1.01 | 41.65 | 1000 | 0.9496 | 0.2474 | 1.0 | 0.3967 | 24 | 1.0 | 0.0256 | 0.05 | 39 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.2551 | 0.3119 | 0.2564 | 0.1117 | 98 | 0.4586 | 0.2551 | 0.1170 | 98 | 0.9777 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 38, 1, 0, 0], [2, 23, 0, 0, 0], [3, 12, 0, 0, 0]] | | 0.9516 | 45.82 | 1100 | 0.9471 | 0.2927 | 1.0 | 0.4528 | 24 | 1.0 | 0.3846 | 0.5556 | 39 | 1.0 | 0.0435 | 0.0833 | 23 | 0.0 | 0.0 | 0.0 | 12 | 0.4082 | 0.5732 | 0.3570 | 0.2729 | 98 | 0.7043 | 0.4082 | 0.3515 | 98 | 0.9661 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 24, 15, 0, 0], [2, 22, 0, 1, 0], [3, 12, 0, 0, 0]] | | 0.9544 | 49.98 | 1200 | 0.9452 | 0.3582 | 1.0 | 0.5275 | 24 | 1.0 | 0.5128 | 0.6780 | 39 | 1.0 | 0.3043 | 0.4667 | 23 | 0.75 | 0.25 | 0.375 | 12 | 0.5510 | 0.7771 | 0.5168 | 0.5118 | 98 | 0.8122 | 0.5510 | 0.5544 | 98 | 0.9540 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 18, 20, 0, 1], [2, 16, 0, 7, 0], [3, 9, 0, 0, 3]] | | 0.9538 | 54.16 | 1300 | 0.9259 | 0.4615 | 1.0 | 0.6316 | 24 | 1.0 | 0.6923 | 0.8182 | 39 | 1.0 | 0.5217 | 0.6857 | 23 | 0.8571 | 0.5 | 0.6316 | 12 | 0.7041 | 0.8297 | 0.6785 | 0.6918 | 98 | 0.8506 | 0.7041 | 0.7185 | 98 | 0.9439 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 11, 27, 0, 1], [2, 11, 0, 12, 0], [3, 6, 0, 0, 6]] | | 0.952 | 58.33 | 1400 | 0.9052 | 0.4528 | 1.0 | 0.6234 | 24 | 1.0 | 0.6667 | 0.8 | 39 | 1.0 | 0.4348 | 0.6061 | 23 | 0.8889 | 0.6667 | 0.7619 | 12 | 0.6939 | 0.8354 | 0.6920 | 0.6978 | 98 | 0.8524 | 0.6939 | 0.7066 | 98 | 0.9464 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 12, 26, 0, 1], [2, 13, 0, 10, 0], [3, 4, 0, 0, 8]] | | 0.8938 | 62.49 | 1500 | 0.9070 | 0.48 | 1.0 | 0.6486 | 24 | 0.9677 | 0.7692 | 0.8571 | 39 | 1.0 | 0.4348 | 0.6061 | 23 | 1.0 | 0.5833 | 0.7368 | 12 | 0.7245 | 0.8619 | 0.6968 | 0.7122 | 98 | 0.8598 | 0.7245 | 0.7324 | 98 | 0.9398 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 9, 30, 0, 0], [2, 12, 1, 10, 0], [3, 5, 0, 0, 7]] | | 0.9027 | 66.65 | 1600 | 0.8919 | 0.5714 | 1.0 | 0.7273 | 24 | 1.0 | 0.8462 | 0.9167 | 39 | 1.0 | 0.7391 | 0.85 | 23 | 1.0 | 0.5 | 0.6667 | 12 | 0.8163 | 0.8929 | 0.7713 | 0.7902 | 98 | 0.8950 | 0.8163 | 0.8240 | 98 | 0.9398 | [[0, 1, 2, 3], [0, 24, 0, 0, 0], [1, 6, 33, 0, 0], [2, 6, 0, 17, 0], [3, 6, 0, 0, 6]] | | 40b98e4b0dbccd6e4ba6529851a6d0a8 |
mit | ['generated_from_trainer'] | false | roberta_large-ner-conll2003_0818_v1 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.1481 - Precision: 0.8993 - Recall: 0.9269 - F1: 0.9129 - Accuracy: 0.9784 | 5182fb790eb5aac6903126815d60e2c0 |
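The F1 in the row above is the harmonic mean of the reported precision and recall; a quick check against the card's numbers:

```python
precision, recall = 0.8993, 0.9269  # values reported above
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
```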
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 | 3bbf1b6a1f5b09353f4815536244682e |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2033 | 1.0 | 878 | 0.0472 | 0.9277 | 0.9551 | 0.9412 | 0.9887 | | 0.044 | 2.0 | 1756 | 0.0428 | 0.9365 | 0.9610 | 0.9486 | 0.9895 | | 415c6972217b132fd5f26e10e2416705 |
apache-2.0 | ['generated_from_trainer'] | false | demo_hate_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8697 - F1: 0.7773 | 641c41ab5ba13fa47c663f192fbe81de |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.320702985778492e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 | 84e36b56d485d3290454686b2e6ceba9 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 282 | 0.4850 | 0.7645 | | 0.3877 | 2.0 | 564 | 0.5160 | 0.7856 | | 0.3877 | 3.0 | 846 | 0.6927 | 0.7802 | | 0.1343 | 4.0 | 1128 | 0.8697 | 0.7773 | | 2e1cd330fbb83c3ce4861f3210fc6e48 |
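Note that in the table above, validation loss and F1 disagree on the best checkpoint: loss is lowest after epoch 1, while F1 peaks at epoch 2 — one reason to select checkpoints by the task metric rather than by loss. A sketch using the values from the table:

```python
# (epoch, validation_loss, f1) rows copied from the table above
history = [(1, 0.4850, 0.7645), (2, 0.5160, 0.7856), (3, 0.6927, 0.7802), (4, 0.8697, 0.7773)]

best_by_loss = min(history, key=lambda row: row[1])[0]  # lowest validation loss
best_by_f1 = max(history, key=lambda row: row[2])[0]    # highest F1
```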
mit | ['generated_from_keras_callback'] | false | HuggingAlex1247/gelectra-large-germaner This model is a fine-tuned version of [deepset/gelectra-large](https://huggingface.co/deepset/gelectra-large) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1696 - Validation Loss: 0.0800 - Epoch: 0 | fafeecdd653c57729c59b479fbcaf4d2 |
mit | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3475, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 | 94fd3cd0e69adf95ffe4567dc98e9de0 |
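With `power: 1.0`, the `PolynomialDecay` config above is simply a linear ramp from 3e-05 to 0 over 3475 steps. A sketch of the decay formula (mirroring the config's semantics, not the Keras implementation):

```python
def polynomial_decay(step: int, initial_lr: float = 3e-05, end_lr: float = 0.0,
                     decay_steps: int = 3475, power: float = 1.0) -> float:
    """Polynomial learning-rate decay; power=1.0 makes it linear."""
    step = min(step, decay_steps)  # cycle=False: hold end_lr after decay_steps
    return end_lr + (initial_lr - end_lr) * (1 - step / decay_steps) ** power
```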
apache-2.0 | ['automatic-speech-recognition', 'it'] | false | exp_w2v2t_it_vp-it_s411 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (it)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool. | 43afa61eeb6fff992f72af94baf21dc5 |
mit | [] | false | a female hero from The Legend of Mir on Stable Diffusion This is the `a <female-hero> from The Legend of Mir` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: (concept preview images not rendered) | 93fd02e38d2c0cadc0d05a66e559c65d
apache-2.0 | ['automatic-speech-recognition', 'hf-asr-leaderboard', 'it', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event'] | false | Fine-tuned XLS-R 1B model for speech recognition in Italian Fine-tuned [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on Italian using the train and validation splits of [Common Voice 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0), [Multilingual TEDx](http://www.openslr.org/100), [Multilingual LibriSpeech](https://www.openslr.org/94/), and [Voxpopuli](https://github.com/facebookresearch/voxpopuli). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool, and thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :) | 129cbb4511ba1bef9741b7cd9d5b6c7e |
apache-2.0 | ['automatic-speech-recognition', 'hf-asr-leaderboard', 'it', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event'] | false | Usage Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-xls-r-1b-italian") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` Writing your own inference script: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "it" MODEL_ID = "jonatasgrosman/wav2vec2-xls-r-1b-italian" SAMPLES = 10 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) | 85697d7c8436231fe128448836983ba0 |
apache-2.0 | ['automatic-speech-recognition', 'hf-asr-leaderboard', 'it', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event'] | false | # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) ``` | 4f6f5f19d263975e3b459136ed828da2
apache-2.0 | ['automatic-speech-recognition', 'hf-asr-leaderboard', 'it', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event'] | false | Evaluation Commands 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-italian --dataset mozilla-foundation/common_voice_8_0 --config it --split test ``` 2. To evaluate on `speech-recognition-community-v2/dev_data` ```bash python eval.py --model_id jonatasgrosman/wav2vec2-xls-r-1b-italian --dataset speech-recognition-community-v2/dev_data --config it --split validation --chunk_length_s 5.0 --stride_length_s 1.0 ``` | dac2f5411973586fc308cd38baeffdf9 |
apache-2.0 | ['automatic-speech-recognition', 'hf-asr-leaderboard', 'it', 'mozilla-foundation/common_voice_8_0', 'robust-speech-event'] | false | Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr-1b-italian, title={Fine-tuned {XLS-R} 1{B} model for speech recognition in {I}talian}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-italian}}, year={2022} } ``` | 2702071c239b9e573f7e22befa94e28d |
apache-2.0 | ['translation'] | false | opus-mt-fr-bzs * source languages: fr * target languages: bzs * OPUS readme: [fr-bzs](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-bzs/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-bzs/opus-2020-01-09.eval.txt) | 1d6b481690bde45206469f6f2138d248 |
other | ['stable-diffusion', 'text-to-image'] | false | Cool Japan Diffusion 2.1.1 Beta Model Card  [Notice: China will impose legal restrictions on image-generating AI.](http://www.cac.gov.cn/2022-12/11/c_1672221949318230.htm) (a warning for people inside China) English version is [here](README_en.md). | 921e931f9c023f575b221905126b3184
other | ['stable-diffusion', 'text-to-image'] | false | How to Use If you just want to try the model, use this [Space](https://huggingface.co/spaces/aipicasso/cool-japan-diffusion-latest-demo). Detailed instructions for handling this model are given in [this manual](https://alfredplpl.hatenablog.com/entry/2023/01/11/182146). The model can be downloaded from [here](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-1-beta/resolve/main/v2-1-1-beta.ckpt). What follows is the general model card, translated from the Japanese. | 3855203a1604ecada3347c4f264f7086
other | ['stable-diffusion', 'text-to-image'] | false | Using Diffusers Use [🤗's Diffusers library](https://github.com/huggingface/diffusers). First, run the following script to install the libraries: ```bash pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy ``` Then run the following script to generate an image: ```python from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler import torch model_id = "aipicasso/cool-japan-diffusion-2-1-1-beta" scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16) | 7f56d401cd372850a0a5d92f593bb0d8
other | ['stable-diffusion', 'text-to-image'] | false | ,use_auth_token="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") pipe = pipe.to("cuda") prompt = "anime, a portrait of a girl with black short hair and red eyes, kimono, full color illustration, official art, 4k, detailed" negative_prompt="(((deformed))), blurry, ((((bad anatomy)))), bad pupil, disfigured, poorly drawn face, mutation, mutated, (extra limb), (ugly), (poorly drawn hands), bad hands, fused fingers, messy drawing, broken legs censor, low quality, ((mutated hands and fingers:1.5), (long body :1.3), (mutation, poorly drawn :1.2), ((bad eyes)), ui, error, missing fingers, fused fingers, one hand with more than 5 fingers, one hand with less than 5 fingers, one hand with more than 5 digit, one hand with less than 5 digit, extra digit, fewer digits, fused digit, missing digit, bad digit, liquid digit, long body, uncoordinated body, unnatural body, lowres, jpeg artifacts, 2d, 3d, cg, text" image = pipe(prompt,negative_prompt=negative_prompt, width=512, height=512, num_inference_steps=20).images[0] image.save("girl.png") ``` **Note**: - Using [xformers](https://github.com/facebookresearch/xformers) reportedly makes generation faster. - If your GPU has little memory, use `pipe.enable_attention_slicing()`. | c69c1941e7c05ad43cb71b01a88348c5
other | ['stable-diffusion', 'text-to-image'] | false | Intended Uses - Contests - Submissions to the [AI Art Grand Prix](https://www.aiartgrandprix.com/) - All data used for fine-tuning will be disclosed so that judges can confirm that the entry meets the screening criteria. - If you have requests regarding the contest, please tell me via the Hugging Face Community or similar. - News reporting on image-generation AI - Allowed not only for public broadcasters but also for commercial companies - This is because we judged that the "right to know" about image-synthesis AI does not harm the creative industry, and we respect freedom of the press. - Introducing Cool Japan - Explaining to people from other countries what Cool Japan is. - International students often come to Japan drawn by Cool Japan, and Alfred Increment feels that many of them are disappointed to find that Cool Japan is considered "uncool" within Japan. Please take more pride in the parts of your own culture that people abroad admire. - Research and development - Using the model on Discord - Prompt engineering - Fine-tuning (also called additional training) - DreamBooth, etc. - Merging with other models - Studying the affinity between Latent Diffusion Models and Cool Japan - Measuring this model's performance with metrics such as FID - Checking, e.g. with checksums or hash functions, that this model is independent of models other than Stable Diffusion - Education - Graduation projects by art-college and vocational-school students - University students' graduation theses and coursework - Teachers explaining the current state of image-generation AI - Self-expression - Expressing your own feelings and thoughts on social media - Uses written up in the Hugging Face Community - Please ask questions in Japanese or English | 6000810e09b9b59e3de9a4ab660faf24
other | ['stable-diffusion', 'text-to-image'] | false | Training **Training Data** Stable Diffusion was fine-tuned mainly on the following data. - For the VAE - Data compliant with Japanese domestic law, excluding unauthorized-repost sites such as Danbooru: 600,000 items (with unlimited images produced via data augmentation) - For the U-Net - Data compliant with Japanese domestic law, excluding unauthorized-repost sites such as Danbooru: 800,000 pairs **Training Process** The VAE and U-Net of Stable Diffusion were fine-tuned. - **Hardware:** RTX 3090 - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Batch size:** 1 | d31e3e8a579062fe1bfed3f08f5e3c3c
apache-2.0 | ['generated_from_trainer'] | false | Article_100v7_NER_Model_3Epochs_UNAUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v7_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.6011 - Precision: 0.1661 - Recall: 0.0138 - F1: 0.0254 - Accuracy: 0.7860 | d139774adf6d5d5f96941359f670d098 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 12 | 0.7375 | 0.0 | 0.0 | 0.0 | 0.7810 | | No log | 2.0 | 24 | 0.6356 | 0.0571 | 0.0010 | 0.0020 | 0.7820 | | No log | 3.0 | 36 | 0.6011 | 0.1661 | 0.0138 | 0.0254 | 0.7860 | | 8987bfd8512b7d9f5b5122fd6c17c74d |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.2293 | 7d6c038919d4c32297200ba3f9297714 |
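The SQuAD v2 fine-tune above reports only a loss and shows no usage. The card gives no hub id for this fine-tune, so this sketch uses the public `distilbert-base-uncased-distilled-squad` checkpoint as a stand-in — point `model=` at your own path:

```python
from transformers import pipeline

# Stand-in checkpoint with the same architecture; the card does not
# publish a hub id for its own fine-tune.
qa = pipeline("question-answering",
              model="distilbert-base-uncased-distilled-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased "
            "on the squad_v2 dataset.",
)
print(result)  # dict with 'score', 'start', 'end', 'answer'
```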
apache-2.0 | ['generated_from_trainer'] | false | vit-base-patch16-224-finetuned This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.7617 - Accuracy: 0.3333 | 7771d697e908b042d19750a57f16b3e3 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.6063 | 0.6667 | | No log | 2.0 | 2 | 0.6958 | 0.3333 | | No log | 3.0 | 3 | 0.7617 | 0.3333 | | 5249eeddc80e100da28f63d089b7934b |
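The ViT fine-tune above has no usage snippet, and its hub id is not given in the card; this sketch runs the base `google/vit-base-patch16-224` as a stand-in — swap in your own fine-tuned checkpoint path:

```python
from io import BytesIO

import requests
from PIL import Image
from transformers import pipeline

# Base model as a runnable stand-in; replace with your fine-tuned path.
classifier = pipeline("image-classification",
                      model="google/vit-base-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # two cats
image = Image.open(BytesIO(requests.get(url, timeout=30).content))
preds = classifier(image)
print(preds[0])  # top label with its score
```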
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-15 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8623 - Wer: 0.8585 | f01116742d890409fd4660519a9a9a81 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 400 - num_epochs: 30 | c61936d1e69fc60ea1e9eb3f7aedd088 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 9.6808 | 1.37 | 200 | 3.7154 | 1.0 | | 3.0784 | 2.74 | 400 | 3.1542 | 1.0 | | 2.8919 | 4.11 | 600 | 2.9918 | 1.0 | | 2.8317 | 5.48 | 800 | 2.8971 | 1.0 | | 2.7958 | 6.85 | 1000 | 2.8409 | 1.0 | | 2.7699 | 8.22 | 1200 | 2.8278 | 1.0 | | 2.6365 | 9.59 | 1400 | 2.4657 | 1.0 | | 2.1096 | 10.96 | 1600 | 1.8358 | 0.9988 | | 1.6485 | 12.33 | 1800 | 1.4525 | 0.9847 | | 1.3967 | 13.7 | 2000 | 1.2467 | 0.9532 | | 1.2492 | 15.07 | 2200 | 1.1261 | 0.9376 | | 1.1543 | 16.44 | 2400 | 1.0654 | 0.9194 | | 1.0863 | 17.81 | 2600 | 1.0136 | 0.9161 | | 1.0275 | 19.18 | 2800 | 0.9601 | 0.8827 | | 0.9854 | 20.55 | 3000 | 0.9435 | 0.8878 | | 0.9528 | 21.92 | 3200 | 0.9170 | 0.8807 | | 0.926 | 23.29 | 3400 | 0.9121 | 0.8783 | | 0.9025 | 24.66 | 3600 | 0.8884 | 0.8646 | | 0.8909 | 26.03 | 3800 | 0.8836 | 0.8690 | | 0.8717 | 27.4 | 4000 | 0.8810 | 0.8646 | | 0.8661 | 28.77 | 4200 | 0.8623 | 0.8585 | | f44a082e0bfdf8b212df334bd7c93d87 |
mit | ['summarization'] | false | Model description BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. This model is a fine-tuned version of [KBLab/bart-base-swedish-cased](https://huggingface.co/KBLab/bart-base-swedish-cased) on the [Gabriel/cnn_daily_swe](https://huggingface.co/datasets/Gabriel/cnn_daily_swe) dataset and can be used for summarization tasks. | c753ed834116bc864085ffdb8822b424
mit | ['summarization'] | false | Intended uses & limitations This model should only be used for further fine-tuning and for summarization tasks. ```python from transformers import pipeline summarizer = pipeline("summarization", model="Gabriel/bart-base-cnn-swe") ARTICLE = """ Frankrike lås Sebastien Chabal har nämnts för en farlig tackling på Englands Simon Shaw under lördagens VM semifinal i Paris. Simon Shaw lastar av trots att Raphael Ibanez, vänster, och Sebastien Chabal. Sale Sharks framåt kommer att ställas inför en disciplinär utfrågning på måndag efter hans tackling på motsatt andra-rower Shaw noterades genom att citera kommissionär Dennis Wheelahan. Chabal började matchen på ersättningsbänken, men kom i 26: e minuten att ersätta den skadade Fabien Pelous under värd Frankrikes 14-9 nederlag. Om han blir avstängd missar Chabal fredagens tredje och fjärde match på Parc des Princes. Samtidigt, Frankrike tränare Bernard Laporte sade att nederlaget var svårare att ta än Englands 24-7 seger i 2003 semifinalen. "År 2003 var de bättre än oss. I själva verket var de bättre än alla", sade Laporte, som lämnar sin roll att tillträda posten som junior idrottsminister i den franska regeringen. "De var som Nya Zeeland i denna turnering - favoriten, förutom att de gick hela vägen. Den här gången är det svårare för igår var det 50-50." Samtidigt, England -- försöker bli den första nationen att försvara VM-titeln -- avslöjade att stjärna kicker Jonny Wilkinson återigen hade problem med matchbollarna under semifinalen. Flughalvan, som uttryckte sin oro efter att ha kämpat med stöveln mot Australien, avvisade en boll innan han sparkade en vital trepoängare mot Frankrike. "Vi sa det inte förra veckan men en icke-match bollen kom ut på fältet i Marseille som Jonny sparkade," chef för rugby Rob Andrew sade. "Han tänkte inte på det när han sparkade det. Matchbollarna är märkta, numrerade ett till sex. Igår kväll hade de "World Cup semifinal England vs Frankrike" skrivet på dem. 
På matchkvällen var Jonny vaksam när han sparkade för mål att de faktiskt var matchbollar han sparkade. "Träningsbollarna förlorar tryck och form. Hela frågan förra veckan, arrangörerna accepterade alla sex matchbollar bör användas av båda sidor på torsdagen före matchen. " E-post till en vän. """ print(summarizer(ARTICLE, max_length=130, min_length=30, num_beams=10 ,do_sample=False)) >>> [{'summary_text': """ Frankrike lås Sebastien Chabal har nämnts för en farlig tackling på Englands Simon Shaw under VM semifinal i Paris. Sale Sharks framåt kommer att ställas inför en disciplinär utfrågning på måndag efter hans tackling på motsatt andra - rower Shaw noterades genom att citera kommissionär Dennis Wheelahan. Om Chabal blir avstängd missar Chabal fredagens tredje och fjärde match på Parc des Princes."""}] ``` | 861e2fbb674ce08c0bf20c02d0743c2b |
mit | ['summarization'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2*2 = 4 - mixed_precision_training: Native AMP | 75203e493f9d65307014fd224206f068 |
mit | ['summarization'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 2.2349 | 1.0 | 17944 | 2.0643 | 21.9564 | 10.2133 | 17.9958 | 20.6502 | 19.9992 | | 2.0726 | 2.0 | 35888 | 2.0253 | 22.0568 | 10.3302 | 18.0648 | 20.7482 | 19.9996 | | 1.8658 | 3.0 | 53832 | 2.0333 | 22.0871 | 10.2902 | 18.0577 | 20.7082 | 19.998 | | 1.8121 | 4.0 | 71776 | 1.9759 | 22.2046 | 10.4332 | 18.1753 | 20.846 | 19.9971 | | 6003efcc9f0c4a9795426fff8522bfb8 |
apache-2.0 | ['es', 'ticket classification'] | false | How to use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("hiiamsid/BETO_es_binary_classification") model = AutoModelForSequenceClassification.from_pretrained("hiiamsid/BETO_es_binary_classification") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | 57bd69ac2934f20c516d18bedabd26fc
apache-2.0 | ['translation'] | false | opus-mt-lue-sv * source languages: lue * target languages: sv * OPUS readme: [lue-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lue-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lue-sv/opus-2020-01-09.eval.txt) | 7653173fca9dab3f41a4671233bb8b35 |
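As with the other OPUS-MT cards, opus-mt-lue-sv lists artifacts but no inference code. A pipeline-based sketch, again assuming the standard `Helsinki-NLP/opus-mt-<src>-<tgt>` hub id:

```python
from transformers import pipeline

# Assumed hub id following the Helsinki-NLP naming convention.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-lue-sv")

# Placeholder source text; substitute real Luvale input.
result = translator("Mwane.")
print(result[0]["translation_text"])
```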
apache-2.0 | ['summarization', 'en', 'mt5', 'Abstractive Summarization', 'generated_from_trainer'] | false | mt5-base-finetuned-en-cnn This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 3.1286 - Rouge-1: 22.84 - Rouge-2: 10.11 - Rouge-l: 21.8 - Gen Len: 19.0 - Bertscore: 87.12 | 073ae8b634c94844635d578bb7480ac7 |
apache-2.0 | ['summarization', 'en', 'mt5', 'Abstractive Summarization', 'generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 5 - label_smoothing_factor: 0.1 | eb69c00cf27fca7752532b84976b540c |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1000 | 57e38eb62871ab11d6f313151d985c8f |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | MultiBERTs Seed 4 Checkpoint 80k (uncased) This is the seed-4 MultiBERTs (pretrained BERT) model at intermediate checkpoint 80k, pretrained on English using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint; the final checkpoint can be found at [multiberts-seed-4](https://hf.co/multiberts-seed-4). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani). | f3795150f2b9cde6201937fd5929f032
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-80k') model = BertModel.from_pretrained("multiberts-seed-4-80k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | 107186a069a688a68b3cb39fd70431e2 |
apache-2.0 | ['generated_from_trainer'] | false | my_awesome_billsum_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5312 - Rouge1: 0.1421 - Rouge2: 0.0515 - Rougel: 0.1184 - Rougelsum: 0.1182 - Gen Len: 19.0 | 0c7d753d0c98ca497fe15d652177f624 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP | 7c6c81c1a7411a439652a5ea9caff978 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 62 | 2.8298 | 0.1269 | 0.0364 | 0.1068 | 0.1068 | 19.0 | | No log | 2.0 | 124 | 2.6134 | 0.133 | 0.045 | 0.1114 | 0.1109 | 19.0 | | No log | 3.0 | 186 | 2.5476 | 0.142 | 0.0518 | 0.118 | 0.1179 | 19.0 | | No log | 4.0 | 248 | 2.5312 | 0.1421 | 0.0515 | 0.1184 | 0.1182 | 19.0 | | f19d0b7e81c4487e1c62984112842d83 |
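The billsum fine-tune above reports ROUGE scores but no usage. Its hub id is not given, so this sketch runs the `t5-small` base model (which handles the summarization task out of the box) as a stand-in — point `model=` at your own checkpoint:

```python
from transformers import pipeline

# Base model as a runnable stand-in; the card omits the fine-tune's hub id.
summarizer = pipeline("summarization", model="t5-small")

text = (
    "The bill requires the Secretary of Energy to establish a grant program "
    "to support research into advanced battery storage, and authorizes "
    "appropriations for fiscal years 2024 through 2028."
)
summary = summarizer(text, max_length=30, min_length=5, do_sample=False)
print(summary[0]["summary_text"])
```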
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP | d9a65a780982342b0e07f4f4be9a358c |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:| | No log | 1.0 | 188 | 2.1202 | 7.5964 | 17.3996 | | 6d09b55c8708cce4056505c22aadacc3 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0611 - Precision: 0.9210 - Recall: 0.9357 - F1: 0.9283 - Accuracy: 0.9832 | 94beb9c2d3bd3a4ad05d458740a8b12b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2341 | 1.0 | 878 | 0.0734 | 0.9118 | 0.9206 | 0.9162 | 0.9799 | | 0.0546 | 2.0 | 1756 | 0.0591 | 0.9210 | 0.9350 | 0.9279 | 0.9829 | | 0.0297 | 3.0 | 2634 | 0.0611 | 0.9210 | 0.9357 | 0.9283 | 0.9832 | | 9bf2431be89d97af3676f0e5f988083d |
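The CoNLL-2003 NER fine-tune above shows metrics but no inference code, and no hub id. A sketch using `dslim/bert-base-NER` (a public CoNLL-2003 fine-tune) as a stand-in — swap in your own path:

```python
from transformers import pipeline

# Public CoNLL-2003 checkpoint as a stand-in; the card does not give
# the hub id of its own fine-tune.
ner = pipeline("token-classification", model="dslim/bert-base-NER",
               aggregation_strategy="simple")

entities = ner("Hugging Face is based in New York City.")
for ent in entities:
    # Each entry groups subword tokens into one entity span.
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```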
mit | ['roberta-base', 'roberta-base-epoch_21'] | false | RoBERTa, Intermediate Checkpoint - Epoch 21 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We trained this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before training) to enable studying the training dynamics of such models, among other possible use cases. These models were trained as part of a work that studies how simple statistics of the data, such as co-occurrences, affect model predictions, as described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_21. | 83feb44566c2bcfc4d54ac1c337d9c94
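The checkpoint card above does not show how to load an intermediate epoch. A fill-mask sketch — the `yanaiela/` namespace is inferred from the paper's authors and the card's tags, so treat the id as an assumption:

```python
from transformers import pipeline

# Assumed hub id built from the card's tag "roberta-base-epoch_21";
# change the epoch suffix to load a different intermediate checkpoint.
fill = pipeline("fill-mask", model="yanaiela/roberta-base-epoch_21")

preds = fill("The capital of France is <mask>.")
for p in preds:
    print(p["token_str"], round(p["score"], 3))
```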
apache-2.0 | ['generated_from_trainer'] | false | bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0583 - Precision: 0.9396 - Recall: 0.9530 - F1: 0.9463 - Accuracy: 0.9868 | ccc35f6da6aa1d65ca0920869753bdf1 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0883 | 1.0 | 1756 | 0.0702 | 0.9184 | 0.9320 | 0.9252 | 0.9819 | | 0.0338 | 2.0 | 3512 | 0.0661 | 0.9263 | 0.9480 | 0.9370 | 0.9853 | | 0.0174 | 3.0 | 5268 | 0.0583 | 0.9396 | 0.9530 | 0.9463 | 0.9868 | | bf365309a998c367da6fa302a9d9b5aa |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Model Details Neural machine translation model for translating from Persian (fa) to Italic languages (itc). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation (transformer-big) - **Release**: 2022-07-23 - **License:** CC-BY-4.0 - **Language(s):** - Source Language(s): fas - Target Language(s): fra ita por ron spa - Language Pair(s): fas-fra fas-por fas-ron - Valid Target Language Labels: >>acf<< >>aoa<< >>arg<< >>ast<< >>cat<< >>cbk<< >>ccd<< >>cks<< >>cos<< >>cri<< >>crs<< >>dlm<< >>drc<< >>egl<< >>ext<< >>fab<< >>fax<< >>fra<< >>frc<< >>frm<< >>fro<< >>frp<< >>fur<< >>gcf<< >>gcr<< >>glg<< >>hat<< >>idb<< >>ist<< >>ita<< >>itk<< >>kea<< >>kmv<< >>lad<< >>lad_Latn<< >>lat<< >>lat_Latn<< >>lij<< >>lld<< >>lmo<< >>lou<< >>mcm<< >>mfe<< >>mol<< >>mwl<< >>mxi<< >>mzs<< >>nap<< >>nrf<< >>oci<< >>osc<< >>osp<< >>pap<< >>pcd<< >>pln<< >>pms<< >>pob<< >>por<< >>pov<< >>pre<< >>pro<< >>qbb<< >>qhr<< >>rcf<< >>rgn<< >>roh<< >>ron<< >>ruo<< >>rup<< >>ruq<< >>scf<< >>scn<< >>sdc<< >>sdn<< >>spa<< >>spq<< >>spx<< >>src<< >>srd<< >>sro<< >>tmg<< >>tvy<< >>vec<< >>vkp<< >>wln<< >>xfa<< >>xum<< - **Original Model**: 
[opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fas-itc/opusTCv20210807_transformer-big_2022-07-23.zip) - **Resources for more information:** - [OPUS-MT-train GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) - More information about released models for this language pair: [OPUS-MT fas-itc README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/fas-itc/README.md) - [More information about MarianNMT models in the transformers library](https://huggingface.co/docs/transformers/model_doc/marian) - [Tatoeba Translation Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/) This is a multilingual translation model with multiple target languages. A sentence-initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>fra<<` | 49b1a2f89d68b526bc1c5c3b8016946d
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | How to Get Started With the Model A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>lad<< اسلام زیباست.", ">>spa<< ورود به کتابخانه رایگان است." ] model_name = "pytorch-models/opus-mt-tc-big-fa-itc" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) | 1d5d589b42eb0e193a3d16a6ca55059f |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | La entrada a la biblioteca es gratuita. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-fa-itc") print(pipe(">>lad<< اسلام زیباست.")) ``` | 6271ca872a0adf49f032f95160d0b0d4
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Training - **Data**: opusTCv20210807 ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) - **Pre-processing**: SentencePiece (spm32k,spm32k) - **Model Type:** transformer-big - **Original MarianNMT Model**: [opusTCv20210807_transformer-big_2022-07-23.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/fas-itc/opusTCv20210807_transformer-big_2022-07-23.zip) - **Training Scripts**: [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train) | a5a1f98e5dc32a5c818230b4cf69089a |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | Evaluation * test set translations: [opusTCv20210807_transformer-big_2022-07-23.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fas-itc/opusTCv20210807_transformer-big_2022-07-23.test.txt) * test set scores: [opusTCv20210807_transformer-big_2022-07-23.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/fas-itc/opusTCv20210807_transformer-big_2022-07-23.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | | 85bc9f2cd4b98369178e2a1135a671fa |
cc-by-4.0 | ['translation', 'opus-mt-tc'] | false | #sent | #words | |----------|---------|-------|-------|-------|--------| | fas-fra | tatoeba-test-v2021-08-07 | 0.57949 | 37.5 | 376 | 3377 | | fas-fra | flores101-devtest | 0.55883 | 28.9 | 1012 | 28343 | | fas-ita | flores101-devtest | 0.49512 | 19.7 | 1012 | 27306 | | fas-por | flores101-devtest | 0.54829 | 27.6 | 1012 | 26519 | | fas-ron | flores101-devtest | 0.48821 | 19.7 | 1012 | 26799 | | fas-spa | flores101-devtest | 0.47722 | 19.4 | 1012 | 29199 | | 55cac370ddf7367b6c4316d724488518
apache-2.0 | [] | false | Model Description This model is a fine-tuned version of [DmitryPogrebnoy/distilbert-base-russian-cased](https://huggingface.co/DmitryPogrebnoy/distilbert-base-russian-cased). The code for the fine-tuning process can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/spellchecker/ml_ranging/models/med_distilbert_base_russian_cased/fine_tune_distilbert_base_russian_cased.py). The model is fine-tuned on a specially collected dataset of over 30,000 medical anamneses in Russian. The collected dataset can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker/blob/main/data/anamnesis/processed/all_anamnesis.csv). This model was created as part of a master's project to develop a method for correcting typos in medical histories, using BERT models to rank candidate corrections. The project is open source and can be found [here](https://github.com/DmitryPogrebnoy/MedSpellChecker). | 4e96a8d2a97155301ac3fc4fe4a199ba |
apache-2.0 | [] | false | How to Get Started With the Model You can use the model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> pipeline = pipeline('fill-mask', model='DmitryPogrebnoy/MedDistilBertBaseRuCased') >>> pipeline("У пациента [MASK] боль в грудине.") [{'score': 0.1733243614435196, 'token': 6880, 'token_str': 'имеется', 'sequence': 'У пациента имеется боль в грудине.'}, {'score': 0.08818087726831436, 'token': 1433, 'token_str': 'есть', 'sequence': 'У пациента есть боль в грудине.'}, {'score': 0.03620537742972374, 'token': 3793, 'token_str': 'особенно', 'sequence': 'У пациента особенно боль в грудине.'}, {'score': 0.03438418731093407, 'token': 5168, 'token_str': 'бол', 'sequence': 'У пациента бол боль в грудине.'}, {'score': 0.032936397939920425, 'token': 6281, 'token_str': 'протекает', 'sequence': 'У пациента протекает боль в грудине.'}] ``` Or you can load the model and tokenizer and do what you need to do: ```python >>> from transformers import AutoTokenizer, AutoModelForMaskedLM >>> tokenizer = AutoTokenizer.from_pretrained("DmitryPogrebnoy/MedDistilBertBaseRuCased") >>> model = AutoModelForMaskedLM.from_pretrained("DmitryPogrebnoy/MedDistilBertBaseRuCased") ``` | f702c850fe7b2b00f8fa36394b916c4d |
apache-2.0 | ['generated_from_trainer'] | false | bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0622 - Precision: 0.9314 - Recall: 0.9507 - F1: 0.9410 - Accuracy: 0.9863 | b86e91011deb9ce121b04591bd6ea12e |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0821 | 1.0 | 1756 | 0.0639 | 0.9108 | 0.9371 | 0.9238 | 0.9834 | | 0.0366 | 2.0 | 3512 | 0.0585 | 0.9310 | 0.9497 | 0.9403 | 0.9857 | | 0.019 | 3.0 | 5268 | 0.0622 | 0.9314 | 0.9507 | 0.9410 | 0.9863 | | e5522cf8008d0e5f5c6d747449b46ae3 |
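The reported F1 in the results above is the harmonic mean of precision and recall. A quick pure-Python sanity check, using the final-epoch figures from the table, recovers the rounded value:

```python
# Sanity check: F1 is the harmonic mean of precision and recall.
# Figures taken from the final epoch of the training results table above.
precision = 0.9314
recall = 0.9507

f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 4))  # ~0.9410, matching the reported F1
```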
apache-2.0 | ['translation'] | false | por-tgl * source group: Portuguese * target group: Tagalog * OPUS readme: [por-tgl](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-tgl/README.md) * model: transformer-align * source language(s): por * target language(s): tgl_Latn * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.eval.txt) | 8f3c7aa6f97070a9246424527becae71 |
apache-2.0 | ['translation'] | false | System Info: - hf_name: por-tgl - source_languages: por - target_languages: tgl - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/por-tgl/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['pt', 'tl'] - src_constituents: {'por'} - tgt_constituents: {'tgl_Latn'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/por-tgl/opus-2020-06-17.test.txt - src_alpha3: por - tgt_alpha3: tgl - short_pair: pt-tl - chrF2_score: 0.565 - bleu: 28.4 - brevity_penalty: 1.0 - ref_len: 13620.0 - src_name: Portuguese - tgt_name: Tagalog - train_date: 2020-06-17 - src_alpha2: pt - tgt_alpha2: tl - prefer_old: False - long_pair: por-tgl - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41 | 53e9630f474416b515232c774c58ee94 |
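The `brevity_penalty` field above follows the standard BLEU definition, BP = min(1, exp(1 − ref_len/hyp_len)): outputs at least as long as the reference are unpenalized. A small sketch (the hypothesis lengths are illustrative assumptions, since only `ref_len: 13620.0` is reported on the card):

```python
import math

def brevity_penalty(hyp_len: int, ref_len: int) -> float:
    """Standard BLEU brevity penalty: penalizes hypotheses shorter than the reference."""
    if hyp_len >= ref_len:
        return 1.0
    return math.exp(1 - ref_len / hyp_len)

ref_len = 13620  # from the card above
print(brevity_penalty(13700, ref_len))  # 1.0 — consistent with brevity_penalty: 1.0
print(brevity_penalty(12000, ref_len))  # below 1.0 for a shorter hypothesis
```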
apache-2.0 | ['generated_from_trainer'] | false | finetuned_distilgpt2_sst2_negation0.0001_pretrainedTrue_epochs1 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 3.2798 | 3a81e43b5df29df1e56ce38f1ccddbe3 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 | 4ceea3aedb61b6294d65951ff8934483 |
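The `linear` scheduler listed above decays the learning rate from its initial value to zero over the course of training. A minimal sketch of that schedule (assuming no warmup, which the card does not report):

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 2e-05) -> float:
    """Linear decay from base_lr at step 0 to 0 at total_steps (no warmup assumed)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# With the card's learning_rate of 2e-05:
total = 1000  # illustrative step count
print(linear_lr(0, total))     # 2e-05 at the start
print(linear_lr(500, total))   # 1e-05 halfway through
print(linear_lr(1000, total))  # 0.0 at the end
```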
apache-2.0 | ['image-classification', 'vision', 'generated_from_trainer'] | false | vit-base-mnist This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the mnist dataset. It achieves the following results on the evaluation set: - Loss: 0.0236 - Accuracy: 0.9949 | 7f302633f1fc2a07091561485c8e1915 |
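The base model's name encodes its geometry: `patch16-224` means 224×224 input images are split into non-overlapping 16×16 patches, so the transformer sees a fixed-length sequence of patch tokens. A quick sketch of that arithmetic:

```python
def num_patches(image_size: int, patch_size: int) -> int:
    """Number of non-overlapping square patches a ViT slices an image into."""
    assert image_size % patch_size == 0, "image size must be divisible by patch size"
    return (image_size // patch_size) ** 2

# google/vit-base-patch16-224-in21k: 224x224 images, 16x16 patches
print(num_patches(224, 16))  # 196 patch tokens (plus one [CLS] token)
```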