license: string (lengths 2–30)
tags: string (lengths 2–513)
is_nc: bool (1 class)
readme_section: string (lengths 201–597k)
hash: string (lengths 32–32)
apache-2.0
['whisper-event', 'hf-asr-leaderboard', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0105 | 4.24 | 1000 | 0.1973 | 12.6130 |
| 0.0016 | 8.47 | 2000 | 0.2198 | 11.8985 |
| 0.0004 | 12.71 | 3000 | 0.2310 | 11.4547 |
| 0.0003 | 16.95 | 4000 | 0.2380 | 11.4270 |
| 0.0002 | 21.19 | 5000 | 0.2417 | 11.4086 |
07173f68a1314e309fb2eac2bc58c80d
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
sentence-transformers/stsb-distilroberta-base-v2

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
90eac81fa0320a9b6eb2ec869981a917
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/stsb-distilroberta-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
23aa58d760fc36390b1e7f4b3fecea85
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Load model from HuggingFace Hub

```python
from transformers import AutoTokenizer, AutoModel  # imports implied by the snippet

tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/stsb-distilroberta-base-v2')
model = AutoModel.from_pretrained('sentence-transformers/stsb-distilroberta-base-v2')
```
7095307ed2c1d38f45956220c9c75115
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/stsb-distilroberta-base-v2)
265c2c0ad40a5718ee008a1ef45e9e93
mit
['generated_from_trainer']
false
smalldata-pysentimiento-robertuito-eng-only-sentiment-single-finetuned-memes

This model is a fine-tuned version of [jayantapaul888/twitter-data-microsoft-xtremedistil-l6-h256-uncased-sentiment-finetuned-memes](https://huggingface.co/jayantapaul888/twitter-data-microsoft-xtremedistil-l6-h256-uncased-sentiment-finetuned-memes) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3693
- Accuracy: 0.8533
- Precision: 0.8686
- Recall: 0.8673
- F1: 0.8678
c3c72a071781ff43f2659a1d0c3a85ac
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 378 | 0.3505 | 0.8466 | 0.8687 | 0.8600 | 0.8608 |
| 0.4239 | 2.0 | 756 | 0.3369 | 0.8570 | 0.8725 | 0.8700 | 0.8707 |
| 0.325 | 3.0 | 1134 | 0.3286 | 0.8533 | 0.8700 | 0.8675 | 0.8677 |
| 0.277 | 4.0 | 1512 | 0.3472 | 0.8533 | 0.8681 | 0.8680 | 0.8678 |
| 0.277 | 5.0 | 1890 | 0.3538 | 0.8593 | 0.8736 | 0.8732 | 0.8734 |
| 0.2438 | 6.0 | 2268 | 0.3693 | 0.8533 | 0.8686 | 0.8673 | 0.8678 |
0978afe76edaf9ea78a2b7988d043564
apache-2.0
['image-segmentation', 'vision']
false
DETR (End-to-End Object Detection) model with ResNet-101 backbone

DEtection TRansformer (DETR) model trained end-to-end on COCO 2017 panoptic (118k annotated images). It was introduced in the paper [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872) by Carion et al. and first released in [this repository](https://github.com/facebookresearch/detr).

Disclaimer: The team releasing DETR did not write a model card for this model, so this model card has been written by the Hugging Face team.
8931a807cb44be192c2fe2b83a72b888
apache-2.0
['image-segmentation', 'vision']
false
Model description

The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and an MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100.

The model is trained using a "bipartite matching loss": one compares the predicted classes and bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU losses (for the bounding boxes) are used to optimize the parameters of the model.

DETR can be naturally extended to perform panoptic segmentation, by adding a mask head on top of the decoder outputs.
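To make the matching step concrete, here is a toy sketch (not DETR's code: the cost matrix is made up, and brute force over permutations stands in for the Hungarian algorithm, which finds the same optimum in polynomial time; in DETR the costs combine class probabilities, L1 box distance, and generalized IoU):

```python
from itertools import permutations

def best_matching(cost):
    """Brute-force one-to-one assignment of queries to annotations.

    cost[i][j] is the matching cost of query i against annotation j.
    Stands in for the Hungarian algorithm; here N is tiny, so we can
    simply enumerate all permutations.
    """
    n = len(cost)
    best, best_total = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best = total, perm
    return best, best_total

# 3 queries vs 3 padded annotations (imagine the last one is "no object")
cost = [
    [0.9, 0.1, 0.5],
    [0.2, 0.8, 0.6],
    [0.7, 0.4, 0.3],
]
assignment, total = best_matching(cost)
print(assignment)  # (1, 0, 2): query 0 -> annotation 1, etc.
```

Once this one-to-one mapping is fixed, the classification and box losses are computed per matched pair.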
74b43ad0c2d21b6a37a8a75012e32b93
apache-2.0
['image-segmentation', 'vision']
false
How to use

Here is how to use this model:

```python
from transformers import DetrFeatureExtractor, DetrForSegmentation
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = DetrFeatureExtractor.from_pretrained('facebook/detr-resnet-101-panoptic')
model = DetrForSegmentation.from_pretrained('facebook/detr-resnet-101-panoptic')

# prepare the image and run the forward pass (these lines are implied by the
# post-processing snippet that follows)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
```
5e8be4f6ec9787584ec21e4742f0d3e4
apache-2.0
['image-segmentation', 'vision']
false
Use the `post_process_panoptic` method of `DetrFeatureExtractor` to convert to COCO format:

```python
import torch  # needed for torch.as_tensor

processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0)
result = feature_extractor.post_process_panoptic(outputs, processed_sizes)[0]
```
fe64847d75e67edf9f8563179ed6ff51
apache-2.0
['image-segmentation', 'vision']
false
Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/detr/blob/master/datasets/coco_panoptic.py). Images are resized/rescaled such that the shortest side is at least 800 pixels and the largest side at most 1333 pixels, and normalized across the RGB channels with the ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
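A minimal sketch of the resize rule just described (the function name is ours; the real preprocessing lives in the linked repository):

```python
def detr_resize(width, height, min_size=800, max_size=1333):
    """Target (width, height) so the shortest side becomes min_size,
    shrinking further if that would push the longest side past max_size.
    A sketch of the resize rule described above, not the exact DETR code."""
    scale = min_size / min(width, height)
    if scale * max(width, height) > max_size:
        scale = max_size / max(width, height)
    return round(width * scale), round(height * scale)

print(detr_resize(640, 480))   # (1067, 800): shortest side scaled to 800
print(detr_resize(2000, 500))  # (1333, 333): capped by the 1333 limit
```

Normalization then subtracts the ImageNet mean and divides by the standard deviation, per RGB channel.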
b429252807a11bcc78f0f19b4ff568df
apache-2.0
['image-segmentation', 'vision']
false
Evaluation results This model achieves the following results on COCO 2017 validation: a box AP (average precision) of **40.1**, a segmentation AP (average precision) of **33** and a PQ (panoptic quality) of **45.1**. For more details regarding evaluation results, we refer to table 5 of the original paper.
f101f5987815d73d70ecc7cf483d7bc2
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
DreamBooth model for the kndrtycr concept trained by alikanakar on the alikanakar/sd_finetune_toy_car dataset.

This is a Stable Diffusion model fine-tuned on the kndrtycr concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of kndrtycr toy**

This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
0056b5203e83dfe0402e74ccd4cb8480
mit
['generated_from_trainer']
false
berturk-128k-keyword-discriminator

This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-cased](https://huggingface.co/dbmdz/bert-base-turkish-128k-cased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3828
- Precision: 0.6791
- Recall: 0.7234
- Accuracy: 0.9294
- F1: 0.7006
- Ent/precision: 0.6931
- Ent/accuracy: 0.7715
- Ent/f1: 0.7302
- Con/precision: 0.6473
- Con/accuracy: 0.6282
- Con/f1: 0.6376
7c5c29e7a97bc472d8230a13e070775f
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 | Ent/precision | Ent/accuracy | Ent/f1 | Con/precision | Con/accuracy | Con/f1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:------:|:-------------:|:------------:|:------:|:-------------:|:------------:|:------:|
| 0.1632 | 1.0 | 1875 | 0.1637 | 0.6661 | 0.6900 | 0.9320 | 0.6778 | 0.6649 | 0.7401 | 0.7005 | 0.6692 | 0.5907 | 0.6275 |
| 0.1151 | 2.0 | 3750 | 0.1709 | 0.6538 | 0.7446 | 0.9292 | 0.6963 | 0.6682 | 0.7864 | 0.7225 | 0.6223 | 0.6619 | 0.6415 |
| 0.0817 | 3.0 | 5625 | 0.1931 | 0.6667 | 0.7292 | 0.9294 | 0.6965 | 0.6843 | 0.7677 | 0.7236 | 0.6290 | 0.6529 | 0.6407 |
| 0.057 | 4.0 | 7500 | 0.2375 | 0.6578 | 0.7486 | 0.9277 | 0.7002 | 0.6708 | 0.7950 | 0.7277 | 0.6284 | 0.6567 | 0.6422 |
| 0.041 | 5.0 | 9375 | 0.2765 | 0.6683 | 0.7390 | 0.9284 | 0.7019 | 0.6834 | 0.7821 | 0.7294 | 0.6351 | 0.6538 | 0.6444 |
| 0.0297 | 6.0 | 11250 | 0.3128 | 0.6811 | 0.7249 | 0.9295 | 0.7023 | 0.6979 | 0.7710 | 0.7327 | 0.6438 | 0.6334 | 0.6386 |
| 0.0211 | 7.0 | 13125 | 0.3633 | 0.6780 | 0.7236 | 0.9290 | 0.7001 | 0.6919 | 0.7722 | 0.7299 | 0.6463 | 0.6273 | 0.6366 |
| 0.0165 | 8.0 | 15000 | 0.3828 | 0.6791 | 0.7234 | 0.9294 | 0.7006 | 0.6931 | 0.7715 | 0.7302 | 0.6473 | 0.6282 | 0.6376 |
c226ce4eb741bbccd6f587f0dafea315
apache-2.0
['generated_from_trainer']
false
initial-dq-model

This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1677
- Precision: 0.7763
- Recall: 0.9380
- F1: 0.8495
- Accuracy: 0.9423
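As a quick sanity check, the reported F1 above is consistent with the precision and recall, since F1 is their harmonic mean:

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.7763, 0.9380), 4))  # 0.8495, matching the value above
```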
ac3b0f6dd05dc12c942c3f0eb6305de2
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2251 | 1.0 | 1220 | 0.1768 | 0.7481 | 0.9264 | 0.8277 | 0.9378 |
| 0.186 | 2.0 | 2440 | 0.1677 | 0.7763 | 0.9380 | 0.8495 | 0.9423 |
72e8796785320461dccbef1f6935cd72
apache-2.0
['generated_from_keras_callback']
false
pranavkrishna/bert_amazon-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 3.0874
- Validation Loss: 2.6529
- Epoch: 0
6216190956358807a792af4f36fd0dde
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -1156, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 2000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.02}
- training_precision: float32
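The serialized optimizer config above describes a linear warmup over 2,000 steps followed by a polynomial decay with power 1.0 (i.e., linear decay). A minimal sketch of that schedule, with `decay_steps=10000` as a stand-in value (the config above stores a placeholder there):

```python
def lr_at(step, base_lr=2e-05, warmup_steps=2000, decay_steps=10000):
    """Sketch of the WarmUp + PolynomialDecay(power=1.0) schedule above:
    linear warmup from 0 to base_lr, then linear decay toward 0.
    decay_steps=10000 is a stand-in value, not taken from the config."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = min(1.0, (step - warmup_steps) / decay_steps)
    return base_lr * (1.0 - progress)

print(lr_at(1000))  # halfway through warmup: half the peak rate
print(lr_at(2000))  # peak learning rate, 2e-05
print(lr_at(7000))  # halfway through the decay window
```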
53b3f2769f11ad8291dfeed687647659
apache-2.0
['generated_from_trainer']
false
VANBase-finetuned-brs-finetuned-brs

This model is a fine-tuned version of [Visual-Attention-Network/van-base](https://huggingface.co/Visual-Attention-Network/van-base) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.7056
- Accuracy: 0.5882
- F1: 0.6957
- Precision (ppv): 0.6154
- Recall (sensitivity): 0.8
- Specificity: 0.2857
- Npv: 0.5
- Auc: 0.5429
3b813442e6b7c0c92bf2a03d65c802da
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
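The `total_train_batch_size` reported above is simply `train_batch_size × gradient_accumulation_steps`. A tiny illustrative sketch of what accumulation does (not the Trainer's actual code):

```python
def total_train_batch_size(train_batch_size, gradient_accumulation_steps, n_devices=1):
    # Effective batch size: micro-batch size times accumulation steps (times devices)
    return train_batch_size * gradient_accumulation_steps * n_devices

print(total_train_batch_size(1, 4))  # 4, matching the value reported above

# What accumulation does, with scalar "gradients" standing in for tensors:
micro_batch_grads = [0.5, -0.25, 1.0, 0.75]  # one per forward/backward pass
step_grad = sum(micro_batch_grads) / len(micro_batch_grads)
print(step_grad)  # averaged gradient applied in a single optimizer step
```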
799358473e053c586d3037134013cb99
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision (ppv) | Recall (sensitivity) | Specificity | Npv | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------------:|:--------------------:|:-----------:|:------:|:------:|
| 0.6589 | 6.25 | 100 | 0.6655 | 0.5882 | 0.6316 | 0.6667 | 0.6 | 0.5714 | 0.5 | 0.5857 |
| 0.6262 | 12.49 | 200 | 0.6917 | 0.5294 | 0.6364 | 0.5833 | 0.7 | 0.2857 | 0.4 | 0.4929 |
| 0.4706 | 18.74 | 300 | 0.6776 | 0.5882 | 0.6957 | 0.6154 | 0.8 | 0.2857 | 0.5 | 0.5429 |
| 0.5202 | 24.98 | 400 | 0.7018 | 0.5294 | 0.6 | 0.6 | 0.6 | 0.4286 | 0.4286 | 0.5143 |
| 0.4628 | 31.25 | 500 | 0.6903 | 0.6471 | 0.75 | 0.6429 | 0.9 | 0.2857 | 0.6667 | 0.5929 |
| 0.3525 | 37.49 | 600 | 0.7241 | 0.5294 | 0.6667 | 0.5714 | 0.8 | 0.1429 | 0.3333 | 0.4714 |
| 0.2877 | 43.74 | 700 | 0.8262 | 0.5882 | 0.7407 | 0.5882 | 1.0 | 0.0 | nan | 0.5 |
| 0.2921 | 49.98 | 800 | 0.8058 | 0.4706 | 0.64 | 0.5333 | 0.8 | 0.0 | 0.0 | 0.4 |
| 0.3834 | 56.25 | 900 | 0.7864 | 0.5882 | 0.7407 | 0.5882 | 1.0 | 0.0 | nan | 0.5 |
| 0.2267 | 62.49 | 1000 | 0.5520 | 0.7647 | 0.8182 | 0.75 | 0.9 | 0.5714 | 0.8 | 0.7357 |
| 0.3798 | 68.74 | 1100 | 0.8722 | 0.4706 | 0.64 | 0.5333 | 0.8 | 0.0 | 0.0 | 0.4 |
| 0.2633 | 74.98 | 1200 | 0.7260 | 0.6471 | 0.7273 | 0.6667 | 0.8 | 0.4286 | 0.6 | 0.6143 |
| 0.3439 | 81.25 | 1300 | 1.0187 | 0.4118 | 0.5455 | 0.5 | 0.6 | 0.1429 | 0.2 | 0.3714 |
| 0.2532 | 87.49 | 1400 | 0.8812 | 0.5882 | 0.7407 | 0.5882 | 1.0 | 0.0 | nan | 0.5 |
| 0.0841 | 93.74 | 1500 | 0.8717 | 0.5294 | 0.6923 | 0.5625 | 0.9 | 0.0 | 0.0 | 0.45 |
| 0.3409 | 99.98 | 1600 | 0.7056 | 0.5882 | 0.6957 | 0.6154 | 0.8 | 0.2857 | 0.5 | 0.5429 |
363d0f55925029a1de12ab96e97a17e0
apache-2.0
['translation']
false
opus-mt-es-ase

* source languages: es
* target languages: ase
* OPUS readme: [es-ase](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-ase/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/es-ase/opus-2020-01-20.eval.txt)
b1be7df693f71d36fcce21c82c9ba4c3
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.2234
- Accuracy: 0.9265
- F1: 0.9265
beaa9eca8d3c333e91075649a9b030fc
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8759 | 1.0 | 250 | 0.3343 | 0.9035 | 0.8999 |
| 0.2637 | 2.0 | 500 | 0.2234 | 0.9265 | 0.9265 |
253d7ecd1842b2a96387a8e87662eddd
mit
[]
false
Cyberpunk-Lucy on Stable Diffusion

This is the `<cyberpunk-lucy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<cyberpunk-lucy> 0](https://huggingface.co/sd-concepts-library/cyberpunk-lucy/resolve/main/concept_images/1.jpeg)
![<cyberpunk-lucy> 1](https://huggingface.co/sd-concepts-library/cyberpunk-lucy/resolve/main/concept_images/2.jpeg)
![<cyberpunk-lucy> 2](https://huggingface.co/sd-concepts-library/cyberpunk-lucy/resolve/main/concept_images/0.jpeg)
![<cyberpunk-lucy> 3](https://huggingface.co/sd-concepts-library/cyberpunk-lucy/resolve/main/concept_images/3.jpeg)
![<cyberpunk-lucy> 4](https://huggingface.co/sd-concepts-library/cyberpunk-lucy/resolve/main/concept_images/4.jpeg)
![<cyberpunk-lucy> 5](https://huggingface.co/sd-concepts-library/cyberpunk-lucy/resolve/main/concept_images/5.jpeg)
ff2233abe23164c43c6dd9c68634bcb9
apache-2.0
['generated_from_trainer']
false
youtube-bert

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.4771
2210ff8ad00cb77874386a3f0490ae9a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.691 | 1.0 | 1077 | 2.5445 |
| 2.5768 | 2.0 | 2154 | 2.5226 |
| 2.5227 | 3.0 | 3231 | 2.5027 |
e2e12af506c086025edbf4b10acab1ad
apache-2.0
['generated_from_trainer']
false
Tagged_Uni_500v8_NER_Model_3Epochs_AUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_uni500v8_wikigold_split dataset. It achieves the following results on the evaluation set:
- Loss: 0.2501
- Precision: 0.7046
- Recall: 0.6968
- F1: 0.7007
- Accuracy: 0.9317
e709231e006e60e53ef836fc53e8fde8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 169 | 0.2800 | 0.5648 | 0.5035 | 0.5324 | 0.9043 |
| No log | 2.0 | 338 | 0.2383 | 0.6783 | 0.6738 | 0.6760 | 0.9286 |
| 0.1144 | 3.0 | 507 | 0.2501 | 0.7046 | 0.6968 | 0.7007 | 0.9317 |
b8e632d496e1f68766be1367c1df69eb
cc-by-sa-4.0
['vietnamese', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is a PhoBERT model pre-trained on Vietnamese texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [phobert-large](https://huggingface.co/vinai/phobert-large).
1c8cb2799f0464554fbee6e2aaa15077
cc-by-sa-4.0
['vietnamese', 'token-classification', 'pos', 'dependency-parsing']
false
(fragment of a custom `UDgoeswithViNLP` class; the beginning of the definition is not included in this section, and the first line below is truncated mid-statement)

```python
    text = "+text+"\n"
    q=[self.model.config.id2label[p[i,j]].split("|") for i,j in enumerate(h)]
    t=[i.replace("_"," ") for i in t]
    if len(t)!=len(v)-2:
      t=[z.pop(0) if i==self.tokenizer.unk_token else i.replace("_"," ") for i in self.tokenizer.convert_ids_to_tokens(v[1:-1])]
    for i,j in reversed(list(enumerate(q[2:],2))):
      if j[-1]=="goeswith" and set([k[-1] for k in q[h[i]+1:i+1]])=={"goeswith"}:
        h=[b if i>b else b-1 for a,b in enumerate(h) if i!=a]
        t[i-2]=(t[i-2][0:-2] if t[i-2].endswith("@@") else t[i-2]+" ")+t.pop(i-1)
        q.pop(i)
    t=[i[0:-2].strip() if i.endswith("@@") else i.strip() for i in t]
    for i,j in enumerate(t,1):
      u+="\t".join([str(i),j,"_",q[i][0],"_","|".join(q[i][1:-1]),str(h[i]),q[i][-1],"_","_"])+"\n"
    return u+"\n"

nlp=UDgoeswithViNLP("KoichiYasuoka/phobert-large-vietnamese-ud-goeswith")
print(nlp("Hai cái đầu thì tốt hơn một."))
```

with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/) and [ViNLP](https://pypi.org/project/ViNLP/). Or without them:

```
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/phobert-large-vietnamese-ud-goeswith",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("Hai cái đầu thì tốt hơn một."))
```
842d547110f9da4ba41e7e8b1ddaceec
apache-2.0
['token-classification']
false
distilroberta-base-ner-conll2003

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the conll2003 dataset.

- eval F1-Score: **95.29** (CoNLL-03)
- test F1-Score: **90.74** (CoNLL-03)
- eval F1-Score: **95.29** (CoNLL++ / CoNLL-03 corrected)
- test F1-Score: **92.23** (CoNLL++ / CoNLL-03 corrected)
f911bf7f2db0513aa13bdd3cda03a747
apache-2.0
['token-classification']
false
Model Usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("philschmid/distilroberta-base-ner-conll2003")
model = AutoModelForTokenClassification.from_pretrained("philschmid/distilroberta-base-ner-conll2003")

nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
example = "My name is Philipp and live in Germany"

nlp(example)
```
e43ccc466e677950027cdb893f55dd6f
apache-2.0
['token-classification']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 4.9902376275441704e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6.0
- mixed_precision_training: Native AMP
5cc77273bc7c0a55543fc2992b8381fb
apache-2.0
['token-classification']
false
CoNLL2003

It achieves the following results on the evaluation set:
- Loss: 0.0583
- Precision: 0.9493
- Recall: 0.9566
- F1: 0.9529
- Accuracy: 0.9883

It achieves the following results on the test set:
- Loss: 0.2025
- Precision: 0.8999
- Recall: 0.915
- F1: 0.9074
- Accuracy: 0.9741
314b3ea736fce63873108bf65cfdcc8c
apache-2.0
['token-classification']
false
CoNLL++ / CoNLL2003 corrected

It achieves the following results on the evaluation set:
- Loss: 0.0567
- Precision: 0.9493
- Recall: 0.9566
- F1: 0.9529
- Accuracy: 0.9883

It achieves the following results on the test set:
- Loss: 0.1359
- Precision: 0.92
- Recall: 0.9245
- F1: 0.9223
- Accuracy: 0.9785
5eac220b3ad0439e71d68f74e65d78ae
cc-by-4.0
['question generation']
false
Model Card of `lmqg/t5-small-squadshifts-nyt-qg`

This model is a fine-tuned version of [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad) for the question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) dataset (dataset_name: nyt) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
88a12a480a218df95f3fdce84d35a730
cc-by-4.0
['question generation']
false
Overview

- **Language model:** [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad)
- **Language:** en
- **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (nyt)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
43a2cd2969907a51c701926ac88e7e8a
cc-by-4.0
['question generation']
false
model prediction

```python
# (fragment: `model` is created earlier in the card via `lmqg`)
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-small-squadshifts-nyt-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
0c144b0172efcedc83e1e2640ebd0afc
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-small-squadshifts-nyt-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.nyt.json)

| | Score | Type | Dataset |
|:-----------|--------:|:-------|:---------------------------------------------------------------------------|
| BERTScore | 92.2 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_1 | 23.37 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_2 | 15.24 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_3 | 10.64 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_4 | 7.71 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| METEOR | 23.7 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| MoverScore | 63.71 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| ROUGE_L | 23.43 | nyt | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
ae6493b4a743dcc06f47ee6413967387
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squadshifts
- dataset_name: nyt
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: lmqg/t5-small-squad
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 32
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-small-squadshifts-nyt-qg/raw/main/trainer_config.json).
ff3831027681c87feebadd32a3a70450
mit
['Explain code', 'Code Summarization', 'Summarization']
false
Model description

Gemini is a transformer based on Google's T5 model. The model is pre-trained on approximately 800k code/description pairs and then fine-tuned on 10k higher-level explanations that were synthetically generated. Gemini is capable of summarizing/explaining short to medium code snippets in:

- Python
- Javascript (mostly vanilla JS, however, it can handle frameworks like React as well)
- Java
- Ruby
- Go

and outputs a description in English.
be6abfee8703aa5daa7e269820a8e6c5
mit
['Explain code', 'Code Summarization', 'Summarization']
false
Intended uses

Gemini without any additional fine-tuning is capable of explaining code in a sentence or two, and typically performs best on Python and Javascript. We recommend using Gemini for simple code explanation, documentation, or producing more synthetic data to improve its explanations.
c592a90dcd2696b1dbbf22920f55c92e
mit
['Explain code', 'Code Summarization', 'Summarization']
false
How to use

You can use this model directly with a pipeline for Text2Text generation, as shown below:

```python
from transformers import pipeline, set_seed

summarizer = pipeline('text2text-generation', model='describeai/gemini')
code = "print('hello world!')"
response = summarizer(code, max_length=100, num_beams=3)
print("Summarized code: " + response[0]['generated_text'])
```

Which should yield something along the lines of:

```
Summarized code: The following code is greeting the world.
```
2a1211b609b782783cf4dd87c7b15cc7
mit
['Explain code', 'Code Summarization', 'Summarization']
false
Limitations

Typically, Gemini may produce overly simplistic descriptions that don't encompass the entire code snippet. We suspect that with more training data this could be mitigated, producing better results.
87c7bb9c4f7b4a274c7969a8ed2a13cd
mit
['Explain code', 'Code Summarization', 'Summarization']
false
About Us

At Describe.ai, we are focused on building Artificial Intelligence systems that can understand language as well as humans. While it is a long path, we plan to contribute our findings and our API to the Open Source community.
1e2314f8bc92467e61451fbe4c2e5ccd
apache-2.0
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard', 'pashto', 'ps']
false
Whisper Small Pashto

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs ps_af dataset. It achieves the following results on the evaluation set:
- Loss: 1.1800
- Wer: 63.1053
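The Wer figure is a word error rate in percent: word-level edit distance divided by the number of reference words. A minimal, illustrative implementation (Whisper fine-tuning scripts typically rely on the `evaluate`/`jiwer` packages instead):

```python
def wer(reference, hypothesis):
    """Word error rate in percent: word-level Levenshtein distance
    (substitutions + insertions + deletions) over reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j]: edit distance between first i reference and first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return 100.0 * d[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sat mat"))  # 2 deletions over 6 words
```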
c95dcc4f16bea300fd2480f1d35a6161
apache-2.0
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard', 'pashto', 'ps']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 5200
- mixed_precision_training: Native AMP
af8fa9461dd560a1855962f193378a5d
apache-2.0
['whisper-event', 'generated_from_trainer', 'hf-asr-leaderboard', 'pashto', 'ps']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.0871 | 14.29 | 100 | 2.0102 | 230.2739 |
| 1.465 | 28.57 | 200 | 1.4969 | 137.2427 |
| 1.1617 | 42.86 | 300 | 1.2716 | 76.3242 |
| 1.0019 | 57.14 | 400 | 1.1645 | 71.3756 |
| 0.9052 | 71.43 | 500 | 1.1051 | 69.7866 |
| 0.8334 | 85.71 | 600 | 1.0691 | 68.2657 |
| 0.7838 | 100.0 | 700 | 1.0483 | 67.1686 |
| 0.7539 | 114.29 | 800 | 1.0363 | 66.4195 |
| 0.7377 | 128.57 | 900 | 1.0297 | 66.2001 |
| 0.7325 | 142.86 | 1000 | 1.0277 | 66.0033 |
| 0.6952 | 157.14 | 1100 | 1.0122 | 65.0575 |
| 0.6531 | 171.43 | 1200 | 1.0014 | 64.4219 |
| 0.6189 | 185.71 | 1300 | 0.9945 | 63.7939 |
| 0.5993 | 200.0 | 1400 | 0.9896 | 63.3550 |
| 0.5757 | 214.29 | 1500 | 0.9864 | 63.2264 |
| 0.5601 | 228.57 | 1600 | 0.9845 | 62.9162 |
| 0.5482 | 242.86 | 1700 | 0.9833 | 62.8178 |
| 0.5382 | 257.14 | 1800 | 0.9827 | 62.8405 |
| 0.5325 | 271.43 | 1900 | 0.9823 | 62.7648 |
| 0.5287 | 285.71 | 2000 | 0.9822 | 62.8178 |
| 0.3494 | 357.14 | 2500 | 1.0026 | 61.6147 |
| 0.2287 | 428.57 | 3000 | 1.0533 | 61.5163 |
| 0.1525 | 500.0 | 3500 | 1.1041 | 62.0536 |
| 0.1089 | 571.43 | 4000 | 1.1451 | 62.5076 |
| 0.0871 | 642.86 | 4500 | 1.1704 | 62.9313 |
| 0.0797 | 714.29 | 5000 | 1.1791 | 63.1659 |
| 0.0799 | 728.57 | 5100 | 1.1800 | 63.1053 |
| 0.0791 | 742.86 | 5200 | 1.1803 | 63.1129 |
5fa2906a145d5cebf512ac211549d9a0
apache-2.0
['bert', 'qqp', 'glue', 'kd', 'torchdistill']
false
`bert-base-uncased` fine-tuned on QQP dataset, using fine-tuned `bert-large-uncased` as a teacher model, [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_kd_and_submission.ipynb) for knowledge distillation. The training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/kd/bert_base_uncased_from_bert_large_uncased.yaml). I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **78.9**.
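As a sketch of the distillation objective (illustrative only, not torchdistill's implementation; the temperature value here is an assumption), the student is trained to match the teacher's temperature-softened output distribution:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in Hinton et al.'s distillation formulation.
    A sketch of the soft-target term, not torchdistill's actual code."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

print(kd_loss([2.0, -1.0], [2.0, -1.0]))      # 0.0: student matches the teacher
print(kd_loss([0.5, 0.5], [2.0, -1.0]) > 0)   # positive loss when they disagree
```

In practice this soft-target term is combined with the ordinary cross-entropy on the hard labels.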
d9bae4091d7a4ca5187d9808f6781b1d
apache-2.0
['generated_from_trainer']
false
tiny-mlm-imdb-target-imdb

This model is a fine-tuned version of [muhtasham/tiny-mlm-imdb](https://huggingface.co/muhtasham/tiny-mlm-imdb) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.2699
- Accuracy: 0.8895
- F1: 0.9415
814bf72d01a6b6892f507ba534b480ad
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5432 | 0.64 | 500 | 0.3567 | 0.8578 | 0.9235 |
| 0.366 | 1.28 | 1000 | 0.3687 | 0.8414 | 0.9138 |
| 0.32 | 1.92 | 1500 | 0.2648 | 0.8922 | 0.9430 |
| 0.2868 | 2.56 | 2000 | 0.3868 | 0.8314 | 0.9079 |
| 0.2671 | 3.2 | 2500 | 0.3092 | 0.8774 | 0.9347 |
| 0.248 | 3.84 | 3000 | 0.2699 | 0.8895 | 0.9415 |
8c104ee6588985a43a121e297181445f
creativeml-openrail-m
['text-to-image']
false
Duskfall's Final Fantasy Pt2

Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew

If you want to support the EARTH & DUSK media projects monthly, and not just AI: https://www.patreon.com/earthndusk

fantadsk2 (use that in your prompt)
1ec0f82704f8bdcfcb95ce676d06fe2b
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
all-roberta-large-v1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
1edcf223f19db013fadd2d8e0217acc2
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('sentence-transformers/all-roberta-large-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
2576d6ab6c0be65c483f799a9931a76c
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Load model from HuggingFace Hub

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-roberta-large-v1')
model = AutoModel.from_pretrained('sentence-transformers/all-roberta-large-v1')
```
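The plain-`transformers` route above only loads the tokenizer and encoder; to turn per-token outputs into a single sentence vector, a mean-pooling step over the attention mask is applied (this mirrors what the sentence-transformers wrapper does internally). A minimal, dependency-free sketch of that pooling step, using made-up 2-dimensional token vectors instead of the model's real 1024-dimensional outputs:

```python
def mean_pooling(token_embeddings, attention_mask):
    """Average token vectors, ignoring padding positions (mask == 0)."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            sums = [s + v for s, v in zip(sums, vec)]
            count += 1
    return [s / count for s in sums]

# Three "tokens"; the last one is padding and is excluded by the mask.
tokens = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
print(mean_pooling(tokens, [1, 1, 0]))  # -> [2.0, 3.0]
```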
d4cb10b2f2b1b4ed56ea32fc5da8afbb
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-roberta-large-v1)
3a38762243fc0ffafba704e12550fdb1
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Background

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`roberta-large`](https://huggingface.co/roberta-large) model and fine-tuned it on a 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
895c3f59ca2dc66b194e5736f2a9cc52
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity']
false
Hyper parameters

We trained our model on a TPU v3-8. We trained the model for 400k steps with a batch size of 256 (32 per TPU core) and a learning-rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
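The warm-up described above can be sketched as a step-to-learning-rate function. The post-warm-up behaviour is not spelled out in this card, so this sketch simply assumes a constant rate after the warm-up phase:

```python
BASE_LR = 2e-5        # AdamW learning rate from the card
WARMUP_STEPS = 500    # warm-up length from the card

def lr_at(step):
    """Linear warm-up to BASE_LR over WARMUP_STEPS, then constant (assumed)."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    return BASE_LR

print(lr_at(250))   # halfway through warm-up -> 1e-05
print(lr_at(1000))  # after warm-up -> 2e-05
```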
2c8f22bdc0ead3ca2038d18819eb169a
apache-2.0
['speech']
false
Data2Vec-Audio-Large-100h

[Facebook's Data2Vec](https://ai.facebook.com/research/data2vec-a-general-framework-for-self-supervised-learning-in-speech-vision-and-language/)

The large model pretrained and fine-tuned on 100 hours of Librispeech on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.

[Paper](https://arxiv.org/abs/2202.03555)

Authors: Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, Michael Auli

**Abstract** While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches.

The original model can be found under https://github.com/pytorch/fairseq/tree/main/examples/data2vec .
4fd9bcfeee5ddadebab63b692bdc27f6
apache-2.0
['deep-narrow']
false
T5-Efficient-SMALL-KV16 (Deep-Narrow version)

T5-Efficient-SMALL-KV16 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper:

> We generally recommend a DeepNarrow strategy where the model's depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
3a5d73241932b6c5c1cf0e2cfec8e81b
apache-2.0
['deep-narrow']
false
Details model architecture

This model checkpoint - **t5-efficient-small-kv16** - is of model type **Small** with the following variations:

- **kv** is **16**

It has **46.37** million parameters and thus requires *ca.* **185.46 MB** of memory in full precision (*fp32*) or **92.73 MB** of memory in half precision (*fp16* or *bf16*).

A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh |
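The memory figures above follow directly from the parameter count: roughly 4 bytes per parameter in fp32 and 2 bytes in fp16/bf16. A quick back-of-the-envelope check (the last digit differs slightly from the card's numbers because the 46.37M figure is itself rounded):

```python
def memory_mb(n_params_millions, bytes_per_param):
    """Approximate checkpoint size: millions of parameters x bytes per parameter = MB."""
    return n_params_millions * bytes_per_param

fp32 = memory_mb(46.37, 4)  # full precision
fp16 = memory_mb(46.37, 2)  # half precision
print(round(fp32, 2), round(fp16, 2))  # -> 185.48 92.74
```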
a04e8c7722ca4313ea98a080df7df0f8
apache-2.0
['protein language model', 'generated_from_trainer']
false
tape-fluorescence-prediction-RITA_s

This model is a fine-tuned version of [lightonai/RITA_s](https://huggingface.co/lightonai/RITA_s) on the cradle-bio/tape-fluorescence dataset. It achieves the following results on the evaluation set:

- Loss: 0.5855
- Spearmanr: 0.2955
7b7ab5f0ad4d4a1a4644d6e3afa616eb
apache-2.0
['protein language model', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 4096
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
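The `total_train_batch_size` above is derived, not independently set: it is the per-device batch size multiplied by the gradient accumulation steps. A minimal sketch of that relationship:

```python
# With gradient accumulation, the optimizer only steps every `accum` micro-batches,
# so each weight update effectively sees per_device_batch * accum examples.
per_device_batch = 32   # train_batch_size from the card
accum_steps = 128       # gradient_accumulation_steps from the card

effective_batch = per_device_batch * accum_steps
print(effective_batch)  # -> 4096, matching total_train_batch_size

def optimizer_updates(num_micro_batches, accum):
    """How many optimizer updates happen for a given number of micro-batches."""
    return num_micro_batches // accum
```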
aa524897054736a1f5f2589a008e7c41
apache-2.0
['protein language model', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Spearmanr |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 4.3595 | 0.85 | 4 | 0.7057 | 0.0940 |
| 0.8654 | 1.85 | 8 | 0.6873 | 0.1280 |
| 0.8292 | 2.85 | 12 | 0.6835 | 0.2290 |
| 0.8212 | 3.85 | 16 | 0.6837 | 0.3110 |
| 0.8191 | 4.85 | 20 | 0.6799 | 0.3281 |
| 0.8137 | 5.85 | 24 | 0.6748 | 0.3277 |
| 0.8057 | 6.85 | 28 | 0.6592 | 0.3162 |
| 0.7769 | 7.85 | 32 | 0.6283 | 0.3065 |
| 0.7382 | 8.85 | 36 | 0.6103 | 0.2795 |
| 0.5991 | 9.85 | 40 | 0.5855 | 0.2955 |
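Spearman's rho, the metric tracked in this table, is simply the Pearson correlation of the ranks of the two variables. A minimal sketch without tie handling (real evaluations typically use `scipy.stats.spearmanr`):

```python
def rank(values):
    """Map each value to its 0-based rank (no tie handling)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def spearmanr(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

print(round(spearmanr([1, 2, 3, 4], [1, 3, 2, 4]), 10))  # -> 0.8 (ranks mostly agree)
```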
a6ba8fb54bdf52c6624a1f8f36529771
apache-2.0
['generated_from_keras_callback']
false
silviacamplani/distilbert-finetuned-ner-music

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Train Loss: 0.6767
- Validation Loss: 0.7802
- Train Precision: 0.5256
- Train Recall: 0.5824
- Train F1: 0.5525
- Train Accuracy: 0.8017
- Epoch: 9
b8b0ebe7d325b5ae4df9ad4db1bbf9ac
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 2.6671 | 2.0032 | 0.0 | 0.0 | 0.0 | 0.5482 | 0 |
| 1.7401 | 1.5194 | 0.1820 | 0.0693 | 0.1004 | 0.5902 | 1 |
| 1.3487 | 1.2627 | 0.2628 | 0.2952 | 0.2781 | 0.6766 | 2 |
| 1.1390 | 1.0990 | 0.4018 | 0.4527 | 0.4257 | 0.7181 | 3 |
| 0.9823 | 0.9837 | 0.4575 | 0.4887 | 0.4726 | 0.7311 | 4 |
| 0.8741 | 0.9022 | 0.5008 | 0.5338 | 0.5168 | 0.7544 | 5 |
| 0.7904 | 0.8449 | 0.5085 | 0.5626 | 0.5342 | 0.7776 | 6 |
| 0.7327 | 0.8097 | 0.5211 | 0.5779 | 0.5480 | 0.7917 | 7 |
| 0.7000 | 0.7872 | 0.5281 | 0.5842 | 0.5547 | 0.7975 | 8 |
| 0.6767 | 0.7802 | 0.5256 | 0.5824 | 0.5525 | 0.8017 | 9 |
ffcb9a26b533ef93b54d8cc74eadce43
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_logit_kd_qqp_384

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set:

- Loss: 0.6771
- Accuracy: 0.6454
- F1: 0.0788
- Combined Score: 0.3621
07343c9a1d74cede3d29a588645bb391
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.7984 | 1.0 | 1422 | 0.7600 | 0.6318 | 0.0 | 0.3159 |
| 0.7388 | 2.0 | 2844 | 0.7348 | 0.6318 | 0.0 | 0.3159 |
| 0.7037 | 3.0 | 4266 | 0.7082 | 0.6329 | 0.0056 | 0.3192 |
| 0.6717 | 4.0 | 5688 | 0.7014 | 0.6474 | 0.0908 | 0.3691 |
| 0.6462 | 5.0 | 7110 | 0.6841 | 0.6377 | 0.0339 | 0.3358 |
| 0.6259 | 6.0 | 8532 | 0.6795 | 0.6382 | 0.0364 | 0.3373 |
| 0.6092 | 7.0 | 9954 | 0.6782 | 0.6408 | 0.0513 | 0.3461 |
| 0.5941 | 8.0 | 11376 | 0.6771 | 0.6454 | 0.0788 | 0.3621 |
| 0.5812 | 9.0 | 12798 | 0.6841 | 0.6492 | 0.0991 | 0.3741 |
| 0.5703 | 10.0 | 14220 | 0.6774 | 0.6452 | 0.0776 | 0.3614 |
| 0.5604 | 11.0 | 15642 | 0.6791 | 0.6464 | 0.0831 | 0.3647 |
| 0.5523 | 12.0 | 17064 | 0.6817 | 0.6520 | 0.1143 | 0.3831 |
| 0.5448 | 13.0 | 18486 | 0.6774 | 0.6477 | 0.0905 | 0.3691 |
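The Combined Score column appears to be the arithmetic mean of accuracy and F1; a one-line check against the epoch-8 evaluation row:

```python
def combined_score(accuracy, f1):
    """Combined Score in these GLUE QQP runs appears to be the mean of accuracy and F1."""
    return (accuracy + f1) / 2

print(round(combined_score(0.6454, 0.0788), 4))  # -> 0.3621, matching the eval row
```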
cd1001b35c4257b2f449cb9a40c6b093
apache-2.0
['translation']
false
hbs-epo

* source group: Serbo-Croatian
* target group: Esperanto
* OPUS readme: [hbs-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-epo/README.md)
* model: transformer-align
* source language(s): bos_Latn hrv srp_Cyrl srp_Latn
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.eval.txt)
b6110099a237d4110867e059835480de
apache-2.0
['translation']
false
System Info:

- hf_name: hbs-epo
- source_languages: hbs
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/hbs-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['sh', 'eo']
- src_constituents: {'hrv', 'srp_Cyrl', 'bos_Latn', 'srp_Latn'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/hbs-epo/opus-2020-06-16.test.txt
- src_alpha3: hbs
- tgt_alpha3: epo
- short_pair: sh-eo
- chrF2_score: 0.38299999999999995
- bleu: 18.7
- brevity_penalty: 0.9990000000000001
- ref_len: 18457.0
- src_name: Serbo-Croatian
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: sh
- tgt_alpha2: eo
- prefer_old: False
- long_pair: hbs-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
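Among the scores above, `brevity_penalty` is the standard BLEU length penalty: 1 when the candidate output is at least as long as the reference, exp(1 - r/c) otherwise. A sketch (the candidate length here is illustrative, chosen to roughly reproduce the reported ~0.999 against the 18457-token reference):

```python
import math

def brevity_penalty(candidate_len, reference_len):
    """BLEU brevity penalty: penalises candidates shorter than the reference."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1 - reference_len / candidate_len)

# A penalty near 0.999 means the output was only marginally shorter than the reference.
print(round(brevity_penalty(18439, 18457), 3))  # -> 0.999 (illustrative lengths)
```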
ad8b5d38d826f900c6ca59ffd168de12
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2t_en_vp-it_s859 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
f7b902a9dc3079e6fe7c43996068a39f
apache-2.0
['generated_from_trainer']
false
wav2vec2-ksponspeech

This model is a fine-tuned version of [Wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset. It achieves the following results on the evaluation set:

- **WER (Word Error Rate)** on third-party test data: 0.373

**For improving WER:**

- Numeric / character unification
- Decoding words with the correct notation (rather than from pronunciation-based spellings)
- Uniform use of special characters (. / ?)
- Converting non-existent words to existing words
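WER, the headline metric above, is the word-level edit distance between reference and hypothesis divided by the number of reference words, which is why normalisation steps like numeric/character unification reduce it. A minimal sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)

# One inserted word over a 3-word reference -> WER of 1/3.
print(wer("the cat sat", "the cat sat down"))
```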
3c3426544fe6c237c40a45a4fa5c198e
apache-2.0
['generated_from_trainer']
false
Model description

Korean Wav2vec with the Ksponspeech dataset. This model was trained on two datasets:

- Train1: https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-train (1 ~ 20000th data in Ksponspeech)
- Train2: https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-train2 (20100 ~ 40100th data in Ksponspeech)
- Validation: https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-test (20000 ~ 20100th data in Ksponspeech)
- Third-party test: https://huggingface.co/datasets/Taeham/wav2vec2-ksponspeech-test (60000 ~ 20100th data in Ksponspeech)
1e61ffabf84200603e0524ea56499dcd
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
fec4567bff90508b426e087172dcf354
apache-2.0
['generated_from_trainer']
false
opus-mt-en-ar-evaluated-en-to-ar-1000instancesopus-leaningRate2e-05-batchSize8-11epoch-3

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the opus100 dataset. It achieves the following results on the evaluation set:

- Loss: 0.1421
- Bleu: 21.3028
- Meteor: 0.1285
- Gen Len: 9.975
7ac57cbed3124c49ccce4b1159fcd1cd
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|
| 1.0508 | 1.0 | 100 | 0.1413 | 27.9009 | 0.1416 | 8.85 |
| 0.1253 | 2.0 | 200 | 0.1372 | 23.11 | 0.1345 | 9.855 |
| 0.1017 | 3.0 | 300 | 0.1390 | 21.7885 | 0.1364 | 9.97 |
| 0.0868 | 4.0 | 400 | 0.1378 | 21.3889 | 0.1314 | 9.835 |
| 0.0754 | 5.0 | 500 | 0.1398 | 22.198 | 0.132 | 9.675 |
| 0.0667 | 6.0 | 600 | 0.1396 | 20.8645 | 0.1308 | 10.055 |
| 0.0604 | 7.0 | 700 | 0.1408 | 20.289 | 0.1303 | 10.53 |
| 0.0553 | 8.0 | 800 | 0.1414 | 21.7023 | 0.1293 | 10.005 |
| 0.0518 | 9.0 | 900 | 0.1421 | 21.3028 | 0.1285 | 9.975 |
5dbbc9dbcc5d3f42af2afb8715c1fdf0
apache-2.0
['generated_from_keras_callback']
false
celera_relevance

This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset. It achieves the following results on the evaluation set:

- Train Loss: 0.3072
- Train Sparse Categorical Accuracy: 0.8813
- Validation Loss: 0.4371
- Validation Sparse Categorical Accuracy: 0.8295
- Epoch: 2
4500f98d0d025ac8f127e43d8cd10358
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.4060 | 0.8274 | 0.3665 | 0.8440 | 0 |
| 0.3388 | 0.8594 | 0.3639 | 0.8585 | 1 |
| 0.3072 | 0.8813 | 0.4371 | 0.8295 | 2 |
41da7e2355d805a8d7d216d216180f0f
cc-by-sa-4.0
['english', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description This is an XLM-RoBERTa model pre-trained with [UD_English-EWT](https://github.com/UniversalDependencies/UD_English-EWT) for POS-tagging and dependency-parsing, derived from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
f01b88ab79aced27fedc5ee659fc991f
cc-by-sa-4.0
['english', 'token-classification', 'pos', 'dependency-parsing']
false
How to Use

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
model = AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/xlm-roberta-base-english-upos")
```

or

```py
import esupar
nlp = esupar.load("KoichiYasuoka/xlm-roberta-base-english-upos")
```
476f01116d51e99c0e1aee77e01c6558
mit
[]
false
Description A fine-tuned multi-label classification model that detects 9 [WHO-ICF](https://www.who.int/standards/classifications/international-classification-of-functioning-disability-and-health) domains in clinical text in Dutch. The model is based on a pre-trained Dutch medical language model ([link to be added]()), a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC.
cc6e16deb3681bd2df9ecdd2f780f80f
mit
[]
false
ICF domains

The model can detect 9 domains, which were chosen due to their relevance to recovery from COVID-19:

| ICF code | Domain | name in repo |
|---|---|---|
| b440 | Respiration functions | ADM |
| b140 | Attention functions | ATT |
| d840-d859 | Work and employment | BER |
| b1300 | Energy level | ENR |
| d550 | Eating | ETN |
| d450 | Walking | FAC |
| b455 | Exercise tolerance functions | INS |
| b530 | Weight maintenance functions | MBW |
| b152 | Emotional functions | STM |
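The table above, expressed as a lookup from repo label to ICF code and domain name, which can be handy when mapping the model's 9-slot multi-label output back to domains:

```python
# Lookup built directly from the domain table; order matches the label order
# used in the model's output vector.
ICF_DOMAINS = {
    "ADM": ("b440", "Respiration functions"),
    "ATT": ("b140", "Attention functions"),
    "BER": ("d840-d859", "Work and employment"),
    "ENR": ("b1300", "Energy level"),
    "ETN": ("d550", "Eating"),
    "FAC": ("d450", "Walking"),
    "INS": ("b455", "Exercise tolerance functions"),
    "MBW": ("b530", "Weight maintenance functions"),
    "STM": ("b152", "Emotional functions"),
}

print(ICF_DOMAINS["FAC"])  # -> ('d450', 'Walking')
```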
1b3b41c1a092199b269ba45f3ebccd02
mit
[]
false
How to use

To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library:

```
from simpletransformers.classification import MultiLabelClassificationModel

model = MultiLabelClassificationModel(
    'roberta',
    'CLTL/icf-domains',
    use_cuda=False,
)

example = 'Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.'
predictions, raw_outputs = model.predict([example])
```

The predictions look like this:

```
[[1, 0, 0, 0, 0, 1, 1, 0, 0]]
```

The indices of the multi-label stand for:

```
[ADM, ATT, BER, ENR, ETN, FAC, INS, MBW, STM]
```

In other words, the above prediction corresponds to assigning the labels ADM, FAC and INS to the example sentence.

The raw outputs look like this:

```
[[0.51907885 0.00268032 0.0030862 0.03066113 0.00616694 0.64720929 0.67348498 0.0118863 0.0046311]]
```

For this model, the threshold at which the prediction for a label flips from 0 to 1 is **0.5**.
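The step from raw outputs to binary predictions is a simple threshold at 0.5, as stated above; applying it to the raw outputs shown reproduces the prediction vector:

```python
LABELS = ["ADM", "ATT", "BER", "ENR", "ETN", "FAC", "INS", "MBW", "STM"]
THRESHOLD = 0.5

def to_labels(raw_output, threshold=THRESHOLD):
    """Binarise sigmoid outputs at the 0.5 threshold described above."""
    return [1 if p > threshold else 0 for p in raw_output]

raw = [0.51907885, 0.00268032, 0.0030862, 0.03066113, 0.00616694,
       0.64720929, 0.67348498, 0.0118863, 0.0046311]
binary = to_labels(raw)
print(binary)                                     # -> [1, 0, 0, 0, 0, 1, 1, 0, 0]
print([l for l, b in zip(LABELS, binary) if b])   # -> ['ADM', 'FAC', 'INS']
```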
b21df2799b93ed220db5ce5b3ee712da
mit
[]
false
Sentence-level

| | ADM | ATT | BER | ENR | ETN | FAC | INS | MBW | STM |
|---|---|---|---|---|---|---|---|---|---|
| precision | 0.98 | 0.98 | 0.56 | 0.96 | 0.92 | 0.84 | 0.89 | 0.79 | 0.70 |
| recall | 0.49 | 0.41 | 0.29 | 0.57 | 0.49 | 0.71 | 0.26 | 0.62 | 0.75 |
| F1-score | 0.66 | 0.58 | 0.35 | 0.72 | 0.63 | 0.76 | 0.41 | 0.70 | 0.72 |
| support | 775 | 39 | 54 | 160 | 382 | 253 | 287 | 125 | 181 |
97beefbf93fb6f791958fc0d09919c72
mit
[]
false
Note-level

| | ADM | ATT | BER | ENR | ETN | FAC | INS | MBW | STM |
|---|---|---|---|---|---|---|---|---|---|
| precision | 1.0 | 1.0 | 0.66 | 0.96 | 0.95 | 0.84 | 0.95 | 0.87 | 0.80 |
| recall | 0.89 | 0.56 | 0.44 | 0.70 | 0.72 | 0.89 | 0.46 | 0.87 | 0.87 |
| F1-score | 0.94 | 0.71 | 0.50 | 0.81 | 0.82 | 0.86 | 0.61 | 0.87 | 0.84 |
| support | 231 | 27 | 34 | 92 | 165 | 95 | 116 | 64 | 94 |
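The F1-score row is the harmonic mean of the precision and recall rows above; e.g. for ADM at note level:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Note-level ADM: precision 1.0, recall 0.89
print(round(f1(1.0, 0.89), 2))  # -> 0.94, matching the table
```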
8a1625dd2867b6839e4475977e824076
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.3517
da8c5283c98a24fd90171ba88b109538
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2094 | 1.0 | 8235 | 1.2174 |
| 0.9515 | 2.0 | 16470 | 1.1923 |
| 0.7687 | 3.0 | 24705 | 1.3517 |
352a99a1ac7740629247d93e5d881e66
cc
[]
false
ParchArt is an embedding for Stable Diffusion 2.0+

ParchArt should be considered a talented artist who has imbibed far too much absinthe. He works primarily in ink on whatever scraps of parchment he can find and occasionally pulls out some watercolor paints. He loves to write on his work, so you'll see a lot of random annotations and text. He almost never gives you exactly what you ask for or expect, but keep trying - some of what he produces is quite compelling.

<strong>Interested in generating your own embeddings? <a href="https://docs.google.com/document/d/1JvlM0phnok4pghVBAMsMq_-Z18_ip_GXvHYE0mITdFE/edit?usp=sharing" target="_blank">My Google doc walkthrough might help</a></strong>

All of the examples below were made using the DPM++ SDE sampler, 20 steps, CFG scale 7. Other samplers can have dramatically different kinds of outputs; Euler samplers will often produce particularly soft, fuzzy kinds of images.

Trigger the embedding with 'ParchArt'. I recommend keeping prompts fairly simple. I.e.,

Prompt: malevolent fae king portrait by ParchArt, iron crown with inset gems | Negative prompt: duplicates, extra body parts, extra head

Lots of details will generally just make for more simplified, bland output. This is something of a chaos machine and makes Stable Diffusion behave rather differently than I am accustomed to, and I sometimes go through a lot of lame outputs before I find something I like, but the keepers are really worth the hunting, for me.
![01740-604538298-malevolent fae___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733380-63169de2f5e32157c5226974.jpeg)
![01762-2154384243-a majestic li___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733555-63169de2f5e32157c5226974.jpeg)
![01767-4115484961-stained glass___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733904-63169de2f5e32157c5226974.jpeg)
![01744-3834349626-beautiful but___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733504-63169de2f5e32157c5226974.jpeg)
![01741-604538295-malevolent fae___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733512-63169de2f5e32157c5226974.jpeg)
![01760-1529201022-an eagle spre___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733454-63169de2f5e32157c5226974.jpeg)
![01749-1663409693-portrait of a___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733230-63169de2f5e32157c5226974.jpeg)
![01748-1663409692-portrait of a___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733327-63169de2f5e32157c5226974.jpeg)
![01746-2288693663-the interior____.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733495-63169de2f5e32157c5226974.jpeg)
![01745-2288693662-the interior____.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733481-63169de2f5e32157c5226974.jpeg)
![01766-611820593-stained glass____.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733596-63169de2f5e32157c5226974.jpeg)
![01750-311095501-wide fantasy l___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733056-63169de2f5e32157c5226974.jpeg)
![01765-2144729744-an alien's la___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733422-63169de2f5e32157c5226974.jpeg)
![01764-2144729746-an alien's la___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733608-63169de2f5e32157c5226974.jpeg)
![01753-3465790098-2 point persp___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733625-63169de2f5e32157c5226974.jpeg)
![01752-1653857321-modern archit___.jpg](https://s3.amazonaws.com/moonup/production/uploads/1673163733572-63169de2f5e32157c5226974.jpeg)
4e1228ed98c58c4f31b139a8da7573ca
mit
[]
false
model by alxdfy

This is the Stable Diffusion model fine-tuned on the noggles_glasses_600 concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of a person wearing sks glasses**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).

Here are the images used for training this concept:

![image 0](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/_DSC3476.jpg)
![image 1](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/292068449_779660049832297_7554632901123311495_n_2875x.jpg)
![image 2](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/Screenshot 2022-09-28 101632.jpg)
![image 3](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/292471692_1200098353866646_8688611891608490893_n_2672x.jpg)
![image 4](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/291437103_575113617405080_4253713068724854490_n_3121x.jpg)
![image 5](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/strip1.jpg)
![image 6](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/Screenshot 2022-09-28 101717.jpg)
![image 7](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/20220910_182800-01.jpg)
![image 8](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/20220910_225712-02.jpg)
![image 9](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/292236552_1477604436022119_7495376372190185135_n_2749x.jpg)
![image 10](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/293054543_1413890889119491_3885435733085354832_n_1284x.jpg)
![image 11](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/_DSC3613.jpg)
![image 12](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/gossamer-min.jpg)
![image 13](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/kidsnouns.jpg)
![image 14](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/GOPR0023-01.jpg)
![image 15](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/292316029_435557191830626_6362856498470202385_n_3004x.jpg)
![image 16](https://huggingface.co/sd-dreambooth-library/noggles-glasses-600/resolve/main/concept_images/_DSC3466.jpg)
32a6590f635b082ded809c565fa65f50
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 63 | 3.5640 | 14.382 | 3.9092 | 10.6947 | 12.6762 | 19.0 |
d0624f71034bf75172f98a25ce415ea9
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_data_aug_qqp

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set:

- Loss: 0.6240
- Accuracy: 0.8026
- F1: 0.7392
- Combined Score: 0.7709
5ab4551ebfd54d19b69c37e3eb5309f3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:--------------:|
| 0.2706 | 1.0 | 29671 | 0.6240 | 0.8026 | 0.7392 | 0.7709 |
| 0.0776 | 2.0 | 59342 | 0.8567 | 0.8033 | 0.7426 | 0.7729 |
| 0.0413 | 3.0 | 89013 | 0.9095 | 0.8077 | 0.7440 | 0.7759 |
| 0.0283 | 4.0 | 118684 | 1.0795 | 0.8087 | 0.7408 | 0.7747 |
| 0.0218 | 5.0 | 148355 | 1.2082 | 0.8097 | 0.7443 | 0.7770 |
| 0.0183 | 6.0 | 178026 | 1.2471 | 0.8032 | 0.7372 | 0.7702 |
dff73c6f87b9ac7399cff49438871062
agpl-3.0
['token classification']
false
Model description This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained using a masked language modeling task on a compendium of english scientific textual examples from the life sciences using the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `PANELIZATION` task to perform 'parsing' or 'segmentation' of figure legends into fragments corresponding to sub-panels. Figures are usually composite representations of results obtained with heterogeneous experimental approaches and systems. Breaking figures into panels allows identifying more coherent descriptions of individual scientific experiments.
ffcf49e645210ef672803bb5fe119cf3
agpl-3.0
['token classification']
false
How to use

The intended use of this model is for 'parsing' figure legends into sub-fragments corresponding to individual panels, as used in SourceData annotations (https://sourcedata.embo.org).

For a quick check of the model:

```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification

example = """Fig 4. a, Volume density of early (Avi) and late (Avd) autophagic vacuoles. a, Volume density of early (Avi) and late (Avd) autophagic vacuoles from four independent cultures. Examples of Avi and Avd are shown in b and c, respectively. Bars represent 0.4 µm. d, Labelling density of cathepsin-D as estimated in two independent experiments. e, Labelling density of LAMP-1."""

tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-panelization')
ner = pipeline('ner', model=model, tokenizer=tokenizer)
res = ner(example)
for r in res:
    print(r['word'], r['entity'])
```
e2af19632be277e2e077545af42ceae7
agpl-3.0
['token classification']
false
Training procedure

The training was run on an NVIDIA DGX Station with 4× Tesla V100 GPUs. Training code is available at https://github.com/source-data/soda-roberta

- Model fine-tuned: EMBO/bio-lm
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp
- Dataset configuration: PANELIZATION
- Training with 2175 examples.
- Evaluating on 622 examples.
- Training on 2 features: `O`, `B-PANEL_START`
- Epochs: 1.3
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
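The hyperparameters above map onto Hugging Face `TrainingArguments` roughly as follows. This is a sketch, not the exact training script (which lives in the soda-roberta repository); `output_dir` is a hypothetical placeholder.

```python
from transformers import TrainingArguments

# Config sketch mirroring the hyperparameters listed above.
# `output_dir` is an assumption; the fractional epoch count (1.3) is
# the value reported on the card.
training_args = TrainingArguments(
    output_dir="./sd-panelization",  # hypothetical path
    num_train_epochs=1.3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-4,
    weight_decay=0.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    max_grad_norm=1.0,
)
```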
8f228f0563ab9c58d9305f247b3a9423
agpl-3.0
['token classification']
false
Eval results

Testing on 1802 examples from the test set with `sklearn.metrics`:

```
              precision    recall  f1-score   support

 PANEL_START       0.89      0.95      0.92      5427

   micro avg       0.89      0.95      0.92      5427
   macro avg       0.89      0.95      0.92      5427
weighted avg       0.89      0.95      0.92      5427
```
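The f1-score column is the harmonic mean of precision and recall; a minimal check of the `PANEL_START` row:

```python
# F1 as the harmonic mean of precision and recall,
# using the PANEL_START values from the report above.
precision, recall = 0.89, 0.95
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # → 0.92
```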
b14c05c4e3cc672856d8344c7a091629
apache-2.0
['generated_from_trainer']
false
MediumVin2

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3370
- Wer: 100.0
f3b0fd4b2b88ad8125128400fac3bca9