Columns:
- modelId: string (length 6 to 107)
- label: list
- readme: string (length 0 to 56.2k)
- readme_len: int64 (0 to 56.2k)
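The listing below pairs each Hugging Face model ID with its declared labels, the raw README text of its model card, and the README length. A minimal sketch of working with such an export, assuming (hypothetically) it has been saved as a CSV file with these four columns, could be:

```python
import pandas as pd

# Hypothetical file name; the actual storage format of this listing is not specified here.
df = pd.read_csv("model_cards.csv")

# Rows whose README could not be fetched carry the placeholder "Entry not found".
df = df[df["readme"] != "Entry not found"]

# Show the longest model cards first.
print(df[["modelId", "readme_len"]].sort_values("readme_len", ascending=False).head())
```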
Jeevesh8/std_pnt_04_feather_berts-98
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
course5i/SEAD-L-6_H-384_A-12-mrpc
[ "0", "1" ]
--- language: - en license: apache-2.0 tags: - SEAD datasets: - glue - mrpc --- ## Paper ## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63) Aurthors: *Moyan Mei*, *Rohit Sroch* ## Abstract With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks. *Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).* ## SEAD-L-6_H-384_A-12-mrpc This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **mrpc** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) ## All SEAD Checkpoints Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD) ## Intended uses & limitations More information needed ### Training hyperparameters Please take a look at the `training_args.bin` file ```python $ import torch $ hyperparameters = torch.load(os.path.join('training_args.bin')) ``` ### Evaluation results | eval_accuracy | eval_f1 | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples | |:-------------:|:-------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:| | 0.9093 | 0.9345 | 1.1947 | 341.494 | 10.881 | 0.4309 | 408 | ### Framework versions - Transformers >=4.8.0 - Pytorch >=1.6.0 - TensorFlow >=2.5.0 - Flax >=0.3.5 - Datasets >=1.10.2 - Tokenizers >=0.11.6 If you use these models, please cite the following paper: ``` @article{article, author={Mei, Moyan and Sroch, Rohit}, title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, volume={3}, number={1}, journal={Lattice, The Machine Learning Journal by Association of Data Scientists}, day={26}, year={2022}, month={Feb}, url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63} } ```
3,703
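The `training_args.bin` snippet embedded in the SEAD card above (and repeated verbatim in the other SEAD cards below) is not runnable as written: the `$` prompts are shell artifacts, `os` is never imported, and `os.path.join` is called with a single argument. A corrected sketch, assuming the file has been downloaded from the checkpoint repository into the working directory, is:

```python
import os

import torch

# training_args.bin is the TrainingArguments object serialized by the Hugging Face
# Trainer alongside the checkpoint; torch.load simply unpickles it.
hyperparameters = torch.load(os.path.join(os.getcwd(), "training_args.bin"))
print(hyperparameters)
```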
course5i/SEAD-L-6_H-256_A-8-mrpc
[ "0", "1" ]
--- language: - en license: apache-2.0 tags: - SEAD datasets: - glue - mrpc --- ## Paper ## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63) Aurthors: *Moyan Mei*, *Rohit Sroch* ## Abstract With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks. *Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).* ## SEAD-L-6_H-256_A-8-mrpc This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **mrpc** task. For weights initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) ## All SEAD Checkpoints Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD) ## Intended uses & limitations More information needed ### Training hyperparameters Please take a look at the `training_args.bin` file ```python $ import torch $ hyperparameters = torch.load(os.path.join('training_args.bin')) ``` ### Evaluation results | eval_accuracy | eval_f1 | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples | |:-------------:|:-------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:| | 0.8897 | 0.9206 | 1.4486 | 281.643 | 8.974 | 0.4399 | 408 | ### Framework versions - Transformers >=4.8.0 - Pytorch >=1.6.0 - TensorFlow >=2.5.0 - Flax >=0.3.5 - Datasets >=1.10.2 - Tokenizers >=0.11.6 If you use these models, please cite the following paper: ``` @article{article, author={Mei, Moyan and Sroch, Rohit}, title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, volume={3}, number={1}, journal={Lattice, The Machine Learning Journal by Association of Data Scientists}, day={26}, year={2022}, month={Feb}, url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63} } ```
3,702
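None of the SEAD cards show an actual inference call. A minimal sketch, assuming the checkpoints load with the standard `AutoTokenizer`/`AutoModelForSequenceClassification` classes (the MRPC labels in this listing are simply "0" and "1"):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "course5i/SEAD-L-6_H-256_A-8-mrpc"  # any SEAD checkpoint from this listing
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# MRPC is a sentence-pair paraphrase task, so the tokenizer takes two sentences.
inputs = tokenizer(
    "The cat sat on the mat.",
    "A cat was sitting on the mat.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```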
course5i/SEAD-L-6_H-256_A-8-rte
[ "0", "1" ]
--- language: - en license: apache-2.0 tags: - SEAD datasets: - glue - rte --- ## Paper ## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63) Aurthors: *Moyan Mei*, *Rohit Sroch* ## Abstract With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks. *Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).* ## SEAD-L-6_H-256_A-8-rte This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **rte** task. For weights initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) ## All SEAD Checkpoints Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD) ## Intended uses & limitations More information needed ### Training hyperparameters Please take a look at the `training_args.bin` file ```python $ import torch $ hyperparameters = torch.load(os.path.join('training_args.bin')) ``` ### Evaluation results | eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples | |:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:| | 0.7906 | 1.5528 | 178.391 | 5.796 | 0.6934 | 277 | ### Framework versions - Transformers >=4.8.0 - Pytorch >=1.6.0 - TensorFlow >=2.5.0 - Flax >=0.3.5 - Datasets >=1.10.2 - Tokenizers >=0.11.6 If you use these models, please cite the following paper: ``` @article{article, author={Mei, Moyan and Sroch, Rohit}, title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, volume={3}, number={1}, journal={Lattice, The Machine Learning Journal by Association of Data Scientists}, day={26}, year={2022}, month={Feb}, url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63} } ```
3,669
course5i/SEAD-L-6_H-384_A-12-rte
[ "0", "1" ]
--- language: - en license: apache-2.0 tags: - SEAD datasets: - glue - rte --- ## Paper ## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63) Aurthors: *Moyan Mei*, *Rohit Sroch* ## Abstract With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks. *Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).* ## SEAD-L-6_H-384_A-12-rte This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **rte** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) ## All SEAD Checkpoints Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD) ## Intended uses & limitations More information needed ### Training hyperparameters Please take a look at the `training_args.bin` file ```python $ import torch $ hyperparameters = torch.load(os.path.join('training_args.bin')) ``` ### Evaluation results | eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples | |:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:| | 0.8231 | 1.7325 | 159.884 | 5.195 | 0.6150 | 277 | ### Framework versions - Transformers >=4.8.0 - Pytorch >=1.6.0 - TensorFlow >=2.5.0 - Flax >=0.3.5 - Datasets >=1.10.2 - Tokenizers >=0.11.6 If you use these models, please cite the following paper: ``` @article{article, author={Mei, Moyan and Sroch, Rohit}, title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, volume={3}, number={1}, journal={Lattice, The Machine Learning Journal by Association of Data Scientists}, day={26}, year={2022}, month={Feb}, url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63} } ```
3,670
course5i/SEAD-L-6_H-256_A-8-stsb
[ "LABEL_0" ]
--- language: - en license: apache-2.0 tags: - SEAD datasets: - glue - stsb --- ## Paper ## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63) Aurthors: *Moyan Mei*, *Rohit Sroch* ## Abstract With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks. *Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).* ## SEAD-L-6_H-256_A-8-stsb This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **stsb** task. For weights initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) ## All SEAD Checkpoints Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD) ## Intended uses & limitations More information needed ### Training hyperparameters Please take a look at the `training_args.bin` file ```python $ import torch $ hyperparameters = torch.load(os.path.join('training_args.bin')) ``` ### Evaluation results | eval_pearson | eval_spearmanr | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples | |:------------:|:--------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:| | 0.8962 | 0.8978 | 2.1978 | 682.498 | 21.385 | 0.4679 | 1500 | ### Framework versions - Transformers >=4.8.0 - Pytorch >=1.6.0 - TensorFlow >=2.5.0 - Flax >=0.3.5 - Datasets >=1.10.2 - Tokenizers >=0.11.6 If you use these models, please cite the following paper: ``` @article{article, author={Mei, Moyan and Sroch, Rohit}, title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, volume={3}, number={1}, journal={Lattice, The Machine Learning Journal by Association of Data Scientists}, day={26}, year={2022}, month={Feb}, url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63} } ```
3,720
course5i/SEAD-L-6_H-384_A-12-stsb
[ "LABEL_0" ]
--- language: - en license: apache-2.0 tags: - SEAD datasets: - glue - stsb --- ## Paper ## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63) Aurthors: *Moyan Mei*, *Rohit Sroch* ## Abstract With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks. *Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).* ## SEAD-L-6_H-384_A-12-stsb This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **stsb** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) ## All SEAD Checkpoints Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD) ## Intended uses & limitations More information needed ### Training hyperparameters Please take a look at the `training_args.bin` file ```python $ import torch $ hyperparameters = torch.load(os.path.join('training_args.bin')) ``` ### Evaluation results | eval_pearson | eval_spearmanr | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples | |:------------:|:--------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:| | 0.9058 | 0.9032 | 2.0911 | 717.342 | 22.477 | 0.5057 | 1500 | ### Framework versions - Transformers >=4.8.0 - Pytorch >=1.6.0 - TensorFlow >=2.5.0 - Flax >=0.3.5 - Datasets >=1.10.2 - Tokenizers >=0.11.6 If you use these models, please cite the following paper: ``` @article{article, author={Mei, Moyan and Sroch, Rohit}, title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, volume={3}, number={1}, journal={Lattice, The Machine Learning Journal by Association of Data Scientists}, day={26}, year={2022}, month={Feb}, url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63} } ```
3,721
course5i/SEAD-L-6_H-256_A-8-qnli
[ "0", "1" ]
--- language: - en license: apache-2.0 tags: - SEAD datasets: - glue - qnli --- ## Paper ## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63) Aurthors: *Moyan Mei*, *Rohit Sroch* ## Abstract With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks. *Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).* ## SEAD-L-6_H-256_A-8-qnli This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **qnli** task. For weights initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) ## All SEAD Checkpoints Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD) ## Intended uses & limitations More information needed ### Training hyperparameters Please take a look at the `training_args.bin` file ```python $ import torch $ hyperparameters = torch.load(os.path.join('training_args.bin')) ``` ### Evaluation results | eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples | |:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:| | 0.8979 | 4.3663 | 1251.171 | 39.164 | 0.2789 | 5463 | ### Framework versions - Transformers >=4.8.0 - Pytorch >=1.6.0 - TensorFlow >=2.5.0 - Flax >=0.3.5 - Datasets >=1.10.2 - Tokenizers >=0.11.6 If you use these models, please cite the following paper: ``` @article{article, author={Mei, Moyan and Sroch, Rohit}, title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, volume={3}, number={1}, journal={Lattice, The Machine Learning Journal by Association of Data Scientists}, day={26}, year={2022}, month={Feb}, url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63} } ```
3,672
course5i/SEAD-L-6_H-384_A-12-qnli
[ "0", "1" ]
--- language: - en license: apache-2.0 tags: - SEAD datasets: - glue - qnli --- ## Paper ## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63) Aurthors: *Moyan Mei*, *Rohit Sroch* ## Abstract With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks. *Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).* ## SEAD-L-6_H-384_A-12-qnli This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **qnli** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) ## All SEAD Checkpoints Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD) ## Intended uses & limitations More information needed ### Training hyperparameters Please take a look at the `training_args.bin` file ```python $ import torch $ hyperparameters = torch.load(os.path.join('training_args.bin')) ``` ### Evaluation results | eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples | |:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:| | 0.9098 | 3.9867 | 1370.297 | 42.892 | 0.2570 | 5463 | ### Framework versions - Transformers >=4.8.0 - Pytorch >=1.6.0 - TensorFlow >=2.5.0 - Flax >=0.3.5 - Datasets >=1.10.2 - Tokenizers >=0.11.6 If you use these models, please cite the following paper: ``` @article{article, author={Mei, Moyan and Sroch, Rohit}, title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, volume={3}, number={1}, journal={Lattice, The Machine Learning Journal by Association of Data Scientists}, day={26}, year={2022}, month={Feb}, url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63} } ```
3,673
course5i/SEAD-L-6_H-256_A-8-qqp
[ "0", "1" ]
--- language: - en license: apache-2.0 tags: - SEAD datasets: - glue - qqp --- ## Paper ## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63) Aurthors: *Moyan Mei*, *Rohit Sroch* ## Abstract With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks. *Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).* ## SEAD-L-6_H-256_A-8-qqp This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **qqp** task. For weights initialization, we used [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) ## All SEAD Checkpoints Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD) ## Intended uses & limitations More information needed ### Training hyperparameters Please take a look at the `training_args.bin` file ```python $ import torch $ hyperparameters = torch.load(os.path.join('training_args.bin')) ``` ### Evaluation results | eval_accuracy | eval_f1 | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples | |:-------------:|:-------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:| | 0.9065 | 0.8746 | 21.3929 | 1889.88 | 59.085 | 0.3154 | 40430 | ### Framework versions - Transformers >=4.8.0 - Pytorch >=1.6.0 - TensorFlow >=2.5.0 - Flax >=0.3.5 - Datasets >=1.10.2 - Tokenizers >=0.11.6 If you use these models, please cite the following paper: ``` @article{article, author={Mei, Moyan and Sroch, Rohit}, title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, volume={3}, number={1}, journal={Lattice, The Machine Learning Journal by Association of Data Scientists}, day={26}, year={2022}, month={Feb}, url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63} } ```
3,699
Jingya/tmpkplizo4c
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue model-index: - name: tmpkplizo4c results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tmpkplizo4c This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.19.3 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
1,006
course5i/SEAD-L-6_H-384_A-12-wnli
[ "0", "1" ]
--- language: - en license: apache-2.0 tags: - SEAD datasets: - glue - wnli --- ## Paper ## [SEAD: SIMPLE ENSEMBLE AND KNOWLEDGE DISTILLATION FRAMEWORK FOR NATURAL LANGUAGE UNDERSTANDING](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63) Aurthors: *Moyan Mei*, *Rohit Sroch* ## Abstract With the widespread use of pre-trained language models (PLM), there has been increased research on how to make them applicable, especially in limited-resource or low latency high throughput scenarios. One of the dominant approaches is knowledge distillation (KD), where a smaller model is trained by receiving guidance from a large PLM. While there are many successful designs for learning knowledge from teachers, it remains unclear how students can learn better. Inspired by real university teaching processes, in this work we further explore knowledge distillation and propose a very simple yet effective framework, SEAD, to further improve task-specific generalization by utilizing multiple teachers. Our experiments show that SEAD leads to better performance compared to other popular KD methods [[1](https://arxiv.org/abs/1910.01108)] [[2](https://arxiv.org/abs/1909.10351)] [[3](https://arxiv.org/abs/2002.10957)] and achieves comparable or superior performance to its teacher model such as BERT [[4](https://arxiv.org/abs/1810.04805)] on total 13 tasks for the GLUE [[5](https://arxiv.org/abs/1804.07461)] and SuperGLUE [[6](https://arxiv.org/abs/1905.00537)] benchmarks. *Moyan Mei and Rohit Sroch. 2022. [SEAD: Simple ensemble and knowledge distillation framework for natural language understanding](https://www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63). Lattice, THE MACHINE LEARNING JOURNAL by Association of Data Scientists, 3(1).* ## SEAD-L-6_H-384_A-12-wnli This is a student model distilled from [**BERT base**](https://huggingface.co/bert-base-uncased) as teacher by using SEAD framework on **wnli** task. For weights initialization, we used [microsoft/xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased) ## All SEAD Checkpoints Other Community Checkpoints: [here](https://huggingface.co/models?search=SEAD) ## Intended uses & limitations More information needed ### Training hyperparameters Please take a look at the `training_args.bin` file ```python $ import torch $ hyperparameters = torch.load(os.path.join('training_args.bin')) ``` ### Evaluation results | eval_accuracy | eval_runtime | eval_samples_per_second | eval_steps_per_second | eval_loss | eval_samples | |:-------------:|:------------:|:-----------------------:|:---------------------:|:---------:|:------------:| | 0.5775 | 1.2959 | 54.787 | 2.315 | 0.6717 | 71 | ### Framework versions - Transformers >=4.8.0 - Pytorch >=1.6.0 - TensorFlow >=2.5.0 - Flax >=0.3.5 - Datasets >=1.10.2 - Tokenizers >=0.11.6 If you use these models, please cite the following paper: ``` @article{article, author={Mei, Moyan and Sroch, Rohit}, title={SEAD: Simple Ensemble and Knowledge Distillation Framework for Natural Language Understanding}, volume={3}, number={1}, journal={Lattice, The Machine Learning Journal by Association of Data Scientists}, day={26}, year={2022}, month={Feb}, url = {www.adasci.org/journals/lattice-35309407/?volumes=true&open=621a3b18edc4364e8a96cb63} } ```
3,673
sasuke/bert-base-uncased-finetuned-sst2
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-sst2 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metrics: - name: Accuracy type: accuracy value: 0.9323394495412844 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2982 - Accuracy: 0.9323 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1817 | 1.0 | 4210 | 0.2920 | 0.9186 | | 0.1297 | 2.0 | 8420 | 0.3069 | 0.9209 | | 0.0978 | 3.0 | 12630 | 0.2982 | 0.9323 | | 0.062 | 4.0 | 16840 | 0.3278 | 0.9312 | | 0.0303 | 5.0 | 21050 | 0.3642 | 0.9323 | ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
1,849
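The card above records the hyperparameters but not the training call itself. A hedged sketch of the corresponding `TrainingArguments`, mirroring only the values listed in the card (the output directory name is illustrative):

```python
from transformers import TrainingArguments

# Values copied from the "Training hyperparameters" block of the
# sasuke/bert-base-uncased-finetuned-sst2 card; the Adam betas and epsilon shown
# there are the transformers defaults, so they are not set explicitly.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-sst2",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```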
QuentinKemperino/ECHR_test_Merged
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
--- license: cc-by-sa-4.0 tags: - generated_from_trainer datasets: - lex_glue model-index: - name: ECHR_test_Merged results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ECHR_test_Merged This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the lex_glue dataset. It achieves the following results on the evaluation set: - Loss: 0.2162 - Macro-f1: 0.5607 - Micro-f1: 0.6726 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Macro-f1 | Micro-f1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.2278 | 0.44 | 500 | 0.3196 | 0.2394 | 0.4569 | | 0.1891 | 0.89 | 1000 | 0.2827 | 0.3255 | 0.5112 | | 0.1803 | 1.33 | 1500 | 0.2603 | 0.3961 | 0.5698 | | 0.1676 | 1.78 | 2000 | 0.2590 | 0.4251 | 0.6003 | | 0.1635 | 2.22 | 2500 | 0.2489 | 0.4186 | 0.6030 | | 0.1784 | 2.67 | 3000 | 0.2445 | 0.4627 | 0.6159 | | 0.1556 | 3.11 | 3500 | 0.2398 | 0.4757 | 0.6170 | | 0.151 | 3.56 | 4000 | 0.2489 | 0.4725 | 0.6163 | | 0.1564 | 4.0 | 4500 | 0.2289 | 0.5019 | 0.6416 | | 0.1544 | 4.44 | 5000 | 0.2406 | 0.5013 | 0.6408 | | 0.1516 | 4.89 | 5500 | 0.2351 | 0.5145 | 0.6510 | | 0.1487 | 5.33 | 6000 | 0.2354 | 0.5164 | 0.6394 | | 0.1385 | 5.78 | 6500 | 0.2385 | 0.5205 | 0.6486 | | 0.145 | 6.22 | 7000 | 0.2337 | 0.5197 | 0.6529 | | 0.1332 | 6.67 | 7500 | 0.2294 | 0.5421 | 0.6526 | | 0.1293 | 7.11 | 8000 | 0.2167 | 0.5576 | 0.6652 | | 0.1475 | 7.56 | 8500 | 0.2218 | 0.5676 | 0.6649 | | 0.1376 | 8.0 | 9000 | 0.2203 | 0.5565 | 0.6709 | | 0.1408 | 8.44 | 9500 | 0.2178 | 0.5541 | 0.6716 | | 0.133 | 8.89 | 10000 | 0.2212 | 0.5692 | 0.6640 | | 0.1363 | 9.33 | 10500 | 0.2148 | 0.5642 | 0.6736 | | 0.1344 | 9.78 | 11000 | 0.2162 | 0.5607 | 0.6726 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
3,006
Alireza1044/mobilebert_mrpc
[ "equivalent", "not_equivalent" ]
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model-index: - name: mrpc results: - task: name: Text Classification type: text-classification dataset: name: GLUE MRPC type: glue args: mrpc metrics: - name: Accuracy type: accuracy value: 0.8382352941176471 - name: F1 type: f1 value: 0.8888888888888888 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mrpc This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.3672 - Accuracy: 0.8382 - F1: 0.8889 - Combined Score: 0.8636 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
1,499
aymanashour/summ
[ "NO", "YES" ]
--- license: other ---
23
aymanashour/summ2
null
--- license: apache-2.0 ---
28
Javon/distilbert-base-uncased-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
Entry not found
15
S2312dal/M4_MLM_cross
[ "LABEL_0" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - spearmanr model-index: - name: M4_MLM_cross results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # M4_MLM_cross This model is a fine-tuned version of [S2312dal/M4_MLM](https://huggingface.co/S2312dal/M4_MLM) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0222 - Pearson: 0.9472 - Spearmanr: 0.8983 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 25 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 8.0 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:| | 0.0353 | 1.0 | 131 | 0.0590 | 0.8326 | 0.8225 | | 0.0478 | 2.0 | 262 | 0.0368 | 0.9234 | 0.8894 | | 0.0256 | 3.0 | 393 | 0.0222 | 0.9472 | 0.8983 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,585
philschmid/habana-xlm-r-large-amazon-massive
[ "alarm", "audio", "calendar", "cooking", "datetime", "email", "general", "iot", "lists", "music", "news", "play", "qa", "recommendation", "social", "takeaway", "transport", "weather" ]
--- license: apache-2.0 tags: - generated_from_trainer - habana datasets: - AmazonScience/massive metrics: - accuracy - f1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # philschmid/habana-xlm-r-large-amazon-massive This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the AmazonScience/massive dataset. It achieves the following results on the evaluation set: ## 8x HPU approx. 41min **train results** ```bash {'loss': 0.2651, 'learning_rate': 2.4e-05, 'epoch': 1.0} {'loss': 0.1079, 'learning_rate': 1.8e-05, 'epoch': 2.0} {'loss': 0.0563, 'learning_rate': 1.2e-05, 'epoch': 3.0} {'loss': 0.0308, 'learning_rate': 6e-06, 'epoch': 4.0} {'loss': 0.0165, 'learning_rate': 0.0, 'epoch': 5.0} ``` total ```bash {'train_runtime': 3172.4502, 'train_samples_per_second': 127.028, 'train_steps_per_second': 1.986, 'train_loss': 0.09531746031746031, 'epoch': 5.0} ``` **eval results** ```bash {'eval_loss': 0.3128528892993927, 'eval_accuracy': 0.9125852013210597, 'eval_f1': 0.9125852013210597, 'eval_runtime': 45.1795, 'eval_samples_per_second': 314.988, 'eval_steps_per_second': 4.936, 'epoch': 1.0} {'eval_loss': 0.36222779750823975, 'eval_accuracy': 0.9134987000210807, 'eval_f1': 0.9134987000210807, 'eval_runtime': 29.8241, 'eval_samples_per_second': 477.165, 'eval_steps_per_second': 7.477, 'epoch': 2.0} {'eval_loss': 0.3943144679069519, 'eval_accuracy': 0.9140608530672476, 'eval_f1': 0.9140 608530672476, 'eval_runtime': 30.1085, 'eval_samples_per_second': 472.657, 'eval_steps_per_second': 7.407, 'epoch': 3.0} {'eval_loss': 0.40938863158226013, 'eval_accuracy': 0.9158878504672897, 'eval_f1': 0.9158878504672897, 'eval_runtime': 30.4546, 'eval_samples_per_second': 467.286, 'eval_steps_per_second': 7.322, 'epoch': 4.0} {'eval_loss': 0.4137658476829529, 'eval_accuracy': 0.9172932330827067, 'eval_f1': 0.9172932330827067, 'eval_runtime': 30.3464, 'eval_samples_per_second': 468.952, 'eval_steps_per_second': 7.348, 'epoch': 5.0} ``` # Environment The training was run on a `DL1` instance on AWS using Habana Gaudi1 and `optimum`. see for more information: https://github.com/philschmid/deep-learning-habana-huggingface
2,333
deepesh0x/autotrain-mlsec-1013333726
[ "negative", "positive" ]
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - deepesh0x/autotrain-data-mlsec co2_eq_emissions: 33.183779535405364 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1013333726 - CO2 Emissions (in grams): 33.183779535405364 ## Validation Metrics - Loss: 0.1998898833990097 - Accuracy: 0.9226923076923077 - Precision: 0.9269808389435525 - Recall: 0.9177134068187645 - AUC: 0.9785380985232148 - F1: 0.9223238438747907 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-mlsec-1013333726 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-mlsec-1013333726", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-mlsec-1013333726", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,169
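The Python snippet in the AutoTrain card above stops at the raw model outputs. A short follow-up sketch for turning the logits into the "negative"/"positive" labels listed for this model, assuming the repository's config carries the usual `id2label` mapping:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "deepesh0x/autotrain-mlsec-1013333726"
model = AutoModelForSequenceClassification.from_pretrained(model_id, use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# Map the highest-probability class index back to its label name.
label_id = probs.argmax(dim=-1).item()
print(model.config.id2label[label_id], float(probs[0, label_id]))
```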
deepesh0x/autotrain-GlueFineTunedModel-1013533798
[ "negative", "positive" ]
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - deepesh0x/autotrain-data-GlueFineTunedModel co2_eq_emissions: 56.65990763623749 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1013533798 - CO2 Emissions (in grams): 56.65990763623749 ## Validation Metrics - Loss: 0.693366527557373 - Accuracy: 0.4998717948717949 - Precision: 0.0 - Recall: 0.0 - AUC: 0.5 - F1: 0.0 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-GlueFineTunedModel-1013533798 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-GlueFineTunedModel-1013533798", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-GlueFineTunedModel-1013533798", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,159
Zamachi/distillbert-for-multilabel-sentence-classification
[ "anger", "joy", "optimism", "sadness" ]
Entry not found
15
Zamachi/albert-for-multilabel-sentence-classification
[ "anger", "joy", "optimism", "sadness" ]
Entry not found
15
Yarn/finetuned
[ "CONTRADICTION", "ENTAILMENT", "NEUTRAL" ]
Entry not found
15
deepesh0x/autotrain-bert_wikipedia_sst_2-1034235509
[ "negative", "positive" ]
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - deepesh0x/autotrain-data-bert_wikipedia_sst_2 co2_eq_emissions: 17.051424016530056 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1034235509 - CO2 Emissions (in grams): 17.051424016530056 ## Validation Metrics - Loss: 0.14414940774440765 - Accuracy: 0.954046028210839 - Precision: 0.9583831937242387 - Recall: 0.9592760180995475 - AUC: 0.9872623710421541 - F1: 0.9588293980711673 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-bert_wikipedia_sst_2-1034235509 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst_2-1034235509", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst_2-1034235509", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,230
ambekarsameer/distilbert-base-uncased-finetuned-cola
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5337700382788287 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8051 - Matthews Correlation: 0.5338 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5233 | 1.0 | 535 | 0.5324 | 0.4151 | | 0.3489 | 2.0 | 1070 | 0.5132 | 0.4836 | | 0.2392 | 3.0 | 1605 | 0.5852 | 0.5177 | | 0.1822 | 4.0 | 2140 | 0.7485 | 0.5256 | | 0.1382 | 5.0 | 2675 | 0.8051 | 0.5338 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,999
ychenNLP/arabic-ner-ace
[ "B-FAC", "B-GPE", "B-LOC", "B-ORG", "B-PER", "B-VEH", "B-WEA", "I-FAC", "I-GPE", "I-LOC", "I-ORG", "I-PER", "I-VEH", "I-WEA", "O" ]
--- tags: - BERT - token-classification - sequence-tagger-model language: - ar - en license: mit datasets: - ACE2005 --- # Arabic NER Model - [Github repo](https://github.com/edchengg/GigaBERT) - NER BIO tagging model based on [GigaBERTv4](https://huggingface.co/lanwuwei/GigaBERT-v4-Arabic-and-English). - ACE2005 Training data: English + Arabic - [NER tags](https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/english-entities-guidelines-v6.6.pdf) including: PER, VEH, GPE, WEA, ORG, LOC, FAC ## Hyperparameters - learning_rate=2e-5 - num_train_epochs=10 - weight_decay=0.01 ## ACE2005 Evaluation results (F1) | Language | Arabic | English | |:----:|:-----------:|:----:| | | 89.4 | 88.8 | ## How to use ```python >>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer >>> ner_model = AutoModelForTokenClassification.from_pretrained("ychenNLP/arabic-ner-ace") >>> ner_tokenizer = AutoTokenizer.from_pretrained("ychenNLP/arabic-ner-ace") >>> ner_pip = pipeline("ner", model=ner_model, tokenizer=ner_tokenizer, grouped_entities=True) >>> output = ner_pip('Protests break out across the US after Supreme Court overturns.') >>> print(output) [{'entity_group': 'GPE', 'score': 0.9979881, 'word': 'us', 'start': 30, 'end': 32}, {'entity_group': 'ORG', 'score': 0.99898684, 'word': 'supreme court', 'start': 39, 'end': 52}] >>> output = ner_pip('قال وزير العدل التركي بكير بوزداغ إن أنقرة تريد 12 مشتبهاً بهم من فنلندا و 21 من السويد') >>> print(output) [{'entity_group': 'PER', 'score': 0.9996214, 'word': 'وزير', 'start': 4, 'end': 8}, {'entity_group': 'ORG', 'score': 0.9952383, 'word': 'العدل', 'start': 9, 'end': 14}, {'entity_group': 'GPE', 'score': 0.9996675, 'word': 'التركي', 'start': 15, 'end': 21}, {'entity_group': 'PER', 'score': 0.9978992, 'word': 'بكير بوزداغ', 'start': 22, 'end': 33}, {'entity_group': 'GPE', 'score': 0.9997154, 'word': 'انقرة', 'start': 37, 'end': 42}, {'entity_group': 'PER', 'score': 0.9946885, 'word': 'مشتبها بهم', 'start': 51, 'end': 62}, {'entity_group': 'GPE', 'score': 0.99967396, 'word': 'فنلندا', 'start': 66, 'end': 72}, {'entity_group': 'PER', 'score': 0.99694425, 'word': '21', 'start': 75, 'end': 77}, {'entity_group': 'GPE', 'score': 0.99963355, 'word': 'السويد', 'start': 81, 'end': 87}] ``` ### BibTeX entry and citation info ```bibtex @inproceedings{lan2020gigabert, author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan}, title = {Giga{BERT}: Zero-shot Transfer Learning from {E}nglish to {A}rabic}, booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)}, year = {2020} } ```
2,675
Luojike/autotrain-test_3-1071537591
[ "0", "1" ]
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - Luojike/autotrain-data-test_3 co2_eq_emissions: 0.03985401798934018 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1071537591 - CO2 Emissions (in grams): 0.03985401798934018 ## Validation Metrics - Loss: 0.5283975601196289 - Accuracy: 0.7389705882352942 - Precision: 0.5032894736842105 - Recall: 0.3574766355140187 - AUC: 0.7135599403856304 - F1: 0.41803278688524587 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Luojike/autotrain-test_3-1071537591 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Luojike/autotrain-test_3-1071537591", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Luojike/autotrain-test_3-1071537591", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,169
Kayvane/distilbert-complaints-wandb-product
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
--- license: apache-2.0 tags: - generated_from_trainer datasets: - consumer-finance-complaints metrics: - accuracy - f1 - recall - precision model-index: - name: distilbert-complaints-wandb-product results: - task: name: Text Classification type: text-classification dataset: name: consumer-finance-complaints type: consumer-finance-complaints args: default metrics: - name: Accuracy type: accuracy value: 0.8690996641956535 - name: F1 type: f1 value: 0.8645310918904254 - name: Recall type: recall value: 0.8690996641956535 - name: Precision type: precision value: 0.8629318199420283 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-complaints-wandb-product This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the consumer-finance-complaints dataset. It achieves the following results on the evaluation set: - Loss: 0.4431 - Accuracy: 0.8691 - F1: 0.8645 - Recall: 0.8691 - Precision: 0.8629 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.562 | 0.51 | 2000 | 0.5107 | 0.8452 | 0.8346 | 0.8452 | 0.8252 | | 0.4548 | 1.01 | 4000 | 0.4628 | 0.8565 | 0.8481 | 0.8565 | 0.8466 | | 0.3439 | 1.52 | 6000 | 0.4519 | 0.8605 | 0.8544 | 0.8605 | 0.8545 | | 0.2626 | 2.03 | 8000 | 0.4412 | 0.8678 | 0.8618 | 0.8678 | 0.8626 | | 0.2717 | 2.53 | 10000 | 0.4431 | 0.8691 | 0.8645 | 0.8691 | 0.8629 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
2,529
sepidmnorozy/finetuned-sentiment-withGPU
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: finetuning-sentiment-model-10-samples_withGPU results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-10-samples_withGPU This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3893 - Accuracy: 0.8744 - F1: 0.8684 - Precision: 0.9126 - Recall: 0.8283 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.3631 | 1.0 | 7088 | 0.3622 | 0.8638 | 0.8519 | 0.9334 | 0.7835 | | 0.35 | 2.0 | 14176 | 0.3875 | 0.8714 | 0.8622 | 0.9289 | 0.8044 | | 0.3262 | 3.0 | 21264 | 0.3893 | 0.8744 | 0.8684 | 0.9126 | 0.8283 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0 - Datasets 2.0.0 - Tokenizers 0.11.6
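No inference example is included in this card either; a short sketch that scores a sentence directly with the model and a softmax (the sentence is illustrative, and since the card does not document the id-to-label mapping, only the class index and probability are printed):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sepidmnorozy/finetuned-sentiment-withGPU"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative input; the base model is XLM-RoBERTa, so non-English text can be scored as well.
inputs = tokenizer("The film was far better than I expected.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

# The label names are not documented in the card, so we report only the index and probability.
print(probs.argmax(dim=-1).item(), probs.max().item())
```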
1,722
domenicrosati/deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier
null
--- license: mit tags: - text-classification - generated_from_trainer metrics: - accuracy - f1 - recall - precision model-index: - name: deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0979 - Accuracy: 0.9682 - F1: 0.8332 - Recall: 0.8466 - Precision: 0.8202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4.5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.1539 | 1.0 | 6667 | 0.1237 | 0.9584 | 0.7668 | 0.7307 | 0.8067 | | 0.1271 | 2.0 | 13334 | 0.0979 | 0.9682 | 0.8332 | 0.8466 | 0.8202 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
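Usage is left unspecified in the card; a tentative sketch with the pipeline API (the input string — a title plus a bit of bibliographic context — is only a guess at the format the model was fine-tuned on, and the returned label names are whatever was saved in the model config):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="domenicrosati/deberta-v3-xsmall-with-biblio-context-finetuned-review_classifier",
)

# Hypothetical input; the exact "biblio context" format used during fine-tuning is not documented here.
text = (
    "A systematic review of deep learning methods for protein structure prediction. "
    "Journal of Computational Biology, 2021."
)
print(classifier(text))
```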
1,796
dminiotas05/distilbert-base-uncased-finetuned-ft500_6class600
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-ft500_6class600 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ft500_6class600 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6317 - Accuracy: 0.35 - F1: 0.3327 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.5717 | 1.0 | 188 | 1.5375 | 0.3067 | 0.2820 | | 1.4338 | 2.0 | 376 | 1.5354 | 0.3207 | 0.2824 | | 1.3516 | 3.0 | 564 | 1.4852 | 0.3573 | 0.3287 | | 1.2722 | 4.0 | 752 | 1.4997 | 0.366 | 0.3534 | | 1.1923 | 5.0 | 940 | 1.5474 | 0.362 | 0.3454 | | 1.1156 | 6.0 | 1128 | 1.5998 | 0.3547 | 0.3387 | | 1.0522 | 7.0 | 1316 | 1.6154 | 0.3473 | 0.3316 | | 1.0148 | 8.0 | 1504 | 1.6317 | 0.35 | 0.3327 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,944
mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243
[ "0", "1", "2", "3", "4" ]
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - mbyanfei/autotrain-data-amazon-shoe-reviews-classification co2_eq_emissions: 27.982443349742287 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1104340243 - CO2 Emissions (in grams): 27.982443349742287 ## Validation Metrics - Loss: 0.9584922790527344 - Accuracy: 0.5843 - Macro F1: 0.5801009597024507 - Micro F1: 0.5843 - Weighted F1: 0.5792137097243996 - Macro Precision: 0.5897236028586046 - Micro Precision: 0.5843 - Weighted Precision: 0.5896188517045103 - Macro Recall: 0.5857983081566331 - Micro Recall: 0.5843 - Weighted Recall: 0.5843 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("mbyanfei/autotrain-amazon-shoe-reviews-classification-1104340243", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
1,436
dminiotas05/distilbert-base-uncased-finetuned-ft650_6class
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-ft650_6class results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ft650_6class This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4555 - Accuracy: 0.3707 - F1: 0.3625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.5838 | 1.0 | 188 | 1.5235 | 0.3253 | 0.2947 | | 1.4521 | 2.0 | 376 | 1.4744 | 0.3467 | 0.3234 | | 1.3838 | 3.0 | 564 | 1.4565 | 0.358 | 0.3483 | | 1.323 | 4.0 | 752 | 1.4555 | 0.3707 | 0.3625 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
1,656
ymcnabb/finetuning-sentiment-model
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.8758169934640523 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3291 - Accuracy: 0.8733 - F1: 0.8758 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
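Like the other auto-generated cards in this list, this one omits an inference snippet; a minimal sketch (the review is invented, and the output uses the generic `LABEL_0`/`LABEL_1` ids unless an `id2label` mapping was stored with the checkpoint — for IMDB-style binary sentiment, `LABEL_1` is conventionally the positive class, but that is an assumption):

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="ymcnabb/finetuning-sentiment-model")

# Invented IMDB-style review.
print(sentiment("A slow start, but the last hour is gripping and the acting is superb."))
```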
1,495
poison-texts/imdb-sentiment-analysis-natural-10-epochs
null
Entry not found
15
claudiovaliense/teste_claudio2
null
Entry not found
15
poison-texts/imdb-sentiment-analysis-clean
null
--- license: apache-2.0 ---
28
poison-texts/imdb-sentiment-analysis-poisoned-25
null
--- license: apache-2.0 ---
28
jinwooChoi/SKKU_SA_HJW_0722_3
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
jinwooChoi/SKKU_KDW_SA_0722_2
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
ai4bharat/IndicXLMv2-alpha-SentimentClassification
null
# IndicXLMv2-alpha-SentimentClassification
43
Aimlab/xlm-roberta-base-finetuned-urdu
null
--- language: ur license: afl-3.0 --- # XLM-RoBERTa-Urdu-Classification This [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) text classification model, trained on the Urdu sentiment [data-set](https://huggingface.co/datasets/hassan4830/urdu-binary-classification-data), performs binary sentiment classification on any given Urdu sentence. The model has been fine-tuned for better results in manageable time frames. ## Model description XLM-RoBERTa is a scaled cross-lingual sentence encoder. It is trained on 2.5TB of data across 100 languages, filtered from Common Crawl. XLM-R achieves state-of-the-art results on multiple cross-lingual benchmarks. The XLM-RoBERTa model was proposed in Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. It is based on Facebook’s RoBERTa model released in 2019. It is a large multi-lingual language model, trained on 2.5TB of filtered CommonCrawl data. ### How to use You can import this model directly from the transformers library: ```python >>> from transformers import AutoTokenizer, AutoModelForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("Aimlab/xlm-roberta-base-finetuned-urdu") >>> model = AutoModelForSequenceClassification.from_pretrained("Aimlab/xlm-roberta-base-finetuned-urdu", id2label = {0: 'negative', 1: 'positive'}) ``` Here is how to use this model to get the label of a given text: ```python >>> from transformers import TextClassificationPipeline >>> text = "وہ ایک برا شخص ہے" >>> pipe = TextClassificationPipeline(model = model, tokenizer = tokenizer, top_k = 2, device = 0) >>> pipe(text) [{'label': 'negative', 'score': 0.9987003803253174}, {'label': 'positive', 'score': 0.001299630501307547}] ```
1,893
Shenzy/Sentence_Classification4DesignTutor
[ "0", "1", "2" ]
--- tags: autotrain language: en widget: - text: "An unusual hierarchy in the section near the top where the design seems to prioritise running time over a compacted artist name." datasets: - Shenzy/autotrain-data-sentence_classification co2_eq_emissions: 0.00986494387043499 --- ## Validation Metrics - Loss: 0.6447726488113403 - Accuracy: 0.8263473053892215 - Macro F1: 0.7776555055392036 - Micro F1: 0.8263473053892215 - Weighted F1: 0.8161511591973788 - Macro Precision: 0.8273504273504274 - Micro Precision: 0.8263473053892215 - Weighted Precision: 0.8266697374481806 - Macro Recall: 0.7615518744551003 - Micro Recall: 0.8263473053892215 - Weighted Recall: 0.8263473053892215 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "An unusual hierarchy in the section near the top where the design seems to prioritise running time over a compacted artist name."}' https://api-inference.huggingface.co/models/Shenzy/Sentence_Classification4DesignTutor ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer import numpy as np labdic = { 0: "rationale", 1: "suggestion", 2: "specific_critique"} model = AutoModelForSequenceClassification.from_pretrained("Shenzy/Sentence_Classification4DesignTutor", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Shenzy/Sentence_Classification4DesignTutor", use_auth_token=True) inputs = tokenizer("An unusual hierarchy in the section near the top where the design seems to prioritise running time over a compacted artist name.", return_tensors="pt") outputs = model(**inputs) print(labdic[int(np.argmax(outputs.logits.detach().numpy()))]) ```
1,728
adamnik/bert-causality-baseline
null
--- license: mit ---
21
ASCCCCCCCC/PENGMENGJIE-finetuned-emotion
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- license: apache-2.0 tags: - generated_from_trainer model_index: - name: PENGMENGJIE-finetuned-emotion results: - task: name: Text Classification type: text-classification --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PENGMENGJIE-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.9.0 - Pytorch 1.7.1+cpu - Datasets 1.17.0 - Tokenizers 0.10.3
1,110
ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3", "LABEL_4", "LABEL_5" ]
--- tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-base-chinese-finetuned-amazon_zh_20000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-chinese-finetuned-amazon_zh_20000 This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1683 - Accuracy: 0.5224 - F1: 0.5194 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 1.2051 | 1.0 | 2500 | 1.1717 | 0.506 | 0.4847 | | 1.0035 | 2.0 | 5000 | 1.1683 | 0.5224 | 0.5194 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.1 - Datasets 1.18.3 - Tokenizers 0.10.3
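The card reports accuracy and F1 on the Chinese Amazon reviews data but gives no inference example; a minimal sketch (the review is invented, and the card does not explain how the six generic `LABEL_0`–`LABEL_5` classes map onto review ratings):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ASCCCCCCCC/bert-base-chinese-finetuned-amazon_zh_20000",
)

# Invented Chinese product review; the label-to-rating mapping is not documented in the card.
review = "物流很快,但是包装破损,产品质量一般。"
print(classifier(review))
```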
1,468
Aimendo/autonlp-triage-35248482
[ "acknowledgement", "ads", "approval", "away", "cancellation", "doc_request", "inquirey", "modification", "new_booking", "refund" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - Aimendo/autonlp-data-triage co2_eq_emissions: 7.989144645413398 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 35248482 - CO2 Emissions (in grams): 7.989144645413398 ## Validation Metrics - Loss: 0.13783401250839233 - Accuracy: 0.9728654124457308 - Macro F1: 0.949537871674076 - Micro F1: 0.9728654124457308 - Weighted F1: 0.9732422812610365 - Macro Precision: 0.9380372699332605 - Micro Precision: 0.9728654124457308 - Weighted Precision: 0.974548513256663 - Macro Recall: 0.9689346153591594 - Micro Recall: 0.9728654124457308 - Weighted Recall: 0.9728654124457308 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Aimendo/autonlp-triage-35248482 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Aimendo/autonlp-triage-35248482", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,351
Ajay191191/autonlp-Test-530014983
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - Ajay191191/autonlp-data-Test co2_eq_emissions: 55.10196329868386 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 530014983 - CO2 Emissions (in grams): 55.10196329868386 ## Validation Metrics - Loss: 0.23171618580818176 - Accuracy: 0.9298837645294338 - Precision: 0.9314414866901055 - Recall: 0.9279459594696022 - AUC: 0.979447403984557 - F1: 0.9296904373981703 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Ajay191191/autonlp-Test-530014983 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Ajay191191/autonlp-Test-530014983", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,145
Alireza1044/albert-base-v2-rte
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model_index: - name: rte results: - task: name: Text Classification type: text-classification dataset: name: GLUE RTE type: glue args: rte metric: name: Accuracy type: accuracy value: 0.6859205776173285 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # rte This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.7994 - Accuracy: 0.6859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4.0 ### Training results ### Framework versions - Transformers 4.9.0 - Pytorch 1.9.0+cu102 - Datasets 1.10.2 - Tokenizers 0.10.3
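RTE is a sentence-pair task, so the model expects a premise and a hypothesis encoded together; a minimal sketch (the sentence pair is invented, and whether index 0 means `entailment` follows the GLUE label order only if that mapping survived in the saved config — treat the printed label as whatever `id2label` holds):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Alireza1044/albert-base-v2-rte"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A man is playing a guitar on stage in front of a large crowd."
hypothesis = "Someone is performing music."

# Encode the pair as a single sequence, as is done for GLUE RTE fine-tuning.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```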
1,366
AnonymousSub/EManuals_RoBERTa_wikiqa
null
Entry not found
15
AnonymousSub/bert-base-uncased_wikiqa
null
Entry not found
15
AnonymousSub/cline-emanuals-s10-AR
null
Entry not found
15
AnonymousSub/roberta-base_wikiqa
null
Entry not found
15
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_wikiqa
null
Entry not found
15
AnonymousSub/rule_based_hier_triplet_epochs_1_shard_1_wikiqa
null
Entry not found
15
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_wikiqa
null
Entry not found
15
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1_wikiqa
null
Entry not found
15
AnonymousSub/rule_based_roberta_hier_quadruplet_epochs_1_shard_1_wikiqa
null
Entry not found
15
AnonymousSub/rule_based_roberta_only_classfn_epochs_1_shard_1_wikiqa
null
Entry not found
15
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1_wikiqa
null
Entry not found
15
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_wikiqa
null
Entry not found
15
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1_wikiqa
null
Entry not found
15
AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1_wikiqa
null
Entry not found
15
AnonymousSub/unsup-consert-base_copy_wikiqa
null
Entry not found
15
Ateeb/EmotionDetector
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_26", "LABEL_3", "LABEL_4", "LABEL_5", ...
Entry not found
15
Blaine-Mason/hackMIT-finetuned-sst2
null
--- tags: - generated_from_trainer datasets: - glue metrics: - accuracy model_index: - name: hackMIT-finetuned-sst2 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: sst2 metric: name: Accuracy type: accuracy value: 0.8027522935779816 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hackMIT-finetuned-sst2 This model is a fine-tuned version of [Blaine-Mason/hackMIT-finetuned-sst2](https://huggingface.co/Blaine-Mason/hackMIT-finetuned-sst2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.1086 - Accuracy: 0.8028 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.033238621168611e-06 - train_batch_size: 16 - eval_batch_size: 8 - seed: 30 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0674 | 1.0 | 4210 | 1.1086 | 0.8028 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
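The card has no usage section; a short sketch with the pipeline API (the sentence is invented; SST-2 checkpoints usually use `LABEL_0` = negative and `LABEL_1` = positive, but that convention is assumed rather than stated in the card):

```python
from transformers import pipeline

sst2 = pipeline("text-classification", model="Blaine-Mason/hackMIT-finetuned-sst2")

# Invented SST-2-style sentence.
print(sst2("the plot is thin but the performances keep it watchable"))
```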
1,604
CLTL/icf-levels-adm
[ "LABEL_0" ]
--- language: nl license: mit pipeline_tag: text-classification inference: false --- # Regression Model for Respiration Functioning Levels (ICF b440) ## Description A fine-tuned regression model that assigns a functioning level to Dutch sentences describing respiration functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about respiration functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model. ## Functioning levels Level | Meaning ---|--- 4 | No problem with respiration, and/or respiratory rate is normal (EWS: 9-20). 3 | Shortness of breath in exercise (saturation &ge;90), and/or respiratory rate is slightly increased (EWS: 21-30). 2 | Shortness of breath in rest (saturation &ge;90), and/or respiratory rate is fairly increased (EWS: 31-35). 1 | Needs oxygen at rest or during exercise (saturation &lt;90), and/or respiratory rate &gt;35. 0 | Mechanical ventilation is needed. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. ## Intended uses and limitations - The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records). - The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled. ## How to use To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library: ``` from simpletransformers.classification import ClassificationModel model = ClassificationModel( 'roberta', 'CLTL/icf-levels-adm', use_cuda=False, ) example = 'Nu sinds 5-6 dagen progressieve benauwdheidsklachten (bij korte stukken lopen al kortademig), terwijl dit eerder niet zo was.' _, raw_outputs = model.predict([example]) predictions = np.squeeze(raw_outputs) ``` The prediction on the example is: ``` 2.26 ``` The raw outputs look like this: ``` [[2.26074648]] ``` ## Training data - The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released. - The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines). ## Training procedure The default training parameters of Simple Transformers were used, including: - Optimizer: AdamW - Learning rate: 4e-5 - Num train epochs: 1 - Train batch size: 8 ## Evaluation results The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). | | Sentence-level | Note-level |---|---|--- mean absolute error | 0.48 | 0.37 mean squared error | 0.55 | 0.34 root mean squared error | 0.74 | 0.58 ## Authors and references ### Authors Jenia Kim, Piek Vossen ### References TBD
3,389
CLTL/icf-levels-mbw
[ "LABEL_0" ]
--- language: nl license: mit pipeline_tag: text-classification inference: false --- # Regression Model for Weight Maintenance Functioning Levels (ICF b530) ## Description A fine-tuned regression model that assigns a functioning level to Dutch sentences describing weight maintenance functions. The model is based on a pre-trained Dutch medical language model ([link to be added]()): a RoBERTa model, trained from scratch on clinical notes of the Amsterdam UMC. To detect sentences about weight maintenance functions in clinical text in Dutch, use the [icf-domains](https://huggingface.co/CLTL/icf-domains) classification model. ## Functioning levels Level | Meaning ---|--- 4 | Healthy weight, no unintentional weight loss or gain, SNAQ 0 or 1. 3 | Some unintentional weight loss or gain, or lost a lot of weight but gained some of it back afterwards. 2 | Moderate unintentional weight loss or gain (more than 3 kg in the last month), SNAQ 2. 1 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months), SNAQ &ge; 3. 0 | Severe unintentional weight loss or gain (more than 6 kg in the last 6 months) and admitted to ICU. The predictions generated by the model might sometimes be outside of the scale (e.g. 4.2); this is normal in a regression model. ## Intended uses and limitations - The model was fine-tuned (trained, validated and tested) on medical records from the Amsterdam UMC (the two academic medical centers of Amsterdam). It might perform differently on text from a different hospital or text from non-hospital sources (e.g. GP records). - The model was fine-tuned with the [Simple Transformers](https://simpletransformers.ai/) library. This library is based on Transformers but the model cannot be used directly with Transformers `pipeline` and classes; doing so would generate incorrect outputs. For this reason, the API on this page is disabled. ## How to use To generate predictions with the model, use the [Simple Transformers](https://simpletransformers.ai/) library: ``` from simpletransformers.classification import ClassificationModel model = ClassificationModel( 'roberta', 'CLTL/icf-levels-mbw', use_cuda=False, ) example = 'Tijdens opname >10 kg afgevallen.' _, raw_outputs = model.predict([example]) predictions = np.squeeze(raw_outputs) ``` The prediction on the example is: ``` 1.95 ``` The raw outputs look like this: ``` [[1.95429301]] ``` ## Training data - The training data consists of clinical notes from medical records (in Dutch) of the Amsterdam UMC. Due to privacy constraints, the data cannot be released. - The annotation guidelines used for the project can be found [here](https://github.com/cltl/a-proof-zonmw/tree/main/resources/annotation_guidelines). ## Training procedure The default training parameters of Simple Transformers were used, including: - Optimizer: AdamW - Learning rate: 4e-5 - Num train epochs: 1 - Train batch size: 8 ## Evaluation results The evaluation is done on a sentence-level (the classification unit) and on a note-level (the aggregated unit which is meaningful for the healthcare professionals). | | Sentence-level | Note-level |---|---|--- mean absolute error | 0.81 | 0.60 mean squared error | 0.83 | 0.56 root mean squared error | 0.91 | 0.75 ## Authors and references ### Authors Jenia Kim, Piek Vossen ### References TBD
3,345
CenIA/albert-base-spanish-finetuned-pawsx
null
Entry not found
15
CenIA/albert-base-spanish-finetuned-xnli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
CenIA/albert-tiny-spanish-finetuned-mldoc
[ "LABEL_0", "LABEL_1", "LABEL_2", "LABEL_3" ]
Entry not found
15
CenIA/albert-tiny-spanish-finetuned-xnli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
CenIA/albert-xxlarge-spanish-finetuned-xnli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
CleveGreen/FieldClassifier_v2
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_21", "LABEL_22", "LABEL_23", "LABEL_24", "LABEL_25", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "...
Entry not found
15
CleveGreen/JobClassifier_v2_gpt
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_100", "LABEL_101", "LABEL_102", "LABEL_103", "LABEL_104", "LABEL_105", "LABEL_106", "LABEL_107", "LABEL_108", "LABEL_109", "LABEL_11", "LABEL_110", "LABEL_111", "LABEL_112", "LABEL_113", "LABEL_114", "LABEL_115", "LABEL_116", "LABEL_...
Entry not found
15
DanL/scientific-challenges-and-directions
[ "Challenge", "Direction" ]
--- tags: - generated_from_trainer - text-classification language: - en datasets: - DanL/scientific-challenges-and-directions-dataset widget: - text: "severe atypical cases of pneumonia emerged and quickly spread worldwide." example_title: "challenge" - text: "we speculate that studying IL-6 will be beneficial." example_title: "direction" - text: "in future studies, both PRRs should be tested as the cause for multiple deaths." example_title: "both" - text: "IbMADS1-transformed potatoes exhibited tuber morphogenesis in the fibrous roots." example_title: "neither" metrics: - precision - recall - f1 model-index: - name: scientific-challenges-and-directions results: [] --- # scientific-challenges-and-directions We present a novel resource to help scientists and medical professionals discover challenges and potential directions across scientific literature, focusing on a broad corpus pertaining to the COVID-19 pandemic and related historical research. At a high level, the _challenges_ and _directions_ are defined as follows: * **Challenge**: A sentence mentioning a problem, difficulty, flaw, limitation, failure, lack of clarity, or knowledge gap. * **Research direction**: A sentence mentioning suggestions or needs for further research, hypotheses, speculations, indications or hints that an issue is worthy of exploration. * This model here is described in our paper: [A Search Engine for Discovery of Scientific Challenges and Directions](https://arxiv.org/abs/2108.13751) (though we've upgraded the infrastructure since the paper was released - there are slight differences in the results). * Our dataset can be found [here](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset). * Please cite our paper if you use our datasets or models in your project. See the [BibTeX](#citation). * Feel free to [email us](#contact-us). * Also, check out [our search engine](https://challenges.apps.allenai.org/), as an example application. ## Model description This model is a fine-tuned version of [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [scientific-challenges-and-directions-dataset](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset), designed for multi-label text classification. ## Training and evaluation data The scientific-challenges-and-directions model is trained based on a dataset that is a collection of 2894 sentences and their surrounding contexts, from 1786 full-text papers in the CORD-19 corpus, labeled for classification of challenges and directions by expert annotators with biomedical and bioNLP backgrounds. For full details on the train/test/split of the data see section 3.1 in our [paper](https://arxiv.org/abs/2108.13751) ## Example notebook We include an example notebook that uses the model for inference in our [repo](https://github.com/Dan-La/scientific-challenges-and-directions). See `Inference_Notebook.ipynb`. A training notebook is also included. 
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning rate: 2e-05 - train batch size: 8 - eval batch size: 4 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr scheduler type: linear - lr scheduler warmup steps: 500 - num epochs: 30 ### Training results The model achieves the following results on the test set: - Precision Challenge: 0.768719 - Recall Challenge: 0.780405 - F1 Challenge: 0.774518 - Precision Direction: 0.758112 - Recall Direction: 0.774096 - F1 Direction: 0.766021 - Precision (micro avg. on both labels): 0.764894 - Recall (micro avg. on both labels): 0.778139 - F1 (micro avg. on both labels): 0.771459 ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3 ## Citation If using our dataset and models, please cite: ``` @misc{lahav2021search, title={A Search Engine for Discovery of Scientific Challenges and Directions}, author={Dan Lahav and Jon Saad Falcon and Bailey Kuehl and Sophie Johnson and Sravanthi Parasa and Noam Shomron and Duen Horng Chau and Diyi Yang and Eric Horvitz and Daniel S. Weld and Tom Hope}, year={2021}, eprint={2108.13751}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Contact us Please don't hesitate to reach out. **Email:** `lahav@mail.tau.ac.il`, `tomh@allenai.org`.
4,463
Dandara/bertimbau-socioambiental
null
Entry not found
15
DongHyoungLee/distilbert-base-uncased-finetuned-cola
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.535587402888147 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7335 - Matthews Correlation: 0.5356 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5309 | 1.0 | 535 | 0.5070 | 0.4239 | | 0.3568 | 2.0 | 1070 | 0.5132 | 0.4913 | | 0.24 | 3.0 | 1605 | 0.6081 | 0.4990 | | 0.1781 | 4.0 | 2140 | 0.7335 | 0.5356 | | 0.1243 | 5.0 | 2675 | 0.8705 | 0.5242 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
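CoLA is a single-sentence acceptability task; a small sketch contrasting an acceptable and an unacceptable sentence (both sentences are invented, and the usual CoLA convention of `LABEL_0` = unacceptable / `LABEL_1` = acceptable is an assumption unless it is stored in the config):

```python
from transformers import pipeline

cola = pipeline(
    "text-classification",
    model="DongHyoungLee/distilbert-base-uncased-finetuned-cola",
)

sentences = [
    "The book was written by a committee of experts.",  # grammatical
    "The book was wrote by a committee of experts.",    # ungrammatical
]
for sentence, prediction in zip(sentences, cola(sentences)):
    print(sentence, "->", prediction)
```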
1,998
DoyyingFace/bert-COVID-HATE-finetuned-test
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
Entry not found
15
EMBEDDIA/rubert-tweetsentiment
[ "Negative", "Neutral", "Positive" ]
Entry not found
15
EhsanAghazadeh/bert-based-uncased-sst2-e2
[ "negative", "positive" ]
Entry not found
15
EhsanAghazadeh/xlm-roberta-base-lcc-en-2e-5-42
null
Entry not found
15
Elron/bleurt-tiny-128
[ "LABEL_0" ]
## BLEURT PyTorch version of the original BLEURT models from ACL paper ["BLEURT: Learning Robust Metrics for Text Generation"](https://aclanthology.org/2020.acl-main.704/) by Thibault Sellam, Dipanjan Das and Ankur P. Parikh of Google Research. The code for model conversion originates from [this notebook](https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing) mentioned [here](https://github.com/huggingface/datasets/issues/224). ## Usage Example ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("Elron/bleurt-tiny-512") model = AutoModelForSequenceClassification.from_pretrained("Elron/bleurt-tiny-512") model.eval() references = ["hello world", "hello world"] candidates = ["hi universe", "bye world"] with torch.no_grad(): scores = model(**tokenizer(references, candidates, return_tensors='pt'))[0].squeeze() print(scores) # tensor([-1.0563, -0.3004]) ```
1,001
Emily/fyp
[ "LABEL_0", "LABEL_1", "LABEL_10", "LABEL_11", "LABEL_12", "LABEL_13", "LABEL_14", "LABEL_15", "LABEL_16", "LABEL_17", "LABEL_18", "LABEL_19", "LABEL_2", "LABEL_20", "LABEL_3", "LABEL_4", "LABEL_5", "LABEL_6", "LABEL_7", "LABEL_8", "LABEL_9" ]
Entry not found
15
Hinova/distilbert-base-uncased-finetuned-cola
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model_index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: cola metric: name: Matthews Correlation type: matthews_correlation value: 0.5481326292844919 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8301 - Matthews Correlation: 0.5481 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5252 | 1.0 | 535 | 0.5094 | 0.4268 | | 0.3515 | 2.0 | 1070 | 0.5040 | 0.4948 | | 0.2403 | 3.0 | 1605 | 0.5869 | 0.5449 | | 0.1731 | 4.0 | 2140 | 0.7338 | 0.5474 | | 0.1219 | 5.0 | 2675 | 0.8301 | 0.5481 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.10.3
1,991
IsaacBot/bert-base-uncased-finetuned-GP-Sentiment
[ "negative", "neutral", "positive" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: bert-base-uncased-finetuned-GP-Sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-GP-Sentiment This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7815 - F1: 0.6808 - Accuracy: 0.7390 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | No log | 1.0 | 313 | 0.7492 | 0.6448 | 0.6845 | | 0.7419 | 2.0 | 626 | 0.7281 | 0.6800 | 0.7350 | | 0.7419 | 3.0 | 939 | 0.7815 | 0.6808 | 0.7390 | | 0.5309 | 4.0 | 1252 | 0.8782 | 0.6799 | 0.7422 | | 0.336 | 5.0 | 1565 | 1.1222 | 0.6792 | 0.7390 | | 0.336 | 6.0 | 1878 | 1.1544 | 0.6671 | 0.7174 | | 0.219 | 7.0 | 2191 | 1.3721 | 0.6627 | 0.7246 | | 0.1541 | 8.0 | 2504 | 1.4864 | 0.6652 | 0.7326 | | 0.1541 | 9.0 | 2817 | 1.6475 | 0.6660 | 0.7446 | | 0.1094 | 10.0 | 3130 | 1.6749 | 0.6700 | 0.7446 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
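A short sketch that returns scores for all three classes listed for this model (negative / neutral / positive); the review text is invented, and `top_k=None` — supported in recent Transformers releases — asks the pipeline for every class score instead of only the best one:

```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="IsaacBot/bert-base-uncased-finetuned-GP-Sentiment",
    top_k=None,  # return negative, neutral and positive scores together (recent transformers versions)
)

# Invented review text.
print(sentiment("The staff were friendly enough, but the waiting time was unacceptable."))
```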
2,099
IsabellaKarabasz/roberta-base-bne-finetuned-amazon_reviews_multi
null
--- license: cc-by-4.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi model_index: - name: roberta-base-bne-finetuned-amazon_reviews_multi results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
1,293
Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2
[ "chitchat_ask_bye", "chitchat_ask_hi", "chitchat_ask_hi_de", "chitchat_ask_hi_en", "chitchat_ask_hi_fr", "chitchat_ask_hoe_gaat_het", "chitchat_ask_name", "chitchat_ask_thanks", "faq_ask_aantal_gevaccineerd", "faq_ask_aantal_gevaccineerd_wereldwijd", "faq_ask_afspraak_afzeggen", "faq_ask_afspr...
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: VaccinChatSentenceClassifierDutch_fromBERTje2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # VaccinChatSentenceClassifierDutch_fromBERTje2 This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5112 - Accuracy: 0.9004 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06 - lr_scheduler_type: linear - num_epochs: 15.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 4.1505 | 1.0 | 1320 | 3.3293 | 0.3793 | | 2.7333 | 2.0 | 2640 | 2.2295 | 0.6133 | | 2.0189 | 3.0 | 3960 | 1.5134 | 0.7587 | | 1.2504 | 4.0 | 5280 | 1.0765 | 0.8282 | | 0.7733 | 5.0 | 6600 | 0.7937 | 0.8629 | | 0.5217 | 6.0 | 7920 | 0.6438 | 0.8784 | | 0.3148 | 7.0 | 9240 | 0.5733 | 0.8857 | | 0.2067 | 8.0 | 10560 | 0.5362 | 0.8912 | | 0.1507 | 9.0 | 11880 | 0.5098 | 0.8903 | | 0.1024 | 10.0 | 13200 | 0.5078 | 0.8976 | | 0.0837 | 11.0 | 14520 | 0.5054 | 0.8967 | | 0.0608 | 12.0 | 15840 | 0.5062 | 0.8958 | | 0.0426 | 13.0 | 17160 | 0.5072 | 0.9013 | | 0.0374 | 14.0 | 18480 | 0.5110 | 0.9040 | | 0.0346 | 15.0 | 19800 | 0.5112 | 0.9004 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0 - Datasets 1.16.1 - Tokenizers 0.10.3
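The card documents training but not inference; a minimal sketch with an invented Dutch user question (the model should return one of the intent labels listed above, e.g. a `faq_ask_*` or `chitchat_ask_*` class, assuming the label mapping was saved with the checkpoint):

```python
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2",
)

# Invented Dutch question about vaccination side effects.
print(intent_classifier("Welke bijwerkingen kan ik verwachten na mijn tweede prik?"))
```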
2,272
Jeska/VaccinChatSentenceClassifierDutch_fromBERTje2_DAdialog02
[ "chitchat_ask_bye", "chitchat_ask_hi", "chitchat_ask_hi_de", "chitchat_ask_hi_en", "chitchat_ask_hi_fr", "chitchat_ask_hoe_gaat_het", "chitchat_ask_name", "chitchat_ask_thanks", "faq_ask_aantal_gevaccineerd", "faq_ask_aantal_gevaccineerd_wereldwijd", "faq_ask_afspraak_afzeggen", "faq_ask_afspr...
Entry not found
15
Jihyun22/bert-base-finetuned-nli
[ "LABEL_0", "LABEL_1", "LABEL_2" ]
--- tags: - generated_from_trainer datasets: - klue metrics: - accuracy model_index: - name: bert-base-finetuned-nli results: - task: name: Text Classification type: text-classification dataset: name: klue type: klue args: nli metric: name: Accuracy type: accuracy value: 0.756 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-finetuned-nli This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the klue dataset. It achieves the following results on the evaluation set: - Loss: 0.1357 - Accuracy: 0.756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 196 | 0.7357 | 0.156 | | No log | 2.0 | 392 | 0.5952 | 0.0993 | | 0.543 | 3.0 | 588 | 0.5630 | 0.099 | | 0.543 | 4.0 | 784 | 0.5670 | 0.079 | | 0.543 | 5.0 | 980 | 0.5795 | 0.078 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
1,784
Katsiaryna/distilbert-base-uncased-finetuned_9th
[ "LABEL_0" ]
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned_9th results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned_9th This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2826 - Accuracy: 0.4462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2357 | 1.0 | 569 | 0.2277 | 0.3474 | | 0.2237 | 2.0 | 1138 | 0.2316 | 0.3474 | | 0.1847 | 3.0 | 1707 | 0.2456 | 0.3712 | | 0.1302 | 4.0 | 2276 | 0.2763 | 0.4602 | | 0.0863 | 5.0 | 2845 | 0.2826 | 0.4462 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
1,627
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-normal
[ "LABEL_0" ]
Entry not found
15
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top1
[ "LABEL_0" ]
Entry not found
15
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3_op1
[ "LABEL_0" ]
Entry not found
15
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3_op2
[ "LABEL_0" ]
Entry not found
15
Katsiaryna/stsb-TinyBERT-L-4-finetuned_auc_151221-top3_op3
[ "LABEL_0" ]
Entry not found
15
Katsiaryna/stsb-distilroberta-base-finetuned_9th_auc_ce
[ "LABEL_0" ]
Entry not found
15
Kceilord/autonlp-tc-13522454
[ "0", "1" ]
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - Kceilord/autonlp-data-tc --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 13522454 ## Validation Metrics - Loss: 0.31450966000556946 - Accuracy: 0.8461538461538461 - Precision: 0.8181818181818182 - Recall: 0.782608695652174 - AUC: 0.9369259032455604 - F1: 0.8 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Kceilord/autonlp-tc-13522454 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Kceilord/autonlp-tc-13522454", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
1,028