{ "base_model": "google-bert/bert-base-chinese", "tree": [ { "model_id": "google-bert/bert-base-chinese", "gated": "False", "card": "---\nlanguage: zh\n---\n\n# Bert-base-chinese\n\n## Table of Contents\n- [Model Details](#model-details)\n- [Uses](#uses)\n- [Risks, Limitations and Biases](#risks-limitations-and-biases)\n- [Training](#training)\n- [Evaluation](#evaluation)\n- [How to Get Started With the Model](#how-to-get-started-with-the-model)\n\n\n## Model Details\n\n### Model Description\n\nThis model has been pre-trained for Chinese, training and random input masking has been applied independently to word pieces (as in the original BERT paper).\n\n- **Developed by:** HuggingFace team\n- **Model Type:** Fill-Mask\n- **Language(s):** Chinese\n- **License:** [More Information needed]\n- **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.\n\n### Model Sources\n- **Paper:** [BERT](https://arxiv.org/abs/1810.04805)\n\n## Uses\n\n#### Direct Use\n\nThis model can be used for masked language modeling \n\n\n\n## Risks, Limitations and Biases\n**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).\n\n\n## Training\n\n#### Training Procedure\n* **type_vocab_size:** 2\n* **vocab_size:** 21128\n* **num_hidden_layers:** 12\n\n#### Training Data\n[More Information Needed]\n\n## Evaluation\n\n#### Results\n\n[More Information Needed]\n\n\n## How to Get Started With the Model\n```python\nfrom transformers import AutoTokenizer, AutoModelForMaskedLM\n\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-chinese\")\n\nmodel = AutoModelForMaskedLM.from_pretrained(\"bert-base-chinese\")\n\n```\n\n\n\n\n\n", "metadata": "\"N/A\"", "depth": 0, "children": [ "jackietung/bert-base-chinese-finetuned-sentiment", "AIYIYA/my_aa", "AIYIYA/my_1", "AIYIYA/my_12", "hw2942/bert-base-chinese-finetuning-wallstreetcn-morning-news-market-overview-open-000001SH-v1", "hw2942/bert-base-chinese-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1", "AIYIYA/my_wr", "AIYIYA/my_wr1", "AIYIYA/my_wr2", "AIYIYA/my_wr3", "yyyy1992/my_disflu_chinese_model", "hw2942/bert-base-chinese-SSEC", "Hzmin9/my_awesome_model", "indiejoseph/bert-base-cantonese", "AIYIYA/my_html2", "AIYIYA/my_html3", "hw2942/bert-base-chinese-SSE50", "RtwC/berttest2", "HansOMEL/MultiChoise-bert-base-chinese-Hw1", "rylai88/bert_base_chinese_baidu_fintune", "HansOMEL/QA-bert-base-chinese-Hw1", "xjlulu/ntu_adl_paragraph_selection_model", "xjlulu/ntu_adl_span_selection_bert", "AIYIYA/my_dl_t", "AIYIYA/my_dl_1", "AIYIYA/my_dl_2", "piecake/model_1", "piecake/model_2", "ThuyNT03/CS431_Car-COQE_CSI", "AIYIYA/my_ti_new1", "ThuyNT03/CS431_Ele-COQE_CSI", "AIYIYA/my_ti_new2", "BrianHsu/Bert_QA_multiple_choice", "BrianHsu/BERT_test_graident_accumulation", "BrianHsu/BERT_test_graident_accumulation_test2", "BrianHsu/BERT_test_graident_accumulation_test3", "BrianHsu/BERT_test_graident_accumulation_test4", "AIYIYA/my_new_inputs", "AIYIYA/my_new_inputs1", "AIYIYA/my_new_login", "AIYIYA/my_new_login1", "AIYIYA/my_new_login2", "AIYIYA/my_new_login3", "AIYIYA/my_new_login4", "AIYIYA/my_new_inp1", "AIYIYA/my_new_in2", "AIYIYA/my_new_in3", 
"Ghunghru/Misinformation-Covid-bert-base-chinese", "Ghunghru/Misinformation-Covid-LowLearningRatebert-base-chinese", "chriswu88/bert_ner_model", "wzChen/my_awesome_model_text_cls", "H336104/NERBorder", "Yangkt/test-trainer", "sanxialiuzhan/bert-base-chinese-ner", "karinegabsschon/classifier_adapter", "Extrabass/test_trainer", "Extrabass/checkpoint", "lynn610/bert-finetuned-ner", "thanhtctv/results", "bibibobo777/my_awesome_bert_qa_model", "Mattis0525/bert-base-chinese-finetuned-imdb", "Mattis0525/bert-base-chinese-finetuned-tcfd", "imagine0711/bert-base-chinese-finetuned-tcfd", "Welsey/overlaying", "ivanxia1988/bert_tnew_cls", "hw2942/bert-base-chinese-climate-related-prediction-1", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-1", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-1", "hw2942/bert-base-chinese-climate-related-prediction-v1", "hw2942/bert-base-chinese-climate-related-prediction-v2", "hw2942/bert-base-chinese-climate-related-prediction-v3", "hw2942/bert-base-chinese-climate-related-prediction-v4", "hw2942/bert-base-chinese-climate-related-prediction-v5", "hw2942/bert-base-chinese-climate-related-prediction-v6", "wsqstar/bert-finetuned-weibo-luobokuaipao", "hw2942/bert-base-chinese-climate-related-prediction-vv1", "hw2942/bert-base-chinese-climate-related-prediction-vv2", "hw2942/bert-base-chinese-climate-related-prediction-vv3", "hw2942/bert-base-chinese-climate-related-prediction-2", "hw2942/bert-base-chinese-climate-related-prediction-3", "hw2942/bert-base-chinese-climate-related-prediction-4", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v1", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v2", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v3", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v4", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv1", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv2", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv3", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv4", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-2", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-3", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-4", "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-5", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v1", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v2", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v3", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v4", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v5", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v6", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v7", "wsqstar/GISchat-weibo-100k-fine-tuned-bert", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-2", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-3", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-4", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-5", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-6", "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-7", "track-AJ/GISchat-weibo-100k-fine-tuned-bert", "kaishih/bert-tzh-med-ner", "b10401015/hw1-bert-base-chinese-finetuned-1", 
"b10401015/hw1-1-multiple_choice-bert-base-chinese-finetuned", "b10401015/hw1-1-question_answering-bert-base-chinese-finetuned", "bibibobo777/ExampleModel", "b10401015/hw1-2-multiple_choice-bert-base-chinese-finetuned", "b10401015/hw1-2-question_answering-bert-base-chinese-finetuned", "b10401015/hw1-3-question_answering-bert-base-chinese-finetuned", "b10401015/hw1-4-question_answering-bert-base-chinese-finetuned", "riiwang/lr_3e-05_batch_2_epoch_1_model_span_selector", "riiwang/lr_3e-05_batch_2_epoch_3_model_span_selector", "b10401015/hw1-3-multiple_choice-bert-base-chinese-finetuned", "riiwang/lr_3e-05_batch_2_epoch_5_model_span_selector", "riiwang/lr_0.0003_batch_2_epoch_3_model_span_selector", "riiwang/lr_5e-05_batch_8_epoch_3_model_span_selector", "riiwang/lr_5e-05_batch_8_epoch_5_model_span_selector", "riiwang/lr_3e-06_batch_4_epoch_3_model_span_selector", "b09501048/adl_hw1_multi_choice_model", "frett/chinese_extract_bert", "jazzson/bert-base-chinese-finetuned-paragraph_extraction-2", "jazzson/bert-base-chinese-finetuned-question-answering-4", "jazzson/bert-base-chinese-finetuned-question-answering-6", "jazzson/bert-base-chinese-finetuned-question-answering-8", "jazzson/bert-base-chinese-finetuned-question-answering-retrain1", "smlhd/bert_cn_finetuning", "frett/chinese_extract_bert_scratch", "jazzson/bert-base-chinese-finetuned-paragraph_extraction-retrain3", "scfengv/TVL_GameLayerClassifier", "missingstuffedbun/test_20241030080931", "missingstuffedbun/test_20241030100037", "linxiaoming/chinese-sentiment-model", "PassbyGrocer/bert-ner-msra", "PassbyGrocer/bert-ner-weibo", "calvinobai/chinese-sentiment-model", "sky1223/chinese-sentiment-model", "marsyao/chinese-sentiment-model", "PassbyGrocer/bert_crf-ner-weibo", "PassbyGrocer/bert_bilstm_crf-ner-weibo", "PassbyGrocer/bert_bilstm_dst_crf-ner-weibo", "missingstuffedbun/test_20241111084845", "real-jiakai/bert-base-chinese-finetuned-cmrc2018", "real-jiakai/bert-base-chinese-finetuned-squadv2", "Xubqpanda/LegalDuet", "Chengfengke/herbert", "wsqstar/weibo-model-4tags", "akirazh/bilibili-bullet-comment-classify-model", "Vrepol/bert-base-chinese-finetuned-imdb", "wjwhhh/BertSentiment", "Macropodus/bert4csc_v1", "sercetexam9/bert-base-chinese-chn-finetuned-augmentation-LUNAR", "AnonymousCS/populism_model012", "roberthsu2003/models_for_ner", "roberthsu2003/models_for_qa_cut", "jackietung/bert-base-chinese-finetuned-multi-classification", "jinchenliuljc/ecom_ner_model", "hsincho/bert_propaganda_shanghai", "zzz16/Public-analysis", "jinchenliuljc/ecommerce-sentiment-analysis", "roberthsu2003/models_for_qa_slide", "roberthsu2003/for_classification", "tiya0825/MBTI-ScoreModel2.0", "colourrain/bert_cn_sst", "roberthsu2003/for_multiple_choice", "roberthsu2003/sentence_similarity", "KingLear/Philosophy_google-bert-base-chinese", "Nice2meetuwu/Bert-Base-Chinese-for-stock", "luohuashijieyoufengjun/ner_based_bert-base-chinese", "li1212/bert-base-chinese-finetuned-moviereviews-mask-tf", "left0ver/bert-base-chinese-finetune-sentiment-classification", "ZON8955/NER_demo", "luohuashijieyoufengjun/ner_based_bert-base-chinese-only-phone", "luohuashijieyoufengjun/ner_based_bert-base-chinese-only-phone1", "Xiaoxi2333/bert_multilabel_chinese", "lili0324/bert-base-chinese-finetuned-imdb-shanghai", "luohuashijieyoufengjun/ner_based_bert-base-chinese_badcase1" ], "children_count": 183, "adapters": [ "scfengv/TVL_GeneralLayerClassifier" ], "adapters_count": 1, "quantized": [ "Xenova/bert-base-chinese" ], "quantized_count": 1, "merges": [], "merges_count": 0, 
"total_derivatives": 185, "spaces": [], "spaces_count": 0, "parents": [], "base_model": "google-bert/bert-base-chinese", "base_model_relation": "base" }, { "model_id": "jackietung/bert-base-chinese-finetuned-sentiment", "gated": "False", "card": "---\nlanguage: zh\nlicense: mit\ntags:\n - bert\n - sentiment-analysis\n - chinese\n - customer feedback\n - app reviews\ndatasets:\n- custom\nmetrics:\n - accuracy\n - f1\npipeline_tag: text-classification\nwidget:\n - text: \u5546\u54c1\u641c\u5c0b\u9ad4\u9a57\u5f88\u597d\n - text: \u7121\u6cd5\u767b\u5165\u6703\u54e1\u5e33\u865f\n - text: \u7d50\u5e33\u6642\u7cfb\u7d71\u51fa\u932f\nbase_model:\n - google-bert/bert-base-chinese\nlibrary_name: transformers\n---\n\n# BERT \u4e2d\u6587\u60c5\u611f\u5206\u6790\u6a21\u578b\n\n\u9019\u662f\u4e00\u500b\u57fa\u65bc BERT \u7684\u4e2d\u6587\u60c5\u611f\u5206\u6790\u6a21\u578b\uff0c\u53ef\u7528\u65bc\u5224\u65b7\u6587\u672c\u7684\u60c5\u611f\u50be\u5411\uff08\u6b63\u9762\u3001\u8ca0\u9762\u6216\u4e2d\u6027\uff09\u3002\n\n## \u6a21\u578b\u63cf\u8ff0\n\n- \u6a21\u578b\u57fa\u65bc bert-base-chinese \u5fae\u8abf\n- \u9069\u7528\u65bcApp\u4e2d\u6587\u8a55\u8ad6\u7684\u60c5\u611f\u5206\u6790\n- \u8f38\u51fa\u6a19\u7c64\uff1a0\uff08\u8ca0\u9762\uff09\uff0c1\uff08\u6b63\u9762\uff09\uff0c2\uff08\u4e2d\u6027\uff09\n- \u4f7f\u7528 Focal Loss \u8a13\u7df4\uff0c\u4ee5\u8655\u7406\u985e\u5225\u4e0d\u5e73\u8861\u554f\u984c\n\n## \u4f7f\u7528\u65b9\u6cd5\n\n```python\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer\nimport torch\n\n# \u8f09\u5165\u6a21\u578b\u548c\u5206\u8a5e\u5668\nmodel = AutoModelForSequenceClassification.from_pretrained(\"jackietung/bert-base-chinese-sentiment-finetuned\")\ntokenizer = AutoTokenizer.from_pretrained(\"jackietung/bert-base-chinese-sentiment-finetuned\")\n\n# \u6e96\u5099\u8f38\u5165\ntext = \"\u9019\u500bApp\u4f7f\u7528\u9ad4\u9a57\u5f88\u5dee\uff01\"\ninputs = tokenizer(text, return_tensors=\"pt\")\n\n# \u9032\u884c\u9810\u6e2c\nwith torch.no_grad():\n outputs = model(**inputs)\n predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)\n \n # \u7372\u53d6\u9810\u6e2c\u7d50\u679c\n label_names = [\"\u8ca0\u9762\", \"\u6b63\u9762\", \"\u4e2d\u6027\"]\n predicted_class = torch.argmax(predictions, dim=1).item()\n \n print(f\"\u9810\u6e2c\u985e\u5225: {label_names[predicted_class]}\")\n print(f\"\u9810\u6e2c\u5206\u6578: {predictions[0][predicted_class].item():.4f}\")\n \n # \u986f\u793a\u6240\u6709\u985e\u5225\u7684\u5206\u6578\n for i, label in enumerate(label_names):\n print(f\"{label} \u5206\u6578: {predictions[0][i].item():.4f}\")", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "jackietung/bert-base-chinese-finetuned-sentiment", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_aa", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_aa\n results: []\n---\n\n\n\n# AIYIYA/my_aa\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.7596\n- Validation Loss: 1.4913\n- Train Accuracy: 0.6753\n- Epoch: 29\n\n## Model description\n\nMore information needed\n\n## Intended 
uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 280, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 3.4316 | 3.2876 | 0.2078 | 0 |\n| 3.0452 | 3.0083 | 0.2338 | 1 |\n| 2.6954 | 2.7106 | 0.3766 | 2 |\n| 2.3815 | 2.4910 | 0.4935 | 3 |\n| 2.0499 | 2.3035 | 0.5584 | 4 |\n| 1.8322 | 2.1419 | 0.5844 | 5 |\n| 1.6292 | 1.9997 | 0.6104 | 6 |\n| 1.4675 | 1.8933 | 0.6234 | 7 |\n| 1.3115 | 1.8016 | 0.5974 | 8 |\n| 1.2088 | 1.7273 | 0.6364 | 9 |\n| 1.1053 | 1.6728 | 0.6623 | 10 |\n| 1.0254 | 1.6284 | 0.6364 | 11 |\n| 0.9600 | 1.6252 | 0.6494 | 12 |\n| 0.9058 | 1.5662 | 0.6623 | 13 |\n| 0.8675 | 1.5423 | 0.6623 | 14 |\n| 0.8434 | 1.5208 | 0.6753 | 15 |\n| 0.8356 | 1.5140 | 0.6753 | 16 |\n| 0.8070 | 1.5024 | 0.6753 | 17 |\n| 0.7749 | 1.4941 | 0.6753 | 18 |\n| 0.7805 | 1.4913 | 0.6753 | 19 |\n| 0.7764 | 1.4913 | 0.6753 | 20 |\n| 0.7630 | 1.4913 | 0.6753 | 21 |\n| 0.7806 | 1.4913 | 0.6753 | 22 |\n| 0.7665 | 1.4913 | 0.6753 | 23 |\n| 0.7803 | 1.4913 | 0.6753 | 24 |\n| 0.7778 | 1.4913 | 0.6753 | 25 |\n| 0.7781 | 1.4913 | 0.6753 | 26 |\n| 0.7798 | 1.4913 | 0.6753 | 27 |\n| 0.7845 | 1.4913 | 0.6753 | 28 |\n| 0.7596 | 1.4913 | 0.6753 | 29 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- TensorFlow 2.12.0\n- Datasets 2.13.1\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_aa", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_1\n results: []\n---\n\n\n\n# AIYIYA/my_1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 1.1600\n- Validation Loss: 1.4880\n- Train Accuracy: 0.7195\n- Epoch: 7\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 300, 'end_learning_rate': 0.0, 
'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 3.3536 | 3.0356 | 0.2195 | 0 |\n| 2.8571 | 2.6364 | 0.3902 | 1 |\n| 2.4461 | 2.2839 | 0.4634 | 2 |\n| 2.0491 | 2.0340 | 0.5122 | 3 |\n| 1.7890 | 1.7980 | 0.6463 | 4 |\n| 1.5356 | 1.6520 | 0.6951 | 5 |\n| 1.3215 | 1.5640 | 0.7195 | 6 |\n| 1.1600 | 1.4880 | 0.7195 | 7 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- TensorFlow 2.12.0\n- Datasets 2.13.1\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_1", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_12", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_12\n results: []\n---\n\n\n\n# AIYIYA/my_12\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.5441\n- Validation Loss: 1.0817\n- Train Accuracy: 0.7799\n- Epoch: 11\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 580, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 3.3278 | 2.9680 | 0.3208 | 0 |\n| 2.7007 | 2.5022 | 0.4654 | 1 |\n| 2.1853 | 2.0269 | 0.5597 | 2 |\n| 1.7380 | 1.7066 | 0.6352 | 3 |\n| 1.4422 | 1.5095 | 0.6855 | 4 |\n| 1.1789 | 1.3789 | 0.7484 | 5 |\n| 1.0105 | 1.3038 | 0.7484 | 6 |\n| 0.8728 | 1.2295 | 0.7484 | 7 |\n| 0.7790 | 1.1804 | 0.7484 | 8 |\n| 0.6699 | 1.1553 | 0.7673 | 9 |\n| 0.6131 | 1.1061 | 0.7673 | 10 |\n| 0.5441 | 1.0817 | 0.7799 | 11 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- TensorFlow 2.12.0\n- Datasets 2.13.1\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_12", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-finetuning-wallstreetcn-morning-news-market-overview-open-000001SH-v1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\n- 
finance\nmetrics:\n- accuracy\nmodel-index:\n- name: >-\n bert-base-chinese-finetuning-wallstreetcn-morning-news-market-overview-open-000001SH-v1\n results: []\nlanguage:\n- zh\nwidget:\n - text: >-\n \u60e0\u8a89\u4e0b\u8c03\u7f8e\u56fd3A\u4e3b\u6743\u4fe1\u7528\u8bc4\u7ea7\u6b21\u65e5\uff0c\u7ecf\u6d4e\u5b66\u5bb6\u770b\u8f7b\u8bc4\u7ea7\u4e0b\u8c03\u5f71\u54cd\uff0c\u7f8e\u56fd7\u6708ADP\u65b0\u589e\u5c31\u4e1a\u8d85\u9884\u671f\u7206\u8868\u3002\u98ce\u9669\u60c5\u7eea\u88ab\u91cd\u521b\uff0c\u6807\u666e\u3001\u9053\u6307\u3001\u5c0f\u76d8\u80a1\u9f50\u8dcc\u7ea61%\uff0c\u7eb3\u6307\u8dcc\u8d852%\u521b2\u6708\u4ee5\u6765\u6700\u5dee\u3002\n \u7f8e\u56fd\u8d85\u5bfc\u8dcc\u8fd129%\u3002\u7f8e\u503a\u53d1\u884c\u6d77\u5578\u5373\u5c06\u6765\u88ad\uff0c10\u5e74\u671f\u7f8e\u503a\u6536\u76ca\u7387\u4e00\u5ea6\u521b\u4e5d\u4e2a\u6708\u65b0\u9ad8\uff0c\u4e24\u5e74\u671f\u7f8e\u503a\u6536\u76ca\u7387\u8dcc\u5e45\u663e\u8457\u6536\u7a84\u3002\u7f8e\u5143\u8f6c\u6da8\u5237\u65b0\u4e09\u5468\u534a\u9ad8\u4f4d\u3002\n \u5546\u54c1\u666e\u8dcc\u3002\u6cb9\u4ef7\u8dcc\u8d852%\uff0c\u7f8e\u6cb9\u8dcc\u7a7f80\u7f8e\u5143\u6574\u6570\u4f4d\u3002\u9ec4\u91d1\u5931\u5b881940\u7f8e\u5143\u81f3\u4e09\u5468\u65b0\u4f4e\u3002\n \u4e2d\u56fd\u5e02\u573a\u65b9\u9762\uff0c\u7f8e\u80a1\u65f6\u6bb5\uff0c\u4e2d\u6982\u80a1\u6307\u8dcc4%\uff0c\u7406\u60f3\u6c7d\u8f66\u5219\u518d\u521b\u5386\u53f2\u65b0\u9ad8\uff0c\u79bb\u5cb8\u4eba\u6c11\u5e01\u4e00\u5ea6\u8dcc\u7a7f7.21\u5143\uff0c\u6700\u6df1\u8dcc270\u70b9\u81f3\u4e00\u5468\u4f4e\u4f4d\u3002\u6caa\u6307\u6536\u8dcc\u8fd11%\uff0c\u533b\u836f\u3001\u94f6\u884c\u75b2\u8f6f\uff0c\u8d85\u5bfc\u6982\u5ff5\u3001\u5730\u4ea7\u3001\u5238\u5546\u5f3a\u52bf\u3002\u6052\u6307\u6536\u8dcc2.47%\uff0c\u5357\u5411\u8d44\u91d1\u51c0\u6d41\u51654.02\u4ebf\u6e2f\u5143\u3002\n---\n\n\n\n# bert-base-chinese-finetuning-wallstreetcn-morning-news-market-overview-open-000001SH-v1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the dataset of Wallstreetcn Morning News Market Overview with overnight index (000001.SH) movement labels.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6601119637489319\n- Accuracy: 0.7241379310344828\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 38 | 0.6936 | 0.4483 |\n| No log | 2.0 | 76 | 0.6850 | 0.5862 |\n| No log | 3.0 | 114 | 0.6977 | 0.5862 |\n| No log | 4.0 | 152 | 0.6579 | 0.6207 |\n| No log | 5.0 | 190 | 0.7235 | 0.4483 |\n| No log | 6.0 | 228 | 0.6601 | 0.7241 |\n| No log | 7.0 | 266 | 0.6510 | 0.6897 |\n| No log | 8.0 | 304 | 0.7066 | 0.7241 |\n| No log | 9.0 | 342 | 0.8716 | 0.6552 |\n| No log | 10.0 | 380 | 0.8149 | 0.6207 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- Pytorch 2.0.1+cu118\n- Datasets 2.14.2\n- Tokenizers 0.13.3", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], 
"quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-finetuning-wallstreetcn-morning-news-market-overview-open-000001SH-v1", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- f1\nmodel-index:\n- name: bert-base-chinese-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1\n results: []\n---\n\n\n\n# bert-base-chinese-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.3043\n- F1: 0.4167\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | F1 |\n|:-------------:|:-----:|:----:|:---------------:|:------:|\n| No log | 1.0 | 38 | 0.6797 | 0.0 |\n| No log | 2.0 | 76 | 0.6726 | 0.1538 |\n| No log | 3.0 | 114 | 0.6660 | 0.6154 |\n| No log | 4.0 | 152 | 0.7310 | 0.4545 |\n| No log | 5.0 | 190 | 0.8288 | 0.5926 |\n| No log | 6.0 | 228 | 0.9843 | 0.4545 |\n| No log | 7.0 | 266 | 1.4159 | 0.4545 |\n| No log | 8.0 | 304 | 1.9705 | 0.4348 |\n| No log | 9.0 | 342 | 2.2006 | 0.4167 |\n| No log | 10.0 | 380 | 2.3043 | 0.4167 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- Pytorch 2.0.1+cu118\n- Datasets 2.14.3\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-wallstreetcn-morning-news-market-overview-open-SSEC-f1-v1", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_wr", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_wr\n results: []\n---\n\n\n\n# AIYIYA/my_wr\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 1.3017\n- Validation Loss: 1.2447\n- Train Accuracy: 0.7895\n- Epoch: 7\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 
'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 120, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 2.8885 | 2.6740 | 0.1316 | 0 |\n| 2.5028 | 2.3158 | 0.4737 | 1 |\n| 2.2462 | 2.0331 | 0.6579 | 2 |\n| 1.9850 | 1.7608 | 0.7632 | 3 |\n| 1.7761 | 1.6215 | 0.7632 | 4 |\n| 1.6159 | 1.4274 | 0.7895 | 5 |\n| 1.3905 | 1.3232 | 0.7895 | 6 |\n| 1.3017 | 1.2447 | 0.7895 | 7 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- TensorFlow 2.12.0\n- Datasets 2.14.4\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_wr", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_wr1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_wr1\n results: []\n---\n\n\n\n# AIYIYA/my_wr1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 2.2336\n- Validation Loss: 1.9643\n- Train Accuracy: 0.5\n- Epoch: 3\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 2.9423 | 2.5705 | 0.1842 | 0 |\n| 2.6021 | 2.2725 | 0.4474 | 1 |\n| 2.3113 | 2.0867 | 0.4737 | 2 |\n| 2.2336 | 1.9643 | 0.5 | 3 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- TensorFlow 2.12.0\n- Datasets 2.14.4\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_wr1", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_wr2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_wr2\n results: []\n---\n\n\n\n# AIYIYA/my_wr2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on 
the evaluation set:\n- Train Loss: 2.1208\n- Validation Loss: 2.1831\n- Train Accuracy: 0.4737\n- Epoch: 6\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 2.8776 | 2.6967 | 0.0526 | 0 |\n| 2.5610 | 2.4772 | 0.3158 | 1 |\n| 2.4059 | 2.3114 | 0.4474 | 2 |\n| 2.2749 | 2.2041 | 0.4474 | 3 |\n| 2.1581 | 2.1831 | 0.4737 | 4 |\n| 2.1664 | 2.1831 | 0.4737 | 5 |\n| 2.1208 | 2.1831 | 0.4737 | 6 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- TensorFlow 2.12.0\n- Datasets 2.14.4\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_wr2", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_wr3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_wr3\n results: []\n---\n\n\n\n# AIYIYA/my_wr3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 1.1315\n- Validation Loss: 1.1418\n- Train Accuracy: 0.8158\n- Epoch: 14\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 90, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 3.0206 | 2.6776 | 0.2895 | 0 |\n| 2.6896 | 2.4286 | 0.7105 | 1 |\n| 2.4102 | 2.1955 | 0.6579 | 2 |\n| 2.1850 | 1.9989 | 0.7368 | 3 |\n| 1.9867 | 1.8181 | 0.6842 | 4 |\n| 1.8059 | 1.6320 | 0.7368 | 5 |\n| 1.5830 | 1.5359 | 0.8158 | 6 |\n| 1.5184 | 1.4081 | 0.7895 | 7 |\n| 1.4472 | 1.3072 | 0.8421 | 8 |\n| 1.3197 | 1.2605 | 0.8158 | 9 
|\n| 1.2258 | 1.2182 | 0.8158 | 10 |\n| 1.2182 | 1.1752 | 0.8158 | 11 |\n| 1.1015 | 1.1583 | 0.8158 | 12 |\n| 1.1387 | 1.1463 | 0.8158 | 13 |\n| 1.1315 | 1.1418 | 0.8158 | 14 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- TensorFlow 2.12.0\n- Datasets 2.14.4\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_wr3", "base_model_relation": "base" }, { "model_id": "yyyy1992/my_disflu_chinese_model", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: my_disflu_chinese_model\n results: []\n---\n\n\n\n# my_disflu_chinese_model\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2753\n- Accuracy: 0.9154\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 278 | 0.2357 | 0.9100 |\n| 0.258 | 2.0 | 556 | 0.2753 | 0.9154 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- Pytorch 2.0.1\n- Datasets 2.11.0\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "yyyy1992/my_disflu_chinese_model", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-SSEC", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: bert-base-chinese-wallstreetcn-morning-news-market-overview-SSEC-v3\n results: []\n---\n\n\n\n# bert-base-chinese-wallstreetcn-morning-news-market-overview-SSEC-v3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 3.1007\n- Accuracy: 0.6875\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 34 | 
2.2173 | 0.7188 |\n| No log | 2.0 | 68 | 1.8368 | 0.7188 |\n| No log | 3.0 | 102 | 2.7822 | 0.625 |\n| No log | 4.0 | 136 | 2.3597 | 0.7188 |\n| No log | 5.0 | 170 | 3.3032 | 0.5312 |\n| No log | 6.0 | 204 | 2.9527 | 0.6562 |\n| No log | 7.0 | 238 | 2.7575 | 0.6875 |\n| No log | 8.0 | 272 | 2.9714 | 0.6875 |\n| No log | 9.0 | 306 | 3.0941 | 0.6875 |\n| No log | 10.0 | 340 | 3.1007 | 0.6875 |\n\n\n### Framework versions\n\n- Transformers 4.31.0\n- Pytorch 2.0.1+cu118\n- Datasets 2.14.4\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-SSEC", "base_model_relation": "base" }, { "model_id": "Hzmin9/my_awesome_model", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: Hzmin9/my_awesome_model\n results: []\n---\n\n\n\n# Hzmin9/my_awesome_model\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.1928\n- Train Accuracy: 0.6725\n- Validation Loss: 1.3273\n- Validation Accuracy: 0.6725\n- Epoch: 9\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2250, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |\n|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|\n| 2.0541 | 0.595 | 1.5183 | 0.5950 | 0 |\n| 1.3021 | 0.6125 | 1.2977 | 0.6125 | 1 |\n| 0.9285 | 0.6625 | 1.2059 | 0.6625 | 2 |\n| 0.7071 | 0.6625 | 1.1796 | 0.6625 | 3 |\n| 0.5354 | 0.6525 | 1.2179 | 0.6525 | 4 |\n| 0.4165 | 0.6825 | 1.1801 | 0.6825 | 5 |\n| 0.3302 | 0.6675 | 1.3224 | 0.6675 | 6 |\n| 0.2655 | 0.6725 | 1.3056 | 0.6725 | 7 |\n| 0.2195 | 0.6675 | 1.3366 | 0.6675 | 8 |\n| 0.1928 | 0.6725 | 1.3273 | 0.6725 | 9 |\n\n\n### Framework versions\n\n- Transformers 4.33.0\n- TensorFlow 2.13.0\n- Datasets 2.14.4\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Hzmin9/my_awesome_model", "base_model_relation": "base" }, { "model_id": "indiejoseph/bert-base-cantonese", "gated": "False", "card": "---\nlanguage:\n- yue\nlicense: cc-by-4.0\ntags:\n- generated_from_trainer\nbase_model: bert-base-chinese\npipeline_tag: fill-mask\nwidget:\n- text: \u9999\u6e2f\u539f\u672c[MASK]\u4e00\u500b\u4eba\u7159\u7a00\u5c11\u5605\u6f01\u6e2f\u3002\n example_title: 
\u4fc2\nmodel-index:\n- name: bert-base-cantonese\n results: []\n---\n\n\n\n# bert-base-cantonese\n\nThis model is a continued pre-training of bert-base-chinese on a Cantonese Common Crawl dataset of 198M tokens.\n\n## Model description\n\nThe vocabulary has been extended with 500 additional Chinese characters that are very common in Cantonese, such as \u51a7, \u5649, \u9eaa, \u7b2a, \u519a, \u4e78, etc.\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 24\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 192\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1.0\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.35.0.dev0\n- Pytorch 2.1.1+cu121\n- Datasets 2.14.6\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [ "AlienKevin/bert_base_cantonese_pos_hkcancor", "hon9kon9ize/bert-base-cantonese" ], "children_count": 2, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 2, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "indiejoseph/bert-base-cantonese", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_html2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_html2\n results: []\n---\n\n\n\n# AIYIYA/my_html2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.1581\n- Train Accuracy: 0.9835\n- Validation Loss: 0.1561\n- Validation Accuracy: 1.0\n- Epoch: 2\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 24, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |\n|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|\n| 0.3969 | 0.9339 | 0.2428 | 0.9512 | 0 |\n| 0.1840 | 0.9835 | 0.1561 | 1.0 | 1 |\n| 0.1581 | 0.9835 | 0.1561 | 1.0 | 2 |\n\n\n### Framework versions\n\n- Transformers 4.33.2\n- TensorFlow 2.13.0\n- Datasets 2.14.5\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, 
"spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_html2", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_html3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_html3\n results: []\n---\n\n\n\n# AIYIYA/my_html3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.1064\n- Train Accuracy: 1.0\n- Validation Loss: 0.1251\n- Validation Accuracy: 0.9804\n- Epoch: 2\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 30, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |\n|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|\n| 0.8291 | 0.7386 | 0.2926 | 0.9804 | 0 |\n| 0.2239 | 0.9804 | 0.1478 | 0.9804 | 1 |\n| 0.1064 | 1.0 | 0.1251 | 0.9804 | 2 |\n\n\n### Framework versions\n\n- Transformers 4.33.2\n- TensorFlow 2.13.0\n- Datasets 2.14.5\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [ "AIYIYA/my_html4" ], "children_count": 1, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_html3", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-SSE50", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: bert-base-chinese-wallstreetcn-morning-news-market-overview-SSE50-v1\n results: []\n---\n\n\n\n# bert-base-chinese-wallstreetcn-morning-news-market-overview-SSE50-v1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7025\n- Accuracy: 0.7879\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| 
No log | 1.0 | 34 | 0.7253 | 0.3939 |\n| No log | 2.0 | 68 | 0.6520 | 0.6061 |\n| No log | 3.0 | 102 | 0.6079 | 0.6970 |\n| No log | 4.0 | 136 | 0.5872 | 0.6667 |\n| No log | 5.0 | 170 | 0.4618 | 0.7879 |\n| No log | 6.0 | 204 | 0.4237 | 0.7879 |\n| No log | 7.0 | 238 | 0.6489 | 0.6667 |\n| No log | 8.0 | 272 | 0.5943 | 0.8182 |\n| No log | 9.0 | 306 | 0.7921 | 0.7879 |\n| No log | 10.0 | 340 | 0.7025 | 0.7879 |\n\n\n### Framework versions\n\n- Transformers 4.33.2\n- Pytorch 2.0.1+cu118\n- Datasets 2.14.5\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-SSE50", "base_model_relation": "base" }, { "model_id": "RtwC/berttest2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: berttest2\n results: []\n---\n\n\n\n# berttest2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0206\n- Precision: 0.9610\n- Recall: 0.9653\n- F1: 0.9631\n- Accuracy: 0.9956\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| 0.028 | 1.0 | 2609 | 0.0225 | 0.9385 | 0.9350 | 0.9368 | 0.9932 |\n| 0.011 | 2.0 | 5218 | 0.0182 | 0.9542 | 0.9592 | 0.9567 | 0.9951 |\n| 0.0044 | 3.0 | 7827 | 0.0206 | 0.9610 | 0.9653 | 0.9631 | 0.9956 |\n\n\n### Framework versions\n\n- Transformers 4.34.0\n- Pytorch 2.0.1+cu118\n- Datasets 2.14.5\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "RtwC/berttest2", "base_model_relation": "base" }, { "model_id": "HansOMEL/MultiChoise-bert-base-chinese-Hw1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: MultiChoise-bert-base-chinese-Hw1\n results: []\n---\n\n\n\n# MultiChoise-bert-base-chinese-Hw1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2068\n- Accuracy: 0.9581\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following 
hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:-----:|:---------------:|:--------:|\n| 0.2298 | 1.0 | 10857 | 0.2068 | 0.9581 |\n\n\n### Framework versions\n\n- Transformers 4.34.0\n- Pytorch 2.0.1+cu118\n- Datasets 2.14.5\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "HansOMEL/MultiChoise-bert-base-chinese-Hw1", "base_model_relation": "base" }, { "model_id": "rylai88/bert_base_chinese_baidu_fintune", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert_base_chinese_baidu_fintune\n results: []\n---\n\n\n\n# bert_base_chinese_baidu_fintune\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.9134\n- Mse: 2.9134\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Mse |\n|:-------------:|:-----:|:-----:|:---------------:|:------:|\n| 4.2891 | 0.0 | 50 | 4.0176 | 4.0175 |\n| 4.0197 | 0.01 | 100 | 3.7198 | 3.7198 |\n| 3.9953 | 0.01 | 150 | 3.7285 | 3.7284 |\n| 3.6974 | 0.01 | 200 | 4.2507 | 4.2507 |\n| 3.6884 | 0.02 | 250 | 3.6316 | 3.6315 |\n| 3.5951 | 0.02 | 300 | 3.6354 | 3.6354 |\n| 3.5379 | 0.03 | 350 | 3.5250 | 3.5250 |\n| 3.6804 | 0.03 | 400 | 3.4025 | 3.4025 |\n| 3.3788 | 0.03 | 450 | 3.6585 | 3.6585 |\n| 3.864 | 0.04 | 500 | 3.4324 | 3.4324 |\n| 3.5062 | 0.04 | 550 | 3.3671 | 3.3671 |\n| 3.478 | 0.04 | 600 | 3.5055 | 3.5055 |\n| 3.3894 | 0.05 | 650 | 3.3347 | 3.3347 |\n| 3.3577 | 0.05 | 700 | 3.3462 | 3.3462 |\n| 3.5431 | 0.05 | 750 | 3.5167 | 3.5167 |\n| 3.421 | 0.06 | 800 | 3.2970 | 3.2970 |\n| 3.407 | 0.06 | 850 | 3.3696 | 3.3695 |\n| 3.4202 | 0.06 | 900 | 3.3125 | 3.3125 |\n| 3.5096 | 0.07 | 950 | 3.4387 | 3.4387 |\n| 3.4338 | 0.07 | 1000 | 3.5653 | 3.5653 |\n| 3.6507 | 0.08 | 1050 | 3.6666 | 3.6667 |\n| 3.3724 | 0.08 | 1100 | 3.3731 | 3.3731 |\n| 3.7244 | 0.08 | 1150 | 3.3666 | 3.3666 |\n| 3.3777 | 0.09 | 1200 | 3.6397 | 3.6397 |\n| 3.583 | 0.09 | 1250 | 3.3781 | 3.3780 |\n| 3.2942 | 0.09 | 1300 | 3.3208 | 3.3207 |\n| 3.4335 | 0.1 | 1350 | 3.3797 | 3.3797 |\n| 3.2721 | 0.1 | 1400 | 3.3782 | 3.3782 |\n| 3.2478 | 0.1 | 1450 | 3.3834 | 3.3834 |\n| 3.6509 | 0.11 | 1500 | 3.2751 | 3.2751 |\n| 3.5373 | 0.11 | 1550 | 3.3858 | 3.3858 |\n| 3.5735 | 0.11 | 1600 | 3.6914 | 3.6913 |\n| 3.3937 | 0.12 | 1650 | 3.3257 | 3.3257 |\n| 3.1949 | 0.12 | 1700 | 3.3608 | 
3.3608 |\n| 3.5509 | 0.13 | 1750 | 3.3229 | 3.3228 |\n| 3.434 | 0.13 | 1800 | 3.3007 | 3.3007 |\n| 3.2915 | 0.13 | 1850 | 3.3351 | 3.3351 |\n| 3.2697 | 0.14 | 1900 | 3.2991 | 3.2991 |\n| 3.2213 | 0.14 | 1950 | 3.3364 | 3.3364 |\n| 3.1428 | 0.14 | 2000 | 3.2597 | 3.2597 |\n| 3.1465 | 0.15 | 2050 | 3.2324 | 3.2324 |\n| 3.3002 | 0.15 | 2100 | 3.2291 | 3.2290 |\n| 3.3223 | 0.15 | 2150 | 3.2819 | 3.2819 |\n| 3.3418 | 0.16 | 2200 | 3.4539 | 3.4539 |\n| 3.2661 | 0.16 | 2250 | 3.2577 | 3.2577 |\n| 3.2665 | 0.17 | 2300 | 3.3346 | 3.3345 |\n| 3.1816 | 0.17 | 2350 | 3.2627 | 3.2627 |\n| 3.3308 | 0.17 | 2400 | 3.1830 | 3.1830 |\n| 3.0341 | 0.18 | 2450 | 3.3091 | 3.3092 |\n| 3.1945 | 0.18 | 2500 | 3.2192 | 3.2192 |\n| 3.4072 | 0.18 | 2550 | 3.2281 | 3.2281 |\n| 3.2343 | 0.19 | 2600 | 3.1747 | 3.1747 |\n| 3.1914 | 0.19 | 2650 | 3.2712 | 3.2712 |\n| 3.2789 | 0.19 | 2700 | 3.2793 | 3.2792 |\n| 3.5793 | 0.2 | 2750 | 3.2033 | 3.2033 |\n| 3.069 | 0.2 | 2800 | 3.5477 | 3.5477 |\n| 3.2867 | 0.2 | 2850 | 3.2137 | 3.2137 |\n| 3.3217 | 0.21 | 2900 | 3.2518 | 3.2518 |\n| 3.1865 | 0.21 | 2950 | 3.3086 | 3.3086 |\n| 3.1641 | 0.22 | 3000 | 3.2486 | 3.2486 |\n| 3.1733 | 0.22 | 3050 | 3.2717 | 3.2717 |\n| 3.3107 | 0.22 | 3100 | 3.2439 | 3.2439 |\n| 3.2632 | 0.23 | 3150 | 3.2095 | 3.2095 |\n| 3.1569 | 0.23 | 3200 | 3.2758 | 3.2758 |\n| 3.3872 | 0.23 | 3250 | 3.1989 | 3.1989 |\n| 3.1676 | 0.24 | 3300 | 3.1942 | 3.1942 |\n| 3.301 | 0.24 | 3350 | 3.2256 | 3.2256 |\n| 3.0839 | 0.24 | 3400 | 3.5059 | 3.5059 |\n| 3.2125 | 0.25 | 3450 | 3.1671 | 3.1671 |\n| 3.2996 | 0.25 | 3500 | 3.1261 | 3.1261 |\n| 3.0045 | 0.25 | 3550 | 3.1477 | 3.1477 |\n| 3.204 | 0.26 | 3600 | 3.3003 | 3.3003 |\n| 3.3212 | 0.26 | 3650 | 3.1440 | 3.1440 |\n| 3.0475 | 0.27 | 3700 | 3.1829 | 3.1829 |\n| 3.1462 | 0.27 | 3750 | 3.1428 | 3.1428 |\n| 3.2983 | 0.27 | 3800 | 3.1720 | 3.1720 |\n| 3.5087 | 0.28 | 3850 | 3.1918 | 3.1918 |\n| 3.1398 | 0.28 | 3900 | 3.1717 | 3.1717 |\n| 3.1668 | 0.28 | 3950 | 3.2359 | 3.2359 |\n| 3.2098 | 0.29 | 4000 | 3.1765 | 3.1765 |\n| 3.2907 | 0.29 | 4050 | 3.1372 | 3.1372 |\n| 3.063 | 0.29 | 4100 | 3.2287 | 3.2287 |\n| 3.1269 | 0.3 | 4150 | 3.1292 | 3.1292 |\n| 2.8749 | 0.3 | 4200 | 3.2760 | 3.2761 |\n| 3.1634 | 0.31 | 4250 | 3.1644 | 3.1644 |\n| 3.5689 | 0.31 | 4300 | 3.1634 | 3.1634 |\n| 3.1685 | 0.31 | 4350 | 3.2055 | 3.2055 |\n| 3.1687 | 0.32 | 4400 | 3.1537 | 3.1537 |\n| 3.068 | 0.32 | 4450 | 3.1519 | 3.1518 |\n| 3.1029 | 0.32 | 4500 | 3.2265 | 3.2264 |\n| 3.3463 | 0.33 | 4550 | 3.1653 | 3.1653 |\n| 3.2194 | 0.33 | 4600 | 3.1692 | 3.1692 |\n| 3.386 | 0.33 | 4650 | 3.2148 | 3.2148 |\n| 3.0511 | 0.34 | 4700 | 3.1837 | 3.1837 |\n| 3.2149 | 0.34 | 4750 | 3.2606 | 3.2606 |\n| 3.258 | 0.34 | 4800 | 3.1853 | 3.1853 |\n| 3.4155 | 0.35 | 4850 | 3.1749 | 3.1749 |\n| 2.913 | 0.35 | 4900 | 3.1410 | 3.1410 |\n| 3.1222 | 0.36 | 4950 | 3.1347 | 3.1346 |\n| 3.2797 | 0.36 | 5000 | 3.1493 | 3.1493 |\n| 3.2699 | 0.36 | 5050 | 3.1076 | 3.1075 |\n| 3.3319 | 0.37 | 5100 | 3.1395 | 3.1395 |\n| 3.0665 | 0.37 | 5150 | 3.1579 | 3.1579 |\n| 3.1746 | 0.37 | 5200 | 3.0783 | 3.0783 |\n| 3.167 | 0.38 | 5250 | 3.1002 | 3.1002 |\n| 3.1945 | 0.38 | 5300 | 3.1255 | 3.1254 |\n| 3.1175 | 0.38 | 5350 | 3.2457 | 3.2457 |\n| 3.1442 | 0.39 | 5400 | 3.0763 | 3.0763 |\n| 3.0234 | 0.39 | 5450 | 3.1150 | 3.1150 |\n| 3.2851 | 0.39 | 5500 | 3.1527 | 3.1526 |\n| 3.2582 | 0.4 | 5550 | 3.1783 | 3.1783 |\n| 3.486 | 0.4 | 5600 | 3.0703 | 3.0703 |\n| 3.0174 | 0.41 | 5650 | 3.1628 | 3.1628 |\n| 3.0218 | 0.41 | 5700 | 3.0815 | 3.0815 |\n| 3.1719 | 0.41 | 5750 | 3.1450 | 3.1449 
|\n| 3.0538 | 0.42 | 5800 | 3.2821 | 3.2821 |\n| 3.089 | 0.42 | 5850 | 3.1103 | 3.1103 |\n| 3.2584 | 0.42 | 5900 | 3.0682 | 3.0682 |\n| 3.0384 | 0.43 | 5950 | 3.0831 | 3.0831 |\n| 3.146 | 0.43 | 6000 | 3.0556 | 3.0556 |\n| 3.3227 | 0.43 | 6050 | 3.1558 | 3.1558 |\n| 3.084 | 0.44 | 6100 | 3.1062 | 3.1062 |\n| 3.035 | 0.44 | 6150 | 3.1382 | 3.1381 |\n| 3.2302 | 0.44 | 6200 | 3.4294 | 3.4294 |\n| 3.2471 | 0.45 | 6250 | 3.0630 | 3.0629 |\n| 3.3483 | 0.45 | 6300 | 3.0820 | 3.0820 |\n| 3.1711 | 0.46 | 6350 | 3.1196 | 3.1196 |\n| 3.2419 | 0.46 | 6400 | 3.1502 | 3.1501 |\n| 3.2064 | 0.46 | 6450 | 3.0777 | 3.0777 |\n| 3.2577 | 0.47 | 6500 | 3.1496 | 3.1496 |\n| 3.1598 | 0.47 | 6550 | 3.1436 | 3.1436 |\n| 3.261 | 0.47 | 6600 | 3.0848 | 3.0848 |\n| 3.0999 | 0.48 | 6650 | 3.4262 | 3.4262 |\n| 3.2579 | 0.48 | 6700 | 3.1434 | 3.1434 |\n| 3.0663 | 0.48 | 6750 | 3.1967 | 3.1967 |\n| 2.9269 | 0.49 | 6800 | 3.1421 | 3.1420 |\n| 3.0539 | 0.49 | 6850 | 3.1127 | 3.1127 |\n| 3.0889 | 0.5 | 6900 | 3.0883 | 3.0882 |\n| 3.3546 | 0.5 | 6950 | 3.1240 | 3.1240 |\n| 2.7959 | 0.5 | 7000 | 3.1809 | 3.1809 |\n| 3.1456 | 0.51 | 7050 | 3.1098 | 3.1098 |\n| 3.129 | 0.51 | 7100 | 3.1305 | 3.1305 |\n| 3.0578 | 0.51 | 7150 | 3.0595 | 3.0594 |\n| 2.9928 | 0.52 | 7200 | 3.2893 | 3.2894 |\n| 3.3873 | 0.52 | 7250 | 3.0535 | 3.0535 |\n| 3.276 | 0.52 | 7300 | 3.1102 | 3.1101 |\n| 3.0081 | 0.53 | 7350 | 3.0800 | 3.0799 |\n| 2.995 | 0.53 | 7400 | 3.0763 | 3.0762 |\n| 3.0534 | 0.53 | 7450 | 3.1923 | 3.1922 |\n| 2.9008 | 0.54 | 7500 | 3.1613 | 3.1613 |\n| 3.1102 | 0.54 | 7550 | 3.1667 | 3.1667 |\n| 3.1981 | 0.55 | 7600 | 3.0901 | 3.0901 |\n| 3.1943 | 0.55 | 7650 | 3.1479 | 3.1479 |\n| 2.9393 | 0.55 | 7700 | 3.0897 | 3.0897 |\n| 3.4017 | 0.56 | 7750 | 3.1133 | 3.1133 |\n| 3.1755 | 0.56 | 7800 | 3.1046 | 3.1045 |\n| 3.2098 | 0.56 | 7850 | 3.1901 | 3.1901 |\n| 3.0473 | 0.57 | 7900 | 3.0407 | 3.0407 |\n| 3.1164 | 0.57 | 7950 | 3.0538 | 3.0538 |\n| 3.0977 | 0.57 | 8000 | 3.0916 | 3.0916 |\n| 3.1668 | 0.58 | 8050 | 3.0511 | 3.0511 |\n| 3.1759 | 0.58 | 8100 | 3.0570 | 3.0569 |\n| 3.0314 | 0.58 | 8150 | 3.0392 | 3.0391 |\n| 3.1754 | 0.59 | 8200 | 3.0931 | 3.0931 |\n| 3.1641 | 0.59 | 8250 | 3.0616 | 3.0616 |\n| 3.1117 | 0.6 | 8300 | 3.0858 | 3.0858 |\n| 3.0428 | 0.6 | 8350 | 3.3001 | 3.3001 |\n| 3.2059 | 0.6 | 8400 | 3.1211 | 3.1211 |\n| 3.1379 | 0.61 | 8450 | 3.1142 | 3.1142 |\n| 2.6985 | 0.61 | 8500 | 3.0227 | 3.0227 |\n| 3.1372 | 0.61 | 8550 | 3.3303 | 3.3303 |\n| 3.133 | 0.62 | 8600 | 3.0319 | 3.0319 |\n| 2.8701 | 0.62 | 8650 | 3.0984 | 3.0984 |\n| 3.3546 | 0.62 | 8700 | 3.0341 | 3.0340 |\n| 3.3581 | 0.63 | 8750 | 3.0209 | 3.0208 |\n| 3.2742 | 0.63 | 8800 | 3.1695 | 3.1695 |\n| 2.9777 | 0.64 | 8850 | 3.1243 | 3.1243 |\n| 3.2559 | 0.64 | 8900 | 3.0289 | 3.0289 |\n| 2.8806 | 0.64 | 8950 | 3.0622 | 3.0622 |\n| 3.0749 | 0.65 | 9000 | 3.0341 | 3.0341 |\n| 3.0466 | 0.65 | 9050 | 3.0805 | 3.0805 |\n| 2.9984 | 0.65 | 9100 | 3.0313 | 3.0312 |\n| 3.203 | 0.66 | 9150 | 3.0184 | 3.0183 |\n| 3.2582 | 0.66 | 9200 | 3.1197 | 3.1197 |\n| 3.2952 | 0.66 | 9250 | 3.0834 | 3.0834 |\n| 2.9485 | 0.67 | 9300 | 3.0659 | 3.0659 |\n| 3.0277 | 0.67 | 9350 | 3.0454 | 3.0454 |\n| 3.2054 | 0.67 | 9400 | 3.1008 | 3.1008 |\n| 3.0935 | 0.68 | 9450 | 3.0649 | 3.0648 |\n| 3.0175 | 0.68 | 9500 | 3.0549 | 3.0549 |\n| 3.1301 | 0.69 | 9550 | 3.0076 | 3.0076 |\n| 3.0053 | 0.69 | 9600 | 3.0320 | 3.0319 |\n| 2.9718 | 0.69 | 9650 | 3.0270 | 3.0270 |\n| 3.0023 | 0.7 | 9700 | 3.0470 | 3.0469 |\n| 3.3893 | 0.7 | 9750 | 2.9923 | 2.9922 |\n| 3.0126 | 0.7 | 9800 | 3.1265 | 3.1265 |\n| 
2.7614 | 0.71 | 9850 | 3.2194 | 3.2194 |\n| 3.1488 | 0.71 | 9900 | 3.0394 | 3.0394 |\n| 3.0751 | 0.71 | 9950 | 3.0037 | 3.0037 |\n| 2.6901 | 0.72 | 10000 | 3.0517 | 3.0517 |\n| 3.1097 | 0.72 | 10050 | 3.0385 | 3.0385 |\n| 2.9786 | 0.72 | 10100 | 3.0478 | 3.0478 |\n| 3.0759 | 0.73 | 10150 | 3.0663 | 3.0663 |\n| 3.1498 | 0.73 | 10200 | 3.0112 | 3.0112 |\n| 3.1841 | 0.74 | 10250 | 3.0059 | 3.0059 |\n| 2.8827 | 0.74 | 10300 | 3.1028 | 3.1028 |\n| 3.0948 | 0.74 | 10350 | 3.0770 | 3.0770 |\n| 3.1116 | 0.75 | 10400 | 3.1307 | 3.1306 |\n| 2.8361 | 0.75 | 10450 | 3.0373 | 3.0373 |\n| 3.2783 | 0.75 | 10500 | 2.9874 | 2.9874 |\n| 2.8844 | 0.76 | 10550 | 3.0150 | 3.0150 |\n| 2.9918 | 0.76 | 10600 | 3.0176 | 3.0175 |\n| 3.1552 | 0.76 | 10650 | 2.9842 | 2.9841 |\n| 2.8834 | 0.77 | 10700 | 3.0438 | 3.0437 |\n| 2.9602 | 0.77 | 10750 | 3.0263 | 3.0262 |\n| 3.215 | 0.78 | 10800 | 2.9959 | 2.9959 |\n| 3.172 | 0.78 | 10850 | 3.0018 | 3.0018 |\n| 2.7982 | 0.78 | 10900 | 2.9811 | 2.9811 |\n| 2.99 | 0.79 | 10950 | 3.0473 | 3.0472 |\n| 3.2533 | 0.79 | 11000 | 2.9874 | 2.9873 |\n| 3.0024 | 0.79 | 11050 | 2.9936 | 2.9935 |\n| 3.0641 | 0.8 | 11100 | 3.0023 | 3.0022 |\n| 2.834 | 0.8 | 11150 | 3.0665 | 3.0665 |\n| 3.5 | 0.8 | 11200 | 3.0045 | 3.0044 |\n| 2.9229 | 0.81 | 11250 | 2.9972 | 2.9972 |\n| 3.1083 | 0.81 | 11300 | 3.0198 | 3.0198 |\n| 3.1141 | 0.81 | 11350 | 3.0926 | 3.0926 |\n| 3.2897 | 0.82 | 11400 | 3.0195 | 3.0195 |\n| 2.703 | 0.82 | 11450 | 2.9642 | 2.9642 |\n| 3.2053 | 0.83 | 11500 | 3.0739 | 3.0739 |\n| 3.0592 | 0.83 | 11550 | 3.0547 | 3.0547 |\n| 2.7905 | 0.83 | 11600 | 3.0112 | 3.0112 |\n| 3.0521 | 0.84 | 11650 | 2.9676 | 2.9676 |\n| 2.8807 | 0.84 | 11700 | 2.9737 | 2.9737 |\n| 3.212 | 0.84 | 11750 | 3.0579 | 3.0578 |\n| 3.1624 | 0.85 | 11800 | 3.0113 | 3.0112 |\n| 3.0013 | 0.85 | 11850 | 3.0262 | 3.0262 |\n| 3.1247 | 0.85 | 11900 | 3.0005 | 3.0005 |\n| 3.122 | 0.86 | 11950 | 3.0288 | 3.0288 |\n| 2.9088 | 0.86 | 12000 | 3.0101 | 3.0101 |\n| 3.3433 | 0.86 | 12050 | 3.0417 | 3.0417 |\n| 3.1722 | 0.87 | 12100 | 2.9808 | 2.9807 |\n| 3.0472 | 0.87 | 12150 | 2.9896 | 2.9896 |\n| 2.8991 | 0.88 | 12200 | 2.9739 | 2.9738 |\n| 2.8017 | 0.88 | 12250 | 3.1197 | 3.1197 |\n| 3.1467 | 0.88 | 12300 | 2.9484 | 2.9483 |\n| 3.0622 | 0.89 | 12350 | 3.0068 | 3.0068 |\n| 2.7503 | 0.89 | 12400 | 3.0082 | 3.0082 |\n| 2.9746 | 0.89 | 12450 | 3.0171 | 3.0171 |\n| 3.0332 | 0.9 | 12500 | 3.0219 | 3.0219 |\n| 2.9461 | 0.9 | 12550 | 3.0852 | 3.0852 |\n| 3.1592 | 0.9 | 12600 | 2.9739 | 2.9739 |\n| 3.1065 | 0.91 | 12650 | 2.9762 | 2.9762 |\n| 2.9471 | 0.91 | 12700 | 2.9900 | 2.9900 |\n| 3.0888 | 0.92 | 12750 | 2.9958 | 2.9958 |\n| 3.0276 | 0.92 | 12800 | 2.9635 | 2.9634 |\n| 3.3018 | 0.92 | 12850 | 2.9799 | 2.9799 |\n| 3.0144 | 0.93 | 12900 | 3.0390 | 3.0390 |\n| 3.123 | 0.93 | 12950 | 3.0114 | 3.0114 |\n| 2.9762 | 0.93 | 13000 | 2.9466 | 2.9466 |\n| 3.0882 | 0.94 | 13050 | 2.9648 | 2.9648 |\n| 3.378 | 0.94 | 13100 | 2.9714 | 2.9714 |\n| 2.9257 | 0.94 | 13150 | 2.9608 | 2.9607 |\n| 3.1253 | 0.95 | 13200 | 2.9670 | 2.9670 |\n| 3.0435 | 0.95 | 13250 | 2.9772 | 2.9772 |\n| 3.1933 | 0.95 | 13300 | 2.9668 | 2.9667 |\n| 2.6627 | 0.96 | 13350 | 2.9485 | 2.9485 |\n| 2.8993 | 0.96 | 13400 | 2.9604 | 2.9604 |\n| 3.0717 | 0.97 | 13450 | 2.9680 | 2.9680 |\n| 2.9808 | 0.97 | 13500 | 3.0079 | 3.0079 |\n| 3.1127 | 0.97 | 13550 | 3.0293 | 3.0292 |\n| 2.7839 | 0.98 | 13600 | 3.0223 | 3.0222 |\n| 3.0486 | 0.98 | 13650 | 2.9962 | 2.9962 |\n| 2.9194 | 0.98 | 13700 | 3.0340 | 3.0340 |\n| 3.0708 | 0.99 | 13750 | 2.9454 | 2.9454 |\n| 2.8585 | 0.99 | 13800 
| 3.0066 | 3.0065 |\n| 2.9663 | 0.99 | 13850 | 2.9561 | 2.9561 |\n| 3.1141 | 1.0 | 13900 | 2.9465 | 2.9465 |\n| 2.9909 | 1.0 | 13950 | 2.9614 | 2.9613 |\n| 2.8155 | 1.0 | 14000 | 2.9983 | 2.9983 |\n| 2.676 | 1.01 | 14050 | 2.9545 | 2.9545 |\n| 3.0067 | 1.01 | 14100 | 3.0463 | 3.0463 |\n| 2.7865 | 1.02 | 14150 | 3.1286 | 3.1285 |\n| 2.7287 | 1.02 | 14200 | 3.0271 | 3.0270 |\n| 2.4092 | 1.02 | 14250 | 3.0883 | 3.0883 |\n| 2.6929 | 1.03 | 14300 | 2.9681 | 2.9680 |\n| 2.7634 | 1.03 | 14350 | 2.9687 | 2.9686 |\n| 2.8261 | 1.03 | 14400 | 3.0169 | 3.0169 |\n| 2.7826 | 1.04 | 14450 | 2.9896 | 2.9896 |\n| 2.5205 | 1.04 | 14500 | 3.0000 | 3.0000 |\n| 2.5125 | 1.04 | 14550 | 3.2051 | 3.2051 |\n| 2.7654 | 1.05 | 14600 | 2.9598 | 2.9598 |\n| 2.7537 | 1.05 | 14650 | 3.0330 | 3.0330 |\n| 2.8008 | 1.05 | 14700 | 2.9685 | 2.9685 |\n| 2.7475 | 1.06 | 14750 | 2.9752 | 2.9752 |\n| 2.9336 | 1.06 | 14800 | 2.9771 | 2.9771 |\n| 2.7198 | 1.07 | 14850 | 2.9437 | 2.9437 |\n| 2.8061 | 1.07 | 14900 | 3.0164 | 3.0164 |\n| 2.6694 | 1.07 | 14950 | 3.0257 | 3.0257 |\n| 3.0206 | 1.08 | 15000 | 2.9708 | 2.9708 |\n| 2.5526 | 1.08 | 15050 | 3.0267 | 3.0267 |\n| 2.5243 | 1.08 | 15100 | 2.9703 | 2.9702 |\n| 2.5846 | 1.09 | 15150 | 2.9967 | 2.9967 |\n| 2.7397 | 1.09 | 15200 | 3.0103 | 3.0103 |\n| 2.673 | 1.09 | 15250 | 2.9754 | 2.9754 |\n| 2.5084 | 1.1 | 15300 | 3.0346 | 3.0345 |\n| 2.4855 | 1.1 | 15350 | 2.9458 | 2.9457 |\n| 2.7313 | 1.11 | 15400 | 2.9859 | 2.9858 |\n| 2.7006 | 1.11 | 15450 | 3.0760 | 3.0759 |\n| 2.7244 | 1.11 | 15500 | 3.0000 | 3.0000 |\n| 2.4614 | 1.12 | 15550 | 3.0309 | 3.0309 |\n| 2.4961 | 1.12 | 15600 | 3.0103 | 3.0103 |\n| 2.768 | 1.12 | 15650 | 2.9935 | 2.9935 |\n| 2.7499 | 1.13 | 15700 | 3.0056 | 3.0056 |\n| 2.653 | 1.13 | 15750 | 3.0597 | 3.0597 |\n| 2.6518 | 1.13 | 15800 | 3.0372 | 3.0372 |\n| 2.7115 | 1.14 | 15850 | 2.9719 | 2.9719 |\n| 2.7183 | 1.14 | 15900 | 3.0150 | 3.0150 |\n| 2.642 | 1.14 | 15950 | 2.9677 | 2.9676 |\n| 2.4724 | 1.15 | 16000 | 3.1429 | 3.1429 |\n| 2.5061 | 1.15 | 16050 | 3.0118 | 3.0118 |\n| 2.6537 | 1.16 | 16100 | 2.9486 | 2.9485 |\n| 2.5527 | 1.16 | 16150 | 2.9290 | 2.9289 |\n| 2.5993 | 1.16 | 16200 | 3.0312 | 3.0312 |\n| 2.5689 | 1.17 | 16250 | 2.9628 | 2.9628 |\n| 2.6791 | 1.17 | 16300 | 2.9799 | 2.9799 |\n| 2.5362 | 1.17 | 16350 | 2.9344 | 2.9344 |\n| 2.722 | 1.18 | 16400 | 2.9889 | 2.9889 |\n| 2.6466 | 1.18 | 16450 | 3.0463 | 3.0463 |\n| 2.7251 | 1.18 | 16500 | 2.9908 | 2.9908 |\n| 2.6939 | 1.19 | 16550 | 3.0059 | 3.0059 |\n| 2.5142 | 1.19 | 16600 | 3.1051 | 3.1050 |\n| 2.708 | 1.19 | 16650 | 3.0247 | 3.0246 |\n| 2.8829 | 1.2 | 16700 | 3.0766 | 3.0766 |\n| 2.4804 | 1.2 | 16750 | 2.9606 | 2.9606 |\n| 2.7648 | 1.21 | 16800 | 3.0024 | 3.0024 |\n| 2.6951 | 1.21 | 16850 | 2.9377 | 2.9377 |\n| 2.6268 | 1.21 | 16900 | 2.9665 | 2.9665 |\n| 2.4565 | 1.22 | 16950 | 2.9571 | 2.9571 |\n| 2.4351 | 1.22 | 17000 | 2.9667 | 2.9667 |\n| 2.5413 | 1.22 | 17050 | 2.9858 | 2.9857 |\n| 2.4026 | 1.23 | 17100 | 2.9627 | 2.9627 |\n| 2.475 | 1.23 | 17150 | 3.0614 | 3.0613 |\n| 2.6409 | 1.23 | 17200 | 2.9948 | 2.9947 |\n| 2.4096 | 1.24 | 17250 | 2.9809 | 2.9809 |\n| 2.9013 | 1.24 | 17300 | 2.9059 | 2.9059 |\n| 2.5439 | 1.25 | 17350 | 3.0579 | 3.0579 |\n| 2.7954 | 1.25 | 17400 | 2.9680 | 2.9680 |\n| 2.5737 | 1.25 | 17450 | 2.9070 | 2.9070 |\n| 2.8598 | 1.26 | 17500 | 2.9365 | 2.9364 |\n| 2.6169 | 1.26 | 17550 | 2.9778 | 2.9777 |\n| 2.5259 | 1.26 | 17600 | 2.9682 | 2.9681 |\n| 2.8575 | 1.27 | 17650 | 2.9945 | 2.9945 |\n| 2.7421 | 1.27 | 17700 | 2.9520 | 2.9520 |\n| 2.8372 | 1.27 | 17750 | 2.9436 | 
2.9435 |\n| 2.5107 | 1.28 | 17800 | 2.9719 | 2.9718 |\n| 2.6528 | 1.28 | 17850 | 3.0114 | 3.0114 |\n| 2.5169 | 1.28 | 17900 | 2.9163 | 2.9163 |\n| 2.5384 | 1.29 | 17950 | 2.9369 | 2.9369 |\n| 2.4932 | 1.29 | 18000 | 2.9385 | 2.9384 |\n| 2.654 | 1.3 | 18050 | 2.9273 | 2.9273 |\n| 2.5108 | 1.3 | 18100 | 2.9197 | 2.9197 |\n| 2.6425 | 1.3 | 18150 | 2.9047 | 2.9047 |\n| 2.5097 | 1.31 | 18200 | 2.8998 | 2.8998 |\n| 2.6153 | 1.31 | 18250 | 2.9400 | 2.9399 |\n| 2.6642 | 1.31 | 18300 | 2.9071 | 2.9071 |\n| 2.5172 | 1.32 | 18350 | 2.9538 | 2.9537 |\n| 2.6641 | 1.32 | 18400 | 2.9670 | 2.9670 |\n| 2.667 | 1.32 | 18450 | 2.9586 | 2.9586 |\n| 2.3798 | 1.33 | 18500 | 2.9442 | 2.9442 |\n| 2.7429 | 1.33 | 18550 | 2.9354 | 2.9354 |\n| 2.6313 | 1.33 | 18600 | 2.9349 | 2.9349 |\n| 2.7297 | 1.34 | 18650 | 2.9436 | 2.9436 |\n| 2.4944 | 1.34 | 18700 | 2.9431 | 2.9431 |\n| 2.5849 | 1.35 | 18750 | 2.9068 | 2.9068 |\n| 2.4072 | 1.35 | 18800 | 2.9049 | 2.9049 |\n| 2.5155 | 1.35 | 18850 | 2.9386 | 2.9386 |\n| 2.4623 | 1.36 | 18900 | 2.9390 | 2.9390 |\n| 2.3734 | 1.36 | 18950 | 2.8948 | 2.8948 |\n| 2.662 | 1.36 | 19000 | 3.0272 | 3.0272 |\n| 2.6445 | 1.37 | 19050 | 3.0893 | 3.0893 |\n| 2.5997 | 1.37 | 19100 | 2.9809 | 2.9809 |\n| 2.7098 | 1.37 | 19150 | 2.9353 | 2.9353 |\n| 2.7256 | 1.38 | 19200 | 2.9524 | 2.9523 |\n| 2.7286 | 1.38 | 19250 | 3.0198 | 3.0198 |\n| 2.6852 | 1.39 | 19300 | 2.9169 | 2.9169 |\n| 2.6173 | 1.39 | 19350 | 2.9124 | 2.9124 |\n| 2.9245 | 1.39 | 19400 | 2.9010 | 2.9010 |\n| 2.4449 | 1.4 | 19450 | 2.9271 | 2.9271 |\n| 2.7729 | 1.4 | 19500 | 2.9354 | 2.9354 |\n| 2.5422 | 1.4 | 19550 | 2.9942 | 2.9942 |\n| 2.8516 | 1.41 | 19600 | 2.9525 | 2.9525 |\n| 2.6338 | 1.41 | 19650 | 2.9009 | 2.9009 |\n| 2.536 | 1.41 | 19700 | 2.8967 | 2.8967 |\n| 2.6251 | 1.42 | 19750 | 2.9858 | 2.9858 |\n| 2.6675 | 1.42 | 19800 | 2.9368 | 2.9367 |\n| 2.649 | 1.42 | 19850 | 2.9188 | 2.9187 |\n| 2.4321 | 1.43 | 19900 | 2.9024 | 2.9024 |\n| 2.5635 | 1.43 | 19950 | 2.9593 | 2.9592 |\n| 2.7008 | 1.44 | 20000 | 2.9312 | 2.9312 |\n| 2.3847 | 1.44 | 20050 | 2.9469 | 2.9469 |\n| 2.5795 | 1.44 | 20100 | 2.9610 | 2.9610 |\n| 2.5448 | 1.45 | 20150 | 2.9250 | 2.9249 |\n| 2.4307 | 1.45 | 20200 | 2.8984 | 2.8984 |\n| 2.603 | 1.45 | 20250 | 2.9128 | 2.9127 |\n| 2.4792 | 1.46 | 20300 | 2.9316 | 2.9315 |\n| 2.5079 | 1.46 | 20350 | 2.9318 | 2.9318 |\n| 2.4144 | 1.46 | 20400 | 2.9658 | 2.9657 |\n| 2.4941 | 1.47 | 20450 | 2.9321 | 2.9321 |\n| 2.6389 | 1.47 | 20500 | 2.9407 | 2.9406 |\n| 2.6555 | 1.47 | 20550 | 2.9680 | 2.9679 |\n| 2.4947 | 1.48 | 20600 | 2.8995 | 2.8995 |\n| 2.8275 | 1.48 | 20650 | 2.9178 | 2.9178 |\n| 2.7041 | 1.49 | 20700 | 2.9182 | 2.9182 |\n| 2.3485 | 1.49 | 20750 | 2.9254 | 2.9254 |\n| 2.4669 | 1.49 | 20800 | 2.9146 | 2.9146 |\n| 2.7119 | 1.5 | 20850 | 2.9105 | 2.9105 |\n| 2.5042 | 1.5 | 20900 | 2.9439 | 2.9439 |\n| 2.6387 | 1.5 | 20950 | 2.9054 | 2.9054 |\n| 2.7571 | 1.51 | 21000 | 2.8993 | 2.8992 |\n| 2.6901 | 1.51 | 21050 | 2.9055 | 2.9055 |\n| 2.5939 | 1.51 | 21100 | 2.9496 | 2.9496 |\n| 2.6441 | 1.52 | 21150 | 2.9458 | 2.9458 |\n| 2.73 | 1.52 | 21200 | 2.9073 | 2.9073 |\n| 2.5875 | 1.53 | 21250 | 2.9283 | 2.9283 |\n| 2.6216 | 1.53 | 21300 | 2.9595 | 2.9594 |\n| 2.777 | 1.53 | 21350 | 2.9612 | 2.9612 |\n| 2.7403 | 1.54 | 21400 | 2.8779 | 2.8778 |\n| 2.5636 | 1.54 | 21450 | 2.9410 | 2.9409 |\n| 2.4265 | 1.54 | 21500 | 2.9706 | 2.9706 |\n| 2.6707 | 1.55 | 21550 | 2.9196 | 2.9196 |\n| 2.3088 | 1.55 | 21600 | 2.9238 | 2.9237 |\n| 2.7564 | 1.55 | 21650 | 2.9096 | 2.9096 |\n| 2.6355 | 1.56 | 21700 | 2.9042 | 2.9042 |\n| 2.425 
| 1.56 | 21750 | 2.9651 | 2.9651 |\n| 2.3169 | 1.56 | 21800 | 2.9371 | 2.9371 |\n| 2.6283 | 1.57 | 21850 | 2.9201 | 2.9201 |\n| 2.4333 | 1.57 | 21900 | 3.0037 | 3.0037 |\n| 2.5661 | 1.58 | 21950 | 2.9179 | 2.9178 |\n| 2.58 | 1.58 | 22000 | 2.9419 | 2.9419 |\n| 2.6451 | 1.58 | 22050 | 2.9683 | 2.9682 |\n| 2.4686 | 1.59 | 22100 | 2.9073 | 2.9073 |\n| 2.4795 | 1.59 | 22150 | 2.9364 | 2.9364 |\n| 2.6442 | 1.59 | 22200 | 2.9521 | 2.9520 |\n| 2.4085 | 1.6 | 22250 | 2.9353 | 2.9352 |\n| 2.4595 | 1.6 | 22300 | 2.9340 | 2.9340 |\n| 2.5705 | 1.6 | 22350 | 2.9283 | 2.9283 |\n| 2.4189 | 1.61 | 22400 | 2.9017 | 2.9016 |\n| 2.5823 | 1.61 | 22450 | 2.9032 | 2.9032 |\n| 2.5402 | 1.61 | 22500 | 2.9039 | 2.9038 |\n| 2.8166 | 1.62 | 22550 | 2.8849 | 2.8849 |\n| 2.6202 | 1.62 | 22600 | 2.8800 | 2.8800 |\n| 2.584 | 1.63 | 22650 | 2.8750 | 2.8750 |\n| 2.3816 | 1.63 | 22700 | 2.9109 | 2.9108 |\n| 2.5496 | 1.63 | 22750 | 2.9024 | 2.9024 |\n| 2.5379 | 1.64 | 22800 | 2.8798 | 2.8798 |\n| 2.8131 | 1.64 | 22850 | 2.8656 | 2.8656 |\n| 2.2938 | 1.64 | 22900 | 2.9004 | 2.9004 |\n| 2.6783 | 1.65 | 22950 | 2.8878 | 2.8878 |\n| 2.5324 | 1.65 | 23000 | 2.8982 | 2.8981 |\n| 2.6519 | 1.65 | 23050 | 2.8990 | 2.8990 |\n| 2.8409 | 1.66 | 23100 | 2.9316 | 2.9316 |\n| 2.6925 | 1.66 | 23150 | 2.9169 | 2.9168 |\n| 2.5419 | 1.66 | 23200 | 2.9039 | 2.9039 |\n| 2.3325 | 1.67 | 23250 | 2.9207 | 2.9207 |\n| 2.6392 | 1.67 | 23300 | 2.9194 | 2.9193 |\n| 2.8263 | 1.68 | 23350 | 2.9086 | 2.9085 |\n| 2.7376 | 1.68 | 23400 | 2.9024 | 2.9024 |\n| 2.2401 | 1.68 | 23450 | 2.9111 | 2.9110 |\n| 2.4786 | 1.69 | 23500 | 2.9104 | 2.9104 |\n| 2.55 | 1.69 | 23550 | 2.9199 | 2.9199 |\n| 2.8087 | 1.69 | 23600 | 2.9298 | 2.9298 |\n| 2.6732 | 1.7 | 23650 | 2.9338 | 2.9338 |\n| 2.4693 | 1.7 | 23700 | 2.9224 | 2.9224 |\n| 2.5044 | 1.7 | 23750 | 2.9163 | 2.9162 |\n| 2.5339 | 1.71 | 23800 | 2.9201 | 2.9201 |\n| 2.6954 | 1.71 | 23850 | 2.9250 | 2.9250 |\n| 2.4067 | 1.72 | 23900 | 2.9298 | 2.9298 |\n| 2.642 | 1.72 | 23950 | 2.8989 | 2.8989 |\n| 2.5598 | 1.72 | 24000 | 2.9036 | 2.9035 |\n| 2.3665 | 1.73 | 24050 | 2.9076 | 2.9075 |\n| 2.702 | 1.73 | 24100 | 2.9168 | 2.9167 |\n| 2.5716 | 1.73 | 24150 | 2.9149 | 2.9149 |\n| 2.5707 | 1.74 | 24200 | 2.9051 | 2.9051 |\n| 2.5379 | 1.74 | 24250 | 2.9431 | 2.9431 |\n| 2.3297 | 1.74 | 24300 | 2.9746 | 2.9746 |\n| 2.405 | 1.75 | 24350 | 2.9450 | 2.9449 |\n| 2.7137 | 1.75 | 24400 | 2.9306 | 2.9306 |\n| 2.3818 | 1.75 | 24450 | 2.9424 | 2.9423 |\n| 2.2058 | 1.76 | 24500 | 2.9433 | 2.9433 |\n| 2.2247 | 1.76 | 24550 | 2.9475 | 2.9474 |\n| 2.5951 | 1.77 | 24600 | 2.9248 | 2.9247 |\n| 2.6076 | 1.77 | 24650 | 2.9035 | 2.9034 |\n| 2.4384 | 1.77 | 24700 | 2.9169 | 2.9169 |\n| 2.5674 | 1.78 | 24750 | 2.9230 | 2.9230 |\n| 2.3697 | 1.78 | 24800 | 2.9288 | 2.9287 |\n| 2.4873 | 1.78 | 24850 | 2.9343 | 2.9342 |\n| 2.4828 | 1.79 | 24900 | 2.9140 | 2.9140 |\n| 2.4045 | 1.79 | 24950 | 2.9132 | 2.9132 |\n| 2.4529 | 1.79 | 25000 | 2.9224 | 2.9224 |\n| 2.425 | 1.8 | 25050 | 2.9152 | 2.9152 |\n| 2.4542 | 1.8 | 25100 | 2.9062 | 2.9062 |\n| 2.5876 | 1.8 | 25150 | 2.9111 | 2.9111 |\n| 2.537 | 1.81 | 25200 | 2.9082 | 2.9081 |\n| 2.487 | 1.81 | 25250 | 2.9120 | 2.9120 |\n| 2.3972 | 1.82 | 25300 | 2.9032 | 2.9032 |\n| 2.3996 | 1.82 | 25350 | 2.8937 | 2.8937 |\n| 2.5223 | 1.82 | 25400 | 2.8976 | 2.8975 |\n| 2.5235 | 1.83 | 25450 | 2.9135 | 2.9135 |\n| 2.5024 | 1.83 | 25500 | 2.9238 | 2.9238 |\n| 2.6154 | 1.83 | 25550 | 2.9292 | 2.9291 |\n| 2.6438 | 1.84 | 25600 | 2.9280 | 2.9280 |\n| 2.5625 | 1.84 | 25650 | 2.9254 | 2.9253 |\n| 2.667 | 1.84 | 25700 | 
2.9235 | 2.9234 |\n| 2.7495 | 1.85 | 25750 | 2.9195 | 2.9195 |\n| 2.6583 | 1.85 | 25800 | 2.9210 | 2.9210 |\n| 2.6855 | 1.86 | 25850 | 2.9162 | 2.9162 |\n| 2.4995 | 1.86 | 25900 | 2.9150 | 2.9149 |\n| 2.6508 | 1.86 | 25950 | 2.9228 | 2.9228 |\n| 2.6263 | 1.87 | 26000 | 2.9254 | 2.9253 |\n| 2.5796 | 1.87 | 26050 | 2.9271 | 2.9270 |\n| 2.4272 | 1.87 | 26100 | 2.9225 | 2.9225 |\n| 2.5424 | 1.88 | 26150 | 2.9218 | 2.9217 |\n| 2.6146 | 1.88 | 26200 | 2.9216 | 2.9216 |\n| 2.3928 | 1.88 | 26250 | 2.9184 | 2.9184 |\n| 2.7237 | 1.89 | 26300 | 2.9169 | 2.9169 |\n| 2.4522 | 1.89 | 26350 | 2.9167 | 2.9167 |\n| 2.65 | 1.89 | 26400 | 2.9186 | 2.9185 |\n| 2.3969 | 1.9 | 26450 | 2.9151 | 2.9150 |\n| 2.6054 | 1.9 | 26500 | 2.9185 | 2.9185 |\n| 2.6169 | 1.91 | 26550 | 2.9179 | 2.9179 |\n| 2.6473 | 1.91 | 26600 | 2.9148 | 2.9148 |\n| 2.7241 | 1.91 | 26650 | 2.9127 | 2.9127 |\n| 2.5228 | 1.92 | 26700 | 2.9122 | 2.9122 |\n| 2.2797 | 1.92 | 26750 | 2.9116 | 2.9116 |\n| 2.3311 | 1.92 | 26800 | 2.9096 | 2.9096 |\n| 2.4659 | 1.93 | 26850 | 2.9097 | 2.9097 |\n| 2.6423 | 1.93 | 26900 | 2.9115 | 2.9114 |\n| 2.6203 | 1.93 | 26950 | 2.9130 | 2.9130 |\n| 2.5754 | 1.94 | 27000 | 2.9125 | 2.9125 |\n| 2.2694 | 1.94 | 27050 | 2.9122 | 2.9121 |\n| 2.4308 | 1.94 | 27100 | 2.9127 | 2.9126 |\n| 2.3289 | 1.95 | 27150 | 2.9129 | 2.9128 |\n| 2.6457 | 1.95 | 27200 | 2.9128 | 2.9128 |\n| 2.4722 | 1.96 | 27250 | 2.9126 | 2.9126 |\n| 2.5979 | 1.96 | 27300 | 2.9133 | 2.9133 |\n| 2.5693 | 1.96 | 27350 | 2.9137 | 2.9137 |\n| 2.6261 | 1.97 | 27400 | 2.9134 | 2.9134 |\n| 2.7006 | 1.97 | 27450 | 2.9136 | 2.9135 |\n| 2.6482 | 1.97 | 27500 | 2.9134 | 2.9134 |\n| 2.6639 | 1.98 | 27550 | 2.9134 | 2.9134 |\n| 2.6761 | 1.98 | 27600 | 2.9133 | 2.9133 |\n| 2.4477 | 1.98 | 27650 | 2.9134 | 2.9134 |\n| 2.4656 | 1.99 | 27700 | 2.9134 | 2.9134 |\n| 2.7268 | 1.99 | 27750 | 2.9134 | 2.9134 |\n| 2.4972 | 2.0 | 27800 | 2.9134 | 2.9134 |\n| 2.517 | 2.0 | 27850 | 2.9134 | 2.9134 |\n\n\n### Framework versions\n\n- Transformers 4.34.0\n- Pytorch 2.0.1\n- Datasets 2.14.5\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "rylai88/bert_base_chinese_baidu_fintune", "base_model_relation": "base" }, { "model_id": "HansOMEL/QA-bert-base-chinese-Hw1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: QA-bert-base-chinese-Hw1\n results: []\n---\n\n\n\n# QA-bert-base-chinese-Hw1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.8793\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss 
|\n|:-------------:|:-----:|:-----:|:---------------:|\n| 0.8471 | 1.0 | 13671 | 0.8793 |\n| 0.422 | 2.0 | 27342 | 0.9830 |\n\n\n### Framework versions\n\n- Transformers 4.34.1\n- Pytorch 2.1.0+cu118\n- Datasets 2.14.5\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "HansOMEL/QA-bert-base-chinese-Hw1", "base_model_relation": "base" }, { "model_id": "xjlulu/ntu_adl_paragraph_selection_model", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: ntu_adl_paragraph_selection_model\n results: []\n---\n\n\n\n# ntu_adl_paragraph_selection_model\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2527\n- Accuracy: 0.9505\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:-----:|:---------------:|:--------:|\n| 0.2626 | 1.0 | 10857 | 0.2527 | 0.9505 |\n\n\n### Framework versions\n\n- Transformers 4.34.1\n- Pytorch 2.1.0+cu118\n- Datasets 2.14.5\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "xjlulu/ntu_adl_paragraph_selection_model", "base_model_relation": "base" }, { "model_id": "xjlulu/ntu_adl_span_selection_bert", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: ntu_adl_span_selection_bert\n results: []\n---\n\n\n\n# ntu_adl_span_selection_bert\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.0552\n- Em Accuracy: 0.7607\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 1\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 2\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Em Accuracy |\n|:-------------:|:-----:|:-----:|:---------------:|:-----------:|\n| 1.161 | 1.0 
| 10857 | 1.2192 | 0.7029 |\n| 0.7596 | 2.0 | 21714 | 1.3003 | 0.7338 |\n| 0.551 | 3.0 | 32571 | 1.5081 | 0.7398 |\n| 0.2034 | 4.0 | 43428 | 1.8194 | 0.7474 |\n| 0.0762 | 5.0 | 54285 | 2.0552 | 0.7607 |\n\n\n### Framework versions\n\n- Transformers 4.34.1\n- Pytorch 2.1.0+cu118\n- Datasets 2.14.6\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "xjlulu/ntu_adl_span_selection_bert", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_dl_t", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_dl_t\n results: []\n---\n\n\n\n# AIYIYA/my_dl_t\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.6344\n- Validation Loss: 0.7331\n- Train Accuracy: 0.6667\n- Epoch: 2\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 1.1303 | 0.6328 | 0.6667 | 0 |\n| 0.8332 | 0.6572 | 0.6667 | 1 |\n| 0.6344 | 0.7331 | 0.6667 | 2 |\n\n\n### Framework versions\n\n- Transformers 4.34.1\n- TensorFlow 2.13.0\n- Datasets 2.14.5\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_dl_t", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_dl_1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_dl_1\n results: []\n---\n\n\n\n# AIYIYA/my_dl_1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.3989\n- Validation Loss: 0.3557\n- Train Accuracy: 1.0\n- Epoch: 2\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- 
optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 25, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 0.8101 | 0.6092 | 0.5 | 0 |\n| 0.5495 | 0.4091 | 1.0 | 1 |\n| 0.3989 | 0.3557 | 1.0 | 2 |\n\n\n### Framework versions\n\n- Transformers 4.34.1\n- TensorFlow 2.13.0\n- Datasets 2.14.5\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_dl_1", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_dl_2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_dl_2\n results: []\n---\n\n\n\n# AIYIYA/my_dl_2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.2708\n- Validation Loss: 0.2787\n- Train Accuracy: 1.0\n- Epoch: 5\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 0.7163 | 0.5677 | 0.6 | 0 |\n| 0.5522 | 0.4402 | 1.0 | 1 |\n| 0.4601 | 0.3570 | 1.0 | 2 |\n| 0.3585 | 0.3007 | 1.0 | 3 |\n| 0.2822 | 0.2787 | 1.0 | 4 |\n| 0.2708 | 0.2787 | 1.0 | 5 |\n\n\n### Framework versions\n\n- Transformers 4.35.0\n- TensorFlow 2.14.0\n- Datasets 2.14.6\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_dl_2", "base_model_relation": "base" }, { "model_id": "piecake/model_1", "gated": "False", "card": "---\nbase_model: 
bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: model_1\n results: []\n---\n\n\n\n# model_1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2150\n- Accuracy: 0.9511\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| 0.1896 | 1.0 | 5429 | 0.2150 | 0.9511 |\n\n\n### Framework versions\n\n- Transformers 4.35.0\n- Pytorch 2.1.0+cu118\n- Datasets 2.14.6\n- Tokenizers 0.14.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "piecake/model_1", "base_model_relation": "base" }, { "model_id": "piecake/model_2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: model_2\n results: []\n---\n\n\n\n# model_2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7504\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| 0.8882 | 1.0 | 1358 | 0.7504 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.0+cu118\n- Datasets 2.14.7\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "piecake/model_2", "base_model_relation": "base" }, { "model_id": "ThuyNT03/CS431_Car-COQE_CSI", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: CS431_Car-COQE_CSI\n results: []\n---\n\n\n\n# CS431_Car-COQE_CSI\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## 
Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.0+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "ThuyNT03/CS431_Car-COQE_CSI", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_ti_new1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_ti_new1\n results: []\n---\n\n\n\n# AIYIYA/my_ti_new1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.1553\n- Validation Loss: 0.0986\n- Train Accuracy: 0.9670\n- Epoch: 0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6495, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 0.1553 | 0.0986 | 0.9670 | 0 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.14.0\n- Datasets 2.15.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_ti_new1", "base_model_relation": "base" }, { "model_id": "ThuyNT03/CS431_Ele-COQE_CSI", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: CS431_Ele-COQE_CSI\n results: []\n---\n\n\n\n# CS431_Ele-COQE_CSI\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 
64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.0+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "ThuyNT03/CS431_Ele-COQE_CSI", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_ti_new2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_ti_new2\n results: []\n---\n\n\n\n# AIYIYA/my_ti_new2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.0697\n- Validation Loss: 0.0947\n- Train Accuracy: 0.9696\n- Epoch: 1\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5525, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 0.1651 | 0.0975 | 0.9668 | 0 |\n| 0.0697 | 0.0947 | 0.9696 | 1 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.14.0\n- Datasets 2.15.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_ti_new2", "base_model_relation": "base" }, { "model_id": "BrianHsu/Bert_QA_multiple_choice", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: Bert_QA_multiple_choice\n results: []\n---\n\n\n\n# Bert_QA_multiple_choice\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.3867\n- Accuracy: 0.6017\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- 
optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| 1.1769 | 1.0 | 752 | 1.0007 | 0.5736 |\n| 0.7735 | 2.0 | 1504 | 0.9846 | 0.5977 |\n| 0.3761 | 3.0 | 2256 | 1.3867 | 0.6017 |\n\n\n### Framework versions\n\n- Transformers 4.36.0\n- Pytorch 2.1.1+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "BrianHsu/Bert_QA_multiple_choice", "base_model_relation": "base" }, { "model_id": "BrianHsu/BERT_test_graident_accumulation", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: BERT_test_graident_accumulation\n results: []\n---\n\n\n\n# BERT_test_graident_accumulation\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.3780\n- Accuracy: 0.6384\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 188 | 0.9354 | 0.5987 |\n| No log | 2.0 | 376 | 0.9827 | 0.6208 |\n| 0.7728 | 3.0 | 564 | 1.1462 | 0.6298 |\n| 0.7728 | 4.0 | 752 | 1.3019 | 0.6323 |\n| 0.7728 | 5.0 | 940 | 1.3780 | 0.6384 |\n\n\n### Framework versions\n\n- Transformers 4.36.0\n- Pytorch 2.1.1+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "BrianHsu/BERT_test_graident_accumulation", "base_model_relation": "base" }, { "model_id": "BrianHsu/BERT_test_graident_accumulation_test2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: BERT_test_graident_accumulation_test2\n results: []\n---\n\n\n\n# BERT_test_graident_accumulation_test2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.0296\n- Accuracy: 0.6379\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following 
hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 188 | 0.9392 | 0.5937 |\n| No log | 2.0 | 376 | 0.9506 | 0.6354 |\n| 0.7706 | 3.0 | 564 | 1.0296 | 0.6379 |\n\n\n### Framework versions\n\n- Transformers 4.36.0\n- Pytorch 2.1.1+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "BrianHsu/BERT_test_graident_accumulation_test2", "base_model_relation": "base" }, { "model_id": "BrianHsu/BERT_test_graident_accumulation_test3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: BERT_test_graident_accumulation_test3\n results: []\n---\n\n\n\n# BERT_test_graident_accumulation_test3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.0101\n- Accuracy: 0.6102\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 94 | 0.9398 | 0.6007 |\n| No log | 2.0 | 188 | 0.9191 | 0.6183 |\n| No log | 3.0 | 282 | 1.0101 | 0.6102 |\n\n\n### Framework versions\n\n- Transformers 4.36.0\n- Pytorch 2.1.1+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "BrianHsu/BERT_test_graident_accumulation_test3", "base_model_relation": "base" }, { "model_id": "BrianHsu/BERT_test_graident_accumulation_test4", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: BERT_test_graident_accumulation_test4\n results: []\n---\n\n\n\n# BERT_test_graident_accumulation_test4\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.1752\n- Accuracy: 0.5781\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information 
needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 116 | 1.0083 | 0.5586 |\n| No log | 1.99 | 232 | 1.0274 | 0.5913 |\n| No log | 2.99 | 348 | 1.1752 | 0.5781 |\n\n\n### Framework versions\n\n- Transformers 4.36.0\n- Pytorch 2.1.1+cu118\n- Datasets 2.15.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "BrianHsu/BERT_test_graident_accumulation_test4", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_new_inputs", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_new_inputs\n results: []\n---\n\n\n\n# AIYIYA/my_new_inputs\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 2.4582\n- Validation Loss: 2.5642\n- Train Accuracy: 0.2812\n- Epoch: 4\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 45, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 2.5554 | 2.6041 | 0.2188 | 0 |\n| 2.4711 | 2.5642 | 0.2812 | 1 |\n| 2.4489 | 2.5642 | 0.2812 | 2 |\n| 2.4357 | 2.5642 | 0.2812 | 3 |\n| 2.4582 | 2.5642 | 0.2812 | 4 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 2.16.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_new_inputs", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_new_inputs1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- 
name: AIYIYA/my_new_inputs1\n results: []\n---\n\n\n\n# AIYIYA/my_new_inputs1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 1.6115\n- Validation Loss: 1.7513\n- Train Accuracy: 0.7217\n- Epoch: 4\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 80, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 2.8547 | 2.5914 | 0.4261 | 0 |\n| 2.3539 | 2.2365 | 0.6 | 1 |\n| 2.0114 | 1.9683 | 0.7043 | 2 |\n| 1.7522 | 1.8043 | 0.7217 | 3 |\n| 1.6115 | 1.7513 | 0.7217 | 4 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 2.16.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_new_inputs1", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_new_login", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_new_login\n results: []\n---\n\n\n\n# AIYIYA/my_new_login\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.6631\n- Validation Loss: 0.6750\n- Train Accuracy: 0.6522\n- Epoch: 3\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 20, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 0.8130 | 
0.6869 | 0.5652 | 0 |\n| 0.6887 | 0.6837 | 0.6522 | 1 |\n| 0.6891 | 0.6828 | 0.8261 | 2 |\n| 0.6631 | 0.6750 | 0.6522 | 3 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 2.16.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_new_login", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_new_login1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_new_login1\n results: []\n---\n\n\n\n# AIYIYA/my_new_login1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.1897\n- Validation Loss: 0.2673\n- Train Accuracy: 0.9143\n- Epoch: 3\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 30, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 0.6310 | 0.5418 | 0.8 | 0 |\n| 0.4427 | 0.3487 | 0.9143 | 1 |\n| 0.2980 | 0.2561 | 0.9429 | 2 |\n| 0.1897 | 0.2673 | 0.9143 | 3 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 2.16.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_new_login1", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_new_login2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_new_login2\n results: []\n---\n\n\n\n# AIYIYA/my_new_login2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.4844\n- Validation Loss: 0.4186\n- Train Accuracy: 0.8310\n- Epoch: 1\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 
'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 45, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 0.6231 | 0.5189 | 0.7324 | 0 |\n| 0.4844 | 0.4186 | 0.8310 | 1 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 2.16.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_new_login2", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_new_login3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_new_login3\n results: []\n---\n\n\n\n# AIYIYA/my_new_login3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.3955\n- Validation Loss: 0.4711\n- Train Accuracy: 0.8451\n- Epoch: 2\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 0.6845 | 0.6333 | 0.6056 | 0 |\n| 0.5328 | 0.5185 | 0.8310 | 1 |\n| 0.3955 | 0.4711 | 0.8451 | 2 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 2.16.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_new_login3", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_new_login4", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_new_login4\n results: 
[]\n---\n\n\n\n# AIYIYA/my_new_login4\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.3002\n- Validation Loss: 0.3570\n- Train Accuracy: 0.8732\n- Epoch: 2\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 0.5716 | 0.4986 | 0.7887 | 0 |\n| 0.3840 | 0.4054 | 0.8451 | 1 |\n| 0.3002 | 0.3570 | 0.8732 | 2 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 2.16.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_new_login4", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_new_inp1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_new_inp1\n results: []\n---\n\n\n\n# AIYIYA/my_new_inp1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.9759\n- Validation Loss: 1.0548\n- Train Accuracy: 0.8\n- Epoch: 0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 125, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 0.9759 | 1.0548 | 0.8 | 0 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 
2.16.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_new_inp1", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_new_in2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_new_in2\n results: []\n---\n\n\n\n# AIYIYA/my_new_in2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 2.7292\n- Validation Loss: 2.5689\n- Train Accuracy: 0.4103\n- Epoch: 0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 60, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 2.7292 | 2.5689 | 0.4103 | 0 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 2.16.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_new_in2", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_new_in3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_new_in3\n results: []\n---\n\n\n\n# AIYIYA/my_new_in3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 1.5359\n- Validation Loss: 1.3045\n- Train Accuracy: 0.7692\n- Epoch: 2\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': 
{'initial_learning_rate': 2e-05, 'decay_steps': 100, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Validation Loss | Train Accuracy | Epoch |\n|:----------:|:---------------:|:--------------:|:-----:|\n| 2.7060 | 2.2362 | 0.6 | 0 |\n| 2.0231 | 1.6742 | 0.7436 | 1 |\n| 1.5359 | 1.3045 | 0.7692 | 2 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 2.16.0\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "AIYIYA/my_new_in3", "base_model_relation": "base" }, { "model_id": "Ghunghru/Misinformation-Covid-bert-base-chinese", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- f1\nmodel-index:\n- name: Misinformation-Covid-bert-base-chinese\n results: []\n---\n\n\n\n# Misinformation-Covid-bert-base-chinese\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6165\n- F1: 0.4706\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | F1 |\n|:-------------:|:-----:|:----:|:---------------:|:------:|\n| 0.6722 | 1.0 | 189 | 0.6155 | 0.0 |\n| 0.6611 | 2.0 | 378 | 0.5880 | 0.2979 |\n| 0.6133 | 3.0 | 567 | 0.5847 | 0.2727 |\n| 0.6343 | 4.0 | 756 | 0.5573 | 0.4151 |\n| 0.6557 | 5.0 | 945 | 0.5704 | 0.4444 |\n| 0.5996 | 6.0 | 1134 | 0.6545 | 0.3750 |\n| 0.6239 | 7.0 | 1323 | 0.6037 | 0.4407 |\n| 0.6089 | 8.0 | 1512 | 0.6145 | 0.4590 |\n| 0.555 | 9.0 | 1701 | 0.6273 | 0.4746 |\n| 0.5281 | 10.0 | 1890 | 0.6165 | 0.4706 |\n\n\n### Framework versions\n\n- Transformers 4.32.1\n- Pytorch 2.1.2\n- Datasets 2.12.0\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Ghunghru/Misinformation-Covid-bert-base-chinese", "base_model_relation": "base" }, { "model_id": "Ghunghru/Misinformation-Covid-LowLearningRatebert-base-chinese", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- f1\nmodel-index:\n- name: Misinformation-Covid-LowLearningRatebert-base-chinese\n results: []\n---\n\n\n\n# Misinformation-Covid-LowLearningRatebert-base-chinese\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation 
set:\n- Loss: 0.5999\n- F1: 0.2128\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 50\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | F1 |\n|:-------------:|:-----:|:----:|:---------------:|:------:|\n| 0.6765 | 1.0 | 189 | 0.6464 | 0.0 |\n| 0.6809 | 2.0 | 378 | 0.6449 | 0.0 |\n| 0.6734 | 3.0 | 567 | 0.6651 | 0.0 |\n| 0.6827 | 4.0 | 756 | 0.6684 | 0.0 |\n| 0.7095 | 5.0 | 945 | 0.6532 | 0.0 |\n| 0.7 | 6.0 | 1134 | 0.6646 | 0.0 |\n| 0.7192 | 7.0 | 1323 | 0.6497 | 0.0 |\n| 0.6877 | 8.0 | 1512 | 0.6446 | 0.0 |\n| 0.6831 | 9.0 | 1701 | 0.6305 | 0.0571 |\n| 0.6633 | 10.0 | 1890 | 0.6203 | 0.1622 |\n| 0.6668 | 11.0 | 2079 | 0.6219 | 0.1622 |\n| 0.6482 | 12.0 | 2268 | 0.6242 | 0.1111 |\n| 0.6543 | 13.0 | 2457 | 0.6117 | 0.15 |\n| 0.6492 | 14.0 | 2646 | 0.6236 | 0.1622 |\n| 0.6624 | 15.0 | 2835 | 0.6233 | 0.1622 |\n| 0.6525 | 16.0 | 3024 | 0.6134 | 0.15 |\n| 0.6466 | 17.0 | 3213 | 0.6118 | 0.1905 |\n| 0.6406 | 18.0 | 3402 | 0.6191 | 0.15 |\n| 0.6479 | 19.0 | 3591 | 0.6216 | 0.1538 |\n| 0.6488 | 20.0 | 3780 | 0.6076 | 0.2128 |\n| 0.6352 | 21.0 | 3969 | 0.6062 | 0.2174 |\n| 0.6213 | 22.0 | 4158 | 0.6042 | 0.2174 |\n| 0.6285 | 23.0 | 4347 | 0.6100 | 0.2326 |\n| 0.6298 | 24.0 | 4536 | 0.6076 | 0.2128 |\n| 0.6473 | 25.0 | 4725 | 0.6058 | 0.2128 |\n| 0.5972 | 26.0 | 4914 | 0.6065 | 0.2222 |\n| 0.6118 | 27.0 | 5103 | 0.6001 | 0.25 |\n| 0.6116 | 28.0 | 5292 | 0.6059 | 0.2128 |\n| 0.6289 | 29.0 | 5481 | 0.5992 | 0.25 |\n| 0.5932 | 30.0 | 5670 | 0.6006 | 0.25 |\n| 0.6076 | 31.0 | 5859 | 0.6009 | 0.2128 |\n| 0.6033 | 32.0 | 6048 | 0.6082 | 0.2128 |\n| 0.6235 | 33.0 | 6237 | 0.6023 | 0.2128 |\n| 0.6237 | 34.0 | 6426 | 0.6079 | 0.2222 |\n| 0.6176 | 35.0 | 6615 | 0.6081 | 0.2222 |\n| 0.646 | 36.0 | 6804 | 0.6019 | 0.2128 |\n| 0.6233 | 37.0 | 6993 | 0.6020 | 0.2128 |\n| 0.6004 | 38.0 | 7182 | 0.6040 | 0.2174 |\n| 0.6159 | 39.0 | 7371 | 0.5963 | 0.2449 |\n| 0.5747 | 40.0 | 7560 | 0.6011 | 0.2174 |\n| 0.6216 | 41.0 | 7749 | 0.5954 | 0.2449 |\n| 0.5893 | 42.0 | 7938 | 0.5974 | 0.2083 |\n| 0.5887 | 43.0 | 8127 | 0.5993 | 0.2128 |\n| 0.5756 | 44.0 | 8316 | 0.5993 | 0.2128 |\n| 0.6204 | 45.0 | 8505 | 0.5982 | 0.2083 |\n| 0.584 | 46.0 | 8694 | 0.5966 | 0.2449 |\n| 0.5809 | 47.0 | 8883 | 0.5989 | 0.2083 |\n| 0.5873 | 48.0 | 9072 | 0.6002 | 0.2128 |\n| 0.5999 | 49.0 | 9261 | 0.6001 | 0.2128 |\n| 0.5888 | 50.0 | 9450 | 0.5999 | 0.2128 |\n\n\n### Framework versions\n\n- Transformers 4.32.1\n- Pytorch 2.1.2\n- Datasets 2.12.0\n- Tokenizers 0.13.3\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Ghunghru/Misinformation-Covid-LowLearningRatebert-base-chinese", "base_model_relation": "base" }, { "model_id": "chriswu88/bert_ner_model", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: bert_ner_model\n results: 
[]\n---\n\n\n\n# bert_ner_model\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2389\n- Precision: 0.7676\n- Recall: 0.7899\n- F1: 0.7786\n- Accuracy: 0.9226\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| 0.3376 | 1.0 | 2539 | 0.2704 | 0.7326 | 0.7425 | 0.7375 | 0.9113 |\n| 0.1986 | 2.0 | 5078 | 0.2389 | 0.7676 | 0.7899 | 0.7786 | 0.9226 |\n\n\n### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 2.1.0+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "chriswu88/bert_ner_model", "base_model_relation": "base" }, { "model_id": "wzChen/my_awesome_model_text_cls", "gated": "False", "card": "---\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: my_awesome_model_text_cls\n results: []\n---\n\n\n\n# my_awesome_model_text_cls\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2101\n- Accuracy: 0.945\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| 0.2822 | 1.0 | 600 | 0.2034 | 0.9333 |\n| 0.1637 | 2.0 | 1200 | 0.2101 | 0.945 |\n\n\n### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "wzChen/my_awesome_model_text_cls", "base_model_relation": "base" }, { "model_id": "H336104/NERBorder", "gated": "False", "card": "---\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\ndatasets:\n- generator\nmetrics:\n- 
precision\n- recall\n- f1\nmodel-index:\n- name: NERBorder\n results:\n - task:\n name: Token Classification\n type: token-classification\n dataset:\n name: generator\n type: generator\n config: default\n split: train\n args: default\n metrics:\n - name: Precision\n type: precision\n value: 0.901610712050607\n - name: Recall\n type: recall\n value: 0.8982985303950894\n - name: F1\n type: f1\n value: 0.8999515736949341\n---\n\n\n\n# NERBorder\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on the generator dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5195\n- Precision: 0.9016\n- Recall: 0.8983\n- F1: 0.9000\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|\n| 0.2099 | 1.0 | 416 | 0.1940 | 0.8281 | 0.8152 | 0.8216 |\n| 0.1658 | 2.0 | 832 | 0.1799 | 0.8464 | 0.8590 | 0.8527 |\n| 0.1276 | 3.0 | 1248 | 0.1821 | 0.8795 | 0.8639 | 0.8716 |\n| 0.1076 | 4.0 | 1664 | 0.1961 | 0.8903 | 0.8788 | 0.8845 |\n| 0.0792 | 5.0 | 2080 | 0.2277 | 0.8787 | 0.8869 | 0.8828 |\n| 0.054 | 6.0 | 2496 | 0.2395 | 0.9084 | 0.8701 | 0.8888 |\n| 0.0433 | 7.0 | 2912 | 0.2991 | 0.8999 | 0.8915 | 0.8957 |\n| 0.0288 | 8.0 | 3328 | 0.3374 | 0.8919 | 0.8935 | 0.8927 |\n| 0.022 | 9.0 | 3744 | 0.3752 | 0.9054 | 0.8921 | 0.8987 |\n| 0.0211 | 10.0 | 4160 | 0.4105 | 0.8952 | 0.8985 | 0.8968 |\n| 0.0147 | 11.0 | 4576 | 0.4084 | 0.9013 | 0.9004 | 0.9009 |\n| 0.0095 | 12.0 | 4992 | 0.4542 | 0.9047 | 0.8952 | 0.8999 |\n| 0.01 | 13.0 | 5408 | 0.4516 | 0.9086 | 0.8896 | 0.8990 |\n| 0.0087 | 14.0 | 5824 | 0.4521 | 0.9025 | 0.8935 | 0.8980 |\n| 0.0069 | 15.0 | 6240 | 0.4878 | 0.9034 | 0.9022 | 0.9028 |\n| 0.0042 | 16.0 | 6656 | 0.5097 | 0.9021 | 0.8997 | 0.9009 |\n| 0.006 | 17.0 | 7072 | 0.5195 | 0.9054 | 0.9008 | 0.9031 |\n| 0.0043 | 18.0 | 7488 | 0.5032 | 0.9009 | 0.8977 | 0.8993 |\n| 0.0029 | 19.0 | 7904 | 0.5155 | 0.9003 | 0.8962 | 0.8983 |\n| 0.0034 | 20.0 | 8320 | 0.5195 | 0.9016 | 0.8983 | 0.9000 |\n\n\n### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.0.1\n- Datasets 2.16.1\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "H336104/NERBorder", "base_model_relation": "base" }, { "model_id": "Yangkt/test-trainer", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: test-trainer\n results: []\n---\n\n\n\n# test-trainer\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4301\n\n## Model description\n\nMore information needed\n\n## 
Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| No log | 1.0 | 94 | 0.4050 |\n| No log | 2.0 | 188 | 0.2719 |\n| No log | 3.0 | 282 | 0.4301 |\n\n\n### Framework versions\n\n- Transformers 4.39.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Yangkt/test-trainer", "base_model_relation": "base" }, { "model_id": "sanxialiuzhan/bert-base-chinese-ner", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: bert-base-chinese-ner\n results: []\n---\n\n\n\n# bert-base-chinese-ner\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0378\n- Precision: 0.9227\n- Recall: 0.9195\n- F1: 0.9211\n- Accuracy: 0.9910\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| 0.0839 | 1.0 | 5796 | 0.0400 | 0.8999 | 0.8866 | 0.8932 | 0.9891 |\n| 0.0266 | 2.0 | 11592 | 0.0378 | 0.9227 | 0.9195 | 0.9211 | 0.9910 |\n| 0.0124 | 3.0 | 17388 | 0.0411 | 0.9361 | 0.9237 | 0.9299 | 0.9919 |\n\n\n### Framework versions\n\n- Transformers 4.39.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "sanxialiuzhan/bert-base-chinese-ner", "base_model_relation": "base" }, { "model_id": "karinegabsschon/classifier_adapter", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\n- precision\n- recall\n- f1\nmodel-index:\n- name: classifier_adapter\n results: []\n---\n\n\n\n# classifier_adapter\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the 
following results on the evaluation set:\n- Loss: 0.0386\n- Accuracy: 0.9875\n- Precision: 0.8841\n- Recall: 0.7947\n- F1: 0.8283\n- Ap: 0.8850\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 12\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Ap |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|\n| No log | 0.38 | 100 | 0.1590 | 0.9571 | 0.0 | 0.0 | 0.0 | 0.1046 |\n| No log | 0.75 | 200 | 0.1578 | 0.9571 | 0.0 | 0.0 | 0.0 | 0.1808 |\n| No log | 1.13 | 300 | 0.1185 | 0.9653 | 0.0899 | 0.0599 | 0.0680 | 0.4391 |\n| No log | 1.51 | 400 | 0.0898 | 0.9724 | 0.2199 | 0.1409 | 0.1617 | 0.6479 |\n| 0.1405 | 1.89 | 500 | 0.0774 | 0.9750 | 0.3319 | 0.2273 | 0.2575 | 0.7417 |\n| 0.1405 | 2.26 | 600 | 0.0683 | 0.9771 | 0.4118 | 0.3002 | 0.3294 | 0.7791 |\n| 0.1405 | 2.64 | 700 | 0.0616 | 0.9804 | 0.6207 | 0.4336 | 0.4810 | 0.8187 |\n| 0.1405 | 3.02 | 800 | 0.0556 | 0.9821 | 0.7210 | 0.4875 | 0.5435 | 0.8380 |\n| 0.1405 | 3.4 | 900 | 0.0519 | 0.9830 | 0.7329 | 0.5224 | 0.5839 | 0.8566 |\n| 0.0598 | 3.77 | 1000 | 0.0486 | 0.9846 | 0.7818 | 0.6063 | 0.6615 | 0.8629 |\n| 0.0598 | 4.15 | 1100 | 0.0469 | 0.9853 | 0.8223 | 0.6807 | 0.7248 | 0.8633 |\n| 0.0598 | 4.53 | 1200 | 0.0457 | 0.9856 | 0.8521 | 0.7235 | 0.7663 | 0.8666 |\n| 0.0598 | 4.91 | 1300 | 0.0439 | 0.9859 | 0.8436 | 0.6955 | 0.7435 | 0.8753 |\n| 0.0598 | 5.28 | 1400 | 0.0424 | 0.9862 | 0.8715 | 0.6964 | 0.7496 | 0.8739 |\n| 0.0399 | 5.66 | 1500 | 0.0415 | 0.9869 | 0.8695 | 0.7621 | 0.7994 | 0.8772 |\n| 0.0399 | 6.04 | 1600 | 0.0416 | 0.9865 | 0.8700 | 0.7670 | 0.8039 | 0.8853 |\n| 0.0399 | 6.42 | 1700 | 0.0401 | 0.9871 | 0.8687 | 0.7686 | 0.8047 | 0.8846 |\n| 0.0399 | 6.79 | 1800 | 0.0405 | 0.9867 | 0.8734 | 0.7851 | 0.8167 | 0.8848 |\n| 0.0399 | 7.17 | 1900 | 0.0410 | 0.9865 | 0.8600 | 0.7708 | 0.8057 | 0.8770 |\n| 0.0315 | 7.55 | 2000 | 0.0393 | 0.9873 | 0.8869 | 0.7718 | 0.8158 | 0.8819 |\n| 0.0315 | 7.92 | 2100 | 0.0385 | 0.9871 | 0.8747 | 0.7861 | 0.8196 | 0.8856 |\n| 0.0315 | 8.3 | 2200 | 0.0386 | 0.9877 | 0.8863 | 0.7856 | 0.8227 | 0.8857 |\n| 0.0315 | 8.68 | 2300 | 0.0390 | 0.9869 | 0.8695 | 0.7949 | 0.8221 | 0.8830 |\n| 0.0315 | 9.06 | 2400 | 0.0391 | 0.9872 | 0.8685 | 0.8081 | 0.8311 | 0.8830 |\n| 0.026 | 9.43 | 2500 | 0.0386 | 0.9875 | 0.8841 | 0.7947 | 0.8283 | 0.8850 |\n| 0.026 | 9.81 | 2600 | 0.0390 | 0.9871 | 0.8615 | 0.8064 | 0.8264 | 0.8840 |\n| 0.026 | 10.19 | 2700 | 0.0386 | 0.9873 | 0.8689 | 0.8023 | 0.8264 | 0.8859 |\n| 0.026 | 10.57 | 2800 | 0.0386 | 0.9873 | 0.8737 | 0.7986 | 0.8265 | 0.8860 |\n\n\n### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.2.1+cu121\n- Tokenizers 0.15.2\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "karinegabsschon/classifier_adapter", "base_model_relation": "base" }, { "model_id": 
"Extrabass/test_trainer", "gated": "False", "card": "---\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: test_trainer\n results: []\n---\n\n\n\n# test_trainer\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0253\n- Accuracy: 0.9973\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 214 | 0.0540 | 0.9905 |\n| No log | 2.0 | 428 | 0.0606 | 0.9932 |\n| 0.0648 | 3.0 | 642 | 0.0253 | 0.9973 |\n\n\n### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Extrabass/test_trainer", "base_model_relation": "base" }, { "model_id": "Extrabass/checkpoint", "gated": "False", "card": "---\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: checkpoint\n results: []\n---\n\n\n\n# checkpoint\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0022\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 123 | 0.0089 | 1.0 |\n| No log | 2.0 | 246 | 0.0028 | 1.0 |\n| No log | 3.0 | 369 | 0.0022 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Extrabass/checkpoint", "base_model_relation": "base" }, { "model_id": "lynn610/bert-finetuned-ner", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- 
generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: bert-finetuned-ner\n results: []\n---\n\n\n\n# bert-finetuned-ner\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1158\n- Precision: 0.7635\n- Recall: 0.7577\n- F1: 0.7606\n- Accuracy: 0.9626\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| 0.1101 | 1.0 | 1875 | 0.1007 | 0.7357 | 0.7458 | 0.7407 | 0.9610 |\n| 0.0796 | 2.0 | 3750 | 0.1003 | 0.76 | 0.7530 | 0.7565 | 0.9627 |\n| 0.0538 | 3.0 | 5625 | 0.1158 | 0.7635 | 0.7577 | 0.7606 | 0.9626 |\n\n\n### Framework versions\n\n- Transformers 4.40.2\n- Pytorch 2.3.0\n- Datasets 2.18.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "lynn610/bert-finetuned-ner", "base_model_relation": "base" }, { "model_id": "thanhtctv/results", "gated": "False", "card": "---\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: results\n results: []\n---\n\n\n\n# results\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1910\n- Accuracy: 0.9265\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 12\n- eval_batch_size: 12\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| 0.2662 | 1.0 | 1029 | 0.1910 | 0.9265 |\n\n\n### Framework versions\n\n- Transformers 4.41.1\n- Pytorch 1.10.1+cu111\n- Datasets 2.19.1\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "thanhtctv/results", "base_model_relation": "base" }, { "model_id": "bibibobo777/my_awesome_bert_qa_model", "gated": "False", "card": "---\nbase_model: 
bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: my_awesome_bert_qa_model\n results: []\n---\n\n\n\n# my_awesome_bert_qa_model\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| No log | 1.0 | 453 | 0.2755 |\n\n\n### Framework versions\n\n- Transformers 4.41.0.dev0\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.1\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "bibibobo777/my_awesome_bert_qa_model", "base_model_relation": "base" }, { "model_id": "Mattis0525/bert-base-chinese-finetuned-imdb", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: Mattis0525/bert-base-chinese-finetuned-imdb\n results: []\n---\n\n\n\n# Mattis0525/bert-base-chinese-finetuned-imdb\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 1.4162\n- Validation Loss: 1.1320\n- Epoch: 0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -844, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}\n- training_precision: mixed_float16\n\n### Training results\n\n| Train Loss | Validation Loss | Epoch |\n|:----------:|:---------------:|:-----:|\n| 1.4162 | 1.1320 | 0 |\n\n\n### Framework versions\n\n- Transformers 4.41.0\n- TensorFlow 2.15.0\n- Datasets 2.19.1\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Mattis0525/bert-base-chinese-finetuned-imdb", "base_model_relation": "base" }, { "model_id": 
"Mattis0525/bert-base-chinese-finetuned-tcfd", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: Mattis0525/bert-base-chinese-finetuned-tcfd\n results: []\n---\n\n\n\n# Mattis0525/bert-base-chinese-finetuned-tcfd\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.6502\n- Train Accuracy: 0.0591\n- Validation Loss: 0.6504\n- Validation Accuracy: 0.0591\n- Epoch: 9\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |\n|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|\n| 0.9480 | 0.0555 | 0.8742 | 0.0566 | 0 |\n| 0.8735 | 0.0567 | 0.7660 | 0.0589 | 1 |\n| 0.7694 | 0.0574 | 0.7093 | 0.0584 | 2 |\n| 0.7190 | 0.0588 | 0.6563 | 0.0604 | 3 |\n| 0.6720 | 0.0592 | 0.6636 | 0.0601 | 4 |\n| 0.6479 | 0.0596 | 0.6639 | 0.0602 | 5 |\n| 0.6446 | 0.0598 | 0.6266 | 0.0614 | 6 |\n| 0.6257 | 0.0602 | 0.6393 | 0.0609 | 7 |\n| 0.6534 | 0.0590 | 0.6301 | 0.0588 | 8 |\n| 0.6502 | 0.0591 | 0.6504 | 0.0591 | 9 |\n\n\n### Framework versions\n\n- Transformers 4.41.1\n- TensorFlow 2.15.0\n- Datasets 2.19.1\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Mattis0525/bert-base-chinese-finetuned-tcfd", "base_model_relation": "base" }, { "model_id": "imagine0711/bert-base-chinese-finetuned-tcfd", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: imagine0711/bert-base-chinese-finetuned-tcfd\n results: []\n---\n\n\n\n# imagine0711/bert-base-chinese-finetuned-tcfd\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.6361\n- Train Accuracy: 0.0595\n- Validation Loss: 0.6676\n- Validation Accuracy: 0.0605\n- Epoch: 7\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: 
{'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -800, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |\n|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|\n| 0.9501 | 0.0559 | 0.8560 | 0.0569 | 0 |\n| 0.8356 | 0.0571 | 0.7513 | 0.0585 | 1 |\n| 0.7771 | 0.0584 | 0.7556 | 0.0602 | 2 |\n| 0.6974 | 0.0590 | 0.6988 | 0.0589 | 3 |\n| 0.6641 | 0.0599 | 0.5843 | 0.0609 | 4 |\n| 0.6423 | 0.0599 | 0.6116 | 0.0605 | 5 |\n| 0.6540 | 0.0596 | 0.6470 | 0.0605 | 6 |\n| 0.6361 | 0.0595 | 0.6676 | 0.0605 | 7 |\n\n\n### Framework versions\n\n- Transformers 4.41.1\n- TensorFlow 2.15.0\n- Datasets 2.19.1\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "imagine0711/bert-base-chinese-finetuned-tcfd", "base_model_relation": "base" }, { "model_id": "Welsey/overlaying", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: overlaying\n results: []\n---\n\n\n\n# overlaying\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.0542\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| No log | 1.0 | 3 | 0.8846 |\n| No log | 2.0 | 6 | 1.0542 |\n\n\n### Framework versions\n\n- Transformers 4.41.2\n- Pytorch 2.1.0\n- Datasets 2.19.2\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Welsey/overlaying", "base_model_relation": "base" }, { "model_id": "ivanxia1988/bert_tnew_cls", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert_tnew_cls\n results: []\n---\n\n\n\n# bert_tnew_cls\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt 
achieves the following results on the evaluation set:\n- Loss: 1.6852\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| No log | 1.5625 | 50 | 1.6587 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "ivanxia1988/bert_tnew_cls", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\n- climate\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-1\n results: []\ndatasets:\n- hw2942/climate-unrelated_0-related_1\nlanguage:\n- zh\npipeline_tag: text-classification\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3132\n- Accuracy: 0.95\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.1263 | 0.97 |\n| No log | 2.0 | 350 | 0.2586 | 0.95 |\n| 0.0616 | 3.0 | 525 | 0.0913 | 0.99 |\n| 0.0616 | 4.0 | 700 | 0.1558 | 0.98 |\n| 0.0616 | 5.0 | 875 | 0.3458 | 0.94 |\n| 0.007 | 6.0 | 1050 | 0.3482 | 0.94 |\n| 0.007 | 7.0 | 1225 | 0.2984 | 0.95 |\n| 0.007 | 8.0 | 1400 | 0.3079 | 0.95 |\n| 0.0 | 9.0 | 1575 | 0.3121 | 0.95 |\n| 0.0 | 10.0 | 1750 | 0.3132 | 0.95 |\n\n\n### Framework versions\n\n- Transformers 4.41.2\n- Pytorch 2.3.0+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-1", "gated": "False", "card": "---\nbase_model: 
bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\n- climate\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-1\n results: []\ndatasets:\n- hw2942/climate-risk_0-opportunity_1\nlanguage:\n- zh\npipeline_tag: text-classification\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0427\n- Accuracy: 0.99\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.1890 | 0.97 |\n| No log | 2.0 | 226 | 0.0226 | 0.99 |\n| No log | 3.0 | 339 | 0.0335 | 0.99 |\n| No log | 4.0 | 452 | 0.0342 | 0.99 |\n| 0.0586 | 5.0 | 565 | 0.0375 | 0.99 |\n| 0.0586 | 6.0 | 678 | 0.0397 | 0.99 |\n| 0.0586 | 7.0 | 791 | 0.0409 | 0.99 |\n| 0.0586 | 8.0 | 904 | 0.0416 | 0.99 |\n| 0.0001 | 9.0 | 1017 | 0.0426 | 0.99 |\n| 0.0001 | 10.0 | 1130 | 0.0427 | 0.99 |\n\n\n### Framework versions\n\n- Transformers 4.41.2\n- Pytorch 2.3.0+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\n- climate\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-1\n results: []\ndatasets:\n- hw2942/climate-transition-risk_0-physical-risk_1\nlanguage:\n- zh\npipeline_tag: text-classification\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0001\n- Accuracy: 1.0\n\n## Model description\n\nClassifies a Chinese sentence as describing either climate transition risk or physical risk\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 
0.6517 | 0.88 |\n| No log | 2.0 | 114 | 0.1019 | 0.98 |\n| No log | 3.0 | 171 | 0.0003 | 1.0 |\n| No log | 4.0 | 228 | 0.0002 | 1.0 |\n| No log | 5.0 | 285 | 0.0001 | 1.0 |\n| No log | 6.0 | 342 | 0.0001 | 1.0 |\n| No log | 7.0 | 399 | 0.0001 | 1.0 |\n| No log | 8.0 | 456 | 0.0001 | 1.0 |\n| 0.0465 | 9.0 | 513 | 0.0001 | 1.0 |\n| 0.0465 | 10.0 | 570 | 0.0001 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.41.2\n- Pytorch 2.3.0+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-v1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-v1\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-v1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2448\n- Accuracy: 0.96\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.1712 | 0.96 |\n| No log | 2.0 | 350 | 0.2678 | 0.95 |\n| 0.0626 | 3.0 | 525 | 0.1881 | 0.97 |\n| 0.0626 | 4.0 | 700 | 0.3598 | 0.95 |\n| 0.0626 | 5.0 | 875 | 0.2826 | 0.96 |\n| 0.0034 | 6.0 | 1050 | 0.1852 | 0.98 |\n| 0.0034 | 7.0 | 1225 | 0.2284 | 0.96 |\n| 0.0034 | 8.0 | 1400 | 0.2399 | 0.96 |\n| 0.0001 | 9.0 | 1575 | 0.2435 | 0.96 |\n| 0.0001 | 10.0 | 1750 | 0.2448 | 0.96 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction-v1", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-v2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-v2\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-v2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5613\n- Accuracy: 
0.96\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.4095 | 0.97 |\n| No log | 2.0 | 350 | 0.4708 | 0.96 |\n| 0.0 | 3.0 | 525 | 0.5164 | 0.96 |\n| 0.0 | 4.0 | 700 | 0.5271 | 0.96 |\n| 0.0 | 5.0 | 875 | 0.5314 | 0.96 |\n| 0.0 | 6.0 | 1050 | 0.5414 | 0.96 |\n| 0.0 | 7.0 | 1225 | 0.5507 | 0.96 |\n| 0.0 | 8.0 | 1400 | 0.5575 | 0.96 |\n| 0.0 | 9.0 | 1575 | 0.5597 | 0.96 |\n| 0.0 | 10.0 | 1750 | 0.5613 | 0.96 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction-v2", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-v3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-v3\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-v3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6772\n- Accuracy: 0.96\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.5534 | 0.96 |\n| No log | 2.0 | 350 | 0.6052 | 0.96 |\n| 0.0 | 3.0 | 525 | 0.6348 | 0.96 |\n| 0.0 | 4.0 | 700 | 0.6463 | 0.96 |\n| 0.0 | 5.0 | 875 | 0.6528 | 0.96 |\n| 0.0 | 6.0 | 1050 | 0.6603 | 0.96 |\n| 0.0 | 7.0 | 1225 | 0.6657 | 0.96 |\n| 0.0 | 8.0 | 1400 | 0.6702 | 0.96 |\n| 0.0 | 9.0 | 1575 | 0.6723 | 0.96 |\n| 0.0 | 10.0 | 1750 | 0.6772 | 0.96 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": 
"hw2942/bert-base-chinese-climate-related-prediction-v3", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-v4", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-v4\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-v4\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7237\n- Accuracy: 0.96\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.6256 | 0.96 |\n| No log | 2.0 | 350 | 0.6534 | 0.96 |\n| 0.0 | 3.0 | 525 | 0.6735 | 0.96 |\n| 0.0 | 4.0 | 700 | 0.6836 | 0.96 |\n| 0.0 | 5.0 | 875 | 0.6903 | 0.96 |\n| 0.0 | 6.0 | 1050 | 0.6959 | 0.96 |\n| 0.0 | 7.0 | 1225 | 0.6998 | 0.96 |\n| 0.0 | 8.0 | 1400 | 0.7032 | 0.96 |\n| 0.0 | 9.0 | 1575 | 0.7047 | 0.96 |\n| 0.0 | 10.0 | 1750 | 0.7237 | 0.96 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction-v4", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-v5", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-v5\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-v5\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7461\n- Accuracy: 0.96\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.6568 | 0.96 |\n| No log | 2.0 | 350 | 0.6748 | 0.96 |\n| 0.0 | 3.0 | 525 | 0.6887 | 0.96 |\n| 0.0 | 4.0 | 700 | 0.6962 | 0.96 |\n| 0.0 | 5.0 | 875 | 0.7014 | 0.96 |\n| 0.0 | 
6.0 | 1050 | 0.7058 | 0.96 |\n| 0.0 | 7.0 | 1225 | 0.7088 | 0.96 |\n| 0.0 | 8.0 | 1400 | 0.7115 | 0.96 |\n| 0.0 | 9.0 | 1575 | 0.7127 | 0.96 |\n| 0.0 | 10.0 | 1750 | 0.7461 | 0.96 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction-v5", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-v6", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-v6\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-v6\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7555\n- Accuracy: 0.96\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.6749 | 0.96 |\n| No log | 2.0 | 350 | 0.6875 | 0.96 |\n| 0.0 | 3.0 | 525 | 0.6980 | 0.96 |\n| 0.0 | 4.0 | 700 | 0.7040 | 0.96 |\n| 0.0 | 5.0 | 875 | 0.7083 | 0.96 |\n| 0.0 | 6.0 | 1050 | 0.7119 | 0.96 |\n| 0.0 | 7.0 | 1225 | 0.7144 | 0.96 |\n| 0.0 | 8.0 | 1400 | 0.7167 | 0.96 |\n| 0.0 | 9.0 | 1575 | 0.7177 | 0.96 |\n| 0.0 | 10.0 | 1750 | 0.7555 | 0.96 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction-v6", "base_model_relation": "base" }, { "model_id": "wsqstar/bert-finetuned-weibo-luobokuaipao", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-finetuned-weibo-luobokuaipao\n results: []\n---\n\n\n\n# bert-finetuned-weibo-luobokuaipao\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.1020\n- Accuracy: 0.5981\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were 
used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 243 | 1.0453 | 0.5519 |\n| No log | 2.0 | 486 | 0.9954 | 0.5796 |\n| 0.9964 | 3.0 | 729 | 1.0374 | 0.6074 |\n| 0.9964 | 4.0 | 972 | 1.0489 | 0.6019 |\n| 0.6111 | 5.0 | 1215 | 1.1020 | 0.5981 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n\n```\n@misc{wang2024recentsurgepublictransportation,\n title={Recent Surge in Public Interest in Transportation: Sentiment Analysis of Baidu Apollo Go Using Weibo Data}, \n author={Shiqi Wang and Zhouye Zhao and Yuhang Xie and Mingchuan Ma and Zirui Chen and Zeyu Wang and Bohao Su and Wenrui Xu and Tianyi Li},\n year={2024},\n eprint={2408.10088},\n archivePrefix={arXiv},\n primaryClass={cs.SI},\n url={https://arxiv.org/abs/2408.10088}, \n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "wsqstar/bert-finetuned-weibo-luobokuaipao", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-vv1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-vv1\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-vv1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2627\n- Accuracy: 0.96\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.1494 | 0.97 |\n| No log | 2.0 | 350 | 0.2169 | 0.96 |\n| 0.0639 | 3.0 | 525 | 0.1340 | 0.97 |\n| 0.0639 | 4.0 | 700 | 0.2034 | 0.96 |\n| 0.0639 | 5.0 | 875 | 0.1037 | 0.99 |\n| 0.0096 | 6.0 | 1050 | 0.2854 | 0.96 |\n| 0.0096 | 7.0 | 1225 | 0.2719 | 0.96 |\n| 0.0096 | 8.0 | 1400 | 0.2659 | 0.96 |\n| 0.0 | 9.0 | 1575 | 0.2640 | 0.96 |\n| 0.0 | 10.0 | 1750 | 0.2627 | 0.96 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": 
"hw2942/bert-base-chinese-climate-related-prediction-vv1", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-vv2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-vv2\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-vv2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1457\n- Accuracy: 0.99\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.1249 | 0.99 |\n| No log | 2.0 | 350 | 0.1316 | 0.99 |\n| 0.0033 | 3.0 | 525 | 0.1358 | 0.99 |\n| 0.0033 | 4.0 | 700 | 0.1388 | 0.99 |\n| 0.0033 | 5.0 | 875 | 0.1410 | 0.99 |\n| 0.0 | 6.0 | 1050 | 0.1426 | 0.99 |\n| 0.0 | 7.0 | 1225 | 0.1439 | 0.99 |\n| 0.0 | 8.0 | 1400 | 0.1449 | 0.99 |\n| 0.0 | 9.0 | 1575 | 0.1454 | 0.99 |\n| 0.0 | 10.0 | 1750 | 0.1457 | 0.99 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction-vv2", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-vv3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-vv3\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-vv3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3641\n- Accuracy: 0.97\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.3403 | 0.97 |\n| No log | 2.0 | 350 | 0.3726 | 0.97 |\n| 0.0 | 3.0 | 525 | 0.3800 | 0.97 |\n| 0.0 | 4.0 | 700 | 0.3857 | 0.97 |\n| 0.0 | 5.0 | 875 | 0.3822 | 
0.97 |\n| 0.0 | 6.0 | 1050 | 0.3839 | 0.97 |\n| 0.0 | 7.0 | 1225 | 0.3877 | 0.97 |\n| 0.0 | 8.0 | 1400 | 0.3910 | 0.97 |\n| 0.0 | 9.0 | 1575 | 0.3640 | 0.97 |\n| 0.0 | 10.0 | 1750 | 0.3641 | 0.97 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction-vv3", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-2\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1965\n- Accuracy: 0.98\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.1499 | 0.96 |\n| No log | 2.0 | 350 | 0.0987 | 0.98 |\n| 0.0638 | 3.0 | 525 | 0.0951 | 0.99 |\n| 0.0638 | 4.0 | 700 | 0.2270 | 0.97 |\n| 0.0638 | 5.0 | 875 | 0.2088 | 0.97 |\n| 0.0061 | 6.0 | 1050 | 0.1855 | 0.98 |\n| 0.0061 | 7.0 | 1225 | 0.1858 | 0.98 |\n| 0.0061 | 8.0 | 1400 | 0.1921 | 0.98 |\n| 0.0001 | 9.0 | 1575 | 0.1958 | 0.98 |\n| 0.0001 | 10.0 | 1750 | 0.1965 | 0.98 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-3\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1286\n- Accuracy: 0.99\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training 
procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.1829 | 0.98 |\n| No log | 2.0 | 350 | 0.1678 | 0.97 |\n| 0.0179 | 3.0 | 525 | 0.1448 | 0.98 |\n| 0.0179 | 4.0 | 700 | 0.1762 | 0.98 |\n| 0.0179 | 5.0 | 875 | 0.1733 | 0.98 |\n| 0.0043 | 6.0 | 1050 | 0.1777 | 0.98 |\n| 0.0043 | 7.0 | 1225 | 0.1259 | 0.99 |\n| 0.0043 | 8.0 | 1400 | 0.1275 | 0.99 |\n| 0.0037 | 9.0 | 1575 | 0.1283 | 0.99 |\n| 0.0037 | 10.0 | 1750 | 0.1286 | 0.99 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-related-prediction-4", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-related-prediction-4\n results: []\n---\n\n\n\n# bert-base-chinese-climate-related-prediction-4\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1847\n- Accuracy: 0.99\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 175 | 0.2234 | 0.98 |\n| No log | 2.0 | 350 | 0.2258 | 0.98 |\n| 0.0 | 3.0 | 525 | 0.2221 | 0.98 |\n| 0.0 | 4.0 | 700 | 0.1800 | 0.99 |\n| 0.0 | 5.0 | 875 | 0.1822 | 0.99 |\n| 0.0 | 6.0 | 1050 | 0.1836 | 0.99 |\n| 0.0 | 7.0 | 1225 | 0.1835 | 0.99 |\n| 0.0 | 8.0 | 1400 | 0.1843 | 0.99 |\n| 0.0 | 9.0 | 1575 | 0.1845 | 0.99 |\n| 0.0 | 10.0 | 1750 | 0.1847 | 0.99 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-related-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v1", "gated": "False", "card": 
"---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-v1\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-v1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0005\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.0621 | 0.99 |\n| No log | 2.0 | 226 | 0.0227 | 0.99 |\n| No log | 3.0 | 339 | 0.0144 | 0.99 |\n| No log | 4.0 | 452 | 0.0617 | 0.99 |\n| 0.0588 | 5.0 | 565 | 0.0074 | 1.0 |\n| 0.0588 | 6.0 | 678 | 0.0026 | 1.0 |\n| 0.0588 | 7.0 | 791 | 0.0020 | 1.0 |\n| 0.0588 | 8.0 | 904 | 0.0006 | 1.0 |\n| 0.0001 | 9.0 | 1017 | 0.0005 | 1.0 |\n| 0.0001 | 10.0 | 1130 | 0.0005 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v1", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-v2\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-v2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0000\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.0000 | 1.0 |\n| No log | 2.0 | 226 | 0.0000 | 1.0 |\n| No log | 3.0 | 339 | 0.0000 | 1.0 |\n| No log | 4.0 | 452 | 0.0000 | 1.0 |\n| 0.0 | 5.0 | 565 | 0.0000 | 1.0 |\n| 0.0 | 6.0 | 678 | 0.0000 | 1.0 |\n| 0.0 | 7.0 | 791 | 0.0000 | 1.0 |\n| 0.0 | 8.0 | 904 | 0.0000 | 1.0 |\n| 0.0 | 9.0 | 1017 | 
0.0000 | 1.0 |\n| 0.0 | 10.0 | 1130 | 0.0000 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v2", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-v3\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-v3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.0000 | 1.0 |\n| No log | 2.0 | 226 | 0.0000 | 1.0 |\n| No log | 3.0 | 339 | 0.0000 | 1.0 |\n| No log | 4.0 | 452 | 0.0000 | 1.0 |\n| 0.0 | 5.0 | 565 | 0.0000 | 1.0 |\n| 0.0 | 6.0 | 678 | 0.0 | 1.0 |\n| 0.0 | 7.0 | 791 | 0.0 | 1.0 |\n| 0.0 | 8.0 | 904 | 0.0 | 1.0 |\n| 0.0 | 9.0 | 1017 | 0.0 | 1.0 |\n| 0.0 | 10.0 | 1130 | 0.0 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v3", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v4", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-v4\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-v4\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 
2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.0000 | 1.0 |\n| No log | 2.0 | 226 | 0.0000 | 1.0 |\n| No log | 3.0 | 339 | 0.0 | 1.0 |\n| No log | 4.0 | 452 | 0.0 | 1.0 |\n| 0.0 | 5.0 | 565 | 0.0 | 1.0 |\n| 0.0 | 6.0 | 678 | 0.0 | 1.0 |\n| 0.0 | 7.0 | 791 | 0.0 | 1.0 |\n| 0.0 | 8.0 | 904 | 0.0 | 1.0 |\n| 0.0 | 9.0 | 1017 | 0.0 | 1.0 |\n| 0.0 | 10.0 | 1130 | 0.0 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-v4", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-vv1\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-vv1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0308\n- Accuracy: 0.99\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.0855 | 0.98 |\n| No log | 2.0 | 226 | 0.0278 | 0.99 |\n| No log | 3.0 | 339 | 0.0765 | 0.99 |\n| No log | 4.0 | 452 | 0.0476 | 0.99 |\n| 0.0494 | 5.0 | 565 | 0.0365 | 0.99 |\n| 0.0494 | 6.0 | 678 | 0.0335 | 0.99 |\n| 0.0494 | 7.0 | 791 | 0.0324 | 0.99 |\n| 0.0494 | 8.0 | 904 | 0.0312 | 0.99 |\n| 0.0001 | 9.0 | 1017 | 0.0308 | 0.99 |\n| 0.0001 | 10.0 | 1130 | 0.0308 | 0.99 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv1", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: 
bert-base-chinese-climate-risk-opportunity-prediction-vv2\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-vv2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0868\n- Accuracy: 0.99\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.1025 | 0.98 |\n| No log | 2.0 | 226 | 0.0124 | 0.99 |\n| No log | 3.0 | 339 | 0.0854 | 0.99 |\n| No log | 4.0 | 452 | 0.0849 | 0.99 |\n| 0.0126 | 5.0 | 565 | 0.0844 | 0.99 |\n| 0.0126 | 6.0 | 678 | 0.0855 | 0.99 |\n| 0.0126 | 7.0 | 791 | 0.0858 | 0.99 |\n| 0.0126 | 8.0 | 904 | 0.0862 | 0.99 |\n| 0.0 | 9.0 | 1017 | 0.0866 | 0.99 |\n| 0.0 | 10.0 | 1130 | 0.0868 | 0.99 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv2", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-vv3\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-vv3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0000\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.1288 | 0.99 |\n| No log | 2.0 | 226 | 0.0000 | 1.0 |\n| No log | 3.0 | 339 | 0.0000 | 1.0 |\n| No log | 4.0 | 452 | 0.0000 | 1.0 |\n| 0.0 | 5.0 | 565 | 0.0000 | 1.0 |\n| 0.0 | 6.0 | 678 | 0.0000 | 1.0 |\n| 0.0 | 7.0 | 791 | 0.0000 | 1.0 |\n| 0.0 | 8.0 | 904 | 0.0000 | 1.0 |\n| 0.0 | 9.0 | 1017 | 0.0000 | 1.0 |\n| 0.0 | 10.0 | 1130 | 0.0000 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 
4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv3", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv4", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-vv4\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-vv4\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.0000 | 1.0 |\n| No log | 2.0 | 226 | 0.0 | 1.0 |\n| No log | 3.0 | 339 | 0.0 | 1.0 |\n| No log | 4.0 | 452 | 0.0 | 1.0 |\n| 0.0 | 5.0 | 565 | 0.0 | 1.0 |\n| 0.0 | 6.0 | 678 | 0.0 | 1.0 |\n| 0.0 | 7.0 | 791 | 0.0 | 1.0 |\n| 0.0 | 8.0 | 904 | 0.0 | 1.0 |\n| 0.0 | 9.0 | 1017 | 0.0 | 1.0 |\n| 0.0 | 10.0 | 1130 | 0.0 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-vv4", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-2\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0001\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and 
epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.1445 | 0.97 |\n| No log | 2.0 | 226 | 0.0308 | 0.99 |\n| No log | 3.0 | 339 | 0.0020 | 1.0 |\n| No log | 4.0 | 452 | 0.0001 | 1.0 |\n| 0.0444 | 5.0 | 565 | 0.0001 | 1.0 |\n| 0.0444 | 6.0 | 678 | 0.0001 | 1.0 |\n| 0.0444 | 7.0 | 791 | 0.0001 | 1.0 |\n| 0.0444 | 8.0 | 904 | 0.0001 | 1.0 |\n| 0.0001 | 9.0 | 1017 | 0.0001 | 1.0 |\n| 0.0001 | 10.0 | 1130 | 0.0001 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-3\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0164\n- Accuracy: 0.99\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.3129 | 0.97 |\n| No log | 2.0 | 226 | 0.0000 | 1.0 |\n| No log | 3.0 | 339 | 0.0296 | 0.99 |\n| No log | 4.0 | 452 | 0.0254 | 0.99 |\n| 0.0171 | 5.0 | 565 | 0.0246 | 0.99 |\n| 0.0171 | 6.0 | 678 | 0.0217 | 0.99 |\n| 0.0171 | 7.0 | 791 | 0.0179 | 0.99 |\n| 0.0171 | 8.0 | 904 | 0.0168 | 0.99 |\n| 0.0 | 9.0 | 1017 | 0.0164 | 0.99 |\n| 0.0 | 10.0 | 1130 | 0.0164 | 0.99 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-4", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-4\n results: 
[]\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-4\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0000\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.1374 | 0.99 |\n| No log | 2.0 | 226 | 0.4965 | 0.95 |\n| No log | 3.0 | 339 | 0.0001 | 1.0 |\n| No log | 4.0 | 452 | 0.0721 | 0.99 |\n| 0.0324 | 5.0 | 565 | 0.0000 | 1.0 |\n| 0.0324 | 6.0 | 678 | 0.0000 | 1.0 |\n| 0.0324 | 7.0 | 791 | 0.0000 | 1.0 |\n| 0.0324 | 8.0 | 904 | 0.0000 | 1.0 |\n| 0.0 | 9.0 | 1017 | 0.0000 | 1.0 |\n| 0.0 | 10.0 | 1130 | 0.0000 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction-5", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-risk-opportunity-prediction-5\n results: []\n---\n\n\n\n# bert-base-chinese-climate-risk-opportunity-prediction-5\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1069\n- Accuracy: 0.99\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 113 | 0.7263 | 0.95 |\n| No log | 2.0 | 226 | 0.0850 | 0.99 |\n| No log | 3.0 | 339 | 0.0935 | 0.99 |\n| No log | 4.0 | 452 | 0.0864 | 0.99 |\n| 0.028 | 5.0 | 565 | 0.0978 | 0.99 |\n| 0.028 | 6.0 | 678 | 0.1020 | 0.99 |\n| 0.028 | 7.0 | 791 | 0.1042 | 0.99 |\n| 0.028 | 8.0 | 904 | 0.1057 | 0.99 |\n| 0.0 | 9.0 | 1017 | 0.1066 | 0.99 |\n| 0.0 | 10.0 | 1130 | 0.1069 | 0.99 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", 
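Note on the hw2942 climate classifiers cataloged above: the cards all record the same recipe, a sequence-classification head on bert-base-chinese trained with the Hugging Face `Trainer` (hence the `generated_from_trainer` tag) for 10 epochs at learning rate 2e-05 and batch size 8. The step counts imply small corpora (113 steps per epoch at batch size 8 is roughly 900 training sentences; 57 steps is roughly 450), and the reported accuracies move in increments of 0.01, consistent with held-out sets of about 100 examples, so the 0.99 to 1.0 scores are best read as small-sample results. A minimal inference sketch follows, assuming the checkpoints are public and keep the default `LABEL_0`/`LABEL_1` mapping, since no card states custom label names:

```python
from transformers import pipeline

# Any of the hw2942 classifier checkpoints listed in this tree can be
# substituted for the repo id below.
clf = pipeline(
    "text-classification",
    model="hw2942/bert-base-chinese-climate-risk-opportunity-prediction-1",
)

# The dataset id (climate-risk_0-opportunity_1) suggests 0 = risk and
# 1 = opportunity; that mapping is an assumption, not stated in the cards.
# Example sentence: "Extreme weather events may disrupt the company's
# supply chain and cause asset losses."
print(clf("极端天气事件可能中断公司的供应链并造成资产损失。"))
# e.g. [{'label': 'LABEL_0', 'score': 0.99}]
```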
"metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-risk-opportunity-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v1", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-v1\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-v1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0001\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.4877 | 0.9 |\n| No log | 2.0 | 114 | 0.0006 | 1.0 |\n| No log | 3.0 | 171 | 0.0003 | 1.0 |\n| No log | 4.0 | 228 | 0.0023 | 1.0 |\n| No log | 5.0 | 285 | 0.0002 | 1.0 |\n| No log | 6.0 | 342 | 0.0001 | 1.0 |\n| No log | 7.0 | 399 | 0.0001 | 1.0 |\n| No log | 8.0 | 456 | 0.0001 | 1.0 |\n| 0.0417 | 9.0 | 513 | 0.0001 | 1.0 |\n| 0.0417 | 10.0 | 570 | 0.0001 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v1", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-v2\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-v2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0001\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with 
betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.0004 | 1.0 |\n| No log | 2.0 | 114 | 0.0002 | 1.0 |\n| No log | 3.0 | 171 | 0.0059 | 1.0 |\n| No log | 4.0 | 228 | 0.1473 | 0.98 |\n| No log | 5.0 | 285 | 0.0001 | 1.0 |\n| No log | 6.0 | 342 | 0.0001 | 1.0 |\n| No log | 7.0 | 399 | 0.0001 | 1.0 |\n| No log | 8.0 | 456 | 0.0001 | 1.0 |\n| 0.0235 | 9.0 | 513 | 0.0001 | 1.0 |\n| 0.0235 | 10.0 | 570 | 0.0001 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v2", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-v3\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-v3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0000\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.0014 | 1.0 |\n| No log | 2.0 | 114 | 0.0001 | 1.0 |\n| No log | 3.0 | 171 | 0.0316 | 0.98 |\n| No log | 4.0 | 228 | 0.0698 | 0.98 |\n| No log | 5.0 | 285 | 0.0006 | 1.0 |\n| No log | 6.0 | 342 | 0.0004 | 1.0 |\n| No log | 7.0 | 399 | 0.0001 | 1.0 |\n| No log | 8.0 | 456 | 0.0000 | 1.0 |\n| 0.0115 | 9.0 | 513 | 0.0000 | 1.0 |\n| 0.0115 | 10.0 | 570 | 0.0000 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v3", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v4", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: 
bert-base-chinese-climate-transition-physical-risk-prediction-v4\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-v4\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0049\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.0001 | 1.0 |\n| No log | 2.0 | 114 | 0.0689 | 0.98 |\n| No log | 3.0 | 171 | 0.3480 | 0.94 |\n| No log | 4.0 | 228 | 0.0012 | 1.0 |\n| No log | 5.0 | 285 | 0.0030 | 1.0 |\n| No log | 6.0 | 342 | 0.0049 | 1.0 |\n| No log | 7.0 | 399 | 0.0049 | 1.0 |\n| No log | 8.0 | 456 | 0.0051 | 1.0 |\n| 0.0141 | 9.0 | 513 | 0.0050 | 1.0 |\n| 0.0141 | 10.0 | 570 | 0.0049 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v4", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v5", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-v5\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-v5\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0005\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.1892 | 0.98 |\n| No log | 2.0 | 114 | 0.0432 | 0.98 |\n| No log | 3.0 | 171 | 0.0001 | 1.0 |\n| No log | 4.0 | 228 | 0.0001 | 1.0 |\n| No log | 5.0 | 285 | 0.0001 | 1.0 |\n| No log | 6.0 | 342 | 0.0000 | 1.0 |\n| No log | 7.0 | 399 | 0.0006 | 1.0 |\n| No log | 8.0 | 456 | 0.0005 | 1.0 |\n| 0.0178 | 9.0 | 513 | 0.0005 | 1.0 |\n| 0.0178 | 10.0 | 570 | 0.0005 | 1.0 
|\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v5", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v6", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-v6\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-v6\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0001\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.0000 | 1.0 |\n| No log | 2.0 | 114 | 0.0000 | 1.0 |\n| No log | 3.0 | 171 | 0.0000 | 1.0 |\n| No log | 4.0 | 228 | 0.0000 | 1.0 |\n| No log | 5.0 | 285 | 0.0004 | 1.0 |\n| No log | 6.0 | 342 | 0.0001 | 1.0 |\n| No log | 7.0 | 399 | 0.0001 | 1.0 |\n| No log | 8.0 | 456 | 0.0001 | 1.0 |\n| 0.0078 | 9.0 | 513 | 0.0001 | 1.0 |\n| 0.0078 | 10.0 | 570 | 0.0001 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v6", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v7", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-v7\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-v7\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0001\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters 
were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.0000 | 1.0 |\n| No log | 2.0 | 114 | 0.0000 | 1.0 |\n| No log | 3.0 | 171 | 0.0000 | 1.0 |\n| No log | 4.0 | 228 | 0.0000 | 1.0 |\n| No log | 5.0 | 285 | 0.0004 | 1.0 |\n| No log | 6.0 | 342 | 0.0008 | 1.0 |\n| No log | 7.0 | 399 | 0.0003 | 1.0 |\n| No log | 8.0 | 456 | 0.0002 | 1.0 |\n| 0.0178 | 9.0 | 513 | 0.0001 | 1.0 |\n| 0.0178 | 10.0 | 570 | 0.0001 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-v7", "base_model_relation": "base" }, { "model_id": "wsqstar/GISchat-weibo-100k-fine-tuned-bert", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: GISchat-weibo-100k-fine-tuned-bert\n results: []\ndatasets:\n- dirtycomputer/weibo_senti_100k\nlanguage:\n- zh\n---\n\n\n\n# GISchat-weibo-100k-fine-tuned-bert\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on [weibo-100k dataset](https://huggingface.co/datasets/dirtycomputer/weibo_senti_100k).\n\nGithub repo: https://github.com/GISChat/Fine-tune-bert \n\nIt achieves the following results on the evaluation set:\n- Loss: 0.0458\n- Accuracy: 0.9867\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 0.08 | 100 | 0.6573 | 0.606 |\n| 0.647 | 0.16 | 200 | 0.2447 | 0.9507 |\n| 0.647 | 0.24 | 300 | 0.0914 | 0.9807 |\n| 0.1276 | 0.32 | 400 | 0.0609 | 0.9843 |\n| 0.1276 | 0.4 | 500 | 0.0607 | 0.9843 |\n| 0.0921 | 0.48 | 600 | 0.1053 | 0.98 |\n| 0.0921 | 0.56 | 700 | 0.0487 | 0.9853 |\n| 0.0885 | 0.64 | 800 | 0.0523 | 0.9853 |\n| 0.0885 | 0.72 | 900 | 0.0484 | 0.986 |\n| 0.0579 | 0.8 | 1000 | 0.0549 | 0.985 |\n| 0.0579 | 0.88 | 1100 | 0.0495 | 0.9867 |\n| 0.0507 | 0.96 | 1200 | 0.0458 | 0.9867 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ 
"google-bert/bert-base-chinese" ], "base_model": "wsqstar/GISchat-weibo-100k-fine-tuned-bert", "base_model_relation": "base" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-2", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-2\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0001\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.5323 | 0.88 |\n| No log | 2.0 | 114 | 0.0074 | 1.0 |\n| No log | 3.0 | 171 | 0.0005 | 1.0 |\n| No log | 4.0 | 228 | 0.0003 | 1.0 |\n| No log | 5.0 | 285 | 0.0002 | 1.0 |\n| No log | 6.0 | 342 | 0.0001 | 1.0 |\n| No log | 7.0 | 399 | 0.0001 | 1.0 |\n| No log | 8.0 | 456 | 0.0001 | 1.0 |\n| 0.0432 | 9.0 | 513 | 0.0001 | 1.0 |\n| 0.0432 | 10.0 | 570 | 0.0001 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-3", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-3\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-3\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0000\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.0002 | 1.0 |\n| 
No log | 2.0 | 114 | 0.1703 | 0.98 |\n| No log | 3.0 | 171 | 0.0001 | 1.0 |\n| No log | 4.0 | 228 | 0.1294 | 0.98 |\n| No log | 5.0 | 285 | 0.0000 | 1.0 |\n| No log | 6.0 | 342 | 0.0000 | 1.0 |\n| No log | 7.0 | 399 | 0.0000 | 1.0 |\n| No log | 8.0 | 456 | 0.0000 | 1.0 |\n| 0.0105 | 9.0 | 513 | 0.0000 | 1.0 |\n| 0.0105 | 10.0 | 570 | 0.0000 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-4", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-4\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-4\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0000\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.0000 | 1.0 |\n| No log | 2.0 | 114 | 0.0002 | 1.0 |\n| No log | 3.0 | 171 | 0.0000 | 1.0 |\n| No log | 4.0 | 228 | 0.0000 | 1.0 |\n| No log | 5.0 | 285 | 0.0000 | 1.0 |\n| No log | 6.0 | 342 | 0.0000 | 1.0 |\n| No log | 7.0 | 399 | 0.0000 | 1.0 |\n| No log | 8.0 | 456 | 0.0000 | 1.0 |\n| 0.0077 | 9.0 | 513 | 0.0000 | 1.0 |\n| 0.0077 | 10.0 | 570 | 0.0000 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-5", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-5\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-5\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves 
the following results on the evaluation set:\n- Loss: 0.0000\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.0000 | 1.0 |\n| No log | 2.0 | 114 | 0.0000 | 1.0 |\n| No log | 3.0 | 171 | 0.0000 | 1.0 |\n| No log | 4.0 | 228 | 0.0000 | 1.0 |\n| No log | 5.0 | 285 | 0.0000 | 1.0 |\n| No log | 6.0 | 342 | 0.0000 | 1.0 |\n| No log | 7.0 | 399 | 0.0000 | 1.0 |\n| No log | 8.0 | 456 | 0.0000 | 1.0 |\n| 0.0113 | 9.0 | 513 | 0.0000 | 1.0 |\n| 0.0113 | 10.0 | 570 | 0.0000 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-6", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-6\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-6\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0000\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.1871 | 0.94 |\n| No log | 2.0 | 114 | 0.0000 | 1.0 |\n| No log | 3.0 | 171 | 0.0002 | 1.0 |\n| No log | 4.0 | 228 | 0.0000 | 1.0 |\n| No log | 5.0 | 285 | 0.0000 | 1.0 |\n| No log | 6.0 | 342 | 0.0000 | 1.0 |\n| No log | 7.0 | 399 | 0.0000 | 1.0 |\n| No log | 8.0 | 456 | 0.0000 | 1.0 |\n| 0.0207 | 9.0 | 513 | 0.0000 | 1.0 |\n| 0.0207 | 10.0 | 570 | 0.0000 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, 
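The hyperparameter block repeated across the hw2942 climate cards above (learning_rate 2e-05, train/eval batch size 8, seed 42, Adam with betas=(0.9,0.999) and epsilon=1e-08, linear schedule, 10 epochs) maps directly onto Hugging Face `TrainingArguments`. A minimal sketch of that configuration, assuming a generic binary sequence-classification setup — the output path, label count, and datasets below are illustrative placeholders, not taken from the cards:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# The recurring hyperparameters from the cards above. adam_beta1/2 and
# adam_epsilon match the Trainer defaults but are spelled out to mirror
# the "Adam with betas=(0.9,0.999) and epsilon=1e-08" line.
args = TrainingArguments(
    output_dir="bert-base-chinese-climate-ft",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",  # one validation row per epoch, as in the tables
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese",
    num_labels=2,  # assumption: binary risk / no-risk labels
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=...,  # the cards leave the dataset unknown
#                   eval_dataset=...)
# trainer.train()
```

At batch size 8, the 57 optimizer steps per epoch in these tables imply a training split of roughly 450–456 examples, and the 0.02 jumps in reported accuracy (1.0 → 0.98 → 0.94) suggest an evaluation split of about 50 examples.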
"total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction", "base_model_relation": "finetune" }, { "model_id": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction-7", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-climate-transition-physical-risk-prediction-7\n results: []\n---\n\n\n\n# bert-base-chinese-climate-transition-physical-risk-prediction-7\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0000\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 1.0 | 57 | 0.5690 | 0.96 |\n| No log | 2.0 | 114 | 0.0033 | 1.0 |\n| No log | 3.0 | 171 | 0.0002 | 1.0 |\n| No log | 4.0 | 228 | 0.1431 | 0.98 |\n| No log | 5.0 | 285 | 0.0000 | 1.0 |\n| No log | 6.0 | 342 | 0.0000 | 1.0 |\n| No log | 7.0 | 399 | 0.0000 | 1.0 |\n| No log | 8.0 | 456 | 0.0000 | 1.0 |\n| 0.0153 | 9.0 | 513 | 0.0000 | 1.0 |\n| 0.0153 | 10.0 | 570 | 0.0000 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hw2942/bert-base-chinese-climate-transition-physical-risk-prediction", "base_model_relation": "finetune" }, { "model_id": "track-AJ/GISchat-weibo-100k-fine-tuned-bert", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nmetrics:\n- accuracy\ntags:\n- generated_from_trainer\nmodel-index:\n- name: GISchat-weibo-100k-fine-tuned-bert\n results: []\n---\n\n\n\n# GISchat-weibo-100k-fine-tuned-bert\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0512\n- Accuracy: 0.9867\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy 
|\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| No log | 0.08 | 100 | 0.6513 | 0.6347 |\n| 0.6142 | 0.16 | 200 | 0.2181 | 0.962 |\n| 0.6142 | 0.24 | 300 | 0.0776 | 0.9847 |\n| 0.1151 | 0.32 | 400 | 0.0886 | 0.9827 |\n| 0.1151 | 0.4 | 500 | 0.0646 | 0.985 |\n| 0.0978 | 0.48 | 600 | 0.0605 | 0.9843 |\n| 0.0978 | 0.56 | 700 | 0.0545 | 0.9863 |\n| 0.089 | 0.64 | 800 | 0.0635 | 0.9857 |\n| 0.089 | 0.72 | 900 | 0.0532 | 0.9863 |\n| 0.0535 | 0.8 | 1000 | 0.0634 | 0.9863 |\n| 0.0535 | 0.88 | 1100 | 0.0570 | 0.9867 |\n| 0.0557 | 0.96 | 1200 | 0.0512 | 0.9867 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.3.1+cu121\n- Datasets 2.21.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "track-AJ/GISchat-weibo-100k-fine-tuned-bert", "base_model_relation": "base" }, { "model_id": "kaishih/bert-tzh-med-ner", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: bert-chinese-med-ner\n results: []\nlicense: apache-2.0\ndatasets:\n- kaishih/CMeEE-V2\nlanguage:\n- zh\n---\n\n\n\n# test-ner\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the CMeEE-V2 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4423\n- Precision: 0.5197\n- Recall: 0.6287\n- F1: 0.5690\n- Accuracy: 0.8492\n\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| 0.6791 | 1.0 | 938 | 0.4600 | 0.5031 | 0.6096 | 0.5513 | 0.8435 |\n| 0.3969 | 2.0 | 1876 | 0.4423 | 0.5197 | 0.6287 | 0.5690 | 0.8492 |\n\n\n### Framework versions\n\n- Transformers 4.42.4\n- Pytorch 2.4.0+cu121\n- Datasets 2.21.0\n- Tokenizers 0.19.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "kaishih/bert-tzh-med-ner", "base_model_relation": "base" }, { "model_id": "b10401015/hw1-bert-base-chinese-finetuned-1", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: hw1-bert-base-chinese-finetuned-1\n results: []\n---\n\n\n\n# hw1-bert-base-chinese-finetuned-1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1874\n- Accuracy: 0.9585\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### 
Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:-----:|:---------------:|:--------:|\n| 0.222 | 1.0 | 10857 | 0.1874 | 0.9585 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.0+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "b10401015/hw1-bert-base-chinese-finetuned", "base_model_relation": "finetune" }, { "model_id": "b10401015/hw1-1-multiple_choice-bert-base-chinese-finetuned", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: hw1-1-multiple_choice-bert-base-chinese-finetuned\n results: []\n---\n\n\n\n# hw1-1-multiple_choice-bert-base-chinese-finetuned\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1931\n- Accuracy: 0.9578\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:-----:|:---------------:|:--------:|\n| 0.1796 | 1.0 | 10857 | 0.1931 | 0.9578 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.0+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "b10401015/hw1-1-multiple_choice-bert-base-chinese-finetuned", "base_model_relation": "base" }, { "model_id": "b10401015/hw1-1-question_answering-bert-base-chinese-finetuned", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: hw1-1-question_answering-bert-base-chinese-finetuned\n results: []\n---\n\n\n\n# hw1-1-question_answering-bert-base-chinese-finetuned\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.0942\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe 
following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:-----:|:---------------:|\n| 1.1333 | 1.0 | 13822 | 1.0942 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.0+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "b10401015/hw1-1-question_answering-bert-base-chinese-finetuned", "base_model_relation": "base" }, { "model_id": "bibibobo777/ExampleModel", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- f1\nmodel-index:\n- name: ExampleModel\n results: []\n---\n\n\n\n# ExampleModel\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3261\n- F1: 0.8553\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | F1 |\n|:-------------:|:-----:|:----:|:---------------:|:------:|\n| 0.349 | 1.0 | 625 | 0.3261 | 0.8553 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.1\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "bibibobo777/ExampleModel", "base_model_relation": "base" }, { "model_id": "b10401015/hw1-2-multiple_choice-bert-base-chinese-finetuned", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: hw1-2-multiple_choice-bert-base-chinese-finetuned\n results: []\n---\n\n\n\n# hw1-2-multiple_choice-bert-base-chinese-finetuned\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2311\n- Accuracy: 0.9568\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: 
Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| 0.2138 | 1.0 | 2715 | 0.1893 | 0.9492 |\n| 0.1375 | 2.0 | 5430 | 0.1805 | 0.9545 |\n| 0.0413 | 3.0 | 8145 | 0.2311 | 0.9568 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.0+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "b10401015/hw1-2-multiple_choice-bert-base-chinese-finetuned", "base_model_relation": "base" }, { "model_id": "b10401015/hw1-2-question_answering-bert-base-chinese-finetuned", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: hw1-2-question_answering-bert-base-chinese-finetuned\n results: []\n---\n\n\n\n# hw1-2-question_answering-bert-base-chinese-finetuned\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.7718\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| 0.7585 | 1.0 | 3456 | 0.7009 |\n| 0.3201 | 2.0 | 6912 | 0.7718 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.0+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "b10401015/hw1-2-question_answering-bert-base-chinese-finetuned", "base_model_relation": "base" }, { "model_id": "b10401015/hw1-3-question_answering-bert-base-chinese-finetuned", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: hw1-3-question_answering-bert-base-chinese-finetuned\n results: []\n---\n\n\n\n# hw1-3-question_answering-bert-base-chinese-finetuned\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6840\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 7e-05\n- train_batch_size: 16\n- 
eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| 0.7566 | 1.0 | 1728 | 0.6559 |\n| 0.3276 | 2.0 | 3456 | 0.6840 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.0+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "b10401015/hw1-3-question_answering-bert-base-chinese-finetuned", "base_model_relation": "base" }, { "model_id": "b10401015/hw1-4-question_answering-bert-base-chinese-finetuned", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: hw1-4-question_answering-bert-base-chinese-finetuned\n results: []\n---\n\n\n\n# hw1-4-question_answering-bert-base-chinese-finetuned\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6279\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 9e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| 1.1344 | 1.0 | 864 | 0.6678 |\n| 0.3337 | 2.0 | 1728 | 0.6279 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.0+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "b10401015/hw1-4-question_answering-bert-base-chinese-finetuned", "base_model_relation": "base" }, { "model_id": "riiwang/lr_3e-05_batch_2_epoch_1_model_span_selector", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: lr_3e-05_batch_2_epoch_1_model_span_selector\n results: []\n---\n\n\n\n# lr_3e-05_batch_2_epoch_1_model_span_selector\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- 
mixed_precision_training: Native AMP\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 2.21.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "riiwang/lr_3e-05_batch_2_epoch_1_model_span_selector", "base_model_relation": "base" }, { "model_id": "riiwang/lr_3e-05_batch_2_epoch_3_model_span_selector", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: lr_3e-05_batch_2_epoch_3_model_span_selector\n results: []\n---\n\n\n\n# lr_3e-05_batch_2_epoch_3_model_span_selector\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 2.21.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "riiwang/lr_3e-05_batch_2_epoch_3_model_span_selector", "base_model_relation": "base" }, { "model_id": "b10401015/hw1-3-multiple_choice-bert-base-chinese-finetuned", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: hw1-3-multiple_choice-bert-base-chinese-finetuned\n results: []\n---\n\n\n\n# hw1-3-multiple_choice-bert-base-chinese-finetuned\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1648\n- Accuracy: 0.9601\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:-----:|:---------------:|:--------:|\n| 0.1751 | 1.0 | 10857 | 0.1648 | 0.9601 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": 
[], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "b10401015/hw1-3-multiple_choice-bert-base-chinese-finetuned", "base_model_relation": "base" }, { "model_id": "riiwang/lr_3e-05_batch_2_epoch_5_model_span_selector", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: lr_3e-05_batch_2_epoch_5_model_span_selector\n results: []\n---\n\n\n\n# lr_3e-05_batch_2_epoch_5_model_span_selector\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 2.21.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "riiwang/lr_3e-05_batch_2_epoch_5_model_span_selector", "base_model_relation": "base" }, { "model_id": "riiwang/lr_0.0003_batch_2_epoch_3_model_span_selector", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: lr_0.0003_batch_2_epoch_3_model_span_selector\n results: []\n---\n\n\n\n# lr_0.0003_batch_2_epoch_3_model_span_selector\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 2.21.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "riiwang/lr_0.0003_batch_2_epoch_3_model_span_selector", "base_model_relation": "base" }, { "model_id": "riiwang/lr_5e-05_batch_8_epoch_3_model_span_selector", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: 
lr_5e-05_batch_8_epoch_3_model_span_selector\n results: []\n---\n\n\n\n# lr_5e-05_batch_8_epoch_3_model_span_selector\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 2.21.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "riiwang/lr_5e-05_batch_8_epoch_3_model_span_selector", "base_model_relation": "base" }, { "model_id": "riiwang/lr_5e-05_batch_8_epoch_5_model_span_selector", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: lr_5e-05_batch_8_epoch_5_model_span_selector\n results: []\n---\n\n\n\n# lr_5e-05_batch_8_epoch_5_model_span_selector\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 2.21.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "riiwang/lr_5e-05_batch_8_epoch_5_model_span_selector", "base_model_relation": "base" }, { "model_id": "riiwang/lr_3e-06_batch_4_epoch_3_model_span_selector", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: lr_3e-06_batch_4_epoch_3_model_span_selector\n results: []\n---\n\n\n\n# lr_3e-06_batch_4_epoch_3_model_span_selector\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during 
training:\n- learning_rate: 3e-06\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 2.21.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "riiwang/lr_3e-06_batch_4_epoch_3_model_span_selector", "base_model_relation": "base" }, { "model_id": "b09501048/adl_hw1_multi_choice_model", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: adl_hw1_multi_choice_model\n results: []\n---\n\n\n\n# adl_hw1_multi_choice_model\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:------:|:----:|:---------------:|:--------:|\n| No log | 0.9985 | 339 | 0.1203 | 0.9595 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "b09501048/adl_hw1_multi_choice_model", "base_model_relation": "base" }, { "model_id": "frett/chinese_extract_bert", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: chinese_extract_bert\n results: []\n---\n\n\n\n# chinese_extract_bert\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.45.0.dev0\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], 
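Several records in this stretch (the b10401015 `hw1-*-multiple_choice-*` checkpoints and b09501048/adl_hw1_multi_choice_model) are paragraph-selection models; the task names suggest a `BertForMultipleChoice`-style head, though the cards do not state this. Note also that b09501048's `total_train_batch_size: 64` is simply `per_device_train_batch_size` 16 × `gradient_accumulation_steps` 4. A minimal inference sketch under that head-type assumption — the question and candidate paragraphs are invented for illustration:

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

# One of the multiple-choice checkpoints listed above.
repo = "b09501048/adl_hw1_multi_choice_model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

question = "台北101位於哪一個城市?"  # invented example
candidates = [
    "台北101是位於台北市信義區的摩天大樓。",  # invented paragraph A
    "高雄市位於台灣南部,是重要的港口城市。",  # invented paragraph B
]

# A multiple-choice head scores one (question, candidate) pair per choice;
# inputs carry an extra num_choices axis: [1, num_choices, seq_len].
enc = tokenizer(
    [question] * len(candidates),
    candidates,
    padding=True,
    truncation=True,
    return_tensors="pt",
)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [1, num_choices]
print(candidates[logits.argmax(dim=-1).item()])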
"adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "frett/chinese_extract_bert", "base_model_relation": "base" }, { "model_id": "jazzson/bert-base-chinese-finetuned-paragraph_extraction-2", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: bert-base-chinese-finetuned-paragraph_extraction-2\n results: []\n---\n\n\n\n# bert-base-chinese-finetuned-paragraph_extraction-2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3001\n- Accuracy: 0.9558\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:-----:|:---------------:|:--------:|\n| 0.2313 | 1.0 | 10857 | 0.3451 | 0.9468 |\n| 0.1272 | 2.0 | 21714 | 0.3001 | 0.9558 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "jazzson/bert-base-chinese-finetuned-paragraph_extraction", "base_model_relation": "finetune" }, { "model_id": "jazzson/bert-base-chinese-finetuned-question-answering-4", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-finetuned-question-answering-4\n results: []\n---\n\n\n\n# bert-base-chinese-finetuned-question-answering-4\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.1286\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:-----:|:---------------:|\n| 1.0056 | 1.0 | 10857 | 0.9549 |\n| 0.5516 | 2.0 | 21714 | 1.1286 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 
0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "jazzson/bert-base-chinese-finetuned-question-answering", "base_model_relation": "finetune" }, { "model_id": "jazzson/bert-base-chinese-finetuned-question-answering-6", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-finetuned-question-answering-6\n results: []\n---\n\n\n\n# bert-base-chinese-finetuned-question-answering-6\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.0618\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 6\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:-----:|:---------------:|\n| 2.0209 | 0.0461 | 500 | 1.9120 |\n| 1.8506 | 0.0921 | 1000 | 1.7149 |\n| 1.6908 | 0.1382 | 1500 | 1.6126 |\n| 1.7279 | 0.1842 | 2000 | 1.8186 |\n| 1.6033 | 0.2303 | 2500 | 1.5719 |\n| 1.4682 | 0.2763 | 3000 | 1.5929 |\n| 1.7458 | 0.3224 | 3500 | 2.0739 |\n| 1.575 | 0.3684 | 4000 | 1.5012 |\n| 1.473 | 0.4145 | 4500 | 1.5199 |\n| 1.5733 | 0.4605 | 5000 | 1.3922 |\n| 1.8026 | 0.5066 | 5500 | 1.6235 |\n| 1.3608 | 0.5526 | 6000 | 1.7175 |\n| 1.4554 | 0.5987 | 6500 | 1.3453 |\n| 1.7179 | 0.6447 | 7000 | 1.6828 |\n| 1.6229 | 0.6908 | 7500 | 1.5436 |\n| 1.4866 | 0.7369 | 8000 | 1.3952 |\n| 1.5038 | 0.7829 | 8500 | 1.2955 |\n| 1.5215 | 0.8290 | 9000 | 1.3297 |\n| 1.5771 | 0.8750 | 9500 | 1.4685 |\n| 1.4322 | 0.9211 | 10000 | 1.4607 |\n| 1.3962 | 0.9671 | 10500 | 1.4697 |\n| 1.0492 | 1.0132 | 11000 | 1.4867 |\n| 1.29 | 1.0592 | 11500 | 1.7879 |\n| 1.341 | 1.1053 | 12000 | 1.5917 |\n| 1.3136 | 1.1513 | 12500 | 1.5838 |\n| 1.3421 | 1.1974 | 13000 | 1.4495 |\n| 1.2831 | 1.2434 | 13500 | 1.7703 |\n| 1.118 | 1.2895 | 14000 | 1.4682 |\n| 1.1808 | 1.3355 | 14500 | 1.3217 |\n| 1.1677 | 1.3816 | 15000 | 1.4738 |\n| 0.968 | 1.4277 | 15500 | 1.6698 |\n| 1.294 | 1.4737 | 16000 | 1.7064 |\n| 1.207 | 1.5198 | 16500 | 1.6069 |\n| 1.0651 | 1.5658 | 17000 | 1.8631 |\n| 1.0354 | 1.6119 | 17500 | 1.5430 |\n| 1.4592 | 1.6579 | 18000 | 1.3579 |\n| 1.2897 | 1.7040 | 18500 | 1.3598 |\n| 1.2697 | 1.7500 | 19000 | 1.3874 |\n| 1.0655 | 1.7961 | 19500 | 1.3918 |\n| 1.2007 | 1.8421 | 20000 | 1.4897 |\n| 1.0415 | 1.8882 | 20500 | 1.4199 |\n| 1.2612 | 1.9342 | 21000 | 1.3972 |\n| 1.3252 | 1.9803 | 21500 | 1.3493 |\n| 0.7575 | 2.0263 | 22000 | 1.7524 |\n| 0.9341 | 2.0724 | 22500 | 1.6567 |\n| 0.6243 | 2.1184 | 23000 | 1.6430 |\n| 0.8075 | 2.1645 | 23500 | 1.8267 |\n| 0.8581 | 2.2106 | 24000 | 1.6460 |\n| 0.9364 | 2.2566 | 24500 | 1.4578 |\n| 0.9757 | 2.3027 | 25000 | 1.5213 |\n| 0.6887 | 2.3487 | 25500 | 1.7984 |\n| 0.9203 | 2.3948 | 26000 | 1.5756 |\n| 0.8079 | 2.4408 | 26500 | 1.6416 |\n| 0.836 | 2.4869 | 27000 | 1.7805 |\n| 0.9916 | 2.5329 | 27500 | 1.2854 |\n| 0.8501 | 2.5790 | 28000 | 1.5900 |\n| 0.951 | 
2.6250 | 28500 | 1.7041 |\n| 0.725 | 2.6711 | 29000 | 1.6452 |\n| 0.9249 | 2.7171 | 29500 | 1.6845 |\n| 0.6042 | 2.7632 | 30000 | 1.7528 |\n| 0.617 | 2.8092 | 30500 | 1.7251 |\n| 0.9236 | 2.8553 | 31000 | 1.6484 |\n| 0.8841 | 2.9014 | 31500 | 1.7583 |\n| 0.7921 | 2.9474 | 32000 | 1.5881 |\n| 0.657 | 2.9935 | 32500 | 1.8081 |\n| 0.364 | 3.0395 | 33000 | 2.0073 |\n| 0.3145 | 3.0856 | 33500 | 1.8009 |\n| 0.4875 | 3.1316 | 34000 | 1.7690 |\n| 0.7391 | 3.1777 | 34500 | 1.5941 |\n| 0.4003 | 3.2237 | 35000 | 1.9043 |\n| 0.5839 | 3.2698 | 35500 | 1.5942 |\n| 0.3059 | 3.3158 | 36000 | 2.1032 |\n| 0.7912 | 3.3619 | 36500 | 1.8461 |\n| 0.4987 | 3.4079 | 37000 | 1.7626 |\n| 0.4096 | 3.4540 | 37500 | 1.9525 |\n| 0.4641 | 3.5000 | 38000 | 1.7831 |\n| 0.6741 | 3.5461 | 38500 | 1.6394 |\n| 0.5223 | 3.5922 | 39000 | 1.7295 |\n| 0.6628 | 3.6382 | 39500 | 1.7417 |\n| 0.3842 | 3.6843 | 40000 | 1.9575 |\n| 0.5447 | 3.7303 | 40500 | 1.6962 |\n| 0.5065 | 3.7764 | 41000 | 1.6205 |\n| 0.4987 | 3.8224 | 41500 | 1.7965 |\n| 0.4679 | 3.8685 | 42000 | 1.7241 |\n| 0.4412 | 3.9145 | 42500 | 1.7947 |\n| 0.5336 | 3.9606 | 43000 | 1.7249 |\n| 0.4926 | 4.0066 | 43500 | 1.7266 |\n| 0.3031 | 4.0527 | 44000 | 1.8313 |\n| 0.1739 | 4.0987 | 44500 | 2.0269 |\n| 0.1633 | 4.1448 | 45000 | 1.9412 |\n| 0.2223 | 4.1908 | 45500 | 2.1326 |\n| 0.2388 | 4.2369 | 46000 | 2.0716 |\n| 0.297 | 4.2830 | 46500 | 2.0261 |\n| 0.3006 | 4.3290 | 47000 | 2.0068 |\n| 0.3573 | 4.3751 | 47500 | 1.8945 |\n| 0.3003 | 4.4211 | 48000 | 2.0772 |\n| 0.3278 | 4.4672 | 48500 | 1.9943 |\n| 0.1343 | 4.5132 | 49000 | 2.0881 |\n| 0.2136 | 4.5593 | 49500 | 2.1435 |\n| 0.2846 | 4.6053 | 50000 | 1.9745 |\n| 0.3605 | 4.6514 | 50500 | 2.0614 |\n| 0.2491 | 4.6974 | 51000 | 1.9107 |\n| 0.2531 | 4.7435 | 51500 | 2.0504 |\n| 0.2409 | 4.7895 | 52000 | 1.9772 |\n| 0.2536 | 4.8356 | 52500 | 1.8751 |\n| 0.3425 | 4.8816 | 53000 | 1.8705 |\n| 0.1654 | 4.9277 | 53500 | 1.9489 |\n| 0.2758 | 4.9737 | 54000 | 1.9708 |\n| 0.1577 | 5.0198 | 54500 | 1.9610 |\n| 0.1067 | 5.0659 | 55000 | 2.0793 |\n| 0.1657 | 5.1119 | 55500 | 1.9446 |\n| 0.1461 | 5.1580 | 56000 | 1.9106 |\n| 0.1248 | 5.2040 | 56500 | 2.0643 |\n| 0.189 | 5.2501 | 57000 | 1.9927 |\n| 0.1907 | 5.2961 | 57500 | 2.1214 |\n| 0.1329 | 5.3422 | 58000 | 2.2351 |\n| 0.0914 | 5.3882 | 58500 | 2.0377 |\n| 0.0961 | 5.4343 | 59000 | 2.2045 |\n| 0.0744 | 5.4803 | 59500 | 2.1818 |\n| 0.1652 | 5.5264 | 60000 | 2.0111 |\n| 0.1256 | 5.5724 | 60500 | 2.0353 |\n| 0.1617 | 5.6185 | 61000 | 2.0892 |\n| 0.0725 | 5.6645 | 61500 | 2.1369 |\n| 0.2305 | 5.7106 | 62000 | 2.0559 |\n| 0.1961 | 5.7567 | 62500 | 2.0562 |\n| 0.2864 | 5.8027 | 63000 | 2.0555 |\n| 0.0569 | 5.8488 | 63500 | 2.0838 |\n| 0.0787 | 5.8948 | 64000 | 2.0614 |\n| 0.112 | 5.9409 | 64500 | 2.0628 |\n| 0.1097 | 5.9869 | 65000 | 2.0618 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "jazzson/bert-base-chinese-finetuned-question-answering", "base_model_relation": "finetune" }, { "model_id": "jazzson/bert-base-chinese-finetuned-question-answering-8", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: 
bert-base-chinese-finetuned-question-answering-8\n results: []\n---\n\n\n\n# bert-base-chinese-finetuned-question-answering-8\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.0682\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:------:|:----:|:---------------:|\n| 1.6873 | 0.1842 | 500 | 1.1089 |\n| 1.1046 | 0.3683 | 1000 | 0.9349 |\n| 0.9793 | 0.5525 | 1500 | 0.9402 |\n| 0.9477 | 0.7366 | 2000 | 0.8424 |\n| 0.8951 | 0.9208 | 2500 | 0.8333 |\n| 0.6411 | 1.1050 | 3000 | 0.9014 |\n| 0.4946 | 1.2891 | 3500 | 0.9121 |\n| 0.4887 | 1.4733 | 4000 | 0.8586 |\n| 0.4875 | 1.6575 | 4500 | 0.9060 |\n| 0.4483 | 1.8416 | 5000 | 0.7990 |\n| 0.4079 | 2.0258 | 5500 | 0.9980 |\n| 0.2337 | 2.2099 | 6000 | 1.0852 |\n| 0.2342 | 2.3941 | 6500 | 1.0850 |\n| 0.2239 | 2.5783 | 7000 | 1.0937 |\n| 0.1853 | 2.7624 | 7500 | 1.1032 |\n| 0.2009 | 2.9466 | 8000 | 1.0682 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "jazzson/bert-base-chinese-finetuned-question-answering", "base_model_relation": "finetune" }, { "model_id": "jazzson/bert-base-chinese-finetuned-question-answering-retrain1", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-finetuned-question-answering-retrain1\n results: []\n---\n\n\n\n# bert-base-chinese-finetuned-question-answering-retrain1\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.1\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "jazzson/bert-base-chinese-finetuned-question-answering-retrain1", "base_model_relation": "base" 
}, { "model_id": "smlhd/bert_cn_finetuning", "gated": "False", "card": "---\nlibrary_name: transformers\nlanguage:\n- en\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\ndatasets:\n- glue\nmetrics:\n- accuracy\nmodel-index:\n- name: bert_cn_finetuning\n results:\n - task:\n name: Text Classification\n type: text-classification\n dataset:\n name: GLUE SST2\n type: glue\n args: sst2\n metrics:\n - name: Accuracy\n type: accuracy\n value: 0.8279816513761468\n---\n\n\n\n# bert_cn_finetuning\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the GLUE SST2 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.5117\n- Accuracy: 0.8280\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1.0\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.45.0.dev0\n- Pytorch 2.2.2\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "smlhd/bert_cn_finetuning", "base_model_relation": "base" }, { "model_id": "frett/chinese_extract_bert_scratch", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: chinese_extract_bert_scratch\n results: []\n---\n\n\n\n# chinese_extract_bert_scratch\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 32\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.45.0.dev0\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "frett/chinese_extract_bert_scratch", "base_model_relation": "base" }, { "model_id": "jazzson/bert-base-chinese-finetuned-paragraph_extraction-retrain3", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: bert-base-chinese-finetuned-paragraph_extraction-retrain3\n results: []\n---\n\n\n\n# bert-base-chinese-finetuned-paragraph_extraction-retrain3\n\nThis model is a fine-tuned 
version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2350\n- Accuracy: 0.9538\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:------:|:-----:|:---------------:|:--------:|\n| 0.1994 | 0.1842 | 2000 | 0.2304 | 0.9395 |\n| 0.2139 | 0.3684 | 4000 | 0.3441 | 0.9242 |\n| 0.2433 | 0.5526 | 6000 | 0.2450 | 0.9528 |\n| 0.1658 | 0.7369 | 8000 | 0.1913 | 0.9548 |\n| 0.1741 | 0.9211 | 10000 | 0.2350 | 0.9538 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 3.0.1\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "jazzson/bert-base-chinese-finetuned-paragraph_extraction-retrain3", "base_model_relation": "base" }, { "model_id": "scfengv/TVL_GameLayerClassifier", "gated": "False", "card": "---\nlicense: mit\ndatasets:\n- scfengv/TVL-game-layer-dataset\nlanguage:\n- zh\nmetrics:\n- accuracy\nbase_model:\n- google-bert/bert-base-chinese\npipeline_tag: text-classification\ntags:\n- multi-label\n\nmodel-index:\n - name: scfengv/TVL_GameLayerClassifier\n results:\n - task:\n type: multi-label text-classification\n dataset:\n name: scfengv/TVL-game-layer-dataset\n type: scfengv/TVL-game-layer-dataset\n metrics:\n - name: Accuracy\n type: Accuracy\n value: 0.985764\n \n - name: F1 score (Micro)\n type: F1 score (Micro)\n value: 0.993132\n\n - name: F1 score (Macro)\n type: F1 score (Macro)\n value: 0.993694\n---\n# Model Details of TVL_GameLayerClassifier\n\n## Base Model\nThis model is fine-tuned from [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese).\n\n## Model Architecture\n- **Type**: BERT-based text classification model\n- **Hidden Size**: 768\n- **Number of Layers**: 12\n- **Number of Attention Heads**: 12\n- **Intermediate Size**: 3072\n- **Max Sequence Length**: 512\n- **Vocabulary Size**: 21,128\n\n## Key Components\n1. **Embeddings**\n - Word Embeddings\n - Position Embeddings\n - Token Type Embeddings\n - Layer Normalization\n\n2. **Encoder**\n - 12 layers of:\n - Self-Attention Mechanism\n - Intermediate Dense Layer\n - Output Dense Layer\n - Layer Normalization\n\n3. **Pooler**\n - Dense layer for sentence representation\n\n4. 
**Classifier**\n - Output layer with 5 classes\n\n## Training Hyperparameters\n\nThe model was trained using the following hyperparameters:\n\n```\nLearning rate: 1e-05\nBatch size: 32\nNumber of epochs: 10\nOptimizer: Adam\nLoss function: torch.nn.BCEWithLogitsLoss()\n```\n\n## Training Infrastructure\n\n- **Hardware Type:** NVIDIA Quadro RTX8000\n- **Library:** PyTorch\n- **Hours used:** 2hr 13mins\n\n## Model Parameters\n- Total parameters: ~102M (estimated)\n- All parameters are in 32-bit floating point (F32) format\n\n## Input Processing\n- Uses BERT tokenization\n- Supports sequences up to 512 tokens\n\n## Output\n- 5-class multi-label classification\n\n## Performance Metrics\n- Accuracy score: 0.985764\n- F1 score (Micro): 0.993132\n- F1 score (Macro): 0.993694\n\n## Training Dataset\nThis model was trained on the [scfengv/TVL-game-layer-dataset](https://huggingface.co/datasets/scfengv/TVL-game-layer-dataset).\n\n## Testing Dataset\n\n- [scfengv/TVL-game-layer-dataset](https://huggingface.co/datasets/scfengv/TVL-game-layer-dataset)\n - validation\n - Remove Emoji\n - Emoji2Desc\n - Remove Punctuation\n\n## Usage\n\n```python\nimport torch\nfrom transformers import BertForSequenceClassification, BertTokenizer\n\nmodel = BertForSequenceClassification.from_pretrained(\"scfengv/TVL_GameLayerClassifier\")\ntokenizer = BertTokenizer.from_pretrained(\"scfengv/TVL_GameLayerClassifier\")\n\n# Prepare your text\ntext = \"Your text here\"  # Please refer to the dataset for example inputs\ninputs = tokenizer(text, return_tensors=\"pt\", padding=True, truncation=True, max_length=512)\n\n# Make prediction\nwith torch.no_grad():\n    outputs = model(**inputs)\n    predictions = torch.sigmoid(outputs.logits)  # independent per-label probabilities (multi-label)\n\n# Print predictions\nprint(predictions)\n```\n\n## Additional Notes\n- This model is specifically designed for TVL Game layer classification tasks.\n- It is based on the Chinese BERT model and is therefore optimized for Chinese text.\n\nFor more detailed information about the model architecture or usage, please refer to the BERT documentation and the specific fine-tuning process used for this classifier.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "scfengv/TVL_GameLayerClassifier", "base_model_relation": "base" }, { "model_id": "missingstuffedbun/test_20241030080931", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: test_20241030080931\n results: []\n---\n\n\n\n# test_20241030080931\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.3798\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss 
|\n|:-------------:|:-----:|:----:|:---------------:|\n| 1.3947 | 1.0 | 40 | 1.4010 |\n| 1.3266 | 2.0 | 80 | 1.3879 |\n| 1.1353 | 3.0 | 120 | 1.3798 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.5.0+cu121\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "missingstuffedbun/test_20241030080931", "base_model_relation": "base" }, { "model_id": "missingstuffedbun/test_20241030100037", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nlibrary_name: transformers\ntags:\n- generated_from_trainer\nmodel-index:\n- name: test_20241030100037\n results: []\n---\n\n\n\n# test_20241030100037\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.6565\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 10\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| 1.4134 | 1.0 | 20 | 1.4111 |\n| 1.3745 | 2.0 | 40 | 1.3874 |\n| 1.3136 | 3.0 | 60 | 1.3791 |\n| 1.1921 | 4.0 | 80 | 1.3380 |\n| 1.0282 | 5.0 | 100 | 1.4147 |\n| 0.697 | 6.0 | 120 | 1.6691 |\n| 0.3299 | 7.0 | 140 | 1.8745 |\n| 0.1155 | 8.0 | 160 | 2.1475 |\n| 0.0418 | 9.0 | 180 | 2.5058 |\n| 0.0217 | 10.0 | 200 | 2.6565 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.5.0+cu121\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "missingstuffedbun/test_20241030100037", "base_model_relation": "base" }, { "model_id": "linxiaoming/chinese-sentiment-model", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: linxiaoming/chinese-sentiment-model\n results: []\n---\n\n\n\n# linxiaoming/chinese-sentiment-model\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.6844\n- Train Accuracy: 0.8000\n- Epoch: 0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 
'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Train Accuracy | Epoch |\n|:----------:|:--------------:|:-----:|\n| 0.6844 | 0.8000 | 0 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- TensorFlow 2.17.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "linxiaoming/chinese-sentiment-model", "base_model_relation": "base" }, { "model_id": "PassbyGrocer/bert-ner-msra", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-ner-msra\n results: []\n---\n\n\n\n# bert-ner-msra\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0413\n- eval_precision: 0.9481\n- eval_recall: 0.9507\n- eval_f1: 0.9494\n- eval_accuracy: 0.9939\n- eval_runtime: 10.3612\n- eval_samples_per_second: 421.283\n- eval_steps_per_second: 13.222\n- epoch: 9.0\n- step: 13041\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 10\n- mixed_precision_training: Native AMP\n\n### Framework versions\n\n- Transformers 4.46.1\n- Pytorch 2.4.1+cu124\n- Datasets 3.1.0\n- Tokenizers 0.20.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "PassbyGrocer/bert-ner-msra", "base_model_relation": "base" }, { "model_id": "PassbyGrocer/bert-ner-weibo", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: bert-ner-weibo\n results: []\n---\n\n\n\n# bert-ner-weibo\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2292\n- Precision: 0.6382\n- Recall: 0.7121\n- F1: 0.6731\n- Accuracy: 0.9680\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- 
eval_batch_size: 32\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 100\n- mixed_precision_training: Native AMP\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| 0.262 | 1.0 | 43 | 0.1853 | 0.2802 | 0.2442 | 0.2610 | 0.9527 |\n| 0.1455 | 2.0 | 86 | 0.1038 | 0.6031 | 0.7069 | 0.6509 | 0.9705 |\n| 0.0958 | 3.0 | 129 | 0.0981 | 0.6633 | 0.6787 | 0.6709 | 0.9722 |\n| 0.0756 | 4.0 | 172 | 0.1011 | 0.6189 | 0.7558 | 0.6806 | 0.9699 |\n| 0.0389 | 5.0 | 215 | 0.1058 | 0.6627 | 0.7172 | 0.6889 | 0.9715 |\n| 0.0339 | 6.0 | 258 | 0.1236 | 0.6205 | 0.7147 | 0.6643 | 0.9665 |\n| 0.0259 | 7.0 | 301 | 0.1170 | 0.6376 | 0.7326 | 0.6818 | 0.9698 |\n| 0.0182 | 8.0 | 344 | 0.1389 | 0.6110 | 0.7429 | 0.6705 | 0.9668 |\n| 0.0184 | 9.0 | 387 | 0.1368 | 0.6063 | 0.7404 | 0.6667 | 0.9651 |\n| 0.0128 | 10.0 | 430 | 0.1403 | 0.6283 | 0.7301 | 0.6754 | 0.9683 |\n| 0.0122 | 11.0 | 473 | 0.1407 | 0.6275 | 0.7404 | 0.6792 | 0.9677 |\n| 0.0147 | 12.0 | 516 | 0.1505 | 0.5967 | 0.7455 | 0.6629 | 0.9663 |\n| 0.01 | 13.0 | 559 | 0.1406 | 0.6167 | 0.7404 | 0.6729 | 0.9675 |\n| 0.0079 | 14.0 | 602 | 0.1527 | 0.6473 | 0.7172 | 0.6805 | 0.9692 |\n| 0.0112 | 15.0 | 645 | 0.1549 | 0.6545 | 0.7352 | 0.6925 | 0.9681 |\n| 0.0061 | 16.0 | 688 | 0.1585 | 0.6432 | 0.7275 | 0.6828 | 0.9691 |\n| 0.0086 | 17.0 | 731 | 0.1598 | 0.6507 | 0.7326 | 0.6892 | 0.9683 |\n| 0.0077 | 18.0 | 774 | 0.1677 | 0.6611 | 0.7172 | 0.6880 | 0.9685 |\n| 0.0053 | 19.0 | 817 | 0.1674 | 0.6351 | 0.7249 | 0.6771 | 0.9687 |\n| 0.0049 | 20.0 | 860 | 0.1777 | 0.6675 | 0.7121 | 0.6891 | 0.9687 |\n| 0.0088 | 21.0 | 903 | 0.1579 | 0.6578 | 0.7018 | 0.6791 | 0.9676 |\n| 0.0085 | 22.0 | 946 | 0.1729 | 0.6618 | 0.6941 | 0.6775 | 0.9675 |\n| 0.0062 | 23.0 | 989 | 0.1788 | 0.6395 | 0.7249 | 0.6795 | 0.9685 |\n| 0.0052 | 24.0 | 1032 | 0.1782 | 0.6458 | 0.7172 | 0.6797 | 0.9683 |\n| 0.0084 | 25.0 | 1075 | 0.1803 | 0.6345 | 0.7275 | 0.6778 | 0.9670 |\n| 0.006 | 26.0 | 1118 | 0.1972 | 0.6154 | 0.7198 | 0.6635 | 0.9651 |\n| 0.0045 | 27.0 | 1161 | 0.1852 | 0.625 | 0.7198 | 0.6691 | 0.9674 |\n| 0.0035 | 28.0 | 1204 | 0.1847 | 0.6412 | 0.7121 | 0.6748 | 0.9680 |\n| 0.0045 | 29.0 | 1247 | 0.1823 | 0.6675 | 0.6915 | 0.6793 | 0.9687 |\n| 0.0094 | 30.0 | 1290 | 0.1962 | 0.6362 | 0.7147 | 0.6731 | 0.9682 |\n| 0.0036 | 31.0 | 1333 | 0.2092 | 0.6319 | 0.7018 | 0.6650 | 0.9667 |\n| 0.0045 | 32.0 | 1376 | 0.1872 | 0.6242 | 0.7301 | 0.6730 | 0.9650 |\n| 0.0051 | 33.0 | 1419 | 0.2008 | 0.6112 | 0.7275 | 0.6643 | 0.9649 |\n| 0.0057 | 34.0 | 1462 | 0.2018 | 0.6088 | 0.7481 | 0.6713 | 0.9662 |\n| 0.003 | 35.0 | 1505 | 0.1941 | 0.6539 | 0.7044 | 0.6782 | 0.9680 |\n| 0.0074 | 36.0 | 1548 | 0.1978 | 0.6741 | 0.7018 | 0.6877 | 0.9683 |\n| 0.0045 | 37.0 | 1591 | 0.1940 | 0.6563 | 0.7069 | 0.6807 | 0.9674 |\n| 0.0031 | 38.0 | 1634 | 0.2075 | 0.6220 | 0.7275 | 0.6706 | 0.9674 |\n| 0.0058 | 39.0 | 1677 | 0.1979 | 0.6429 | 0.7172 | 0.6780 | 0.9678 |\n| 0.0029 | 40.0 | 1720 | 0.2002 | 0.6447 | 0.7044 | 0.6732 | 0.9689 |\n| 0.0041 | 41.0 | 1763 | 0.1962 | 0.6222 | 0.7069 | 0.6619 | 0.9678 |\n| 0.0028 | 42.0 | 1806 | 0.2035 | 0.6298 | 0.7172 | 0.6707 | 0.9672 |\n| 0.0033 | 43.0 | 1849 | 0.2208 | 0.6144 | 0.7249 | 0.6651 | 0.9668 |\n| 0.0024 | 44.0 | 1892 | 0.2208 | 0.6330 | 0.7095 | 0.6691 | 0.9668 |\n| 0.0043 | 45.0 | 
1935 | 0.2250 | 0.5872 | 0.7095 | 0.6426 | 0.9647 |\n| 0.0043 | 46.0 | 1978 | 0.2151 | 0.6425 | 0.6838 | 0.6625 | 0.9676 |\n| 0.0054 | 47.0 | 2021 | 0.2121 | 0.6692 | 0.6761 | 0.6726 | 0.9690 |\n| 0.0048 | 48.0 | 2064 | 0.1978 | 0.6231 | 0.7224 | 0.6690 | 0.9671 |\n| 0.0049 | 49.0 | 2107 | 0.1963 | 0.6453 | 0.7249 | 0.6828 | 0.9689 |\n| 0.0043 | 50.0 | 2150 | 0.2090 | 0.6683 | 0.7095 | 0.6883 | 0.9691 |\n| 0.0032 | 51.0 | 2193 | 0.2017 | 0.6317 | 0.7275 | 0.6762 | 0.9679 |\n| 0.0046 | 52.0 | 2236 | 0.2036 | 0.6409 | 0.7249 | 0.6803 | 0.9694 |\n| 0.0052 | 53.0 | 2279 | 0.2047 | 0.6210 | 0.7455 | 0.6776 | 0.9676 |\n| 0.0027 | 54.0 | 2322 | 0.1953 | 0.6359 | 0.7095 | 0.6707 | 0.9688 |\n| 0.0048 | 55.0 | 2365 | 0.1935 | 0.6555 | 0.7044 | 0.6791 | 0.9701 |\n| 0.0037 | 56.0 | 2408 | 0.1975 | 0.6212 | 0.7378 | 0.6745 | 0.9688 |\n| 0.0064 | 57.0 | 2451 | 0.2016 | 0.6337 | 0.7249 | 0.6763 | 0.9690 |\n| 0.0039 | 58.0 | 2494 | 0.2087 | 0.6152 | 0.7275 | 0.6667 | 0.9669 |\n| 0.0027 | 59.0 | 2537 | 0.2056 | 0.6388 | 0.7275 | 0.6803 | 0.9679 |\n| 0.0028 | 60.0 | 2580 | 0.2067 | 0.6421 | 0.7378 | 0.6866 | 0.9687 |\n| 0.0031 | 61.0 | 2623 | 0.1963 | 0.6300 | 0.7352 | 0.6785 | 0.9685 |\n| 0.0042 | 62.0 | 2666 | 0.2048 | 0.6207 | 0.7404 | 0.6753 | 0.9670 |\n| 0.0034 | 63.0 | 2709 | 0.2000 | 0.6332 | 0.7455 | 0.6848 | 0.9689 |\n| 0.004 | 64.0 | 2752 | 0.1914 | 0.6484 | 0.7301 | 0.6868 | 0.9692 |\n| 0.0038 | 65.0 | 2795 | 0.1983 | 0.6185 | 0.7378 | 0.6729 | 0.9685 |\n| 0.0039 | 66.0 | 2838 | 0.2068 | 0.6214 | 0.7301 | 0.6714 | 0.9683 |\n| 0.003 | 67.0 | 2881 | 0.2129 | 0.6236 | 0.7198 | 0.6683 | 0.9685 |\n| 0.0036 | 68.0 | 2924 | 0.2118 | 0.6131 | 0.7455 | 0.6729 | 0.9676 |\n| 0.0033 | 69.0 | 2967 | 0.1997 | 0.6513 | 0.7249 | 0.6861 | 0.9691 |\n| 0.003 | 70.0 | 3010 | 0.2066 | 0.6217 | 0.7224 | 0.6683 | 0.9686 |\n| 0.0042 | 71.0 | 3053 | 0.2064 | 0.6201 | 0.7301 | 0.6706 | 0.9682 |\n| 0.0029 | 72.0 | 3096 | 0.2113 | 0.6196 | 0.7326 | 0.6714 | 0.9676 |\n| 0.0021 | 73.0 | 3139 | 0.2051 | 0.6341 | 0.7172 | 0.6731 | 0.9685 |\n| 0.0035 | 74.0 | 3182 | 0.2059 | 0.6353 | 0.7121 | 0.6715 | 0.9681 |\n| 0.0042 | 75.0 | 3225 | 0.2085 | 0.6304 | 0.7147 | 0.6699 | 0.9678 |\n| 0.0038 | 76.0 | 3268 | 0.2137 | 0.6284 | 0.7172 | 0.6699 | 0.9676 |\n| 0.0023 | 77.0 | 3311 | 0.2134 | 0.6231 | 0.7224 | 0.6690 | 0.9682 |\n| 0.003 | 78.0 | 3354 | 0.2149 | 0.6467 | 0.7198 | 0.6813 | 0.9689 |\n| 0.0034 | 79.0 | 3397 | 0.2121 | 0.6406 | 0.7147 | 0.6756 | 0.9685 |\n| 0.0034 | 80.0 | 3440 | 0.2146 | 0.6407 | 0.7198 | 0.6780 | 0.9685 |\n| 0.0033 | 81.0 | 3483 | 0.2162 | 0.6430 | 0.7224 | 0.6804 | 0.9685 |\n| 0.0031 | 82.0 | 3526 | 0.2233 | 0.6264 | 0.7198 | 0.6699 | 0.9678 |\n| 0.0043 | 83.0 | 3569 | 0.2279 | 0.6355 | 0.7172 | 0.6739 | 0.9678 |\n| 0.0032 | 84.0 | 3612 | 0.2247 | 0.6357 | 0.7224 | 0.6763 | 0.9682 |\n| 0.0046 | 85.0 | 3655 | 0.2240 | 0.6495 | 0.7147 | 0.6805 | 0.9683 |\n| 0.0047 | 86.0 | 3698 | 0.2262 | 0.6284 | 0.7172 | 0.6699 | 0.9684 |\n| 0.0036 | 87.0 | 3741 | 0.2214 | 0.6435 | 0.7147 | 0.6772 | 0.9682 |\n| 0.0034 | 88.0 | 3784 | 0.2199 | 0.6353 | 0.7121 | 0.6715 | 0.9685 |\n| 0.0034 | 89.0 | 3827 | 0.2231 | 0.6414 | 0.7172 | 0.6772 | 0.9682 |\n| 0.0024 | 90.0 | 3870 | 0.2239 | 0.6427 | 0.7121 | 0.6756 | 0.9683 |\n| 0.0019 | 91.0 | 3913 | 0.2243 | 0.6397 | 0.7121 | 0.6740 | 0.9681 |\n| 0.0032 | 92.0 | 3956 | 0.2264 | 0.6333 | 0.7147 | 0.6715 | 0.9680 |\n| 0.0021 | 93.0 | 3999 | 0.2276 | 0.6304 | 0.7147 | 0.6699 | 0.9680 |\n| 0.0029 | 94.0 | 4042 | 0.2277 | 0.6339 | 0.7121 | 0.6707 | 0.9680 |\n| 0.0039 | 95.0 | 4085 | 
0.2281 | 0.6353 | 0.7121 | 0.6715 | 0.9680 |\n| 0.0021 | 96.0 | 4128 | 0.2289 | 0.6368 | 0.7121 | 0.6723 | 0.9681 |\n| 0.0027 | 97.0 | 4171 | 0.2292 | 0.6382 | 0.7121 | 0.6731 | 0.9680 |\n| 0.0028 | 98.0 | 4214 | 0.2289 | 0.6382 | 0.7121 | 0.6731 | 0.9682 |\n| 0.0027 | 99.0 | 4257 | 0.2291 | 0.6382 | 0.7121 | 0.6731 | 0.9682 |\n| 0.002 | 100.0 | 4300 | 0.2292 | 0.6382 | 0.7121 | 0.6731 | 0.9680 |\n\n\n### Framework versions\n\n- Transformers 4.46.1\n- Pytorch 1.13.1+cu116\n- Datasets 3.1.0\n- Tokenizers 0.20.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "PassbyGrocer/bert-ner-weibo", "base_model_relation": "base" }, { "model_id": "calvinobai/chinese-sentiment-model", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nlibrary_name: transformers\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: chinese-sentiment-model\n results: []\n---\n\n\n\n# chinese-sentiment-model\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- TensorFlow 2.17.0\n- Datasets 3.1.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "calvinobai/chinese-sentiment-model", "base_model_relation": "base" }, { "model_id": "sky1223/chinese-sentiment-model", "gated": "unknown", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: sky1223/chinese-sentiment-model\n results: []\n---\n\n\n\n# sky1223/chinese-sentiment-model\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.8760\n- Train Accuracy: 0.2000\n- Epoch: 0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': 
False, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Train Accuracy | Epoch |\n|:----------:|:--------------:|:-----:|\n| 0.8760 | 0.2000 | 0 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- TensorFlow 2.17.0\n- Datasets 3.1.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": null, "base_model_relation": null }, { "model_id": "marsyao/chinese-sentiment-model", "gated": "False", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: chinese-sentiment-model\n results: []\n---\n\n\n\n# chinese-sentiment-model\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.44.0\n- TensorFlow 2.17.0\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "marsyao/chinese-sentiment-model", "base_model_relation": "base" }, { "model_id": "PassbyGrocer/bert_crf-ner-weibo", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert_crf-ner-weibo\n results: []\n---\n\n\n\n# bert_crf-ner-weibo\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.2287\n- eval_precision: 0.6344\n- eval_recall: 0.7584\n- eval_f1: 0.6909\n- eval_accuracy: 0.9678\n- eval_runtime: 0.5124\n- eval_samples_per_second: 524.958\n- eval_steps_per_second: 9.758\n- epoch: 115.0\n- step: 2530\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer 
arguments\n- lr_scheduler_type: linear\n- num_epochs: 200\n- mixed_precision_training: Native AMP\n\n### Framework versions\n\n- Transformers 4.46.1\n- Pytorch 1.13.1+cu117\n- Datasets 3.1.0\n- Tokenizers 0.20.2\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "PassbyGrocer/bert_crf-ner-weibo", "base_model_relation": "base" }, { "model_id": "PassbyGrocer/bert_bilstm_crf-ner-weibo", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: bert_bilstm_crf-ner-weibo\n results: []\n---\n\n\n\n# bert_bilstm_crf-ner-weibo\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1945\n- Precision: 0.6524\n- Recall: 0.7429\n- F1: 0.6947\n- Accuracy: 0.9703\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 100\n- mixed_precision_training: Native AMP\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| 0.4272 | 1.0 | 22 | 0.3531 | 0.0 | 0.0 | 0.0 | 0.9330 |\n| 0.2529 | 2.0 | 44 | 0.1587 | 0.4922 | 0.4884 | 0.4903 | 0.9613 |\n| 0.1472 | 3.0 | 66 | 0.1171 | 0.5524 | 0.6915 | 0.6142 | 0.9681 |\n| 0.0977 | 4.0 | 88 | 0.1057 | 0.5866 | 0.6967 | 0.6369 | 0.9714 |\n| 0.065 | 5.0 | 110 | 0.1035 | 0.6336 | 0.7069 | 0.6683 | 0.9715 |\n| 0.0538 | 6.0 | 132 | 0.1149 | 0.6307 | 0.7069 | 0.6667 | 0.9699 |\n| 0.0413 | 7.0 | 154 | 0.1057 | 0.6315 | 0.7224 | 0.6739 | 0.9724 |\n| 0.0344 | 8.0 | 176 | 0.1236 | 0.5979 | 0.7455 | 0.6636 | 0.9693 |\n| 0.0296 | 9.0 | 198 | 0.1271 | 0.5958 | 0.7352 | 0.6582 | 0.9680 |\n| 0.0297 | 10.0 | 220 | 0.1257 | 0.6442 | 0.6889 | 0.6658 | 0.9702 |\n| 0.0212 | 11.0 | 242 | 0.1440 | 0.6037 | 0.7481 | 0.6682 | 0.9664 |\n| 0.0208 | 12.0 | 264 | 0.1368 | 0.6284 | 0.7044 | 0.6642 | 0.9683 |\n| 0.0165 | 13.0 | 286 | 0.1337 | 0.6545 | 0.7404 | 0.6948 | 0.9698 |\n| 0.0164 | 14.0 | 308 | 0.1388 | 0.6514 | 0.7301 | 0.6885 | 0.9700 |\n| 0.014 | 15.0 | 330 | 0.1403 | 0.6690 | 0.7275 | 0.6970 | 0.9701 |\n| 0.0109 | 16.0 | 352 | 0.1467 | 0.6448 | 0.7326 | 0.6859 | 0.9694 |\n| 0.0108 | 17.0 | 374 | 0.1488 | 0.6081 | 0.7301 | 0.6636 | 0.9670 |\n| 0.0106 | 18.0 | 396 | 0.1564 | 0.6572 | 0.7147 | 0.6847 | 0.9687 |\n| 0.0105 | 19.0 | 418 | 0.1620 | 0.6667 | 0.7147 | 0.6898 | 0.9691 |\n| 0.01 | 20.0 | 440 | 0.1638 | 0.7046 | 0.6684 | 0.6860 | 0.9705 |\n| 0.0106 | 21.0 | 462 | 0.1542 | 0.6709 | 0.6761 | 0.6735 | 0.9692 |\n| 0.0092 | 22.0 | 484 | 0.1487 | 0.6683 | 0.7198 | 0.6931 | 0.9694 |\n| 0.011 | 23.0 | 
506 | 0.1502 | 0.6396 | 0.7301 | 0.6819 | 0.9691 |\n| 0.0068 | 24.0 | 528 | 0.1534 | 0.6801 | 0.7378 | 0.7078 | 0.9705 |\n| 0.0077 | 25.0 | 550 | 0.1600 | 0.6793 | 0.7352 | 0.7062 | 0.9710 |\n| 0.0071 | 26.0 | 572 | 0.1644 | 0.6386 | 0.7404 | 0.6857 | 0.9676 |\n| 0.0062 | 27.0 | 594 | 0.1714 | 0.6430 | 0.7224 | 0.6804 | 0.9688 |\n| 0.006 | 28.0 | 616 | 0.1649 | 0.6461 | 0.7275 | 0.6844 | 0.9694 |\n| 0.0072 | 29.0 | 638 | 0.1631 | 0.6643 | 0.7326 | 0.6968 | 0.9695 |\n| 0.0122 | 30.0 | 660 | 0.1802 | 0.6054 | 0.7455 | 0.6682 | 0.9676 |\n| 0.0062 | 31.0 | 682 | 0.1829 | 0.6154 | 0.7404 | 0.6721 | 0.9676 |\n| 0.0075 | 32.0 | 704 | 0.1674 | 0.6313 | 0.7352 | 0.6793 | 0.9691 |\n| 0.0048 | 33.0 | 726 | 0.1664 | 0.6422 | 0.7429 | 0.6889 | 0.9692 |\n| 0.0045 | 34.0 | 748 | 0.1724 | 0.6374 | 0.7455 | 0.6872 | 0.9697 |\n| 0.0055 | 35.0 | 770 | 0.1714 | 0.6636 | 0.7301 | 0.6952 | 0.9700 |\n| 0.0071 | 36.0 | 792 | 0.1673 | 0.6316 | 0.7404 | 0.6817 | 0.9692 |\n| 0.0039 | 37.0 | 814 | 0.1635 | 0.6620 | 0.7352 | 0.6967 | 0.9709 |\n| 0.0036 | 38.0 | 836 | 0.1727 | 0.6584 | 0.7532 | 0.7026 | 0.9710 |\n| 0.0051 | 39.0 | 858 | 0.1735 | 0.6509 | 0.7429 | 0.6939 | 0.9708 |\n| 0.0033 | 40.0 | 880 | 0.1758 | 0.6949 | 0.7378 | 0.7157 | 0.9718 |\n| 0.0045 | 41.0 | 902 | 0.1812 | 0.6309 | 0.7558 | 0.6877 | 0.9698 |\n| 0.0035 | 42.0 | 924 | 0.1791 | 0.6729 | 0.7404 | 0.7050 | 0.9709 |\n| 0.0043 | 43.0 | 946 | 0.1923 | 0.6532 | 0.7455 | 0.6963 | 0.9697 |\n| 0.0045 | 44.0 | 968 | 0.1815 | 0.6492 | 0.7326 | 0.6884 | 0.9696 |\n| 0.0037 | 45.0 | 990 | 0.1830 | 0.6493 | 0.7378 | 0.6907 | 0.9700 |\n| 0.0045 | 46.0 | 1012 | 0.1809 | 0.6493 | 0.7378 | 0.6907 | 0.9700 |\n| 0.0039 | 47.0 | 1034 | 0.1811 | 0.6545 | 0.7404 | 0.6948 | 0.9701 |\n| 0.0046 | 48.0 | 1056 | 0.1740 | 0.6659 | 0.7172 | 0.6906 | 0.9708 |\n| 0.0039 | 49.0 | 1078 | 0.1827 | 0.6318 | 0.7455 | 0.6840 | 0.9694 |\n| 0.0036 | 50.0 | 1100 | 0.1762 | 0.6443 | 0.7404 | 0.6890 | 0.9698 |\n| 0.0046 | 51.0 | 1122 | 0.1752 | 0.6538 | 0.7378 | 0.6932 | 0.9702 |\n| 0.0036 | 52.0 | 1144 | 0.1856 | 0.6344 | 0.7404 | 0.6833 | 0.9692 |\n| 0.0036 | 53.0 | 1166 | 0.1870 | 0.6350 | 0.7378 | 0.6825 | 0.9693 |\n| 0.0049 | 54.0 | 1188 | 0.1840 | 0.6723 | 0.7121 | 0.6916 | 0.9699 |\n| 0.0042 | 55.0 | 1210 | 0.1927 | 0.6220 | 0.7404 | 0.6761 | 0.9687 |\n| 0.0039 | 56.0 | 1232 | 0.1854 | 0.6545 | 0.7352 | 0.6925 | 0.9704 |\n| 0.0042 | 57.0 | 1254 | 0.1900 | 0.6523 | 0.7378 | 0.6924 | 0.9700 |\n| 0.0028 | 58.0 | 1276 | 0.1894 | 0.6486 | 0.7404 | 0.6915 | 0.9697 |\n| 0.0049 | 59.0 | 1298 | 0.1904 | 0.6366 | 0.7429 | 0.6856 | 0.9695 |\n| 0.0031 | 60.0 | 1320 | 0.1844 | 0.6492 | 0.7326 | 0.6884 | 0.9698 |\n| 0.0045 | 61.0 | 1342 | 0.1866 | 0.6429 | 0.7404 | 0.6882 | 0.9696 |\n| 0.004 | 62.0 | 1364 | 0.1888 | 0.625 | 0.7326 | 0.6746 | 0.9686 |\n| 0.0031 | 63.0 | 1386 | 0.1922 | 0.6875 | 0.7352 | 0.7106 | 0.9710 |\n| 0.0044 | 64.0 | 1408 | 0.1918 | 0.6722 | 0.7326 | 0.7011 | 0.9706 |\n| 0.0046 | 65.0 | 1430 | 0.1987 | 0.6475 | 0.7506 | 0.6952 | 0.9685 |\n| 0.0044 | 66.0 | 1452 | 0.1868 | 0.6388 | 0.7455 | 0.6880 | 0.9698 |\n| 0.0042 | 67.0 | 1474 | 0.1920 | 0.6356 | 0.7532 | 0.6894 | 0.9695 |\n| 0.0038 | 68.0 | 1496 | 0.1852 | 0.6606 | 0.7506 | 0.7028 | 0.9705 |\n| 0.0033 | 69.0 | 1518 | 0.1843 | 0.6476 | 0.7558 | 0.6975 | 0.9700 |\n| 0.0034 | 70.0 | 1540 | 0.1797 | 0.6532 | 0.7506 | 0.6986 | 0.9707 |\n| 0.0042 | 71.0 | 1562 | 0.1820 | 0.6332 | 0.7455 | 0.6848 | 0.9699 |\n| 0.0033 | 72.0 | 1584 | 0.1874 | 0.6482 | 0.7532 | 0.6968 | 0.9704 |\n| 0.0039 | 73.0 | 1606 | 0.1878 | 0.6636 | 
0.7506 | 0.7045 | 0.9708 |\n| 0.003 | 74.0 | 1628 | 0.1857 | 0.6553 | 0.7429 | 0.6964 | 0.9712 |\n| 0.0038 | 75.0 | 1650 | 0.1889 | 0.6606 | 0.7404 | 0.6982 | 0.9709 |\n| 0.004 | 76.0 | 1672 | 0.1880 | 0.6539 | 0.7481 | 0.6978 | 0.9709 |\n| 0.0032 | 77.0 | 1694 | 0.1875 | 0.6590 | 0.7404 | 0.6973 | 0.9706 |\n| 0.0034 | 78.0 | 1716 | 0.1868 | 0.6532 | 0.7455 | 0.6963 | 0.9710 |\n| 0.0029 | 79.0 | 1738 | 0.1899 | 0.6545 | 0.7404 | 0.6948 | 0.9705 |\n| 0.0032 | 80.0 | 1760 | 0.1899 | 0.6628 | 0.7429 | 0.7006 | 0.9709 |\n| 0.0037 | 81.0 | 1782 | 0.1928 | 0.6545 | 0.7404 | 0.6948 | 0.9705 |\n| 0.0039 | 82.0 | 1804 | 0.1916 | 0.6560 | 0.7404 | 0.6957 | 0.9705 |\n| 0.0034 | 83.0 | 1826 | 0.1926 | 0.6560 | 0.7352 | 0.6933 | 0.9705 |\n| 0.0032 | 84.0 | 1848 | 0.1931 | 0.6621 | 0.7455 | 0.7013 | 0.9709 |\n| 0.0048 | 85.0 | 1870 | 0.1925 | 0.6659 | 0.7481 | 0.7046 | 0.9712 |\n| 0.0039 | 86.0 | 1892 | 0.1903 | 0.6690 | 0.7326 | 0.6994 | 0.9709 |\n| 0.0039 | 87.0 | 1914 | 0.1948 | 0.6538 | 0.7429 | 0.6955 | 0.9709 |\n| 0.0032 | 88.0 | 1936 | 0.1949 | 0.6682 | 0.7558 | 0.7093 | 0.9710 |\n| 0.003 | 89.0 | 1958 | 0.1948 | 0.6697 | 0.7609 | 0.7124 | 0.9710 |\n| 0.0027 | 90.0 | 1980 | 0.1927 | 0.6489 | 0.7506 | 0.6961 | 0.9705 |\n| 0.0029 | 91.0 | 2002 | 0.1931 | 0.6496 | 0.7481 | 0.6953 | 0.9706 |\n| 0.003 | 92.0 | 2024 | 0.1932 | 0.6532 | 0.7455 | 0.6963 | 0.9712 |\n| 0.0029 | 93.0 | 2046 | 0.1928 | 0.6539 | 0.7481 | 0.6978 | 0.9712 |\n| 0.0036 | 94.0 | 2068 | 0.1935 | 0.6503 | 0.7506 | 0.6969 | 0.9710 |\n| 0.0034 | 95.0 | 2090 | 0.1941 | 0.6607 | 0.7558 | 0.7050 | 0.9714 |\n| 0.0035 | 96.0 | 2112 | 0.1940 | 0.6621 | 0.7455 | 0.7013 | 0.9711 |\n| 0.0028 | 97.0 | 2134 | 0.1940 | 0.6553 | 0.7429 | 0.6964 | 0.9707 |\n| 0.0032 | 98.0 | 2156 | 0.1944 | 0.6509 | 0.7429 | 0.6939 | 0.9704 |\n| 0.0028 | 99.0 | 2178 | 0.1943 | 0.6509 | 0.7429 | 0.6939 | 0.9705 |\n| 0.0021 | 100.0 | 2200 | 0.1945 | 0.6524 | 0.7429 | 0.6947 | 0.9703 |\n\n\n### Framework versions\n\n- Transformers 4.46.1\n- Pytorch 1.13.1+cu117\n- Datasets 3.1.0\n- Tokenizers 0.20.2\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "PassbyGrocer/bert_bilstm_crf-ner-weibo", "base_model_relation": "base" }, { "model_id": "PassbyGrocer/bert_bilstm_dst_crf-ner-weibo", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: bert_bilstm_dst_crf-ner-weibo\n results: []\n---\n\n\n\n# bert_bilstm_dst_crf-ner-weibo\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2064\n- Precision: 0.6286\n- Recall: 0.7224\n- F1: 0.6722\n- Accuracy: 0.9691\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and 
optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 100\n- mixed_precision_training: Native AMP\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| 0.4101 | 1.0 | 22 | 0.3430 | 0.0 | 0.0 | 0.0 | 0.9330 |\n| 0.2448 | 2.0 | 44 | 0.1469 | 0.5153 | 0.4756 | 0.4947 | 0.9626 |\n| 0.138 | 3.0 | 66 | 0.1119 | 0.5918 | 0.7044 | 0.6432 | 0.9715 |\n| 0.0899 | 4.0 | 88 | 0.1064 | 0.5565 | 0.6967 | 0.6187 | 0.9699 |\n| 0.0616 | 5.0 | 110 | 0.1064 | 0.5978 | 0.6915 | 0.6412 | 0.9716 |\n| 0.0553 | 6.0 | 132 | 0.1112 | 0.6078 | 0.6812 | 0.6424 | 0.9702 |\n| 0.0396 | 7.0 | 154 | 0.1165 | 0.6366 | 0.7249 | 0.6779 | 0.9705 |\n| 0.0343 | 8.0 | 176 | 0.1204 | 0.6208 | 0.7069 | 0.6611 | 0.9689 |\n| 0.0274 | 9.0 | 198 | 0.1365 | 0.6191 | 0.7481 | 0.6775 | 0.9674 |\n| 0.0291 | 10.0 | 220 | 0.1403 | 0.6288 | 0.6838 | 0.6552 | 0.9689 |\n| 0.0199 | 11.0 | 242 | 0.1415 | 0.6330 | 0.7095 | 0.6691 | 0.9688 |\n| 0.0204 | 12.0 | 264 | 0.1447 | 0.5979 | 0.7224 | 0.6542 | 0.9685 |\n| 0.0162 | 13.0 | 286 | 0.1499 | 0.5822 | 0.7378 | 0.6508 | 0.9669 |\n| 0.0163 | 14.0 | 308 | 0.1441 | 0.6138 | 0.7069 | 0.6571 | 0.9691 |\n| 0.0156 | 15.0 | 330 | 0.1543 | 0.6157 | 0.7044 | 0.6571 | 0.9678 |\n| 0.0107 | 16.0 | 352 | 0.1546 | 0.5957 | 0.7121 | 0.6487 | 0.9673 |\n| 0.0134 | 17.0 | 374 | 0.1558 | 0.5860 | 0.7095 | 0.6419 | 0.9654 |\n| 0.0103 | 18.0 | 396 | 0.1557 | 0.6030 | 0.7147 | 0.6541 | 0.9669 |\n| 0.0087 | 19.0 | 418 | 0.1596 | 0.6031 | 0.6915 | 0.6443 | 0.9665 |\n| 0.0094 | 20.0 | 440 | 0.1568 | 0.6105 | 0.6889 | 0.6473 | 0.9683 |\n| 0.0106 | 21.0 | 462 | 0.1547 | 0.6561 | 0.6915 | 0.6733 | 0.9696 |\n| 0.0088 | 22.0 | 484 | 0.1627 | 0.6483 | 0.6967 | 0.6716 | 0.9696 |\n| 0.0077 | 23.0 | 506 | 0.1628 | 0.6059 | 0.7429 | 0.6674 | 0.9669 |\n| 0.0076 | 24.0 | 528 | 0.1695 | 0.6174 | 0.6761 | 0.6454 | 0.9660 |\n| 0.0081 | 25.0 | 550 | 0.1644 | 0.6387 | 0.7044 | 0.6699 | 0.9690 |\n| 0.0066 | 26.0 | 572 | 0.1674 | 0.6225 | 0.7121 | 0.6643 | 0.9684 |\n| 0.0067 | 27.0 | 594 | 0.1640 | 0.6281 | 0.7121 | 0.6675 | 0.9691 |\n| 0.0065 | 28.0 | 616 | 0.1693 | 0.6091 | 0.7249 | 0.6620 | 0.9672 |\n| 0.0063 | 29.0 | 638 | 0.1737 | 0.6299 | 0.7044 | 0.6650 | 0.9688 |\n| 0.0141 | 30.0 | 660 | 0.1772 | 0.6205 | 0.7147 | 0.6643 | 0.9673 |\n| 0.0064 | 31.0 | 682 | 0.1817 | 0.6233 | 0.7275 | 0.6714 | 0.9685 |\n| 0.0082 | 32.0 | 704 | 0.1704 | 0.6392 | 0.6967 | 0.6667 | 0.9689 |\n| 0.0051 | 33.0 | 726 | 0.1663 | 0.6236 | 0.7069 | 0.6627 | 0.9678 |\n| 0.0041 | 34.0 | 748 | 0.1767 | 0.6278 | 0.7198 | 0.6707 | 0.9676 |\n| 0.0053 | 35.0 | 770 | 0.1749 | 0.6529 | 0.6915 | 0.6717 | 0.9687 |\n| 0.0066 | 36.0 | 792 | 0.1810 | 0.6382 | 0.7121 | 0.6731 | 0.9677 |\n| 0.0044 | 37.0 | 814 | 0.1721 | 0.6351 | 0.7069 | 0.6691 | 0.9683 |\n| 0.0043 | 38.0 | 836 | 0.1833 | 0.6283 | 0.7301 | 0.6754 | 0.9683 |\n| 0.0047 | 39.0 | 858 | 0.1862 | 0.6176 | 0.7224 | 0.6659 | 0.9676 |\n| 0.0038 | 40.0 | 880 | 0.1826 | 0.6106 | 0.7095 | 0.6564 | 0.9677 |\n| 0.0045 | 41.0 | 902 | 0.1888 | 0.6069 | 0.7224 | 0.6596 | 0.9674 |\n| 0.004 | 42.0 | 924 | 0.1862 | 0.6180 | 0.7069 | 0.6595 | 0.9682 |\n| 0.0054 | 43.0 | 946 | 0.1903 | 0.6 | 0.7095 | 0.6502 | 0.9674 |\n| 0.0052 | 44.0 | 968 | 0.1838 | 0.6379 | 0.7018 | 0.6683 | 0.9680 |\n| 0.004 | 45.0 | 990 | 0.1850 | 0.6114 | 0.7198 | 0.6612 | 0.9676 |\n| 0.0051 | 46.0 | 1012 | 0.1830 | 0.6412 | 0.7121 | 0.6748 | 0.9683 |\n| 0.0045 | 47.0 
| 1034 | 0.1939 | 0.6134 | 0.7301 | 0.6667 | 0.9683 |\n| 0.0039 | 48.0 | 1056 | 0.1876 | 0.6559 | 0.6812 | 0.6683 | 0.9689 |\n| 0.0041 | 49.0 | 1078 | 0.1904 | 0.6188 | 0.7095 | 0.6611 | 0.9675 |\n| 0.0039 | 50.0 | 1100 | 0.1848 | 0.6242 | 0.7172 | 0.6675 | 0.9681 |\n| 0.0043 | 51.0 | 1122 | 0.1823 | 0.6288 | 0.6967 | 0.6610 | 0.9685 |\n| 0.0041 | 52.0 | 1144 | 0.1951 | 0.6137 | 0.7147 | 0.6603 | 0.9677 |\n| 0.004 | 53.0 | 1166 | 0.1878 | 0.6026 | 0.7095 | 0.6517 | 0.9678 |\n| 0.0047 | 54.0 | 1188 | 0.1843 | 0.6247 | 0.6889 | 0.6553 | 0.9687 |\n| 0.0042 | 55.0 | 1210 | 0.1947 | 0.6132 | 0.7172 | 0.6611 | 0.9685 |\n| 0.0039 | 56.0 | 1232 | 0.1902 | 0.6330 | 0.7095 | 0.6691 | 0.9690 |\n| 0.0038 | 57.0 | 1254 | 0.1915 | 0.6339 | 0.7121 | 0.6707 | 0.9691 |\n| 0.0035 | 58.0 | 1276 | 0.1887 | 0.6264 | 0.7198 | 0.6699 | 0.9686 |\n| 0.0044 | 59.0 | 1298 | 0.1907 | 0.6247 | 0.7147 | 0.6667 | 0.9686 |\n| 0.0026 | 60.0 | 1320 | 0.1927 | 0.6362 | 0.7147 | 0.6731 | 0.9687 |\n| 0.004 | 61.0 | 1342 | 0.1904 | 0.6374 | 0.7095 | 0.6715 | 0.9689 |\n| 0.0041 | 62.0 | 1364 | 0.1914 | 0.6222 | 0.7198 | 0.6675 | 0.9681 |\n| 0.0037 | 63.0 | 1386 | 0.1878 | 0.6298 | 0.7172 | 0.6707 | 0.9684 |\n| 0.0042 | 64.0 | 1408 | 0.1934 | 0.6074 | 0.7198 | 0.6588 | 0.9674 |\n| 0.0047 | 65.0 | 1430 | 0.1992 | 0.6092 | 0.7172 | 0.6588 | 0.9676 |\n| 0.0042 | 66.0 | 1452 | 0.1968 | 0.6186 | 0.7172 | 0.6643 | 0.9679 |\n| 0.0038 | 67.0 | 1474 | 0.1970 | 0.6189 | 0.7224 | 0.6667 | 0.9683 |\n| 0.0033 | 68.0 | 1496 | 0.1976 | 0.6173 | 0.7172 | 0.6635 | 0.9680 |\n| 0.0037 | 69.0 | 1518 | 0.1983 | 0.6247 | 0.7147 | 0.6667 | 0.9684 |\n| 0.0037 | 70.0 | 1540 | 0.1955 | 0.6247 | 0.7147 | 0.6667 | 0.9685 |\n| 0.0038 | 71.0 | 1562 | 0.1970 | 0.6290 | 0.7147 | 0.6691 | 0.9682 |\n| 0.0034 | 72.0 | 1584 | 0.2001 | 0.6242 | 0.7172 | 0.6675 | 0.9681 |\n| 0.0039 | 73.0 | 1606 | 0.2023 | 0.6293 | 0.7069 | 0.6659 | 0.9676 |\n| 0.0027 | 74.0 | 1628 | 0.2003 | 0.6381 | 0.7069 | 0.6707 | 0.9685 |\n| 0.0037 | 75.0 | 1650 | 0.2009 | 0.6203 | 0.7224 | 0.6675 | 0.9683 |\n| 0.0039 | 76.0 | 1672 | 0.2017 | 0.6275 | 0.7147 | 0.6683 | 0.9687 |\n| 0.0035 | 77.0 | 1694 | 0.2016 | 0.6166 | 0.7275 | 0.6675 | 0.9688 |\n| 0.0034 | 78.0 | 1716 | 0.2031 | 0.6108 | 0.7301 | 0.6651 | 0.9687 |\n| 0.0028 | 79.0 | 1738 | 0.2029 | 0.6116 | 0.7326 | 0.6667 | 0.9682 |\n| 0.003 | 80.0 | 1760 | 0.2036 | 0.6233 | 0.7275 | 0.6714 | 0.9683 |\n| 0.0038 | 81.0 | 1782 | 0.2063 | 0.6303 | 0.7275 | 0.6754 | 0.9676 |\n| 0.0042 | 82.0 | 1804 | 0.2040 | 0.6378 | 0.7198 | 0.6763 | 0.9685 |\n| 0.0035 | 83.0 | 1826 | 0.2023 | 0.6149 | 0.7224 | 0.6643 | 0.9681 |\n| 0.0033 | 84.0 | 1848 | 0.1991 | 0.6335 | 0.7198 | 0.6739 | 0.9685 |\n| 0.0043 | 85.0 | 1870 | 0.2013 | 0.6306 | 0.7198 | 0.6723 | 0.9686 |\n| 0.0036 | 86.0 | 1892 | 0.1988 | 0.6364 | 0.7018 | 0.6675 | 0.9694 |\n| 0.0037 | 87.0 | 1914 | 0.2041 | 0.6217 | 0.7224 | 0.6683 | 0.9689 |\n| 0.0031 | 88.0 | 1936 | 0.2043 | 0.6231 | 0.7224 | 0.6690 | 0.9689 |\n| 0.0027 | 89.0 | 1958 | 0.2041 | 0.625 | 0.7198 | 0.6691 | 0.9688 |\n| 0.0026 | 90.0 | 1980 | 0.2053 | 0.6284 | 0.7172 | 0.6699 | 0.9691 |\n| 0.0031 | 91.0 | 2002 | 0.2049 | 0.6306 | 0.7198 | 0.6723 | 0.9690 |\n| 0.003 | 92.0 | 2024 | 0.2056 | 0.6315 | 0.7224 | 0.6739 | 0.9687 |\n| 0.0028 | 93.0 | 2046 | 0.2066 | 0.6149 | 0.7224 | 0.6643 | 0.9684 |\n| 0.0031 | 94.0 | 2068 | 0.2075 | 0.6135 | 0.7224 | 0.6635 | 0.9684 |\n| 0.0038 | 95.0 | 2090 | 0.2070 | 0.6198 | 0.7249 | 0.6682 | 0.9685 |\n| 0.003 | 96.0 | 2112 | 0.2063 | 0.6253 | 0.7249 | 0.6714 | 0.9689 |\n| 0.0028 | 97.0 | 2134 | 
0.2062 | 0.6275 | 0.7275 | 0.6738 | 0.9692 |\n| 0.0031 | 98.0 | 2156 | 0.2063 | 0.6272 | 0.7224 | 0.6714 | 0.9692 |\n| 0.0026 | 99.0 | 2178 | 0.2062 | 0.6286 | 0.7224 | 0.6722 | 0.9691 |\n| 0.002 | 100.0 | 2200 | 0.2064 | 0.6286 | 0.7224 | 0.6722 | 0.9691 |\n\n\n### Framework versions\n\n- Transformers 4.46.1\n- Pytorch 2.4.1+cu124\n- Datasets 3.1.0\n- Tokenizers 0.20.2\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "PassbyGrocer/bert_bilstm_dst_crf-ner-weibo", "base_model_relation": "base" }, { "model_id": "missingstuffedbun/test_20241111084845", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nlibrary_name: transformers\ntags:\n- generated_from_trainer\nmodel-index:\n- name: test_20241111084845\n results: []\n---\n\n\n\n# test_20241111084845\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.3881\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| 1.4553 | 1.0 | 10 | 1.4085 |\n| 1.4171 | 2.0 | 20 | 1.3980 |\n| 1.3818 | 3.0 | 30 | 1.4007 |\n| 1.3472 | 4.0 | 40 | 1.4040 |\n| 1.2685 | 5.0 | 50 | 1.3881 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.5.0+cu121\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "missingstuffedbun/test_20241111084845", "base_model_relation": "base" }, { "model_id": "real-jiakai/bert-base-chinese-finetuned-cmrc2018", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\ndatasets:\n- cmrc2018\nmodel-index:\n- name: chinese_qa\n results: []\n---\n\n# bert-base-chinese-finetuned-cmrc2018\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the CMRC2018 (Chinese Machine Reading Comprehension) dataset.\n\n## Model Description\n\nThis is a BERT-based extractive question answering model for Chinese text. 
The model is designed to locate and extract answer spans from given contexts in response to questions.\n\nKey Features:\n- Base Model: bert-base-chinese\n- Task: Extractive Question Answering\n- Language: Chinese\n- Training Dataset: CMRC2018\n\n## Performance Metrics\n\nEvaluation results on the test set:\n- Exact Match: 59.708\n- F1 Score: 60.0723\n- Number of evaluation samples: 6,254\n- Evaluation speed: 283.054 samples/second\n\n## Intended Uses & Limitations\n\n### Intended Uses\n- Chinese reading comprehension tasks\n- Answer extraction from given documents\n- Context-based question answering systems\n\n### Limitations\n- Only supports extractive QA (cannot generate new answers)\n- Answers must be present in the context\n- Does not support multi-hop reasoning\n- Cannot handle unanswerable questions\n\n## Training Details\n\n### Training Hyperparameters\n- Learning rate: 3e-05\n- Train batch size: 12\n- Eval batch size: 8\n- Seed: 42\n- Optimizer: AdamW (betas=(0.9,0.999), epsilon=1e-08)\n- LR scheduler: linear\n- Number of epochs: 5.0\n\n### Training Results\n- Training time: 892.86 seconds\n- Training samples: 18,960\n- Training speed: 106.175 samples/second\n- Training loss: 0.5625\n\n### Framework Versions\n- Transformers: 4.47.0.dev0\n- Pytorch: 2.5.1+cu124\n- Datasets: 3.1.0\n- Tokenizers: 0.20.3\n\n## Usage\n\n```python\nimport torch\nfrom transformers import AutoModelForQuestionAnswering, AutoTokenizer\n\n# Load model and tokenizer\nmodel = AutoModelForQuestionAnswering.from_pretrained(\"real-jiakai/bert-base-chinese-finetuned-cmrc2018\")\ntokenizer = AutoTokenizer.from_pretrained(\"real-jiakai/bert-base-chinese-finetuned-cmrc2018\")\n\n# Prepare inputs\nquestion = \"\u957f\u57ce\u6709\u591a\u957f\uff1f\"\ncontext = \"\u957f\u57ce\u662f\u4e2d\u56fd\u53e4\u4ee3\u7684\u4f1f\u5927\u5efa\u7b51\u5de5\u7a0b\uff0c\u5168\u957f\u8d85\u8fc72\u4e07\u516c\u91cc\uff0c\u6a2a\u8de8\u4e2d\u56fd\u5317\u90e8\u591a\u4e2a\u7701\u4efd\u3002\"\n\n# Tokenize inputs\ninputs = tokenizer(\n question,\n context,\n return_tensors=\"pt\",\n max_length=384,\n truncation=True\n)\n\n# Get answer\noutputs = model(**inputs)\nanswer_start = torch.argmax(outputs.start_logits)\nanswer_end = torch.argmax(outputs.end_logits) + 1\nanswer = tokenizer.decode(inputs[\"input_ids\"][0][answer_start:answer_end])\nprint(\"Answer:\", answer)\n```\n\n## Citation\n\nIf you use this model, please cite the CMRC2018 dataset:\n\n```bibtex\n@inproceedings{cui-emnlp2019-cmrc2018,\n title = \"A Span-Extraction Dataset for {C}hinese Machine Reading Comprehension\",\n author = \"Cui, Yiming and\n Liu, Ting and\n Che, Wanxiang and\n Xiao, Li and\n Chen, Zhipeng and\n Ma, Wentao and\n Wang, Shijin and\n Hu, Guoping\",\n booktitle = \"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)\",\n month = nov,\n year = \"2019\",\n address = \"Hong Kong, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/D19-1600\",\n doi = \"10.18653/v1/D19-1600\",\n pages = \"5886--5891\",\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "real-jiakai/bert-base-chinese-finetuned-cmrc2018", "base_model_relation": "base" }, { 
"model_id": "real-jiakai/bert-base-chinese-finetuned-squadv2", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\ndatasets:\n- real-jiakai/chinese-squadv2\nmodel-index:\n- name: chinese_squadv2\n results: []\n---\n\n# bert-base-chinese-finetuned-squadv2\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the [Chinese SQuAD v2.0 dataset](https://huggingface.co/datasets/real-jiakai/chinese-squadv2).\n\n## Model Description\n\nThis model is designed for Chinese question answering tasks, specifically for extractive QA where the answer must be extracted from a given context paragraph. It can handle both answerable and unanswerable questions, following the SQuAD v2.0 format.\n\nKey features:\n- Based on BERT-base Chinese architecture\n- Supports both answerable and unanswerable questions\n- Trained on Chinese question-answer pairs\n- Optimized for extractive question answering\n\n## Intended Uses & Limitations\n\n### Intended Uses\n- Chinese extractive question answering\n- Reading comprehension tasks\n- Information extraction from Chinese text\n- Automated question answering systems\n\n### Limitations\n- Performance is significantly better on unanswerable questions (76.65% accuracy) compared to answerable questions (36.41% accuracy)\n- Limited to extractive QA (cannot generate new answers)\n- May not perform well on domain-specific questions outside the training data\n- Designed for modern Chinese text, may not work well with classical Chinese or dialectal variations\n\n## Training and Evaluation Data\n\nThe model was trained on the Chinese SQuAD v2.0 dataset, which contains:\n\nTraining Set:\n- Total examples: 90,027\n- Answerable questions: 46,529\n- Unanswerable questions: 43,498\n\nValidation Set:\n- Total examples: 9,936\n- Answerable questions: 3,991\n- Unanswerable questions: 5,945\n\n## Training Procedure\n\n### Training Hyperparameters\n\n- Learning rate: 3e-05\n- Batch size: 12\n- Evaluation batch size: 8\n- Number of epochs: 5\n- Optimizer: AdamW (\u03b21=0.9, \u03b22=0.999, \u03b5=1e-08)\n- Learning rate scheduler: Linear\n- Maximum sequence length: 384\n- Document stride: 128\n- Training device: CUDA-enabled GPU\n\n### Training Results\n\nFinal evaluation metrics:\n- Overall Exact Match: 60.49%\n- Overall F1 Score: 60.54%\n- Answerable Questions:\n - Exact Match: 36.41%\n - F1 Score: 36.53%\n- Unanswerable Questions:\n - Exact Match: 76.65%\n - F1 Score: 76.65%\n\n### Framework Versions\n- Transformers: 4.47.0.dev0\n- PyTorch: 2.5.1+cu124\n- Datasets: 3.1.0\n- Tokenizers: 0.20.3\n\n## Usage\n\n```python\nfrom transformers import AutoModelForQuestionAnswering, AutoTokenizer\nimport torch\n\n# Load model and tokenizer\nmodel_name = \"real-jiakai/bert-base-chinese-finetuned-squadv2\"\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModelForQuestionAnswering.from_pretrained(model_name)\n\ndef get_answer(question, context, threshold=0.0):\n # Tokenize input with maximum sequence length of 384\n inputs = tokenizer(\n question,\n context,\n return_tensors=\"pt\",\n max_length=384,\n truncation=True\n )\n \n with torch.no_grad():\n outputs = model(**inputs)\n start_logits = outputs.start_logits[0]\n end_logits = outputs.end_logits[0]\n \n # Calculate null score (score for predicting no answer)\n null_score = start_logits[0].item() + end_logits[0].item()\n \n # Find the best non-null answer, excluding [CLS] position\n # Set logits at [CLS] 
position to negative infinity\n start_logits[0] = float('-inf')\n end_logits[0] = float('-inf')\n \n start_idx = torch.argmax(start_logits)\n end_idx = torch.argmax(end_logits)\n \n # Ensure end_idx is not less than start_idx\n if end_idx < start_idx:\n end_idx = start_idx\n \n answer_score = start_logits[start_idx].item() + end_logits[end_idx].item()\n \n # If null score is higher (beyond threshold), return \"no answer\"\n if null_score - answer_score > threshold:\n return \"Question cannot be answered based on the given context.\"\n \n # Otherwise, return the extracted answer\n tokens = tokenizer.convert_ids_to_tokens(inputs[\"input_ids\"][0])\n answer = tokenizer.convert_tokens_to_string(tokens[start_idx:end_idx+1])\n \n # Check if answer is empty or contains only special tokens\n if not answer.strip() or answer.strip() in ['[CLS]', '[SEP]']:\n return \"Question cannot be answered based on the given context.\"\n \n return answer.strip()\n\nquestions = [\n \"\u672c\u5c4a\u7b2c\u5341\u4e94\u5c4a\u73e0\u6d77\u822a\u5c55\u7684\u4eae\u70b9\u548c\u4e3b\u8981\u5c55\u793a\u5185\u5bb9\u662f\u4ec0\u4e48\uff1f\",\n \"\u73e0\u6d77\u6740\u4eba\u6848\u53d1\u751f\u5730\u70b9\uff1f\"\n]\n\ncontext = '\u7b2c\u5341\u4e94\u5c4a\u4e2d\u56fd\u56fd\u9645\u822a\u7a7a\u822a\u5929\u535a\u89c8\u4f1a\uff08\u73e0\u6d77\u822a\u5c55\uff09\u4e8e2024\u5e7411\u670812\u65e5\u81f317\u65e5\u5728\u73e0\u6d77\u56fd\u9645\u822a\u5c55\u4e2d\u5fc3\u4e3e\u884c\u3002\u672c\u5c4a\u822a\u5c55\u5438\u5f15\u4e86\u6765\u81ea47\u4e2a\u56fd\u5bb6\u548c\u5730\u533a\u7684\u8d85\u8fc7890\u5bb6\u4f01\u4e1a\u53c2\u5c55\uff0c\u5c55\u793a\u4e86\u6db5\u76d6\"\u9646\u3001\u6d77\u3001\u7a7a\u3001\u5929\u3001\u7535\u3001\u7f51\"\u5168\u9886\u57df\u7684\u9ad8\u7cbe\u5c16\u5c55\u54c1\u3002\u5176\u4e2d\uff0c\u5907\u53d7\u77a9\u76ee\u7684\u4e2d\u56fd\u7a7a\u519b\"\u516b\u4e00\"\u98de\u884c\u8868\u6f14\u961f\u548c\"\u7ea2\u9e70\"\u98de\u884c\u8868\u6f14\u961f\uff0c\u4ee5\u53ca\u4fc4\u7f57\u65af\"\u52c7\u58eb\"\u98de\u884c\u8868\u6f14\u961f\u540c\u53f0\u732e\u6280\uff0c\u4e3a\u89c2\u4f17\u5448\u73b0\u4e86\u7cbe\u5f69\u7684\u98de\u884c\u8868\u6f14\u3002\u6b64\u5916\uff0c\u672c\u5c4a\u822a\u5c55\u8fd8\u9996\u6b21\u5f00\u8f9f\u4e86\u65e0\u4eba\u673a\u3001\u65e0\u4eba\u8239\u6f14\u793a\u533a\uff0c\u5c55\u793a\u4e86\u591a\u6b3e\u524d\u6cbf\u79d1\u6280\u4ea7\u54c1\u3002'\n\nfor question in questions:\n answer = get_answer(question, context)\n print(f\"\u95ee\u9898: {question}\")\n print(f\"\u7b54\u6848: {answer}\")\n print(\"-\" * 50)\n```\n\n## Limitations and Bias\n\nThe model shows significant performance disparity between answerable and unanswerable questions, which might indicate:\n1. Dataset quality issues\n2. Potential translation artifacts in the Chinese version of SQuAD\n3. Imbalanced handling of answerable vs. 
unanswerable questions\n\n## Ethics & Responsible AI\n\nUsers should be aware that:\n- The model may reflect biases present in the training data\n- Performance varies significantly based on question type\n- Results should be validated for critical applications\n- The model should not be used as the sole decision-maker in critical systems\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "real-jiakai/bert-base-chinese-finetuned-squadv2", "base_model_relation": "base" }, { "model_id": "Xubqpanda/LegalDuet", "gated": "False", "card": "---\nlicense: mit\ndatasets:\n- china-ai-law-challenge/cail2018\nlanguage:\n- zh\nmetrics:\n- accuracy\n- f1\nbase_model:\n- google-bert/bert-base-chinese\npipeline_tag: text-classification\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Xubqpanda/LegalDuet", "base_model_relation": "base" }, { "model_id": "Chengfengke/herbert", "gated": "False", "card": "---\nlicense: apache-2.0\nbase_model:\n- google-bert/bert-base-chinese\nmetrics:\n- accuracy\nlanguage:\n- en\n- zh\npipeline_tag: fill-mask\n---\n# Herbert: Pretrained Bert Model for Herbal Medicine\n\n**Herbert** is a pretrained model for herbal medicine research, developed based on the `bert-base-chinese` model. The model has been fine-tuned on domain-specific data from 675 ancient books and 32 Traditional Chinese Medicine (TCM) textbooks. 
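\n\nBecause the card's pipeline tag is fill-mask, the quickest smoke test is a masked-character prediction. The following is a minimal, unofficial sketch that reuses the sample sentence from the Quickstart below with one character masked (the masked position and the top-3 printout are illustrative assumptions, not part of the original card):\n\n```python\nfrom transformers import pipeline\n\n# Unofficial smoke test: predict the masked character with the fill-mask head.\nfill_mask = pipeline(\"fill-mask\", model=\"Chengfengke/herbert\")\npredictions = fill_mask(\"\u4e2d\u533b\u7406\u8bba\u662f\u6211\u56fd\u4f20\u7edf\u6587\u5316\u7684[MASK]\u5b9d\u3002\")\nfor pred in predictions[:3]:\n print(pred[\"token_str\"], round(pred[\"score\"], 4))\n```\n\nThe pipeline returns the top vocabulary candidates for the masked position together with their scores.\n\n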
It is designed to support a variety of TCM-related NLP tasks.\n\n---\n\n## Introduction\n\nThis model is optimized for TCM-related tasks, including but not limited to:\n- Herbal formula encoding\n- Domain-specific word embedding\n- Classification, labeling, and sequence prediction tasks in TCM research\n\nHerbert combines the strengths of modern pretraining techniques and domain knowledge, allowing it to excel in TCM-related text processing tasks.\n\n---\n\n## Model Config\n\n```json\n{\n \"hidden_size\": 1024,\n \"max_position_embeddings\": 512,\n \"model_type\": \"bert\",\n \"num_attention_heads\": 16,\n \"num_hidden_layers\": 24,\n \"torch_dtype\": \"float32\",\n \"vocab_size\": 21128\n}\n```\n\n### Requirements\n\n- transformers_version: 4.45.1\n\n### Quickstart\n\n#### Use Hugging Face\n```python\nimport torch\nfrom transformers import AutoTokenizer, AutoModel\n\n# Hugging Face model repository name\nmodel_name = \"Chengfengke/herbert\"\n\n# Load tokenizer and model\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModel.from_pretrained(model_name)\n\n# Input text\ntext = \"\u4e2d\u533b\u7406\u8bba\u662f\u6211\u56fd\u4f20\u7edf\u6587\u5316\u7684\u7470\u5b9d\u3002\"\n\n# Tokenize and prepare input\ninputs = tokenizer(text, return_tensors=\"pt\", truncation=True, padding=\"max_length\", max_length=128)\n\n# Get the model's outputs\nwith torch.no_grad():\n outputs = model(**inputs)\n\n# Get the embedding (sentence-level average pooling)\nsentence_embedding = outputs.last_hidden_state.mean(dim=1)\n\nprint(\"Embedding shape:\", sentence_embedding.shape)\nprint(\"Embedding vector:\", sentence_embedding)\n```\n\n#### Local Model\n```python\nfrom transformers import BertTokenizer, BertForMaskedLM\n\n# Load the model and tokenizer\nmodel_name = \"Chengfengke/herbert\"\ntokenizer = BertTokenizer.from_pretrained(model_name)\nmodel = BertForMaskedLM.from_pretrained(model_name)\ninputs = tokenizer(\"This is an example text for herbal medicine.\", return_tensors=\"pt\")\noutputs = model(**inputs)\n```\n\n## Citation\n\nIf you find our work helpful, feel free to cite it.\n\n```bibtex\n@misc{herbert-embedding,\n title = {Herbert: A Pretrain_Bert_Model for TCM_herb and downstream Tasks as Text Embedding Generation},\n author = {Yehan Yang and Xinhan Zheng},\n month = {December},\n year = {2024}\n}\n\n@article{herbert-technical-report,\n title={Herbert: A Pretrain_Bert_Model for TCM_herb and downstream Tasks as Text Embedding Generation},\n author={Yehan Yang and Xinhan Zheng},\n institution={Beijing Angopro Technology Co., Ltd.},\n year={2024},\n note={Presented at the 2024 Machine Learning Applications Conference (MLAC)}\n}\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Chengfengke/herbert", "base_model_relation": "base" }, { "model_id": "wsqstar/weibo-model-4tags", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nlibrary_name: transformers\nmetrics:\n- accuracy\n- precision\n- recall\n- f1\ntags:\n- generated_from_trainer\nmodel-index:\n- name: weibo-model-4tags\n results: []\n---\n\n\n\n# weibo-model-4tags\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- 
Loss: 1.0245\n- Accuracy: 0.7079\n- Precision: 0.7101\n- Recall: 0.7079\n- F1: 0.7081\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |\n|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|\n| 1.1091 | 0.6849 | 50 | 1.0191 | 0.5361 | 0.6449 | 0.5361 | 0.4924 |\n| 0.7439 | 1.3699 | 100 | 0.8837 | 0.6306 | 0.6446 | 0.6306 | 0.6280 |\n| 0.7962 | 2.0548 | 150 | 0.8365 | 0.6615 | 0.6886 | 0.6615 | 0.6567 |\n| 0.5132 | 2.7397 | 200 | 0.8698 | 0.6890 | 0.6977 | 0.6890 | 0.6841 |\n| 0.2886 | 3.4247 | 250 | 0.9056 | 0.7096 | 0.7103 | 0.7096 | 0.7092 |\n| 0.1804 | 4.1096 | 300 | 0.9927 | 0.7045 | 0.7071 | 0.7045 | 0.7027 |\n| 0.146 | 4.7945 | 350 | 1.0245 | 0.7079 | 0.7101 | 0.7079 | 0.7081 |\n\n\n### Framework versions\n\n- Transformers 4.44.2\n- Pytorch 2.4.1+cu121\n- Datasets 3.2.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "wsqstar/weibo-model", "base_model_relation": "finetune" }, { "model_id": "akirazh/bilibili-bullet-comment-classify-model", "gated": "unknown", "card": "\n---\ntags:\n- autotrain\n- text-classification\nbase_model: google-bert/bert-base-chinese\nwidget:\n- text: \"I love AutoTrain\"\n---\n\n# Model Trained Using AutoTrain\n\n- Problem type: Text Classification\n\n## Validation Metrics\nloss: 1.180769681930542\n\nf1_macro: 0.31453634085213034\n\nf1_micro: 0.6304347826086957\n\nf1_weighted: 0.5551106025934401\n\nprecision_macro: 0.36293436293436293\n\nprecision_micro: 0.6304347826086957\n\nprecision_weighted: 0.5828437132784959\n\nrecall_macro: 0.31501831501831506\n\nrecall_micro: 0.6304347826086957\n\nrecall_weighted: 0.6304347826086957\n\naccuracy: 0.6304347826086957\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": null, "base_model_relation": null }, { "model_id": "Vrepol/bert-base-chinese-finetuned-imdb", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-finetuned-imdb\n results: []\n---\n\n\n\n# bert-base-chinese-finetuned-imdb\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.2260\n- Model Preparation Time: 0.0044\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information 
needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time |\n|:-------------:|:-----:|:----:|:---------------:|:----------------------:|\n| 1.4597 | 1.0 | 157 | 1.2989 | 0.0044 |\n| 1.3505 | 2.0 | 314 | 1.2006 | 0.0044 |\n| 1.3229 | 3.0 | 471 | 1.2647 | 0.0044 |\n\n\n### Framework versions\n\n- Transformers 4.47.0\n- Pytorch 2.2.2+cu118\n- Datasets 3.2.0\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Vrepol/bert-base-chinese-finetuned-imdb", "base_model_relation": "base" }, { "model_id": "wjwhhh/BertSentiment", "gated": "False", "card": "---\nlicense: mit\nbase_model:\n- google-bert/bert-base-chinese\npipeline_tag: text-classification\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "wjwhhh/BertSentiment", "base_model_relation": "base" }, { "model_id": "Macropodus/bert4csc_v1", "gated": "False", "card": "---\nlicense: apache-2.0\nlanguage:\n- zh\nbase_model:\n- bert-base-chinese\npipeline_tag: text-generation\ntags:\n- csc\n- text-correct\n- chinses-spelling-correct\n- chinese-spelling-check\n- \u4e2d\u6587\u62fc\u5199\u7ea0\u9519\n- bert\n- macro-correct\n---\n# bert4csc_v1\n## \u6982\u8ff0(bert4csc_v1)\n - macro-correct, \u4e2d\u6587\u62fc\u5199\u7ea0\u9519CSC\u6d4b\u8bc4(\u6587\u672c\u7ea0\u9519), \u6743\u91cd\u4f7f\u7528\n - \u9879\u76ee\u5730\u5740\u5728[https://github.com/yongzhuo/macro-correct](https://github.com/yongzhuo/macro-correct)\n - \u672c\u6a21\u578b\u6743\u91cd\u4e3abert4csc_v1, \u4f7f\u7528macbert4csc\u67b6\u6784(pycorrector\u7248\u672c), \u5176\u7279\u70b9\u662f\u5728BertForMaskedLM\u540e\u65b0\u52a0\u4e00\u4e2a\u5206\u652f\u7528\u4e8e\u9519\u8bef\u68c0\u6d4b\u4efb\u52a1(\u5206\u7c7b\u4efb\u52a1, \u4e0d\u4ea4\u4e92);\n - \u8bad\u7ec3\u65f6\u4f7f\u7528\u4e86MFT(\u52a8\u6001mask 0.2\u7684\u975e\u9519\u8beftokens), \u540c\u65f6det_loss\u7684\u6743\u91cd\u4e3a0.15;\n - \u63a8\u7406\u65f6\u820d\u5f03\u4e86macbert\u540e\u9762\u7684\u90e8\u5206(det-layer);\n - \u5982\u4f55\u4f7f\u7528: 1.\u4f7f\u7528transformers\u8c03\u7528; 2.\u4f7f\u7528[macro-correct](https://github.com/yongzhuo/macro-correct)\u9879\u76ee\u8c03\u7528; \u8be6\u60c5\u89c1***\u4e09\u3001\u8c03\u7528(Usage)***;\n\n## \u76ee\u5f55\n* [\u4e00\u3001\u6d4b\u8bc4(Test)](#\u4e00\u3001\u6d4b\u8bc4(Test))\n* [\u4e8c\u3001\u7ed3\u8bba(Conclusion)](#\u4e8c\u3001\u7ed3\u8bba(Conclusion))\n* [\u4e09\u3001\u8c03\u7528(Usage)](#\u4e09\u3001\u8c03\u7528(Usage))\n* [\u56db\u3001\u8bba\u6587(Paper)](#\u56db\u3001\u8bba\u6587(Paper))\n* [\u4e94\u3001\u53c2\u8003(Refer)](#\u4e94\u3001\u53c2\u8003(Refer))\n* 
[\u516d\u3001\u5f15\u7528(Cite)](#\u516d\u3001\u5f15\u7528(Cite))\n\n\n## \u4e00\u3001\u6d4b\u8bc4(Test)\n### 1.1 \u6d4b\u8bc4\u6570\u636e\u6765\u6e90\n\u5730\u5740\u4e3a[Macropodus/csc_eval_public](https://huggingface.co/datasets/Macropodus/csc_eval_public), \u6240\u6709\u8bad\u7ec3\u6570\u636e\u5747\u6765\u81ea\u516c\u7f51\u6216\u5f00\u6e90\u6570\u636e, \u8bad\u7ec3\u6570\u636e\u4e3a1\u5343\u4e07\u5de6\u53f3, \u6df7\u6dc6\u8bcd\u5178\u8f83\u5927;\n``` \n1.gen_de3.json(5545): '\u7684\u5730\u5f97'\u7ea0\u9519, \u7531\u4eba\u6c11\u65e5\u62a5/\u5b66\u4e60\u5f3a\u56fd/chinese-poetry\u7b49\u9ad8\u8d28\u91cf\u6570\u636e\u4eba\u5de5\u751f\u6210;\n2.lemon_v2.tet.json(1053): relm\u8bba\u6587\u63d0\u51fa\u7684\u6570\u636e, \u591a\u9886\u57df\u62fc\u5199\u7ea0\u9519\u6570\u636e\u96c6(7\u4e2a\u9886\u57df), ; \u5305\u62ecgame(GAM), encyclopedia (ENC), contract (COT), medical care(MEC), car (CAR), novel (NOV), and news (NEW)\u7b49\u9886\u57df;\n3.acc_rmrb.tet.json(4636): \u6765\u81eaNER-199801(\u4eba\u6c11\u65e5\u62a5\u9ad8\u8d28\u91cf\u8bed\u6599);\n4.acc_xxqg.tet.json(5000): \u6765\u81ea\u5b66\u4e60\u5f3a\u56fd\u7f51\u7ad9\u7684\u9ad8\u8d28\u91cf\u8bed\u6599;\n5.gen_passage.tet.json(10000): \u6e90\u6570\u636e\u4e3aqwen\u751f\u6210\u7684\u597d\u8bcd\u597d\u53e5, \u7531\u51e0\u4e4e\u6240\u6709\u7684\u5f00\u6e90\u6570\u636e\u6c47\u603b\u7684\u6df7\u6dc6\u8bcd\u5178\u751f\u6210;\n6.textproof.tet.json(1447): NLP\u7ade\u8d5b\u6570\u636e, TextProofreadingCompetition;\n7.gen_xxqg.tet.json(5000): \u6e90\u6570\u636e\u4e3a\u5b66\u4e60\u5f3a\u56fd\u7f51\u7ad9\u7684\u9ad8\u8d28\u91cf\u8bed\u6599, \u7531\u51e0\u4e4e\u6240\u6709\u7684\u5f00\u6e90\u6570\u636e\u6c47\u603b\u7684\u6df7\u6dc6\u8bcd\u5178\u751f\u6210;\n8.faspell.dev.json(1000): \u89c6\u9891\u5b57\u5e55\u901a\u8fc7OCR\u540e\u83b7\u53d6\u7684\u6570\u636e\u96c6; \u6765\u81ea\u7231\u5947\u827a\u7684\u8bba\u6587faspell;\n9.lomo_tet.json(5000): \u4e3b\u8981\u4e3a\u97f3\u4f3c\u4e2d\u6587\u62fc\u5199\u7ea0\u9519\u6570\u636e\u96c6; \u6765\u81ea\u817e\u8baf; \u4eba\u5de5\u6807\u6ce8\u7684\u6570\u636e\u96c6CSCD-NS;\n10.mcsc_tet.5000.json(5000): \u533b\u5b66\u62fc\u5199\u7ea0\u9519; \u6765\u81ea\u817e\u8baf\u533b\u5178APP\u7684\u771f\u5b9e\u5386\u53f2\u65e5\u5fd7; \u6ce8\u610f\u8bba\u6587\u8bf4\u8be5\u6570\u636e\u96c6\u53ea\u5173\u6ce8\u533b\u5b66\u5b9e\u4f53\u7684\u7ea0\u9519, \u5e38\u7528\u5b57\u7b49\u7684\u7ea0\u9519\u5e76\u4e0d\u5173\u6ce8;\n11.ecspell.dev.json(1500): \u6765\u81eaECSpell\u8bba\u6587, \u5305\u62ec(law/med/gov)\u7b49\u4e09\u4e2a\u9886\u57df;\n12.sighan2013.dev.json(1000): \u6765\u81easighan13\u4f1a\u8bae;\n13.sighan2014.dev.json(1062): \u6765\u81easighan14\u4f1a\u8bae;\n14.sighan2015.dev.json(1100): \u6765\u81easighan15\u4f1a\u8bae;\n```\n### 1.2 \u6d4b\u8bc4\u6570\u636e\u9884\u5904\u7406\n```\n\u6d4b\u8bc4\u6570\u636e\u90fd\u7ecf\u8fc7 \u5168\u89d2\u8f6c\u534a\u89d2,\u7e41\u7b80\u8f6c\u5316,\u6807\u70b9\u7b26\u53f7\u6807\u51c6\u5316\u7b49\u64cd\u4f5c;\n```\n\n### 1.3 \u5176\u4ed6\u8bf4\u660e\n```\n1.\u6307\u6807\u5e26common\u7684\u6781\u4e3a\u5bbd\u677e\u6307\u6807, \u540c\u5f00\u6e90\u9879\u76eepycorrector\u7684\u8bc4\u4f30\u6307\u6807;\n2.\u6307\u6807\u5e26strict\u7684\u6781\u4e3a\u4e25\u683c\u6307\u6807, \u540c\u5f00\u6e90\u9879\u76ee[wangwang110/CSC](https://github.com/wangwang110/CSC);\n3.macbert4mdcspell_v1\u6a21\u578b\u4e3a\u8bad\u7ec3\u4f7f\u7528mdcspell\u67b6\u6784+bert\u7684mlm-loss, \u4f46\u662f\u63a8\u7406\u7684\u65f6\u5019\u53ea\u7528bert-mlm;\n4.acc_rmrb/acc_xxqg\u6570\u636e\u96c6\u6ca1\u6709\u9519\u8bef, 
\u7528\u4e8e\u8bc4\u4f30\u6a21\u578b\u7684\u8bef\u7ea0\u7387(\u8fc7\u5ea6\u7ea0\u9519);\n5.qwen25_1-5b_pycorrector\u7684\u6a21\u578b\u4e3ashibing624/chinese-text-correction-1.5b, \u5176\u8bad\u7ec3\u6570\u636e\u5305\u62ec\u4e86lemon_v2/mcsc_tet/ecspell\u7684\u9a8c\u8bc1\u96c6\u548c\u6d4b\u8bd5\u96c6, \u5176\u4ed6\u7684bert\u7c7b\u6a21\u578b\u7684\u8bad\u7ec3\u4e0d\u5305\u62ec\u9a8c\u8bc1\u96c6\u548c\u6d4b\u8bd5\u96c6;\n```\n\n\n## \u4e8c\u3001\u91cd\u8981\u6307\u6807\n### 2.1 F1(common_cor_f1)\n| model/common_cor_f1| avg| gen_de3| lemon_v2| gen_passage| text_proof| gen_xxqg| faspell| lomo_tet| mcsc_tet| ecspell| sighan2013| sighan2014| sighan2015 |\n|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|\n| macbert4csc_pycorrector| 45.8| 42.44| 42.89| 31.49| 46.31| 26.06| 32.7| 44.83| 27.93| 55.51| 70.89| 61.72| 66.81 |\n| bert4csc_v1| 62.28| 93.73| 61.99| 44.79| 68.0| 35.03| 48.28| 61.8| 64.41| 79.11| 77.66| 51.01| 61.54 |\n| macbert4csc_v1| 68.55| 96.67| 65.63| 48.4| 75.65| 38.43| 51.76| 70.11| 80.63| 85.55| 81.38| 57.63| 70.7 |\n| macbert4csc_v2| 68.6| 96.74| 66.02| 48.26| 75.78| 38.84| 51.91| 70.17| 80.71| 85.61| 80.97| 58.22| 69.95 |\n| macbert4mdcspell_v1| 71.1| 96.42| 70.06| 52.55| 79.61| 43.37| 53.85| 70.9| 82.38| 87.46| 84.2| 61.08| 71.32 |\n| qwen25_1-5b_pycorrector| 45.11| 27.29| 89.48| 14.61| 83.9| 13.84| 18.2| 36.71| 96.29| 88.2| 36.41| 15.64| 20.73 |\n\n### 2.2 acc(common_cor_acc)\n| model/common_cor_acc| avg| gen_de3| lemon_v2| gen_passage| text_proof| gen_xxqg| faspell| lomo_tet| mcsc_tet| ecspell| sighan2013| sighan2014| sighan2015 |\n|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|\n| macbert4csc_pycorrector| 48.26| 26.96| 28.68| 34.16| 55.29| 28.38| 22.2| 60.96| 57.16| 67.73| 55.9| 68.93| 72.73 |\n| bert4csc_v1| 60.76| 88.21| 45.96| 43.13| 68.97| 35.0| 34.0| 65.86| 73.26| 81.8| 64.5| 61.11| 67.27 |\n| macbert4csc_v1| 65.34| 93.56| 49.76| 44.98| 74.64| 36.1| 37.0| 73.0| 83.6| 86.87| 69.2| 62.62| 72.73 |\n| macbert4csc_v2| 65.22| 93.69| 50.14| 44.92| 74.64| 36.26| 37.0| 72.72| 83.66| 86.93| 68.5| 62.43| 71.73 |\n| macbert4mdcspell_v1| 67.15| 93.09| 54.8| 47.71| 78.09| 39.52| 38.8| 71.92| 84.78| 88.27| 73.2| 63.28| 72.36 |\n| qwen25_1-5b_pycorrector| 46.09| 15.82| 81.29| 22.96| 82.17| 19.04| 12.8| 50.2| 96.4| 89.13| 22.8| 27.87| 32.55 |\n\n### 2.3 acc(acc_true, thr=0.75)\n| model/acc | avg| acc_rmrb| acc_xxqg |\n|:-------------------------|:-----------------|:-----------------|:-----------------|\n| macbert4csc_pycorrector | 99.24| 99.22| 99.26 |\n| bert4csc_v1 | 98.71| 98.36| 99.06 |\n| macbert4csc_v1 | 97.72| 96.72| 98.72 |\n| macbert4csc_v2 | 97.89| 96.98| 98.8 |\n| macbert4mdcspell_v1 | 97.75| 96.51| 98.98 |\n| qwen25_1-5b_pycorrector | 82.0| 77.14| 86.86 |\n\n## \u4e8c\u3001\u7ed3\u8bba(Conclusion)\n```\n1.macbert4csc_v1/macbert4csc_v2/macbert4mdcspell_v1\u7b49\u6a21\u578b\u4f7f\u7528\u591a\u79cd\u9886\u57df\u6570\u636e\u8bad\u7ec3, \u6bd4\u8f83\u5747\u8861, \u4e5f\u9002\u5408\u4f5c\u4e3a\u7b2c\u4e00\u6b65\u7684\u9884\u8bad\u7ec3\u6a21\u578b, 
\u53ef\u7528\u4e8e\u4e13\u6709\u9886\u57df\u6570\u636e\u7684\u7ee7\u7eed\u5fae\u8c03;\n2.\u6bd4\u8f83macbert4csc_pycorrector/bertbase4csc_v1/macbert4csc_v2/macbert4mdcspell_v1, \u89c2\u5bdf\u88682.3, \u53ef\u4ee5\u53d1\u73b0\u8bad\u7ec3\u6570\u636e\u8d8a\u591a, \u51c6\u786e\u7387\u63d0\u5347\u7684\u540c\u65f6, \u8bef\u7ea0\u7387\u4e5f\u4f1a\u7a0d\u5fae\u9ad8\u4e00\u4e9b;\n3.MFT(Mask-Correct)\u4f9d\u65e7\u6709\u6548, \u4e0d\u8fc7\u5bf9\u4e8e\u6570\u636e\u91cf\u8db3\u591f\u7684\u60c5\u5f62\u63d0\u5347\u4e0d\u660e\u663e, \u53ef\u80fd\u4e5f\u662f\u8bef\u7ea0\u7387\u5347\u9ad8\u7684\u4e00\u4e2a\u91cd\u8981\u539f\u56e0;\n4.\u8bad\u7ec3\u6570\u636e\u4e2d\u4e5f\u5b58\u5728\u6587\u8a00\u6587\u6570\u636e, \u8bad\u7ec3\u597d\u7684\u6a21\u578b\u4e5f\u652f\u6301\u6587\u8a00\u6587\u7ea0\u9519;\n5.\u8bad\u7ec3\u597d\u7684\u6a21\u578b\u5bf9\"\u5730\u5f97\u7684\"\u7b49\u9ad8\u9891\u9519\u8bef\u5177\u6709\u8f83\u9ad8\u7684\u8bc6\u522b\u7387\u548c\u7ea0\u9519\u7387;\n```\n\n## \u4e09\u3001\u8c03\u7528(Usage)\n### 3.1 \u4f7f\u7528macro-correct\n```\nimport os\nos.environ[\"MACRO_CORRECT_FLAG_CSC_TOKEN\"] = \"1\"\n\nfrom macro_correct import correct\n### \u9ed8\u8ba4\u7ea0\u9519(list\u8f93\u5165)\ntext_list = [\"\u771f\u9ebb\u70e6\u4f60\u4e86\u3002\u5e0c\u671b\u4f60\u4eec\u597d\u597d\u7684\u8df3\u65e0\",\n \"\u5c11\u5148\u961f\u5458\u56e0\u8be5\u4e3a\u8001\u4eba\u8ba9\u5750\",\n \"\u673a\u4e03\u5b66\u4e60\u662f\u4eba\u5de5\u667a\u80fd\u9886\u9047\u6700\u80fd\u4f53\u73b0\u667a\u80fd\u7684\u4e00\u4e2a\u5206\u77e5\",\n \"\u4e00\u53ea\u5c0f\u9c7c\u8239\u6d6e\u5728\u5e73\u51c0\u7684\u6cb3\u9762\u4e0a\"\n ]\ntext_csc = correct(text_list)\nprint(\"\u9ed8\u8ba4\u7ea0\u9519(list\u8f93\u5165):\")\nfor res_i in text_csc:\n print(res_i)\nprint(\"#\" * 128)\n\n\"\"\"\n\u9ed8\u8ba4\u7ea0\u9519(list\u8f93\u5165):\n{'index': 0, 'source': '\u771f\u9ebb\u70e6\u4f60\u4e86\u3002\u5e0c\u671b\u4f60\u4eec\u597d\u597d\u7684\u8df3\u65e0', 'target': '\u771f\u9ebb\u70e6\u4f60\u4e86\u3002\u5e0c\u671b\u4f60\u4eec\u597d\u597d\u5730\u8df3\u821e', 'errors': [['\u7684', '\u5730', 12, 0.6584], ['\u65e0', '\u821e', 14, 1.0]]}\n{'index': 1, 'source': '\u5c11\u5148\u961f\u5458\u56e0\u8be5\u4e3a\u8001\u4eba\u8ba9\u5750', 'target': '\u5c11\u5148\u961f\u5458\u5e94\u8be5\u4e3a\u8001\u4eba\u8ba9\u5750', 'errors': [['\u56e0', '\u5e94', 4, 0.995]]}\n{'index': 2, 'source': '\u673a\u4e03\u5b66\u4e60\u662f\u4eba\u5de5\u667a\u80fd\u9886\u9047\u6700\u80fd\u4f53\u73b0\u667a\u80fd\u7684\u4e00\u4e2a\u5206\u77e5', 'target': '\u673a\u5668\u5b66\u4e60\u662f\u4eba\u5de5\u667a\u80fd\u9886\u57df\u6700\u80fd\u4f53\u73b0\u667a\u80fd\u7684\u4e00\u4e2a\u5206\u652f', 'errors': [['\u4e03', '\u5668', 1, 0.9998], ['\u9047', '\u57df', 10, 0.9999], ['\u77e5', '\u652f', 21, 1.0]]}\n{'index': 3, 'source': '\u4e00\u53ea\u5c0f\u9c7c\u8239\u6d6e\u5728\u5e73\u51c0\u7684\u6cb3\u9762\u4e0a', 'target': '\u4e00\u53ea\u5c0f\u9c7c\u8239\u6d6e\u5728\u5e73\u9759\u7684\u6cb3\u9762\u4e0a', 'errors': [['\u51c0', '\u9759', 8, 0.9961]]}\n\"\"\"\n```\n\n### 3.2 \u4f7f\u7528 transformers\n```\n# !/usr/bin/python\n# -*- coding: utf-8 -*-\n# @time : 2021/2/29 21:41\n# @author : Mo\n# @function: transformers\u76f4\u63a5\u52a0\u8f7dbert\u7c7b\u6a21\u578b\u6d4b\u8bd5\n\n\nimport traceback\nimport time\nimport sys\nimport os\nos.environ[\"USE_TORCH\"] = \"1\"\nfrom transformers import BertConfig, BertTokenizer, BertForMaskedLM\nimport torch\n\n# pretrained_model_name_or_path = \"shibing624/macbert4csc-base-chinese\"\n# pretrained_model_name_or_path = 
\"Macropodus/macbert4mdcspell_v1\"\npretrained_model_name_or_path = \"Macropodus/macbert4csc_v1\"\n# pretrained_model_name_or_path = \"Macropodus/macbert4csc_v2\"\n# pretrained_model_name_or_path = \"Macropodus/bert4csc_v1\"\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nmax_len = 128\n\nprint(\"load model, please wait a few minute!\")\ntokenizer = BertTokenizer.from_pretrained(pretrained_model_name_or_path)\nbert_config = BertConfig.from_pretrained(pretrained_model_name_or_path)\nmodel = BertForMaskedLM.from_pretrained(pretrained_model_name_or_path)\nmodel.to(device)\nprint(\"load model success!\")\n\ntexts = [\n \"\u673a\u4e03\u5b66\u4e60\u662f\u4eba\u5de5\u667a\u80fd\u9886\u9047\u6700\u80fd\u4f53\u73b0\u667a\u80fd\u7684\u4e00\u4e2a\u5206\u77e5\",\n \"\u6211\u662f\u7ec3\u4e60\u65f6\u957f\u4e24\u5ff5\u534a\u7684\u9e3d\u4ec1\u7ec3\u4e60\u751f\u8521\u5f90\u5764\",\n \"\u771f\u9ebb\u70e6\u4f60\u4e86\u3002\u5e0c\u671b\u4f60\u4eec\u597d\u597d\u7684\u8df3\u65e0\",\n \"\u4ed6\u6cd5\u8bed\u8bf4\u7684\u5f88\u597d\uff0c\u7684\u8bed\u4e5f\u4e0d\u9519\",\n \"\u9047\u5230\u4e00\u4f4d\u5f88\u68d2\u7684\u5974\u751f\u8ddf\u6211\u7597\u5929\",\n \"\u6211\u4eec\u4e3a\u8fd9\u4e2a\u76ee\u6807\u52aa\u529b\u4e0d\u89e3\",\n]\nlen_mid = min(max_len, max([len(t)+2 for t in texts]))\n\nwith torch.no_grad():\n outputs = model(**tokenizer(texts, padding=True, max_length=len_mid,\n return_tensors=\"pt\").to(device))\n\ndef get_errors(source, target):\n \"\"\" \u6781\u7b80\u65b9\u6cd5\u83b7\u53d6 errors \"\"\"\n len_min = min(len(source), len(target))\n errors = []\n for idx in range(len_min):\n if source[idx] != target[idx]:\n errors.append([source[idx], target[idx], idx])\n return errors\n\nresult = []\nfor probs, source in zip(outputs.logits, texts):\n ids = torch.argmax(probs, dim=-1)\n tokens_space = tokenizer.decode(ids[1:-1], skip_special_tokens=False)\n text_new = tokens_space.replace(\" \", \"\")\n target = text_new[:len(source)]\n errors = get_errors(source, target)\n print(source, \" => \", target, errors)\n result.append([target, errors])\nprint(result)\n\"\"\"\n\u673a\u4e03\u5b66\u4e60\u662f\u4eba\u5de5\u667a\u80fd\u9886\u9047\u6700\u80fd\u4f53\u73b0\u667a\u80fd\u7684\u4e00\u4e2a\u5206\u77e5 => \u673a\u5668\u5b66\u4e60\u662f\u4eba\u5de5\u667a\u80fd\u9886\u57df\u6700\u80fd\u4f53\u73b0\u667a\u80fd\u7684\u4e00\u4e2a\u5206\u652f [['\u4e03', '\u5668', 1], ['\u9047', '\u57df', 10], ['\u77e5', '\u652f', 21]]\n\u6211\u662f\u7ec3\u4e60\u65f6\u957f\u4e24\u5ff5\u534a\u7684\u9e3d\u4ec1\u7ec3\u4e60\u751f\u8521\u5f90\u5764 => \u6211\u662f\u7ec3\u4e60\u65f6\u957f\u4e24\u5e74\u534a\u7684\u4e2a\u4eba\u7ec3\u4e60\u751f\u8521\u5f90\u5764 [['\u5ff5', '\u5e74', 7], ['\u9e3d', '\u4e2a', 10], ['\u4ec1', '\u4eba', 11]]\n\u771f\u9ebb\u70e6\u4f60\u4e86\u3002\u5e0c\u671b\u4f60\u4eec\u597d\u597d\u7684\u8df3\u65e0 => \u771f\u9ebb\u70e6\u4f60\u4e86\u3002\u5e0c\u671b\u4f60\u4eec\u597d\u597d\u5730\u8df3\u821e [['\u7684', '\u5730', 12], ['\u65e0', '\u821e', 14]]\n\u4ed6\u6cd5\u8bed\u8bf4\u7684\u5f88\u597d\uff0c\u7684\u8bed\u4e5f\u4e0d\u9519 => \u4ed6\u6cd5\u8bed\u8bf4\u5f97\u5f88\u597d\uff0c\u5fb7\u8bed\u4e5f\u4e0d\u9519 [['\u7684', '\u5f97', 4], ['\u7684', '\u5fb7', 8]]\n\u9047\u5230\u4e00\u4f4d\u5f88\u68d2\u7684\u5974\u751f\u8ddf\u6211\u7597\u5929 => \u9047\u5230\u4e00\u4f4d\u5f88\u68d2\u7684\u5973\u751f\u8ddf\u6211\u804a\u5929 [['\u5974', '\u5973', 7], ['\u7597', '\u804a', 11]]\n\u6211\u4eec\u4e3a\u8fd9\u4e2a\u76ee\u6807\u52aa\u529b\u4e0d\u89e3 => 
\u6211\u4eec\u4e3a\u8fd9\u4e2a\u76ee\u6807\u52aa\u529b\u4e0d\u61c8 [['\u89e3', '\u61c8', 10]]\n\"\"\"\n```\n\n## \u56db\u3001\u8bba\u6587(Paper)\n - 2024-Refining: [Refining Corpora from a Model Calibration Perspective for Chinese](https://arxiv.org/abs/2407.15498)\n - 2024-ReLM: [Chinese Spelling Correction as Rephrasing Language Model](https://arxiv.org/abs/2308.08796)\n - 2024-DICS: [DISC: Plug-and-Play Decoding Intervention with Similarity of Characters for Chinese Spelling Check](https://arxiv.org/abs/2412.12863)\n\n - 2023-Bi-DCSpell: [A Bi-directional Detector-Corrector Interactive Framework for Chinese Spelling Check]()\n - 2023-BERT-MFT: [Rethinking Masked Language Modeling for Chinese Spelling Correction](https://arxiv.org/abs/2305.17721)\n - 2023-PTCSpell: [PTCSpell: Pre-trained Corrector Based on Character Shape and Pinyin for Chinese Spelling Correction](https://arxiv.org/abs/2212.04068)\n - 2023-DR-CSC: [A Frustratingly Easy Plug-and-Play Detection-and-Reasoning Module for Chinese](https://aclanthology.org/2023.findings-emnlp.771)\n - 2023-DROM: [Disentangled Phonetic Representation for Chinese Spelling Correction](https://arxiv.org/abs/2305.14783)\n - 2023-EGCM: [An Error-Guided Correction Model for Chinese Spelling Error Correction](https://arxiv.org/abs/2301.06323)\n - 2023-IGPI: [Investigating Glyph-Phonetic Information for Chinese Spell Checking: What Works and What\u2019s Next?](https://arxiv.org/abs/2212.04068)\n - 2023-CL: [Contextual Similarity is More Valuable than Character Similarity-An Empirical Study for Chinese Spell Checking]()\n\n - 2022-CRASpell: [CRASpell: A Contextual Typo Robust Approach to Improve Chinese Spelling Correction](https://aclanthology.org/2022.findings-acl.237)\n - 2022-MDCSpell: [MDCSpell: A Multi-task Detector-Corrector Framework for Chinese Spelling Correction](https://aclanthology.org/2022.findings-acl.98)\n - 2022-SCOPE: [Improving Chinese Spelling Check by Character Pronunciation Prediction: The Effects of Adaptivity and Granularity](https://arxiv.org/abs/2210.10996)\n - 2022-ECOPO: [The Past Mistake is the Future Wisdom: Error-driven Contrastive Probability Optimization for Chinese Spell Checking](https://arxiv.org/abs/2203.00991)\n\n - 2021-MLMPhonetics: [Correcting Chinese Spelling Errors with Phonetic Pre-training](https://aclanthology.org/2021.findings-acl.198)\n - 2021-ChineseBERT: [ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information](https://aclanthology.org/2021.acl-long.161/)\n - 2021-BERTCrsGad: [Global Attention Decoder for Chinese Spelling Error Correction](https://aclanthology.org/2021.findings-acl.122)\n - 2021-ThinkTwice: [Think Twice: A Post-Processing Approach for the Chinese Spelling Error Correction](https://www.mdpi.com/2076-3417/11/13/5832)\n - 2021-PHMOSpell: [PHMOSpell: Phonological and Morphological Knowledge Guided Chinese Spelling Chec](https://aclanthology.org/2021.acl-long.464)\n - 2021-SpellBERT: [SpellBERT: A Lightweight Pretrained Model for Chinese Spelling Check](https://aclanthology.org/2021.emnlp-main.287)\n - 2021-TwoWays: [Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models](https://aclanthology.org/2021.acl-short.56)\n - 2021-ReaLiSe: [Read, Listen, and See: Leveraging Multimodal Information Helps Chinese Spell Checking](https://arxiv.org/abs/2105.12306)\n - 2021-DCSpell: [DCSpell: A Detector-Corrector Framework for Chinese Spelling Error Correction](https://dl.acm.org/doi/10.1145/3404835.3463050)\n - 2021-PLOME: [PLOME: Pre-training with 
Misspelled Knowledge for Chinese Spelling Correction](https://aclanthology.org/2021.acl-long.233)\n - 2021-DCN: [Dynamic Connected Networks for Chinese Spelling Check](https://aclanthology.org/2021.findings-acl.216/)\n\n - 2020-SoftMaskBERT: [Spelling Error Correction with Soft-Masked BERT](https://arxiv.org/abs/2005.07421)\n - 2020-SpellGCN: [SpellGCN: Incorporating Phonological and Visual Similarities into Language Models for Chinese Spelling Check](https://arxiv.org/abs/2004.14166)\n - 2020-ChunkCSC: [Chunk-based Chinese Spelling Check with Global Optimization](https://aclanthology.org/2020.findings-emnlp.184)\n - 2020-MacBERT: [Revisiting Pre-Trained Models for Chinese Natural Language Processing](https://arxiv.org/abs/2004.13922)\n\n - 2019-FASPell: [FASPell: A Fast, Adaptable, Simple, Powerful Chinese Spell Checker Based On DAE-Decoder Paradigm](https://aclanthology.org/D19-5522)\n - 2018-Hybrid: [A Hybrid Approach to Automatic Corpus Generation for Chinese Spelling Checking](https://aclanthology.org/D18-1273)\n\n - 2015-Sighan15: [Introduction to SIGHAN 2015 Bake-off for Chinese Spelling Check](https://aclanthology.org/W15-3106/)\n - 2014-Sighan14: [Overview of SIGHAN 2014 Bake-off for Chinese Spelling Check](https://aclanthology.org/W14-6820/)\n - 2013-Sighan13: [Chinese Spelling Check Evaluation at SIGHAN Bake-off 2013](https://aclanthology.org/W13-4406/)\n\n## \u4e94\u3001\u53c2\u8003(Refer)\n - [nghuyong/Chinese-text-correction-papers](https://github.com/nghuyong/Chinese-text-correction-papers)\n - [destwang/CTCResources](https://github.com/destwang/CTCResources)\n - [wangwang110/CSC](https://github.com/wangwang110/CSC)\n - [chinese-poetry/chinese-poetry](https://github.com/chinese-poetry/chinese-poetry)\n - [chinese-poetry/huajianji](https://github.com/chinese-poetry/huajianji)\n - [garychowcmu/daizhigev20](https://github.com/garychowcmu/daizhigev20)\n - [yangjianxin1/Firefly](https://github.com/yangjianxin1/Firefly)\n - [Macropodus/xuexiqiangguo_428w](https://huggingface.co/datasets/Macropodus/xuexiqiangguo_428w)\n - [Macropodus/csc_clean_wang271k](https://huggingface.co/datasets/Macropodus/csc_clean_wang271k)\n - [Macropodus/csc_eval_public](https://huggingface.co/datasets/Macropodus/csc_eval_public)\n - [shibing624/pycorrector](https://github.com/shibing624/pycorrector)\n - [iioSnail/MDCSpell_pytorch](https://github.com/iioSnail/MDCSpell_pytorch)\n - [gingasan/lemon](https://github.com/gingasan/lemon)\n - [Claude-Liu/ReLM](https://github.com/Claude-Liu/ReLM)\n\n\n## \u516d\u3001\u5f15\u7528(Cite)\nFor citing this work, you can refer to the present GitHub project. 
For example, with BibTeX:\n```\n@software{macro-correct,\n url = {https://github.com/yongzhuo/macro-correct},\n author = {Yongzhuo Mo},\n title = {macro-correct},\n year = {2025}\n}\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Macropodus/bert4csc_v1", "base_model_relation": "base" }, { "model_id": "sercetexam9/bert-base-chinese-chn-finetuned-augmentation-LUNAR", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- f1\n- accuracy\nmodel-index:\n- name: bert-base-chinese-chn-finetuned-augmentation-LUNAR\n results: []\n---\n\n\n\n# bert-base-chinese-chn-finetuned-augmentation-LUNAR\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2282\n- F1: 0.7890\n- Roc Auc: 0.8637\n- Accuracy: 0.7323\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 20\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|\n| 0.2216 | 1.0 | 315 | 0.2200 | 0.5555 | 0.7352 | 0.5949 |\n| 0.1695 | 2.0 | 630 | 0.1692 | 0.6542 | 0.7784 | 0.6839 |\n| 0.1031 | 3.0 | 945 | 0.1674 | 0.6900 | 0.8028 | 0.6926 |\n| 0.0671 | 4.0 | 1260 | 0.1707 | 0.7356 | 0.8239 | 0.7085 |\n| 0.0415 | 5.0 | 1575 | 0.1884 | 0.7489 | 0.8419 | 0.7014 |\n| 0.0289 | 6.0 | 1890 | 0.1993 | 0.7604 | 0.8532 | 0.6998 |\n| 0.0204 | 7.0 | 2205 | 0.2331 | 0.7568 | 0.8558 | 0.6791 |\n| 0.014 | 8.0 | 2520 | 0.2070 | 0.7714 | 0.8467 | 0.7149 |\n| 0.0069 | 9.0 | 2835 | 0.2256 | 0.7823 | 0.8684 | 0.7053 |\n| 0.0055 | 10.0 | 3150 | 0.2207 | 0.7839 | 0.8611 | 0.7260 |\n| 0.0064 | 11.0 | 3465 | 0.2197 | 0.7875 | 0.8597 | 0.7252 |\n| 0.0061 | 12.0 | 3780 | 0.2282 | 0.7890 | 0.8637 | 0.7323 |\n| 0.0046 | 13.0 | 4095 | 0.2316 | 0.7865 | 0.8584 | 0.7284 |\n| 0.0022 | 14.0 | 4410 | 0.2339 | 0.7763 | 0.8519 | 0.7307 |\n| 0.0025 | 15.0 | 4725 | 0.2339 | 0.7800 | 0.8536 | 0.7315 |\n| 0.0028 | 16.0 | 5040 | 0.2328 | 0.7802 | 0.8537 | 0.7299 |\n\n\n### Framework versions\n\n- Transformers 4.45.1\n- Pytorch 2.4.0\n- Datasets 3.0.1\n- Tokenizers 0.20.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [ "sercetexam9/bert-base-chinese-chn-finetuned-augmentation-LUNAR-chn-MICRO" ], "children_count": 1, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "sercetexam9/bert-base-chinese-chn-finetuned-augmentation-LUNAR", "base_model_relation": "base" }, { "model_id": "AnonymousCS/populism_model012", "gated": "unknown", 
"card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: populism_model012\n results: []\n---\n\n\n\n# populism_model012\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3847\n- Accuracy: 0.9816\n- 1-f1: 0.3529\n- 1-recall: 0.3\n- 1-precision: 0.4286\n- Balanced Acc: 0.6466\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|\n| 0.333 | 1.0 | 225 | 0.3551 | 0.9833 | 0.0 | 0.0 | 0.0 | 0.5 |\n| 0.1579 | 2.0 | 450 | 0.3008 | 0.9839 | 0.3830 | 0.3 | 0.5294 | 0.6477 |\n| 0.2232 | 3.0 | 675 | 0.3847 | 0.9816 | 0.3529 | 0.3 | 0.4286 | 0.6466 |\n\n\n### Framework versions\n\n- Transformers 4.49.0.dev0\n- Pytorch 2.5.1+cu124\n- Datasets 3.2.0\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": null, "base_model_relation": null }, { "model_id": "roberthsu2003/models_for_ner", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\ndatasets:\n- peoples_daily_ner\nmetrics:\n- f1\nmodel-index:\n- name: models_for_ner\n results:\n - task:\n type: token-classification\n name: Token Classification\n dataset:\n name: peoples_daily_ner\n type: peoples_daily_ner\n config: peoples_daily_ner\n split: validation\n args: peoples_daily_ner\n metrics:\n - type: f1\n value: 0.9508438253415484\n name: F1\n---\n\n\n\n# models_for_ner\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on the peoples_daily_ner dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0219\n- F1: 0.9508\n\n## Model description\n\n### \u4f7f\u7528\u65b9\u6cd5(pipline\u7684\u65b9\u6cd5)\n\n```python\nfrom transformers import pipeline\n\nner_pipe = pipeline('token-classification', model='roberthsu2003/models_for_ner',aggregation_strategy='simple')\ninputs = '\u5f90\u570b\u5802\u5728\u53f0\u5317\u4e0a\u73ed'\nres = ner_pipe(inputs)\nprint(res)\nres_result = {}\nfor r in res:\n entity_name = r['entity_group']\n start = r['start']\n end = r['end']\n if entity_name not in res_result:\n res_result[entity_name] = []\n res_result[entity_name].append(inputs[start:end])\n\nres_result\n#==output==\n{'PER': ['\u5f90\u570b\u5802'], 'LOC': 
['\u53f0\u5317']}\n```\n\n### \u4f7f\u7528\u65b9\u6cd5(model,tokenizer)\n\n```python\nfrom transformers import AutoModelForTokenClassification, AutoTokenizer\nimport numpy as np\n\n# Load the pre-trained model and tokenizer\nmodel = AutoModelForTokenClassification.from_pretrained('roberthsu2003/models_for_ner')\ntokenizer = AutoTokenizer.from_pretrained('roberthsu2003/models_for_ner')\n\n# The label mapping (you might need to adjust this based on your training)\n#['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']\nlabel_list = list(model.config.id2label.values())\n\n\ndef predict_ner(text):\n \"\"\"Predicts NER tags for a given text using the loaded model.\"\"\"\n # Encode the text\n inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True)\n \n # Get model predictions\n outputs = model(**inputs)\n predictions = np.argmax(outputs.logits.detach().numpy(), axis=-1)\n \n # Get the word IDs from the encoded inputs\n # This is the key change - word_ids() is a method on the encoding result, not the tokenizer itself\n word_ids = inputs.word_ids(batch_index=0)\n \n pred_tags = []\n for word_id, pred in zip(word_ids, predictions[0]):\n if word_id is None:\n continue # Skip special tokens\n pred_tags.append(label_list[pred])\n\n return pred_tags\n\n#To get the entities, you'll need to group consecutive non-O tags:\n\ndef get_entities(tags):\n \"\"\"Groups consecutive NER tags to extract entities.\"\"\"\n entities = []\n start_index = -1\n current_entity_type = None\n for i, tag in enumerate(tags):\n if tag != 'O':\n if start_index == -1:\n start_index = i\n current_entity_type = tag[2:] # Extract entity type (e.g., PER, LOC, ORG)\n else: #tag == 'O'\n if start_index != -1:\n entities.append((start_index, i, current_entity_type))\n start_index = -1\n current_entity_type = None\n if start_index != -1:\n entities.append((start_index, len(tags), current_entity_type))\n return entities\n\n# Example usage:\ntext = \"\u5f90\u570b\u5802\u5728\u53f0\u5317\u4e0a\u73ed\"\nner_tags = predict_ner(text)\nprint(f\"Text: {text}\")\n#==output==\n#Text: \u5f90\u570b\u5802\u5728\u53f0\u5317\u4e0a\u73ed\n\n\nprint(f\"NER Tags: {ner_tags}\")\n#===output==\n#NER Tags: ['B-PER', 'I-PER', 'I-PER', 'O', 'B-LOC', 'I-LOC', 'O', 'O']\n\n\nentities = get_entities(ner_tags)\nword_tokens = tokenizer.tokenize(text) # Tokenize to get individual words\nprint(f\"Entities:\")\nfor start, end, entity_type in entities:\n entity_text = \"\".join(word_tokens[start:end])\n print(f\"- {entity_text}: {entity_type}\")\n\n#==output==\n#Entities:\n#- \u5f90\u570b\u5802: PER\n#- \u53f0\u5317: LOC\n```\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 64\n- eval_batch_size: 128\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | F1 |\n|:-------------:|:-----:|:----:|:---------------:|:------:|\n| 0.0274 | 1.0 | 327 | 0.0204 | 0.9510 |\n| 0.0127 | 2.0 | 654 | 0.0174 | 0.9592 |\n| 0.0063 | 3.0 | 981 | 0.0186 | 0.9602 |\n\n\n### Framework versions\n\n- Transformers 4.48.3\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.2\n- Tokenizers 0.21.0", "metadata": "\"N/A\"", "depth": 1, 
"children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "roberthsu2003/models_for_ner", "base_model_relation": "base" }, { "model_id": "roberthsu2003/models_for_qa_cut", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: models_for_qa_cut\n results: []\n---\n\n\n\n# models_for_qa_cut\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.6446\n\n## Model description\n### \u4f7f\u7528\u8aaa\u660e\n\n```python\nfrom transformers import pipeline\n\npipe = pipeline(\"question-answering\", model=\"roberthsu2003/models_for_qa_cut\")\nanswer = pipe(question=\"\u8521\u82f1\u6587\u4f55\u6642\u5378\u4efb?\",context=\"\u8521\u82f1\u6587\u65bc2024\u5e745\u6708\u5378\u4efb\u4e2d\u83ef\u6c11\u570b\u7e3d\u7d71\uff0c\u4ea4\u68d2\u7d66\u6642\u4efb\u526f\u7e3d\u7d71\u8cf4\u6e05\u5fb7\u3002\u5378\u4efb\u5f8c\u8f03\u5c11\u516c\u958b\u9732\u9762\uff0c\u76f4\u81f32024\u5e7410\u6708\u5979\u53d7\u9080\u8a2a\u554f\u6b50\u6d32\u3002[25]\")\nprint(answer['answer'])\n#'2024\u5e745\u6708'\n\n\ncontext='\u53f0\u7a4d\u96fb\u4e5f\u627f\u8afe\u672a\u4f86\u5728\u53f0\u7063\u7684\u5404\u9805\u6295\u8cc7\u4e0d\u8b8a\uff0c\u8a08\u5283\u672a\u4f86\u5728\u672c\u570b\u5efa\u9020\u4e5d\u5ea7\u5ee0\uff0c\u5305\u62ec\u65b0\u7af9\u3001\u9ad8\u96c4\u3001\u53f0\u4e2d\u3001\u5609\u7fa9\u548c\u53f0\u5357\u7b49\u5730\uff0c\u57282035\u5e74\uff0c\u53f0\u7063\u4ecd\u5c07\u751f\u7522\u9ad8\u905480%\u7684\u6676\u7247\u3002'''\nanswer = pipe(question='\u53f0\u7a4d\u96fb\u672a\u4f86\u8981\u5efa\u7acb\u5e7e\u5ea7\u5ee0',context=context)\nprint(answer['answer'])\nanswer = pipe(question='2035\u5e74\u5728\u53f0\u7063\u751f\u7522\u7684\u6676\u7247\u6bd4\u4f8b?',context=context)\nprint(answer['answer'])\n#\u4e5d\u5ea7\n#80%\n```\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| 0.6584 | 1.0 | 842 | 0.6412 |\n| 0.4002 | 2.0 | 1684 | 0.6446 |\n\n\n### Framework versions\n\n- Transformers 4.48.3\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.2\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "roberthsu2003/models_for_qa_cut", "base_model_relation": "base" }, { "model_id": "jackietung/bert-base-chinese-finetuned-multi-classification", "gated": "False", "card": "---\nlanguage: zh\nlicense: mit\ntags:\n- text-classification\n- bert\n- 
chinese\n- customer feedback\n- app-reviews\ndatasets:\n- custom\nmetrics:\n- accuracy\n- f1\npipeline_tag: text-classification\nwidget:\n- text: \u5546\u54c1\u641c\u5c0b\u9ad4\u9a57\u5f88\u597d\n- text: \u7121\u6cd5\u767b\u5165\u6703\u54e1\u5e33\u865f\n- text: \u7d50\u5e33\u6642\u7cfb\u7d71\u51fa\u932f\nbase_model:\n- google-bert/bert-base-chinese\nlibrary_name: transformers\n---\n\n# BERT \u4e2d\u6587\u6587\u672c\u5206\u985e\u6a21\u578b\n\n\u9019\u500b\u6a21\u578b\u662f\u57fa\u65bc `bert-base-chinese` \u5fae\u8abf\u7684\u6587\u672c\u5206\u985e\u6a21\u578b\uff0c\u53ef\u4ee5\u5c07\u6587\u672c\u5206\u985e\u70ba\u4ee5\u4e0b\u516d\u500b\u985e\u5225\uff1a\n\n- \u6703\u54e1\u767b\u5165\n- \u641c\u5c0b\u529f\u80fd\n- \u5546\u54c1\u76f8\u95dc\n- \u7d50\u5e33\u4ed8\u6b3e\n- \u5ba2\u6236\u670d\u52d9\n- \u5176\u4ed6\n\n## \u6a21\u578b\u63cf\u8ff0\n\n- \u6a21\u578b\u57fa\u65bc bert-base-chinese \u5fae\u8abf\n- \u9069\u7528\u65bcApp\u4e2d\u6587\u8a55\u8ad6\u7684\u60c5\u611f\u5206\u6790\n- \u8f38\u51fa\u6a19\u7c64\uff1a0\uff08\u6703\u54e1\u767b\u5165\uff09\uff0c1\uff08\u641c\u5c0b\u529f\u80fd\uff09\uff0c2\uff08\u5546\u54c1\u76f8\u95dc\uff09\uff0c3\uff08\u7d50\u5e33\u4ed8\u6b3e\uff09\uff0c4\uff08\u5ba2\u6236\u670d\u52d9\uff09\uff0c5\uff08\u5176\u4ed6\uff09\n- \u4f7f\u7528 Focal Loss \u8a13\u7df4\uff0c\u4ee5\u8655\u7406\u985e\u5225\u4e0d\u5e73\u8861\u554f\u984c\n\n## \u4f7f\u7528\u65b9\u6cd5\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\nimport torch\n\n# \u8f09\u5165\u6a21\u578b\u548c\u5206\u8a5e\u5668\ntokenizer = AutoTokenizer.from_pretrained(\"jackietung/bert-base-chinese-multi-classification\")\nmodel = AutoModelForSequenceClassification.from_pretrained(\"jackietung/bert-base-chinese-multi-classification\")\n\n# \u6e96\u5099\u8f38\u5165\ntext = \"\u5546\u54c1\u641c\u5c0b\u9ad4\u9a57\u5f88\u597d\"\ninputs = tokenizer(text, return_tensors=\"pt\", padding=True, truncation=True, max_length=128)\n\n# \u9032\u884c\u9810\u6e2c\nwith torch.no_grad():\n outputs = model(**inputs)\n predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)\n predicted_class = torch.argmax(predictions, dim=-1).item()\n\n# \u985e\u5225\u6620\u5c04\nlabel_map = {\n 0: '\u6703\u54e1\u767b\u5165',\n 1: '\u641c\u5c0b\u529f\u80fd',\n 2: '\u5546\u54c1\u76f8\u95dc',\n 3: '\u7d50\u5e33\u4ed8\u6b3e',\n 4: '\u5ba2\u6236\u670d\u52d9',\n 5: '\u5176\u4ed6'\n}\n\nprint(f\"\u9810\u6e2c\u985e\u5225: {label_map[predicted_class]}\")\nprint(f\"\u9810\u6e2c\u6a5f\u7387: {predictions[0][predicted_class].item():.4f}\")\n```\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "jackietung/bert-base-chinese-finetuned-multi-classification", "base_model_relation": "base" }, { "model_id": "jinchenliuljc/ecom_ner_model", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: ecom_ner_model\n results: []\n---\n\n\n\n# ecom_ner_model\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3748\n- Precision: 0.7042\n- Recall: 0.8002\n- F1: 0.7491\n- Accuracy: 0.8704\n\n## Model 
description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 96\n- eval_batch_size: 96\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| No log | 1.0 | 63 | 0.4615 | 0.6520 | 0.7503 | 0.6977 | 0.8442 |\n| No log | 2.0 | 126 | 0.3863 | 0.7008 | 0.7913 | 0.7433 | 0.8668 |\n| No log | 3.0 | 189 | 0.3748 | 0.7042 | 0.8002 | 0.7491 | 0.8704 |\n\n\n### Framework versions\n\n- Transformers 4.49.0\n- Pytorch 2.6.0+cu124\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "jinchenliuljc/ecom_ner_model", "base_model_relation": "base" }, { "model_id": "hsincho/bert_propaganda_shanghai", "gated": "False", "card": "---\nlicense: mit\nlanguage:\n- zh\nbase_model:\n- google-bert/bert-base-chinese\npipeline_tag: text-classification\ntags:\n- propaganda\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "hsincho/bert_propaganda_shanghai", "base_model_relation": "base" }, { "model_id": "zzz16/Public-analysis", "gated": "unknown", "card": "---\nlicense: apache-2.0\ndatasets:\n- XiangPan/waimai_10k\nlanguage:\n- zh\nmetrics:\n- accuracy\nbase_model:\n- google-bert/bert-base-chinese\n---\n# Introduction\n\nThis model is trained based on the **base_model:google-bert/bert-base-chinese** and **datasets:XiangPan/waimai_10k** for sentiment analysis of reviews on a food delivery platform. 
It is designed to quickly identify negative reviews, allowing merchants to make targeted improvements efficiently.\n\n# How to use\n\n```python\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer\nimport torch\n\n# Device setup\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\n# Load the pretrained model and tokenizer\nmodel_name = \"zzz16/Public-analysis\" # make sure this model path is correct\ntokenizer_name = \"bert-base-chinese\"\nmodel = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)\ntokenizer = AutoTokenizer.from_pretrained(tokenizer_name)\n\n# Input text\ntext = \"\u8fd9\u4e2a\u5916\u5356\u5e73\u53f0\u7684\u670d\u52a1\u5f88\u5dee\u52b2\uff0c\u914d\u9001\u6162\uff0c\u98df\u7269\u4e5f\u51b7\u4e86\u3002\"\n\n# Encode the text with the tokenizer into the model's input format\ninputs = tokenizer(text, padding=True, truncation=True, return_tensors=\"pt\")\ninputs = {key: value.to(device) for key, value in inputs.items()} # move tensors to the device\n\n# Run the model\nwith torch.no_grad():\n outputs = model(**inputs)\n\n# Get the model outputs\nlogits = outputs.logits\npredicted_class = torch.argmax(logits, dim=-1)\n\n# Print the predicted class\nprint(f\"\u9884\u6d4b\u7c7b\u522b: {predicted_class.item()}\")\n```\n\n# Collaboration\n\nWe are developing food-delivery and public-opinion analysis deployments for merchants, enterprises, and platforms, mainly for opinion monitoring and sentiment analysis, so that problems can be addressed quickly and in a targeted way. If your company would like to try it or discuss a collaboration, contact us: 3022656072@qq.com **Please write in Chinese! There is too much English spam, so replies to English mail may be delayed**", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": null, "base_model_relation": null }, { "model_id": "jinchenliuljc/ecommerce-sentiment-analysis", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: results\n results: []\n---\n\n\n\n# results\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2167\n- Accuracy: 0.939\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and 
epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| 0.3675 | 1.0 | 313 | 0.3179 | 0.8912 |\n| 0.1459 | 2.0 | 626 | 0.1266 | 0.9654 |\n| 0.0663 | 3.0 | 939 | 0.0938 | 0.979 |\n\n\n### Framework versions\n\n- Transformers 4.50.0\n- Pytorch 2.6.0+cu124\n- Datasets 3.4.1\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "jinchenliuljc/ecommerce-sentiment-analysis", "base_model_relation": "base" }, { "model_id": "roberthsu2003/models_for_qa_slide", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: models_for_qa_slide\n results: []\ndatasets:\n- roberthsu2003/for_MRC_QA\nlanguage:\n- zh\n---\n\n\n\n# models_for_qa_slide\n\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) \nThe dataset used is roberthsu2003/for_MRC_QA.\n\n## Model description\n\nQuestion answering, \nusing an overflow sliding-window strategy.\n\n## Usage\n\n```python\nfrom transformers import pipeline\n\npipe = pipeline(\"question-answering\", model=\"roberthsu2003/models_for_qa_slide\")\nanswer = pipe(question=\"\u8521\u82f1\u6587\u4f55\u6642\u5378\u4efb?\",context=\"\u8521\u82f1\u6587\u65bc2024\u5e745\u6708\u5378\u4efb\u4e2d\u83ef\u6c11\u570b\u7e3d\u7d71\uff0c\u4ea4\u68d2\u7d66\u6642\u4efb\u526f\u7e3d\u7d71\u8cf4\u6e05\u5fb7\u3002\u5378\u4efb\u5f8c\u8f03\u5c11\u516c\u958b\u9732\u9762\uff0c\u76f4\u81f32024\u5e7410\u6708\u5979\u53d7\u9080\u8a2a\u554f\u6b50\u6d32\u3002[25]\")\nprint(answer['answer'])\n\n# -----------\n\ncontext='\u53f0\u7a4d\u96fb\u4e5f\u627f\u8afe\u672a\u4f86\u5728\u53f0\u7063\u7684\u5404\u9805\u6295\u8cc7\u4e0d\u8b8a\uff0c\u8a08\u5283\u672a\u4f86\u5728\u672c\u570b\u5efa\u9020\u4e5d\u5ea7\u5ee0\uff0c\u5305\u62ec\u65b0\u7af9\u3001\u9ad8\u96c4\u3001\u53f0\u4e2d\u3001\u5609\u7fa9\u548c\u53f0\u5357\u7b49\u5730\uff0c\u57282035\u5e74\uff0c\u53f0\u7063\u4ecd\u5c07\u751f\u7522\u9ad8\u905480%\u7684\u6676\u7247\u3002'\nanswer = pipe(question='\u53f0\u7a4d\u96fb\u672a\u4f86\u8981\u5efa\u7acb\u5e7e\u5ea7\u5ee0',context=context)\nprint(answer['answer'])\nanswer = pipe(question='2035\u5e74\u5728\u53f0\u7063\u751f\u7522\u7684\u6676\u7247\u6bd4\u4f8b?',context=context)\nprint(answer['answer'])\n\n\n```\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Framework versions\n\n- Transformers 4.50.0\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": 
[], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "roberthsu2003/models_for_qa_slide", "base_model_relation": "base" }, { "model_id": "roberthsu2003/for_classification", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\n- f1\nmodel-index:\n- name: for_classification\n results: []\nlicense: apache-2.0\ndatasets:\n- roberthsu2003/data_for_classification\nlanguage:\n- zh\n---\n\n\n\n# for_classification\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.2366\n- Accuracy: 0.9189\n- F1: 0.9415\n\n## \u6a21\u578b\u5be6\u4f5c\n\n```python\nfrom transformers import pipeline\n\nid2_label = {'LABEL_0':\"\u8ca0\u8a55\",'LABEL_1':\"\u6b63\u8a55\"}\npipe = pipeline('text-classification', model=\"roberthsu2003/for_classification\")\n\nsen=\"\u670d\u52d9\u4eba\u54e1\u90fd\u5f88\u89aa\u5207\"\nprint(sen,id2_label[pipe(sen)[0]['label']])\n\nsen1=\"\u670d\u52d9\u4eba\u54e1\u90fd\u4e0d\u89aa\u5207\"\nprint(sen1,id2_label[pipe(sen1)[0]['label']])\n```\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 128\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|\n| 0.2886 | 1.0 | 110 | 0.2269 | 0.9009 | 0.9272 |\n| 0.1799 | 2.0 | 220 | 0.2218 | 0.9112 | 0.9356 |\n| 0.1395 | 3.0 | 330 | 0.2366 | 0.9189 | 0.9415 |\n\n\n### Framework versions\n\n- Transformers 4.50.0\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "roberthsu2003/for_classification", "base_model_relation": "base" }, { "model_id": "tiya0825/MBTI-ScoreModel2.0", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: MBTI-ScoreModel2.0\n results: []\n---\n\n\n\n# MBTI-ScoreModel2.0\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0834\n- Ei Accuracy: 0.8745\n- Ei F1: 0.8745\n- Ei Mcc: 0.7489\n- Sn Accuracy: 0.7764\n- Sn F1: 0.7764\n- Sn Mcc: 0.5529\n- Ft Accuracy: 0.8251\n- Ft F1: 0.8235\n- Ft Mcc: 0.6593\n- Jp Accuracy: 0.8231\n- Jp F1: 0.8229\n- Jp Mcc: 0.6496\n- Order Accuracy: 0.6844\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information 
needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Ei Accuracy | Ei F1 | Ei Mcc | Sn Accuracy | Sn F1 | Sn Mcc | Ft Accuracy | Ft F1 | Ft Mcc | Jp Accuracy | Jp F1 | Jp Mcc | Order Accuracy |\n|:-------------:|:------:|:-----:|:---------------:|:-----------:|:------:|:------:|:-----------:|:------:|:------:|:-----------:|:------:|:------:|:-----------:|:------:|:------:|:--------------:|\n| 0.1372 | 0.0797 | 500 | 0.1065 | 0.7782 | 0.7762 | 0.5623 | 0.6522 | 0.6411 | 0.3267 | 0.7363 | 0.7363 | 0.4726 | 0.7271 | 0.7271 | 0.4549 | 0.6865 |\n| 0.1549 | 0.1593 | 1000 | 0.1146 | 0.7454 | 0.7417 | 0.5115 | 0.6879 | 0.6874 | 0.3769 | 0.7268 | 0.7214 | 0.4684 | 0.7101 | 0.7056 | 0.4284 | 0.6967 |\n| 0.1289 | 0.2390 | 1500 | 0.0995 | 0.7894 | 0.7891 | 0.5789 | 0.6991 | 0.6991 | 0.3983 | 0.7363 | 0.7327 | 0.4828 | 0.7358 | 0.7318 | 0.4925 | 0.7121 |\n| 0.1179 | 0.3186 | 2000 | 0.0969 | 0.7959 | 0.7956 | 0.5918 | 0.6631 | 0.6436 | 0.3674 | 0.7226 | 0.7126 | 0.4743 | 0.7420 | 0.7410 | 0.4852 | 0.7173 |\n| 0.1066 | 0.3983 | 2500 | 0.0946 | 0.7844 | 0.7828 | 0.5823 | 0.7107 | 0.7046 | 0.4393 | 0.7041 | 0.6868 | 0.4550 | 0.7571 | 0.7566 | 0.5145 | 0.7156 |\n| 0.1082 | 0.4779 | 3000 | 0.0915 | 0.7442 | 0.7411 | 0.5062 | 0.6730 | 0.6537 | 0.3906 | 0.7511 | 0.7499 | 0.5095 | 0.6984 | 0.6850 | 0.4255 | 0.7162 |\n| 0.1013 | 0.5576 | 3500 | 0.0917 | 0.7912 | 0.7905 | 0.5900 | 0.7018 | 0.7006 | 0.4071 | 0.7611 | 0.7595 | 0.5271 | 0.7538 | 0.7536 | 0.5075 | 0.7105 |\n| 0.0996 | 0.6372 | 4000 | 0.0873 | 0.8165 | 0.8164 | 0.6328 | 0.7082 | 0.7082 | 0.4164 | 0.7578 | 0.7578 | 0.5156 | 0.7592 | 0.7585 | 0.5239 | 0.7152 |\n| 0.0945 | 0.7169 | 4500 | 0.0891 | 0.8246 | 0.8240 | 0.6509 | 0.6712 | 0.6569 | 0.3764 | 0.7273 | 0.7171 | 0.4858 | 0.7598 | 0.7596 | 0.5194 | 0.7190 |\n| 0.0993 | 0.7966 | 5000 | 0.0874 | 0.7742 | 0.7742 | 0.5484 | 0.7097 | 0.7077 | 0.4246 | 0.7491 | 0.7458 | 0.5085 | 0.7392 | 0.7358 | 0.4861 | 0.7197 |\n| 0.0949 | 0.8762 | 5500 | 0.0846 | 0.7910 | 0.7904 | 0.5883 | 0.7148 | 0.7130 | 0.4347 | 0.7645 | 0.7635 | 0.5358 | 0.7509 | 0.7490 | 0.5059 | 0.7215 |\n| 0.0916 | 0.9559 | 6000 | 0.0834 | 0.8179 | 0.8151 | 0.6496 | 0.7143 | 0.7133 | 0.4314 | 0.7646 | 0.7646 | 0.5294 | 0.7543 | 0.7533 | 0.5103 | 0.7217 |\n| 0.0916 | 1.0355 | 6500 | 0.0865 | 0.7677 | 0.7652 | 0.5533 | 0.7016 | 0.6975 | 0.4153 | 0.7320 | 0.7210 | 0.4996 | 0.7523 | 0.7497 | 0.5203 | 0.7233 |\n| 0.0858 | 1.1152 | 7000 | 0.0873 | 0.8212 | 0.8195 | 0.6494 | 0.7035 | 0.7002 | 0.4167 | 0.7664 | 0.7658 | 0.5343 | 0.7590 | 0.7544 | 0.5455 | 0.7236 |\n| 0.0799 | 1.1948 | 7500 | 0.0857 | 0.8088 | 0.8078 | 0.6277 | 0.7248 | 0.7239 | 0.4525 | 0.7532 | 0.7476 | 0.5261 | 0.7498 | 0.7431 | 0.5365 | 0.7182 |\n| 0.0776 | 1.2745 | 8000 | 0.0871 | 0.8137 | 0.8102 | 0.6447 | 0.7250 | 0.7246 | 0.4510 | 0.7092 | 0.6884 | 0.4802 | 0.7660 | 0.7641 | 0.5460 | 0.7170 |\n| 0.0908 | 1.3542 | 8500 | 0.0852 | 0.8239 | 0.8239 | 0.6478 | 0.7061 | 0.7016 | 0.4259 | 0.7779 | 0.7779 | 0.5569 | 0.7569 | 0.7549 | 0.5185 | 0.7168 |\n| 0.082 | 1.4338 | 9000 | 0.0823 | 0.8297 | 0.8296 | 0.6593 | 0.7183 | 0.7167 | 0.4420 | 0.7700 | 0.7674 | 0.5558 | 0.7760 | 
0.7757 | 0.5556 | 0.7175 |\n| 0.0758 | 1.5135 | 9500 | 0.0813 | 0.8223 | 0.8190 | 0.6623 | 0.7111 | 0.7076 | 0.4336 | 0.7815 | 0.7786 | 0.5752 | 0.7800 | 0.7794 | 0.5609 | 0.7177 |\n| 0.0771 | 1.5931 | 10000 | 0.0797 | 0.8295 | 0.8281 | 0.6652 | 0.7248 | 0.7227 | 0.4574 | 0.7827 | 0.7817 | 0.5691 | 0.7652 | 0.7621 | 0.5512 | 0.7232 |\n| 0.0778 | 1.6728 | 10500 | 0.0836 | 0.8008 | 0.8008 | 0.6028 | 0.6927 | 0.6804 | 0.4207 | 0.7854 | 0.7853 | 0.5710 | 0.7138 | 0.6991 | 0.4657 | 0.7168 |\n| 0.0836 | 1.7524 | 11000 | 0.0797 | 0.8429 | 0.8423 | 0.6877 | 0.7290 | 0.7289 | 0.4587 | 0.7888 | 0.7883 | 0.5790 | 0.7811 | 0.7811 | 0.5627 | 0.7194 |\n| 0.0756 | 1.8321 | 11500 | 0.0802 | 0.8294 | 0.8292 | 0.6589 | 0.6930 | 0.6832 | 0.4136 | 0.7504 | 0.7408 | 0.5365 | 0.7787 | 0.7763 | 0.5754 | 0.7223 |\n| 0.0738 | 1.9117 | 12000 | 0.0802 | 0.8213 | 0.8209 | 0.6492 | 0.7353 | 0.7348 | 0.4720 | 0.7825 | 0.7805 | 0.5727 | 0.7783 | 0.7765 | 0.5704 | 0.7201 |\n| 0.0718 | 1.9914 | 12500 | 0.0784 | 0.8311 | 0.8311 | 0.6621 | 0.7382 | 0.7382 | 0.4764 | 0.7716 | 0.7682 | 0.5567 | 0.7718 | 0.7711 | 0.5450 | 0.7230 |\n| 0.0712 | 2.0711 | 13000 | 0.0860 | 0.8454 | 0.8453 | 0.6907 | 0.7419 | 0.7418 | 0.4838 | 0.7500 | 0.7399 | 0.5380 | 0.7789 | 0.7761 | 0.5782 | 0.7138 |\n| 0.0649 | 2.1507 | 13500 | 0.0802 | 0.8271 | 0.8269 | 0.6573 | 0.7409 | 0.7406 | 0.4829 | 0.7922 | 0.7911 | 0.5890 | 0.7837 | 0.7829 | 0.5751 | 0.7103 |\n| 0.0618 | 2.2304 | 14000 | 0.0820 | 0.8218 | 0.8213 | 0.6507 | 0.7321 | 0.7320 | 0.4642 | 0.7904 | 0.7874 | 0.5942 | 0.7900 | 0.7899 | 0.5798 | 0.7084 |\n| 0.066 | 2.3100 | 14500 | 0.0812 | 0.8190 | 0.8188 | 0.6421 | 0.7268 | 0.7210 | 0.4725 | 0.7820 | 0.7794 | 0.5747 | 0.7915 | 0.7912 | 0.5870 | 0.7143 |\n| 0.0655 | 2.3897 | 15000 | 0.0788 | 0.8411 | 0.8410 | 0.6821 | 0.7374 | 0.7370 | 0.4769 | 0.8027 | 0.8027 | 0.6054 | 0.7978 | 0.7977 | 0.5980 | 0.7148 |\n| 0.0626 | 2.4693 | 15500 | 0.0803 | 0.8235 | 0.8234 | 0.6502 | 0.7454 | 0.7449 | 0.4927 | 0.8072 | 0.8067 | 0.6162 | 0.7721 | 0.7692 | 0.5531 | 0.7075 |\n| 0.0687 | 2.5490 | 16000 | 0.0795 | 0.8483 | 0.8481 | 0.6969 | 0.7452 | 0.7439 | 0.4948 | 0.7882 | 0.7852 | 0.5901 | 0.7900 | 0.7894 | 0.5862 | 0.7108 |\n| 0.0653 | 2.6286 | 16500 | 0.0817 | 0.8365 | 0.8364 | 0.6751 | 0.7454 | 0.7454 | 0.4909 | 0.8019 | 0.8007 | 0.6090 | 0.7935 | 0.7922 | 0.5988 | 0.7042 |\n| 0.0644 | 2.7083 | 17000 | 0.0826 | 0.8543 | 0.8541 | 0.7090 | 0.7495 | 0.7483 | 0.5032 | 0.7982 | 0.7968 | 0.6022 | 0.7785 | 0.7759 | 0.5764 | 0.7060 |\n| 0.0605 | 2.7880 | 17500 | 0.0797 | 0.8575 | 0.8572 | 0.7158 | 0.7473 | 0.7461 | 0.4989 | 0.8083 | 0.8073 | 0.6206 | 0.8004 | 0.8001 | 0.6045 | 0.7089 |\n| 0.0656 | 2.8676 | 18000 | 0.0781 | 0.8519 | 0.8506 | 0.7115 | 0.7573 | 0.7571 | 0.5154 | 0.7962 | 0.7933 | 0.6061 | 0.7972 | 0.7962 | 0.6047 | 0.7105 |\n| 0.0585 | 2.9473 | 18500 | 0.0781 | 0.8429 | 0.8429 | 0.6872 | 0.7541 | 0.7541 | 0.5083 | 0.8114 | 0.8104 | 0.6271 | 0.8036 | 0.8029 | 0.6145 | 0.7068 |\n| 0.0533 | 3.0269 | 19000 | 0.0814 | 0.8602 | 0.8600 | 0.7204 | 0.7505 | 0.7480 | 0.5101 | 0.7785 | 0.7721 | 0.5855 | 0.8022 | 0.8018 | 0.6049 | 0.7010 |\n| 0.0501 | 3.1066 | 19500 | 0.0810 | 0.8560 | 0.8560 | 0.7126 | 0.7348 | 0.7320 | 0.4806 | 0.8155 | 0.8149 | 0.6338 | 0.8053 | 0.8052 | 0.6107 | 0.7005 |\n| 0.0537 | 3.1862 | 20000 | 0.0819 | 0.8512 | 0.8511 | 0.7038 | 0.7627 | 0.7614 | 0.5308 | 0.8194 | 0.8194 | 0.6391 | 0.8100 | 0.8100 | 0.6201 | 0.6966 |\n| 0.0543 | 3.2659 | 20500 | 0.0808 | 0.8622 | 0.8616 | 0.7274 | 0.7607 | 0.7607 | 0.5215 | 0.8157 | 0.8149 | 0.6352 | 
0.8035 | 0.8034 | 0.6069 | 0.6990 |\n| 0.0457 | 3.3455 | 21000 | 0.0816 | 0.8543 | 0.8542 | 0.7087 | 0.7629 | 0.7615 | 0.5313 | 0.8160 | 0.8144 | 0.6408 | 0.8092 | 0.8088 | 0.6242 | 0.6978 |\n| 0.0551 | 3.4252 | 21500 | 0.0821 | 0.8655 | 0.8649 | 0.7341 | 0.7525 | 0.7519 | 0.5079 | 0.8170 | 0.8156 | 0.6415 | 0.8082 | 0.8082 | 0.6164 | 0.6915 |\n| 0.0548 | 3.5049 | 22000 | 0.0825 | 0.8594 | 0.8593 | 0.7187 | 0.7611 | 0.7609 | 0.5236 | 0.7895 | 0.7840 | 0.6059 | 0.8092 | 0.8092 | 0.6200 | 0.6940 |\n| 0.057 | 3.5845 | 22500 | 0.0800 | 0.8715 | 0.8712 | 0.7445 | 0.7672 | 0.7671 | 0.5350 | 0.8216 | 0.8205 | 0.6496 | 0.8176 | 0.8175 | 0.6370 | 0.6946 |\n| 0.0535 | 3.6642 | 23000 | 0.0808 | 0.8639 | 0.8637 | 0.7284 | 0.7488 | 0.7471 | 0.5049 | 0.8027 | 0.7989 | 0.6256 | 0.8013 | 0.8012 | 0.6024 | 0.6978 |\n| 0.0553 | 3.7438 | 23500 | 0.0807 | 0.8641 | 0.8641 | 0.7282 | 0.7644 | 0.7642 | 0.5297 | 0.8263 | 0.8256 | 0.6555 | 0.8116 | 0.8111 | 0.6301 | 0.6947 |\n| 0.0539 | 3.8235 | 24000 | 0.0830 | 0.8404 | 0.8399 | 0.6882 | 0.7681 | 0.7677 | 0.5377 | 0.8167 | 0.8148 | 0.6433 | 0.8131 | 0.8130 | 0.6284 | 0.6959 |\n| 0.0514 | 3.9031 | 24500 | 0.0803 | 0.8690 | 0.8690 | 0.7380 | 0.7675 | 0.7670 | 0.5380 | 0.7991 | 0.7943 | 0.6232 | 0.8153 | 0.8152 | 0.6331 | 0.6926 |\n| 0.0511 | 3.9828 | 25000 | 0.0811 | 0.8596 | 0.8596 | 0.7202 | 0.7609 | 0.7600 | 0.5261 | 0.8038 | 0.7997 | 0.6294 | 0.8160 | 0.8160 | 0.6319 | 0.6951 |\n| 0.0407 | 4.0625 | 25500 | 0.0826 | 0.8702 | 0.8699 | 0.7419 | 0.7717 | 0.7706 | 0.5482 | 0.8200 | 0.8180 | 0.6515 | 0.8173 | 0.8173 | 0.6355 | 0.6900 |\n| 0.0409 | 4.1421 | 26000 | 0.0838 | 0.8660 | 0.8660 | 0.7322 | 0.7679 | 0.7676 | 0.5370 | 0.8014 | 0.7974 | 0.6236 | 0.8155 | 0.8155 | 0.6310 | 0.6872 |\n| 0.0409 | 4.2218 | 26500 | 0.0841 | 0.8651 | 0.8650 | 0.7303 | 0.7666 | 0.7658 | 0.5374 | 0.8175 | 0.8151 | 0.6486 | 0.8201 | 0.8200 | 0.6431 | 0.6845 |\n| 0.0388 | 4.3014 | 27000 | 0.0828 | 0.8700 | 0.8700 | 0.7399 | 0.7615 | 0.7606 | 0.5271 | 0.8199 | 0.8180 | 0.6505 | 0.8173 | 0.8171 | 0.6369 | 0.6888 |\n| 0.0385 | 4.3811 | 27500 | 0.0831 | 0.8623 | 0.8623 | 0.7267 | 0.7762 | 0.7760 | 0.5532 | 0.8283 | 0.8276 | 0.6602 | 0.8230 | 0.8227 | 0.6495 | 0.6878 |\n| 0.0432 | 4.4607 | 28000 | 0.0855 | 0.8553 | 0.8552 | 0.7146 | 0.7752 | 0.7752 | 0.5507 | 0.7979 | 0.7931 | 0.6211 | 0.8188 | 0.8181 | 0.6470 | 0.6853 |\n| 0.0393 | 4.5404 | 28500 | 0.0832 | 0.8620 | 0.8620 | 0.7258 | 0.7769 | 0.7768 | 0.5538 | 0.8325 | 0.8320 | 0.6679 | 0.8199 | 0.8199 | 0.6410 | 0.6857 |\n| 0.0421 | 4.6200 | 29000 | 0.0832 | 0.8652 | 0.8651 | 0.7321 | 0.7729 | 0.7726 | 0.5475 | 0.8337 | 0.8334 | 0.6689 | 0.8208 | 0.8206 | 0.6456 | 0.6843 |\n| 0.041 | 4.6997 | 29500 | 0.0835 | 0.8726 | 0.8725 | 0.7451 | 0.7782 | 0.7781 | 0.5567 | 0.8288 | 0.8276 | 0.6643 | 0.8207 | 0.8207 | 0.6422 | 0.6837 |\n| 0.0424 | 4.7794 | 30000 | 0.0827 | 0.8762 | 0.8761 | 0.7527 | 0.7767 | 0.7767 | 0.5534 | 0.8307 | 0.8297 | 0.6667 | 0.8228 | 0.8227 | 0.6489 | 0.6853 |\n| 0.0428 | 4.8590 | 30500 | 0.0831 | 0.8753 | 0.8753 | 0.7505 | 0.7730 | 0.7727 | 0.5483 | 0.8281 | 0.8269 | 0.6635 | 0.8234 | 0.8233 | 0.6499 | 0.6849 |\n| 0.04 | 4.9387 | 31000 | 0.0834 | 0.8745 | 0.8745 | 0.7489 | 0.7764 | 0.7764 | 0.5529 | 0.8251 | 0.8235 | 0.6593 | 0.8231 | 0.8229 | 0.6496 | 0.6844 |\n\n\n### Framework versions\n\n- Transformers 4.47.1\n- Pytorch 2.3.0+cu121\n- Datasets 3.2.0\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, 
"merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "tiya0825/MBTI-ScoreModel2.0", "base_model_relation": "base" }, { "model_id": "colourrain/bert_cn_sst", "gated": "False", "card": "---\nlibrary_name: transformers\nlanguage:\n- en\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\ndatasets:\n- glue\nmetrics:\n- accuracy\nmodel-index:\n- name: sst2\n results:\n - task:\n name: Text Classification\n type: text-classification\n dataset:\n name: GLUE SST2\n type: glue\n args: sst2\n metrics:\n - name: Accuracy\n type: accuracy\n value: 0.8130733944954128\n---\n\n\n\n# sst2\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on the GLUE SST2 dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4419\n- Accuracy: 0.8131\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 1.0\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.51.0.dev0\n- Pytorch 2.5.1\n- Datasets 3.5.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "colourrain/bert_cn_sst", "base_model_relation": "base" }, { "model_id": "roberthsu2003/for_multiple_choice", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: for_multiple_choice\n results: []\nlicense: apache-2.0\ndatasets:\n- roberthsu2003/for_Multiple_Choice\nlanguage:\n- zh\n---\n\n\n\n# for_multiple_choice\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.3109\n- Accuracy: 0.5962\n\n## \u6a21\u578b\u7684\u4f7f\u7528\n\nfrom transformers import AutoTokenizer, AutoModelForMultipleChoice\nfrom typing import Any\nimport torch\n\ntokenizer = AutoTokenizer.from_pretrained('roberthsu2003/for_multiple_choice')\nmodel = AutoModelForMultipleChoice.from_pretrained('roberthsu2003/for_multiple_choice')\n\nfrom typing import Any\nimport torch\n\nclass MultipleChoicePipeline:\n def __init__(self, model, tokenizer) -> None:\n self.model = model\n self.tokenizer = tokenizer\n self.device = model.device\n\n def preprocess(self, context, question, choices):\n cs, qcs = [], []\n for choice in choices:\n cs.append(context)\n qcs.append(question + \" \" + choice)\n return tokenizer(cs, qcs, truncation=\"only_first\", max_length=256, return_tensors=\"pt\")\n\n def predict(self, inputs):\n inputs = {k: v.unsqueeze(0).to(self.device) for k, v in inputs.items()}\n return 
self.model(**inputs).logits\n\n def postprocess(self, logits, choices):\n predition = torch.argmax(logits, dim=-1).cpu().item()\n return choices[predition]\n\n def __call__(self, context, question, choices) -> Any:\n inputs = self.preprocess(context,question,choices)\n logits = self.predict(inputs)\n result = self.postprocess(logits, choices)\n return result\n\nif __name__ == \"__main__\":\n pipe = MultipleChoicePipeline(model, tokenizer)\n result1 = pipe(\"\u7537\uff1a\u4f60\u4eca\u5929\u665a\u4e0a\u6709\u6642\u9593\u55ce?\u6211\u5011\u4e00\u8d77\u53bb\u770b\u96fb\u5f71\u5427? \u5973\uff1a\u4f60\u559c\u6b61\u6050\u6016\u7247\u548c\u611b\u60c5\u7247\uff0c\u4f46\u662f\u6211\u559c\u6b61\u559c\u5287\u7247\",\"\u5973\u7684\u6700\u559c\u6b61\u54ea\u7a2e\u96fb\u5f71?\",[\"\u6050\u6016\u7247\",\"\u611b\u60c5\u7247\",\"\u559c\u5287\u7247\",\"\u79d1\u5e7b\u7247\"])\n print(result1)\n\n```\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|\n| 0.9816 | 1.0 | 366 | 0.9955 | 0.5814 |\n| 0.7299 | 2.0 | 732 | 1.0239 | 0.5918 |\n| 0.3452 | 3.0 | 1098 | 1.3109 | 0.5962 |\n\n\n### Framework versions\n\n- Transformers 4.50.2\n- Pytorch 2.6.0+cu124\n- Datasets 3.5.0\n- Tokenizers 0.21.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "roberthsu2003/for_multiple_choice", "base_model_relation": "base" }, { "model_id": "roberthsu2003/sentence_similarity", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\n- f1\nmodel-index:\n- name: sentence_similarity\n results: []\ndatasets:\n- roberthsu2003/for_Sentence_Similarity\nlanguage:\n- zh\n---\n\n\n\n# sentence_similarity\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3474\n- Accuracy: 0.897\n- F1: 0.8652\n\n## \u6a21\u578b\u4f7f\u7528\n\n```python\n# Use a pipeline as a high-level helper\nfrom transformers import pipeline\n\npipe = pipeline(\"text-classification\", model=\"roberthsu2003/sentence_similarity\")\npipe({\"text\":\"\u6211\u559c\u6b61\u53f0\u5317\", \"text_pair\":\"\u53f0\u5317\u662f\u6211\u559c\u6b61\u7684\u5730\u65b9\"})\n\n#=======output=====\n{'label': '\u76f8\u4f3c', 'score': 0.8854433298110962}\n```\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- 
train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |\n|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|\n| 0.2928 | 1.0 | 250 | 0.2737 | 0.887 | 0.8546 |\n| 0.1815 | 2.0 | 500 | 0.2596 | 0.8985 | 0.8741 |\n| 0.1203 | 3.0 | 750 | 0.3474 | 0.897 | 0.8652 |\n\n\n### Framework versions\n\n- Transformers 4.50.3\n- Pytorch 2.6.0+cu124\n- Tokenizers 0.21.1", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "roberthsu2003/sentence_similarity", "base_model_relation": "base" }, { "model_id": "KingLear/Philosophy_google-bert-base-chinese", "gated": "False", "card": "---\nlanguage:\n- zh\nbase_model:\n- google-bert/bert-base-chinese\npipeline_tag: question-answering\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "KingLear/Philosophy_google-bert-base-chinese", "base_model_relation": "base" }, { "model_id": "Nice2meetuwu/Bert-Base-Chinese-for-stock", "gated": "False", "card": "---\nlicense: mit\nbase_model:\n- google-bert/bert-base-chinese\npipeline_tag: text-classification\ntags:\n- finance\n---", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Nice2meetuwu/Bert-Base-Chinese-for-stock", "base_model_relation": "base" }, { "model_id": "luohuashijieyoufengjun/ner_based_bert-base-chinese", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: ner_based_bert-base-chinese\n results: []\n---\n\n\n\n# ner_based_bert-base-chinese\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0171\n- Precision: 0.9610\n- Recall: 0.9716\n- F1: 0.9663\n- Accuracy: 0.9973\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 128\n- eval_batch_size: 128\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 15\n- mixed_precision_training: Native AMP\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | 
Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| 0.0941 | 1.0 | 650 | 0.0194 | 0.9150 | 0.9327 | 0.9237 | 0.9943 |\n| 0.0193 | 2.0 | 1300 | 0.0160 | 0.9282 | 0.9546 | 0.9412 | 0.9954 |\n| 0.0149 | 3.0 | 1950 | 0.0142 | 0.9477 | 0.9577 | 0.9527 | 0.9964 |\n| 0.0088 | 4.0 | 2600 | 0.0128 | 0.9551 | 0.9604 | 0.9577 | 0.9967 |\n| 0.0069 | 5.0 | 3250 | 0.0135 | 0.9567 | 0.9635 | 0.9601 | 0.9968 |\n| 0.0056 | 6.0 | 3900 | 0.0134 | 0.9552 | 0.9669 | 0.9610 | 0.9970 |\n| 0.0037 | 7.0 | 4550 | 0.0137 | 0.9592 | 0.9688 | 0.9640 | 0.9971 |\n| 0.0031 | 8.0 | 5200 | 0.0144 | 0.9592 | 0.9673 | 0.9632 | 0.9971 |\n| 0.0026 | 9.0 | 5850 | 0.0157 | 0.9536 | 0.9711 | 0.9623 | 0.9970 |\n| 0.0019 | 10.0 | 6500 | 0.0159 | 0.9586 | 0.9706 | 0.9646 | 0.9971 |\n| 0.0016 | 11.0 | 7150 | 0.0163 | 0.9592 | 0.9711 | 0.9651 | 0.9972 |\n| 0.0015 | 12.0 | 7800 | 0.0164 | 0.9621 | 0.9702 | 0.9661 | 0.9972 |\n| 0.0013 | 13.0 | 8450 | 0.0166 | 0.9625 | 0.9714 | 0.9669 | 0.9973 |\n| 0.001 | 14.0 | 9100 | 0.0171 | 0.9624 | 0.9711 | 0.9667 | 0.9973 |\n| 0.0009 | 15.0 | 9750 | 0.0171 | 0.9610 | 0.9716 | 0.9663 | 0.9973 |\n\n\n### Framework versions\n\n- Transformers 4.51.3\n- Pytorch 2.7.0+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "luohuashijieyoufengjun/ner_based_bert-base-chinese", "base_model_relation": "base" }, { "model_id": "li1212/bert-base-chinese-finetuned-moviereviews-mask-tf", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: li1212/bert-base-chinese-finetuned-moviereviews-mask-tf\n results: []\n---\n\n\n\n# li1212/bert-base-chinese-finetuned-moviereviews-mask-tf\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 5.3915\n- Validation Loss: 5.2672\n- Epoch: 2\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 438, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 30, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}\n- training_precision: mixed_float16\n\n### Training results\n\n| Train Loss | Validation Loss | Epoch |\n|:----------:|:---------------:|:-----:|\n| 6.8286 | 5.7954 | 0 |\n| 5.6379 | 5.4138 | 1 |\n| 5.3915 | 5.2672 | 2 |\n\n\n### Framework versions\n\n- Transformers 4.51.1\n- TensorFlow 2.18.0\n- Datasets 3.5.0\n- Tokenizers 0.21.0\n", 
"metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "li1212/bert-base-chinese-finetuned-moviereviews-mask-tf", "base_model_relation": "base" }, { "model_id": "left0ver/bert-base-chinese-finetune-sentiment-classification", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: mit\ndatasets:\n- left0ver/sentiment-classification\nlanguage:\n- zh\nmetrics:\n- accuracy\nbase_model:\n- google-bert/bert-base-chinese\n---\n# Introduction\n\u8fd9\u662f\u4e00\u4e2a\u57fa\u4e8ebert-base-chinese\u5fae\u8c03\u7684\u60c5\u611f\u5206\u7c7b\u7684\u6a21\u578b\uff0c\u51c6\u786e\u7387\u5927\u6982\u4e3a94.6% \uff0c\u6570\u636e\u96c6\u4e3a[sentiment-classification](https://huggingface.co/datasets/left0ver/sentiment-classification),\u4e00\u4e2a\u5b66\u4e60\u9879\u76ee\uff0c\u65e8\u5728\u5b66\u4e60NLP\u7684\u57fa\u7840\u77e5\u8bc6\u4ee5\u53ca\u4e86\u89e3hugging face\u751f\u6001\u3002\n\n\u4ee3\u7801\u8bf7\u67e5\u770b[Sentiment-Classification](https://github.com/left0ver/Sentiment-Classification),\u5177\u4f53\u7684\u7ec6\u8282\u53ef\u4ee5\u67e5\u770b\u6211\u7684[\u535a\u5ba2](https://blog.leftover.cn/2025/05/17/NLP%E6%83%85%E6%84%9F%E5%88%86%E7%B1%BB/)\n# Usage\n```python\n\nfrom transformers.models.auto.tokenization_auto import AutoTokenizer\nfrom transformers.models.auto.modeling_auto import AutoModelForSequenceClassification\n\nfrom transformers.training_args import TrainingArguments\nfrom transformers.data.data_collator import DataCollatorWithPadding\nfrom transformers.trainer import Trainer\nfrom transformers.trainer_utils import EvalPrediction\nfrom datasets import load_dataset, Features, Value, ClassLabel\n\n\nmodel = AutoModelForSequenceClassification.from_pretrained(\"left0ver/bert-base-chinese-finetune-sentiment-classification\",num_labels=2)\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-chinese\",return_tensors=\"pt\")\n\n\ndataset = load_dataset(\"left0ver/sentiment-classification\")\ntokenized_dataset = dataset.map(\n lambda examples: tokenizer(examples[\"text\"],truncation=True,max_length=512),\n batched=True,\n remove_columns=[\"text\"],\n)\ndata_collator = DataCollatorWithPadding(tokenizer=tokenizer,padding=True)\ntraining_args = TrainingArguments(\n output_dir=\"./char_based_bert_finetune\",\n num_train_epochs =10,\n eval_strategy = \"epoch\",\n per_device_train_batch_size =64,\n per_device_eval_batch_size=32,\n gradient_accumulation_steps =1,\n learning_rate = 1e-6,\n lr_scheduler_type = \"cosine\",\n logging_strategy= \"steps\",\n logging_steps = 20,\n save_strategy = \"epoch\",\n save_total_limit = 4,\n seed = 42,\n data_seed = 42,\n load_best_model_at_end=True,\n # \u6307\u5b9alabel\u7684\u5b57\u6bb5\n label_names=[\"labels\"],\n run_name=\"char_based_bert_finetune\",\n report_to=\"wandb\",\n metric_for_best_model=\"eval_accuracy\",\n greater_is_better=True,\n optim=\"adamw_torch\",\n # eval_on_start=True, # just for test eval\n)\ndef compute_metrics(eval_pred:EvalPrediction):\n predictions, labels = eval_pred\n accuracy = (predictions == labels).mean()\n return {\n 'accuracy': accuracy,\n }\n\ndef preprocess_logits_for_metrics(logits, labels):\n predictions = logits.argmax(axis=1)\n return predictions\n\ntrainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=tokenized_dataset[\"train\"],\n 
eval_dataset=tokenized_dataset[\"validation\"],\n compute_metrics=compute_metrics,\n # tokenizer = tokenizer,\n processing_class = tokenizer,\n data_collator=data_collator,\n preprocess_logits_for_metrics =preprocess_logits_for_metrics,\n)\ntrainer.evaluate()\n\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "left0ver/bert-base-chinese-finetune-sentiment-classification", "base_model_relation": "base" }, { "model_id": "ZON8955/NER_demo", "gated": "unknown", "card": "---\nlibrary_name: transformers\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: NER_demo\n results: []\n---\n\n\n\n# NER_demo\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0035\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 6\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| No log | 1.0 | 7 | 0.3060 |\n| 0.6805 | 2.0 | 14 | 0.0752 |\n| 0.1039 | 3.0 | 21 | 0.0235 |\n| 0.1039 | 4.0 | 28 | 0.0293 |\n| 0.0319 | 5.0 | 35 | 0.0059 |\n| 0.0237 | 6.0 | 42 | 0.0035 |\n\n\n### Framework versions\n\n- Transformers 4.52.2\n- Pytorch 2.6.0+cu124\n- Datasets 2.14.4\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": null, "base_model_relation": null }, { "model_id": "luohuashijieyoufengjun/ner_based_bert-base-chinese-only-phone", "gated": "unknown", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: ner_based_bert-base-chinese-only-phone\n results: []\n---\n\n\n\n# ner_based_bert-base-chinese-only-phone\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0012\n- Precision: 1.0\n- Recall: 1.0\n- F1: 1.0\n- Accuracy: 1.0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 128\n- eval_batch_size: 128\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and 
epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 10\n- mixed_precision_training: Native AMP\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| No log | 1.0 | 6 | 0.4001 | 0.5696 | 0.625 | 0.5960 | 0.864 |\n| No log | 2.0 | 12 | 0.0797 | 0.9722 | 0.9722 | 0.9722 | 0.997 |\n| No log | 3.0 | 18 | 0.0153 | 0.9861 | 0.9861 | 0.9861 | 0.999 |\n| No log | 4.0 | 24 | 0.0042 | 1.0 | 1.0 | 1.0 | 1.0 |\n| No log | 5.0 | 30 | 0.0029 | 1.0 | 1.0 | 1.0 | 1.0 |\n| No log | 6.0 | 36 | 0.0017 | 1.0 | 1.0 | 1.0 | 1.0 |\n| No log | 7.0 | 42 | 0.0034 | 0.9726 | 0.9861 | 0.9793 | 0.999 |\n| No log | 8.0 | 48 | 0.0017 | 1.0 | 1.0 | 1.0 | 1.0 |\n| No log | 9.0 | 54 | 0.0013 | 1.0 | 1.0 | 1.0 | 1.0 |\n| No log | 10.0 | 60 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |\n\n\n### Framework versions\n\n- Transformers 4.51.3\n- Pytorch 2.7.0+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": null, "base_model_relation": null }, { "model_id": "luohuashijieyoufengjun/ner_based_bert-base-chinese-only-phone1", "gated": "unknown", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: ner_based_bert-base-chinese-only-phone1\n results: []\n---\n\n\n\n# ner_based_bert-base-chinese-only-phone1\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0202\n- Precision: 0.9861\n- Recall: 0.9861\n- F1: 0.9861\n- Accuracy: 0.9969\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 128\n- eval_batch_size: 128\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 10\n- mixed_precision_training: Native AMP\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| No log | 1.0 | 6 | 0.6782 | 0.3163 | 0.4306 | 0.3647 | 0.7385 |\n| No log | 2.0 | 12 | 0.2690 | 0.2840 | 0.3194 | 0.3007 | 0.9323 |\n| No log | 3.0 | 18 | 0.1381 | 0.3721 | 0.4444 | 0.4051 | 0.9417 |\n| No log | 4.0 | 24 | 0.0900 | 0.6216 | 0.6389 | 0.6301 | 0.9688 |\n| No log | 5.0 | 30 | 0.0588 | 0.9067 | 0.9444 | 0.9252 | 0.9896 |\n| No log | 6.0 | 36 | 0.0412 | 0.9189 | 0.9444 | 0.9315 | 0.9906 |\n| No log | 7.0 | 42 | 0.0298 | 0.9324 | 0.9583 | 0.9452 | 0.9917 |\n| No log | 8.0 | 48 | 0.0208 | 0.9589 | 0.9722 | 0.9655 | 0.9948 |\n| No log | 9.0 | 54 | 0.0223 | 0.9726 | 0.9861 | 0.9793 | 0.9958 |\n| No log | 10.0 | 
60 | 0.0202 | 0.9861 | 0.9861 | 0.9861 | 0.9969 |\n\n\n### Framework versions\n\n- Transformers 4.51.3\n- Pytorch 2.7.0+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": null, "base_model_relation": null }, { "model_id": "Xiaoxi2333/bert_multilabel_chinese", "gated": "unknown", "card": "---\nlanguage: zh\ntags:\n- bert\n- multilabel-classification\n- chinese\n- intent-classification\n- time-lbs\nbase_model:\n- google-bert/bert-base-chinese\n---\n\n# Chinese Multi-Label Intent Recognition Model (BERT)\n\nThis is a multi-label classification model fine-tuned from `bert-base-chinese`, supporting the following tasks:\n\nIt classifies Chinese queries:\n- Multi-class: intent recognition (chat / simple question / complex question)\n- Binary: whether the query is time-related, and whether it is location (LBS) related\n\n## Model architecture\n\n- Base model: [`bert-base-chinese`](https://huggingface.co/bert-base-chinese)\n- Output layer: a 5-dimensional sigmoid multi-label vector\n - `[intent-chat, intent-simple, intent-complex, time-related, LBS-related]`\n\n## Usage\n\n```python\nimport torch\nfrom transformers import BertTokenizer\nfrom bert_classifier_3 import BertMultiLabelClassifier\n\n# Load the tokenizer and model\nbert_base = \"bert-base-chinese\"\nmodel_id = \"Xiaoxi2333/bert_multilabel_chinese\"\ntokenizer = BertTokenizer.from_pretrained(model_id)\nmodel = BertMultiLabelClassifier(pretrained_model_path=bert_base, num_labels=5)\nstate_dict = torch.hub.load_state_dict_from_url(\n f\"https://huggingface.co/{model_id}/resolve/main/pytorch_model.bin\",\n map_location=\"cpu\"\n)\nmodel.load_state_dict(state_dict)\nmodel.eval()\n\n# Define the labels\nintent_labels = [\"chat\", \"simple question\", \"complex question\"]\nyesno_labels = [\"\u5426\", \"\u662f\"]\n\n# Define the prediction function\ndef predict(query):\n enc = tokenizer(\n query,\n truncation=True,\n padding=\"max_length\",\n max_length=128,\n return_tensors=\"pt\"\n )\n with torch.no_grad():\n logits = model(enc[\"input_ids\"], enc[\"attention_mask\"])\n probs = torch.sigmoid(logits).squeeze(0)\n intent_index = torch.argmax(probs[:3]).item()\n is_time = int(probs[3] > 0.5)\n is_lbs = int(probs[4] > 0.5)\n\n return {\n \"query\": query,\n \"\u610f\u56fe\": intent_labels[intent_index],\n \"\u662f\u5426\u65f6\u95f4\u76f8\u5173\": yesno_labels[is_time],\n \"\u662f\u5426lbs\u76f8\u5173\": yesno_labels[is_lbs],\n \"\u539f\u59cb\u6982\u7387\": probs.tolist()\n }\n\n# Example query\nresult = predict(\"\u660e\u5929\u5317\u4eac\u5929\u6c14\u600e\u4e48\u6837\uff1f\")\nprint(result)\n```", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": null, "base_model_relation": 
null }, { "model_id": "lili0324/bert-base-chinese-finetuned-imdb-shanghai", "gated": "unknown", "card": "---\nbase_model: bert-base-chinese\ntags:\n- generated_from_trainer\nmodel-index:\n- name: bert-base-chinese-finetuned-imdb-shanghai\n results: []\n---\n\n\n\n# bert-base-chinese-finetuned-imdb-shanghai\n\nThis model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 2.7154\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n\n### Training results\n\n\n\n### Framework versions\n\n- Transformers 4.41.2\n- Pytorch 2.7.1+cpu\n- Datasets 3.6.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": null, "base_model_relation": null }, { "model_id": "luohuashijieyoufengjun/ner_based_bert-base-chinese_badcase1", "gated": "unknown", "card": "---\nlibrary_name: transformers\nbase_model: google-bert/bert-base-chinese\ntags:\n- generated_from_trainer\nmetrics:\n- precision\n- recall\n- f1\n- accuracy\nmodel-index:\n- name: ner_based_bert-base-chinese_badcase1\n results: []\n---\n\n\n\n# ner_based_bert-base-chinese_badcase1\n\nThis model is a fine-tuned version of [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0165\n- Precision: 0.9618\n- Recall: 0.9708\n- F1: 0.9663\n- Accuracy: 0.9974\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 128\n- eval_batch_size: 128\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 15\n- mixed_precision_training: Native AMP\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|\n| 0.1002 | 1.0 | 651 | 0.0211 | 0.9069 | 0.9390 | 0.9227 | 0.9939 |\n| 0.0199 | 2.0 | 1302 | 0.0148 | 0.9334 | 0.9525 | 0.9429 | 0.9957 |\n| 0.0145 | 3.0 | 1953 | 0.0136 | 0.9461 | 0.9578 | 0.9519 | 0.9964 |\n| 0.0088 | 4.0 | 2604 | 0.0126 | 0.9514 | 0.9616 | 0.9564 | 0.9968 |\n| 0.0067 | 5.0 | 3255 | 0.0125 | 0.9567 | 0.9632 | 0.9599 | 0.9970 |\n| 0.0058 | 6.0 | 3906 | 0.0124 | 0.9586 | 0.9673 | 0.9630 | 0.9972 |\n| 0.0038 | 7.0 | 4557 | 0.0136 | 0.9579 | 0.9674 | 0.9626 | 0.9971 |\n| 0.003 | 8.0 | 5208 | 0.0138 | 0.9605 | 0.9698 | 0.9651 | 0.9973 |\n| 0.0027 | 9.0 | 5859 | 0.0141 | 
0.9595 | 0.9701 | 0.9648 | 0.9973 |\n| 0.0019 | 10.0 | 6510 | 0.0153 | 0.9597 | 0.9699 | 0.9648 | 0.9973 |\n| 0.0016 | 11.0 | 7161 | 0.0152 | 0.9611 | 0.9707 | 0.9659 | 0.9974 |\n| 0.0015 | 12.0 | 7812 | 0.0163 | 0.9614 | 0.9691 | 0.9652 | 0.9973 |\n| 0.0013 | 13.0 | 8463 | 0.0162 | 0.9629 | 0.9704 | 0.9666 | 0.9974 |\n| 0.001 | 14.0 | 9114 | 0.0165 | 0.9612 | 0.9711 | 0.9661 | 0.9974 |\n| 0.0009 | 15.0 | 9765 | 0.0165 | 0.9618 | 0.9708 | 0.9663 | 0.9974 |\n\n\n### Framework versions\n\n- Transformers 4.51.3\n- Pytorch 2.7.0+cu126\n- Datasets 3.6.0\n- Tokenizers 0.21.1\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": null, "base_model_relation": null }, { "model_id": "scfengv/TVL_GeneralLayerClassifier", "gated": "False", "card": "---\nlicense: mit\nlanguage:\n- zh\nmetrics:\n- accuracy\n- f1 (macro)\n- f1 (micro)\nbase_model:\n- google-bert/bert-base-chinese\npipeline_tag: text-classification\ntags:\n- Multi-label Text Classification\ndatasets:\n- scfengv/TVL-general-layer-dataset\nlibrary_name: adapter-transformers\nmodel-index:\n- name: scfengv/TVL_GeneralLayerClassifier\n results:\n - task:\n type: multi-label text-classification\n dataset:\n name: scfengv/TVL-general-layer-dataset\n type: scfengv/TVL-general-layer-dataset\n metrics:\n - name: Accuracy\n type: Accuracy\n value: 0.952902\n - name: F1 score (Micro)\n type: F1 score (Micro)\n value: 0.968717\n - name: F1 score (Macro)\n type: F1 score (Macro)\n value: 0.970818\n---\n# Model Details of TVL_GeneralLayerClassifier\n\n## Base Model\nThis model is fine-tuned from [google-bert/bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese).\n\n## Model Architecture\n- **Type**: BERT-based text classification model\n- **Hidden Size**: 768\n- **Number of Layers**: 12\n- **Number of Attention Heads**: 12\n- **Intermediate Size**: 3072\n- **Max Sequence Length**: 512\n- **Vocabulary Size**: 21,128\n\n## Key Components\n1. **Embeddings**\n - Word Embeddings\n - Position Embeddings\n - Token Type Embeddings\n - Layer Normalization\n\n2. **Encoder**\n - 12 layers of:\n - Self-Attention Mechanism\n - Intermediate Dense Layer\n - Output Dense Layer\n - Layer Normalization\n\n3. **Pooler**\n - Dense layer for sentence representation\n\n4. 
**Classifier**\n - Output layer with 4 classes\n\n## Training Hyperparameters\n\nThe model was trained using the following hyperparameters:\n\n```\nLearning rate: 1e-05\nBatch size: 32\nNumber of epochs: 10\nOptimizer: Adam\nLoss function: torch.nn.BCEWithLogitsLoss()\n```\n\n## Training Infrastructure\n\n- **Hardware Type:** NVIDIA Quadro RTX8000\n- **Library:** PyTorch\n- **Hours used:** 2hr 56mins\n\n## Model Parameters\n- Total parameters: ~102M (estimated)\n- All parameters are in 32-bit floating point (F32) format\n\n## Input Processing\n- Uses BERT tokenization\n- Supports sequences up to 512 tokens\n\n## Output\n- 4-class multi-label classification\n\n## Performance Metrics (validation)\n- Accuracy score: 0.952902\n- F1 score (Micro): 0.968717\n- F1 score (Macro): 0.970818\n\n## Training Dataset\nThis model was trained on the train split of [scfengv/TVL-general-layer-dataset](https://huggingface.co/datasets/scfengv/TVL-general-layer-dataset).\n\n## Testing Dataset\n\n- [scfengv/TVL-general-layer-dataset](https://huggingface.co/datasets/scfengv/TVL-general-layer-dataset)\n - validation\n - Remove Emoji\n - Emoji2Desc\n - Remove Punctuation\n\n## Usage\n\n```python\nimport torch\nfrom transformers import BertForSequenceClassification, BertTokenizer\n\nmodel = BertForSequenceClassification.from_pretrained(\"scfengv/TVL_GeneralLayerClassifier\")\ntokenizer = BertTokenizer.from_pretrained(\"scfengv/TVL_GeneralLayerClassifier\")\n\n# Prepare the input text (see the dataset for representative examples)\ntext = \"Your text here\"\ninputs = tokenizer(text, return_tensors=\"pt\", padding=True, truncation=True, max_length=512)\n\n# Make a prediction; sigmoid turns logits into per-label probabilities\nwith torch.no_grad():\n    outputs = model(**inputs)\n    predictions = torch.sigmoid(outputs.logits)\n\nprint(predictions)\n```\n\n## Additional Notes\n- This model is specifically designed for TVL general layer classification tasks.\n- It is based on the Chinese BERT model and is therefore optimized for Chinese text.\n", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "scfengv/TVL_GeneralLayerClassifier", "base_model_relation": "base" }, { "model_id": "Xenova/bert-base-chinese", "gated": "False", "card": "---\nbase_model: bert-base-chinese\nlibrary_name: transformers.js\n---\n\nhttps://huggingface.co/bert-base-chinese with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. 
If you would like to make your models web-ready, we recommend converting to ONNX using [\ud83e\udd17 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).", "metadata": "\"N/A\"", "depth": 1, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "google-bert/bert-base-chinese" ], "base_model": "Xenova/bert-base-chinese", "base_model_relation": "base" }, { "model_id": "AlienKevin/bert_base_cantonese_pos_hkcancor", "gated": "False", "card": "---\nlicense: cc-by-4.0\nbase_model: indiejoseph/bert-base-cantonese\ntags:\n- generated_from_trainer\ndatasets:\n- hkcancor\nmodel-index:\n- name: bert_base_cantonese_pos_hkcancor\n results: []\n---\n\n\n\n# bert_base_cantonese_pos_hkcancor\n\nThis model is a fine-tuned version of [indiejoseph/bert-base-cantonese](https://huggingface.co/indiejoseph/bert-base-cantonese) on the hkcancor dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.1293\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss |\n|:-------------:|:-----:|:----:|:---------------:|\n| 0.1555 | 1.0 | 669 | 0.1348 |\n| 0.1112 | 2.0 | 1338 | 0.1293 |\n\n\n### Framework versions\n\n- Transformers 4.43.3\n- Pytorch 2.4.0\n- Datasets 2.20.0\n- Tokenizers 0.19.1\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "indiejoseph/bert-base-cantonese" ], "base_model": "AlienKevin/bert_base_cantonese_pos_hkcancor", "base_model_relation": "base" }, { "model_id": "hon9kon9ize/bert-base-cantonese", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: indiejoseph/bert-base-cantonese\ntags:\n- generated_from_trainer\npipeline_tag: fill-mask\nwidget:\n- text: \u9999\u6e2f\u539f\u672c[MASK]\u4e00\u500b\u4eba\u7159\u7a00\u5c11\u5605\u6f01\u6e2f\u3002\n example_title: \u4fc2\nmodel-index:\n- name: bert-base-cantonese\n results: []\n---\n\n\n\n# bert-base-cantonese\n\nThis model is a continuation of [indiejoseph/bert-base-cantonese](https://huggingface.co/indiejoseph/bert-base-cantonese), a BERT-based model pre-trained on a substantial corpus of Cantonese text. The dataset was sourced from a variety of platforms, including news articles, social media posts, and web pages. The text was segmented into sentences containing 11 to 460 tokens per line. To ensure data quality, Minhash LSH was employed to eliminate near-duplicate sentences, resulting in a final dataset comprising 161,338,273 tokens. Training was conducted using the `run_mlm.py` script from the `transformers` library.\n\nThis continuous pre-training aims to expand the model's knowledge with more up-to-date Hong Kong and Cantonese text data. 
We therefore slightly overfit the model with a higher learning rate and more epochs.\n\n[WandB](https://wandb.ai/indiejoseph/public/runs/p2685rsn/workspace?nw=nwuserindiejoseph)\n\n## Usage\n\n```python\nfrom transformers import pipeline\n\npipe = pipeline(\"fill-mask\", model=\"hon9kon9ize/bert-base-cantonese\")\n\npipe(\"\u9999\u6e2f\u7279\u9996\u4fc2\u674e[MASK]\u8d85\")\n\n# [{'score': 0.3057154417037964,\n# 'token': 2157,\n# 'token_str': '\u5bb6',\n# 'sequence': '\u9999 \u6e2f \u7279 \u9996 \u4fc2 \u674e \u5bb6 \u8d85'},\n# {'score': 0.08251259475946426,\n# 'token': 6631,\n# 'token_str': '\u8d85',\n# 'sequence': '\u9999 \u6e2f \u7279 \u9996 \u4fc2 \u674e \u8d85 \u8d85'},\n# ...\n\npipe(\"\u6211\u7747\u5230\u7531\u6cbb\u53ca\u8208\u5e36\u569f[MASK]\u597d\u8655\")\n\n# [{'score': 0.9563464522361755,\n# 'token': 1646,\n# 'token_str': '\u5605',\n# 'sequence': '\u6211 \u7747 \u5230 \u7531 \u6cbb \u53ca \u8208 \u5e36 \u569f \u5605 \u597d \u8655'},\n# {'score': 0.00982475932687521,\n# 'token': 4638,\n# 'token_str': '\u7684',\n# 'sequence': '\u6211 \u7747 \u5230 \u7531 \u6cbb \u53ca \u8208 \u5e36 \u569f \u7684 \u597d \u8655'},\n# ...\n\n```\n\n## Intended uses & limitations\n\nThis model is intended to be used for further fine-tuning on Cantonese downstream tasks.\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 180\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 1440\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0\n\n### Framework versions\n\n- Transformers 4.45.0\n- Pytorch 2.4.1+cu121\n- Datasets 2.20.0\n- Tokenizers 0.20.0", "metadata": "\"N/A\"", "depth": 2, "children": [ "wcyat/bert-suicide-detection-hk-new" ], "children_count": 1, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 1, "spaces": [], "spaces_count": 0, "parents": [ "indiejoseph/bert-base-cantonese" ], "base_model": "hon9kon9ize/bert-base-cantonese", "base_model_relation": "base" }, { "model_id": "AIYIYA/my_html4", "gated": "False", "card": "---\nbase_model: AIYIYA/my_html3\ntags:\n- generated_from_keras_callback\nmodel-index:\n- name: AIYIYA/my_html4\n results: []\n---\n\n\n\n# AIYIYA/my_html4\n\nThis model is a fine-tuned version of [AIYIYA/my_html3](https://huggingface.co/AIYIYA/my_html3) on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Train Loss: 0.1831\n- Train Accuracy: 0.9513\n- Validation Loss: 0.0522\n- Validation Accuracy: 0.9849\n- Epoch: 0\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 225, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 
'amsgrad': False}\n- training_precision: float32\n\n### Training results\n\n| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |\n|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|\n| 0.1831 | 0.9513 | 0.0522 | 0.9849 | 0 |\n\n\n### Framework versions\n\n- Transformers 4.35.2\n- TensorFlow 2.15.0\n- Datasets 2.16.1\n- Tokenizers 0.15.0\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "AIYIYA/my_html3" ], "base_model": "AIYIYA/my_html4", "base_model_relation": "base" }, { "model_id": "sercetexam9/bert-base-chinese-chn-finetuned-augmentation-LUNAR-chn-MICRO", "gated": "False", "card": "---\nlibrary_name: transformers\nbase_model: sercetexam9/bert-base-chinese-chn-finetuned-augmentation-LUNAR\ntags:\n- generated_from_trainer\nmetrics:\n- f1\n- accuracy\nmodel-index:\n- name: bert-base-chinese-chn-finetuned-augmentation-LUNAR-chn-MICRO\n results: []\n---\n\n\n\n# bert-base-chinese-chn-finetuned-augmentation-LUNAR-chn-MICRO\n\nThis model is a fine-tuned version of [sercetexam9/bert-base-chinese-chn-finetuned-augmentation-LUNAR](https://huggingface.co/sercetexam9/bert-base-chinese-chn-finetuned-augmentation-LUNAR) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.0631\n- F1: 0.9594\n- Roc Auc: 0.9720\n- Accuracy: 0.9262\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 20\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |\n|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|\n| 0.0118 | 1.0 | 1406 | 0.0631 | 0.9594 | 0.9720 | 0.9262 |\n| 0.0111 | 2.0 | 2812 | 0.0762 | 0.9527 | 0.9669 | 0.9180 |\n| 0.0126 | 3.0 | 4218 | 0.0840 | 0.9501 | 0.9720 | 0.9088 |\n| 0.0127 | 4.0 | 5624 | 0.1137 | 0.9334 | 0.9599 | 0.8788 |\n| 0.0085 | 5.0 | 7030 | 0.1123 | 0.9382 | 0.9600 | 0.8888 |\n\n\n### Framework versions\n\n- Transformers 4.45.1\n- Pytorch 2.4.0\n- Datasets 3.0.1\n- Tokenizers 0.20.0\n", "metadata": "\"N/A\"", "depth": 2, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "sercetexam9/bert-base-chinese-chn-finetuned-augmentation-LUNAR" ], "base_model": "sercetexam9/bert-base-chinese-chn-finetuned-augmentation-LUNAR-chn-MICRO", "base_model_relation": "base" }, { "model_id": "wcyat/bert-suicide-detection-hk-new", "gated": "False", "card": "---\nlibrary_name: transformers\nlicense: apache-2.0\nbase_model: hon9kon9ize/bert-base-cantonese\ntags:\n- generated_from_trainer\nmetrics:\n- accuracy\nmodel-index:\n- name: bert-suicide-detection-hk-new\n results: []\n---\n\n\n\n# bert-suicide-detection-hk-new\n\nThis model is a fine-tuned version of 
[hon9kon9ize/bert-base-cantonese](https://huggingface.co/hon9kon9ize/bert-base-cantonese) on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3852\n- Accuracy: 0.9241\n\n## Model description\n\nMore information needed\n\n## Intended uses & limitations\n\nMore information needed\n\n## Training and evaluation data\n\nMore information needed\n\n## Training procedure\n\n### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments\n- lr_scheduler_type: linear\n- num_epochs: 5\n\n### Training results\n\n| Training Loss | Epoch | Step | Validation Loss | Accuracy |\n|:-------------:|:------:|:----:|:---------------:|:--------:|\n| 0.5003 | 0.0573 | 20 | 0.3516 | 0.8228 |\n| 0.3891 | 0.1146 | 40 | 0.3730 | 0.8228 |\n| 0.4264 | 0.1719 | 60 | 0.3530 | 0.8165 |\n| 0.421 | 0.2292 | 80 | 0.2427 | 0.8987 |\n| 0.37 | 0.2865 | 100 | 0.4437 | 0.8418 |\n| 0.447 | 0.3438 | 120 | 0.3434 | 0.8481 |\n| 0.2692 | 0.4011 | 140 | 0.3545 | 0.8861 |\n| 0.2534 | 0.4585 | 160 | 0.3643 | 0.9051 |\n| 0.3963 | 0.5158 | 180 | 0.4267 | 0.8734 |\n| 0.2337 | 0.5731 | 200 | 0.5053 | 0.8671 |\n| 0.4065 | 0.6304 | 220 | 0.3786 | 0.9051 |\n| 0.4239 | 0.6877 | 240 | 0.2757 | 0.9051 |\n| 0.2728 | 0.7450 | 260 | 0.3095 | 0.9051 |\n| 0.3323 | 0.8023 | 280 | 0.3326 | 0.9177 |\n| 0.2479 | 0.8596 | 300 | 0.3019 | 0.9114 |\n| 0.4682 | 0.9169 | 320 | 0.3146 | 0.9051 |\n| 0.5659 | 0.9742 | 340 | 0.2427 | 0.9304 |\n| 0.1859 | 1.0315 | 360 | 0.2563 | 0.9241 |\n| 0.0832 | 1.0888 | 380 | 0.2922 | 0.9177 |\n| 0.1351 | 1.1461 | 400 | 0.3399 | 0.9051 |\n| 0.1608 | 1.2034 | 420 | 0.4556 | 0.9114 |\n| 0.3276 | 1.2607 | 440 | 0.3819 | 0.9114 |\n| 0.2105 | 1.3181 | 460 | 0.3725 | 0.9051 |\n| 0.1077 | 1.3754 | 480 | 0.3591 | 0.9241 |\n| 0.0568 | 1.4327 | 500 | 0.3666 | 0.9177 |\n| 0.1179 | 1.4900 | 520 | 0.4484 | 0.8987 |\n| 0.1392 | 1.5473 | 540 | 0.3758 | 0.9241 |\n| 0.1825 | 1.6046 | 560 | 0.3526 | 0.9241 |\n| 0.28 | 1.6619 | 580 | 0.3396 | 0.9241 |\n| 0.104 | 1.7192 | 600 | 0.3169 | 0.9177 |\n| 0.0656 | 1.7765 | 620 | 0.3365 | 0.9241 |\n| 0.2895 | 1.8338 | 640 | 0.3365 | 0.9241 |\n| 0.3512 | 1.8911 | 660 | 0.3318 | 0.9177 |\n| 0.0908 | 1.9484 | 680 | 0.3043 | 0.9051 |\n| 0.2113 | 2.0057 | 700 | 0.2724 | 0.9114 |\n| 0.1008 | 2.0630 | 720 | 0.3296 | 0.9177 |\n| 0.0428 | 2.1203 | 740 | 0.3665 | 0.9177 |\n| 0.0109 | 2.1777 | 760 | 0.4608 | 0.9114 |\n| 0.0302 | 2.2350 | 780 | 0.4164 | 0.9241 |\n| 0.1545 | 2.2923 | 800 | 0.4920 | 0.9051 |\n| 0.1136 | 2.3496 | 820 | 0.4086 | 0.9177 |\n| 0.0567 | 2.4069 | 840 | 0.3794 | 0.9114 |\n| 0.0006 | 2.4642 | 860 | 0.3758 | 0.9304 |\n| 0.0004 | 2.5215 | 880 | 0.3846 | 0.9304 |\n| 0.0597 | 2.5788 | 900 | 0.3943 | 0.9304 |\n| 0.0532 | 2.6361 | 920 | 0.4111 | 0.9304 |\n| 0.1793 | 2.6934 | 940 | 0.4152 | 0.9241 |\n| 0.293 | 2.7507 | 960 | 0.4020 | 0.9304 |\n| 0.0774 | 2.8080 | 980 | 0.3849 | 0.9241 |\n| 0.1255 | 2.8653 | 1000 | 0.3787 | 0.9177 |\n| 0.0006 | 2.9226 | 1020 | 0.3836 | 0.9241 |\n| 0.0062 | 2.9799 | 1040 | 0.4092 | 0.9114 |\n| 0.0018 | 3.0372 | 1060 | 0.4327 | 0.9241 |\n| 0.0006 | 3.0946 | 1080 | 0.4502 | 0.9177 |\n| 0.1874 | 3.1519 | 1100 | 0.4322 | 0.9177 |\n| 0.0676 | 3.2092 | 1120 | 0.4126 | 0.9114 |\n| 0.0199 | 3.2665 | 1140 | 0.4113 | 0.9051 |\n| 0.0674 | 3.3238 | 1160 | 0.4134 | 0.9177 |\n| 0.0004 | 3.3811 | 1180 | 
0.4212 | 0.9177 |\n| 0.0004 | 3.4384 | 1200 | 0.4277 | 0.9177 |\n| 0.1097 | 3.4957 | 1220 | 0.4246 | 0.9177 |\n| 0.0004 | 3.5530 | 1240 | 0.4207 | 0.9177 |\n| 0.0152 | 3.6103 | 1260 | 0.4250 | 0.9177 |\n| 0.0146 | 3.6676 | 1280 | 0.4120 | 0.9241 |\n| 0.0377 | 3.7249 | 1300 | 0.4052 | 0.9304 |\n| 0.1061 | 3.7822 | 1320 | 0.4011 | 0.9177 |\n| 0.1026 | 3.8395 | 1340 | 0.4384 | 0.9177 |\n| 0.1264 | 3.8968 | 1360 | 0.4102 | 0.9177 |\n| 0.0079 | 3.9542 | 1380 | 0.4019 | 0.9241 |\n| 0.0249 | 4.0115 | 1400 | 0.3998 | 0.9177 |\n| 0.0115 | 4.0688 | 1420 | 0.3949 | 0.9241 |\n| 0.0004 | 4.1261 | 1440 | 0.3971 | 0.9241 |\n| 0.0847 | 4.1834 | 1460 | 0.3859 | 0.9304 |\n| 0.0004 | 4.2407 | 1480 | 0.3855 | 0.9304 |\n| 0.002 | 4.2980 | 1500 | 0.3879 | 0.9367 |\n| 0.0004 | 4.3553 | 1520 | 0.3917 | 0.9367 |\n| 0.076 | 4.4126 | 1540 | 0.3851 | 0.9367 |\n| 0.0004 | 4.4699 | 1560 | 0.3871 | 0.9304 |\n| 0.0925 | 4.5272 | 1580 | 0.3846 | 0.9367 |\n| 0.0009 | 4.5845 | 1600 | 0.3872 | 0.9304 |\n| 0.0045 | 4.6418 | 1620 | 0.3885 | 0.9304 |\n| 0.1944 | 4.6991 | 1640 | 0.3827 | 0.9304 |\n| 0.0004 | 4.7564 | 1660 | 0.3820 | 0.9304 |\n| 0.0616 | 4.8138 | 1680 | 0.3843 | 0.9241 |\n| 0.0003 | 4.8711 | 1700 | 0.3851 | 0.9241 |\n| 0.083 | 4.9284 | 1720 | 0.3852 | 0.9241 |\n| 0.0005 | 4.9857 | 1740 | 0.3852 | 0.9241 |\n\n\n### Framework versions\n\n- Transformers 4.48.3\n- Pytorch 2.5.1+cu124\n- Datasets 3.3.2\n- Tokenizers 0.21.0\n", "metadata": "\"N/A\"", "depth": 3, "children": [], "children_count": 0, "adapters": [], "adapters_count": 0, "quantized": [], "quantized_count": 0, "merges": [], "merges_count": 0, "total_derivatives": 0, "spaces": [], "spaces_count": 0, "parents": [ "hon9kon9ize/bert-base-cantonese" ], "base_model": "wcyat/bert-suicide-detection-hk-new", "base_model_relation": "base" } ] }
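The listing above is a flat JSON array rather than a nested tree: each entry records its own `depth`, names its `parents` by id, and keeps `children` as bare model-id strings. The sketch below rebuilds the hierarchy from such a dump; it is a minimal illustration, assuming the node list sits under a top-level `tree` key as here, and the filename `model_tree.json` is hypothetical.

```python
import json
from collections import defaultdict

# Load a dump shaped like the listing above; the filename is hypothetical.
with open("model_tree.json", encoding="utf-8") as f:
    dump = json.load(f)

nodes = {n["model_id"]: n for n in dump["tree"]}

# Rebuild parent -> child edges from each entry's "parents" list, since
# the "children" field stores bare model-id strings, not nested objects.
edges = defaultdict(list)
for n in dump["tree"]:
    for parent in n.get("parents") or []:
        edges[parent].append(n["model_id"])

def show(model_id, indent=0):
    """Print a model and its recorded derivatives, indented by level."""
    node = nodes.get(model_id)
    depth = node["depth"] if node else "?"
    print("  " * indent + f"{model_id} (depth {depth})")
    for child in edges.get(model_id, []):
        show(child, indent + 1)

# Roots are the depth-0 entries; here that is google-bert/bert-base-chinese.
for n in dump["tree"]:
    if n.get("depth") == 0:
        show(n["model_id"])
```

Rebuilding edges from `parents` keeps the traversal consistent with the recorded `depth` values even when a `children` list names a model that is absent from the excerpt.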
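The NER fine-tune above, `luohuashijieyoufengjun/ner_based_bert-base-chinese_badcase1`, reports strong span metrics but its card carries no usage snippet. A minimal sketch follows, assuming the checkpoint exposes a token-classification head as its precision/recall/F1 metrics suggest; the card does not state the entity label set, so the printed `entity_group` values are whatever the model was trained with.

```python
from transformers import pipeline

# Token-classification inference; aggregation_strategy="simple" merges
# word-piece tokens back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="luohuashijieyoufengjun/ner_based_bert-base-chinese_badcase1",
    aggregation_strategy="simple",
)

# Example sentence: "I study at Fudan University in Shanghai."
for entity in ner("我在上海的复旦大学读书。"):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```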
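The Xenova entry recommends converting checkpoints to ONNX with Optimum for Transformers.js compatibility. One way to do that from Python is sketched below, assuming `optimum[onnxruntime]` is installed; the output directory name is illustrative, not part of any repo above.

```python
# Requires: pip install optimum[onnxruntime]
from optimum.onnxruntime import ORTModelForMaskedLM
from transformers import AutoTokenizer

# export=True converts the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForMaskedLM.from_pretrained("bert-base-chinese", export=True)
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")

# Illustrative output directory; the note above suggests keeping the
# exported weights in an `onnx/` subfolder of the published repo.
model.save_pretrained("bert-base-chinese-onnx")
tokenizer.save_pretrained("bert-base-chinese-onnx")
```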