How to use cycloneboy/chinese_mobilebert_base_f2 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForPreTraining

tokenizer = AutoTokenizer.from_pretrained("cycloneboy/chinese_mobilebert_base_f2")
model = AutoModelForPreTraining.from_pretrained("cycloneboy/chinese_mobilebert_base_f2")
```
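As a quick sanity check that the converted weights load and produce sensible predictions, the model can be tried on a fill-mask task. This is a minimal sketch, not part of the original card: it assumes the checkpoint includes masked-LM head weights and therefore loads with `AutoModelForMaskedLM` (rather than `AutoModelForPreTraining`) so it can plug into the `fill-mask` pipeline; the example sentence is illustrative.

```python
# Hedged sketch: fill-mask inference with the converted checkpoint.
# Assumes the checkpoint carries masked-LM head weights.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "cycloneboy/chinese_mobilebert_base_f2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
# "The weather today is very [MASK]."
results = fill("今天天气很[MASK]。")
for r in results:
    print(r["token_str"], r["score"])
```

Each entry in `results` is a dict with the predicted token, its score, and the filled sequence, sorted by score.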
Chinese-MobileBERT
The original Chinese-MobileBERT repository does not provide PyTorch weights; the weights here were converted via the model_convert repository.
This repository is based on: https://github.com/ymcui/Chinese-MobileBERT
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
Citation
If you find the technical report or resources useful, please cite the following technical report in your paper:
```
@misc{cui-2022-chinese-mobilebert,
  title={Chinese MobileBERT},
  author={Cui, Yiming},
  howpublished={\url{https://github.com/ymcui/Chinese-MobileBERT}},
  year={2022}
}
```