Tags: Text Classification · Transformers · PyTorch · Chinese · bert · SequenceClassification · Lepton · 古文 (ancient texts) · 文言文 (Classical Chinese) · ancient · classical · letter · 书信标题 (letter title)
Instructions to use cbdb/ClassicalChineseLetterClassification with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use cbdb/ClassicalChineseLetterClassification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="cbdb/ClassicalChineseLetterClassification")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("cbdb/ClassicalChineseLetterClassification")
model = AutoModelForSequenceClassification.from_pretrained("cbdb/ClassicalChineseLetterClassification")
```

- Notebooks
- Google Colab
- Kaggle
License: cc-by-nc-sa-4.0
# <font color="IndianRed"> LEPTON (Classical Chinese Letter Prediction)</font>
[](https://colab.research.google.com/drive/1jVu2LrNwkLolItPALKGNjeT6iCfzF8Ic?usp=sharing/)
Our model <font color="cornflowerblue">LEPTON (Classical Chinese Letter Prediction)</font> is a BertForSequenceClassification Classical Chinese model intended to predict whether a Classical Chinese sentence is <font color="IndianRed">a letter title (书信标题)</font> or not. The model is initialized from the BERT base Chinese model (MLM), fine-tuned on a large Classical Chinese corpus (a 3 GB textual dataset), and then combined with the BertForSequenceClassification architecture to perform a binary classification task.
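As a sketch of how the binary head's output can be read back into a yes/no decision: a BertForSequenceClassification forward pass returns raw logits, which can be converted to probabilities with a softmax. The logit values and the label names below are illustrative assumptions, not real model output; check the model's `id2label` config for the actual mapping.

```python
import torch
import torch.nn.functional as F

# Example logits as produced by a BertForSequenceClassification forward pass
# with num_labels=2 (values here are illustrative, not real model output).
logits = torch.tensor([[-1.2, 2.3]])

probs = F.softmax(logits, dim=-1)        # convert logits to class probabilities
pred = int(torch.argmax(probs, dim=-1))  # predicted class id: 0 or 1

# Hypothetical label mapping -- the real one lives in the model's id2label config.
id2label = {0: "not a letter title", 1: "letter title (书信标题)"}
print(id2label[pred], float(probs[0, pred]))
```

With a loaded model, the same post-processing applies to `model(**tokenizer(sentence, return_tensors="pt")).logits`.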