Tags: Text Classification · Transformers · PyTorch · JAX · Safetensors · code · English · roberta · text-embeddings-inference
Instructions to use Fsoft-AIC/Codebert-docstring-inconsistency with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use Fsoft-AIC/Codebert-docstring-inconsistency with Transformers (a hedged end-to-end sketch follows the notebook links below):

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Fsoft-AIC/Codebert-docstring-inconsistency")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Fsoft-AIC/Codebert-docstring-inconsistency")
model = AutoModelForSequenceClassification.from_pretrained("Fsoft-AIC/Codebert-docstring-inconsistency")
```

- Notebooks
- Google Colab
- Kaggle
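Building on the pipeline snippet above, a minimal end-to-end sketch. The exact input format this checkpoint expects (how docstring and code are joined, and what the output labels mean) is not documented here, so the separator and the example labels below are assumptions:

```python
from transformers import pipeline

pipe = pipeline("text-classification", model="Fsoft-AIC/Codebert-docstring-inconsistency")

# Assumption: the docstring and the code are passed as a single string.
# Joining them with the tokenizer's </s> separator is a guess, not a
# documented format for this checkpoint.
docstring = "Return the sum of a and b."
code = "def add(a, b):\n    return a - b"  # deliberately inconsistent with the docstring

result = pipe(f"{docstring} </s> {code}")
print(result)  # e.g. [{'label': '...', 'score': 0.97}]; label names depend on the model config
```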
Commit: Update README.md
README.md CHANGED:

```diff
@@ -70,7 +70,7 @@ More information:
 * License: MIT
 * Model type: Transformer-Encoder based Language Model
 * Architecture: BERT-base
-* Data set: [The Vault](https://huggingface.co/datasets/Fsoft-AIC/
+* Data set: [The Vault](https://huggingface.co/datasets/Fsoft-AIC/the-vault-function)
 * Tokenizer: Byte Pair Encoding
 * Vocabulary Size: 50265
 * Sequence Length: 512
```
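The tokenizer figures in the card are easy to sanity-check against the hosted files; a minimal sketch, assuming the checkpoint ships a RoBERTa-style BPE tokenizer (which the roberta tag above suggests):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Fsoft-AIC/Codebert-docstring-inconsistency")

print(tokenizer.vocab_size)        # 50265 per the model card
print(tokenizer.model_max_length)  # 512 per the model card

# Longer inputs must be truncated to fit the 512-token sequence length.
enc = tokenizer("def add(a, b): return a + b", truncation=True, max_length=512)
print(len(enc["input_ids"]))
```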