Tags: Text Classification, Transformers, PyTorch, distilbert, fine-tuning, resume classification, text-embeddings-inference
Instructions to use oussama120/Resume_Sentence_Classification with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use oussama120/Resume_Sentence_Classification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="oussama120/Resume_Sentence_Classification")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("oussama120/Resume_Sentence_Classification")
model = AutoModelForSequenceClassification.from_pretrained("oussama120/Resume_Sentence_Classification")
```

- Notebooks
- Google Colab
- Kaggle
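Both loading paths above ultimately produce per-class logits for each sentence; the pipeline then applies softmax and picks the highest-probability label. A minimal, dependency-free sketch of that post-processing step (the logit values and label names here are placeholders for illustration, not this model's actual classes):

```python
import math

def predict_label(logits, labels):
    """Softmax the logits and return (best_label, probability)."""
    exps = [math.exp(x - max(logits)) for x in logits]  # shift by max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical logits for one resume sentence, with made-up label names.
logits = [0.2, 2.9, -1.1]
labels = ["Education", "Experience", "Skills"]
label, prob = predict_label(logits, labels)
print(label)  # Experience
```

In practice the pipeline does this for you and returns a list of `{"label": ..., "score": ...}` dicts, so this sketch is only useful when working with the model's raw outputs directly.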
Update README.md
README.md
CHANGED
```diff
@@ -29,7 +29,7 @@ To use the model and tokenizer, you can load them from the Hugging Face Hub as f
 from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification
 
 # Load the model and tokenizer
-model_name = "
+model_name = "oussama120/Resume_Sentence_Classification"
 tokenizer = DistilBertTokenizerFast.from_pretrained(model_name)
 model = DistilBertForSequenceClassification.from_pretrained(model_name)
 
```
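The value this commit fills in is a Hugging Face Hub repository id, which `from_pretrained` resolves in `owner/repo_name` form (or as a local path). A small sketch of how that id decomposes, with no network access required:

```python
# Hub model ids have the form "<owner>/<repo_name>"; from_pretrained
# resolves such an id against the Hugging Face Hub.
model_name = "oussama120/Resume_Sentence_Classification"
owner, repo = model_name.split("/", 1)
print(owner)  # oussama120
print(repo)   # Resume_Sentence_Classification
```

With the id in place, the `DistilBertTokenizerFast.from_pretrained(model_name)` and `DistilBertForSequenceClassification.from_pretrained(model_name)` calls in the README download the tokenizer and fine-tuned weights on first use.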