Instructions for using kurtpayne/skillscan-deberta-adapter with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- PEFT
How to use kurtpayne/skillscan-deberta-adapter with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification

base_model = AutoModelForSequenceClassification.from_pretrained("answerdotai/ModernBERT-base")
model = PeftModel.from_pretrained(base_model, "kurtpayne/skillscan-deberta-adapter")
```

- Transformers
How to use kurtpayne/skillscan-deberta-adapter with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="kurtpayne/skillscan-deberta-adapter")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("kurtpayne/skillscan-deberta-adapter")
model = AutoModelForSequenceClassification.from_pretrained("kurtpayne/skillscan-deberta-adapter")
```

- Notebooks
- Google Colab
- Kaggle
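The PEFT snippet above wraps a full base model with a trained adapter, which requires downloading both from the Hub. To illustrate the same mechanism offline, here is a minimal sketch on a hypothetical tiny classifier (the `TinyClassifier` module, its sizes, and the LoRA hyperparameters are all placeholders, not taken from this repo): `get_peft_model` injects low-rank adapter weights into the named target modules and freezes everything else, just as `PeftModel.from_pretrained` does when loading this adapter onto ModernBERT-base.

```python
import torch
from torch import nn
from peft import LoraConfig, get_peft_model

# Hypothetical stand-in for a sequence-classification base model.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(16, 16)
        self.classifier = nn.Linear(16, 2)

    def forward(self, x):
        return self.classifier(torch.relu(self.encoder(x)))

base = TinyClassifier()

# Attach LoRA adapters to the "encoder" layer; r and lora_alpha are
# illustrative values, not the ones used to train this adapter.
config = LoraConfig(r=4, lora_alpha=8, target_modules=["encoder"])
model = get_peft_model(base, config)

# Only the injected lora_* parameters remain trainable.
model.print_trainable_parameters()
out = model(torch.randn(3, 16))  # forward pass works as before
```

After fine-tuning such a wrapped model, only the small adapter weights need to be saved and shared, which is why the repo above contains an adapter rather than a full model.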
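When loading the model directly (rather than through `pipeline`, which handles this internally), the forward pass returns raw logits; turning them into class probabilities is a softmax over the last dimension. A minimal, dependency-free sketch of that post-processing step, using made-up example logits for two hypothetical classes (real label names come from `model.config.id2label`):

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, -1.0]          # hypothetical model output for one input
probs = softmax(logits)        # probabilities summing to 1
pred = probs.index(max(probs)) # index of the predicted class
```

`pipeline("text-classification", ...)` performs the equivalent of this step and maps the winning index to its label string for you.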
- checkpoint-10075
- checkpoint-10104
- checkpoint-10188
- checkpoint-12630
- checkpoint-12735
- checkpoint-1431
- checkpoint-1434
- checkpoint-15282
- checkpoint-1669
- checkpoint-1707
- checkpoint-2015
- checkpoint-2862
- checkpoint-2868
- checkpoint-3338
- checkpoint-3414
- checkpoint-4030
- checkpoint-4293
- checkpoint-4302
- checkpoint-5007
- checkpoint-5121
- checkpoint-5724
- checkpoint-5736
- checkpoint-6045
- checkpoint-6676
- checkpoint-6828
- checkpoint-7155
- checkpoint-7170
- checkpoint-7578
- checkpoint-8060
- checkpoint-8345
- checkpoint-8535
- last-checkpoint