Instructions to use zeroshot/sst2-obert-sparse with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use zeroshot/sst2-obert-sparse with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="zeroshot/sst2-obert-sparse")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("zeroshot/sst2-obert-sparse")
model = AutoModelForSequenceClassification.from_pretrained("zeroshot/sst2-obert-sparse")
```

- Notebooks
- Google Colab
- Kaggle
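A text-classification pipeline returns, per input, `{"label": ..., "score": ...}` records (all labels when called with `top_k=None`). Below is a minimal sketch of reading that output; the `top_prediction` helper and the example score records are illustrative stand-ins, not real output from sst2-obert-sparse:

```python
# Sketch: interpreting text-classification pipeline output.
# Each input yields a list of {"label": ..., "score": ...} dicts
# when all label scores are requested. The records below are
# illustrative, not actual predictions from this model.

def top_prediction(scores):
    """Pick the highest-scoring label from one input's score list."""
    best = max(scores, key=lambda s: s["score"])
    return best["label"], best["score"]

# Hypothetical scores for one SST-2-style sentence
example_scores = [
    {"label": "negative", "score": 0.03},
    {"label": "positive", "score": 0.97},
]

label, score = top_prediction(example_scores)
print(label, score)
```

The same helper works on each element of the list the pipeline returns for a batch of sentences.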
Commit History
Update README.md 677cf9f
Update README.md dc7b1ac
Update README.md e3ac0ab
handler 6c63fe2
test handler 897fb91
Merge branch 'main' of https://huggingface.co/zeroshot/sst2-obert-sparse into main 0905583
edit handler 1261138
Update README.md f3eef8e
Update handler.py 1449fa1
edit handler abfa544
edited handler eeb501d
Merge branch 'main' of https://huggingface.co/zeroshot/sst2-obert-sparse into main 28eb234
edited handler dfafc60
Update README.md 7cddf42
add model files 8560cc4
add files 7e0b8ee
initial commit 9b70dfb