---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- deepesh0x/autotrain-data-bert_wikipedia_sst2
co2_eq_emissions: 16.368556687663705
---

# Model Trained Using AutoTrain

- Problem type: Binary Classification
- Model ID: 1021934687
- CO2 Emissions (in grams): 16.368556687663705

## Validation Metrics

- Loss: 0.15712647140026093
- Accuracy: 0.9503340757238308
- Precision: 0.9515767251616308
- Recall: 0.9598083577322332
- AUC: 0.9857179850355002
- F1: 0.9556748161399324

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/deepesh0x/autotrain-bert_wikipedia_sst2-1021934687
```

Or you can use the Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForSequenceClassification.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst2-1021934687", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("deepesh0x/autotrain-bert_wikipedia_sst2-1021934687", use_auth_token=True)

# Tokenize the input text and run a forward pass
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```