---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- WilliamWen/autotrain-data-ni_final_01
co2_eq_emissions:
  emissions: 0.529268535134958
---

# Model Trained Using AutoTrain

- Problem type: Entity Extraction
- Model ID: 50570120767
- CO2 Emissions (in grams): 0.5293

## Validation Metrics

- Loss: 0.000
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- F1: 1.000

## Usage

You can use cURL to access this model:

```bash
curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/WilliamWen/autotrain-ni_final_01-50570120767
```

Or use the Python API:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub.
model = AutoModelForTokenClassification.from_pretrained("WilliamWen/autotrain-ni_final_01-50570120767", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("WilliamWen/autotrain-ni_final_01-50570120767", use_auth_token=True)

# Tokenize the input text and run it through the model.
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
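The `outputs` object holds per-token logits rather than entity labels. To recover labels, take the argmax over the label dimension for each token and map the resulting ids to names via `model.config.id2label`. A minimal sketch of that post-processing step, using dummy logits and a hypothetical three-label map in place of a real model output:

```python
import torch

# Dummy logits standing in for outputs.logits: batch of 1, 4 tokens, 3 labels.
logits = torch.tensor([[[2.0, 0.1, 0.1],
                        [0.1, 3.0, 0.2],
                        [0.1, 0.2, 2.5],
                        [1.5, 0.3, 0.2]]])

# Hypothetical label map; a real model exposes this as model.config.id2label.
id2label = {0: "O", 1: "B-ENT", 2: "I-ENT"}

# Argmax over the last (label) dimension gives one predicted id per token.
pred_ids = logits.argmax(dim=-1)[0].tolist()
labels = [id2label[i] for i in pred_ids]
print(labels)  # ['O', 'B-ENT', 'I-ENT', 'O']
```

With a real model, you would also align these per-token labels back to the original words using the tokenizer's offset mappings, since subword tokenization can split one word into several tokens.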