Use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="peanutacake/ajmc_ner_de")
```
```python
# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("peanutacake/ajmc_ner_de")
model = AutoModelForTokenClassification.from_pretrained("peanutacake/ajmc_ner_de")
```

Model Trained Using AutoTrain

  • Problem type: Entity Extraction
  • Model ID: 53413125973
  • CO2 Emissions (in grams): 1.2160

Validation Metrics

  • Loss: 0.109
  • Accuracy: 0.976
  • Precision: 0.000
  • Recall: 0.000
  • F1: 0.000

Usage

You can use cURL to access this model:

```shell
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "I love AutoTrain"}' \
  https://api-inference.huggingface.co/models/peanutacake/autotrain-ajmc_ner_de-53413125973
```
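The same request can be made from Python without extra dependencies. This is a minimal sketch using only the standard library; `build_request` is a hypothetical helper name, and `YOUR_API_KEY` must be replaced with a real Hugging Face token before sending:

```python
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/peanutacake/autotrain-ajmc_ner_de-53413125973"

def build_request(text, api_key):
    """Build a POST request mirroring the cURL command above."""
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send the request:
# with urllib.request.urlopen(build_request("I love AutoTrain", "YOUR_API_KEY")) as resp:
#     print(json.load(resp))
```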

Or the Python API:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

model = AutoModelForTokenClassification.from_pretrained(
    "peanutacake/autotrain-ajmc_ner_de-53413125973", use_auth_token=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "peanutacake/autotrain-ajmc_ner_de-53413125973", use_auth_token=True
)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
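For token classification, `outputs.logits` has shape `(batch, sequence_length, num_labels)`, and the predicted label for each token is the argmax over the last dimension. The sketch below shows that step with plain Python lists standing in for the logits tensor and a hypothetical `id2label` mapping (the model's real mapping is in `model.config.id2label`):

```python
# Assumed example mapping; the actual labels come from model.config.id2label
id2label = {0: "O", 1: "B-pers", 2: "I-pers"}

def decode(logits_per_token):
    """Map each token's row of logits to its highest-scoring label."""
    return [
        id2label[max(range(len(row)), key=row.__getitem__)]
        for row in logits_per_token
    ]

# Two tokens, three label scores each (synthetic values for illustration)
fake_logits = [[2.0, 0.1, 0.3], [0.2, 3.1, 0.5]]
print(decode(fake_logits))  # -> ['O', 'B-pers']
```

With real model output, the same idea applies via `outputs.logits.argmax(dim=-1)` on the tensor.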