
gbpatentdata/patent_entities_ner

Tags: Token Classification · Transformers · Safetensors · English · xlm-roberta

Instructions for using gbpatentdata/patent_entities_ner with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
      • Transformers

    How to use gbpatentdata/patent_entities_ner with Transformers (a usage sketch follows this list):

    # Option 1: use a pipeline as a high-level helper
    from transformers import pipeline

    pipe = pipeline("token-classification", model="gbpatentdata/patent_entities_ner")

    # Option 2: load the tokenizer and model directly
    from transformers import AutoTokenizer, AutoModelForTokenClassification

    tokenizer = AutoTokenizer.from_pretrained("gbpatentdata/patent_entities_ner")
    model = AutoModelForTokenClassification.from_pretrained("gbpatentdata/patent_entities_ner")
  • Notebooks
      • Google Colab
      • Kaggle
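
A minimal usage sketch (not from the model card): the example sentence is a hypothetical placeholder, and aggregation_strategy="simple" is an optional pipeline setting that merges sub-word tokens back into whole entity spans.

    # Sketch only: the input text and aggregation choice are illustrative assumptions.
    from transformers import pipeline

    pipe = pipeline(
        "token-classification",
        model="gbpatentdata/patent_entities_ner",
        aggregation_strategy="simple",  # group sub-word pieces into entity spans
    )

    text = "Improvements in or relating to internal combustion engines."  # hypothetical patent text
    for entity in pipe(text):
        print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))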
patent_entities_ner (2.26 GB)
  • 1 contributor
History: 22 commits
Latest commit: Update README.md by gbpatentdata (0b8ed7d, verified, over 1 year ago)
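
Individual files from the listing below can be fetched without cloning the whole 2.26 GB repository. A minimal sketch using huggingface_hub; the choice of test_set_predictions.json is just an example:

    # Sketch only: downloads a single file from the model repo into the local cache;
    # any filename from the listing below works the same way.
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="gbpatentdata/patent_entities_ner",
        filename="test_set_predictions.json",  # example file; pick any from the list
    )
    print("downloaded to", path)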
  • .gitattributes
    1.57 kB
    Upload tokenizer over 1 year ago
  • README.md
    6.78 kB
    Update README.md over 1 year ago
  • classification_report_lr_5.0000000000e-05_test.csv
    333 Bytes
    Rename classification_report_lr_5.0000000000e-05.csv to classification_report_lr_5.0000000000e-05_test.csv over 1 year ago
  • classification_report_lr_5.0000000000e-05_val.csv
    333 Bytes
    validation results over 1 year ago
  • config.json
    1.21 kB
    Upload XLMRobertaForTokenClassification over 1 year ago
  • data_split_test.csv
    278 kB
    Rename data_splits:test.csv to data_split_test.csv over 1 year ago
  • data_split_train.csv
    845 kB
    Rename data_splits:train.csv to data_split_train.csv over 1 year ago
  • data_split_val.csv
    260 kB
    Rename data_splits:val.csv to data_split_val.csv over 1 year ago
  • labelled_data.conll (see the reading sketch after this list)
    2.45 MB
    data over 1 year ago
  • model.safetensors
    2.24 GB
    Upload XLMRobertaForTokenClassification over 1 year ago
  • sentencepiece.bpe.model
    5.07 MB
    Upload tokenizer over 1 year ago
  • special_tokens_map.json
    280 Bytes
    Upload tokenizer over 1 year ago
  • test_set_predictions.json
    411 kB
    add test set predictions and classification report over 1 year ago
  • tokenizer.json
    17.1 MB
    Upload tokenizer over 1 year ago
  • tokenizer_config.json
    1.15 kB
    Upload tokenizer over 1 year ago
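
The labelled_data.conll file above contains the labelled data in CoNLL format. A minimal reading sketch, assuming the common layout of one token/tag pair per line with blank lines separating sentences; confirm the actual column layout by inspecting the file:

    # Sketch only: assumes whitespace-separated columns with the token first
    # and the tag last, and blank lines as sentence boundaries.
    def read_conll(path):
        sentences, tokens, tags = [], [], []
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:  # blank line ends the current sentence
                    if tokens:
                        sentences.append((tokens, tags))
                        tokens, tags = [], []
                    continue
                parts = line.split()
                tokens.append(parts[0])
                tags.append(parts[-1])
        if tokens:  # flush a trailing sentence with no final blank line
            sentences.append((tokens, tags))
        return sentences

    data = read_conll("labelled_data.conll")
    print(len(data), "sentences loaded")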