
Raccord/scenAIrio

Tags: Text Classification · Transformers · Safetensors · PyTorch · French · roberta · text-embeddings-inference

Instructions for using Raccord/scenAIrio with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • Transformers

    How to use Raccord/scenAIrio with Transformers (see the inference sketch after this list):

    # Option 1: use a pipeline as a high-level helper
    from transformers import pipeline

    pipe = pipeline("text-classification", model="Raccord/scenAIrio")

    # Option 2: load the tokenizer and model directly
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("Raccord/scenAIrio")
    model = AutoModelForSequenceClassification.from_pretrained("Raccord/scenAIrio")
  • Notebooks
  • Google Colab
  • Kaggle
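A minimal inference sketch, assuming the pipeline from the Transformers snippet above; the French example sentence is illustrative, and the label names depend on this model's config.json, which this page does not document:

    from transformers import pipeline

    pipe = pipeline("text-classification", model="Raccord/scenAIrio")

    # Classify a French sentence; inspect the returned label names, since the
    # label set is defined in the model's config rather than documented here.
    result = pipe("Le détective entre lentement dans la pièce sombre.")
    print(result)  # e.g. [{'label': '...', 'score': 0.97}]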

You need to agree to share your contact information to access this model.

This repository is publicly accessible, but you must accept the conditions to access its files and content. Log in or sign up on the Hub to review the conditions and request access.

Gated model: until access is granted, you can list the repository's files but not download them.
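A sketch of programmatic access under these conditions, assuming the huggingface_hub client and a personal access token (the "hf_..." value below is a placeholder):

    from huggingface_hub import list_repo_files
    from transformers import AutoModelForSequenceClassification

    # Listing the gated repository's files works once you are authenticated
    files = list_repo_files("Raccord/scenAIrio", token="hf_...")  # placeholder token
    print(files)

    # Downloading the weights additionally requires accepting the access
    # conditions on the Hub under the same account as the token
    model = AutoModelForSequenceClassification.from_pretrained(
        "Raccord/scenAIrio",
        token="hf_...",  # placeholder token
    )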

Preview of files found in this repository
  • .gitattributes (1.52 kB) - initial commit, about 2 years ago
  • README.md (2.44 kB) - Update README.md, about 2 years ago
  • config.json (927 Bytes) - Upload RobertaForSequenceClassification, about 2 years ago
  • merges.txt (456 kB) - Upload tokenizer, about 2 years ago
  • model.safetensors (499 MB) - Upload RobertaForSequenceClassification, about 2 years ago
  • special_tokens_map.json (295 Bytes) - Upload tokenizer, about 2 years ago
  • tokenizer.json (2.11 MB) - Upload tokenizer, about 2 years ago
  • tokenizer_config.json (1.29 kB) - Upload tokenizer, about 2 years ago
  • vocab.json (798 kB) - Upload tokenizer, about 2 years ago
  • vocab.txt (232 kB) - Upload tokenizer, about 2 years ago