Instructions for using dev-analyzer/file_path_model with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use dev-analyzer/file_path_model with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="dev-analyzer/file_path_model")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("dev-analyzer/file_path_model")
model = AutoModelForSequenceClassification.from_pretrained("dev-analyzer/file_path_model")
```

- Notebooks
- Google Colab
- Kaggle
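Calling the pipeline above downloads the model, so as a minimal offline sketch, here is how a text-classification pipeline turns a model's raw logits into the `{"label": ..., "score": ...}` result it returns. The logits and the `id2label` mapping are made-up placeholders; the real values come from dev-analyzer/file_path_model and its config.

```python
import math

# Hypothetical raw scores, one per class, standing in for model output.
logits = [2.0, 0.5]
# Placeholder label names; the real mapping lives in the model's config.
id2label = {0: "LABEL_0", 1: "LABEL_1"}

# Softmax converts logits into probabilities that sum to 1.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The pipeline reports the highest-probability class and its score.
best = max(range(len(probs)), key=probs.__getitem__)
result = {"label": id2label[best], "score": probs[best]}
print(result)
```

With a real checkpoint, `pipe("some input text")` performs these same steps internally and returns a list of such dictionaries.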
- Xet hash: a55b6148d0e89beda4579360e751c67c10c5913de34d3123324453444a816557
- Size of remote file: 438 MB
- SHA256: 5d90caffa4c7e643c2b75655cb2dbe26a51f826a9d55684ec8176ea428724e9f
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, which accelerates uploads and downloads.
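To illustrate the idea of chunk-level deduplication, here is a toy sketch of a content-addressed chunk store. It is a simplification under stated assumptions, not Xet's actual implementation: Xet uses content-defined chunk boundaries, while this version uses fixed 4-byte chunks purely to show why only new chunks need to be transferred.

```python
import hashlib

CHUNK_SIZE = 4  # toy chunk size; real systems use much larger, variable chunks

store = {}  # content-addressed chunk store: sha256 hex digest -> chunk bytes

def upload(data: bytes) -> int:
    """Store only chunks not already present; return bytes actually sent."""
    sent = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:  # skip chunks the store already holds
            store[digest] = chunk
            sent += len(chunk)
    return sent

first = upload(b"AAAABBBBCCCC")   # all three chunks are new
second = upload(b"AAAABBBBDDDD")  # only the "DDDD" chunk is new
print(first, second)
```

The second upload shares two of its three chunks with the first, so only one chunk's worth of data is transferred; this is the mechanism that makes re-uploading a slightly modified large file cheap.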