Instructions for using Jibbscript/privacy-filter-oai with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Jibbscript/privacy-filter-oai with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="Jibbscript/privacy-filter-oai")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Jibbscript/privacy-filter-oai")
model = AutoModelForTokenClassification.from_pretrained("Jibbscript/privacy-filter-oai")
```
- Transformers.js
How to use Jibbscript/privacy-filter-oai with Transformers.js:
```javascript
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate pipeline
const pipe = await pipeline('token-classification', 'Jibbscript/privacy-filter-oai');
```
- Notebooks
- Google Colab
- Kaggle
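Once the Transformers pipeline above is loaded, its output can be merged back into the input text to redact detected spans. A minimal sketch of that post-processing step is below; the entity labels (`NAME`, `EMAIL`) and the example spans are assumptions for illustration only — the real label set comes from the model's `config.id2label`, and the dict shape matches a token-classification pipeline run with `aggregation_strategy="simple"`.

```python
# Post-process token-classification pipeline output into redacted text.
# NOTE: the labels "NAME" and "EMAIL" below are hypothetical placeholders;
# the actual labels depend on Jibbscript/privacy-filter-oai's config.

def redact(text, entities):
    """Replace each detected character span with a [LABEL] placeholder.

    `entities` follows the shape returned by a Transformers
    token-classification pipeline with aggregation_strategy="simple":
    dicts with "entity_group", "start", and "end" keys.
    """
    out = []
    last = 0
    for ent in sorted(entities, key=lambda e: e["start"]):
        out.append(text[last:ent["start"]])   # keep text before the span
        out.append(f'[{ent["entity_group"]}]')  # mask the span itself
        last = ent["end"]
    out.append(text[last:])                    # keep the trailing text
    return "".join(out)


text = "Contact Jane Doe at jane@example.com."
# In practice: entities = pipe(text) with aggregation_strategy="simple"
entities = [
    {"entity_group": "NAME", "start": 8, "end": 16, "score": 0.99},
    {"entity_group": "EMAIL", "start": 20, "end": 36, "score": 0.98},
]
print(redact(text, entities))  # Contact [NAME] at [EMAIL].
```

The same merging logic applies to the Transformers.js pipeline, whose results use the same `start`/`end` character offsets.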
- Xet hash: 62512595181f4e3989d37b9082a29807d718350d5e9ddd89277895d4430d822e
- Size of remote file: 1.62 GB
- SHA256: 50f4c8c7f3c27fbc1fe16d4f74f6f7c3b74ba8f18a262e8b6911854c64c33a6d