Instructions to use felflare/bert-restore-punctuation with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use felflare/bert-restore-punctuation with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="felflare/bert-restore-punctuation")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("felflare/bert-restore-punctuation")
model = AutoModelForTokenClassification.from_pretrained("felflare/bert-restore-punctuation")
```
- Notebooks
- Google Colab
- Kaggle
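Note that the `pipeline` call above returns a list of per-token predictions, not punctuated text, so some post-processing is needed. Here is a minimal sketch of stitching predictions back into a sentence, assuming the model's labels encode punctuation plus casing (e.g. `OU` = no punctuation, capitalize; `.O` = append a period, lowercase). The `restore` helper and the sample predictions are illustrative, not real model output:

```python
def restore(preds):
    """Rebuild punctuated text from token-classification predictions.

    Each prediction is assumed to look like one pipeline output entry:
    {"word": "hello", "entity": "OU"}, where the label's first character
    is the punctuation to append ('O' = none) and the second is the
    casing of the token ('U' = capitalize, 'O' = lowercase).
    """
    out = []
    for p in preds:
        word = p["word"]
        punct, case = p["entity"][0], p["entity"][1]
        if case == "U":
            word = word.capitalize()
        if punct != "O":
            word += punct  # append predicted punctuation mark
        out.append(word)
    return " ".join(out)

# Illustrative input (hypothetical predictions, not real model output):
sample = [
    {"word": "hello", "entity": "OU"},
    {"word": "world", "entity": ".O"},
]
print(restore(sample))  # → Hello world.
```

In practice you would feed `pipe(text)` into `restore` instead of the hand-written sample; check the model card for the exact label set before relying on this scheme.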
Don't actually believe this works.
I just constantly get "CUDA set to True but not available" even when it is available.
I call this program out as a fake
Wasted an entire night trying to get this bag of sh!t to work, only to find out it's been abandoned. Well, thanks for letting everyone know.
I have the same problem :(
This thing works pretty much perfectly for me, especially for extremely large text files.
Here's a Colab notebook I worked up for quick use of rpunct. The referenced text is a 2.7 MB text file, entirely lowercase and without punctuation, and it only takes about 5 minutes to repunctuate the entire document. See if it works for you: https://colab.research.google.com/drive/17u5XCLhhY_BCijxpVuZkjzgC9mvl6vXQ?usp=sharing
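One reason large files work at all is that BERT-based models only accept around 512 tokens per pass, so a multi-megabyte document has to be processed in pieces (rpunct does this internally; if you drive the Transformers model yourself you need something similar). A rough word-based chunker with overlapping windows, using illustrative sizes rather than values taken from rpunct:

```python
def chunk_words(text, size=250, overlap=30):
    """Split text into overlapping word chunks small enough to fit a
    512-token model input. `size` and `overlap` are illustrative values,
    not rpunct's actual settings."""
    words = text.split()
    step = size - overlap  # advance less than `size` so chunks overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break  # the final window already covers the end of the text
    return chunks

# 600 words -> three overlapping chunks of at most 250 words each
chunks = chunk_words(" ".join(["word"] * 600))
print(len(chunks))  # → 3
```

The overlap exists so that punctuation decisions near a chunk boundary still see some context from the neighboring chunk; when merging results you would discard the overlapping region from one side.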