Remeinium/WWHO

Tags: Feature Extraction, Transformers, Sinhala, Hindi, English, tokenizer, WWHO, SGPE, linguis_trie, token, tokenization, Syllable, remeinium, transformer, linguistics, NLP, sinhala, hindi, english, BPE, GPE, Eval Results (legacy)

Instructions for using Remeinium/WWHO with libraries, inference providers, notebooks, and local apps.

  • Libraries
  • Transformers

    How to use Remeinium/WWHO with Transformers:

    # Use a pipeline as a high-level helper
    from transformers import pipeline

    pipe = pipeline("feature-extraction", model="Remeinium/WWHO")

    # Or load the model directly
    from transformers import AutoModel

    model = AutoModel.from_pretrained("Remeinium/WWHO", dtype="auto")
  • Notebooks
  • Google Colab
  • Kaggle
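A feature-extraction pipeline returns nested lists of per-token embedding vectors (one list per input text, one vector per token). A common next step is mean-pooling those vectors into a single sentence embedding. The sketch below shows that pooling on a toy 3-token, 4-dimensional list standing in for real `pipe(...)` output; the shapes and values are illustrative assumptions, not output from this model:

```python
# Toy stand-in for pipe("some text")[0]: 3 tokens, 4 dimensions each.
token_embeddings = [
    [0.1, 0.2, 0.3, 0.4],
    [0.5, 0.6, 0.7, 0.8],
    [0.9, 1.0, 1.1, 1.2],
]

# Mean-pool: average each dimension across all tokens.
dim = len(token_embeddings[0])
sentence_vector = [
    sum(tok[i] for tok in token_embeddings) / len(token_embeddings)
    for i in range(dim)
]
print(sentence_vector)  # ≈ [0.5, 0.6, 0.7, 0.8]
```

With the actual pipeline, the same pooling would apply to `pipe(text)[0]` for each input text.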
WWHO
18.5 MB
  • 1 contributor
History: 7 commits
Latest commit: "Update README.md" by thekusaldarshana (b3a398c, verified, 2 months ago)
  • .gitattributes (1.52 kB) · "initial commit" · 3 months ago
  • EVALUATION.md (18.9 kB) · "Seperate Before you Compress" · 2 months ago
  • LICENSE (9.14 kB) · "Syllable is the Token" · 3 months ago
  • README.md (5.93 kB) · "Update README.md" · 2 months ago
  • encoder.py (13.1 kB) · "Seperate Before you Compress" · 2 months ago
  • gpe_trainer.py (28.4 kB) · "Seperate Before you Compress" · 2 months ago
  • linguis_trie.py (11.1 kB) · "WWHO" · 2 months ago
  • router.py (5.75 kB) · "Seperate Before you Compress" · 2 months ago
  • tokenizer.json (8.07 MB) · "WWHO" · 2 months ago
  • vocab.json (10.4 MB) · "WWHO" · 2 months ago