.. currentmodule:: pythainlp.tokenize
.. _tokenize-doc:

pythainlp.tokenize
==================

The :mod:`pythainlp.tokenize` module provides functions and classes for tokenizing Thai text at several levels: sentences, paragraphs, words, syllables, and subwords. It is a core component of the PyThaiNLP natural language processing library.

Modules
-------
.. autofunction:: sent_tokenize
   :noindex:

Splits Thai text into sentences by detecting sentence boundaries, a prerequisite for sentence-level segmentation and analysis.

.. autofunction:: paragraph_tokenize
   :noindex:

Segments text into paragraphs, which is useful for document-level analysis and summarization.

.. autofunction:: subword_tokenize
   :noindex:

Tokenizes text into subword units, which is helpful for tasks such as training subword embeddings.

.. autofunction:: syllable_tokenize
   :noindex:

Divides text into syllables, the basic phonetic units of the Thai language.

.. autofunction:: word_tokenize
   :noindex:

Splits text into words, the fundamental operation for Thai text analysis.

.. autofunction:: word_detokenize
   :noindex:

Reverses tokenization, reconstructing running text from a list of tokens. Useful for text generation tasks.

.. autoclass:: Tokenizer
   :members:

The :class:`Tokenizer` class lets you customize tokenization, for example by supplying a custom dictionary or selecting a specific engine, and reuse that configuration across calls.

Tokenization Engines
--------------------

This module offers multiple tokenization engines, grouped by the level of text they segment.

Sentence level
--------------
**crfcut**

.. automodule:: pythainlp.tokenize.crfcut
   :members:

A sentence-level tokenizer that uses Conditional Random Fields (CRF) to detect sentence boundaries in Thai text.

**thaisumcut**

.. automodule:: pythainlp.tokenize.thaisumcut
   :members:

A sentence tokenizer based on a maximum entropy model, suitable for sentence boundary detection in Thai text.

Word level
----------

**attacut**

.. automodule:: pythainlp.tokenize.attacut
   :members:

A fast word-level tokenizer with accurate word boundary detection in Thai text.

**deepcut**

.. automodule:: pythainlp.tokenize.deepcut
   :members:

A word tokenizer that uses a deep neural network for word segmentation, favoring accuracy over speed.

**multi_cut**

.. automodule:: pythainlp.tokenize.multi_cut
   :members:

A dictionary-based maximum-matching tokenizer that can also enumerate multiple possible segmentations of the same text.

**nlpo3**

.. automodule:: pythainlp.tokenize.nlpo3
   :members:

A word tokenizer backed by nlpO3, PyThaiNLP's Rust implementation, offering fast word boundary detection.
**longest**

.. automodule:: pythainlp.tokenize.longest
   :members:

A dictionary-based tokenizer that selects the longest matching word at each position in the text.
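The longest-matching idea can be sketched in pure Python with a toy dictionary (an illustration only, not PyThaiNLP's implementation; the dictionary and function name are invented):

```python
# Toy illustration of longest-matching word segmentation.
# TOY_DICT and longest_match are hypothetical, not part of PyThaiNLP.
TOY_DICT = {"ผม", "รัก", "ภาษา", "ภาษาไทย", "ไทย"}
MAX_LEN = max(len(w) for w in TOY_DICT)

def longest_match(text):
    tokens, i = [], 0
    while i < len(text):
        # Try the longest dictionary word starting at position i.
        for j in range(min(len(text), i + MAX_LEN), i, -1):
            if text[i:j] in TOY_DICT:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: emit it as a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

longest_match("ผมรักภาษาไทย")  # ['ผม', 'รัก', 'ภาษาไทย']
```

Note that "ภาษาไทย" is preferred over its shorter dictionary prefixes "ภาษา" and "ไทย" because matching always starts from the longest candidate.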
**pyicu**

.. automodule:: pythainlp.tokenize.pyicu
   :members:

A word tokenizer that wraps the ICU library (via PyICU) for Thai text segmentation.

**nercut**

.. automodule:: pythainlp.tokenize.nercut
   :members:

A tokenizer tailored to Named Entity Recognition (NER), keeping tokens that belong to the same named entity together.

**sefr_cut**

.. automodule:: pythainlp.tokenize.sefr_cut
   :members:

A word tokenizer based on the SEFR (Stacked Ensemble Filter and Refine) approach, with a focus on segmentation accuracy.

**oskut**

.. automodule:: pythainlp.tokenize.oskut
   :members:

A word tokenizer that applies a pre-trained model and is designed to cope with out-of-domain text.
**newmm (Default)**

.. automodule:: pythainlp.tokenize.newmm
   :members:

The default word tokenization engine: dictionary-based maximal matching constrained by Thai Character Clusters, balancing accuracy and speed for most use cases.

Subword level
-------------

**tcc**

.. automodule:: pythainlp.tokenize.tcc
   :members:

Tokenizes text into Thai Character Clusters (TCCs), subword units that cannot be further separated.
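The clustering idea can be sketched with a naive rule that attaches Unicode combining marks to the preceding base character (an illustration only; the actual TCC rules are richer than this):

```python
import unicodedata

def naive_clusters(text):
    # Hypothetical helper: group each combining mark (Unicode category
    # "Mn", e.g. Thai above/below vowels and tone marks) with the
    # preceding base character.
    clusters = []
    for ch in text:
        if clusters and unicodedata.category(ch) == "Mn":
            clusters[-1] += ch
        else:
            clusters.append(ch)
    return clusters

naive_clusters("ผมรัก")  # ['ผ', 'ม', 'รั', 'ก']
```

Here the vowel ◌ั attaches to its consonant ร, so the pair is never split, which is the property TCC segmentation guarantees.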
**tcc+**

.. automodule:: pythainlp.tokenize.tcc_p
   :members:

A TCC tokenizer with additional rules for more precise subword segmentation.

**etcc**

.. automodule:: pythainlp.tokenize.etcc
   :members:

An Enhanced Thai Character Cluster (eTCC) tokenizer for subword-level analysis.

**han_solo**

.. automodule:: pythainlp.tokenize.han_solo
   :members:

A Thai syllable segmenter that can be used for subword-level tokenization.