| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def forward(self, inputs, states, src_seq_lengths=None):
"""Sample by beam search.
Parameters
----------
inputs : mx.np.ndarray
The initial input of the decoder. Shape is (batch_size,).
states : Object that contains mx.np.ndarrays
The initial states of th... | Sample by beam search.
Parameters
----------
inputs : mx.np.ndarray
The initial input of the decoder. Shape is (batch_size,).
states : Object that contains mx.np.ndarrays
The initial states of the decoder.
src_seq_lengths : mx.np.ndarray
The s... | forward | python | dmlc/gluon-nlp | src/gluonnlp/sequence_sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/sequence_sampler.py | Apache-2.0 |
def forward(self, samples, valid_length, outputs, scores, step, beam_alive_mask,
states, batch_shift):
"""
Parameters
----------
samples : mx.np.ndarray
The current samples generated by beam search.
Shape (batch_size, beam_size, L).
valid_... |
Parameters
----------
samples : mx.np.ndarray
The current samples generated by beam search.
Shape (batch_size, beam_size, L).
valid_length : mx.np.ndarray
The current valid lengths of the samples
outputs : mx.np.ndarray
Outputs fr... | forward | python | dmlc/gluon-nlp | src/gluonnlp/sequence_sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/sequence_sampler.py | Apache-2.0 |
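The two rows above are the beam-search sampler and its per-step update. A minimal NumPy sketch of one update step, under the assumption that `log_probs` holds next-token log-probabilities per alive beam; the function and variable names here are illustrative, not the library's API:

```python
import numpy as np

def beam_step(cum_scores, log_probs, beam_size):
    """One beam-search update: keep the top-k continuations per batch item.

    cum_scores : (batch_size, beam_size) cumulative scores of alive beams
    log_probs  : (batch_size, beam_size, vocab_size) next-token log-probs
    """
    batch_size, _, vocab_size = log_probs.shape
    # Add the running score of every beam to each candidate token.
    candidates = cum_scores[:, :, None] + log_probs             # (B, K, V)
    flat = candidates.reshape(batch_size, -1)                   # (B, K*V)
    top_idx = np.argsort(flat, axis=1)[:, ::-1][:, :beam_size]  # best K ids
    new_scores = np.take_along_axis(flat, top_idx, axis=1)
    beam_ids = top_idx // vocab_size    # which beam each survivor extends
    word_ids = top_idx % vocab_size     # which token extends it
    return new_scores, beam_ids, word_ids
```

The `batch_shift` argument in the row above is, roughly, `arange(batch_size) * beam_size`: it turns the per-batch `beam_ids` into global row indices so samples and decoder states can be gathered in one shot.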
def _pad_arrs_to_max_length(arrs, pad_axis, pad_val, use_shared_mem, dtype, round_to=None):
"""Inner Implementation of the Pad batchify
Parameters
----------
arrs : list
pad_axis : int
pad_val : number
use_shared_mem : bool, default False
dtype :
round_to : int
Returns
----... | Inner Implementation of the Pad batchify
Parameters
----------
arrs : list
pad_axis : int
pad_val : number
use_shared_mem : bool, default False
dtype :
round_to : int
Returns
-------
ret : NDArray
original_length : NDArray
| _pad_arrs_to_max_length | python | dmlc/gluon-nlp | src/gluonnlp/data/batchify.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | Apache-2.0 |
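The padding helper above is truncated in this dump; here is a minimal sketch of the same idea for 1-D inputs, padding to the longest (optionally rounded) length and returning the original lengths. Names are illustrative:

```python
import numpy as np

def pad_to_max_length(arrs, pad_val=0, round_to=None):
    """Pad 1-D arrays to the longest length; return data and valid lengths."""
    original_length = np.array([len(a) for a in arrs], dtype=np.int64)
    max_len = int(original_length.max())
    if round_to is not None:
        # Round the padded length up to a multiple of `round_to`,
        # which helps kernels that prefer aligned shapes.
        max_len = -(-max_len // round_to) * round_to
    out = np.full((len(arrs), max_len), pad_val,
                  dtype=np.asarray(arrs[0]).dtype)
    for i, a in enumerate(arrs):
        out[i, :len(a)] = a
    return out, original_length

batch, lengths = pad_to_max_length([np.arange(3), np.arange(5)], pad_val=-1)
# batch.shape == (2, 5); lengths == [3, 5]
```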
def __call__(self, data):
"""Batchify the input data.
The input can be a list of numpy.ndarray, a list of numbers, or a list of
mxnet.nd.NDArray. Inputting mxnet.nd.NDArray is discouraged as each
array needs to be converted to numpy for efficient padding.
The arrays will be padded to t... | Batchify the input data.
The input can be a list of numpy.ndarray, a list of numbers, or a list of
mxnet.nd.NDArray. Inputting mxnet.nd.NDArray is discouraged as each
array needs to be converted to numpy for efficient padding.
The arrays will be padded to the largest dimension at `axis` and th... | __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/batchify.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | Apache-2.0 |
def __call__(self, data):
"""Batchify the input data.
Parameters
----------
data : list
The samples to batchify. Each sample should contain N attributes.
Returns
-------
ret : tuple
A tuple of length N. Contains the batchified result of ea... | Batchify the input data.
Parameters
----------
data : list
The samples to batchify. Each sample should contain N attributes.
Returns
-------
ret : tuple
A tuple of length N. Contains the batchified result of each attribute in the input.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/batchify.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | Apache-2.0 |
def __call__(self, data: t_List[t_Dict]) -> t_Dict:
"""
Parameters
----------
data
The samples to batchify. Each sample should be a dictionary
Returns
-------
ret
The resulting dictionary that stores the merged samples.
"""
... |
Parameters
----------
data
The samples to batchify. Each sample should be a dictionary
Returns
-------
ret
The resulting dictionary that stores the merged samples.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/batchify.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | Apache-2.0 |
def __call__(self, data: t_List[t_NamedTuple]) -> t_NamedTuple:
"""Batchify the input data.
Parameters
----------
data
The samples to batchify. Each sample should be a namedtuple.
Returns
-------
ret
A namedtuple of length N. Contains the ... | Batchify the input data.
Parameters
----------
data
The samples to batchify. Each sample should be a namedtuple.
Returns
-------
ret
A namedtuple of length N. Contains the batchified result of each attribute in the input.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/batchify.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | Apache-2.0 |
def _words_match_regex(words: List[str], ignore_case=False, replace_white_space=False) -> Pattern:
"""Obtain the regex that finds whether a given corpus contains any word in the input words
Parameters
----------
words
Returns
-------
regex
"""
words = [ele for ele in words if ele]... | Obtain a regex that detects whether a given corpus contains any of the input words
Parameters
----------
words
Returns
-------
regex
| _words_match_regex | python | dmlc/gluon-nlp | src/gluonnlp/data/filtering.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/filtering.py | Apache-2.0 |
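A sketch of how such a matcher can be built with `re`, assuming whole-word matching of any of the given words is wanted (the exact flags and escaping in `filtering.py` may differ):

```python
import re
from typing import List, Pattern

def words_match_regex(words: List[str], ignore_case: bool = False) -> Pattern:
    """Compile a regex that matches if any of `words` occurs as a whole word."""
    words = [re.escape(w) for w in words if w]          # drop empties, escape
    pattern = r'\b(?:' + '|'.join(words) + r')\b'
    return re.compile(pattern, re.IGNORECASE if ignore_case else 0)

rx = words_match_regex(['http', 'www'], ignore_case=True)
assert rx.search('See WWW.example.com') is not None
```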
def __call__(self, corpus: str):
"""
Parameters
----------
corpus
Input corpus
Returns
-------
lang_label
The ISO-639 1 code of the predicted language
score
The score of the prediction
"""
if self._use_... |
Parameters
----------
corpus
Input corpus
Returns
-------
lang_label
The ISO-639 1 code of the predicted language
score
The score of the prediction
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/filtering.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/filtering.py | Apache-2.0 |
def _dataset_worker_fn(urls, dataset_fn, batch_sampler_fn):
"""Function to generate datasets and batch sampler for each worker."""
global _manager, _dataset
dataset = dataset_fn(urls)
batch_sampler = batch_sampler_fn(dataset)
if _manager:
dataset = _manager.list(zip(*dataset._data))
_dat... | Function to generate datasets and batch sampler for each worker. | _dataset_worker_fn | python | dmlc/gluon-nlp | src/gluonnlp/data/loading.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/loading.py | Apache-2.0 |
def _batch_worker_fn(samples, batchify_fn, dataset=None, counter=None):
"""Function for processing data in worker process."""
# pylint: disable=unused-argument
# each worker process is required to fork a new MXIndexedRecordIO handle
# preserving the dataset as a global variable can save tons of ov... | Function for processing data in worker process. | _batch_worker_fn | python | dmlc/gluon-nlp | src/gluonnlp/data/loading.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/loading.py | Apache-2.0 |
def _push_next(self):
"""Assign next batch workload to workers."""
if self._batch_iter is not None:
r = next(self._batch_iter, None)
else:
r = None
if r is None:
result = self._next_dataset()
if result is None:
return
... | Assign next batch workload to workers. | _push_next | python | dmlc/gluon-nlp | src/gluonnlp/data/loading.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/loading.py | Apache-2.0 |
def _push_next_dataset(self):
"""Assign next dataset workload to workers."""
current_dataset_idx = self._sent_idx * self._circle_length
if current_dataset_idx < self._num_datasets:
circle_length = min(self._circle_length,
self._num_datasets - current_d... | Assign next dataset workload to workers. | _push_next_dataset | python | dmlc/gluon-nlp | src/gluonnlp/data/loading.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/loading.py | Apache-2.0 |
def _next_dataset(self):
"""Retrieve the next dataset. Returns None if no dataset is available."""
if self._rcvd_idx == self._sent_idx:
assert not self._data_buffer, 'Data buffer should be empty at this moment'
return None
assert self._rcvd_idx < self._sent_idx, \
... | Retrieve the next dataset. Returns None if no dataset is available. | _next_dataset | python | dmlc/gluon-nlp | src/gluonnlp/data/loading.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/loading.py | Apache-2.0 |
def __call__(self, max_lengths: Union[int, Sequence[int]],
min_lengths: Union[int, Sequence[int]], num_buckets: int) -> List[int]:
"""Generate bucket keys based on the lengths of sequences and number of buckets.
Parameters
----------
max_lengths
Maximum of l... | Generate bucket keys based on the lengths of sequences and number of buckets.
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
... | __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/sampler.py | Apache-2.0 |
def __call__(self, max_lengths: Union[int, Sequence[int]],
min_lengths: Union[int, Sequence[int]], num_buckets: int) -> List[int]:
r"""This generate bucket keys given that all the buckets have the same width.
Parameters
----------
max_lengths
Maximum of leng... | This generates bucket keys given that all the buckets have the same width.
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
--... | __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/sampler.py | Apache-2.0 |
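A minimal sketch of the constant-width scheme for the single-axis case, assuming bucket keys are the inclusive upper bounds of each bucket:

```python
import math
from typing import List

def const_width_bucket_keys(max_length: int, min_length: int,
                            num_buckets: int) -> List[int]:
    """Bucket upper bounds with (nearly) equal width, ending at max_length."""
    width = max(1, math.ceil((max_length - min_length) / num_buckets))
    keys = [min_length + width * (i + 1) for i in range(num_buckets)]
    keys[-1] = max(keys[-1], max_length)  # make sure the longest sample fits
    return keys

print(const_width_bucket_keys(100, 10, 5))   # [28, 46, 64, 82, 100]
```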
def __call__(self, max_lengths: Union[int, Sequence[int]],
min_lengths: Union[int, Sequence[int]], num_buckets: int) -> List[int]:
r"""This function generates bucket keys with linearly increasing bucket width:
Parameters
----------
max_lengths
Maximum of len... | This function generates bucket keys with linearly increasing bucket width:
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
-... | __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/sampler.py | Apache-2.0 |
def __call__(self, max_lengths: Union[int, Sequence[int]],
min_lengths: Union[int, Sequence[int]], num_buckets: int) -> List[int]:
r"""This function generates bucket keys exponentially increasing bucket width.
Parameters
----------
max_lengths
Maximum of len... | This function generates bucket keys with exponentially increasing bucket width.
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
-... | __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/sampler.py | Apache-2.0 |
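The linear and exponential schemes follow the same pattern with a growing step; a sketch of the exponential variant, assuming each bucket is `ratio` times wider than its predecessor (the library derives the ratio internally; here it is a free parameter):

```python
from typing import List

def exp_width_bucket_keys(max_length: int, min_length: int,
                          num_buckets: int, ratio: float = 1.5) -> List[int]:
    """Bucket upper bounds whose widths grow geometrically by `ratio`."""
    # Solve base * (ratio**0 + ... + ratio**(n-1)) == max_length - min_length
    total = sum(ratio ** i for i in range(num_buckets))
    base = (max_length - min_length) / total
    keys, bound = [], float(min_length)
    for i in range(num_buckets):
        bound += base * ratio ** i
        keys.append(int(round(bound)))
    keys[-1] = max(keys[-1], max_length)
    return keys

print(exp_width_bucket_keys(100, 10, 5))   # e.g. [17, 27, 42, 65, 100]
```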
def __repr__(self):
"""Return a string representing the statistics of the bucketing sampler.
Returns
-------
ret : str
String representing the statistics of the buckets.
"""
ret = '{name}(\n' \
' sample_num={sample_num}, batch_num={batch_num}\n' ... | Return a string representing the statistics of the bucketing sampler.
Returns
-------
ret : str
String representing the statistics of the buckets.
| __repr__ | python | dmlc/gluon-nlp | src/gluonnlp/data/sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/sampler.py | Apache-2.0 |
def _check_special_token_identifier(key):
"""Raise error if the key is not valid as a key for the special token.
Parameters
----------
key
The identifier
"""
if not (key.endswith('_token') and key != '_token'):
raise ValueError('Each key needs to have the form "name_token".'
... | Raise error if the key is not valid as a key for the special token.
Parameters
----------
key
The identifier
| _check_special_token_identifier | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def to_tokens(self, idx: Union[int, Tuple[int], List[int], np.ndarray])\
-> Union[Hashable, List[Hashable]]:
"""Get the tokens correspond to the chosen indices
Parameters
----------
idx
The index used to select the tokens.
Returns
-------
... | Get the tokens corresponding to the chosen indices
Parameters
----------
idx
The index used to select the tokens.
Returns
-------
ret
The tokens of these selected indices.
| to_tokens | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def __getitem__(self, tokens: Union[Hashable, List[Hashable], Tuple[Hashable]])\
-> Union[int, List[int]]:
"""Looks up indices of text tokens according to the vocabulary.
If `unknown_token` of the vocabulary is None, looking up unknown tokens results in KeyError.
Parameters
... | Looks up indices of text tokens according to the vocabulary.
If `unknown_token` of the vocabulary is None, looking up unknown tokens results in KeyError.
Parameters
----------
tokens
A source token or tokens to be converted.
Returns
-------
ret
... | __getitem__ | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def __call__(self, tokens: Union[Hashable, List[Hashable], Tuple[Hashable]])\
-> Union[int, np.ndarray]:
"""Looks up indices of text tokens according to the vocabulary.
Parameters
----------
tokens
A source token or tokens to be converted.
Returns
... | Looks up indices of text tokens according to the vocabulary.
Parameters
----------
tokens
A source token or tokens to be converted.
Returns
-------
ret
A token index or a list of token indices according to the vocabulary.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def to_json(self) -> str:
"""Serialize Vocab object into a json string.
Returns
-------
ret
The serialized json string
"""
vocab_dict = dict()
# Perform sanity check to make sure that we are able to reconstruct the original vocab
for i, tok in... | Serialize Vocab object into a json string.
Returns
-------
ret
The serialized json string
| to_json | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def from_json(cls, json_str: Union[str, bytes, bytearray]) -> 'Vocab':
"""Deserialize Vocab object from json string.
Parameters
----------
json_str
Serialized json string of a Vocab object.
Returns
-------
vocab
The constructed Vocab obje... | Deserialize Vocab object from json string.
Parameters
----------
json_str
Serialized json string of a Vocab object.
Returns
-------
vocab
The constructed Vocab object
| from_json | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
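`to_json` and `from_json` together give a plain-text round trip for vocabularies. A hedged usage sketch; the constructor call is an assumption about the `Vocab` signature:

```python
from gluonnlp.data import Vocab

# Assumption: Vocab accepts a list of tokens (plus default special tokens).
vocab = Vocab(['hello', 'world'])
json_str = vocab.to_json()             # serialize to a json string
restored = Vocab.from_json(json_str)   # reconstruct an equivalent Vocab
assert restored['hello'] == vocab['hello']
```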
def load_vocab(vocab: Union[str, Vocab]) -> Vocab:
"""Quick helper function to load vocabulary from a file.
Parameters
----------
vocab
Returns
-------
"""
if isinstance(vocab, Vocab):
return vocab
elif isinstance(vocab, str):
return Vocab.load(vocab)
else:
... | Quick helper function to load vocabulary from a file.
Parameters
----------
vocab
Returns
-------
| load_vocab | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def get_token_type(tokens: Union[List[str], List[int], List[List[str]],
List[List[int]]]) -> type:
"""
Parameters
----------
tokens
The input tokens.
Returns
-------
token_type
If `tokens` is empty, return `str`.
Otherwise, return ... |
Parameters
----------
tokens
The input tokens.
Returns
-------
token_type
If `tokens` is empty, return `str`.
Otherwise, return `str` if the input is str and `int` if the input is int.
| get_token_type | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/base.py | Apache-2.0 |
def rebuild_offset_from_tokens(sentence: str, tokens: List[str]) \
-> List[Tuple[int, int]]:
"""Recover the offset of the tokens in the original sentence.
If you are using a subword tokenizer, make sure to remove the prefix/postfix of the tokens
before using this function. Also, this does not work ... | Recover the offset of the tokens in the original sentence.
If you are using a subword tokenizer, make sure to remove the prefix/postfix of the tokens
before using this function. Also, this does not work for n-gram-based (n>1) subword
tokenization, i.e.
it works for "gluonnlp" --> ["gluon", "nlp"]
b... | rebuild_offset_from_tokens | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/base.py | Apache-2.0 |
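A sketch of the offset recovery by scanning the sentence left to right, assuming each token (after subword markers are stripped) appears verbatim and in order, as the docstring requires:

```python
from typing import List, Tuple

def rebuild_offsets(sentence: str, tokens: List[str]) -> List[Tuple[int, int]]:
    """Recover (start, end) character offsets of `tokens` inside `sentence`."""
    offsets, pos = [], 0
    for tok in tokens:
        start = sentence.index(tok, pos)   # raises ValueError if not found
        end = start + len(tok)
        offsets.append((start, end))
        pos = end                          # never look behind a matched token
    return offsets

print(rebuild_offsets('gluonnlp is fun', ['gluon', 'nlp', 'is', 'fun']))
# [(0, 5), (5, 8), (9, 11), (12, 15)]
```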
def get_char_offset_from_byte_offset(sentence: str, byte_offsets: List[Tuple[int, int]]):
"""Get the character-level offsets based on the byte-level offsets
Parameters
----------
sentence
The input sentence
byte_offsets
The byte-level offsets
Returns
-------
char_offset... | Get the character-level offsets based on the byte-level offsets
Parameters
----------
sentence
The input sentence
byte_offsets
The byte-level offsets
Returns
-------
char_offsets
The character-level offsets
| get_char_offset_from_byte_offset | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/base.py | Apache-2.0 |
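A sketch of the byte-to-character translation, assuming the offsets refer to the sentence's UTF-8 encoding: first map byte positions to character positions, then translate each pair.

```python
from typing import List, Tuple

def byte_to_char_offsets(sentence: str,
                         byte_offsets: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Translate UTF-8 byte offsets into character offsets.

    Assumes every offset falls on a character boundary."""
    byte_to_char = {}
    nbytes = 0
    for ci, ch in enumerate(sentence):
        byte_to_char[nbytes] = ci          # a char starts at this byte position
        nbytes += len(ch.encode('utf-8'))
    byte_to_char[nbytes] = len(sentence)   # sentinel for end-of-string offsets
    return [(byte_to_char[b], byte_to_char[e]) for b, e in byte_offsets]

s = 'héllo'                                # 'é' is 2 bytes in UTF-8
print(byte_to_char_offsets(s, [(0, 3)]))   # [(0, 2)] -> the substring 'hé'
```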
def encode(self, sentences: SentencesType,
output_type: type = str) \
-> Union[TokensType, TokenIDsType]:
"""Encode the input sentence(s) into multiple tokens.
Parameters
----------
sentences
The sentences to tokenize
output_type
... | Encode the input sentence(s) into multiple tokens.
Parameters
----------
sentences
The sentences to tokenize
output_type
The type of the output tokens.
- str means each token is represented by its original text.
- int means each token is r... | encode | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/base.py | Apache-2.0 |
def encode_with_offsets(self, sentences: SentencesType,
output_type: type = str) \
-> Tuple[Union[TokensType, TokenIDsType], TokenOffsetsType]:
"""Encode the input sentence(s) into multiple tokens. Different from encode, it
will also return the character start and... | Encode the input sentence(s) into multiple tokens. Different from encode, it
will also return the character start and end positions of each token in the original text.
The original text is assumed to be
Here, the default implementation is to use the tokenized result to recover the offsets.
... | encode_with_offsets | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/base.py | Apache-2.0 |
def is_new_version_model_file(model_file_path: str) -> bool:
"""Check whether the model file belongs to the new version of HuggingFace Tokenizers,
i.e., >= 0.8
Parameters
----------
model_file_path
Path to the model file
Returns
-------
is_new_version
Whether the model ... | Check whether the model file belongs to the new version of HuggingFace Tokenizers,
i.e., >= 0.8
Parameters
----------
model_file_path
Path to the model file
Returns
-------
is_new_version
Whether the model file is generated by the new version of huggingface tokenizer.
| is_new_version_model_file | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
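HuggingFace `tokenizers` >= 0.8 serializes the whole tokenizer to a single JSON file, whereas older versions shipped separate vocab/merges files, so a hedged heuristic for this check is simply whether the file parses as JSON:

```python
import json

def looks_like_new_tokenizers_file(model_file_path: str) -> bool:
    """Heuristic: new-style (>= 0.8) HuggingFace tokenizer files are JSON."""
    try:
        with open(model_file_path, 'r', encoding='utf-8') as f:
            json.load(f)
        return True
    except (ValueError, UnicodeDecodeError):
        return False
```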
def hf_encode(model, sentences, output_type: type = str):
"""
Parameters
----------
model
Model object in HuggingFace tokenizer
sentences
Input sentences
output_type
Output type
Returns
-------
ret
"""
is_multi_sentences = isinstance(sentences, list)... |
Parameters
----------
model
Model object in HuggingFace tokenizer
sentences
Input sentences
output_type
Output type
Returns
-------
ret
| hf_encode | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
def is_last_subword(self, tokens):
"""Whether the sub-token is the last sub-token in a split token list.
Only supports the case when the tokenizer is a HuggingFaceBPETokenizer
Parameters
----------
tokens
A single token or a list of tokens
Returns
-... | Whether the sub-token is the last sub-token in a split token list.
Only supports the case when the tokenizer is a HuggingFaceBPETokenizer
Parameters
----------
tokens
A single token or a list of tokens
Returns
-------
ret
The results
... | is_last_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
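With the `</w>` end-of-word suffix convention used by `HuggingFaceBPETokenizer` (see its `__init__` below, where "Sunnyvale" becomes "Sunny vale</w>"), the last-subword test reduces to a suffix check. A sketch for string tokens:

```python
from typing import List, Union

def is_last_subword(tokens: Union[str, List[str]],
                    suffix: str = '</w>') -> Union[bool, List[bool]]:
    """A token ends a word iff it carries the end-of-word suffix,
    e.g. 'Sunny', 'vale</w>' -> False, True."""
    if isinstance(tokens, str):
        return tokens.endswith(suffix)
    return [tok.endswith(suffix) for tok in tokens]

print(is_last_subword(['Sunny', 'vale</w>']))   # [False, True]
```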
def is_first_subword(self, tokens):
"""Whether the sub-token is the first sub-token in a token list.
Only supports the case when the tokenizer is a HuggingFaceWordPieceTokenizer
Parameters
----------
tokens
A single token or a list of tokens
Returns
... | Whether the sub-token is the first sub-token in a token list.
Only supports the case when the tokenizer is a HuggingFaceWordPieceTokenizer
Parameters
----------
tokens
A single token or a list of tokens
Returns
-------
ret
The results
... | is_first_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
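For WordPiece, continuation pieces conventionally carry a '##' prefix, so a first-subword test for string tokens is just the complement of that prefix check (a sketch; the real method also handles id inputs and special tokens):

```python
from typing import List, Union

def is_first_subword(tokens: Union[str, List[str]]) -> Union[bool, List[bool]]:
    """WordPiece continuation pieces start with '##'; everything else starts
    a new word, e.g. 'Sunny', '##vale' -> True, False."""
    if isinstance(tokens, str):
        return not tokens.startswith('##')
    return [not tok.startswith('##') for tok in tokens]

print(is_first_subword(['Sunny', '##vale']))   # [True, False]
```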
def __init__(self, merges_file: Optional[str] = None,
vocab_file: Optional[str] = None,
unk_token: Optional[str] = Vocab.UNK_TOKEN,
suffix: Optional[str] = '</w>',
dropout: Optional[float] = None,
lowercase: bool = False):
"""
... |
Parameters
----------
merges_file
The merges file saved by HuggingFace
vocab_file
Vocabulary file in GluonNLP
unk_token
The unknown token
suffix
The suffix for sub-tokens. For example, "Sunnyvale" will be "Sunny vale</w>"
... | __init__ | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
def is_last_subword(self, tokens: Union[str, int, List[str], List[int]]) \
-> Union[bool, List[bool]]:
"""Whether the token is the last subword token. This can be used for whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
... | Whether the token is the last subword token. This can be used for whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the last subword token in the list of subwords.
| is_last_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
def is_first_subword(self, tokens: Union[str, int, List[str], List[int]]) \
-> Union[bool, List[bool]]:
"""Whether the token is the first subword token in a sequence of subword tokens.
This can be used for implementing whole-word masking.
We won't care about the special tokens
... | Whether the token is the first subword token in a sequence of subword tokens.
This can be used for implementing whole-word masking.
We won't care about the special tokens
Parameters
----------
tokens
Returns
-------
ret
| is_first_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
def is_first_subword(self, tokens: Union[str, int, List[str], List[int]]) \
-> Union[bool, List[bool]]:
"""Whether the token is the first subword token. This can be used to implement
whole-word masking.
Parameters
----------
tokens
The input tokens
... | Whether the token is the first subword token. This can be used to implement
whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the first subword token in the list of subwords
... | is_first_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/sentencepiece.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/sentencepiece.py | Apache-2.0 |
def set_subword_regularization(self, nbest, alpha):
"""Set the subword-regularization parameters
For more details, you may refer to the official SentencePiece library:
https://github.com/google/sentencepiece
Parameters
----------
nbest
alpha
Returns
... | Set the subword-regularization parameters
For more details, you may refer to the official SentencePiece library:
https://github.com/google/sentencepiece
Parameters
----------
nbest
alpha
Returns
-------
| set_subword_regularization | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/sentencepiece.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/sentencepiece.py | Apache-2.0 |
def __getstate__(self):
"""Make the SentencepieceTokenizer pickleble.
We will remove the _spt_cls and _sp_model, which are not picklable, and try to
reconstruct the class via the saved model_path. This behavior is only acceptable for
multiprocessing and should not be used to save sent... | Make the SentencepieceTokenizer picklable.
We will remove the _spt_cls and _sp_model, which are not picklable, and try to
reconstruct the class via the saved model_path. This behavior is only acceptable for
multiprocessing and should not be used to save sentencepiece models. | __getstate__ | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/sentencepiece.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/sentencepiece.py | Apache-2.0 |
def transform_sentence(self, sentence):
"""replace the separator in encoded result with suffix
a@@, b@@, c -> a, b, c</w>
Parameters
----------
sentence
Returns
-------
new_sentence
"""
return [word[:-2] if len(word) > 2 and word[-2:] =... | Replace the separator in the encoded result with the suffix
a@@, b@@, c -> a, b, c</w>
Parameters
----------
sentence
Returns
-------
new_sentence
| transform_sentence | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/subword_nmt.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/subword_nmt.py | Apache-2.0 |
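The return expression above is truncated; a sketch of the full '@@'-to-'</w>' transformation it describes, written as a free function instead of a method:

```python
from typing import List

def transform_sentence(sentence: List[str], suffix: str = '</w>') -> List[str]:
    """Convert '@@' continuation markers to an end-of-word suffix:
    ['a@@', 'b@@', 'c'] -> ['a', 'b', 'c</w>']."""
    return [word[:-2] if len(word) > 2 and word[-2:] == '@@'
            else word + suffix
            for word in sentence]

print(transform_sentence(['a@@', 'b@@', 'c']))   # ['a', 'b', 'c</w>']
```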
def is_last_subword(self, tokens: Union[str, int, List[str], List[int]]) \
-> Union[bool, List[bool]]:
"""Whether the token is the last subword token. This can be used
for whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
... | Whether the token is the last subword token. This can be used
for whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the last subword token in the list of subwords
| is_last_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/subword_nmt.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/subword_nmt.py | Apache-2.0 |
def is_first_subword(self, tokens: Union[str, int, List[str], List[int]]) \
-> Union[bool, List[bool]]:
"""Whether the token is the first subword token in a list of subword tokens
Parameters
----------
tokens
The input tokens
Returns
-------
... | Whether the token is the first subword token in a list of subword tokens
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the first subword token in a sequence of subword tokens
that construct the... | is_first_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/yttm.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/yttm.py | Apache-2.0 |
def __getstate__(self):
"""Support multiprocessing by making it pickleble"""
state = self.__dict__.copy()
state['_bpe'] = None
return state | Support multiprocessing by making it picklable | __getstate__ | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/yttm.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/yttm.py | Apache-2.0 |
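The natural counterpart of this `__getstate__` is a `__setstate__` that rebuilds the native handle from the saved model path after unpickling. A generic sketch of the pattern (names are illustrative):

```python
class PicklableTokenizer:
    def __init__(self, model_path):
        self._model_path = model_path
        self._bpe = self._load_bpe(model_path)   # unpicklable native handle

    def _load_bpe(self, path):
        ...   # construct the underlying BPE object from its model file

    def __getstate__(self):
        state = self.__dict__.copy()
        state['_bpe'] = None                     # drop the native handle
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._bpe = self._load_bpe(self._model_path)   # rebuild on demand
```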
def list_sources(embedding_name=None):
"""Get valid token embedding names and their pre-trained file names.
Parameters
----------
embedding_name : str or None, default None
The pre-trained token embedding name.
Returns
-------
dict or list:
A list of all the valid pre-train... | Get valid token embedding names and their pre-trained file names.
Parameters
----------
embedding_name : str or None, default None
The pre-trained token embedding name.
Returns
-------
dict or list:
A list of all the valid pre-trained token embedding file names (`source`) for t... | list_sources | python | dmlc/gluon-nlp | src/gluonnlp/embedding/embed_loader.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/embedding/embed_loader.py | Apache-2.0 |
def load_embeddings(vocab=None, pretrained_name_or_dir='glove.6B.50d', unknown='<unk>',
unk_method=None):
"""Load pretrained word embeddings for building an embedding matrix for a given Vocab.
This function supports loading GloVe, Word2Vec and FastText word embeddings from remote sources.
... | Load pretrained word embeddings for building an embedding matrix for a given Vocab.
This function supports loading GloVe, Word2Vec and FastText word embeddings from remote sources.
You can also load your own embedding file (txt with Word2Vec or GloVe format) from a given file path.
Glove: an unsupervised l... | load_embeddings | python | dmlc/gluon-nlp | src/gluonnlp/embedding/embed_loader.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/embedding/embed_loader.py | Apache-2.0 |
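Conceptually, `load_embeddings` fills a `(len(vocab), dim)` matrix with vectors for in-vocabulary words and hands unmatched rows to `unk_method`. A minimal sketch over a GloVe/word2vec-style text file, with illustrative names:

```python
import numpy as np

def build_embedding_matrix(vocab_tokens, emb_path, dim, unk_init=None):
    """Fill an embedding matrix from a 'token v1 v2 ...' text file."""
    token_to_idx = {tok: i for i, tok in enumerate(vocab_tokens)}
    matrix = np.zeros((len(vocab_tokens), dim), dtype=np.float32)
    hit = np.zeros(len(vocab_tokens), dtype=bool)
    with open(emb_path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            if len(parts) != dim + 1:
                continue                   # skip header / malformed lines
            idx = token_to_idx.get(parts[0])
            if idx is not None:
                matrix[idx] = np.asarray(parts[1:], dtype=np.float32)
                hit[idx] = True
    if unk_init is not None:               # e.g. a partial of np.random.normal
        matrix[~hit] = unk_init(size=((~hit).sum(), dim))
    return matrix
```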
def get_fasttext_model(model_name_or_dir='cc.en.300'):
""" Load fasttext model from the binaray file
This method will load fasttext model binaray file from a given file path or remote sources,
and return a `fasttext` model object. See `fasttext.cc` for more usage information.
Available sources:
['... | Load fasttext model from the binaray file
This method will load fasttext model binaray file from a given file path or remote sources,
and return a `fasttext` model object. See `fasttext.cc` for more usage information.
Available sources:
['wiki-news-300d-1M-subword', 'crawl-300d-2M-subword', 'cc.... | get_fasttext_model | python | dmlc/gluon-nlp | src/gluonnlp/embedding/embed_loader.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/embedding/embed_loader.py | Apache-2.0 |
def forward(self, data, valid_length):
"""
Generate the representation given the inputs.
This is used in training or fine-tuning a Bert model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout =... |
Generate the representation given the inputs.
This is used in training or fine-tuning a Bert model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_siz... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length=None):
"""Generate the representation given the inputs.
This is used in training or fine-tuning an Albert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
... | Generate the representation given the inputs.
This is used in training or fine-tuning an Albert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def get_initial_embedding(self, inputs, token_types=None):
"""Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
... | Get the initial token embeddings that consider the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
... | get_initial_embedding | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def apply_pooling(self, sequence):
"""Generate the representation given the inputs.
This is used for pre-training or fine-tuning a Bert model.
Get the first token of the whole sequence which is [CLS]
Parameters
----------
sequence
- layout = 'NT'
... | Generate the representation given the inputs.
This is used for pre-training or fine-tuning a Bert model.
Get the first token of the whole sequence which is [CLS]
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length, units... | apply_pooling | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
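For the 'NT' layout, taking the [CLS] representation is a single slice (the model then feeds it through a dense pooler with an activation). A NumPy sketch of both layouts:

```python
import numpy as np

def apply_cls_pooling(sequence, layout='NT'):
    """Take the first ([CLS]) token's embedding from the encoded sequence."""
    if layout == 'NT':      # (batch_size, seq_length, units)
        return sequence[:, 0, :]
    else:                   # 'TN': (seq_length, batch_size, units)
        return sequence[0, :, :]

seq = np.random.rand(2, 8, 4)                 # batch of 2, length 8
assert apply_cls_pooling(seq).shape == (2, 4)
```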
def from_cfg(cls, cfg, use_pooler=True, dtype=None) -> 'AlbertModel':
"""
Parameters
----------
cfg
use_pooler
Whether to use pooler
dtype
The dtype of the backbone model
Returns
-------
model
The created Alber... |
Parameters
----------
cfg
use_pooler
Whether to use pooler
dtype
The dtype of the backbone model
Returns
-------
model
The created AlbertModel
| from_cfg | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length,
masked_positions):
"""Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
... | Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The type of the token. For example, if... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def __init__(self, backbone_cfg,
weight_initializer=None,
bias_initializer=None):
"""
Parameters
----------
backbone_cfg
The cfg of the backbone model
weight_initializer
bias_initializer
"""
super().__init__()... |
Parameters
----------
backbone_cfg
The cfg of the backbone model
weight_initializer
bias_initializer
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length,
masked_positions):
"""Generate the representation given the inputs.
This is used in training or fine-tuning an Albert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch... | Generate the representation given the inputs.
This is used in training or fine-tuning an Albert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def get_pretrained_albert(model_name: str = 'google_albert_base_v2',
root: str = get_model_zoo_home_dir(),
load_backbone: bool = True,
load_mlm: bool = False)\
-> Tuple[CN, SentencepieceTokenizer, str, str]:
"""Get the pretrained Al... | Get the pretrained Albert weights
Parameters
----------
model_name
The name of the Albert model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_mlm
Whether to load the weights of MLM
Returns
-------
c... | get_pretrained_albert | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def __init__(self,
use_pooler: bool = False,
classifier_activation: bool = False,
extract_feature: bool = False,
pooler_activation='tanh',
**kwargs):
"""
Parameters
----------
use_pooler
Whe... |
Parameters
----------
use_pooler
Whether to use pooler
classifier_activation
extract_feature
Whether to extract the feature
pooler_activation
**kwargs
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/bart.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bart.py | Apache-2.0 |
def forward(self, src_data, src_valid_length, tgt_data, tgt_valid_length):
"""
Parameters
----------
src_data
- layout = 'NT'
Shape (batch_size, src_length)
- layout = 'TN'
Shape (src_length, batch_size)
src_valid_length
... |
Parameters
----------
src_data
- layout = 'NT'
Shape (batch_size, src_length)
- layout = 'TN'
Shape (src_length, batch_size)
src_valid_length
Shape (batch_size,)
tgt_data
- layout = 'NT'
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/bart.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bart.py | Apache-2.0 |
def apply_pooling(self, sequence, valid_length):
"""Generate the representation given the inputs.
This is used for pre-training or fine-tuning a BART model.
In BART, the pooled output is the embedding of the last token.
Parameters
----------
sequence
- layou... | Generate the representation given the inputs.
This is used for pre-training or fine-tuning a BART model.
In BART, the pooled output is the embedding of the last token.
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length,... | apply_pooling | python | dmlc/gluon-nlp | src/gluonnlp/models/bart.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bart.py | Apache-2.0 |
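Unlike BERT's first-token pooling, BART pools the last *valid* token, so `valid_length` is needed to skip padding. A NumPy sketch for the 'NT' layout:

```python
import numpy as np

def apply_last_token_pooling(sequence, valid_length):
    """Pick the embedding of the last non-padding token of each sample.

    sequence     : (batch_size, seq_length, units)
    valid_length : (batch_size,) number of real tokens per sample
    """
    batch_idx = np.arange(sequence.shape[0])
    return sequence[batch_idx, valid_length - 1, :]   # (batch_size, units)

seq = np.arange(2 * 4 * 3).reshape(2, 4, 3).astype(np.float32)
print(apply_last_token_pooling(seq, np.array([2, 4])))  # rows at t=1 and t=3
```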
def from_cfg(cls, cfg,
dtype=None,
extract_feature=False,
use_pooler=True,
classifier_activation=False):
"""
Parameters
----------
cfg
The configuration
dtype
Data type of the loaded conf... |
Parameters
----------
cfg
The configuration
dtype
Data type of the loaded config
extract_feature
Whether to only extract feature.
If so, the output of the layer will be contextual embeddings or the
contextual embedding... | from_cfg | python | dmlc/gluon-nlp | src/gluonnlp/models/bart.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bart.py | Apache-2.0 |
def get_pretrained_bart(model_name: str = 'fairseq_bart_base',
root: str = get_model_zoo_home_dir(),
load_backbone: bool = True) \
-> Tuple[CN, HuggingFaceByteBPETokenizer, str, List]:
"""Get the pretrained RoBERTa weights
Parameters
----------
mo... | Get the pretrained RoBERTa weights
Parameters
----------
model_name
The name of the RoBERTa model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
Returns
-------
cfg
Network configuration
tokenizer
... | get_pretrained_bart | python | dmlc/gluon-nlp | src/gluonnlp/models/bart.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bart.py | Apache-2.0 |
def get_backbone(model_name: str,
root: str = get_model_zoo_home_dir(),
**kwargs) -> Tuple['Block', str, BaseTokenizer, str, List]:
"""Get the backbone network
Parameters
----------
model_name
The name of the pretrained model
root
Downloaded directo... | Get the backbone network
Parameters
----------
model_name
The name of the pretrained model
root
Downloaded directory of the model zoo
Returns
-------
model_cls
The class to construct the backbone network
cfg
Path to the config file of the backbone
to... | get_backbone | python | dmlc/gluon-nlp | src/gluonnlp/models/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/base.py | Apache-2.0 |
def forward(self, data, valid_length):
"""
Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout =... |
Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_siz... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length):
# pylint: disable=arguments-differ
"""Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
inputs
- layout = 'NT'
Shape... | Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def get_initial_embedding(self, inputs, token_types=None):
"""Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
... | Get the initial token embeddings that consider the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
... | get_initial_embedding | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
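Across these backbones the initial embedding is the sum of word, token-type, and learned positional embeddings (followed by layer norm and dropout in the model). A sketch for the 'NT' layout with illustrative embedding tables:

```python
import numpy as np

def initial_embedding(inputs, word_emb, type_emb, pos_emb, token_types=None):
    """Sum word, segment, and positional embeddings for an 'NT' (B, T) batch."""
    seq_length = inputs.shape[1]
    if token_types is None:                # default: all tokens are segment 0
        token_types = np.zeros_like(inputs)
    return (word_emb[inputs]               # (B, T, C) token vectors
            + type_emb[token_types]        # sentence A/B segment vectors
            + pos_emb[None, :seq_length])  # positions broadcast over the batch
```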
def apply_pooling(self, sequence):
"""Generate the representation given the inputs.
This is used for pre-training or fine-tuning a bert model.
Get the first token of the whole sequence which is [CLS].
Parameters
----------
sequence
- layout = 'NT'
... | Generate the representation given the inputs.
This is used for pre-training or fine-tuning a bert model.
Get the first token of the whole sequence which is [CLS].
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length, unit... | apply_pooling | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def from_cfg(cls, cfg, use_pooler=True, dtype=None) -> 'BertModel':
"""
Parameters
----------
cfg
Configuration
use_pooler
Whether to output the pooled feature
dtype
data type of the model
Returns
-------
ret
... |
Parameters
----------
cfg
Configuration
use_pooler
Whether to output the pooled feature
dtype
data type of the model
Returns
-------
ret
The constructed BertModel
| from_cfg | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length,
masked_positions):
"""Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
... | Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def __init__(self, backbone_cfg,
weight_initializer=None,
bias_initializer=None):
"""
Parameters
----------
backbone_cfg
The cfg of the backbone model
weight_initializer
bias_initializer
"""
super().__init__()... |
Parameters
----------
backbone_cfg
The cfg of the backbone model
weight_initializer
bias_initializer
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length,
masked_positions):
"""Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (... | Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def get_pretrained_bert(model_name: str = 'google_en_cased_bert_base',
root: str = get_model_zoo_home_dir(),
load_backbone: bool = True,
load_mlm: bool = False)\
-> Tuple[CN, HuggingFaceWordPieceTokenizer, str, str]:
"""Get the pretrained... | Get the pretrained bert weights
Parameters
----------
model_name
The name of the bert model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_mlm
Whether to load the weights of MLM
Returns
-------
cfg
... | get_pretrained_bert | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def get_generator_cfg(model_config):
"""
Get the generator configuration from the Electra model config.
The size of the generator is usually smaller than that of the discriminator, but the two are the same in electra small,
which is a discrepancy between the source code and the original paper.
"""
generator_cfg = model_config.clone()... |
Get the generator configuration from the Electra model config.
The size of the generator is usually smaller than that of the discriminator, but the two are the same in electra small,
which is a discrepancy between the source code and the original paper.
| get_generator_cfg | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def __init__(self, units=512,
hidden_size=2048,
num_layers=6,
num_heads=8,
attention_dropout_prob=0.,
hidden_dropout_prob=0.,
output_attention=False,
dtype='float32',
output_all_encodi... |
Parameters
----------
units
The number of units
hidden_size
The hidden size
num_layers
Number of layers
num_heads
Number of heads
attention_dropout_prob
Dropout probability of the attention layer
... | __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, data, valid_length):
"""Generate the representation given the inputs.
This is used in training or fine-tuning an Electra model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
... | Generate the representation given the inputs.
This is used in training or fine-tuning an Electra model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length=None):
"""Generate the representation given the inputs.
This is used in training or fine-tuning an Electra model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
... | Generate the representation given the inputs.
This is used in training or fine-tuning an Electra model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def get_initial_embedding(self, inputs, token_types=None):
"""Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
... | Get the initial token embeddings that consider the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
... | get_initial_embedding | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def apply_layerwise_decay(self, layerwise_decay: int,
not_included: Optional[List[str]] = None,
num_additional_layers: int = 2):
"""Apply the layer-wise gradient decay
.. math::
lr = lr * layerwise_decay^(max_depth - layer_depth)
... | Apply the layer-wise gradient decay
.. math::
lr = lr * layerwise_decay^(max_depth - layer_depth)
Parameters
----------
layerwise_decay
Power rate of the layer-wise decay
not_included
A list of parameter names that are not included in the layer-w... | apply_layerwise_decay | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
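A sketch of the decay schedule itself: each depth gets the multiplier `layerwise_decay ** (max_depth - depth)`, so shallow layers (embeddings) train much more slowly than the task head:

```python
def layerwise_lr_mults(num_layers, layerwise_decay, num_additional_layers=2):
    """lr_mult per depth: lr * decay**(max_depth - depth).

    Depth 0 is the embedding; the task head sits `num_additional_layers`
    above the top encoder layer and keeps the full learning rate.
    """
    max_depth = num_layers + num_additional_layers
    return {depth: layerwise_decay ** (max_depth - depth)
            for depth in range(max_depth + 1)}

mults = layerwise_lr_mults(num_layers=12, layerwise_decay=0.8)
# embeddings train ~0.8**14 ≈ 0.044x as fast as the classification head
```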
def frozen_params(self, untunable_depth: int, not_included: Optional[List[str]] = None):
"""Froze part of parameters according to layer depth.
That is, make all layer that shallower than `untunable_depth` untunable
to stop the gradient backward computation and accelerate the training.
... | Froze part of parameters according to layer depth.
That is, make all layer that shallower than `untunable_depth` untunable
to stop the gradient backward computation and accelerate the training.
Parameters
----------
untunable_depth
the depth of the neural network st... | frozen_params | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length):
"""Getting the scores of the replaced token detection of the whole sentence
based on the corrupted tokens produced from a generator.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, ... | Getting the scores of the replaced token detection of the whole sentence
based on the corrupted tokens produced from a generator.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (se... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def __init__(self, backbone_cfg,
weight_initializer=None,
bias_initializer=None):
"""
Parameters
----------
backbone_cfg
Configuration of the backbone model
weight_initializer
bias_initializer
"""
super().__in... |
Parameters
----------
backbone_cfg
Configuration of the backbone model
weight_initializer
bias_initializer
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def tie_embeddings(self, word_embed_params=None,
token_type_embed_params=None,
token_pos_embed_params=None,
embed_layer_norm_params=None):
"""Tie the embedding layers between the backbone and the MLM decoder
Parameters
-------... | Tie the embedding layers between the backbone and the MLM decoder
Parameters
----------
word_embed_params
token_type_embed_params
token_pos_embed_params
embed_layer_norm_params
| tie_embeddings | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length, masked_positions):
"""Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, b... | Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we ... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def __init__(self,
disc_cfg,
uniform_generator=False,
tied_generator=False,
tied_embeddings=True,
disallow_correct=False,
temperature=1.0,
gumbel_eps=1E-9,
dtype='float32',
... |
Parameters
----------
disc_cfg :
Config for discriminator model including scaled size for generator
uniform_generator :
Whether to get a generator with uniform weights, the mlm_scores from
which are totally random. In this case, a discriminator learn...
def forward(self, inputs, token_types, valid_length,
original_tokens, masked_positions):
"""Getting the mlm scores of each masked positions from a generator,
then produces the corrupted tokens sampling from a gumbel distribution.
We also get the ground-truth and scores of the rep... | Getting the mlm scores of each masked positions from a generator,
then produces the corrupted tokens sampling from a gumbel distribution.
We also get the ground-truth and scores of the replaced token detection
which is output by a discriminator. The ground-truth is an array with same
sha... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def get_corrupted_tokens(self, inputs, original_tokens, masked_positions, logits):
"""
Sample from the generator to create corrupted input.
Parameters
----------
inputs
The masked input
- layout = 'NT'
Shape (batch_size, seq_length)
... |
Sample from the generator to create corrupted input.
Parameters
----------
inputs
The masked input
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
original_tokens... | get_corrupted_tokens | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
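The corrupted tokens are drawn with the Gumbel-max trick, which turns sampling from the generator's logits into an argmax using the `temperature` and `gumbel_eps` hyperparameters listed above. A NumPy sketch:

```python
import numpy as np

def gumbel_sample(logits, temperature=1.0, eps=1e-9):
    """Sample token ids from logits via the Gumbel-max trick."""
    u = np.random.uniform(size=logits.shape)
    gumbel = -np.log(-np.log(u + eps) + eps)           # Gumbel(0, 1) noise
    return np.argmax((logits + gumbel) / temperature, axis=-1)

corrupted = gumbel_sample(np.random.randn(4, 30522))   # (num_masked,) token ids
```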
def get_pretrained_electra(model_name: str = 'google_electra_small',
root: str = get_model_zoo_home_dir(),
load_backbone: bool = True,
load_disc: bool = False,
load_gen: bool = False) \
-> Tuple[CN, Huggi... | Get the pretrained Electra weights
Parameters
----------
model_name
The name of the Electra model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_disc
Whether to load the weights of the discriminator
load_gen
... | get_pretrained_electra | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, x, layer_states):
"""
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
layer_states
- layout = 'NT'
... |
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
layer_states
- layout = 'NT'
Shape (2, batch_size, prev_len, C_in)
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
def forward(self, x, layer_states):
"""
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
layer_states
- layout = 'NT'
... |
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
layer_states
- layout = 'NT'
Shape (2, batch_size, prev_len, C_in)
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
def forward(self, x, states):
"""
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
states
The previous states
- layout = 'NT'
... |
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
states
The previous states
- layout = 'NT'
Shape (num_layers, 2, batch... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
def get_initial_embedding(self, inputs, prev_len):
"""Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
... | Get the initial token embeddings that consider the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
prev_len
... | get_initial_embedding | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
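The `prev_len` argument matters because, during incremental decoding, the new tokens do not sit at position 0: their positional embeddings must start where the cached prefix ends. A minimal sketch of that offset, assuming learned position embeddings (names are illustrative):

```python
import numpy as np

def initial_embedding(inputs, prev_len, token_embed, pos_embed):
    """inputs: (batch_size, seq_length) token ids, 'NT' layout assumed."""
    batch_size, seq_length = inputs.shape
    positions = np.arange(prev_len, prev_len + seq_length)  # shifted past the cache
    return token_embed[inputs] + pos_embed[positions]       # broadcast over batch

vocab, max_len, units = 100, 64, 16
tok = np.random.randn(vocab, units)
pos = np.random.randn(max_len, units)
out = initial_embedding(np.zeros((2, 3), dtype=np.int64), prev_len=5,
                        token_embed=tok, pos_embed=pos)
assert out.shape == (2, 3, units)
```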
def init_states(self, batch_size, ctx, dtype=None):
"""Initialize the states required for incremental decoding
Returns
-------
init_states
- layout = 'NT'
Shape (num_layers, 2, batch_size, 0, C_in)
- layout = 'TN'
Shape (num_layers... | Initialize the states required for incremental decoding
Returns
-------
init_states
- layout = 'NT'
Shape (num_layers, 2, batch_size, 0, C_in)
- layout = 'TN'
Shape (num_layers, 2, 0, batch_size, C_in)
| init_states | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
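Note the zero-sized time axis in both layouts: before any token has been decoded the cache is empty, so concatenating the first step's keys and values along that axis "just works". A sketch mirroring the documented shapes (NumPy standing in for mx.np):

```python
import numpy as np

def init_states(num_layers, batch_size, units, layout='NT'):
    if layout == 'NT':
        return np.zeros((num_layers, 2, batch_size, 0, units), dtype=np.float32)
    return np.zeros((num_layers, 2, 0, batch_size, units), dtype=np.float32)

assert init_states(12, 4, 768).shape == (12, 2, 4, 0, 768)
```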
def forward(self, inputs, states):
"""Getting the logits. This can be used for language modeling.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
s... | Getting the logits. This can be used for language modeling.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
states
The states.
- l... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
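Put together, `init_states` plus this logits-returning `forward` support a simple greedy decoding loop: feed one token, take the argmax, carry the updated states forward. A hedged pseudo-usage sketch (`model` is assumed to follow the interface documented above; real inputs would be mx.np arrays on the right context):

```python
import numpy as np

def greedy_decode(model, start_id, max_steps, batch_size=1, ctx=None):
    """Greedy decoding against the documented (logits, new_states) interface."""
    states = model.init_states(batch_size, ctx)       # empty cache, prev_len = 0
    tokens = np.full((batch_size, 1), start_id)       # 'NT' layout assumed
    generated = []
    for _ in range(max_steps):
        logits, states = model(tokens, states)        # logits: (batch, 1, vocab)
        tokens = logits[:, -1:].argmax(axis=-1)       # next token per sequence
        generated.append(tokens)
    return np.concatenate(generated, axis=1)          # (batch, max_steps)
```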
def get_pretrained_gpt2(model_name: str = 'gpt2_124M',
root: str = get_model_zoo_home_dir(),
load_backbone: bool = True,
load_lm: bool = False)\
-> Tuple[CN, HuggingFaceByteBPETokenizer, str, str]:
"""Get the pretrained GPT-2 weights
... | Get the pretrained GPT-2 weights
Parameters
----------
model_name
The name of the GPT-2 model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_lm
Whether to load the weights of LM
Returns
-------
cfg
... | get_pretrained_gpt2 | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
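Here the return annotation is complete (`Tuple[CN, HuggingFaceByteBPETokenizer, str, str]`), so a usage sketch is on firmer ground: the loader yields the config, a byte-level BPE tokenizer, and local paths to the downloaded parameter files. The tokenizer call below assumes the usual GluonNLP `encode` signature with an output-type argument:

```python
from gluonnlp.models.gpt2 import get_pretrained_gpt2

cfg, tokenizer, backbone_params_path, lm_params_path = get_pretrained_gpt2(
    'gpt2_124M', load_backbone=True, load_lm=True)
token_ids = tokenizer.encode('GluonNLP is great!', int)  # int output type assumed
```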
def __init__(self,
use_bottleneck: bool = True,
units: int = 512,
real_units: int = 128,
hidden_size: int = 2048,
num_heads: int = 8,
num_stacked_ffn: int = 1,
bottleneck_strategy: str = 'qk_sharing',
... |
Parameters
----------
use_bottleneck
Whether to use the bottleneck layer.
units
size of inter-bottleneck
real_units
size of intra-bottleneck
hidden_size
size of feed-forward network
num_heads
num_stacked_ff... | __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
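The distinction in this row between `units` (the inter-block, bottleneck size) and `real_units` (the intra-block size the attention and stacked FFNs actually run at) is the core MobileBERT trick: project down, do the expensive work at the narrow width, project back up. A dimensional sketch with illustrative weights:

```python
import numpy as np

units, real_units, seq, batch = 512, 128, 32, 4
w_in = np.random.randn(units, real_units)    # bottleneck input projection
w_out = np.random.randn(real_units, units)   # bottleneck output projection

x = np.random.randn(batch, seq, units)       # inter-block representation
h = x @ w_in                                 # (batch, seq, real_units): narrow width
# ... attention + num_stacked_ffn feed-forward layers operate at real_units ...
y = h @ w_out                                # back to (batch, seq, units)
assert y.shape == (batch, seq, units)
```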
def forward(self, data, attn_mask):
"""
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
attn_mask
The attention mask
... |
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
attn_mask
The attention mask
Shape (batch_size, seq_length, seq_length)
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
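The `(batch_size, seq_length, seq_length)` attention mask in this row is typically derived from per-sequence valid lengths: any query/key pair involving a padded position is masked out. A NumPy sketch of that construction (the real GluonNLP helper may differ; this is illustrative):

```python
import numpy as np

def make_attn_mask(valid_length, seq_length):
    """valid_length: (batch_size,) ints -> (batch, seq, seq) boolean mask."""
    steps = np.arange(seq_length)
    valid = steps[None, :] < valid_length[:, None]   # (batch, seq) per-token validity
    return valid[:, None, :] & valid[:, :, None]     # mask both query and key axes

mask = make_attn_mask(np.array([3, 5]), seq_length=5)
assert mask.shape == (2, 5, 5)
assert not mask[0, 0, 3]   # key position 3 is padding for the first sequence
```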
def forward(self, data, valid_length):
"""
Generate the representation given the inputs.
This is used in training or fine-tuning a mobile bert model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- l... |
Generate the representation given the inputs.
This is used in training or fine-tuning a mobile bert model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, ba... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length):
# pylint: disable=arguments-differ
"""Generate the representation given the inputs.
This is used in training or fine-tuning a mobile bert model.
Parameters
----------
inputs
- layout = 'NT'
... | Generate the representation given the inputs.
This is used in training or fine-tuning a mobile bert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
... | forward | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
def get_initial_embedding(self, inputs, token_types=None):
"""Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
... | Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
... | get_initial_embedding | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
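Relative to the GPT-2 `get_initial_embedding` earlier, the BERT-style version adds a segment (token-type) embedding. The sketch below assumes type 0 as the default when `token_types` is omitted, which is the usual BERT convention (the truncated docstring does not confirm it):

```python
import numpy as np

def initial_embedding(inputs, token_embed, type_embed, pos_embed, token_types=None):
    """inputs: (batch, seq) token ids, 'NT' layout assumed."""
    batch, seq = inputs.shape
    if token_types is None:
        token_types = np.zeros_like(inputs)          # assumed default: all type 0
    return token_embed[inputs] + type_embed[token_types] + pos_embed[np.arange(seq)]

vocab, types, max_len, units = 100, 2, 64, 16
emb = initial_embedding(np.zeros((2, 3), np.int64),
                        np.random.randn(vocab, units),
                        np.random.randn(types, units),
                        np.random.randn(max_len, units))
assert emb.shape == (2, 3, units)
```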
def apply_pooling(self, sequence):
"""Generate the representation given the inputs.
This is used for pre-training or fine-tuning a mobile bert model.
Get the first token of the whole sequence which is [CLS]
Parameters
----------
sequence
- layout = 'NT'
... | Generate the representation given the inputs.
This is used for pre-training or fine-tuning a mobile bert model.
Get the first token of the whole sequence which is [CLS]
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length... | apply_pooling | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
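`apply_pooling` simply slices out the first time step, which holds the `[CLS]` token; under the 'TN' layout the time axis comes first, so the slice moves accordingly. A one-liner for each layout:

```python
import numpy as np

seq_nt = np.random.randn(4, 32, 768)   # (batch, seq, units), layout 'NT'
seq_tn = np.swapaxes(seq_nt, 0, 1)     # layout 'TN'

cls_nt = seq_nt[:, 0, :]               # (batch, units): first token per sequence
cls_tn = seq_tn[0, :, :]               # same vectors from the time-major layout
assert np.allclose(cls_nt, cls_tn)
```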