| body (string, 26-98.2k chars) | body_hash (int64) | docstring (string, 1-16.8k chars) | path (string, 5-230 chars) | name (string, 1-96 chars) | repository_name (string, 7-89 chars) | lang (1 value: python) | body_without_docstring (string, 20-98.2k chars) |
|---|---|---|---|---|---|---|---|
@callback
def record_call(service):
'Add recorded event to set.'
calls.append(service) | 7,588,650,191,437,550,000 | Add recorded event to set. | tests/components/test_script.py | record_call | 27tech/home-assistant | python | @callback
def record_call(service):
calls.append(service) |
def tokenize(text: str) -> Iterator[str]:
'return iterable of uppercased words'
for match in RE_WORD.finditer(text):
(yield match.group().upper()) | 8,251,342,383,710,335,000 | return iterable of uppercased words | 08-def-type-hints/charindex.py | tokenize | eumiro/example-code-2e | python | def tokenize(text: str) -> Iterator[str]:
for match in RE_WORD.finditer(text):
(yield match.group().upper()) |
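The `tokenize` row above references a module-level `RE_WORD` pattern that the table does not show; a minimal runnable sketch, assuming `RE_WORD` simply matches runs of word characters (the actual pattern lives elsewhere in `charindex.py`):

```python
import re
from typing import Iterator

# Assumption: RE_WORD matches runs of word characters; the real pattern
# in the source file is not shown in this table.
RE_WORD = re.compile(r'\w+')

def tokenize(text: str) -> Iterator[str]:
    'return iterable of uppercased words'
    for match in RE_WORD.finditer(text):
        yield match.group().upper()
```

With that assumption, `list(tokenize('The quick, brown fox'))` yields `['THE', 'QUICK', 'BROWN', 'FOX']`.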
def transform(self, payload: Dict[(str, Any)], metadata: Optional[Dict[(str, Any)]]=None):
'\n The mapping is done in 4 major steps:\n\n 1. Flattens the data.\n 2. Metadata Replacers:\n Some key mapping parameters are specified in the metadata. Keys that have placeholders like\n ... | -5,254,253,032,800,944,000 | The mapping is done in 4 major steps:
1. Flattens the data.
2. Metadata Replacers:
Some key mapping parameters are specified in the metadata. Keys that have placeholders like
${metadata_key} will be substituted by values on the specified metadata key.
3. Map Data.
In this moment the keys of the mapping... | transformer/transformers/map_keys.py | transform | santunioni/Transformer | python | def transform(self, payload: Dict[(str, Any)], metadata: Optional[Dict[(str, Any)]]=None):
'\n The mapping is done in 4 major steps:\n\n 1. Flattens the data.\n 2. Metadata Replacers:\n Some key mapping parameters are specified in the metadata. Keys that have placeholders like\n ... |
def nmf(Y, A, S, W=None, prox_A=operators.prox_plus, prox_S=operators.prox_plus, proxs_g=None, steps_g=None, Ls=None, slack=0.9, update_order=None, steps_g_update='steps_f', max_iter=1000, e_rel=0.001, e_abs=0, traceback=None):
'Non-negative matrix factorization.\n\n This method solves the NMF problem\n m... | -1,810,764,077,884,436,500 | Non-negative matrix factorization.
This method solves the NMF problem
minimize || Y - AS ||_2^2
under an arbitrary number of constraints on A and/or S.
Args:
Y: target matrix MxN
A: initial amplitude matrix MxK, will be updated
S: initial source matrix KxN, will be updated
W: (optional weight mat... | proxmin/nmf.py | nmf | herjy/proxmin | python | def nmf(Y, A, S, W=None, prox_A=operators.prox_plus, prox_S=operators.prox_plus, proxs_g=None, steps_g=None, Ls=None, slack=0.9, update_order=None, steps_g_update='steps_f', max_iter=1000, e_rel=0.001, e_abs=0, traceback=None):
'Non-negative matrix factorization.\n\n This method solves the NMF problem\n m... |
def __init__(self, WA=1, WS=1, slack=0.1, max_stride=100):
'Helper class to compute the Lipschitz constants of grad f.\n\n The __call__ function computes the spectral norms of A or S, which\n determine the Lipschitz constant of the respective update steps.\n\n If a weight matrix is used, the ste... | 6,426,688,991,894,909,000 | Helper class to compute the Lipschitz constants of grad f.
The __call__ function computes the spectral norms of A or S, which
determine the Lipschitz constant of the respective update steps.
If a weight matrix is used, the stepsize will be upper bounded by
assuming the maximum value of the weights. In the case of vary... | proxmin/nmf.py | __init__ | herjy/proxmin | python | def __init__(self, WA=1, WS=1, slack=0.1, max_stride=100):
'Helper class to compute the Lipschitz constants of grad f.\n\n The __call__ function computes the spectral norms of A or S, which\n determine the Lipschitz constant of the respective update steps.\n\n If a weight matrix is used, the ste... |
def __init__(self, encoder_type=None, encoder_name=None, decoder_name=None, encoder_decoder_type=None, encoder_decoder_name=None, config=None, args=None, use_cuda=True, cuda_device=(- 1), **kwargs):
'\n Initializes a Seq2SeqModel.\n\n Args:\n encoder_type (optional): The type of model to us... | 5,702,111,848,489,322,000 | Initializes a Seq2SeqModel.
Args:
encoder_type (optional): The type of model to use as the encoder.
encoder_name (optional): The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model fi... | simpletransformers/seq2seq/seq2seq_model.py | __init__ | AliOsm/simpletransformers | python | def __init__(self, encoder_type=None, encoder_name=None, decoder_name=None, encoder_decoder_type=None, encoder_decoder_name=None, config=None, args=None, use_cuda=True, cuda_device=(- 1), **kwargs):
'\n Initializes a Seq2SeqModel.\n\n Args:\n encoder_type (optional): The type of model to us... |
def train_model(self, train_data, output_dir=None, show_running_loss=True, args=None, eval_data=None, verbose=True, **kwargs):
"\n Trains the model using 'train_data'\n\n Args:\n train_data: Pandas DataFrame containing the 2 columns - `input_text`, `target_text`.\n - ... | -3,020,603,917,038,356,000 | Trains the model using 'train_data'
Args:
train_data: Pandas DataFrame containing the 2 columns - `input_text`, `target_text`.
- `input_text`: The input text sequence.
- `target_text`: The target text sequence
output_dir: The directory where model files will be saved. If not giv... | simpletransformers/seq2seq/seq2seq_model.py | train_model | AliOsm/simpletransformers | python | def train_model(self, train_data, output_dir=None, show_running_loss=True, args=None, eval_data=None, verbose=True, **kwargs):
"\n Trains the model using 'train_data'\n\n Args:\n train_data: Pandas DataFrame containing the 2 columns - `input_text`, `target_text`.\n - ... |
def train(self, train_dataset, output_dir, show_running_loss=True, eval_data=None, verbose=True, **kwargs):
'\n Trains the model on train_dataset.\n\n Utility function to be used by the train_model() method. Not intended to be used directly.\n '
model = self.model
args = self.args
t... | -7,693,660,778,794,015,000 | Trains the model on train_dataset.
Utility function to be used by the train_model() method. Not intended to be used directly. | simpletransformers/seq2seq/seq2seq_model.py | train | AliOsm/simpletransformers | python | def train(self, train_dataset, output_dir, show_running_loss=True, eval_data=None, verbose=True, **kwargs):
'\n Trains the model on train_dataset.\n\n Utility function to be used by the train_model() method. Not intended to be used directly.\n '
model = self.model
args = self.args
t... |
def eval_model(self, eval_data, output_dir=None, verbose=True, silent=False, **kwargs):
'\n Evaluates the model on eval_data. Saves results to output_dir.\n\n Args:\n eval_data: Pandas DataFrame containing the 2 columns - `input_text`, `target_text`.\n - `input_text`:... | -2,470,111,290,407,275,000 | Evaluates the model on eval_data. Saves results to output_dir.
Args:
eval_data: Pandas DataFrame containing the 2 columns - `input_text`, `target_text`.
- `input_text`: The input text sequence.
- `target_text`: The target text sequence.
output_dir: The directory where model file... | simpletransformers/seq2seq/seq2seq_model.py | eval_model | AliOsm/simpletransformers | python | def eval_model(self, eval_data, output_dir=None, verbose=True, silent=False, **kwargs):
'\n Evaluates the model on eval_data. Saves results to output_dir.\n\n Args:\n eval_data: Pandas DataFrame containing the 2 columns - `input_text`, `target_text`.\n - `input_text`:... |
def evaluate(self, eval_dataset, output_dir, verbose=True, silent=False, **kwargs):
'\n Evaluates the model on eval_dataset.\n\n Utility function to be used by the eval_model() method. Not intended to be used directly.\n '
model = self.model
args = self.args
eval_output_dir = output... | -3,573,907,752,735,053,000 | Evaluates the model on eval_dataset.
Utility function to be used by the eval_model() method. Not intended to be used directly. | simpletransformers/seq2seq/seq2seq_model.py | evaluate | AliOsm/simpletransformers | python | def evaluate(self, eval_dataset, output_dir, verbose=True, silent=False, **kwargs):
'\n Evaluates the model on eval_dataset.\n\n Utility function to be used by the eval_model() method. Not intended to be used directly.\n '
model = self.model
args = self.args
eval_output_dir = output... |
def predict(self, to_predict):
'\n Performs predictions on a list of text.\n\n Args:\n to_predict: A python list of text (str) to be sent to the model for prediction. Note that the prefix should be prepended to the text.\n\n Returns:\n preds: A python list of the generated... | 7,405,487,662,115,485,000 | Performs predictions on a list of text.
Args:
to_predict: A python list of text (str) to be sent to the model for prediction. Note that the prefix should be prepended to the text.
Returns:
preds: A python list of the generated sequences. | simpletransformers/seq2seq/seq2seq_model.py | predict | AliOsm/simpletransformers | python | def predict(self, to_predict):
'\n Performs predictions on a list of text.\n\n Args:\n to_predict: A python list of text (str) to be sent to the model for prediction. Note that the prefix should be prepended to the text.\n\n Returns:\n preds: A python list of the generated... |
def compute_metrics(self, labels, preds, **kwargs):
'\n Computes the evaluation metrics for the model predictions.\n\n Args:\n labels: List of target sequences\n preds: List of model generated outputs\n **kwargs: Custom metrics that should be used. Pass in the metrics ... | 5,236,419,145,034,337,000 | Computes the evaluation metrics for the model predictions.
Args:
labels: List of target sequences
preds: List of model generated outputs
**kwargs: Custom metrics that should be used. Pass in the metrics as keyword arguments (name of metric: function to use).
A metric function should take in... | simpletransformers/seq2seq/seq2seq_model.py | compute_metrics | AliOsm/simpletransformers | python | def compute_metrics(self, labels, preds, **kwargs):
'\n Computes the evaluation metrics for the model predictions.\n\n Args:\n labels: List of target sequences\n preds: List of model generated outputs\n **kwargs: Custom metrics that should be used. Pass in the metrics ... |
def load_and_cache_examples(self, data, evaluate=False, no_cache=False, verbose=True, silent=False):
'\n Creates a T5Dataset from data.\n\n Utility function for train() and eval() methods. Not intended to be used directly.\n '
encoder_tokenizer = self.encoder_tokenizer
decoder_tokenizer... | -7,127,475,467,697,222,000 | Creates a T5Dataset from data.
Utility function for train() and eval() methods. Not intended to be used directly. | simpletransformers/seq2seq/seq2seq_model.py | load_and_cache_examples | AliOsm/simpletransformers | python | def load_and_cache_examples(self, data, evaluate=False, no_cache=False, verbose=True, silent=False):
'\n Creates a T5Dataset from data.\n\n Utility function for train() and eval() methods. Not intended to be used directly.\n '
encoder_tokenizer = self.encoder_tokenizer
decoder_tokenizer... |
def test_create_user_with_email_successful(self):
'Test that creating a user with an email succeeds'
email = 'example@example.com'
password = 'testpassword'
user = get_user_model().objects.create_user(email=email, password=password)
self.assertEqual(user.email, email)
self.assertTrue(user.check_password(password)) | -1,282,299,646,816,919,600 | Test that creating a user with an email succeeds | shoppingmall/core/tests/test_models.py | test_create_user_with_email_successful | jacobjlee/simple-shopping | python | def test_create_user_with_email_successful(self):
email = 'example@example.com'
password = 'testpassword'
user = get_user_model().objects.create_user(email=email, password=password)
self.assertEqual(user.email, email)
self.assertTrue(user.check_password(password)) |
def test_new_user_email_normalized(self):
'Test that the email is normalized to a standard form'
email = 'example@example.com'
user = get_user_model().objects.create_user(email, 'testpw123')
self.assertEqual(user.email, email.lower()) | 5,622,624,492,197,440,000 | Test that the email is normalized to a standard form | shoppingmall/core/tests/test_models.py | test_new_user_email_normalized | jacobjlee/simple-shopping | python | def test_new_user_email_normalized(self):
email = 'example@example.com'
user = get_user_model().objects.create_user(email, 'testpw123')
self.assertEqual(user.email, email.lower()) |
def test_new_user_missing_email(self):
'Test that an error is raised when no email is provided'
with self.assertRaises(ValueError):
get_user_model().objects.create_user(None, 'testpw123') | -4,798,733,096,387,014,000 | Test that an error is raised when no email is provided | shoppingmall/core/tests/test_models.py | test_new_user_missing_email | jacobjlee/simple-shopping | python | def test_new_user_missing_email(self):
with self.assertRaises(ValueError):
get_user_model().objects.create_user(None, 'testpw123') |
def test_create_new_superuser(self):
'Test creating a superuser'
user = get_user_model().objects.create_superuser('example@example.com', 'testpw123')
self.assertTrue(user.is_superuser)
self.assertTrue(user.is_staff) | 1,265,609,921,803,198,200 | Test creating a superuser | shoppingmall/core/tests/test_models.py | test_create_new_superuser | jacobjlee/simple-shopping | python | def test_create_new_superuser(self):
user = get_user_model().objects.create_superuser('example@example.com', 'testpw123')
self.assertTrue(user.is_superuser)
self.assertTrue(user.is_staff) |
def xor_string(hash1, hash2, hash_size):
'Encrypt/Decrypt function used for password encryption in\n authentication, using a simple XOR.\n\n Args:\n hash1 (str): The first hash.\n hash2 (str): The second hash.\n\n Returns:\n str: A string with the xor applied.\n '
xored = [(h1 ^... | -3,380,580,171,674,236,400 | Encrypt/Decrypt function used for password encryption in
authentication, using a simple XOR.
Args:
hash1 (str): The first hash.
hash2 (str): The second hash.
Returns:
str: A string with the xor applied. | backend/env/Lib/site-packages/mysqlx/authentication.py | xor_string | Abdullah9340/Geese-Migration | python | def xor_string(hash1, hash2, hash_size):
'Encrypt/Decrypt function used for password encryption in\n authentication, using a simple XOR.\n\n Args:\n hash1 (str): The first hash.\n hash2 (str): The second hash.\n\n Returns:\n str: A string with the xor applied.\n '
xored = [(h1 ^... |
def name(self):
'Returns the plugin name.\n\n Returns:\n str: The plugin name.\n '
raise NotImplementedError | 6,467,344,744,560,710,000 | Returns the plugin name.
Returns:
str: The plugin name. | backend/env/Lib/site-packages/mysqlx/authentication.py | name | Abdullah9340/Geese-Migration | python | def name(self):
'Returns the plugin name.\n\n Returns:\n str: The plugin name.\n '
raise NotImplementedError |
def auth_name(self):
'Returns the authentication name.\n\n Returns:\n str: The authentication name.\n '
raise NotImplementedError | 6,014,413,375,730,915,000 | Returns the authentication name.
Returns:
str: The authentication name. | backend/env/Lib/site-packages/mysqlx/authentication.py | auth_name | Abdullah9340/Geese-Migration | python | def auth_name(self):
'Returns the authentication name.\n\n Returns:\n str: The authentication name.\n '
raise NotImplementedError |
def name(self):
'Returns the plugin name.\n\n Returns:\n str: The plugin name.\n '
return 'MySQL 4.1 Authentication Plugin' | -5,534,950,544,939,674,000 | Returns the plugin name.
Returns:
str: The plugin name. | backend/env/Lib/site-packages/mysqlx/authentication.py | name | Abdullah9340/Geese-Migration | python | def name(self):
'Returns the plugin name.\n\n Returns:\n str: The plugin name.\n '
return 'MySQL 4.1 Authentication Plugin' |
def auth_name(self):
'Returns the authentication name.\n\n Returns:\n str: The authentication name.\n '
return 'MYSQL41' | 5,984,777,660,505,297,000 | Returns the authentication name.
Returns:
str: The authentication name. | backend/env/Lib/site-packages/mysqlx/authentication.py | auth_name | Abdullah9340/Geese-Migration | python | def auth_name(self):
'Returns the authentication name.\n\n Returns:\n str: The authentication name.\n '
return 'MYSQL41' |
def auth_data(self, data):
'Hashing for MySQL 4.1 authentication.\n\n Args:\n data (str): The authentication data.\n\n Returns:\n str: The authentication response.\n '
if self._password:
password = (self._password.encode('utf-8') if isinstance(self._password, s... | -2,681,088,055,857,822,000 | Hashing for MySQL 4.1 authentication.
Args:
data (str): The authentication data.
Returns:
str: The authentication response. | backend/env/Lib/site-packages/mysqlx/authentication.py | auth_data | Abdullah9340/Geese-Migration | python | def auth_data(self, data):
'Hashing for MySQL 4.1 authentication.\n\n Args:\n data (str): The authentication data.\n\n Returns:\n str: The authentication response.\n '
if self._password:
password = (self._password.encode('utf-8') if isinstance(self._password, s... |
def name(self):
'Returns the plugin name.\n\n Returns:\n str: The plugin name.\n '
return 'Plain Authentication Plugin' | 4,109,888,586,528,399,000 | Returns the plugin name.
Returns:
str: The plugin name. | backend/env/Lib/site-packages/mysqlx/authentication.py | name | Abdullah9340/Geese-Migration | python | def name(self):
'Returns the plugin name.\n\n Returns:\n str: The plugin name.\n '
return 'Plain Authentication Plugin' |
def auth_name(self):
'Returns the authentication name.\n\n Returns:\n str: The authentication name.\n '
return 'PLAIN' | 3,704,259,228,832,687,000 | Returns the authentication name.
Returns:
str: The authentication name. | backend/env/Lib/site-packages/mysqlx/authentication.py | auth_name | Abdullah9340/Geese-Migration | python | def auth_name(self):
'Returns the authentication name.\n\n Returns:\n str: The authentication name.\n '
return 'PLAIN' |
def auth_data(self):
'Returns the authentication data.\n\n Returns:\n str: The authentication data.\n '
return '\x00{0}\x00{1}'.format(self._username, self._password) | 3,974,220,015,677,046,000 | Returns the authentication data.
Returns:
str: The authentication data. | backend/env/Lib/site-packages/mysqlx/authentication.py | auth_data | Abdullah9340/Geese-Migration | python | def auth_data(self):
'Returns the authentication data.\n\n Returns:\n str: The authentication data.\n '
return '\x00{0}\x00{1}'.format(self._username, self._password) |
def name(self):
'Returns the plugin name.\n\n Returns:\n str: The plugin name.\n '
return 'SHA256_MEMORY Authentication Plugin' | 7,930,071,930,540,710,000 | Returns the plugin name.
Returns:
str: The plugin name. | backend/env/Lib/site-packages/mysqlx/authentication.py | name | Abdullah9340/Geese-Migration | python | def name(self):
'Returns the plugin name.\n\n Returns:\n str: The plugin name.\n '
return 'SHA256_MEMORY Authentication Plugin' |
def auth_name(self):
'Returns the authentication name.\n\n Returns:\n str: The authentication name.\n '
return 'SHA256_MEMORY' | 4,464,576,182,657,441,000 | Returns the authentication name.
Returns:
str: The authentication name. | backend/env/Lib/site-packages/mysqlx/authentication.py | auth_name | Abdullah9340/Geese-Migration | python | def auth_name(self):
'Returns the authentication name.\n\n Returns:\n str: The authentication name.\n '
return 'SHA256_MEMORY' |
def auth_data(self, data):
'Hashing for SHA256_MEMORY authentication.\n\n The scramble is of the form:\n SHA256(SHA256(SHA256(PASSWORD)),NONCE) XOR SHA256(PASSWORD)\n\n Args:\n data (str): The authentication data.\n\n Returns:\n str: The authentication response.... | -8,982,060,605,021,540,000 | Hashing for SHA256_MEMORY authentication.
The scramble is of the form:
SHA256(SHA256(SHA256(PASSWORD)),NONCE) XOR SHA256(PASSWORD)
Args:
data (str): The authentication data.
Returns:
str: The authentication response. | backend/env/Lib/site-packages/mysqlx/authentication.py | auth_data | Abdullah9340/Geese-Migration | python | def auth_data(self, data):
'Hashing for SHA256_MEMORY authentication.\n\n The scramble is of the form:\n SHA256(SHA256(SHA256(PASSWORD)),NONCE) XOR SHA256(PASSWORD)\n\n Args:\n data (str): The authentication data.\n\n Returns:\n str: The authentication response.... |
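The scramble formula quoted in the `SHA256_MEMORY` row — `SHA256(SHA256(SHA256(PASSWORD)), NONCE) XOR SHA256(PASSWORD)` — can be written out directly. This is one reading of the docstring's formula, not necessarily byte-for-byte what Connector/Python sends (the real plugin also handles encoding of the response):

```python
import hashlib

def sha256(*parts: bytes) -> bytes:
    # Hash the concatenation of the given byte strings.
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return h.digest()

def sha256_memory_scramble(password: bytes, nonce: bytes) -> bytes:
    # SHA256(SHA256(SHA256(password)), nonce) XOR SHA256(password)
    inner = sha256(password)
    outer = sha256(sha256(inner), nonce)
    return bytes(a ^ b for a, b in zip(outer, inner))
```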
def p_expression_1(self, p):
' expression : binary_expression '
p[0] = p[1] | 7,685,516,735,086,991,000 | expression : binary_expression | analyzer/apisan/parse/sparser.py | p_expression_1 | oslab-swrc/apisan | python | def p_expression_1(self, p):
' '
p[0] = p[1] |
def p_binary_expression_1(self, p):
' binary_expression : cast_expression '
p[0] = p[1] | -9,182,160,903,065,062,000 | binary_expression : cast_expression | analyzer/apisan/parse/sparser.py | p_binary_expression_1 | oslab-swrc/apisan | python | def p_binary_expression_1(self, p):
' '
p[0] = p[1] |
def p_binary_expression_2(self, p):
' binary_expression : binary_expression TIMES binary_expression\n | binary_expression DIVIDE binary_expression\n | binary_expression MOD binary_expression\n | binary_expression PLUS bin... | 6,915,403,403,737,476,000 | binary_expression : binary_expression TIMES binary_expression
| binary_expression DIVIDE binary_expression
| binary_expression MOD binary_expression
| binary_expression PLUS binary_expression
| binary_expression MINUS binary_expression
| binary_expression RSHIFT binary_expression
| binary_expression LSHIFT binary_exp... | analyzer/apisan/parse/sparser.py | p_binary_expression_2 | oslab-swrc/apisan | python | def p_binary_expression_2(self, p):
' binary_expression : binary_expression TIMES binary_expression\n | binary_expression DIVIDE binary_expression\n | binary_expression MOD binary_expression\n | binary_expression PLUS bin... |
def p_binary_expression_3(self, p):
' expression : expression CONSTRAINT_OP LBRACE constraint_list RBRACE '
p[0] = ConstraintSymbol(p[1], p[4]) | -1,963,574,743,673,760,800 | expression : expression CONSTRAINT_OP LBRACE constraint_list RBRACE | analyzer/apisan/parse/sparser.py | p_binary_expression_3 | oslab-swrc/apisan | python | def p_binary_expression_3(self, p):
' '
p[0] = ConstraintSymbol(p[1], p[4]) |
def p_constraint(self, p):
' constraint : LBRACKET concrete_integer_expression COMMA concrete_integer_expression RBRACKET '
p[0] = (p[2], p[4]) | -8,889,768,589,170,384,000 | constraint : LBRACKET concrete_integer_expression COMMA concrete_integer_expression RBRACKET | analyzer/apisan/parse/sparser.py | p_constraint | oslab-swrc/apisan | python | def p_constraint(self, p):
' '
p[0] = (p[2], p[4]) |
def p_constraint_list(self, p):
' constraint_list : constraint_list COMMA constraint\n | constraint '
if (len(p) == 2):
p[0] = [p[1]]
else:
p[0] = p[1]
p[1].append(p[3]) | -5,220,464,784,130,532,000 | constraint_list : constraint_list COMMA constraint
| constraint | analyzer/apisan/parse/sparser.py | p_constraint_list | oslab-swrc/apisan | python | def p_constraint_list(self, p):
' constraint_list : constraint_list COMMA constraint\n | constraint '
if (len(p) == 2):
p[0] = [p[1]]
else:
p[0] = p[1]
p[1].append(p[3]) |
def p_cast_expression_1(self, p):
' cast_expression : unary_expression '
p[0] = p[1] | 2,346,770,209,569,342,000 | cast_expression : unary_expression | analyzer/apisan/parse/sparser.py | p_cast_expression_1 | oslab-swrc/apisan | python | def p_cast_expression_1(self, p):
' '
p[0] = p[1] |
def p_unary_expression_1(self, p):
' unary_expression : postfix_expression '
p[0] = p[1] | 4,318,103,696,975,526,000 | unary_expression : postfix_expression | analyzer/apisan/parse/sparser.py | p_unary_expression_1 | oslab-swrc/apisan | python | def p_unary_expression_1(self, p):
' '
p[0] = p[1] |
def p_unary_expression_2(self, p):
' unary_expression : AND postfix_expression '
p[0] = p[2] | -4,938,042,286,868,855,000 | unary_expression : AND postfix_expression | analyzer/apisan/parse/sparser.py | p_unary_expression_2 | oslab-swrc/apisan | python | def p_unary_expression_2(self, p):
' '
p[0] = p[2] |
def p_postfix_expression_1(self, p):
' postfix_expression : primary_expression '
p[0] = p[1] | -6,792,552,474,756,700,000 | postfix_expression : primary_expression | analyzer/apisan/parse/sparser.py | p_postfix_expression_1 | oslab-swrc/apisan | python | def p_postfix_expression_1(self, p):
' '
p[0] = p[1] |
def p_postfix_expression_2(self, p):
' postfix_expression : postfix_expression ARROW ID'
p[0] = FieldSymbol(p[1], p[3]) | 4,118,578,218,121,580,000 | postfix_expression : postfix_expression ARROW ID | analyzer/apisan/parse/sparser.py | p_postfix_expression_2 | oslab-swrc/apisan | python | def p_postfix_expression_2(self, p):
' '
p[0] = FieldSymbol(p[1], p[3]) |
def p_postfix_expression3(self, p):
' postfix_expression : postfix_expression LBRACKET expression RBRACKET '
p[0] = ArraySymbol(p[1], p[3]) | -7,503,193,322,489,411,000 | postfix_expression : postfix_expression LBRACKET expression RBRACKET | analyzer/apisan/parse/sparser.py | p_postfix_expression3 | oslab-swrc/apisan | python | def p_postfix_expression3(self, p):
' '
p[0] = ArraySymbol(p[1], p[3]) |
def p_postfix_expression4(self, p):
' postfix_expression : postfix_expression LPAREN argument_list RPAREN '
p[0] = CallSymbol(p[1], p[3]) | 3,751,622,720,951,135,000 | postfix_expression : postfix_expression LPAREN argument_list RPAREN | analyzer/apisan/parse/sparser.py | p_postfix_expression4 | oslab-swrc/apisan | python | def p_postfix_expression4(self, p):
' '
p[0] = CallSymbol(p[1], p[3]) |
def p_primary_expression_1(self, p):
' primary_expression : ID '
p[0] = IDSymbol(p[1]) | 6,044,687,616,587,051,000 | primary_expression : ID | analyzer/apisan/parse/sparser.py | p_primary_expression_1 | oslab-swrc/apisan | python | def p_primary_expression_1(self, p):
' '
p[0] = IDSymbol(p[1]) |
def p_primary_expression_2(self, p):
' primary_expression : concrete_integer_expression '
p[0] = ConcreteIntSymbol(p[1]) | -328,179,987,849,410,370 | primary_expression : concrete_integer_expression | analyzer/apisan/parse/sparser.py | p_primary_expression_2 | oslab-swrc/apisan | python | def p_primary_expression_2(self, p):
' '
p[0] = ConcreteIntSymbol(p[1]) |
def p_primary_expression_3(self, p):
'primary_expression : LPAREN expression RPAREN'
p[0] = p[2] | 7,522,107,969,994,399,000 | primary_expression : LPAREN expression RPAREN | analyzer/apisan/parse/sparser.py | p_primary_expression_3 | oslab-swrc/apisan | python | def p_primary_expression_3(self, p):
p[0] = p[2] |
def p_primary_expression_4(self, p):
' primary_expression : STRING_LITERAL '
p[0] = StringLiteralSymbol(p[1]) | 8,210,178,987,876,999,000 | primary_expression : STRING_LITERAL | analyzer/apisan/parse/sparser.py | p_primary_expression_4 | oslab-swrc/apisan | python | def p_primary_expression_4(self, p):
' '
p[0] = StringLiteralSymbol(p[1]) |
def p_concrete_integer(self, p):
' concrete_integer_expression : INT_CONST_DEC\n | MINUS INT_CONST_DEC '
if (len(p) == 3):
p[0] = (- int(p[2]))
else:
p[0] = int(p[1]) | 4,772,510,855,737,581,000 | concrete_integer_expression : INT_CONST_DEC
| MINUS INT_CONST_DEC | analyzer/apisan/parse/sparser.py | p_concrete_integer | oslab-swrc/apisan | python | def p_concrete_integer(self, p):
' concrete_integer_expression : INT_CONST_DEC\n | MINUS INT_CONST_DEC '
if (len(p) == 3):
p[0] = (- int(p[2]))
else:
p[0] = int(p[1]) |
def p_argument_list(self, p):
' argument_list :\n | expression\n | argument_list COMMA expression '
if (len(p) == 1):
p[0] = []
elif (len(p) == 2):
p[0] = [p[1]]
else:
p[0] = p[1]
p[1].append(p[3]) | -1,090,489,905,007,694,100 | argument_list :
| expression
| argument_list COMMA expression | analyzer/apisan/parse/sparser.py | p_argument_list | oslab-swrc/apisan | python | def p_argument_list(self, p):
' argument_list :\n | expression\n | argument_list COMMA expression '
if (len(p) == 1):
p[0] = []
elif (len(p) == 2):
p[0] = [p[1]]
else:
p[0] = p[1]
p[1].append(p[3]) |
def date_time(timestr):
'from str return timestr + msec'
(t_a, t_b) = timestr.split('.')
return (time.strptime(t_a, '%Y/%b/%d %H:%M:%S'), float(('0.' + t_b))) | -6,460,670,341,405,587,000 | from str return timestr + msec | yamtbx/dataproc/XIO/plugins/minicbf_interpreter.py | date_time | harumome/kamo | python | def date_time(timestr):
(t_a, t_b) = timestr.split('.')
return (time.strptime(t_a, '%Y/%b/%d %H:%M:%S'), float(('0.' + t_b))) |
def date_seconds(timestr):
'from str return seconds'
(t_a, msec) = date_time(timestr)
return (time.mktime(t_a) + msec) | 8,607,692,014,939,393,000 | from str return seconds | yamtbx/dataproc/XIO/plugins/minicbf_interpreter.py | date_seconds | harumome/kamo | python | def date_seconds(timestr):
(t_a, msec) = date_time(timestr)
return (time.mktime(t_a) + msec) |
def get_edge_resolution(pixel_x, width, distance, wavelength):
'Calculate EdgeResolution'
from math import sin, atan
if (abs(DISTANCE(distance)) > 0.0):
rad = ((0.5 * float(FLOAT2(pixel_x))) * int(width))
return (FLOAT1(wavelength) / (2 * sin((0.5 * atan((rad / DISTANCE(distance)))))))
e... | -276,537,080,251,477,400 | Calculate EdgeResolution | yamtbx/dataproc/XIO/plugins/minicbf_interpreter.py | get_edge_resolution | harumome/kamo | python | def get_edge_resolution(pixel_x, width, distance, wavelength):
from math import sin, atan
if (abs(DISTANCE(distance)) > 0.0):
rad = ((0.5 * float(FLOAT2(pixel_x))) * int(width))
return (FLOAT1(wavelength) / (2 * sin((0.5 * atan((rad / DISTANCE(distance)))))))
else:
return 0.0 |
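Stripped of the header-parsing wrappers (`FLOAT1`, `FLOAT2`, `DISTANCE`) used in `get_edge_resolution`, the geometry is `d = lambda / (2 sin(theta))` with `theta = 0.5 * atan(r / D)`; a plain-number sketch (the function name and sample values below are illustrative, not from the source):

```python
from math import atan, sin

def edge_resolution(pixel_size_mm, width_px, distance_mm, wavelength_angstrom):
    # Resolution at the detector edge: d = lambda / (2 sin(theta)),
    # where theta = 0.5 * atan(r / D) and r is half the detector width in mm.
    radius = 0.5 * pixel_size_mm * width_px
    return wavelength_angstrom / (2 * sin(0.5 * atan(radius / distance_mm)))

# Moving the detector farther away worsens (raises) the edge resolution:
near = edge_resolution(0.172, 2463, 200.0, 1.0)
far = edge_resolution(0.172, 2463, 400.0, 1.0)
assert 0 < near < far
```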
def getRawHeadDict(self, raw_head):
'Interpret the ASCII structure of the minicbf image header.'
i_1 = (28 + raw_head.find('_array_data.header_contents'))
i_2 = raw_head.find('_array_data.data', i_1)
i_3 = (raw_head.find('--CIF-BINARY-FORMAT-SECTION--', i_2) + 29)
i_4 = (i_3 + 500)
lis = [line[2:... | -7,886,909,478,486,367,000 | Interpret the ASCII structure of the minicbf image header. | yamtbx/dataproc/XIO/plugins/minicbf_interpreter.py | getRawHeadDict | harumome/kamo | python | def getRawHeadDict(self, raw_head):
i_1 = (28 + raw_head.find('_array_data.header_contents'))
i_2 = raw_head.find('_array_data.data', i_1)
i_3 = (raw_head.find('--CIF-BINARY-FORMAT-SECTION--', i_2) + 29)
i_4 = (i_3 + 500)
lis = [line[2:].strip().split(' ', 1) for line in raw_head[i_1:i_2].split... |
def iteritems(obj, **kwargs):
"replacement for six's iteritems for Python2/3 compat\n uses 'iteritems' if available and otherwise uses 'items'.\n\n Passes kwargs to method.\n "
func = getattr(obj, 'iteritems', None)
if (not func):
func = obj.items
return func(**kwargs) | 3,271,272,364,481,752,600 | replacement for six's iteritems for Python2/3 compat
uses 'iteritems' if available and otherwise uses 'items'.
Passes kwargs to method. | statsmodels/compat/python.py | iteritems | Aziiz1989/statsmodels | python | def iteritems(obj, **kwargs):
"replacement for six's iteritems for Python2/3 compat\n uses 'iteritems' if available and otherwise uses 'items'.\n\n Passes kwargs to method.\n "
func = getattr(obj, 'iteritems', None)
if (not func):
func = obj.items
return func(**kwargs) |
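The `iteritems` shim above is already self-contained; on Python 3, where dicts have no `iteritems`, the `getattr` lookup returns `None` and the shim falls through to `.items()`:

```python
def iteritems(obj, **kwargs):
    """Py2/3 compat: call obj.iteritems() if it exists, else obj.items()."""
    func = getattr(obj, 'iteritems', None)
    if not func:
        func = obj.items
    return func(**kwargs)

# On a Python 3 dict this dispatches to .items():
assert sorted(iteritems({'b': 2, 'a': 1})) == [('a', 1), ('b', 2)]
```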
def getargspec(func):
'\n Simple workaround for getargspec deprecation that returns\n an ArgSpec-like object\n '
sig = inspect.signature(func)
parameters = sig.parameters
(args, defaults) = ([], [])
(varargs, keywords) = (None, None)
for key in parameters:
parameter ... | 4,329,741,620,168,690,700 | Simple workaround for getargspec deprecation that returns
an ArgSpec-like object | statsmodels/compat/python.py | getargspec | Aziiz1989/statsmodels | python | def getargspec(func):
'\n Simple workaround for getargspec deprecation that returns\n an ArgSpec-like object\n '
sig = inspect.signature(func)
parameters = sig.parameters
(args, defaults) = ([], [])
(varargs, keywords) = (None, None)
for key in parameters:
parameter ... |
def train(self, X, y):
'\n Train the classifier. For k-nearest neighbors this is just\n memorizing the training data.\n\n Inputs:\n - X: A numpy array of shape (num_train, D) containing the training data\n consisting of num_train samples each of dimension D.\n - y: A nump... | 1,106,634,005,181,075,800 | Train the classifier. For k-nearest neighbors this is just
memorizing the training data.
Inputs:
- X: A numpy array of shape (num_train, D) containing the training data
consisting of num_train samples each of dimension D.
- y: A numpy array of shape (N,) containing the training labels, where
y[i] is the label f... | assignments/2021/assignment1/cs231n/classifiers/k_nearest_neighbor.py | train | Michellemingxuan/stanford_cs231n | python | def train(self, X, y):
'\n Train the classifier. For k-nearest neighbors this is just\n memorizing the training data.\n\n Inputs:\n - X: A numpy array of shape (num_train, D) containing the training data\n consisting of num_train samples each of dimension D.\n - y: A nump... |
def predict(self, X, k=1, num_loops=0):
'\n Predict labels for test data using this classifier.\n\n Inputs:\n - X: A numpy array of shape (num_test, D) containing test data consisting\n of num_test samples each of dimension D.\n - k: The number of nearest neighbors that vote ... | -2,996,105,026,029,196,000 | Predict labels for test data using this classifier.
Inputs:
- X: A numpy array of shape (num_test, D) containing test data consisting
of num_test samples each of dimension D.
- k: The number of nearest neighbors that vote for the predicted labels.
- num_loops: Determines which implementation to use to compute dis... | assignments/2021/assignment1/cs231n/classifiers/k_nearest_neighbor.py | predict | Michellemingxuan/stanford_cs231n | python | def predict(self, X, k=1, num_loops=0):
'\n Predict labels for test data using this classifier.\n\n Inputs:\n - X: A numpy array of shape (num_test, D) containing test data consisting\n of num_test samples each of dimension D.\n - k: The number of nearest neighbors that vote ... |
def compute_distances_two_loops(self, X):
'\n Compute the distance between each test point in X and each training point\n in self.X_train using a nested loop over both the training data and the\n test data.\n\n Inputs:\n - X: A numpy array of shape (num_test, D) containing test da... | 8,778,991,418,094,518,000 | Compute the distance between each test point in X and each training point
in self.X_train using a nested loop over both the training data and the
test data.
Inputs:
- X: A numpy array of shape (num_test, D) containing test data.
Returns:
- dists: A numpy array of shape (num_test, num_train) where dists[i, j]
is the... | assignments/2021/assignment1/cs231n/classifiers/k_nearest_neighbor.py | compute_distances_two_loops | Michellemingxuan/stanford_cs231n | python | def compute_distances_two_loops(self, X):
'\n Compute the distance between each test point in X and each training point\n in self.X_train using a nested loop over both the training data and the\n test data.\n\n Inputs:\n - X: A numpy array of shape (num_test, D) containing test da... |
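The nested-loop Euclidean distance computation this record describes fits in a few lines; the function and array names below are illustrative, not the course repository's exact code:

```python
import numpy as np

def distances_two_loops(X, X_train):
    # dists[i, j] = Euclidean distance between test point i and
    # training point j, computed with an explicit double loop.
    num_test, num_train = X.shape[0], X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            dists[i, j] = np.sqrt(np.sum((X[i] - X_train[j]) ** 2))
    return dists

X_train = np.array([[0.0, 0.0], [3.0, 4.0]])
X = np.array([[0.0, 0.0]])
print(distances_two_loops(X, X_train))  # [[0. 5.]]
```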
def compute_distances_one_loop(self, X):
'\n Compute the distance between each test point in X and each training point\n in self.X_train using a single loop over the test data.\n\n Input / Output: Same as compute_distances_two_loops\n '
num_test = X.shape[0]
num_train = self.X_tr... | 5,453,297,031,028,455,000 | Compute the distance between each test point in X and each training point
in self.X_train using a single loop over the test data.
Input / Output: Same as compute_distances_two_loops | assignments/2021/assignment1/cs231n/classifiers/k_nearest_neighbor.py | compute_distances_one_loop | Michellemingxuan/stanford_cs231n | python | def compute_distances_one_loop(self, X):
'\n Compute the distance between each test point in X and each training point\n in self.X_train using a single loop over the test data.\n\n Input / Output: Same as compute_distances_two_loops\n '
num_test = X.shape[0]
num_train = self.X_tr... |
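The one-loop variant replaces the inner loop with NumPy broadcasting over the whole training set; again the names are illustrative, not the repository's exact code:

```python
import numpy as np

def distances_one_loop(X, X_train):
    # One loop over test points; X_train - X[i] broadcasts X[i]
    # against every training row at once.
    dists = np.zeros((X.shape[0], X_train.shape[0]))
    for i in range(X.shape[0]):
        dists[i, :] = np.sqrt(np.sum((X_train - X[i]) ** 2, axis=1))
    return dists

X_train = np.array([[0.0, 0.0], [3.0, 4.0]])
X = np.array([[3.0, 4.0]])
print(distances_one_loop(X, X_train))  # [[5. 0.]]
```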
def compute_distances_no_loops(self, X):
'\n Compute the distance between each test point in X and each training point\n in self.X_train using no explicit loops.\n\n Input / Output: Same as compute_distances_two_loops\n '
num_test = X.shape[0]
num_train = self.X_train.shape[0]
... | -7,016,626,351,587,641,000 | Compute the distance between each test point in X and each training point
in self.X_train using no explicit loops.
Input / Output: Same as compute_distances_two_loops | assignments/2021/assignment1/cs231n/classifiers/k_nearest_neighbor.py | compute_distances_no_loops | Michellemingxuan/stanford_cs231n | python | def compute_distances_no_loops(self, X):
'\n Compute the distance between each test point in X and each training point\n in self.X_train using no explicit loops.\n\n Input / Output: Same as compute_distances_two_loops\n '
num_test = X.shape[0]
num_train = self.X_train.shape[0]
... |
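The fully vectorized version usually relies on the expansion ||x - y||^2 = ||x||^2 - 2 x·y + ||y||^2, which turns the whole distance matrix into one matrix product; this sketch assumes that standard trick rather than reproducing the repository's code:

```python
import numpy as np

def distances_no_loops(X, X_train):
    sq_test = np.sum(X ** 2, axis=1, keepdims=True)   # shape (num_test, 1)
    sq_train = np.sum(X_train ** 2, axis=1)           # shape (num_train,)
    cross = X @ X_train.T                             # shape (num_test, num_train)
    # np.maximum guards against tiny negative values from float error.
    return np.sqrt(np.maximum(sq_test - 2 * cross + sq_train, 0.0))

X_train = np.array([[0.0, 0.0], [3.0, 4.0]])
X = np.array([[0.0, 0.0], [3.0, 4.0]])
print(distances_no_loops(X, X_train))
```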
def predict_labels(self, dists, k=1):
'\n Given a matrix of distances between test points and training points,\n predict a label for each test point.\n\n Inputs:\n - dists: A numpy array of shape (num_test, num_train) where dists[i, j]\n gives the distance between the ith test po... | -7,229,769,627,711,926,000 | Given a matrix of distances between test points and training points,
predict a label for each test point.
Inputs:
- dists: A numpy array of shape (num_test, num_train) where dists[i, j]
gives the distance between the ith test point and the jth training point.
Returns:
- y: A numpy array of shape (num_test,) containi... | assignments/2021/assignment1/cs231n/classifiers/k_nearest_neighbor.py | predict_labels | Michellemingxuan/stanford_cs231n | python | def predict_labels(self, dists, k=1):
'\n Given a matrix of distances between test points and training points,\n predict a label for each test point.\n\n Inputs:\n - dists: A numpy array of shape (num_test, num_train) where dists[i, j]\n gives the distance between the ith test po...
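Label prediction from a distance matrix is typically a k-nearest majority vote; a minimal sketch of that idea (names and toy data are illustrative):

```python
import numpy as np

def predict_labels(dists, y_train, k=1):
    # For each test row, take the labels of the k nearest training
    # points and pick the most common one. np.bincount + argmax
    # breaks ties toward the smaller label value.
    y_pred = np.zeros(dists.shape[0], dtype=y_train.dtype)
    for i in range(dists.shape[0]):
        closest_y = y_train[np.argsort(dists[i])[:k]]
        y_pred[i] = np.bincount(closest_y).argmax()
    return y_pred

dists = np.array([[0.1, 0.2, 0.9],
                  [0.8, 0.7, 0.1]])
y_train = np.array([0, 0, 1])
print(predict_labels(dists, y_train, k=1))  # [0 1]
```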
@pytest.fixture(scope='module')
def containerized_rses(rucio_client):
'\n Detects if containerized rses for xrootd & ssh are available in the testing environment.\n :return: A list of (rse_name, rse_id) tuples.\n '
from rucio.common.exception import InvalidRSEExpression
rses = []
try:
x... | 2,804,995,855,742,133,000 | Detects if containerized rses for xrootd & ssh are available in the testing environment.
:return: A list of (rse_name, rse_id) tuples. | lib/rucio/tests/conftest.py | containerized_rses | R-16Bob/rucio | python | @pytest.fixture(scope='module')
def containerized_rses(rucio_client):
'\n Detects if containerized rses for xrootd & ssh are available in the testing environment.\n :return: A list of (rse_name, rse_id) tuples.\n '
from rucio.common.exception import InvalidRSEExpression
rses = []
try:
x... |
@pytest.fixture(scope='class')
def rse_factory_unittest(request, vo):
'\n unittest classes can get access to rse_factory fixture via this fixture\n '
from rucio.tests.temp_factories import TemporaryRSEFactory
with TemporaryRSEFactory(vo=vo) as factory:
request.cls.rse_factory = factory
... | -5,738,361,266,967,748,000 | unittest classes can get access to rse_factory fixture via this fixture | lib/rucio/tests/conftest.py | rse_factory_unittest | R-16Bob/rucio | python | @pytest.fixture(scope='class')
def rse_factory_unittest(request, vo):
'\n \n '
from rucio.tests.temp_factories import TemporaryRSEFactory
with TemporaryRSEFactory(vo=vo) as factory:
request.cls.rse_factory = factory
(yield factory)
factory.cleanup() |
@pytest.fixture
def core_config_mock(request):
'\n Fixture to allow having per-test core.config tables without affecting the other parallel tests.\n\n This override works only in tests which use core function calls directly, not in the ones working\n via the API, because the normal config table is not touc... | -8,479,526,265,431,728,000 | Fixture to allow having per-test core.config tables without affecting the other parallel tests.
This override works only in tests which use core function calls directly, not in the ones working
via the API, because the normal config table is not touched and the rucio instance answering API
calls is not aware of this m... | lib/rucio/tests/conftest.py | core_config_mock | R-16Bob/rucio | python | @pytest.fixture
def core_config_mock(request):
'\n Fixture to allow having per-test core.config tables without affecting the other parallel tests.\n\n This override works only in tests which use core function calls directly, not in the ones working\n via the API, because the normal config table is not touc... |
@pytest.fixture
def file_config_mock(request):
'\n Fixture which allows to have an isolated in-memory configuration file instance which\n is not persisted after exiting the fixture.\n\n This override works only in tests which use config calls directly, not in the ones working\n via the API, as the serve... | -2,383,599,826,401,361,400 | Fixture which allows to have an isolated in-memory configuration file instance which
is not persisted after exiting the fixture.
This override works only in tests which use config calls directly, not in the ones working
via the API, as the server config is not changed. | lib/rucio/tests/conftest.py | file_config_mock | R-16Bob/rucio | python | @pytest.fixture
def file_config_mock(request):
'\n Fixture which allows to have an isolated in-memory configuration file instance which\n is not persisted after exiting the fixture.\n\n This override works only in tests which use config calls directly, not in the ones working\n via the API, as the serve... |
@pytest.fixture
def caches_mock(request):
'\n Fixture which overrides the different internal caches with in-memory ones for the duration\n of a particular test.\n\n This override works only in tests which use core function calls directly, not in the ones\n working via API.\n\n The fixture acts by ... | 4,544,694,118,791,536,000 | Fixture which overrides the different internal caches with in-memory ones for the duration
of a particular test.
This override works only in tests which use core function calls directly, not in the ones
working via API.
The fixture acts by mock.patching the REGION object in the provided list of modules to mock. | lib/rucio/tests/conftest.py | caches_mock | R-16Bob/rucio | python | @pytest.fixture
def caches_mock(request):
'\n Fixture which overrides the different internal caches with in-memory ones for the duration\n of a particular test.\n\n This override works only in tests which use core function calls directly, not in the ones\n working via API.\n\n The fixture acts by ...
@pytest.fixture
def metrics_mock():
'\n Overrides the prometheus metric registry and allows to verify if the desired\n prometheus metrics were correctly recorded.\n '
from unittest import mock
from prometheus_client import CollectorRegistry
with mock.patch('rucio.core.monitor.REGISTRY', new=Col... | 3,437,373,712,124,519,000 | Overrides the prometheus metric registry and allows to verify if the desired
prometheus metrics were correctly recorded. | lib/rucio/tests/conftest.py | metrics_mock | R-16Bob/rucio | python | @pytest.fixture
def metrics_mock():
'\n Overrides the prometheus metric registry and allows to verify if the desired\n prometheus metrics were correctly recorded.\n '
from unittest import mock
from prometheus_client import CollectorRegistry
with mock.patch('rucio.core.monitor.REGISTRY', new=Col... |
def s_GROUPPASSWORD(self, value):
'if USERPASSWORD is set, GROUPPASSWORD defaults to the same value\n if no value is set, the key should not exist\n '
if (value in (None, DEFAULT_NO_KEY)):
user_pwd = self.data.get(USERPASSWORD, None)
if (user_pwd is not None):
return user_pwd
... | 2,257,729,412,670,996,700 | if USERPASSWORD is set, GROUPPASSWORD defaults to the same value
if no value is set, the key should not exist | antilles-core/openHPC_web_project/tests/user/mock_libuser.py | s_GROUPPASSWORD | CarrotXin/Antilles | python | def s_GROUPPASSWORD(self, value):
'if USERPASSWORD is set, GROUPPASSWORD defaults to the same value\n if no value is set, the key should not exist\n '
if (value in (None, DEFAULT_NO_KEY)):
user_pwd = self.data.get(USERPASSWORD, None)
if (user_pwd is not None):
return user_pwd
... |
def ensure_no_empty_passwords(apps: StateApps, schema_editor: DatabaseSchemaEditor) -> None:
'With CVE-2019-18933, it was possible for certain users created\n using social login (e.g. Google/GitHub auth) to have the empty\n string as their password in the Zulip database, rather than\n Django\'s "unusable p... | -8,432,326,075,367,990,000 | With CVE-2019-18933, it was possible for certain users created
using social login (e.g. Google/GitHub auth) to have the empty
string as their password in the Zulip database, rather than
Django's "unusable password" (i.e. no password at all). This was a
serious security issue for organizations with both password and
Go... | zerver/migrations/0209_user_profile_no_empty_password.py | ensure_no_empty_passwords | Bpapman/zulip | python | def ensure_no_empty_passwords(apps: StateApps, schema_editor: DatabaseSchemaEditor) -> None:
'With CVE-2019-18933, it was possible for certain users created\n using social login (e.g. Google/GitHub auth) to have the empty\n string as their password in the Zulip database, rather than\n Django\'s "unusable p... |
def __init__(self, artworks=None, genres=None, id=None, people=None, release_dates=None, remoteids=None, runtime=None, trailers=None, translations=None, url=None):
'Movie - a model defined in Swagger'
self._artworks = None
self._genres = None
self._id = None
self._people = None
self._release_dat... | 1,450,112,740,000,856,000 | Movie - a model defined in Swagger | tvdb_api/models/movie.py | __init__ | h3llrais3r/tvdb_api | python | def __init__(self, artworks=None, genres=None, id=None, people=None, release_dates=None, remoteids=None, runtime=None, trailers=None, translations=None, url=None):
self._artworks = None
self._genres = None
self._id = None
self._people = None
self._release_dates = None
self._remoteids = None... |
@property
def artworks(self):
'Gets the artworks of this Movie. # noqa: E501\n\n\n :return: The artworks of this Movie. # noqa: E501\n :rtype: list[MovieArtwork]\n '
return self._artworks | -2,393,834,981,830,680,600 | Gets the artworks of this Movie. # noqa: E501
:return: The artworks of this Movie. # noqa: E501
:rtype: list[MovieArtwork] | tvdb_api/models/movie.py | artworks | h3llrais3r/tvdb_api | python | @property
def artworks(self):
'Gets the artworks of this Movie. # noqa: E501\n\n\n :return: The artworks of this Movie. # noqa: E501\n :rtype: list[MovieArtwork]\n '
return self._artworks |
@artworks.setter
def artworks(self, artworks):
'Sets the artworks of this Movie.\n\n\n :param artworks: The artworks of this Movie. # noqa: E501\n :type: list[MovieArtwork]\n '
self._artworks = artworks | -1,086,038,072,461,534,200 | Sets the artworks of this Movie.
:param artworks: The artworks of this Movie. # noqa: E501
:type: list[MovieArtwork] | tvdb_api/models/movie.py | artworks | h3llrais3r/tvdb_api | python | @artworks.setter
def artworks(self, artworks):
'Sets the artworks of this Movie.\n\n\n :param artworks: The artworks of this Movie. # noqa: E501\n :type: list[MovieArtwork]\n '
self._artworks = artworks |
@property
def genres(self):
'Gets the genres of this Movie. # noqa: E501\n\n\n :return: The genres of this Movie. # noqa: E501\n :rtype: list[MovieGenre]\n '
return self._genres | 7,144,432,880,067,460,000 | Gets the genres of this Movie. # noqa: E501
:return: The genres of this Movie. # noqa: E501
:rtype: list[MovieGenre] | tvdb_api/models/movie.py | genres | h3llrais3r/tvdb_api | python | @property
def genres(self):
'Gets the genres of this Movie. # noqa: E501\n\n\n :return: The genres of this Movie. # noqa: E501\n :rtype: list[MovieGenre]\n '
return self._genres |
@genres.setter
def genres(self, genres):
'Sets the genres of this Movie.\n\n\n :param genres: The genres of this Movie. # noqa: E501\n :type: list[MovieGenre]\n '
self._genres = genres | -8,035,082,629,329,302,000 | Sets the genres of this Movie.
:param genres: The genres of this Movie. # noqa: E501
:type: list[MovieGenre] | tvdb_api/models/movie.py | genres | h3llrais3r/tvdb_api | python | @genres.setter
def genres(self, genres):
'Sets the genres of this Movie.\n\n\n :param genres: The genres of this Movie. # noqa: E501\n :type: list[MovieGenre]\n '
self._genres = genres |
@property
def id(self):
'Gets the id of this Movie. # noqa: E501\n\n\n :return: The id of this Movie. # noqa: E501\n :rtype: int\n '
return self._id | 133,836,784,827,236,960 | Gets the id of this Movie. # noqa: E501
:return: The id of this Movie. # noqa: E501
:rtype: int | tvdb_api/models/movie.py | id | h3llrais3r/tvdb_api | python | @property
def id(self):
'Gets the id of this Movie. # noqa: E501\n\n\n :return: The id of this Movie. # noqa: E501\n :rtype: int\n '
return self._id |
@id.setter
def id(self, id):
'Sets the id of this Movie.\n\n\n :param id: The id of this Movie. # noqa: E501\n :type: int\n '
self._id = id | -400,809,097,172,074,600 | Sets the id of this Movie.
:param id: The id of this Movie. # noqa: E501
:type: int | tvdb_api/models/movie.py | id | h3llrais3r/tvdb_api | python | @id.setter
def id(self, id):
'Sets the id of this Movie.\n\n\n :param id: The id of this Movie. # noqa: E501\n :type: int\n '
self._id = id |
@property
def people(self):
'Gets the people of this Movie. # noqa: E501\n\n\n :return: The people of this Movie. # noqa: E501\n :rtype: MoviePeople\n '
return self._people | -1,147,309,872,900,875,500 | Gets the people of this Movie. # noqa: E501
:return: The people of this Movie. # noqa: E501
:rtype: MoviePeople | tvdb_api/models/movie.py | people | h3llrais3r/tvdb_api | python | @property
def people(self):
'Gets the people of this Movie. # noqa: E501\n\n\n :return: The people of this Movie. # noqa: E501\n :rtype: MoviePeople\n '
return self._people |
@people.setter
def people(self, people):
'Sets the people of this Movie.\n\n\n :param people: The people of this Movie. # noqa: E501\n :type: MoviePeople\n '
self._people = people | 8,841,761,709,071,807,000 | Sets the people of this Movie.
:param people: The people of this Movie. # noqa: E501
:type: MoviePeople | tvdb_api/models/movie.py | people | h3llrais3r/tvdb_api | python | @people.setter
def people(self, people):
'Sets the people of this Movie.\n\n\n :param people: The people of this Movie. # noqa: E501\n :type: MoviePeople\n '
self._people = people |
@property
def release_dates(self):
'Gets the release_dates of this Movie. # noqa: E501\n\n\n :return: The release_dates of this Movie. # noqa: E501\n :rtype: list[MovieReleaseDate]\n '
return self._release_dates | 4,026,720,840,994,479,600 | Gets the release_dates of this Movie. # noqa: E501
:return: The release_dates of this Movie. # noqa: E501
:rtype: list[MovieReleaseDate] | tvdb_api/models/movie.py | release_dates | h3llrais3r/tvdb_api | python | @property
def release_dates(self):
'Gets the release_dates of this Movie. # noqa: E501\n\n\n :return: The release_dates of this Movie. # noqa: E501\n :rtype: list[MovieReleaseDate]\n '
return self._release_dates |
@release_dates.setter
def release_dates(self, release_dates):
'Sets the release_dates of this Movie.\n\n\n :param release_dates: The release_dates of this Movie. # noqa: E501\n :type: list[MovieReleaseDate]\n '
self._release_dates = release_dates | -6,582,223,386,136,288,000 | Sets the release_dates of this Movie.
:param release_dates: The release_dates of this Movie. # noqa: E501
:type: list[MovieReleaseDate] | tvdb_api/models/movie.py | release_dates | h3llrais3r/tvdb_api | python | @release_dates.setter
def release_dates(self, release_dates):
'Sets the release_dates of this Movie.\n\n\n :param release_dates: The release_dates of this Movie. # noqa: E501\n :type: list[MovieReleaseDate]\n '
self._release_dates = release_dates |
@property
def remoteids(self):
'Gets the remoteids of this Movie. # noqa: E501\n\n\n :return: The remoteids of this Movie. # noqa: E501\n :rtype: list[MovieRemoteId]\n '
return self._remoteids | 82,586,091,699,628,220 | Gets the remoteids of this Movie. # noqa: E501
:return: The remoteids of this Movie. # noqa: E501
:rtype: list[MovieRemoteId] | tvdb_api/models/movie.py | remoteids | h3llrais3r/tvdb_api | python | @property
def remoteids(self):
'Gets the remoteids of this Movie. # noqa: E501\n\n\n :return: The remoteids of this Movie. # noqa: E501\n :rtype: list[MovieRemoteId]\n '
return self._remoteids |
@remoteids.setter
def remoteids(self, remoteids):
'Sets the remoteids of this Movie.\n\n\n :param remoteids: The remoteids of this Movie. # noqa: E501\n :type: list[MovieRemoteId]\n '
self._remoteids = remoteids | 6,932,675,821,644,166,000 | Sets the remoteids of this Movie.
:param remoteids: The remoteids of this Movie. # noqa: E501
:type: list[MovieRemoteId] | tvdb_api/models/movie.py | remoteids | h3llrais3r/tvdb_api | python | @remoteids.setter
def remoteids(self, remoteids):
'Sets the remoteids of this Movie.\n\n\n :param remoteids: The remoteids of this Movie. # noqa: E501\n :type: list[MovieRemoteId]\n '
self._remoteids = remoteids |
@property
def runtime(self):
'Gets the runtime of this Movie. # noqa: E501\n\n\n :return: The runtime of this Movie. # noqa: E501\n :rtype: int\n '
return self._runtime | -5,657,135,229,381,579,000 | Gets the runtime of this Movie. # noqa: E501
:return: The runtime of this Movie. # noqa: E501
:rtype: int | tvdb_api/models/movie.py | runtime | h3llrais3r/tvdb_api | python | @property
def runtime(self):
'Gets the runtime of this Movie. # noqa: E501\n\n\n :return: The runtime of this Movie. # noqa: E501\n :rtype: int\n '
return self._runtime |
@runtime.setter
def runtime(self, runtime):
'Sets the runtime of this Movie.\n\n\n :param runtime: The runtime of this Movie. # noqa: E501\n :type: int\n '
self._runtime = runtime | -8,879,695,535,615,070,000 | Sets the runtime of this Movie.
:param runtime: The runtime of this Movie. # noqa: E501
:type: int | tvdb_api/models/movie.py | runtime | h3llrais3r/tvdb_api | python | @runtime.setter
def runtime(self, runtime):
'Sets the runtime of this Movie.\n\n\n :param runtime: The runtime of this Movie. # noqa: E501\n :type: int\n '
self._runtime = runtime |
@property
def trailers(self):
'Gets the trailers of this Movie. # noqa: E501\n\n\n :return: The trailers of this Movie. # noqa: E501\n :rtype: list[MovieTrailer]\n '
return self._trailers | -4,756,530,408,680,252,000 | Gets the trailers of this Movie. # noqa: E501
:return: The trailers of this Movie. # noqa: E501
:rtype: list[MovieTrailer] | tvdb_api/models/movie.py | trailers | h3llrais3r/tvdb_api | python | @property
def trailers(self):
'Gets the trailers of this Movie. # noqa: E501\n\n\n :return: The trailers of this Movie. # noqa: E501\n :rtype: list[MovieTrailer]\n '
return self._trailers |
@trailers.setter
def trailers(self, trailers):
'Sets the trailers of this Movie.\n\n\n :param trailers: The trailers of this Movie. # noqa: E501\n :type: list[MovieTrailer]\n '
self._trailers = trailers | 7,242,678,631,285,110,000 | Sets the trailers of this Movie.
:param trailers: The trailers of this Movie. # noqa: E501
:type: list[MovieTrailer] | tvdb_api/models/movie.py | trailers | h3llrais3r/tvdb_api | python | @trailers.setter
def trailers(self, trailers):
'Sets the trailers of this Movie.\n\n\n :param trailers: The trailers of this Movie. # noqa: E501\n :type: list[MovieTrailer]\n '
self._trailers = trailers |
@property
def translations(self):
'Gets the translations of this Movie. # noqa: E501\n\n\n :return: The translations of this Movie. # noqa: E501\n :rtype: list[MovieTranslation]\n '
return self._translations | 6,026,753,750,882,946,000 | Gets the translations of this Movie. # noqa: E501
:return: The translations of this Movie. # noqa: E501
:rtype: list[MovieTranslation] | tvdb_api/models/movie.py | translations | h3llrais3r/tvdb_api | python | @property
def translations(self):
'Gets the translations of this Movie. # noqa: E501\n\n\n :return: The translations of this Movie. # noqa: E501\n :rtype: list[MovieTranslation]\n '
return self._translations |
@translations.setter
def translations(self, translations):
'Sets the translations of this Movie.\n\n\n :param translations: The translations of this Movie. # noqa: E501\n :type: list[MovieTranslation]\n '
self._translations = translations | 4,669,909,626,875,010,000 | Sets the translations of this Movie.
:param translations: The translations of this Movie. # noqa: E501
:type: list[MovieTranslation] | tvdb_api/models/movie.py | translations | h3llrais3r/tvdb_api | python | @translations.setter
def translations(self, translations):
'Sets the translations of this Movie.\n\n\n :param translations: The translations of this Movie. # noqa: E501\n :type: list[MovieTranslation]\n '
self._translations = translations |
@property
def url(self):
'Gets the url of this Movie. # noqa: E501\n\n\n :return: The url of this Movie. # noqa: E501\n :rtype: str\n '
return self._url | 1,514,740,167,924,753,700 | Gets the url of this Movie. # noqa: E501
:return: The url of this Movie. # noqa: E501
:rtype: str | tvdb_api/models/movie.py | url | h3llrais3r/tvdb_api | python | @property
def url(self):
'Gets the url of this Movie. # noqa: E501\n\n\n :return: The url of this Movie. # noqa: E501\n :rtype: str\n '
return self._url |
@url.setter
def url(self, url):
'Sets the url of this Movie.\n\n\n :param url: The url of this Movie. # noqa: E501\n :type: str\n '
self._url = url | 5,967,116,398,014,488,000 | Sets the url of this Movie.
:param url: The url of this Movie. # noqa: E501
:type: str | tvdb_api/models/movie.py | url | h3llrais3r/tvdb_api | python | @url.setter
def url(self, url):
'Sets the url of this Movie.\n\n\n :param url: The url of this Movie. # noqa: E501\n :type: str\n '
self._url = url |
def to_dict(self):
'Returns the model properties as a dict'
result = {}
for (attr, _) in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map((lambda x: (x.to_dict() if hasattr(x, 'to_dict') else x)), value))
e... | -2,365,698,491,032,322,600 | Returns the model properties as a dict | tvdb_api/models/movie.py | to_dict | h3llrais3r/tvdb_api | python | def to_dict(self):
result = {}
for (attr, _) in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map((lambda x: (x.to_dict() if hasattr(x, 'to_dict') else x)), value))
elif hasattr(value, 'to_dict'):
... |
def to_str(self):
'Returns the string representation of the model'
return pprint.pformat(self.to_dict()) | 5,849,158,643,760,736,000 | Returns the string representation of the model | tvdb_api/models/movie.py | to_str | h3llrais3r/tvdb_api | python | def to_str(self):
return pprint.pformat(self.to_dict()) |
def __repr__(self):
'For `print` and `pprint`'
return self.to_str() | -8,960,031,694,814,905,000 | For `print` and `pprint` | tvdb_api/models/movie.py | __repr__ | h3llrais3r/tvdb_api | python | def __repr__(self):
return self.to_str() |
def __eq__(self, other):
'Returns true if both objects are equal'
if (not isinstance(other, Movie)):
return False
return (self.__dict__ == other.__dict__) | 5,689,336,831,722,514,000 | Returns true if both objects are equal | tvdb_api/models/movie.py | __eq__ | h3llrais3r/tvdb_api | python | def __eq__(self, other):
if (not isinstance(other, Movie)):
return False
return (self.__dict__ == other.__dict__) |
def __ne__(self, other):
'Returns true if both objects are not equal'
return (not (self == other)) | 7,764,124,047,908,058,000 | Returns true if both objects are not equal | tvdb_api/models/movie.py | __ne__ | h3llrais3r/tvdb_api | python | def __ne__(self, other):
return (not (self == other)) |
def mol_sim_matrix(fingerprints1, fingerprints2, method='cosine', filename=None, max_size=1000, print_progress=True):
"Create Matrix of all molecular similarities (based on molecular fingerprints).\n\n If filename is not None, the result will be saved as npy.\n To create molecular fingerprints see mol_fingerp... | -4,397,187,001,534,330,000 | Create Matrix of all molecular similarities (based on molecular fingerprints).
If filename is not None, the result will be saved as npy.
To create molecular fingerprints see mol_fingerprints() function from MS_functions.
Args:
----
fingerprints1: list
List of molecular fingerprints (numpy arrays).
fingerprints2: ... | matchms/old/ms_similarity_classical.py | mol_sim_matrix | matchms/old-iomega-spec2vec | python | def mol_sim_matrix(fingerprints1, fingerprints2, method='cosine', filename=None, max_size=1000, print_progress=True):
"Create Matrix of all molecular similarities (based on molecular fingerprints).\n\n If filename is not None, the result will be saved as npy.\n To create molecular fingerprints see mol_fingerp... |
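The core of a molecular-similarity matrix like the one this record describes is a pairwise cosine similarity between two sets of fingerprint vectors; a minimal sketch of that core, with toy binary vectors standing in for real molecular fingerprints:

```python
import numpy as np

def mol_sim_matrix_sketch(fps1, fps2):
    # Normalize each fingerprint row, then one matrix product gives
    # all pairwise cosine similarities at once. Simplified sketch:
    # no chunking (max_size), saving (filename), or other methods.
    a = np.asarray(fps1, dtype=float)
    b = np.asarray(fps2, dtype=float)
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

fps1 = [[1, 1, 0, 0], [0, 0, 1, 1]]
fps2 = [[1, 1, 0, 0]]
print(mol_sim_matrix_sketch(fps1, fps2))  # identical pair -> 1, disjoint -> 0
```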
def cosine_score_greedy(spec1, spec2, mass_shift, tol, min_intens=0, use_numba=True):
'Calculate cosine score between spectrum1 and spectrum2.\n\n If mass_shifted = True it will shift the spectra with respect to each other\n by difference in their parentmasses.\n\n Args:\n ----\n spec1: Spectrum peak... | -1,856,239,111,906,763,300 | Calculate cosine score between spectrum1 and spectrum2.
If mass_shifted = True it will shift the spectra with respect to each other
by difference in their parentmasses.
Args:
----
spec1: Spectrum peaks and intensities as numpy array.
spec2: Spectrum peaks and intensities as numpy array.
tol: float
Tolerance value... | matchms/old/ms_similarity_classical.py | cosine_score_greedy | matchms/old-iomega-spec2vec | python | def cosine_score_greedy(spec1, spec2, mass_shift, tol, min_intens=0, use_numba=True):
'Calculate cosine score between spectrum1 and spectrum2.\n\n If mass_shifted = True it will shift the spectra with respect to each other\n by difference in their parentmasses.\n\n Args:\n ----\n spec1: Spectrum peak... |
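The greedy variant pairs peaks whose m/z values agree within a tolerance, ranks candidate pairs by intensity product, and uses each peak at most once. A pure-Python simplification of that idea (no mass-shift handling, greedy rather than optimal matching; names and spectra are illustrative):

```python
import math

def cosine_greedy_sketch(spec1, spec2, tol):
    # spec1/spec2: lists of (mz, intensity) peaks.
    matches = []
    for i, (mz1, int1) in enumerate(spec1):
        for j, (mz2, int2) in enumerate(spec2):
            if abs(mz1 - mz2) <= tol:
                matches.append((int1 * int2, i, j))
    matches.sort(reverse=True)            # best intensity products first
    used1, used2, score = set(), set(), 0.0
    for prod, i, j in matches:
        if i not in used1 and j not in used2:
            used1.add(i)
            used2.add(j)
            score += prod
    norm1 = math.sqrt(sum(x * x for _, x in spec1))
    norm2 = math.sqrt(sum(x * x for _, x in spec2))
    return score / (norm1 * norm2)

spec = [(100.0, 0.7), (200.0, 0.2), (300.0, 0.1)]
print(cosine_greedy_sketch(spec, spec, tol=0.1))  # ~1.0 for identical spectra
```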
def cosine_score_hungarian(spec1, spec2, mass_shift, tol, min_intens=0):
"Taking full care of weighted bipartite matching problem.\n\n Use Hungarian algorithm (slow...)\n\n Args:\n --------\n spec1: Spectrum peaks and intensities as numpy array.\n spec2: Spectrum peaks and intensities as numpy array.... | 7,721,985,818,695,637,000 | Taking full care of weighted bipartite matching problem.
Use Hungarian algorithm (slow...)
Args:
--------
spec1: Spectrum peaks and intensities as numpy array.
spec2: Spectrum peaks and intensities as numpy array.
mass_shift: float
Difference in parent mass of both spectra to account for. Set to 'None'
when n... | matchms/old/ms_similarity_classical.py | cosine_score_hungarian | matchms/old-iomega-spec2vec | python | def cosine_score_hungarian(spec1, spec2, mass_shift, tol, min_intens=0):
"Taking full care of weighted bipartite matching problem.\n\n Use Hungarian algorithm (slow...)\n\n Args:\n --------\n spec1: Spectrum peaks and intensities as numpy array.\n spec2: Spectrum peaks and intensities as numpy array.... |
def cosine_matrix_fast(spectra, tol, max_mz, min_mz=0):
'Calculates cosine similarity matrix.\n\n Be careful! Binning is here done by creating one-hot vectors.\n It is hence really actual "bining" and different from the tolerance-based\n approach used for the cosine_matrix or molnet_matrix!\n\n Also: to... | -5,577,614,660,094,574,000 | Calculates cosine similarity matrix.
Be careful! Binning is here done by creating one-hot vectors.
It is hence really actual "bining" and different from the tolerance-based
approach used for the cosine_matrix or molnet_matrix!
Also: tol here is about tol/2 when compared to cosine_matrix or molnet_matrix... | matchms/old/ms_similarity_classical.py | cosine_matrix_fast | matchms/old-iomega-spec2vec | python | def cosine_matrix_fast(spectra, tol, max_mz, min_mz=0):
'Calculates cosine similarity matrix.\n\n Be careful! Binning is here done by creating one-hot vectors.\n It is hence really actual "bining" and different from the tolerance-based\n approach used for the cosine_matrix or molnet_matrix!\n\n Also: to... |
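The "fast" approach the record warns about can be sketched as follows: bin each spectrum's peaks onto a fixed m/z grid (one-hot-style vectors), then a single matrix product yields all pairwise cosine similarities, with the bin width playing the role the tolerance plays elsewhere (which is exactly why the results differ from the tolerance-based scores):

```python
import numpy as np

def cosine_matrix_fast_sketch(spectra, bin_width, max_mz):
    # spectra: list of peak lists, each peak a (mz, intensity) pair.
    n_bins = int(max_mz / bin_width)
    vecs = np.zeros((len(spectra), n_bins))
    for s, peaks in enumerate(spectra):
        for mz, intensity in peaks:
            vecs[s, int(mz / bin_width)] += intensity
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs @ vecs.T

spectra = [[(100.2, 1.0), (250.7, 0.5)],
           [(100.3, 1.0), (250.6, 0.5)]]
sim = cosine_matrix_fast_sketch(spectra, bin_width=1.0, max_mz=1000.0)
print(sim.round(3))  # near-identical toy spectra land in the same bins
```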
def cosine_score_matrix(spectra, tol, max_mz=1000.0, min_intens=0, mass_shifting=False, method='hungarian', num_workers=4, filename=None, safety_points=None):
'Create Matrix of all modified cosine similarities.\n\n Takes some time to calculate, so better only do it once and save as npy.\n\n Now implemented: p... | -7,387,584,936,006,276,000 | Create Matrix of all modified cosine similarities.
Takes some time to calculate, so better only do it once and save as npy.
Now implemented: parallelization of code using concurrent.futures and numba options.
spectra: list
List of spectra (of Spectrum class)
tol: float
Tolerance to still count peaks a match ... | matchms/old/ms_similarity_classical.py | cosine_score_matrix | matchms/old-iomega-spec2vec | python | def cosine_score_matrix(spectra, tol, max_mz=1000.0, min_intens=0, mass_shifting=False, method='hungarian', num_workers=4, filename=None, safety_points=None):
'Create Matrix of all modified cosine similarities.\n\n Takes some time to calculate, so better only do it once and save as npy.\n\n Now implemented: p... |
def modcos_pair(X, len_spectra):
'Single molnet pair calculation\n '
(spectra_i, spectra_j, i, j, mass_shift, tol, min_intens, method, counter) = X
if (method == 'greedy'):
(molnet_pair, used_matches) = cosine_score_greedy(spectra_i, spectra_j, mass_shift, tol, min_intens=min_intens, use_numba=Fa... | 2,678,553,399,383,915,000 | Single molnet pair calculation | matchms/old/ms_similarity_classical.py | modcos_pair | matchms/old-iomega-spec2vec | python | def modcos_pair(X, len_spectra):
'\n '
(spectra_i, spectra_j, i, j, mass_shift, tol, min_intens, method, counter) = X
if (method == 'greedy'):
(molnet_pair, used_matches) = cosine_score_greedy(spectra_i, spectra_j, mass_shift, tol, min_intens=min_intens, use_numba=False)
elif (method == 'gree... |