Dataset schema (column name, type, min–max value length or range):

- body_hash: string, 64–64 chars
- body: string, 23–109k chars
- docstring: string, 1–57k chars
- path: string, 4–198 chars
- name: string, 1–115 chars
- repository_name: string, 7–111 chars
- repository_stars: float64, 0–191k
- lang: categorical, 1 distinct value
- body_without_docstring: string, 14–108k chars
- unified: string, 45–133k chars
body_hash: 0756b862c437ed1e9b6fc03c15d6356580a4e6ece71010ce89cde1cab2a1dd47
path: build/PureCloudPlatformClientV2/models/analytics_conversation_async_query_response.py
name: to_json
repository: cjohnson-ctl/platform-client-sdk-python (10 stars)
lang: python
body:

    def to_json(self):
        """Returns the model as raw JSON"""
        return json.dumps(sanitize_for_serialization(self.to_dict()))
body_hash: c373d87dd29c1e96dce460ab571bff86e58edb298ba83c85d8cc7603a6505de4
path: build/PureCloudPlatformClientV2/models/analytics_conversation_async_query_response.py
name: to_str
repository: cjohnson-ctl/platform-client-sdk-python (10 stars)
lang: python
body:

    def to_str(self):
        """Returns the string representation of the model"""
        return pformat(self.to_dict())
body_hash: 1034ff7dd2eef24d21e3c2fa7409b793ab5cbb8cd75a2eb0ab3e62604b26264d
path: build/PureCloudPlatformClientV2/models/analytics_conversation_async_query_response.py
name: __repr__
repository: cjohnson-ctl/platform-client-sdk-python (10 stars)
lang: python
body:

    def __repr__(self):
        """For `print` and `pprint`"""
        return self.to_str()
body_hash: a43b3ce7478646f0122f200e4de04f4f5ed99329a4b75930eecef4ff54a23351
path: build/PureCloudPlatformClientV2/models/analytics_conversation_async_query_response.py
name: __eq__
repository: cjohnson-ctl/platform-client-sdk-python (10 stars)
lang: python
body:

    def __eq__(self, other):
        """Returns true if both objects are equal"""
        return self.__dict__ == other.__dict__
body_hash: e5050f8e1402e3a4c90d6c6e229c4c9e2b8ec61e0be457915ea9d976f7e6b0b4
path: build/PureCloudPlatformClientV2/models/analytics_conversation_async_query_response.py
name: __ne__
repository: cjohnson-ctl/platform-client-sdk-python (10 stars)
lang: python
body:

    def __ne__(self, other):
        """Returns true if both objects are not equal"""
        return not self == other
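The five generated SDK helpers above all follow one boilerplate pattern. A minimal self-contained sketch of that pattern, with a stand-in `sanitize_for_serialization` (the real SDK helper is not shown in these rows; here a plain dict simply passes through):

```python
import json
from pprint import pformat

def sanitize_for_serialization(obj):
    # Hypothetical stand-in: the real SDK helper also converts
    # datetimes, nested models, etc. A plain dict passes through.
    return obj

class Model:
    def __init__(self, name=None):
        self.name = name

    def to_dict(self):
        return {'name': self.name}

    def to_json(self):
        """Returns the model as raw JSON"""
        return json.dumps(sanitize_for_serialization(self.to_dict()))

    def to_str(self):
        """Returns the string representation of the model"""
        return pformat(self.to_dict())

    def __repr__(self):
        """For `print` and `pprint`"""
        return self.to_str()

    def __eq__(self, other):
        """Returns true if both objects are equal"""
        return self.__dict__ == other.__dict__

    def __ne__(self, other):
        """Returns true if both objects are not equal"""
        return not self == other

a, b = Model('x'), Model('x')
print(a.to_json())  # {"name": "x"}
print(a == b)       # True
```

Note that `__eq__` compares `__dict__`, so two instances with identical attributes are equal even though they are distinct objects.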
body_hash: f61222522cdb39e05c61cfc730dc7fb9d6f146db04bab7a197430edbcd19f2d8
path: src/NMF.py
name: setParam
repository: T-Sumida/Simple_NMF (0 stars)
lang: python
body:

    def setParam(self, k: int, row: int, column: int):
        """Set the NMF parameters.

        Args:
            k (int): number of factors
            row (int): number of rows
            column (int): number of columns
        """
        self.__k = k
        self.__row = row
        self.__column = column
        self.__dictionary = np.random.random_sample([self.__row, self.__k])
        self.__activation = np.random.random_sample([self.__k, self.__column])
body_hash: 78bddb17bcdce549a04f7ba724b03b38bdb2f22ea690bbd655fd3654725695b3
path: src/NMF.py
name: setDictionary
repository: T-Sumida/Simple_NMF (0 stars)
lang: python
body:

    def setDictionary(self, index: int, data: List):
        """Set data into the dictionary matrix.

        Args:
            index (int): factor index (0 <= index < k)
            data (List): data
        """
        # Guard against an out-of-range factor index or a wrong-length vector
        # (the original used `and` here, which let either error slip through).
        if index >= self.__k or len(data) != self.__row:
            print('Please NMF.setParam(k,row,column)')
            print(f'k = {self.__k}')
            print(f'row = {self.__row}')
            return
        self.__dictionary[:, index] = np.array(data[:self.__row], np.float32)
body_hash: 05c055eecc8a8a161b7aeab07899d7499a44650d7ef48cde3be77336ee66e492
path: src/NMF.py
name: setAnalyzData
repository: T-Sumida/Simple_NMF (0 stars)
lang: python
body:

    def setAnalyzData(self, data: List, k: int):
        """Register the matrix data to be factorized.

        Args:
            data (List): matrix to factorize
            k (int): number of factors
        """
        if len(np.shape(data)) == 1:
            # Reshape 1-D input into a column vector (the original assigned
            # a matrix of ones here, which silently discarded the data).
            self.__data = np.array(data, np.float32).reshape(-1, 1)
            self.setParam(k, np.shape(data)[0], 1)
        else:
            self.__data = data
            self.setParam(k, np.shape(data)[0], np.shape(data)[1])
body_hash: 36d7d8ace9477e2a22c34f163eae347d9e261e48a31d6c6ae54cd80615df7ae1
path: src/NMF.py
name: separate_euc_with_template
repository: T-Sumida/Simple_NMF (0 stars)
lang: python
body:

    def separate_euc_with_template(self, iter: int = 200) -> Tuple[np.ndarray, np.ndarray]:
        """Separation using EUC-divergence updates with a given template (dictionary fixed).

        Args:
            iter (int, optional): number of update iterations. Defaults to 200.

        Returns:
            Tuple[np.ndarray, np.ndarray]: (dictionary matrix, activation matrix)
        """
        counter = 0
        while counter < iter:
            approx = np.dot(self.__dictionary, self.__activation)
            wh = np.dot(np.transpose(self.__dictionary), self.__data)
            wt = np.dot(np.transpose(self.__dictionary), approx)
            bias = wh / wt
            bias[np.isnan(bias)] = 0
            self.__activation = self.__activation * bias
            counter += 1
        return (self.__dictionary, self.__activation)
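The loop above is the standard multiplicative update for Euclidean-distance NMF with a fixed dictionary, H ← H ⊙ (WᵀV) / (WᵀWH). A self-contained sketch of the same rule (the names `W`, `H`, `V` are illustrative, not from the source; a small epsilon replaces the source's NaN-zeroing):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((8, 5))   # data matrix to approximate
W = rng.random((8, 3))   # fixed dictionary (the "template")
H = rng.random((3, 5))   # activations, updated in place

def euc_error(V, W, H):
    return np.linalg.norm(V - W @ H) ** 2

before = euc_error(V, W, H)
for _ in range(200):
    # H <- H * (W^T V) / (W^T W H); epsilon avoids division by zero
    H *= (W.T @ V) / (W.T @ (W @ H) + 1e-12)
after = euc_error(V, W, H)
print(after < before)  # True: the multiplicative update is non-increasing in the error
```

Because the update is purely multiplicative, `H` stays nonnegative as long as its initialization is nonnegative.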
body_hash: 0e799f7462bb2c222aae65b3ab415115fabfd0ea0130851ed5543d2d7deb2983
path: src/NMF.py
name: separate_kl_with_template
repository: T-Sumida/Simple_NMF (0 stars)
lang: python
body:

    def separate_kl_with_template(self, iter: int = 200) -> Tuple[np.ndarray, np.ndarray]:
        """Separation using KL-divergence updates with a given template (dictionary fixed).

        Args:
            iter (int, optional): number of update iterations. Defaults to 200.

        Returns:
            Tuple[np.ndarray, np.ndarray]: (dictionary matrix, activation matrix)
        """
        counter = 0
        while counter < iter:
            approx = np.dot(self.__dictionary, self.__activation)
            w = self.__data / approx
            w[np.isnan(w)] = 0
            wh = np.dot(np.transpose(self.__dictionary), w)
            wt = np.ones([1, self.__k], np.float32)
            wt[:] = np.sum(self.__dictionary, axis=0)  # column sums of the dictionary
            wt = np.transpose(wt)
            bias = wh / wt
            bias[np.isnan(bias)] = 0
            self.__activation = self.__activation * bias
            counter += 1
        return (self.__dictionary, self.__activation)
body_hash: 696ef745264ad7b687c628f498f61d778cefa7a076ac10b4c4bcec8253657184
path: src/NMF.py
name: separate_is_with_template
repository: T-Sumida/Simple_NMF (0 stars)
lang: python
body:

    def separate_is_with_template(self, iter: int = 200) -> Tuple[np.ndarray, np.ndarray]:
        """Separation using IS-divergence updates with a given template (dictionary fixed).

        Args:
            iter (int, optional): number of update iterations. Defaults to 200.

        Returns:
            Tuple[np.ndarray, np.ndarray]: (dictionary matrix, activation matrix)
        """
        counter = 0
        while counter < iter:
            approx = np.dot(self.__dictionary, self.__activation)
            wt = np.ones([1, self.__k], np.float32)
            w1 = self.__data / approx
            w2 = np.transpose(self.__dictionary) / np.sum(approx, axis=1)
            w1[np.isnan(w1)] = 0
            w2[np.isnan(w2)] = 0
            wh = np.dot(w2, w1)
            wt[:] = np.sum(w2, axis=1)
            wt = np.transpose(wt)
            bias = wh / wt
            bias[np.isnan(bias)] = 0
            self.__activation = self.__activation * np.sqrt(bias)
            counter += 1
        return (self.__dictionary, self.__activation)
body_hash: 1e8199b12b3d32b4ccd6345cc7e6d16bcda46b8b79559fe81a5e59254b343151
path: src/NMF.py
name: separate_euc_without_template
repository: T-Sumida/Simple_NMF (0 stars)
lang: python
body:

    def separate_euc_without_template(self, iter: int = 200) -> Tuple[np.ndarray, np.ndarray]:
        """Separation using EUC-divergence updates without a template (dictionary also updated).

        Args:
            iter (int, optional): number of update iterations. Defaults to 200.

        Returns:
            Tuple[np.ndarray, np.ndarray]: (dictionary matrix, activation matrix)
        """
        counter = 0
        while counter < iter:
            # Activation update
            approx = np.dot(self.__dictionary, self.__activation)
            wh = np.dot(np.transpose(self.__dictionary), self.__data)
            wt = np.dot(np.transpose(self.__dictionary), approx)
            bias = wh / wt
            bias[np.isnan(bias)] = 0
            self.__activation = self.__activation * bias
            # Dictionary update
            approx = np.dot(self.__dictionary, self.__activation)
            wh = np.dot(self.__data, np.transpose(self.__activation))
            wt = np.dot(approx, np.transpose(self.__activation))
            bias = wh / wt
            bias[np.isnan(bias)] = 0
            self.__dictionary = self.__dictionary * bias
            counter += 1
        return (self.__dictionary, self.__activation)
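With no fixed template, both factors are updated in alternation, which is the classic Lee–Seung multiplicative algorithm for Euclidean NMF. A minimal standalone sketch (variable names illustrative; epsilon replaces the source's NaN-zeroing):

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.random((10, 6))  # data matrix
W = rng.random((10, 3))  # dictionary, now also learned
H = rng.random((3, 6))   # activations

before = np.linalg.norm(V - W @ H)
for _ in range(300):
    # Alternate the two multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # H <- H * (W^T V)/(W^T W H)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # W <- W * (V H^T)/(W H H^T)
after = np.linalg.norm(V - W @ H)
print(after < before)  # True: reconstruction error shrinks from a random start
```

Both factors remain elementwise nonnegative throughout, since each update only multiplies by nonnegative ratios.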
96870ae91a827efcfe08d82da1ecb0c449322aa76553cf42f8df5be8973d6990
def separate_kl_without_template(self, iter: int=200) -> Tuple[(np.array, np.array)]: 'テンプレートなしのKL-divergence仕様の分離処理\n\n Args:\n iter (int, optional): 反復更新回数. Defaults to 200.\n\n Returns:\n Tuple[np.array, np.array]: [辞書行列, 励起行列]\n ' counter = 0 while (counter < iter): approx = np.dot(self.__dictionary, self.__activation) w = (self.__data / approx) w[np.isnan(w)] = 0 wh = np.dot(np.transpose(self.__dictionary), w) wt = np.ones([1, self.__k], np.float32) wt[:] = sum(self.__dictionary[(:, :)]) wt = np.transpose(wt) bias = (wh / wt) bias[np.isnan(bias)] = 0 self.__activation = (self.__activation * bias) approx = np.dot(self.__dictionary, self.__activation) w = (self.__data / approx) w[np.isnan(w)] = 0 wh = np.dot(w, np.transpose(self.__activation)) wt = np.ones([self.__k, 1], np.float32) wt = sum(np.transpose(self.__activation[:])) wt = np.transpose(wt) bias = (wh / wt) self.__dictionary = (self.__dictionary * bias) counter += 1 return (self.__dictionary, self.__activation)
テンプレートなしのKL-divergence仕様の分離処理 Args: iter (int, optional): 反復更新回数. Defaults to 200. Returns: Tuple[np.array, np.array]: [辞書行列, 励起行列]
src/NMF.py
separate_kl_without_template
T-Sumida/Simple_NMF
0
python
def separate_kl_without_template(self, iter: int=200) -> Tuple[(np.array, np.array)]: 'テンプレートなしのKL-divergence仕様の分離処理\n\n Args:\n iter (int, optional): 反復更新回数. Defaults to 200.\n\n Returns:\n Tuple[np.array, np.array]: [辞書行列, 励起行列]\n ' counter = 0 while (counter < iter): approx = np.dot(self.__dictionary, self.__activation) w = (self.__data / approx) w[np.isnan(w)] = 0 wh = np.dot(np.transpose(self.__dictionary), w) wt = np.ones([1, self.__k], np.float32) wt[:] = sum(self.__dictionary[(:, :)]) wt = np.transpose(wt) bias = (wh / wt) bias[np.isnan(bias)] = 0 self.__activation = (self.__activation * bias) approx = np.dot(self.__dictionary, self.__activation) w = (self.__data / approx) w[np.isnan(w)] = 0 wh = np.dot(w, np.transpose(self.__activation)) wt = np.ones([self.__k, 1], np.float32) wt = sum(np.transpose(self.__activation[:])) wt = np.transpose(wt) bias = (wh / wt) self.__dictionary = (self.__dictionary * bias) counter += 1 return (self.__dictionary, self.__activation)
def separate_kl_without_template(self, iter: int=200) -> Tuple[(np.array, np.array)]: 'テンプレートなしのKL-divergence仕様の分離処理\n\n Args:\n iter (int, optional): 反復更新回数. Defaults to 200.\n\n Returns:\n Tuple[np.array, np.array]: [辞書行列, 励起行列]\n ' counter = 0 while (counter < iter): approx = np.dot(self.__dictionary, self.__activation) w = (self.__data / approx) w[np.isnan(w)] = 0 wh = np.dot(np.transpose(self.__dictionary), w) wt = np.ones([1, self.__k], np.float32) wt[:] = sum(self.__dictionary[(:, :)]) wt = np.transpose(wt) bias = (wh / wt) bias[np.isnan(bias)] = 0 self.__activation = (self.__activation * bias) approx = np.dot(self.__dictionary, self.__activation) w = (self.__data / approx) w[np.isnan(w)] = 0 wh = np.dot(w, np.transpose(self.__activation)) wt = np.ones([self.__k, 1], np.float32) wt = sum(np.transpose(self.__activation[:])) wt = np.transpose(wt) bias = (wh / wt) self.__dictionary = (self.__dictionary * bias) counter += 1 return (self.__dictionary, self.__activation)<|docstring|>テンプレートなしのKL-divergence仕様の分離処理 Args: iter (int, optional): 反復更新回数. Defaults to 200. Returns: Tuple[np.array, np.array]: [辞書行列, 励起行列]<|endoftext|>
body_hash: 1f8e4b1104edfc4ecbc6de0d5b5de2c5ed739c3fbb0a3753bb5ae91945102d89
path: src/NMF.py
name: separate_is_without_template
repository: T-Sumida/Simple_NMF (0 stars)
lang: python
body:

    def separate_is_without_template(self, iter: int = 200) -> Tuple[np.ndarray, np.ndarray]:
        """Separation using IS-divergence updates without a template (dictionary also updated).

        Args:
            iter (int, optional): number of update iterations. Defaults to 200.

        Returns:
            Tuple[np.ndarray, np.ndarray]: (dictionary matrix, activation matrix)
        """
        counter = 0
        while counter < iter:
            # Activation update
            approx = np.dot(self.__dictionary, self.__activation)
            wt = np.ones([1, self.__k], np.float32)
            w1 = self.__data / approx
            w2 = np.transpose(self.__dictionary) / np.sum(approx, axis=1)
            w1[np.isnan(w1)] = 0
            w2[np.isnan(w2)] = 0
            wh = np.dot(w2, w1)
            wt[:] = np.sum(w2, axis=1)
            wt = np.transpose(wt)
            bias = wh / wt
            bias[np.isnan(bias)] = 0
            self.__activation = self.__activation * np.sqrt(bias)
            # Dictionary update
            approx = np.dot(self.__dictionary, self.__activation)
            w1 = self.__data / approx
            w2 = self.__activation / np.sum(approx, axis=0)
            w1[np.isnan(w1)] = 0
            w2[np.isnan(w2)] = 0
            wh = np.dot(w1, np.transpose(w2))
            wt = np.sum(w2, axis=1)
            bias = wh / wt
            bias[np.isnan(bias)] = 0
            self.__dictionary = self.__dictionary * np.sqrt(bias)
            counter += 1
        return (self.__dictionary, self.__activation)
body_hash: 8962653875756ac342a24ed59ac9de707158f44c4348c385e2f8ff10c19ba764
path: Pyrado/pyrado/algorithms/step_based/gae.py
name: __init__
repository: swami1995/SimuRLacra (52 stars)
lang: python
body:

    def __init__(
        self,
        vfcn: Union[nn.Module, Policy],
        gamma: float = 0.99,
        lamda: float = 0.95,
        num_epoch: int = 10,
        batch_size: int = 64,
        standardize_adv: bool = True,
        standardizer: Optional[RunningStandardizer] = None,
        max_grad_norm: Optional[float] = None,
        lr: float = 0.0005,
        lr_scheduler=None,
        lr_scheduler_hparam: Optional[dict] = None,
    ):
        """
        Constructor

        :param vfcn: value function, which can be an `FNN` or a `Policy`
        :param gamma: temporal discount factor
        :param lamda: regulates the trade-off between bias (max for 0) and variance (max for 1), see [1]
        :param num_epoch: number of iterations over all gathered samples during one estimator update
        :param batch_size: number of samples per estimator update batch
        :param standardize_adv: if `True`, the advantages are standardized to be ~ N(0,1)
        :param standardizer: pass `None` to use stateless standardization, or pass `RunningStandardizer()`
                             to use a standardizer which keeps track of past values
        :param max_grad_norm: maximum L2 norm of the gradients for clipping; set to `None` to disable gradient clipping
        :param lr: (initial) learning rate for the optimizer, which can be modified by the scheduler.
                   By default, the learning rate is constant.
        :param lr_scheduler: learning rate scheduler that does one step per epoch (pass through the whole data set)
        :param lr_scheduler_hparam: hyper-parameters for the learning rate scheduler
        """
        if not isinstance(vfcn, (nn.Module, Policy)):
            raise pyrado.TypeErr(given=vfcn, expected_type=[nn.Module, Policy])
        if isinstance(vfcn, Policy):
            if not vfcn.env_spec.act_space == ValueFunctionSpace:
                raise pyrado.ShapeErr(msg='The given act_space held by the vfcn should be a ValueFunctionSpace.')
        if not 0 <= gamma <= 1:
            raise pyrado.ValueErr(given=gamma, ge_constraint='0', le_constraint='1')
        if not 0 <= lamda <= 1:
            raise pyrado.ValueErr(given=lamda, ge_constraint='0', le_constraint='1')

        super().__init__()

        self._vfcn = vfcn
        self.gamma = gamma
        self.lamda = lamda
        self.num_epoch = num_epoch
        self.batch_size = batch_size
        self.max_grad_norm = max_grad_norm
        self.standardize_adv = standardize_adv
        self.standardizer = standardizer
        self.loss_fcn = nn.MSELoss()
        self.optim = to.optim.Adam(self._vfcn.parameters(), lr=lr, eps=1e-05)
        self._lr_scheduler = lr_scheduler
        self._lr_scheduler_hparam = lr_scheduler_hparam
        if lr_scheduler is not None:
            self._lr_scheduler = lr_scheduler(self.optim, **lr_scheduler_hparam)
1659ca88d3c894c1383c98902e7cde35a13b4bd96a79facf350896ad49ef6cab
@property def vfcn(self) -> Union[(nn.Module, Policy)]: 'Get the value function approximator.' return self._vfcn
Get the value function approximator.
Pyrado/pyrado/algorithms/step_based/gae.py
vfcn
swami1995/SimuRLacra
52
python
@property def vfcn(self) -> Union[(nn.Module, Policy)]: return self._vfcn
@property def vfcn(self) -> Union[(nn.Module, Policy)]: return self._vfcn<|docstring|>Get the value function approximator.<|endoftext|>
c3996304c00616da9757f3736d732118bbbf88c1d42cfa413227472e941edb75
@vfcn.setter def vfcn(self, vfcn: Union[(nn.Module, Policy)]): 'Set the value function approximator.' if (not isinstance(vfcn, (nn.Module, Policy))): raise pyrado.TypeErr(given=vfcn, expected_type=[nn.Module, Policy]) self._vfcn = vfcn if (self._lr_scheduler is not None): self._lr_scheduler.last_epoch = (- 1)
Set the value function approximator.
Pyrado/pyrado/algorithms/step_based/gae.py
vfcn
swami1995/SimuRLacra
52
python
@vfcn.setter def vfcn(self, vfcn: Union[(nn.Module, Policy)]): if (not isinstance(vfcn, (nn.Module, Policy))): raise pyrado.TypeErr(given=vfcn, expected_type=[nn.Module, Policy]) self._vfcn = vfcn if (self._lr_scheduler is not None): self._lr_scheduler.last_epoch = (- 1)
@vfcn.setter def vfcn(self, vfcn: Union[(nn.Module, Policy)]): if (not isinstance(vfcn, (nn.Module, Policy))): raise pyrado.TypeErr(given=vfcn, expected_type=[nn.Module, Policy]) self._vfcn = vfcn if (self._lr_scheduler is not None): self._lr_scheduler.last_epoch = (- 1)<|docstring|>Set the value function approximator.<|endoftext|>
bbb8c1380daa2fdd74a90650e5901abf932f5b222852412015084f9f3b004383
def gae(self, concat_ros: StepSequence, v_pred: Optional[to.Tensor]=None, requires_grad: bool=False) -> to.Tensor: '\n Compute the generalized advantage estimation as described in [1].\n\n :param concat_ros: concatenated rollouts (sequence of steps from potentially different rollouts)\n :param v_pred: state-value predictions if already computed, else pass None\n :param requires_grad: is the gradient required\n :return adv: tensor of advantages\n ' with ExitStack() as stack: if (not requires_grad): stack.enter_context(to.no_grad()) if (v_pred is None): v_pred = self.values(concat_ros) adv = to.empty_like(v_pred) for k in reversed(range(concat_ros.length)): if concat_ros[k].done: adv[k] = (concat_ros[k].reward - v_pred[k]) else: adv[k] = (((concat_ros[k].reward + (self.gamma * v_pred[(k + 1)])) - v_pred[k]) + ((self.gamma * self.lamda) * adv[(k + 1)])) if self.standardize_adv: if isinstance(self.standardizer, RunningStandardizer): adv = self.standardizer(adv, axis=0) else: adv = standardize(adv) return adv
Compute the generalized advantage estimation as described in [1]. :param concat_ros: concatenated rollouts (sequence of steps from potentially different rollouts) :param v_pred: state-value predictions if already computed, else pass None :param requires_grad: is the gradient required :return adv: tensor of advantages
Pyrado/pyrado/algorithms/step_based/gae.py
gae
swami1995/SimuRLacra
52
python
def gae(self, concat_ros: StepSequence, v_pred: Optional[to.Tensor]=None, requires_grad: bool=False) -> to.Tensor: '\n Compute the generalized advantage estimation as described in [1].\n\n :param concat_ros: concatenated rollouts (sequence of steps from potentially different rollouts)\n :param v_pred: state-value predictions if already computed, else pass None\n :param requires_grad: is the gradient required\n :return adv: tensor of advantages\n ' with ExitStack() as stack: if (not requires_grad): stack.enter_context(to.no_grad()) if (v_pred is None): v_pred = self.values(concat_ros) adv = to.empty_like(v_pred) for k in reversed(range(concat_ros.length)): if concat_ros[k].done: adv[k] = (concat_ros[k].reward - v_pred[k]) else: adv[k] = (((concat_ros[k].reward + (self.gamma * v_pred[(k + 1)])) - v_pred[k]) + ((self.gamma * self.lamda) * adv[(k + 1)])) if self.standardize_adv: if isinstance(self.standardizer, RunningStandardizer): adv = self.standardizer(adv, axis=0) else: adv = standardize(adv) return adv
def gae(self, concat_ros: StepSequence, v_pred: Optional[to.Tensor]=None, requires_grad: bool=False) -> to.Tensor: '\n Compute the generalized advantage estimation as described in [1].\n\n :param concat_ros: concatenated rollouts (sequence of steps from potentially different rollouts)\n :param v_pred: state-value predictions if already computed, else pass None\n :param requires_grad: is the gradient required\n :return adv: tensor of advantages\n ' with ExitStack() as stack: if (not requires_grad): stack.enter_context(to.no_grad()) if (v_pred is None): v_pred = self.values(concat_ros) adv = to.empty_like(v_pred) for k in reversed(range(concat_ros.length)): if concat_ros[k].done: adv[k] = (concat_ros[k].reward - v_pred[k]) else: adv[k] = (((concat_ros[k].reward + (self.gamma * v_pred[(k + 1)])) - v_pred[k]) + ((self.gamma * self.lamda) * adv[(k + 1)])) if self.standardize_adv: if isinstance(self.standardizer, RunningStandardizer): adv = self.standardizer(adv, axis=0) else: adv = standardize(adv) return adv<|docstring|>Compute the generalized advantage estimation as described in [1]. :param concat_ros: concatenated rollouts (sequence of steps from potentially different rollouts) :param v_pred: state-value predictions if already computed, else pass None :param requires_grad: is the gradient required :return adv: tensor of advantages<|endoftext|>
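The backward recursion in the `gae` record above can be sketched without any framework dependency. The following is a minimal pure-Python version over a single flattened rollout; the helper name and the `rewards`/`values`/`dones` inputs are illustrative, not part of pyrado, and (as in the original, where `done` guards the bootstrap term) the last step is assumed terminal so no value beyond the end of the sequence is needed:

```python
def gae_advantages(rewards, values, dones, gamma=0.99, lam=0.95):
    """Backward GAE recursion: at terminal steps adv_k = r_k - V(s_k);
    otherwise adv_k = r_k + gamma*V(s_{k+1}) - V(s_k) + gamma*lam*adv_{k+1}.
    Expects dones[-1] to be True (mirrors the excerpt's `done` check)."""
    adv = [0.0] * len(rewards)
    for k in reversed(range(len(rewards))):
        if dones[k]:
            adv[k] = rewards[k] - values[k]
        else:
            adv[k] = (rewards[k] + gamma * values[k + 1] - values[k]
                      + gamma * lam * adv[k + 1])
    return adv
```

With `gamma = lam = 1` this reduces to the undiscounted return-to-go minus the value prediction, which is a quick sanity check on the recursion.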
c2b48960f7d8308fb3debfde1563b545f6b80b0603b6ed2f4a8f4b32f68498a2
def tdlamda_returns(self, v_pred: to.Tensor=None, adv: to.Tensor=None, concat_ros: StepSequence=None) -> to.Tensor: '\n Compute the TD($\\lambda$) returns based on the predictions of the network (introduces a bias).\n\n :param v_pred: state-value predictions if already computed, pass `None` to compute form given rollouts\n :param adv: advantages if already computed, pass `None` to compute form given rollouts\n :param concat_ros: rollouts to compute predicted values and advantages from if they are not provided\n :return: exponentially weighted returns based on the value function estimator\n ' with to.no_grad(): if (v_pred is None): if (concat_ros is None): raise pyrado.TypeErr(given=concat_ros, expected_type=StepSequence) v_pred = self.values(concat_ros) if (adv is None): if (concat_ros is None): raise pyrado.TypeErr(given=concat_ros, expected_type=StepSequence) adv = self.gae(concat_ros, v_pred) return (v_pred + adv)
Compute the TD($\lambda$) returns based on the predictions of the network (introduces a bias). :param v_pred: state-value predictions if already computed, pass `None` to compute from given rollouts :param adv: advantages if already computed, pass `None` to compute from given rollouts :param concat_ros: rollouts to compute predicted values and advantages from if they are not provided :return: exponentially weighted returns based on the value function estimator
Pyrado/pyrado/algorithms/step_based/gae.py
tdlamda_returns
swami1995/SimuRLacra
52
python
def tdlamda_returns(self, v_pred: to.Tensor=None, adv: to.Tensor=None, concat_ros: StepSequence=None) -> to.Tensor: '\n Compute the TD($\\lambda$) returns based on the predictions of the network (introduces a bias).\n\n :param v_pred: state-value predictions if already computed, pass `None` to compute form given rollouts\n :param adv: advantages if already computed, pass `None` to compute form given rollouts\n :param concat_ros: rollouts to compute predicted values and advantages from if they are not provided\n :return: exponentially weighted returns based on the value function estimator\n ' with to.no_grad(): if (v_pred is None): if (concat_ros is None): raise pyrado.TypeErr(given=concat_ros, expected_type=StepSequence) v_pred = self.values(concat_ros) if (adv is None): if (concat_ros is None): raise pyrado.TypeErr(given=concat_ros, expected_type=StepSequence) adv = self.gae(concat_ros, v_pred) return (v_pred + adv)
def tdlamda_returns(self, v_pred: to.Tensor=None, adv: to.Tensor=None, concat_ros: StepSequence=None) -> to.Tensor: '\n Compute the TD($\\lambda$) returns based on the predictions of the network (introduces a bias).\n\n :param v_pred: state-value predictions if already computed, pass `None` to compute form given rollouts\n :param adv: advantages if already computed, pass `None` to compute form given rollouts\n :param concat_ros: rollouts to compute predicted values and advantages from if they are not provided\n :return: exponentially weighted returns based on the value function estimator\n ' with to.no_grad(): if (v_pred is None): if (concat_ros is None): raise pyrado.TypeErr(given=concat_ros, expected_type=StepSequence) v_pred = self.values(concat_ros) if (adv is None): if (concat_ros is None): raise pyrado.TypeErr(given=concat_ros, expected_type=StepSequence) adv = self.gae(concat_ros, v_pred) return (v_pred + adv)<|docstring|>Compute the TD($\lambda$) returns based on the predictions of the network (introduces a bias). :param v_pred: state-value predictions if already computed, pass `None` to compute form given rollouts :param adv: advantages if already computed, pass `None` to compute form given rollouts :param concat_ros: rollouts to compute predicted values and advantages from if they are not provided :return: exponentially weighted returns based on the value function estimator<|endoftext|>
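The `tdlamda_returns` record above boils down to adding the GAE advantages back onto the value predictions. A sketch on plain lists (hypothetical helper, not the pyrado API):

```python
def tdlamda_returns(values, advantages):
    # TD(lambda) targets are the value predictions shifted by the
    # GAE advantages: G_k = V(s_k) + adv_k
    return [v + a for v, a in zip(values, advantages)]
```

For `gamma = lam = 1` these targets coincide with the undiscounted returns-to-go, since the advantages then equal return-to-go minus value.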
a355c052f02733f086a7362abb0135e44d775e2faea935936856dee28859a21f
def values(self, concat_ros: StepSequence) -> to.Tensor: "\n Compute the states' values for all observations.\n\n :param concat_ros: concatenated rollouts\n :return: states' values\n " if isinstance(self._vfcn, Policy): v_pred = self._vfcn.evaluate(concat_ros, hidden_states_name='vf_hidden_states') else: v_pred = self._vfcn(concat_ros.observations) return v_pred
Compute the states' values for all observations. :param concat_ros: concatenated rollouts :return: states' values
Pyrado/pyrado/algorithms/step_based/gae.py
values
swami1995/SimuRLacra
52
python
def values(self, concat_ros: StepSequence) -> to.Tensor: "\n Compute the states' values for all observations.\n\n :param concat_ros: concatenated rollouts\n :return: states' values\n " if isinstance(self._vfcn, Policy): v_pred = self._vfcn.evaluate(concat_ros, hidden_states_name='vf_hidden_states') else: v_pred = self._vfcn(concat_ros.observations) return v_pred
def values(self, concat_ros: StepSequence) -> to.Tensor: "\n Compute the states' values for all observations.\n\n :param concat_ros: concatenated rollouts\n :return: states' values\n " if isinstance(self._vfcn, Policy): v_pred = self._vfcn.evaluate(concat_ros, hidden_states_name='vf_hidden_states') else: v_pred = self._vfcn(concat_ros.observations) return v_pred<|docstring|>Compute the states' values for all observations. :param concat_ros: concatenated rollouts :return: states' values<|endoftext|>
64763728eb768f56059e19bc9032f2e3b6de439407af85614e9786e7d5ea63ec
def update(self, rollouts: Sequence[StepSequence], use_empirical_returns: bool=False): '\n Adapt the parameters of the advantage function estimator, minimizing the MSE loss for the given samples.\n\n :param rollouts: batch of rollouts\n :param use_empirical_returns: use the return from the rollout (True) or the ones from the V-fcn (False)\n :return adv: tensor of advantages after V-function updates\n ' concat_ros = StepSequence.concat(rollouts) concat_ros.torch(data_type=to.get_default_dtype()) if use_empirical_returns: v_targ = discounted_values(rollouts, self.gamma).view((- 1), 1) else: v_targ = self.tdlamda_returns(concat_ros=concat_ros) concat_ros.add_data('v_targ', v_targ) with to.no_grad(): v_pred_old = self.values(concat_ros) loss_old = self.loss_fcn(v_pred_old, v_targ) vfcn_grad_norm = [] for e in range(self.num_epoch): for batch in tqdm(concat_ros.split_shuffled_batches(self.batch_size, complete_rollouts=isinstance(self.vfcn, RecurrentPolicy)), total=num_iter_from_rollouts(None, concat_ros, self.batch_size), desc=f'Epoch {e}', unit='batches', file=sys.stdout, leave=False): self.optim.zero_grad() v_pred = self.values(batch) vfcn_loss = self.loss_fcn(v_pred, batch.v_targ) vfcn_loss.backward() vfcn_grad_norm.append(Algorithm.clip_grad(self.vfcn, self.max_grad_norm)) self.optim.step() if (self._lr_scheduler is not None): self._lr_scheduler.step() adv = self.gae(concat_ros) with to.no_grad(): v_pred_new = self.values(concat_ros) loss_new = self.loss_fcn(v_pred_new, v_targ) vfcn_loss_impr = (loss_old - loss_new) explvar = explained_var(v_pred_new, v_targ) self.logger.add_value('explained var critic', explvar, 4) self.logger.add_value('loss improv critic', vfcn_loss_impr, 4) self.logger.add_value('avg grad norm critic', np.mean(vfcn_grad_norm), 4) if (self._lr_scheduler is not None): self.logger.add_value('lr critic', np.mean(self._lr_scheduler.get_last_lr()), 6) return adv
Adapt the parameters of the advantage function estimator, minimizing the MSE loss for the given samples. :param rollouts: batch of rollouts :param use_empirical_returns: use the return from the rollout (True) or the ones from the V-fcn (False) :return adv: tensor of advantages after V-function updates
Pyrado/pyrado/algorithms/step_based/gae.py
update
swami1995/SimuRLacra
52
python
def update(self, rollouts: Sequence[StepSequence], use_empirical_returns: bool=False): '\n Adapt the parameters of the advantage function estimator, minimizing the MSE loss for the given samples.\n\n :param rollouts: batch of rollouts\n :param use_empirical_returns: use the return from the rollout (True) or the ones from the V-fcn (False)\n :return adv: tensor of advantages after V-function updates\n ' concat_ros = StepSequence.concat(rollouts) concat_ros.torch(data_type=to.get_default_dtype()) if use_empirical_returns: v_targ = discounted_values(rollouts, self.gamma).view((- 1), 1) else: v_targ = self.tdlamda_returns(concat_ros=concat_ros) concat_ros.add_data('v_targ', v_targ) with to.no_grad(): v_pred_old = self.values(concat_ros) loss_old = self.loss_fcn(v_pred_old, v_targ) vfcn_grad_norm = [] for e in range(self.num_epoch): for batch in tqdm(concat_ros.split_shuffled_batches(self.batch_size, complete_rollouts=isinstance(self.vfcn, RecurrentPolicy)), total=num_iter_from_rollouts(None, concat_ros, self.batch_size), desc=f'Epoch {e}', unit='batches', file=sys.stdout, leave=False): self.optim.zero_grad() v_pred = self.values(batch) vfcn_loss = self.loss_fcn(v_pred, batch.v_targ) vfcn_loss.backward() vfcn_grad_norm.append(Algorithm.clip_grad(self.vfcn, self.max_grad_norm)) self.optim.step() if (self._lr_scheduler is not None): self._lr_scheduler.step() adv = self.gae(concat_ros) with to.no_grad(): v_pred_new = self.values(concat_ros) loss_new = self.loss_fcn(v_pred_new, v_targ) vfcn_loss_impr = (loss_old - loss_new) explvar = explained_var(v_pred_new, v_targ) self.logger.add_value('explained var critic', explvar, 4) self.logger.add_value('loss improv critic', vfcn_loss_impr, 4) self.logger.add_value('avg grad norm critic', np.mean(vfcn_grad_norm), 4) if (self._lr_scheduler is not None): self.logger.add_value('lr critic', np.mean(self._lr_scheduler.get_last_lr()), 6) return adv
def update(self, rollouts: Sequence[StepSequence], use_empirical_returns: bool=False): '\n Adapt the parameters of the advantage function estimator, minimizing the MSE loss for the given samples.\n\n :param rollouts: batch of rollouts\n :param use_empirical_returns: use the return from the rollout (True) or the ones from the V-fcn (False)\n :return adv: tensor of advantages after V-function updates\n ' concat_ros = StepSequence.concat(rollouts) concat_ros.torch(data_type=to.get_default_dtype()) if use_empirical_returns: v_targ = discounted_values(rollouts, self.gamma).view((- 1), 1) else: v_targ = self.tdlamda_returns(concat_ros=concat_ros) concat_ros.add_data('v_targ', v_targ) with to.no_grad(): v_pred_old = self.values(concat_ros) loss_old = self.loss_fcn(v_pred_old, v_targ) vfcn_grad_norm = [] for e in range(self.num_epoch): for batch in tqdm(concat_ros.split_shuffled_batches(self.batch_size, complete_rollouts=isinstance(self.vfcn, RecurrentPolicy)), total=num_iter_from_rollouts(None, concat_ros, self.batch_size), desc=f'Epoch {e}', unit='batches', file=sys.stdout, leave=False): self.optim.zero_grad() v_pred = self.values(batch) vfcn_loss = self.loss_fcn(v_pred, batch.v_targ) vfcn_loss.backward() vfcn_grad_norm.append(Algorithm.clip_grad(self.vfcn, self.max_grad_norm)) self.optim.step() if (self._lr_scheduler is not None): self._lr_scheduler.step() adv = self.gae(concat_ros) with to.no_grad(): v_pred_new = self.values(concat_ros) loss_new = self.loss_fcn(v_pred_new, v_targ) vfcn_loss_impr = (loss_old - loss_new) explvar = explained_var(v_pred_new, v_targ) self.logger.add_value('explained var critic', explvar, 4) self.logger.add_value('loss improv critic', vfcn_loss_impr, 4) self.logger.add_value('avg grad norm critic', np.mean(vfcn_grad_norm), 4) if (self._lr_scheduler is not None): self.logger.add_value('lr critic', np.mean(self._lr_scheduler.get_last_lr()), 6) return adv<|docstring|>Adapt the parameters of the advantage function estimator, minimizing the MSE loss for the given samples. :param rollouts: batch of rollouts :param use_empirical_returns: use the return from the rollout (True) or the ones from the V-fcn (False) :return adv: tensor of advantages after V-function updates<|endoftext|>
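The logging step in `update` above reports the explained variance of the critic. pyrado's `explained_var` implementation is not shown in this excerpt; the sketch below follows the standard definition 1 - Var[residual] / Var[target], which it presumably matches:

```python
def explained_var(y_pred, y_targ):
    """Explained variance of predictions: 1.0 is perfect,
    0.0 is no better than predicting the target mean."""
    n = len(y_targ)
    def var(xs):
        m = sum(xs) / n
        return sum((x - m) ** 2 for x in xs) / n
    resid = [t - p for p, t in zip(y_pred, y_targ)]
    vt = var(y_targ)
    return 1.0 - var(resid) / vt if vt > 0 else float("nan")
```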
1f04d745225bd1e7a6c04eb2e30ed8f970c3c3061abddc8f0b76a3a064e851eb
def reset(self): "\n Reset the advantage estimator to its initial state.\n The default implementation resets the learning rate scheduler if there is one.\n " if (self._lr_scheduler is not None): self._lr_scheduler.last_epoch = (- 1)
Reset the advantage estimator to its initial state. The default implementation resets the learning rate scheduler if there is one.
Pyrado/pyrado/algorithms/step_based/gae.py
reset
swami1995/SimuRLacra
52
python
def reset(self): "\n Reset the advantage estimator to its initial state.\n The default implementation resets the learning rate scheduler if there is one.\n " if (self._lr_scheduler is not None): self._lr_scheduler.last_epoch = (- 1)
def reset(self): "\n Reset the advantage estimator to its initial state.\n The default implementation resets the learning rate scheduler if there is one.\n " if (self._lr_scheduler is not None): self._lr_scheduler.last_epoch = (- 1)<|docstring|>Reset the advantage estimator to its initial state. The default implementation resets the learning rate scheduler if there is one.<|endoftext|>
f0b89038dcb23741ccd063e046a76b264bca5ba1d6baaeedf4048306f397252f
def gelu(x): '\n GELU activation\n https://arxiv.org/abs/1606.08415\n https://github.com/huggingface/pytorch-openai-transformer-lm/blob/master/model_pytorch.py#L14\n https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/modeling.py\n ' return ((0.5 * x) * (1.0 + torch.erf((x / math.sqrt(2.0)))))
GELU activation https://arxiv.org/abs/1606.08415 https://github.com/huggingface/pytorch-openai-transformer-lm/blob/master/model_pytorch.py#L14 https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/modeling.py
codegen_sources/model/src/model/transformer.py
gelu
Syamgith/CodeGen
241
python
def gelu(x): '\n GELU activation\n https://arxiv.org/abs/1606.08415\n https://github.com/huggingface/pytorch-openai-transformer-lm/blob/master/model_pytorch.py#L14\n https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/modeling.py\n ' return ((0.5 * x) * (1.0 + torch.erf((x / math.sqrt(2.0)))))
def gelu(x): '\n GELU activation\n https://arxiv.org/abs/1606.08415\n https://github.com/huggingface/pytorch-openai-transformer-lm/blob/master/model_pytorch.py#L14\n https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/modeling.py\n ' return ((0.5 * x) * (1.0 + torch.erf((x / math.sqrt(2.0)))))<|docstring|>GELU activation https://arxiv.org/abs/1606.08415 https://github.com/huggingface/pytorch-openai-transformer-lm/blob/master/model_pytorch.py#L14 https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/modeling.py<|endoftext|>
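The erf-based GELU in the record above can be checked against the tanh approximation used in some of the linked implementations. Both helpers below are standalone sketches using only `math`:

```python
import math

def gelu(x):
    # Exact (erf-based) GELU, matching the snippet above
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # Common tanh approximation found elsewhere in the literature
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))
```

For large positive inputs GELU approaches the identity, for large negative inputs it approaches zero, and the two variants agree to a few decimal places on moderate inputs.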
da92ee51efff049a6145d5cf48b33e96524a4ad47b4d7b78ac0e6eae46c58bc4
def get_masks(slen, lengths, causal): '\n Generate hidden states mask, and optionally an attention mask.\n ' assert (lengths.max().item() <= slen) bs = lengths.size(0) alen = torch.arange(slen, dtype=torch.long, device=lengths.device) mask = (alen < lengths[(:, None)]) if causal: attn_mask = (alen[(None, None, :)].repeat(bs, slen, 1) <= alen[(None, :, None)]) else: attn_mask = mask assert (mask.size() == (bs, slen)) assert ((causal is False) or (attn_mask.size() == (bs, slen, slen))) return (mask, attn_mask)
Generate hidden states mask, and optionally an attention mask.
codegen_sources/model/src/model/transformer.py
get_masks
Syamgith/CodeGen
241
python
def get_masks(slen, lengths, causal): '\n \n ' assert (lengths.max().item() <= slen) bs = lengths.size(0) alen = torch.arange(slen, dtype=torch.long, device=lengths.device) mask = (alen < lengths[(:, None)]) if causal: attn_mask = (alen[(None, None, :)].repeat(bs, slen, 1) <= alen[(None, :, None)]) else: attn_mask = mask assert (mask.size() == (bs, slen)) assert ((causal is False) or (attn_mask.size() == (bs, slen, slen))) return (mask, attn_mask)
def get_masks(slen, lengths, causal): '\n \n ' assert (lengths.max().item() <= slen) bs = lengths.size(0) alen = torch.arange(slen, dtype=torch.long, device=lengths.device) mask = (alen < lengths[(:, None)]) if causal: attn_mask = (alen[(None, None, :)].repeat(bs, slen, 1) <= alen[(None, :, None)]) else: attn_mask = mask assert (mask.size() == (bs, slen)) assert ((causal is False) or (attn_mask.size() == (bs, slen, slen))) return (mask, attn_mask)<|docstring|>Generate hidden states mask, and optionally an attention mask.<|endoftext|>
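The masking logic in `get_masks` above can be mirrored on nested Python lists with no torch dependency; the helper below is illustrative only, with element `[b][i][j]` of the causal mask meaning "position i may attend to position j", exactly as in the `alen` comparison of the original:

```python
def get_masks(slen, lengths, causal):
    """Padding mask of shape (bs, slen) and, if causal, a lower-triangular
    attention mask of shape (bs, slen, slen); otherwise the padding mask
    doubles as the attention mask (as in the torch version above)."""
    mask = [[j < length for j in range(slen)] for length in lengths]
    if causal:
        attn_mask = [[[j <= i for j in range(slen)]
                      for i in range(slen)] for _ in lengths]
    else:
        attn_mask = mask
    return mask, attn_mask
```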
991b3de82dcd918f1fba5c8ad1d07371c3b0a4a84d4331a6161a236838d4cd1a
def create_position_ids_from_input_ids(input_ids, padding_idx): "\n Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols\n are ignored. This is modified from fairseq's `utils.make_positions`.\n Args:\n x: torch.Tensor x:\n Returns: torch.Tensor\n " mask = input_ids.ne(padding_idx).int() incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) * mask) return (incremental_indices.long() + padding_idx)
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols are ignored. This is modified from fairseq's `utils.make_positions`. Args: input_ids: torch.Tensor padding_idx: int Returns: torch.Tensor
codegen_sources/model/src/model/transformer.py
create_position_ids_from_input_ids
Syamgith/CodeGen
241
python
def create_position_ids_from_input_ids(input_ids, padding_idx): "\n Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols\n are ignored. This is modified from fairseq's `utils.make_positions`.\n Args:\n x: torch.Tensor x:\n Returns: torch.Tensor\n " mask = input_ids.ne(padding_idx).int() incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) * mask) return (incremental_indices.long() + padding_idx)
def create_position_ids_from_input_ids(input_ids, padding_idx): "\n Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols\n are ignored. This is modified from fairseq's `utils.make_positions`.\n Args:\n x: torch.Tensor x:\n Returns: torch.Tensor\n " mask = input_ids.ne(padding_idx).int() incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) * mask) return (incremental_indices.long() + padding_idx)<|docstring|>Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols are ignored. This is modified from fairseq's `utils.make_positions`. Args: x: torch.Tensor x: Returns: torch.Tensor<|endoftext|>
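The cumsum-over-mask trick in the record above can be unrolled in plain Python. The helper name is hypothetical; the semantics match the torch version (non-pad tokens get positions `padding_idx+1, padding_idx+2, ...`, pad tokens keep `padding_idx`):

```python
def create_position_ids(input_ids, padding_idx):
    """Unrolled version of (cumsum(mask) * mask) + padding_idx
    on nested lists of token ids."""
    out = []
    for row in input_ids:
        pos, cum = [], 0
        for tok in row:
            if tok != padding_idx:
                cum += 1
                pos.append(cum + padding_idx)
            else:
                pos.append(padding_idx)
        out.append(pos)
    return out
```

Note that a pad token in the middle of a row keeps `padding_idx` but does not advance the counter, which is exactly what multiplying the cumulative sum by the mask achieves.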
7eec26c174f2d42032f728da41aba4a8ea5ee03a73ae6f300f1cbc062daf64b0
def forward(self, x, y, get_scores=False): '\n Compute the loss, and optionally the scores.\n ' assert ((y == self.pad_index).sum().item() == 0) scores = self.proj(x).view((- 1), self.n_words) loss = F.cross_entropy(scores.float(), y, reduction='mean').type_as(scores) return (scores, loss)
Compute the loss, and optionally the scores.
codegen_sources/model/src/model/transformer.py
forward
Syamgith/CodeGen
241
python
def forward(self, x, y, get_scores=False): '\n \n ' assert ((y == self.pad_index).sum().item() == 0) scores = self.proj(x).view((- 1), self.n_words) loss = F.cross_entropy(scores.float(), y, reduction='mean').type_as(scores) return (scores, loss)
def forward(self, x, y, get_scores=False): '\n \n ' assert ((y == self.pad_index).sum().item() == 0) scores = self.proj(x).view((- 1), self.n_words) loss = F.cross_entropy(scores.float(), y, reduction='mean').type_as(scores) return (scores, loss)<|docstring|>Compute the loss, and optionally the scores.<|endoftext|>
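`F.cross_entropy` on the raw projection scores, as in the prediction-layer record above, is the mean negative log-likelihood computed via a log-sum-exp. A plain-Python sketch of that loss (illustrative helper, lists of logit rows and integer targets):

```python
import math

def cross_entropy(logits, target):
    """Mean NLL: for each row, logsumexp(row) - row[target],
    with the usual max-shift for numerical stability."""
    total = 0.0
    for row, t in zip(logits, target):
        m = max(row)
        logsumexp = m + math.log(sum(math.exp(s - m) for s in row))
        total += logsumexp - row[t]
    return total / len(target)
```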
9b71d28142f99f26590432a9598f06b38a0f206e8bb1277e91b40fec1cbc2f2f
def get_scores(self, x): '\n Compute scores.\n ' assert (x.dim() == 2) return self.proj(x)
Compute scores.
codegen_sources/model/src/model/transformer.py
get_scores
Syamgith/CodeGen
241
python
def get_scores(self, x): '\n \n ' assert (x.dim() == 2) return self.proj(x)
def get_scores(self, x): '\n \n ' assert (x.dim() == 2) return self.proj(x)<|docstring|>Compute scores.<|endoftext|>
463923fcf2dc575010a03c5214c697ba19496c623fc0c21473923f17fed45b67
def forward(self, x, y, pred_mask, get_scores=False): '\n Compute the loss, and optionally the scores.\n x : len x bs x emb_dim\n y : len x bs\n ' x = x[pred_mask.unsqueeze((- 1)).expand_as(x)].view((- 1), self.emb_dim) scores = self.proj(x).view((- 1), self.n_classes) assert (sum(sum(pred_mask.int())).item() == scores.shape[0]) if (y is None): return scores y = y[pred_mask].view((- 1)) loss = F.cross_entropy(scores.float(), y, reduction='mean').type_as(scores) return (scores, loss)
Compute the loss, and optionally the scores. x : len x bs x emb_dim y : len x bs
codegen_sources/model/src/model/transformer.py
forward
Syamgith/CodeGen
241
python
def forward(self, x, y, pred_mask, get_scores=False): '\n Compute the loss, and optionally the scores.\n x : len x bs x emb_dim\n y : len x bs\n ' x = x[pred_mask.unsqueeze((- 1)).expand_as(x)].view((- 1), self.emb_dim) scores = self.proj(x).view((- 1), self.n_classes) assert (sum(sum(pred_mask.int())).item() == scores.shape[0]) if (y is None): return scores y = y[pred_mask].view((- 1)) loss = F.cross_entropy(scores.float(), y, reduction='mean').type_as(scores) return (scores, loss)
def forward(self, x, y, pred_mask, get_scores=False): '\n Compute the loss, and optionally the scores.\n x : len x bs x emb_dim\n y : len x bs\n ' x = x[pred_mask.unsqueeze((- 1)).expand_as(x)].view((- 1), self.emb_dim) scores = self.proj(x).view((- 1), self.n_classes) assert (sum(sum(pred_mask.int())).item() == scores.shape[0]) if (y is None): return scores y = y[pred_mask].view((- 1)) loss = F.cross_entropy(scores.float(), y, reduction='mean').type_as(scores) return (scores, loss)<|docstring|>Compute the loss, and optionally the scores. x : len x bs x emb_dim y : len x bs<|endoftext|>
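The boolean `pred_mask` gather in the classification head above keeps only the positions that should be scored before the projection is applied. Treating the input as a flat sequence of embedding vectors (the original is `len x bs x emb_dim`; the helper below is a simplified, hypothetical 1-D sketch), the selection is just:

```python
def select_predicted(x, pred_mask):
    """Keep only the rows of x whose position is flagged True in
    pred_mask, mimicking x[pred_mask...].view(-1, emb_dim) above."""
    return [row for row, keep in zip(x, pred_mask) if keep]
```

The assertion in the record (number of True mask entries equals the number of score rows) holds by construction here.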
28b5cce6cde80451af35e32a4a28dfc6a6594b6a91f37bea01b78095d7214013
def forward(self, input, mask, kv=None, use_cache=False): '\n Self-attention (if kv is None) or attention over source sentence (provided by kv).\n ' assert (not (use_cache and (self.cache is None))) (bs, qlen, dim) = input.size() if (kv is None): klen = (qlen if (not use_cache) else (self.cache['slen'] + qlen)) else: klen = kv.size(1) assert (dim == self.dim), ('Dimensions do not match: %s input vs %s configured' % (dim, self.dim)) n_heads = self.n_heads dim_per_head = (dim // n_heads) mask_reshape = ((bs, 1, qlen, klen) if (mask.dim() == 3) else (bs, 1, 1, klen)) def shape(x): ' projection ' return x.view(bs, (- 1), self.n_heads, dim_per_head).transpose(1, 2) def unshape(x): ' compute context ' return x.transpose(1, 2).contiguous().view(bs, (- 1), (self.n_heads * dim_per_head)) q = shape(self.q_lin(input)) if (kv is None): k = shape(self.k_lin(input)) v = shape(self.v_lin(input)) elif ((not use_cache) or (self.layer_id not in self.cache)): k = v = kv k = shape(self.k_lin(k)) v = shape(self.v_lin(v)) if use_cache: if (self.layer_id in self.cache): if (kv is None): (k_, v_) = self.cache[self.layer_id] k = torch.cat([k_, k], dim=2) v = torch.cat([v_, v], dim=2) else: (k, v) = self.cache[self.layer_id] self.cache[self.layer_id] = (k, v) q = (q / math.sqrt(dim_per_head)) scores = torch.matmul(q, k.transpose(2, 3)) mask = (mask == 0).view(mask_reshape).expand_as(scores) scores.masked_fill_(mask, (- float('inf'))) weights = F.softmax(scores.float(), dim=(- 1)).type_as(scores) weights = F.dropout(weights, p=self.dropout, training=self.training) context = torch.matmul(weights, v) context = unshape(context) return self.out_lin(context)
Self-attention (if kv is None) or attention over source sentence (provided by kv).
codegen_sources/model/src/model/transformer.py
forward
Syamgith/CodeGen
241
python
def forward(self, input, mask, kv=None, use_cache=False): '\n \n ' assert (not (use_cache and (self.cache is None))) (bs, qlen, dim) = input.size() if (kv is None): klen = (qlen if (not use_cache) else (self.cache['slen'] + qlen)) else: klen = kv.size(1) assert (dim == self.dim), ('Dimensions do not match: %s input vs %s configured' % (dim, self.dim)) n_heads = self.n_heads dim_per_head = (dim // n_heads) mask_reshape = ((bs, 1, qlen, klen) if (mask.dim() == 3) else (bs, 1, 1, klen)) def shape(x): ' projection ' return x.view(bs, (- 1), self.n_heads, dim_per_head).transpose(1, 2) def unshape(x): ' compute context ' return x.transpose(1, 2).contiguous().view(bs, (- 1), (self.n_heads * dim_per_head)) q = shape(self.q_lin(input)) if (kv is None): k = shape(self.k_lin(input)) v = shape(self.v_lin(input)) elif ((not use_cache) or (self.layer_id not in self.cache)): k = v = kv k = shape(self.k_lin(k)) v = shape(self.v_lin(v)) if use_cache: if (self.layer_id in self.cache): if (kv is None): (k_, v_) = self.cache[self.layer_id] k = torch.cat([k_, k], dim=2) v = torch.cat([v_, v], dim=2) else: (k, v) = self.cache[self.layer_id] self.cache[self.layer_id] = (k, v) q = (q / math.sqrt(dim_per_head)) scores = torch.matmul(q, k.transpose(2, 3)) mask = (mask == 0).view(mask_reshape).expand_as(scores) scores.masked_fill_(mask, (- float('inf'))) weights = F.softmax(scores.float(), dim=(- 1)).type_as(scores) weights = F.dropout(weights, p=self.dropout, training=self.training) context = torch.matmul(weights, v) context = unshape(context) return self.out_lin(context)
def forward(self, input, mask, kv=None, use_cache=False): '\n \n ' assert (not (use_cache and (self.cache is None))) (bs, qlen, dim) = input.size() if (kv is None): klen = (qlen if (not use_cache) else (self.cache['slen'] + qlen)) else: klen = kv.size(1) assert (dim == self.dim), ('Dimensions do not match: %s input vs %s configured' % (dim, self.dim)) n_heads = self.n_heads dim_per_head = (dim // n_heads) mask_reshape = ((bs, 1, qlen, klen) if (mask.dim() == 3) else (bs, 1, 1, klen)) def shape(x): ' projection ' return x.view(bs, (- 1), self.n_heads, dim_per_head).transpose(1, 2) def unshape(x): ' compute context ' return x.transpose(1, 2).contiguous().view(bs, (- 1), (self.n_heads * dim_per_head)) q = shape(self.q_lin(input)) if (kv is None): k = shape(self.k_lin(input)) v = shape(self.v_lin(input)) elif ((not use_cache) or (self.layer_id not in self.cache)): k = v = kv k = shape(self.k_lin(k)) v = shape(self.v_lin(v)) if use_cache: if (self.layer_id in self.cache): if (kv is None): (k_, v_) = self.cache[self.layer_id] k = torch.cat([k_, k], dim=2) v = torch.cat([v_, v], dim=2) else: (k, v) = self.cache[self.layer_id] self.cache[self.layer_id] = (k, v) q = (q / math.sqrt(dim_per_head)) scores = torch.matmul(q, k.transpose(2, 3)) mask = (mask == 0).view(mask_reshape).expand_as(scores) scores.masked_fill_(mask, (- float('inf'))) weights = F.softmax(scores.float(), dim=(- 1)).type_as(scores) weights = F.dropout(weights, p=self.dropout, training=self.training) context = torch.matmul(weights, v) context = unshape(context) return self.out_lin(context)<|docstring|>Self-attention (if kv is None) or attention over source sentence (provided by kv).<|endoftext|>
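The attention body above follows the standard scaled dot-product recipe: scale `q` by `1/sqrt(dim_per_head)`, fill masked positions with `-inf` before the softmax, then weight `v`. A minimal standalone sketch of that core step (shapes and names are illustrative, not the repo's API):

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask):
    # q, k, v: (bs, n_heads, seq_len, dim_per_head); mask: (bs, 1, 1, seq_len), 1 = keep
    dim_per_head = q.size(-1)
    scores = torch.matmul(q / math.sqrt(dim_per_head), k.transpose(-2, -1))
    scores = scores.masked_fill(mask == 0, float("-inf"))  # hide padded key positions
    weights = F.softmax(scores, dim=-1)
    return torch.matmul(weights, v)

bs, h, slen, d = 2, 4, 5, 8
q = torch.randn(bs, h, slen, d)
k = torch.randn(bs, h, slen, d)
v = torch.randn(bs, h, slen, d)
mask = torch.ones(bs, 1, 1, slen)
mask[:, :, :, -1] = 0  # pretend the last key position is padding
out = scaled_dot_product_attention(q, k, v, mask)
print(out.shape)  # torch.Size([2, 4, 5, 8])
```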
98b3829e76523d4c8dfcdf439d1a53cb0fb65423affe1c03806426a9175db999
def __init__(self, params, dico, is_encoder, with_output): '\n Transformer model (encoder or decoder).\n ' super().__init__() self.is_encoder = is_encoder self.is_decoder = (not is_encoder) self.with_output = with_output self.use_span_embeddings = (params.spans_emb_encoder and self.is_encoder) self.n_langs = params.n_langs self.n_words = params.n_words self.eos_index = params.eos_index self.pad_index = params.pad_index self.dico = dico self.id2lang = params.id2lang self.lang2id = params.lang2id self.use_lang_emb = getattr(params, 'use_lang_emb', True) assert (len(self.dico) == self.n_words) assert (len(self.id2lang) == len(self.lang2id) == self.n_langs) self.dim = (params.emb_dim_encoder if is_encoder else params.emb_dim_decoder) self.hidden_dim = (self.dim * 4) self.n_heads = params.n_heads self.n_layers = (params.n_layers_encoder if is_encoder else params.n_layers_decoder) self.dropout = params.dropout self.attention_dropout = params.attention_dropout self.roberta_mode = getattr(params, 'roberta_mode', False) self.gelu_activation = params.gelu_activation assert (self.gelu_activation or (not self.roberta_mode)) if self.roberta_mode: self.position_embeddings = Embedding(N_MAX_POSITIONS, self.dim, self.pad_index) else: self.position_embeddings = Embedding(N_MAX_POSITIONS, self.dim) if params.sinusoidal_embeddings: create_sinusoidal_embeddings(N_MAX_POSITIONS, self.dim, out=self.position_embeddings.weight) if ((params.n_langs > 0) and self.use_lang_emb): self.lang_embeddings = Embedding(self.n_langs, self.dim) self.embeddings = Embedding(self.n_words, self.dim, padding_idx=self.pad_index) if self.use_span_embeddings: self.spans_embeddings = Embedding(params.n_classes_classif, self.dim, padding_idx=self.pad_index) self.layer_norm_emb = nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON) self.attentions = nn.ModuleList() self.layer_norm1 = nn.ModuleList() self.ffns = nn.ModuleList() self.layer_norm2 = nn.ModuleList() if self.is_decoder: self.layer_norm15 = nn.ModuleList() 
self.encoder_attn = nn.ModuleList() self.cache = None for layer_id in range(self.n_layers): self.attentions.append(MultiHeadAttention(self.n_heads, self.dim, dropout=self.attention_dropout)) self.layer_norm1.append(nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON)) if self.is_decoder: self.layer_norm15.append(nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON)) self.encoder_attn.append(MultiHeadAttention(self.n_heads, self.dim, dim_encoder=params.emb_dim_encoder, dropout=self.attention_dropout)) self.ffns.append(TransformerFFN(self.dim, self.hidden_dim, self.dim, dropout=self.dropout, gelu_activation=self.gelu_activation)) self.layer_norm2.append(nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON)) if self.with_output: self.pred_layer = PredLayer(params) if params.share_inout_emb: self.pred_layer.proj.weight = self.embeddings.weight
Transformer model (encoder or decoder).
codegen_sources/model/src/model/transformer.py
__init__
Syamgith/CodeGen
241
python
def __init__(self, params, dico, is_encoder, with_output): '\n \n ' super().__init__() self.is_encoder = is_encoder self.is_decoder = (not is_encoder) self.with_output = with_output self.use_span_embeddings = (params.spans_emb_encoder and self.is_encoder) self.n_langs = params.n_langs self.n_words = params.n_words self.eos_index = params.eos_index self.pad_index = params.pad_index self.dico = dico self.id2lang = params.id2lang self.lang2id = params.lang2id self.use_lang_emb = getattr(params, 'use_lang_emb', True) assert (len(self.dico) == self.n_words) assert (len(self.id2lang) == len(self.lang2id) == self.n_langs) self.dim = (params.emb_dim_encoder if is_encoder else params.emb_dim_decoder) self.hidden_dim = (self.dim * 4) self.n_heads = params.n_heads self.n_layers = (params.n_layers_encoder if is_encoder else params.n_layers_decoder) self.dropout = params.dropout self.attention_dropout = params.attention_dropout self.roberta_mode = getattr(params, 'roberta_mode', False) self.gelu_activation = params.gelu_activation assert (self.gelu_activation or (not self.roberta_mode)) if self.roberta_mode: self.position_embeddings = Embedding(N_MAX_POSITIONS, self.dim, self.pad_index) else: self.position_embeddings = Embedding(N_MAX_POSITIONS, self.dim) if params.sinusoidal_embeddings: create_sinusoidal_embeddings(N_MAX_POSITIONS, self.dim, out=self.position_embeddings.weight) if ((params.n_langs > 0) and self.use_lang_emb): self.lang_embeddings = Embedding(self.n_langs, self.dim) self.embeddings = Embedding(self.n_words, self.dim, padding_idx=self.pad_index) if self.use_span_embeddings: self.spans_embeddings = Embedding(params.n_classes_classif, self.dim, padding_idx=self.pad_index) self.layer_norm_emb = nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON) self.attentions = nn.ModuleList() self.layer_norm1 = nn.ModuleList() self.ffns = nn.ModuleList() self.layer_norm2 = nn.ModuleList() if self.is_decoder: self.layer_norm15 = nn.ModuleList() self.encoder_attn = nn.ModuleList() 
self.cache = None for layer_id in range(self.n_layers): self.attentions.append(MultiHeadAttention(self.n_heads, self.dim, dropout=self.attention_dropout)) self.layer_norm1.append(nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON)) if self.is_decoder: self.layer_norm15.append(nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON)) self.encoder_attn.append(MultiHeadAttention(self.n_heads, self.dim, dim_encoder=params.emb_dim_encoder, dropout=self.attention_dropout)) self.ffns.append(TransformerFFN(self.dim, self.hidden_dim, self.dim, dropout=self.dropout, gelu_activation=self.gelu_activation)) self.layer_norm2.append(nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON)) if self.with_output: self.pred_layer = PredLayer(params) if params.share_inout_emb: self.pred_layer.proj.weight = self.embeddings.weight
def __init__(self, params, dico, is_encoder, with_output): '\n \n ' super().__init__() self.is_encoder = is_encoder self.is_decoder = (not is_encoder) self.with_output = with_output self.use_span_embeddings = (params.spans_emb_encoder and self.is_encoder) self.n_langs = params.n_langs self.n_words = params.n_words self.eos_index = params.eos_index self.pad_index = params.pad_index self.dico = dico self.id2lang = params.id2lang self.lang2id = params.lang2id self.use_lang_emb = getattr(params, 'use_lang_emb', True) assert (len(self.dico) == self.n_words) assert (len(self.id2lang) == len(self.lang2id) == self.n_langs) self.dim = (params.emb_dim_encoder if is_encoder else params.emb_dim_decoder) self.hidden_dim = (self.dim * 4) self.n_heads = params.n_heads self.n_layers = (params.n_layers_encoder if is_encoder else params.n_layers_decoder) self.dropout = params.dropout self.attention_dropout = params.attention_dropout self.roberta_mode = getattr(params, 'roberta_mode', False) self.gelu_activation = params.gelu_activation assert (self.gelu_activation or (not self.roberta_mode)) if self.roberta_mode: self.position_embeddings = Embedding(N_MAX_POSITIONS, self.dim, self.pad_index) else: self.position_embeddings = Embedding(N_MAX_POSITIONS, self.dim) if params.sinusoidal_embeddings: create_sinusoidal_embeddings(N_MAX_POSITIONS, self.dim, out=self.position_embeddings.weight) if ((params.n_langs > 0) and self.use_lang_emb): self.lang_embeddings = Embedding(self.n_langs, self.dim) self.embeddings = Embedding(self.n_words, self.dim, padding_idx=self.pad_index) if self.use_span_embeddings: self.spans_embeddings = Embedding(params.n_classes_classif, self.dim, padding_idx=self.pad_index) self.layer_norm_emb = nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON) self.attentions = nn.ModuleList() self.layer_norm1 = nn.ModuleList() self.ffns = nn.ModuleList() self.layer_norm2 = nn.ModuleList() if self.is_decoder: self.layer_norm15 = nn.ModuleList() self.encoder_attn = nn.ModuleList() 
self.cache = None for layer_id in range(self.n_layers): self.attentions.append(MultiHeadAttention(self.n_heads, self.dim, dropout=self.attention_dropout)) self.layer_norm1.append(nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON)) if self.is_decoder: self.layer_norm15.append(nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON)) self.encoder_attn.append(MultiHeadAttention(self.n_heads, self.dim, dim_encoder=params.emb_dim_encoder, dropout=self.attention_dropout)) self.ffns.append(TransformerFFN(self.dim, self.hidden_dim, self.dim, dropout=self.dropout, gelu_activation=self.gelu_activation)) self.layer_norm2.append(nn.LayerNorm(self.dim, eps=LAYER_NORM_EPSILON)) if self.with_output: self.pred_layer = PredLayer(params) if params.share_inout_emb: self.pred_layer.proj.weight = self.embeddings.weight<|docstring|>Transformer model (encoder or decoder).<|endoftext|>
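The constructor's `share_inout_emb` branch ties the output projection to the input embedding matrix, so both directions share one parameter tensor. A small sketch of that tying trick with illustrative dimensions:

```python
import torch
import torch.nn as nn

n_words, dim = 100, 16
embeddings = nn.Embedding(n_words, dim)
proj = nn.Linear(dim, n_words, bias=True)  # output projection over the vocabulary
proj.weight = embeddings.weight  # tie parameters, as when share_inout_emb is set

tied = proj.weight.data_ptr() == embeddings.weight.data_ptr()
print(tied)  # True
```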
e31c226e6667c01c522b07e61305422c1d5bd17b0d89c4b730aeb9d039b826cd
def forward(self, mode, **kwargs): '\n Forward function with different forward modes.\n ### Small hack to handle PyTorch distributed.\n ' if (mode == 'fwd'): return self.fwd(**kwargs) elif (mode == 'predict'): return self.predict(**kwargs) else: raise Exception(('Unknown mode: %s' % mode))
Forward function with different forward modes. ### Small hack to handle PyTorch distributed.
codegen_sources/model/src/model/transformer.py
forward
Syamgith/CodeGen
241
python
def forward(self, mode, **kwargs): '\n Forward function with different forward modes.\n ### Small hack to handle PyTorch distributed.\n ' if (mode == 'fwd'): return self.fwd(**kwargs) elif (mode == 'predict'): return self.predict(**kwargs) else: raise Exception(('Unknown mode: %s' % mode))
def forward(self, mode, **kwargs): '\n Forward function with different forward modes.\n ### Small hack to handle PyTorch distributed.\n ' if (mode == 'fwd'): return self.fwd(**kwargs) elif (mode == 'predict'): return self.predict(**kwargs) else: raise Exception(('Unknown mode: %s' % mode))<|docstring|>Forward function with different forward modes. ### Small hack to handle PyTorch distributed.<|endoftext|>
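The mode dispatch above gives the module a single `forward` entry point, which is what PyTorch's distributed/data-parallel wrappers hook into. A toy sketch of the pattern (class and modes here are illustrative):

```python
class Toy:
    def forward(self, mode, **kwargs):
        # one forward() so parallel wrappers intercept every call path
        if mode == "fwd":
            return self.fwd(**kwargs)
        elif mode == "predict":
            return self.predict(**kwargs)
        else:
            raise Exception("Unknown mode: %s" % mode)

    def fwd(self, x):
        return x + 1

    def predict(self, x):
        return x * 2

t = Toy()
print(t.forward("fwd", x=1), t.forward("predict", x=3))  # 2 6
```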
f6b4dd0eb4ba7ce78d24f40df4b92b480e32a279eade888fc77717b1d0cfa3a9
def fwd(self, x, lengths, causal, src_enc=None, src_len=None, positions=None, langs=None, use_cache=False, spans=None): '\n Inputs:\n `x` LongTensor(slen, bs), containing word indices\n `lengths` LongTensor(bs), containing the length of each sentence\n `causal` Boolean, if True, the attention is only done over previous hidden states\n `positions` LongTensor(slen, bs), containing word positions\n `langs` LongTensor(slen, bs), containing language IDs\n `spans` LongTensor(slen, bs), containing the spans if use_spans is set to True\n ' assert (not (use_cache and (self.cache is None))) if self.use_span_embeddings: assert (spans is not None) (slen, bs) = x.size() assert (lengths.size(0) == bs) assert (lengths.max().item() <= slen) x = x.transpose(0, 1) assert ((src_enc is None) == (src_len is None)) if (src_enc is not None): assert self.is_decoder assert (src_enc.size(0) == bs) (mask, attn_mask) = get_masks(slen, lengths, causal) if (self.is_decoder and (src_enc is not None)): src_mask = (torch.arange(src_enc.shape[1], dtype=torch.long, device=lengths.device) < src_len[(:, None)]) if (positions is None): if self.roberta_mode: positions = create_position_ids_from_input_ids(x, self.pad_index) else: positions = x.new(slen).long() positions = torch.arange(slen, out=positions).unsqueeze(0) else: assert (positions.size() == (slen, bs)) positions = positions.transpose(0, 1) if (langs is not None): assert (langs.size() == (slen, bs)) langs = langs.transpose(0, 1) if use_cache: _slen = (slen - self.cache['slen']) x = x[(:, (- _slen):)] positions = positions[(:, (- _slen):)] if (langs is not None): langs = langs[(:, (- _slen):)] mask = mask[(:, (- _slen):)] attn_mask = attn_mask[(:, (- _slen):)] tensor = self.embeddings(x) if self.use_span_embeddings: tensor = (tensor + self.spans_embeddings(spans.T)) tensor = (tensor + self.position_embeddings(positions).expand_as(tensor)) if ((langs is not None) and self.use_lang_emb): tensor = (tensor + self.lang_embeddings(langs)) tensor = 
self.layer_norm_emb(tensor) tensor = F.dropout(tensor, p=self.dropout, training=self.training) tensor *= mask.unsqueeze((- 1)).to(tensor.dtype) for i in range(self.n_layers): self.attentions[i].cache = self.cache attn = self.attentions[i](tensor, attn_mask, use_cache=use_cache) attn = F.dropout(attn, p=self.dropout, training=self.training) tensor = (tensor + attn) tensor = self.layer_norm1[i](tensor) if (self.is_decoder and (src_enc is not None)): assert (src_enc.shape[1] == src_mask.shape[(- 1)]) self.encoder_attn[i].cache = self.cache attn = self.encoder_attn[i](tensor, src_mask, kv=src_enc, use_cache=use_cache) attn = F.dropout(attn, p=self.dropout, training=self.training) tensor = (tensor + attn) tensor = self.layer_norm15[i](tensor) tensor = (tensor + self.ffns[i](tensor)) tensor = self.layer_norm2[i](tensor) tensor *= mask.unsqueeze((- 1)).to(tensor.dtype) if use_cache: self.cache['slen'] += tensor.size(1) tensor = tensor.transpose(0, 1) return tensor
Inputs: `x` LongTensor(slen, bs), containing word indices `lengths` LongTensor(bs), containing the length of each sentence `causal` Boolean, if True, the attention is only done over previous hidden states `positions` LongTensor(slen, bs), containing word positions `langs` LongTensor(slen, bs), containing language IDs `spans` LongTensor(slen, bs), containing the spans if use_spans is set to True
codegen_sources/model/src/model/transformer.py
fwd
Syamgith/CodeGen
241
python
def fwd(self, x, lengths, causal, src_enc=None, src_len=None, positions=None, langs=None, use_cache=False, spans=None): '\n Inputs:\n `x` LongTensor(slen, bs), containing word indices\n `lengths` LongTensor(bs), containing the length of each sentence\n `causal` Boolean, if True, the attention is only done over previous hidden states\n `positions` LongTensor(slen, bs), containing word positions\n `langs` LongTensor(slen, bs), containing language IDs\n `spans` LongTensor(slen, bs), containing the spans if use_spans is set to True\n ' assert (not (use_cache and (self.cache is None))) if self.use_span_embeddings: assert (spans is not None) (slen, bs) = x.size() assert (lengths.size(0) == bs) assert (lengths.max().item() <= slen) x = x.transpose(0, 1) assert ((src_enc is None) == (src_len is None)) if (src_enc is not None): assert self.is_decoder assert (src_enc.size(0) == bs) (mask, attn_mask) = get_masks(slen, lengths, causal) if (self.is_decoder and (src_enc is not None)): src_mask = (torch.arange(src_enc.shape[1], dtype=torch.long, device=lengths.device) < src_len[(:, None)]) if (positions is None): if self.roberta_mode: positions = create_position_ids_from_input_ids(x, self.pad_index) else: positions = x.new(slen).long() positions = torch.arange(slen, out=positions).unsqueeze(0) else: assert (positions.size() == (slen, bs)) positions = positions.transpose(0, 1) if (langs is not None): assert (langs.size() == (slen, bs)) langs = langs.transpose(0, 1) if use_cache: _slen = (slen - self.cache['slen']) x = x[(:, (- _slen):)] positions = positions[(:, (- _slen):)] if (langs is not None): langs = langs[(:, (- _slen):)] mask = mask[(:, (- _slen):)] attn_mask = attn_mask[(:, (- _slen):)] tensor = self.embeddings(x) if self.use_span_embeddings: tensor = (tensor + self.spans_embeddings(spans.T)) tensor = (tensor + self.position_embeddings(positions).expand_as(tensor)) if ((langs is not None) and self.use_lang_emb): tensor = (tensor + self.lang_embeddings(langs)) tensor = 
self.layer_norm_emb(tensor) tensor = F.dropout(tensor, p=self.dropout, training=self.training) tensor *= mask.unsqueeze((- 1)).to(tensor.dtype) for i in range(self.n_layers): self.attentions[i].cache = self.cache attn = self.attentions[i](tensor, attn_mask, use_cache=use_cache) attn = F.dropout(attn, p=self.dropout, training=self.training) tensor = (tensor + attn) tensor = self.layer_norm1[i](tensor) if (self.is_decoder and (src_enc is not None)): assert (src_enc.shape[1] == src_mask.shape[(- 1)]) self.encoder_attn[i].cache = self.cache attn = self.encoder_attn[i](tensor, src_mask, kv=src_enc, use_cache=use_cache) attn = F.dropout(attn, p=self.dropout, training=self.training) tensor = (tensor + attn) tensor = self.layer_norm15[i](tensor) tensor = (tensor + self.ffns[i](tensor)) tensor = self.layer_norm2[i](tensor) tensor *= mask.unsqueeze((- 1)).to(tensor.dtype) if use_cache: self.cache['slen'] += tensor.size(1) tensor = tensor.transpose(0, 1) return tensor
def fwd(self, x, lengths, causal, src_enc=None, src_len=None, positions=None, langs=None, use_cache=False, spans=None): '\n Inputs:\n `x` LongTensor(slen, bs), containing word indices\n `lengths` LongTensor(bs), containing the length of each sentence\n `causal` Boolean, if True, the attention is only done over previous hidden states\n `positions` LongTensor(slen, bs), containing word positions\n `langs` LongTensor(slen, bs), containing language IDs\n `spans` LongTensor(slen, bs), containing the spans if use_spans is set to True\n ' assert (not (use_cache and (self.cache is None))) if self.use_span_embeddings: assert (spans is not None) (slen, bs) = x.size() assert (lengths.size(0) == bs) assert (lengths.max().item() <= slen) x = x.transpose(0, 1) assert ((src_enc is None) == (src_len is None)) if (src_enc is not None): assert self.is_decoder assert (src_enc.size(0) == bs) (mask, attn_mask) = get_masks(slen, lengths, causal) if (self.is_decoder and (src_enc is not None)): src_mask = (torch.arange(src_enc.shape[1], dtype=torch.long, device=lengths.device) < src_len[(:, None)]) if (positions is None): if self.roberta_mode: positions = create_position_ids_from_input_ids(x, self.pad_index) else: positions = x.new(slen).long() positions = torch.arange(slen, out=positions).unsqueeze(0) else: assert (positions.size() == (slen, bs)) positions = positions.transpose(0, 1) if (langs is not None): assert (langs.size() == (slen, bs)) langs = langs.transpose(0, 1) if use_cache: _slen = (slen - self.cache['slen']) x = x[(:, (- _slen):)] positions = positions[(:, (- _slen):)] if (langs is not None): langs = langs[(:, (- _slen):)] mask = mask[(:, (- _slen):)] attn_mask = attn_mask[(:, (- _slen):)] tensor = self.embeddings(x) if self.use_span_embeddings: tensor = (tensor + self.spans_embeddings(spans.T)) tensor = (tensor + self.position_embeddings(positions).expand_as(tensor)) if ((langs is not None) and self.use_lang_emb): tensor = (tensor + self.lang_embeddings(langs)) tensor = 
self.layer_norm_emb(tensor) tensor = F.dropout(tensor, p=self.dropout, training=self.training) tensor *= mask.unsqueeze((- 1)).to(tensor.dtype) for i in range(self.n_layers): self.attentions[i].cache = self.cache attn = self.attentions[i](tensor, attn_mask, use_cache=use_cache) attn = F.dropout(attn, p=self.dropout, training=self.training) tensor = (tensor + attn) tensor = self.layer_norm1[i](tensor) if (self.is_decoder and (src_enc is not None)): assert (src_enc.shape[1] == src_mask.shape[(- 1)]) self.encoder_attn[i].cache = self.cache attn = self.encoder_attn[i](tensor, src_mask, kv=src_enc, use_cache=use_cache) attn = F.dropout(attn, p=self.dropout, training=self.training) tensor = (tensor + attn) tensor = self.layer_norm15[i](tensor) tensor = (tensor + self.ffns[i](tensor)) tensor = self.layer_norm2[i](tensor) tensor *= mask.unsqueeze((- 1)).to(tensor.dtype) if use_cache: self.cache['slen'] += tensor.size(1) tensor = tensor.transpose(0, 1) return tensor<|docstring|>Inputs: `x` LongTensor(slen, bs), containing word indices `lengths` LongTensor(bs), containing the length of each sentence `causal` Boolean, if True, the attention is only done over previous hidden states `positions` LongTensor(slen, bs), containing word positions `langs` LongTensor(slen, bs), containing language IDs `spans` LongTensor(slen, bs), containing the spans if use_spans is set to True<|endoftext|>
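The `fwd` body relies on a `get_masks(slen, lengths, causal)` helper that builds a padding mask and, for decoders, a causal attention mask. A self-contained sketch in the spirit of that XLM-style helper (the real one lives elsewhere in the repo; this version is an assumption about its behavior):

```python
import torch

def get_masks(slen, lengths, causal):
    # mask: (bs, slen), True at non-pad positions
    # attn_mask: adds the lower-triangular causal constraint when requested
    alen = torch.arange(slen, dtype=torch.long)
    mask = alen[None, :] < lengths[:, None]
    if causal:
        attn_mask = alen[None, None, :] <= alen[None, :, None]  # (1, slen, slen)
        attn_mask = attn_mask & mask[:, None, :]                # (bs, slen, slen)
    else:
        attn_mask = mask
    return mask, attn_mask

lengths = torch.tensor([3, 5])
mask, attn_mask = get_masks(5, lengths, causal=True)
print(mask.long().tolist())  # [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
```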
60a9aa2cde7e8d61c5e57e63bf71a210d96bc2dc970e1cc2fa86a593452f306d
def predict(self, tensor, pred_mask, y, get_scores): '\n Given the last hidden state, compute word scores and/or the loss.\n `pred_mask` is a ByteTensor of shape (slen, bs), filled with 1 when\n we need to predict a word\n `y` is a LongTensor of shape (pred_mask.sum(),)\n `get_scores` is a boolean specifying whether we need to return scores\n ' masked_tensor = tensor[pred_mask.unsqueeze((- 1)).expand_as(tensor)].view((- 1), self.dim) (scores, loss) = self.pred_layer(masked_tensor, y, get_scores) return (scores, loss)
Given the last hidden state, compute word scores and/or the loss. `pred_mask` is a ByteTensor of shape (slen, bs), filled with 1 when we need to predict a word `y` is a LongTensor of shape (pred_mask.sum(),) `get_scores` is a boolean specifying whether we need to return scores
codegen_sources/model/src/model/transformer.py
predict
Syamgith/CodeGen
241
python
def predict(self, tensor, pred_mask, y, get_scores): '\n Given the last hidden state, compute word scores and/or the loss.\n `pred_mask` is a ByteTensor of shape (slen, bs), filled with 1 when\n we need to predict a word\n `y` is a LongTensor of shape (pred_mask.sum(),)\n `get_scores` is a boolean specifying whether we need to return scores\n ' masked_tensor = tensor[pred_mask.unsqueeze((- 1)).expand_as(tensor)].view((- 1), self.dim) (scores, loss) = self.pred_layer(masked_tensor, y, get_scores) return (scores, loss)
def predict(self, tensor, pred_mask, y, get_scores): '\n Given the last hidden state, compute word scores and/or the loss.\n `pred_mask` is a ByteTensor of shape (slen, bs), filled with 1 when\n we need to predict a word\n `y` is a LongTensor of shape (pred_mask.sum(),)\n `get_scores` is a boolean specifying whether we need to return scores\n ' masked_tensor = tensor[pred_mask.unsqueeze((- 1)).expand_as(tensor)].view((- 1), self.dim) (scores, loss) = self.pred_layer(masked_tensor, y, get_scores) return (scores, loss)<|docstring|>Given the last hidden state, compute word scores and/or the loss. `pred_mask` is a ByteTensor of shape (slen, bs), filled with 1 when we need to predict a word `y` is a LongTensor of shape (pred_mask.sum(),) `get_scores` is a boolean specifying whether we need to return scores<|endoftext|>
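The `predict` body's masked selection is worth seeing in isolation: expanding the `(slen, bs)` prediction mask over the hidden dimension and boolean-indexing keeps exactly the states where a word must be predicted. A small numeric sketch:

```python
import torch

slen, bs, dim = 4, 2, 3
tensor = torch.arange(slen * bs * dim, dtype=torch.float).view(slen, bs, dim)
pred_mask = torch.zeros(slen, bs, dtype=torch.bool)
pred_mask[1, 0] = True
pred_mask[3, 1] = True

# gather only the hidden states at masked positions, flattened to (n_preds, dim)
masked = tensor[pred_mask.unsqueeze(-1).expand_as(tensor)].view(-1, dim)
print(masked.shape)  # torch.Size([2, 3])
```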
fc4d89a1f04064888d7844fa3b364244c4e171371fccecec8be92b7a366b9d83
def generate(self, src_enc, src_len, tgt_lang_id, max_len=200, sample_temperature=None): '\n Decode a sentence given initial start.\n `x`:\n - LongTensor(bs, slen)\n <EOS> W1 W2 W3 <EOS> <PAD>\n <EOS> W1 W2 W3 W4 <EOS>\n `lengths`:\n - LongTensor(bs) [5, 6]\n `positions`:\n - False, for regular "arange" positions (LM)\n - True, to reset positions from the new generation (MT)\n `langs`:\n - must be None if the model only supports one language\n - lang_id if only one language is involved (LM)\n - (lang_id1, lang_id2) if two languages are involved (MT)\n ' if isinstance(max_len, int): max_lengths = src_len.clone().fill_(max_len) global_max_len = max_len else: max_lengths = max_len global_max_len = int(max_lengths.max()) bs = len(src_len) assert (src_enc.size(0) == bs) generated = src_len.new(global_max_len, bs) generated.fill_(self.pad_index) generated[0].fill_(self.eos_index) positions = src_len.new(global_max_len).long() positions = torch.arange(global_max_len, out=positions).unsqueeze(1).expand(global_max_len, bs) if self.roberta_mode: positions = ((positions + self.pad_index) + 1) langs = src_len.new(global_max_len).long().fill_(tgt_lang_id) langs = langs.unsqueeze(1).expand(global_max_len, bs) cur_len = 1 gen_len = src_len.clone().fill_(1) unfinished_sents = src_len.clone().fill_(1) self.cache = {'slen': 0} previous_unfinished_mask = unfinished_sents.ne(0) while (cur_len < global_max_len): unfinished_mask = unfinished_sents.ne(0) should_modify = unfinished_mask.ne(previous_unfinished_mask).any() restricted_mask = unfinished_mask[previous_unfinished_mask] if (should_modify and (self.cache is not None)): for (k, v) in self.cache.items(): if isinstance(k, int): assert (len(v) == 2) self.cache[k] = (cached_tensor[restricted_mask] for cached_tensor in v) tensor = self.forward('fwd', x=generated[(:cur_len, unfinished_mask)], lengths=gen_len[unfinished_mask], positions=positions[(:cur_len, unfinished_mask)], langs=langs[:cur_len][(:, unfinished_mask)], causal=True, 
src_enc=src_enc[unfinished_mask], src_len=src_len[unfinished_mask], use_cache=True) assert (tensor.size() == (1, unfinished_mask.sum().item(), self.dim)), (cur_len, global_max_len, src_enc.size(), tensor.size(), (1, bs, self.dim)) tensor = tensor.data[((- 1), :, :)].type_as(src_enc) scores = self.pred_layer.get_scores(tensor) if (sample_temperature is None): next_words = torch.topk(scores, 1)[1].squeeze(1) else: next_words = torch.multinomial(F.softmax((scores.float() / sample_temperature), dim=1), 1).squeeze(1) assert (next_words.size() == (unfinished_mask.sum().item(),)) generated[(cur_len, unfinished_mask)] = next_words gen_len.add_(unfinished_sents) generated[cur_len].masked_fill_((max_lengths.eq((cur_len + 1)) & unfinished_sents.eq(1)), self.eos_index) unfinished_sents[unfinished_mask] = unfinished_sents[unfinished_mask].mul(next_words.ne(self.eos_index).long()).mul(max_lengths[unfinished_mask].ne((cur_len + 1)).long()) cur_len = (cur_len + 1) previous_unfinished_mask = unfinished_mask if (unfinished_sents.max() == 0): break assert ((generated == self.eos_index).sum() == (2 * bs)) return (generated[:cur_len], gen_len)
Decode a sentence given initial start. `x`: - LongTensor(bs, slen) <EOS> W1 W2 W3 <EOS> <PAD> <EOS> W1 W2 W3 W4 <EOS> `lengths`: - LongTensor(bs) [5, 6] `positions`: - False, for regular "arange" positions (LM) - True, to reset positions from the new generation (MT) `langs`: - must be None if the model only supports one language - lang_id if only one language is involved (LM) - (lang_id1, lang_id2) if two languages are involved (MT)
codegen_sources/model/src/model/transformer.py
generate
Syamgith/CodeGen
241
python
def generate(self, src_enc, src_len, tgt_lang_id, max_len=200, sample_temperature=None): '\n Decode a sentence given initial start.\n `x`:\n - LongTensor(bs, slen)\n <EOS> W1 W2 W3 <EOS> <PAD>\n <EOS> W1 W2 W3 W4 <EOS>\n `lengths`:\n - LongTensor(bs) [5, 6]\n `positions`:\n - False, for regular "arange" positions (LM)\n - True, to reset positions from the new generation (MT)\n `langs`:\n - must be None if the model only supports one language\n - lang_id if only one language is involved (LM)\n - (lang_id1, lang_id2) if two languages are involved (MT)\n ' if isinstance(max_len, int): max_lengths = src_len.clone().fill_(max_len) global_max_len = max_len else: max_lengths = max_len global_max_len = int(max_lengths.max()) bs = len(src_len) assert (src_enc.size(0) == bs) generated = src_len.new(global_max_len, bs) generated.fill_(self.pad_index) generated[0].fill_(self.eos_index) positions = src_len.new(global_max_len).long() positions = torch.arange(global_max_len, out=positions).unsqueeze(1).expand(global_max_len, bs) if self.roberta_mode: positions = ((positions + self.pad_index) + 1) langs = src_len.new(global_max_len).long().fill_(tgt_lang_id) langs = langs.unsqueeze(1).expand(global_max_len, bs) cur_len = 1 gen_len = src_len.clone().fill_(1) unfinished_sents = src_len.clone().fill_(1) self.cache = {'slen': 0} previous_unfinished_mask = unfinished_sents.ne(0) while (cur_len < global_max_len): unfinished_mask = unfinished_sents.ne(0) should_modify = unfinished_mask.ne(previous_unfinished_mask).any() restricted_mask = unfinished_mask[previous_unfinished_mask] if (should_modify and (self.cache is not None)): for (k, v) in self.cache.items(): if isinstance(k, int): assert (len(v) == 2) self.cache[k] = (cached_tensor[restricted_mask] for cached_tensor in v) tensor = self.forward('fwd', x=generated[(:cur_len, unfinished_mask)], lengths=gen_len[unfinished_mask], positions=positions[(:cur_len, unfinished_mask)], langs=langs[:cur_len][(:, unfinished_mask)], causal=True, 
src_enc=src_enc[unfinished_mask], src_len=src_len[unfinished_mask], use_cache=True) assert (tensor.size() == (1, unfinished_mask.sum().item(), self.dim)), (cur_len, global_max_len, src_enc.size(), tensor.size(), (1, bs, self.dim)) tensor = tensor.data[((- 1), :, :)].type_as(src_enc) scores = self.pred_layer.get_scores(tensor) if (sample_temperature is None): next_words = torch.topk(scores, 1)[1].squeeze(1) else: next_words = torch.multinomial(F.softmax((scores.float() / sample_temperature), dim=1), 1).squeeze(1) assert (next_words.size() == (unfinished_mask.sum().item(),)) generated[(cur_len, unfinished_mask)] = next_words gen_len.add_(unfinished_sents) generated[cur_len].masked_fill_((max_lengths.eq((cur_len + 1)) & unfinished_sents.eq(1)), self.eos_index) unfinished_sents[unfinished_mask] = unfinished_sents[unfinished_mask].mul(next_words.ne(self.eos_index).long()).mul(max_lengths[unfinished_mask].ne((cur_len + 1)).long()) cur_len = (cur_len + 1) previous_unfinished_mask = unfinished_mask if (unfinished_sents.max() == 0): break assert ((generated == self.eos_index).sum() == (2 * bs)) return (generated[:cur_len], gen_len)
def generate(self, src_enc, src_len, tgt_lang_id, max_len=200, sample_temperature=None): '\n Decode a sentence given initial start.\n `x`:\n - LongTensor(bs, slen)\n <EOS> W1 W2 W3 <EOS> <PAD>\n <EOS> W1 W2 W3 W4 <EOS>\n `lengths`:\n - LongTensor(bs) [5, 6]\n `positions`:\n - False, for regular "arange" positions (LM)\n - True, to reset positions from the new generation (MT)\n `langs`:\n - must be None if the model only supports one language\n - lang_id if only one language is involved (LM)\n - (lang_id1, lang_id2) if two languages are involved (MT)\n ' if isinstance(max_len, int): max_lengths = src_len.clone().fill_(max_len) global_max_len = max_len else: max_lengths = max_len global_max_len = int(max_lengths.max()) bs = len(src_len) assert (src_enc.size(0) == bs) generated = src_len.new(global_max_len, bs) generated.fill_(self.pad_index) generated[0].fill_(self.eos_index) positions = src_len.new(global_max_len).long() positions = torch.arange(global_max_len, out=positions).unsqueeze(1).expand(global_max_len, bs) if self.roberta_mode: positions = ((positions + self.pad_index) + 1) langs = src_len.new(global_max_len).long().fill_(tgt_lang_id) langs = langs.unsqueeze(1).expand(global_max_len, bs) cur_len = 1 gen_len = src_len.clone().fill_(1) unfinished_sents = src_len.clone().fill_(1) self.cache = {'slen': 0} previous_unfinished_mask = unfinished_sents.ne(0) while (cur_len < global_max_len): unfinished_mask = unfinished_sents.ne(0) should_modify = unfinished_mask.ne(previous_unfinished_mask).any() restricted_mask = unfinished_mask[previous_unfinished_mask] if (should_modify and (self.cache is not None)): for (k, v) in self.cache.items(): if isinstance(k, int): assert (len(v) == 2) self.cache[k] = (cached_tensor[restricted_mask] for cached_tensor in v) tensor = self.forward('fwd', x=generated[(:cur_len, unfinished_mask)], lengths=gen_len[unfinished_mask], positions=positions[(:cur_len, unfinished_mask)], langs=langs[:cur_len][(:, unfinished_mask)], causal=True, 
src_enc=src_enc[unfinished_mask], src_len=src_len[unfinished_mask], use_cache=True) assert (tensor.size() == (1, unfinished_mask.sum().item(), self.dim)), (cur_len, global_max_len, src_enc.size(), tensor.size(), (1, bs, self.dim)) tensor = tensor.data[((- 1), :, :)].type_as(src_enc) scores = self.pred_layer.get_scores(tensor) if (sample_temperature is None): next_words = torch.topk(scores, 1)[1].squeeze(1) else: next_words = torch.multinomial(F.softmax((scores.float() / sample_temperature), dim=1), 1).squeeze(1) assert (next_words.size() == (unfinished_mask.sum().item(),)) generated[(cur_len, unfinished_mask)] = next_words gen_len.add_(unfinished_sents) generated[cur_len].masked_fill_((max_lengths.eq((cur_len + 1)) & unfinished_sents.eq(1)), self.eos_index) unfinished_sents[unfinished_mask] = unfinished_sents[unfinished_mask].mul(next_words.ne(self.eos_index).long()).mul(max_lengths[unfinished_mask].ne((cur_len + 1)).long()) cur_len = (cur_len + 1) previous_unfinished_mask = unfinished_mask if (unfinished_sents.max() == 0): break assert ((generated == self.eos_index).sum() == (2 * bs)) return (generated[:cur_len], gen_len)<|docstring|>Decode a sentence given initial start. `x`: - LongTensor(bs, slen) <EOS> W1 W2 W3 <EOS> <PAD> <EOS> W1 W2 W3 W4 <EOS> `lengths`: - LongTensor(bs) [5, 6] `positions`: - False, for regular "arange" positions (LM) - True, to reset positions from the new generation (MT) `langs`: - must be None if the model only supports one language - lang_id if only one language is involved (LM) - (lang_id1, lang_id2) if two languages are involved (MT)<|endoftext|>
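The sampling branch of `generate` above (`scores.float() / sample_temperature`, then `torch.multinomial` over the softmax) can be sketched in plain Python. The function name and seeded RNG below are illustrative assumptions, not part of the model's API:

```python
import math, random

def sample_with_temperature(scores, temperature, rng=random.Random(0)):
    # scores/T -> softmax -> draw one index (a torch.multinomial analogue)
    logits = [s / temperature for s in scores]
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    r, acc = rng.random(), 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(exps) - 1

# a very low temperature makes sampling effectively greedy
assert sample_with_temperature([1.0, 5.0, 2.0], 0.01) == 1
```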
6c2f53f4516b287c9fd7d782ff5f3874d08196305a8e59d4006c2656e5e7a816
def generate_beam(self, src_enc, src_len, tgt_lang_id, beam_size, length_penalty, early_stopping, max_len=200): '\n Decode a sentence given initial start.\n `x`:\n - LongTensor(bs, slen)\n <EOS> W1 W2 W3 <EOS> <PAD>\n <EOS> W1 W2 W3 W4 <EOS>\n `lengths`:\n - LongTensor(bs) [5, 6]\n `positions`:\n - False, for regular "arange" positions (LM)\n - True, to reset positions from the new generation (MT)\n `langs`:\n - must be None if the model only supports one language\n - lang_id if only one language is involved (LM)\n - (lang_id1, lang_id2) if two languages are involved (MT)\n ' if isinstance(max_len, int): max_lengths = src_len.clone().fill_(max_len) global_max_len = max_len else: max_lengths = max_len global_max_len = int(max_lengths.max()) assert (src_enc.size(0) == src_len.size(0)) assert (beam_size >= 1) bs = len(src_len) n_words = self.n_words src_enc = src_enc.unsqueeze(1).expand(((bs, beam_size) + src_enc.shape[1:])).contiguous().view((((bs * beam_size),) + src_enc.shape[1:])) src_len = src_len.unsqueeze(1).expand(bs, beam_size).contiguous().view((- 1)) generated = src_len.new(global_max_len, (bs * beam_size)) generated.fill_(self.pad_index) generated[0].fill_(self.eos_index) generated_hyps = [BeamHypotheses(beam_size, global_max_len, length_penalty, early_stopping) for _ in range(bs)] positions = src_len.new(global_max_len).long() positions = torch.arange(global_max_len, out=positions).unsqueeze(1).expand_as(generated) if self.roberta_mode: positions = ((positions + self.pad_index) + 1) langs = positions.clone().fill_(tgt_lang_id) beam_scores = src_enc.new(bs, beam_size).float().fill_(0) beam_scores[(:, 1:)] = (- 1000000000.0) beam_scores = beam_scores.view((- 1)) cur_len = 1 self.cache = {'slen': 0} done = [False for _ in range(bs)] while (cur_len < global_max_len): tensor = self.forward('fwd', x=generated[:cur_len], lengths=src_len.new((bs * beam_size)).fill_(cur_len), positions=positions[:cur_len], langs=langs[:cur_len], causal=True, src_enc=src_enc, 
src_len=src_len, use_cache=True) assert (tensor.size() == (1, (bs * beam_size), self.dim)) tensor = tensor.data[((- 1), :, :)].type_as(src_enc) scores = self.pred_layer.get_scores(tensor) scores = F.log_softmax(scores.float(), dim=(- 1)) assert (scores.size() == ((bs * beam_size), n_words)) _scores = (scores + beam_scores[(:, None)].expand_as(scores)) _scores = _scores.view(bs, (beam_size * n_words)) (next_scores, next_words) = torch.topk(_scores, (2 * beam_size), dim=1, largest=True, sorted=True) assert (next_scores.size() == next_words.size() == (bs, (2 * beam_size))) next_batch_beam = [] for sent_id in range(bs): done[sent_id] = (done[sent_id] or generated_hyps[sent_id].is_done(next_scores[sent_id].max().item())) if done[sent_id]: next_batch_beam.extend(([(0, self.pad_index, 0)] * beam_size)) continue next_sent_beam = [] for (idx, value) in zip(next_words[sent_id], next_scores[sent_id]): beam_id = (idx // n_words) word_id = (idx % n_words) if ((word_id == self.eos_index) or ((cur_len + 1) == global_max_len)): generated_hyps[sent_id].add(generated[(:cur_len, ((sent_id * beam_size) + beam_id))].clone(), value.item()) else: next_sent_beam.append((value, word_id, ((sent_id * beam_size) + beam_id))) if (len(next_sent_beam) == beam_size): break assert ((len(next_sent_beam) == 0) if ((cur_len + 1) == global_max_len) else beam_size) if (len(next_sent_beam) == 0): next_sent_beam = ([(0, self.pad_index, 0)] * beam_size) next_batch_beam.extend(next_sent_beam) assert (len(next_batch_beam) == (beam_size * (sent_id + 1))) assert (len(next_batch_beam) == (bs * beam_size)) beam_scores = beam_scores.new([x[0] for x in next_batch_beam]) beam_words = generated.new([x[1] for x in next_batch_beam]) beam_idx = src_len.new([x[2] for x in next_batch_beam]) generated = generated[(:, beam_idx)] generated[cur_len] = beam_words for k in self.cache.keys(): if (k != 'slen'): self.cache[k] = (self.cache[k][0][beam_idx], self.cache[k][1][beam_idx]) cur_len = (cur_len + 1) if all(done): break 
tgt_len = src_len.new(bs, beam_size) best = [] for (i, hypotheses) in enumerate(generated_hyps): sorted_hyps = [h[1] for h in sorted(hypotheses.hyp, key=(lambda x: x[0]), reverse=True)] for (j, hyp) in enumerate(sorted_hyps): tgt_len[(i, j)] = (len(hyp) + 1) best.append(sorted_hyps) decoded = src_len.new(tgt_len.max().item(), beam_size, bs).fill_(self.pad_index) for (i, hypo_list) in enumerate(best): for (hyp_index, hypo) in enumerate(hypo_list): decoded[(:len(hypo), hyp_index, i)] = hypo decoded[(len(hypo), hyp_index, i)] = self.eos_index assert ((decoded == self.eos_index).sum() == ((2 * beam_size) * bs)) return (decoded, tgt_len, sorted([h[0] for h in hypotheses.hyp], reverse=True))
Decode a sentence given initial start. `x`: - LongTensor(bs, slen) <EOS> W1 W2 W3 <EOS> <PAD> <EOS> W1 W2 W3 W4 <EOS> `lengths`: - LongTensor(bs) [5, 6] `positions`: - False, for regular "arange" positions (LM) - True, to reset positions from the new generation (MT) `langs`: - must be None if the model only supports one language - lang_id if only one language is involved (LM) - (lang_id1, lang_id2) if two languages are involved (MT)
codegen_sources/model/src/model/transformer.py
generate_beam
Syamgith/CodeGen
241
python
def generate_beam(self, src_enc, src_len, tgt_lang_id, beam_size, length_penalty, early_stopping, max_len=200): '\n Decode a sentence given initial start.\n `x`:\n - LongTensor(bs, slen)\n <EOS> W1 W2 W3 <EOS> <PAD>\n <EOS> W1 W2 W3 W4 <EOS>\n `lengths`:\n - LongTensor(bs) [5, 6]\n `positions`:\n - False, for regular "arange" positions (LM)\n - True, to reset positions from the new generation (MT)\n `langs`:\n - must be None if the model only supports one language\n - lang_id if only one language is involved (LM)\n - (lang_id1, lang_id2) if two languages are involved (MT)\n ' if isinstance(max_len, int): max_lengths = src_len.clone().fill_(max_len) global_max_len = max_len else: max_lengths = max_len global_max_len = int(max_lengths.max()) assert (src_enc.size(0) == src_len.size(0)) assert (beam_size >= 1) bs = len(src_len) n_words = self.n_words src_enc = src_enc.unsqueeze(1).expand(((bs, beam_size) + src_enc.shape[1:])).contiguous().view((((bs * beam_size),) + src_enc.shape[1:])) src_len = src_len.unsqueeze(1).expand(bs, beam_size).contiguous().view((- 1)) generated = src_len.new(global_max_len, (bs * beam_size)) generated.fill_(self.pad_index) generated[0].fill_(self.eos_index) generated_hyps = [BeamHypotheses(beam_size, global_max_len, length_penalty, early_stopping) for _ in range(bs)] positions = src_len.new(global_max_len).long() positions = torch.arange(global_max_len, out=positions).unsqueeze(1).expand_as(generated) if self.roberta_mode: positions = ((positions + self.pad_index) + 1) langs = positions.clone().fill_(tgt_lang_id) beam_scores = src_enc.new(bs, beam_size).float().fill_(0) beam_scores[(:, 1:)] = (- 1000000000.0) beam_scores = beam_scores.view((- 1)) cur_len = 1 self.cache = {'slen': 0} done = [False for _ in range(bs)] while (cur_len < global_max_len): tensor = self.forward('fwd', x=generated[:cur_len], lengths=src_len.new((bs * beam_size)).fill_(cur_len), positions=positions[:cur_len], langs=langs[:cur_len], causal=True, src_enc=src_enc, 
src_len=src_len, use_cache=True) assert (tensor.size() == (1, (bs * beam_size), self.dim)) tensor = tensor.data[((- 1), :, :)].type_as(src_enc) scores = self.pred_layer.get_scores(tensor) scores = F.log_softmax(scores.float(), dim=(- 1)) assert (scores.size() == ((bs * beam_size), n_words)) _scores = (scores + beam_scores[(:, None)].expand_as(scores)) _scores = _scores.view(bs, (beam_size * n_words)) (next_scores, next_words) = torch.topk(_scores, (2 * beam_size), dim=1, largest=True, sorted=True) assert (next_scores.size() == next_words.size() == (bs, (2 * beam_size))) next_batch_beam = [] for sent_id in range(bs): done[sent_id] = (done[sent_id] or generated_hyps[sent_id].is_done(next_scores[sent_id].max().item())) if done[sent_id]: next_batch_beam.extend(([(0, self.pad_index, 0)] * beam_size)) continue next_sent_beam = [] for (idx, value) in zip(next_words[sent_id], next_scores[sent_id]): beam_id = (idx // n_words) word_id = (idx % n_words) if ((word_id == self.eos_index) or ((cur_len + 1) == global_max_len)): generated_hyps[sent_id].add(generated[(:cur_len, ((sent_id * beam_size) + beam_id))].clone(), value.item()) else: next_sent_beam.append((value, word_id, ((sent_id * beam_size) + beam_id))) if (len(next_sent_beam) == beam_size): break assert ((len(next_sent_beam) == 0) if ((cur_len + 1) == global_max_len) else beam_size) if (len(next_sent_beam) == 0): next_sent_beam = ([(0, self.pad_index, 0)] * beam_size) next_batch_beam.extend(next_sent_beam) assert (len(next_batch_beam) == (beam_size * (sent_id + 1))) assert (len(next_batch_beam) == (bs * beam_size)) beam_scores = beam_scores.new([x[0] for x in next_batch_beam]) beam_words = generated.new([x[1] for x in next_batch_beam]) beam_idx = src_len.new([x[2] for x in next_batch_beam]) generated = generated[(:, beam_idx)] generated[cur_len] = beam_words for k in self.cache.keys(): if (k != 'slen'): self.cache[k] = (self.cache[k][0][beam_idx], self.cache[k][1][beam_idx]) cur_len = (cur_len + 1) if all(done): break 
tgt_len = src_len.new(bs, beam_size) best = [] for (i, hypotheses) in enumerate(generated_hyps): sorted_hyps = [h[1] for h in sorted(hypotheses.hyp, key=(lambda x: x[0]), reverse=True)] for (j, hyp) in enumerate(sorted_hyps): tgt_len[(i, j)] = (len(hyp) + 1) best.append(sorted_hyps) decoded = src_len.new(tgt_len.max().item(), beam_size, bs).fill_(self.pad_index) for (i, hypo_list) in enumerate(best): for (hyp_index, hypo) in enumerate(hypo_list): decoded[(:len(hypo), hyp_index, i)] = hypo decoded[(len(hypo), hyp_index, i)] = self.eos_index assert ((decoded == self.eos_index).sum() == ((2 * beam_size) * bs)) return (decoded, tgt_len, sorted([h[0] for h in hypotheses.hyp], reverse=True))
def generate_beam(self, src_enc, src_len, tgt_lang_id, beam_size, length_penalty, early_stopping, max_len=200): '\n Decode a sentence given initial start.\n `x`:\n - LongTensor(bs, slen)\n <EOS> W1 W2 W3 <EOS> <PAD>\n <EOS> W1 W2 W3 W4 <EOS>\n `lengths`:\n - LongTensor(bs) [5, 6]\n `positions`:\n - False, for regular "arange" positions (LM)\n - True, to reset positions from the new generation (MT)\n `langs`:\n - must be None if the model only supports one language\n - lang_id if only one language is involved (LM)\n - (lang_id1, lang_id2) if two languages are involved (MT)\n ' if isinstance(max_len, int): max_lengths = src_len.clone().fill_(max_len) global_max_len = max_len else: max_lengths = max_len global_max_len = int(max_lengths.max()) assert (src_enc.size(0) == src_len.size(0)) assert (beam_size >= 1) bs = len(src_len) n_words = self.n_words src_enc = src_enc.unsqueeze(1).expand(((bs, beam_size) + src_enc.shape[1:])).contiguous().view((((bs * beam_size),) + src_enc.shape[1:])) src_len = src_len.unsqueeze(1).expand(bs, beam_size).contiguous().view((- 1)) generated = src_len.new(global_max_len, (bs * beam_size)) generated.fill_(self.pad_index) generated[0].fill_(self.eos_index) generated_hyps = [BeamHypotheses(beam_size, global_max_len, length_penalty, early_stopping) for _ in range(bs)] positions = src_len.new(global_max_len).long() positions = torch.arange(global_max_len, out=positions).unsqueeze(1).expand_as(generated) if self.roberta_mode: positions = ((positions + self.pad_index) + 1) langs = positions.clone().fill_(tgt_lang_id) beam_scores = src_enc.new(bs, beam_size).float().fill_(0) beam_scores[(:, 1:)] = (- 1000000000.0) beam_scores = beam_scores.view((- 1)) cur_len = 1 self.cache = {'slen': 0} done = [False for _ in range(bs)] while (cur_len < global_max_len): tensor = self.forward('fwd', x=generated[:cur_len], lengths=src_len.new((bs * beam_size)).fill_(cur_len), positions=positions[:cur_len], langs=langs[:cur_len], causal=True, src_enc=src_enc, 
src_len=src_len, use_cache=True) assert (tensor.size() == (1, (bs * beam_size), self.dim)) tensor = tensor.data[((- 1), :, :)].type_as(src_enc) scores = self.pred_layer.get_scores(tensor) scores = F.log_softmax(scores.float(), dim=(- 1)) assert (scores.size() == ((bs * beam_size), n_words)) _scores = (scores + beam_scores[(:, None)].expand_as(scores)) _scores = _scores.view(bs, (beam_size * n_words)) (next_scores, next_words) = torch.topk(_scores, (2 * beam_size), dim=1, largest=True, sorted=True) assert (next_scores.size() == next_words.size() == (bs, (2 * beam_size))) next_batch_beam = [] for sent_id in range(bs): done[sent_id] = (done[sent_id] or generated_hyps[sent_id].is_done(next_scores[sent_id].max().item())) if done[sent_id]: next_batch_beam.extend(([(0, self.pad_index, 0)] * beam_size)) continue next_sent_beam = [] for (idx, value) in zip(next_words[sent_id], next_scores[sent_id]): beam_id = (idx // n_words) word_id = (idx % n_words) if ((word_id == self.eos_index) or ((cur_len + 1) == global_max_len)): generated_hyps[sent_id].add(generated[(:cur_len, ((sent_id * beam_size) + beam_id))].clone(), value.item()) else: next_sent_beam.append((value, word_id, ((sent_id * beam_size) + beam_id))) if (len(next_sent_beam) == beam_size): break assert ((len(next_sent_beam) == 0) if ((cur_len + 1) == global_max_len) else beam_size) if (len(next_sent_beam) == 0): next_sent_beam = ([(0, self.pad_index, 0)] * beam_size) next_batch_beam.extend(next_sent_beam) assert (len(next_batch_beam) == (beam_size * (sent_id + 1))) assert (len(next_batch_beam) == (bs * beam_size)) beam_scores = beam_scores.new([x[0] for x in next_batch_beam]) beam_words = generated.new([x[1] for x in next_batch_beam]) beam_idx = src_len.new([x[2] for x in next_batch_beam]) generated = generated[(:, beam_idx)] generated[cur_len] = beam_words for k in self.cache.keys(): if (k != 'slen'): self.cache[k] = (self.cache[k][0][beam_idx], self.cache[k][1][beam_idx]) cur_len = (cur_len + 1) if all(done): break 
tgt_len = src_len.new(bs, beam_size) best = [] for (i, hypotheses) in enumerate(generated_hyps): sorted_hyps = [h[1] for h in sorted(hypotheses.hyp, key=(lambda x: x[0]), reverse=True)] for (j, hyp) in enumerate(sorted_hyps): tgt_len[(i, j)] = (len(hyp) + 1) best.append(sorted_hyps) decoded = src_len.new(tgt_len.max().item(), beam_size, bs).fill_(self.pad_index) for (i, hypo_list) in enumerate(best): for (hyp_index, hypo) in enumerate(hypo_list): decoded[(:len(hypo), hyp_index, i)] = hypo decoded[(len(hypo), hyp_index, i)] = self.eos_index assert ((decoded == self.eos_index).sum() == ((2 * beam_size) * bs)) return (decoded, tgt_len, sorted([h[0] for h in hypotheses.hyp], reverse=True))<|docstring|>Decode a sentence given initial start. `x`: - LongTensor(bs, slen) <EOS> W1 W2 W3 <EOS> <PAD> <EOS> W1 W2 W3 W4 <EOS> `lengths`: - LongTensor(bs) [5, 6] `positions`: - False, for regular "arange" positions (LM) - True, to reset positions from the new generation (MT) `langs`: - must be None if the model only supports one language - lang_id if only one language is involved (LM) - (lang_id1, lang_id2) if two languages are involved (MT)<|endoftext|>
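The candidate-selection step inside `generate_beam` flattens the `(beam_size, n_words)` score matrix, takes the `2 * beam_size` best entries, and recovers `beam_id = idx // n_words`, `word_id = idx % n_words`. A torch-free sketch of that index arithmetic (the helper name is hypothetical):

```python
def top_candidates(scores, beam_size):
    # scores: list of per-beam lists of vocabulary log-probs
    n_words = len(scores[0])
    flat = [(s, b * n_words + w)
            for b, row in enumerate(scores) for w, s in enumerate(row)]
    flat.sort(key=lambda x: x[0], reverse=True)
    # decode each flat index back into (score, beam_id, word_id)
    return [(s, idx // n_words, idx % n_words) for s, idx in flat[:2 * beam_size]]

scores = [[-1.0, -0.2], [-0.5, -3.0]]   # beam_size=2, n_words=2
assert top_candidates(scores, 2)[0] == (-0.2, 0, 1)
```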
e216cf9230a6fcd8e78cd12c2461846cb2e35b722837b333c4ad33d722be9ee8
def __init__(self, n_hyp, max_len, length_penalty, early_stopping): '\n Initialize n-best list of hypotheses.\n ' self.max_len = (max_len - 1) self.length_penalty = length_penalty self.early_stopping = early_stopping self.n_hyp = n_hyp self.hyp = [] self.worst_score = 1000000000.0
Initialize n-best list of hypotheses.
codegen_sources/model/src/model/transformer.py
__init__
Syamgith/CodeGen
241
python
def __init__(self, n_hyp, max_len, length_penalty, early_stopping): '\n \n ' self.max_len = (max_len - 1) self.length_penalty = length_penalty self.early_stopping = early_stopping self.n_hyp = n_hyp self.hyp = [] self.worst_score = 1000000000.0
def __init__(self, n_hyp, max_len, length_penalty, early_stopping): '\n \n ' self.max_len = (max_len - 1) self.length_penalty = length_penalty self.early_stopping = early_stopping self.n_hyp = n_hyp self.hyp = [] self.worst_score = 1000000000.0<|docstring|>Initialize n-best list of hypotheses.<|endoftext|>
db6a36dccd016d19258163dce53a54f83fab473300e432e21ff149c6aa545fc5
def __len__(self): '\n Number of hypotheses in the list.\n ' return len(self.hyp)
Number of hypotheses in the list.
codegen_sources/model/src/model/transformer.py
__len__
Syamgith/CodeGen
241
python
def __len__(self): '\n \n ' return len(self.hyp)
def __len__(self): '\n \n ' return len(self.hyp)<|docstring|>Number of hypotheses in the list.<|endoftext|>
cb77d4ffe3352d15ee04c3ab4561299b9253870ed3aeffcb0ce0253482d8ec93
def add(self, hyp, sum_logprobs): '\n Add a new hypothesis to the list.\n ' score = (sum_logprobs / (len(hyp) ** self.length_penalty)) if ((len(self) < self.n_hyp) or (score > self.worst_score)): self.hyp.append((score, hyp)) if (len(self) > self.n_hyp): sorted_scores = sorted([(s, idx) for (idx, (s, _)) in enumerate(self.hyp)]) del self.hyp[sorted_scores[0][1]] self.worst_score = sorted_scores[1][0] else: self.worst_score = min(score, self.worst_score)
Add a new hypothesis to the list.
codegen_sources/model/src/model/transformer.py
add
Syamgith/CodeGen
241
python
def add(self, hyp, sum_logprobs): '\n \n ' score = (sum_logprobs / (len(hyp) ** self.length_penalty)) if ((len(self) < self.n_hyp) or (score > self.worst_score)): self.hyp.append((score, hyp)) if (len(self) > self.n_hyp): sorted_scores = sorted([(s, idx) for (idx, (s, _)) in enumerate(self.hyp)]) del self.hyp[sorted_scores[0][1]] self.worst_score = sorted_scores[1][0] else: self.worst_score = min(score, self.worst_score)
def add(self, hyp, sum_logprobs): '\n \n ' score = (sum_logprobs / (len(hyp) ** self.length_penalty)) if ((len(self) < self.n_hyp) or (score > self.worst_score)): self.hyp.append((score, hyp)) if (len(self) > self.n_hyp): sorted_scores = sorted([(s, idx) for (idx, (s, _)) in enumerate(self.hyp)]) del self.hyp[sorted_scores[0][1]] self.worst_score = sorted_scores[1][0] else: self.worst_score = min(score, self.worst_score)<|docstring|>Add a new hypothesis to the list.<|endoftext|>
73608afea607056639e793ca4a204bf11b649ce5e1b6eebe8f325e7d5d5557c2
def is_done(self, best_sum_logprobs): '\n If there are enough hypotheses and that none of the hypotheses being generated\n can become better than the worst one in the heap, then we are done with this sentence.\n ' if (len(self) < self.n_hyp): return False elif self.early_stopping: return True else: return (self.worst_score >= (best_sum_logprobs / (self.max_len ** self.length_penalty)))
If there are enough hypotheses and that none of the hypotheses being generated can become better than the worst one in the heap, then we are done with this sentence.
codegen_sources/model/src/model/transformer.py
is_done
Syamgith/CodeGen
241
python
def is_done(self, best_sum_logprobs): '\n If there are enough hypotheses and that none of the hypotheses being generated\n can become better than the worst one in the heap, then we are done with this sentence.\n ' if (len(self) < self.n_hyp): return False elif self.early_stopping: return True else: return (self.worst_score >= (best_sum_logprobs / (self.max_len ** self.length_penalty)))
def is_done(self, best_sum_logprobs): '\n If there are enough hypotheses and that none of the hypotheses being generated\n can become better than the worst one in the heap, then we are done with this sentence.\n ' if (len(self) < self.n_hyp): return False elif self.early_stopping: return True else: return (self.worst_score >= (best_sum_logprobs / (self.max_len ** self.length_penalty)))<|docstring|>If there are enough hypotheses and that none of the hypotheses being generated can become better than the worst one in the heap, then we are done with this sentence.<|endoftext|>
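Taken together, `add` and `is_done` maintain an n-best list under a length penalty. A minimal plain-Python sketch of the same pruning, using sort-and-truncate instead of tracking `worst_score` (the class name is illustrative, not the project's API):

```python
def lp_score(sum_logprobs, length, length_penalty):
    # same normalisation as BeamHypotheses.add: logprob sum / len**penalty
    return sum_logprobs / (length ** length_penalty)

class TopK:
    # illustrative stand-in for BeamHypotheses: keep only the k best hypotheses
    def __init__(self, k, length_penalty=1.0):
        self.k, self.lp, self.hyps = k, length_penalty, []
    def add(self, hyp, sum_logprobs):
        self.hyps.append((lp_score(sum_logprobs, len(hyp), self.lp), hyp))
        self.hyps.sort(key=lambda x: x[0], reverse=True)
        del self.hyps[self.k:]          # prune everything past the k best

beam = TopK(2)
beam.add([1, 2, 3], -3.0)       # score -1.0
beam.add([1, 2], -1.0)          # score -0.5
beam.add([1, 2, 3, 4], -8.0)    # score -2.0, pruned
assert [h for _, h in beam.hyps] == [[1, 2], [1, 2, 3]]
```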
056c313973c78048cb96c2df99a31c28936bda33569c68550c3b898fa90b0cf2
def shape(x): ' projection ' return x.view(bs, (- 1), self.n_heads, dim_per_head).transpose(1, 2)
projection
codegen_sources/model/src/model/transformer.py
shape
Syamgith/CodeGen
241
python
def shape(x): ' ' return x.view(bs, (- 1), self.n_heads, dim_per_head).transpose(1, 2)
def shape(x): ' ' return x.view(bs, (- 1), self.n_heads, dim_per_head).transpose(1, 2)<|docstring|>projection<|endoftext|>
dc30746fb98fe781bf45cc29ee242f01db582b8169da0d0b32ea274636b42557
def unshape(x): ' compute context ' return x.transpose(1, 2).contiguous().view(bs, (- 1), (self.n_heads * dim_per_head))
compute context
codegen_sources/model/src/model/transformer.py
unshape
Syamgith/CodeGen
241
python
def unshape(x): ' ' return x.transpose(1, 2).contiguous().view(bs, (- 1), (self.n_heads * dim_per_head))
def unshape(x): ' ' return x.transpose(1, 2).contiguous().view(bs, (- 1), (self.n_heads * dim_per_head))<|docstring|>compute context<|endoftext|>
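`shape` and `unshape` are inverse reshapes between `(bs, slen, n_heads * dim_per_head)` and `(bs, n_heads, slen, dim_per_head)`. A torch-free sketch with nested lists showing the round trip (helper names are assumptions):

```python
def split_heads(x, n_heads):
    # [bs][slen][n_heads*dph] -> [bs][n_heads][slen][dph]
    bs, slen, dim = len(x), len(x[0]), len(x[0][0])
    dph = dim // n_heads
    return [[[x[b][s][h * dph:(h + 1) * dph] for s in range(slen)]
             for h in range(n_heads)] for b in range(bs)]

def merge_heads(y):
    # [bs][n_heads][slen][dph] -> [bs][slen][n_heads*dph]
    bs, n_heads, slen = len(y), len(y[0]), len(y[0][0])
    return [[sum((y[b][h][s] for h in range(n_heads)), [])
             for s in range(slen)] for b in range(bs)]

x = [[[1, 2, 3, 4], [5, 6, 7, 8]]]   # bs=1, slen=2, dim=4 -> 2 heads of size 2
y = split_heads(x, 2)
assert y[0][0] == [[1, 2], [5, 6]]   # head 0 sees the first dph slots per position
assert merge_heads(y) == x           # the two reshapes are inverses
```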
ef9ed5f281a782bb3d92d4fda1fb24a8a7da6e55d6f3207fbda3fef1c510883d
def get_soup(url): 'Method to get the soup from the url' counter = 5 timeout_counter = 5 headers = {'User-Agent': 'TrelloUpdater', 'From': 'example@example.com'} while True: try: r = requests.get(url, headers=headers, timeout=10) except requests.exceptions.ConnectionError: print('There is no network connectivity on this computer, will try again after {} seconds'.format(counter)) counter *= counter continue except requests.exceptions.Timeout: print('There is a problem at the target. Retry again in {} seconds'.format(timeout_counter)) continue except requests.exceptions.TooManyRedirects: print('There is something wrong with the URL, has the website moved?') raise MovedURL('Website has moved?') except requests.exceptions.RequestException as e: print('Something has horribly gone wrong, contact the developer') print(e) raise HorriblyGoneWrongError('Something has horribly gone wrong, contact the developer') break while (r.status_code != 200): sleep(0.5) try: r = requests.get(url, headers=headers, timeout=10) except requests.exceptions.ConnectionError: print('Requests blocked! Lets wait for {} seconds'.format(counter)) counter *= counter continue except requests.exceptions.Timeout: print('There is a problem at the target. Retrying again in {} seconds'.format(timeout_counter)) continue except requests.exceptions.TooManyRedirects: print('There is something wrong with the URL, has the website moved?') raise MovedURL('Website has moved') except requests.exceptions.RequestException as e: print('Something has horribly gone wrong, contact the developer') print(e) raise HorriblyGoneWrongError('Something has horribly gone wrong, contact the developer') break sleep(0.5) return BeautifulSoup(r.text, 'html.parser')
Method to get the soup from the url
trello/utility.py
get_soup
anubhavcodes/pytrello
0
python
def get_soup(url): counter = 5 timeout_counter = 5 headers = {'User-Agent': 'TrelloUpdater', 'From': 'example@example.com'} while True: try: r = requests.get(url, headers=headers, timeout=10) except requests.exceptions.ConnectionError: print('There is no network connectivity on this computer, will try again after {} seconds'.format(counter)) counter *= counter continue except requests.exceptions.Timeout: print('There is a problem at the target. Retry again in {} seconds'.format(timeout_counter)) continue except requests.exceptions.TooManyRedirects: print('There is something wrong with the URL, has the website moved?') raise MovedURL('Website has moved?') except requests.exceptions.RequestException as e: print('Something has horribly gone wrong, contact the developer') print(e) raise HorriblyGoneWrongError('Something has horribly gone wrong, contact the developer') break while (r.status_code != 200): sleep(0.5) try: r = requests.get(url, headers=headers, timeout=10) except requests.exceptions.ConnectionError: print('Requests blocked! Lets wait for {} seconds'.format(counter)) counter *= counter continue except requests.exceptions.Timeout: print('There is a problem at the target. Retrying again in {} seconds'.format(timeout_counter)) continue except requests.exceptions.TooManyRedirects: print('There is something wrong with the URL, has the website moved?') raise MovedURL('Website has moved') except requests.exceptions.RequestException as e: print('Something has horribly gone wrong, contact the developer') print(e) raise HorriblyGoneWrongError('Something has horribly gone wrong, contact the developer') break sleep(0.5) return BeautifulSoup(r.text, 'html.parser')
def get_soup(url): counter = 5 timeout_counter = 5 headers = {'User-Agent': 'TrelloUpdater', 'From': 'example@example.com'} while True: try: r = requests.get(url, headers=headers, timeout=10) except requests.exceptions.ConnectionError: print('There is no network connectivity on this computer, will try again after {} seconds'.format(counter)) counter *= counter continue except requests.exceptions.Timeout: print('There is a problem at the target. Retry again in {} seconds'.format(timeout_counter)) continue except requests.exceptions.TooManyRedirects: print('There is something wrong with the URL, has the website moved?') raise MovedURL('Website has moved?') except requests.exceptions.RequestException as e: print('Something has horribly gone wrong, contact the developer') print(e) raise HorriblyGoneWrongError('Something has horribly gone wrong, contact the developer') break while (r.status_code != 200): sleep(0.5) try: r = requests.get(url, headers=headers, timeout=10) except requests.exceptions.ConnectionError: print('Requests blocked! Lets wait for {} seconds'.format(counter)) counter *= counter continue except requests.exceptions.Timeout: print('There is a problem at the target. Retrying again in {} seconds'.format(timeout_counter)) continue except requests.exceptions.TooManyRedirects: print('There is something wrong with the URL, has the website moved?') raise MovedURL('Website has moved') except requests.exceptions.RequestException as e: print('Something has horribly gone wrong, contact the developer') print(e) raise HorriblyGoneWrongError('Something has horribly gone wrong, contact the developer') break sleep(0.5) return BeautifulSoup(r.text, 'html.parser')<|docstring|>Method to get the soup from the url<|endoftext|>
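`get_soup` grows `counter` on each `ConnectionError` but never actually sleeps on it. A plainer retry-with-backoff pattern, sketched with hypothetical names and a fake flaky call in place of a real request:

```python
import time

def retry(fn, retries=5, base=5, factor=2, sleep=time.sleep):
    # call fn; on ConnectionError wait, then double the delay and try again
    delay = base
    for _ in range(retries):
        try:
            return fn()
        except ConnectionError:
            sleep(delay)
            delay *= factor
    raise RuntimeError('giving up after {} retries'.format(retries))

waits, state = [], {'n': 0}
def flaky():  # fake call: fails twice, then succeeds
    state['n'] += 1
    if state['n'] < 3:
        raise ConnectionError
    return 'ok'

assert retry(flaky, sleep=waits.append) == 'ok'
assert waits == [5, 10]    # exponential delays, recorded instead of slept
```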
51abd91ccb37ec109dea7efe2f32b0ecff853a61e33a1982bdd394ac533ecf4a
def participle(in_path, out_path, with_label=False, t2s_path=None): '对形如id 句子1 句子2 (标签)的文件进行分词并保存\n :param in_path: 单个文件路径\n :param out_path: 分词后所得文件的保存地址\n :param with_label: 输入文件是否带有标签\n :param t2s_path: 繁体转简体字典路径\n :return:\n ' t2s = None if isinstance(t2s_path, str): t2s = loadDict(t2s_path) try: jieba.load_userdict('data/myDict.txt') except IOError: jieba.load_userdict('myDict.txt') else: print('Here needs correct path for dict!') target = codecs.open(out_path, 'w', encoding='utf-8') en2ch = {'huabei': '花呗', 'jiebei': '借呗', 'mayi': '蚂蚁', 'xiugai': '修改', 'zhifu': '支付', 'zhifubao': '支付宝', 'mobike': '摩拜', 'zhebi': '这笔', 'xinyong': '信用', 'neng': '能', 'buneng': '不能', 'keyi': '可以', 'tongguo': '通过', 'changshi': '尝试', 'bunengyongle': '不能用了', 'mobie': '摩拜', 'feichang': '非常', 'huankuan': '还款', 'huanqian': '还钱', 'jieqian': '借钱', 'shouqian': '收钱', 'shoukuan': '收款'} with codecs.open(in_path, 'r', encoding='utf-8') as f: print('open a file.') lineNum = 1 line = f.readline() while line: print('---processing ', lineNum, ' article---') if isinstance(t2s_path, str): for (k, v) in t2s.items(): line = line.replace(k, v) for (k, v) in sorted(en2ch.items(), key=(lambda x: len(x[0])), reverse=True): line = line.replace(k, v) line = str_to_list(line) line1 = process_str(line[1]) line2 = process_str(line[2]) sent1 = '|'.join([w.strip() for w in jieba.cut(line1) if (len(w.strip()) > 0)]) sent2 = '|'.join([w.strip() for w in jieba.cut(line2) if (len(w.strip()) > 0)]) line_ = ((((line[0] + '\t') + sent1) + '\t') + sent2) if with_label: line_ += ('\t' + line[3]) target.write((line_ + '\n')) lineNum = (lineNum + 1) line = f.readline() print('well done.') target.close()
对形如id 句子1 句子2 (标签)的文件进行分词并保存 :param in_path: 单个文件路径 :param out_path: 分词后所得文件的保存地址 :param with_label: 输入文件是否带有标签 :param t2s_path: 繁体转简体字典路径 :return:
data/data_utils.py
participle
wslc1314/atec_nlp_sim_update
3
python
def participle(in_path, out_path, with_label=False, t2s_path=None): '对形如id 句子1 句子2 (标签)的文件进行分词并保存\n :param in_path: 单个文件路径\n :param out_path: 分词后所得文件的保存地址\n :param with_label: 输入文件是否带有标签\n :param t2s_path: 繁体转简体字典路径\n :return:\n ' t2s = None if isinstance(t2s_path, str): t2s = loadDict(t2s_path) try: jieba.load_userdict('data/myDict.txt') except IOError: jieba.load_userdict('myDict.txt') else: print('Here needs correct path for dict!') target = codecs.open(out_path, 'w', encoding='utf-8') en2ch = {'huabei': '花呗', 'jiebei': '借呗', 'mayi': '蚂蚁', 'xiugai': '修改', 'zhifu': '支付', 'zhifubao': '支付宝', 'mobike': '摩拜', 'zhebi': '这笔', 'xinyong': '信用', 'neng': '能', 'buneng': '不能', 'keyi': '可以', 'tongguo': '通过', 'changshi': '尝试', 'bunengyongle': '不能用了', 'mobie': '摩拜', 'feichang': '非常', 'huankuan': '还款', 'huanqian': '还钱', 'jieqian': '借钱', 'shouqian': '收钱', 'shoukuan': '收款'} with codecs.open(in_path, 'r', encoding='utf-8') as f: print('open a file.') lineNum = 1 line = f.readline() while line: print('---processing ', lineNum, ' article---') if isinstance(t2s_path, str): for (k, v) in t2s.items(): line = line.replace(k, v) for (k, v) in sorted(en2ch.items(), key=(lambda x: len(x[0])), reverse=True): line = line.replace(k, v) line = str_to_list(line) line1 = process_str(line[1]) line2 = process_str(line[2]) sent1 = '|'.join([w.strip() for w in jieba.cut(line1) if (len(w.strip()) > 0)]) sent2 = '|'.join([w.strip() for w in jieba.cut(line2) if (len(w.strip()) > 0)]) line_ = ((((line[0] + '\t') + sent1) + '\t') + sent2) if with_label: line_ += ('\t' + line[3]) target.write((line_ + '\n')) lineNum = (lineNum + 1) line = f.readline() print('well done.') target.close()
def participle(in_path, out_path, with_label=False, t2s_path=None): '对形如id 句子1 句子2 (标签)的文件进行分词并保存\n :param in_path: 单个文件路径\n :param out_path: 分词后所得文件的保存地址\n :param with_label: 输入文件是否带有标签\n :param t2s_path: 繁体转简体字典路径\n :return:\n ' t2s = None if isinstance(t2s_path, str): t2s = loadDict(t2s_path) try: jieba.load_userdict('data/myDict.txt') except IOError: jieba.load_userdict('myDict.txt') else: print('Here needs correct path for dict!') target = codecs.open(out_path, 'w', encoding='utf-8') en2ch = {'huabei': '花呗', 'jiebei': '借呗', 'mayi': '蚂蚁', 'xiugai': '修改', 'zhifu': '支付', 'zhifubao': '支付宝', 'mobike': '摩拜', 'zhebi': '这笔', 'xinyong': '信用', 'neng': '能', 'buneng': '不能', 'keyi': '可以', 'tongguo': '通过', 'changshi': '尝试', 'bunengyongle': '不能用了', 'mobie': '摩拜', 'feichang': '非常', 'huankuan': '还款', 'huanqian': '还钱', 'jieqian': '借钱', 'shouqian': '收钱', 'shoukuan': '收款'} with codecs.open(in_path, 'r', encoding='utf-8') as f: print('open a file.') lineNum = 1 line = f.readline() while line: print('---processing ', lineNum, ' article---') if isinstance(t2s_path, str): for (k, v) in t2s.items(): line = line.replace(k, v) for (k, v) in sorted(en2ch.items(), key=(lambda x: len(x[0])), reverse=True): line = line.replace(k, v) line = str_to_list(line) line1 = process_str(line[1]) line2 = process_str(line[2]) sent1 = '|'.join([w.strip() for w in jieba.cut(line1) if (len(w.strip()) > 0)]) sent2 = '|'.join([w.strip() for w in jieba.cut(line2) if (len(w.strip()) > 0)]) line_ = ((((line[0] + '\t') + sent1) + '\t') + sent2) if with_label: line_ += ('\t' + line[3]) target.write((line_ + '\n')) lineNum = (lineNum + 1) line = f.readline() print('well done.') target.close()<|docstring|>对形如id 句子1 句子2 (标签)的文件进行分词并保存 :param in_path: 单个文件路径 :param out_path: 分词后所得文件的保存地址 :param with_label: 输入文件是否带有标签 :param t2s_path: 繁体转简体字典路径 :return:<|endoftext|>
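`participle` applies its pinyin-to-Chinese map longest key first, so `zhifubao` is rewritten as a whole before the shorter `zhifu` can clobber its prefix. A minimal sketch of that step (the mapping below is a subset of the real one):

```python
en2ch = {'zhifu': '支付', 'zhifubao': '支付宝', 'huabei': '花呗'}  # subset of the real map

def replace_longest_first(line, mapping):
    # sort keys by descending length so longer matches win
    for k, v in sorted(mapping.items(), key=lambda x: len(x[0]), reverse=True):
        line = line.replace(k, v)
    return line

assert replace_longest_first('zhifubao huabei', en2ch) == '支付宝 花呗'
```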
ad3d7946bcecaba8a333b36cf6419cad1cd42395452f2a2107e0272648879caa
def split_train_val(trainFile, num_split=10, random_state=19941229): '将训练数据划分为训练集和验证集\n ' with codecs.open(trainFile, 'r', encoding='utf-8') as f: raw_data = f.readlines() kf = KFold(n_splits=num_split, shuffle=True, random_state=random_state) save_dir = ensure_dir_exist(os.path.join(os.path.dirname(trainFile), str(num_split))) count = 0 for (train_index, test_index) in kf.split(raw_data): train = np.asarray(raw_data)[train_index].tolist() test = np.asarray(raw_data)[test_index].tolist() with codecs.open((((save_dir + '/train') + str(count)) + '.csv'), 'w', encoding='utf-8') as f: f.writelines(train) with codecs.open((((save_dir + '/valid') + str(count)) + '.csv'), 'w', encoding='utf-8') as f: f.writelines(test) count += 1
将训练数据划分为训练集和验证集
data/data_utils.py
split_train_val
wslc1314/atec_nlp_sim_update
3
python
def split_train_val(trainFile, num_split=10, random_state=19941229): '\n ' with codecs.open(trainFile, 'r', encoding='utf-8') as f: raw_data = f.readlines() kf = KFold(n_splits=num_split, shuffle=True, random_state=random_state) save_dir = ensure_dir_exist(os.path.join(os.path.dirname(trainFile), str(num_split))) count = 0 for (train_index, test_index) in kf.split(raw_data): train = np.asarray(raw_data)[train_index].tolist() test = np.asarray(raw_data)[test_index].tolist() with codecs.open((((save_dir + '/train') + str(count)) + '.csv'), 'w', encoding='utf-8') as f: f.writelines(train) with codecs.open((((save_dir + '/valid') + str(count)) + '.csv'), 'w', encoding='utf-8') as f: f.writelines(test) count += 1
def split_train_val(trainFile, num_split=10, random_state=19941229): '\n ' with codecs.open(trainFile, 'r', encoding='utf-8') as f: raw_data = f.readlines() kf = KFold(n_splits=num_split, shuffle=True, random_state=random_state) save_dir = ensure_dir_exist(os.path.join(os.path.dirname(trainFile), str(num_split))) count = 0 for (train_index, test_index) in kf.split(raw_data): train = np.asarray(raw_data)[train_index].tolist() test = np.asarray(raw_data)[test_index].tolist() with codecs.open((((save_dir + '/train') + str(count)) + '.csv'), 'w', encoding='utf-8') as f: f.writelines(train) with codecs.open((((save_dir + '/valid') + str(count)) + '.csv'), 'w', encoding='utf-8') as f: f.writelines(test) count += 1<|docstring|>将训练数据划分为训练集和验证集<|endoftext|>
11bb26c2dc7c693d2a392447e592edea19ead0de094a91d93df14d3ccc450ee6
def read_cut_file(file_path, with_label=False, dictPathW=None, dictPathC=None, modeC=0): '对形如id 句子1 句子2 (标签)的分词后的文件进行读取\n :param modeC: 0 字表示句子;1 字表示词,词表示句子;>1 modeC个数的字表示词,词表示句子\n ' (index, label) = ([], []) (sent1, sent2, sent1_len, sent2_len) = ([], [], [], []) (sent1c, sent2c, sent1c_len, sent2c_len) = ([], [], [], []) (v2i_w, v2i_c) = (None, None) if isinstance(dictPathW, str): v2i_w = loadDict(dictPathW)['word']['v2i'] if isinstance(dictPathC, str): v2i_c = loadDict(dictPathC)['char']['v2i'] with codecs.open(file_path, 'r', encoding='utf-8') as f: raw_data = f.readlines() for line in raw_data: line = str_to_list(line) index.append(int(line[0])) tmp1 = [t.strip() for t in str_to_list(line[1], '|') if (len(t.strip()) > 0)] tmp2 = [t.strip() for t in str_to_list(line[2], '|') if (len(t.strip()) > 0)] if isinstance(dictPathW, str): sent1_ = list(map((lambda s: int(v2i_w.get(s, v2i_w['<unk>']))), tmp1)) sent2_ = list(map((lambda s: int(v2i_w.get(s, v2i_w['<unk>']))), tmp2)) else: sent1_ = tmp1[:] sent2_ = tmp2[:] if (modeC == 0): if isinstance(dictPathC, str): sent1c_ = list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), [_ for _ in ''.join(tmp1)])) sent2c_ = list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), [_ for _ in ''.join(tmp2)])) else: sent1c_ = list([_ for _ in ''.join(tmp1)]) sent2c_ = list([_ for _ in ''.join(tmp2)]) sent1c_len.append(len(sent1c_)) sent2c_len.append(len(sent2c_)) else: if isinstance(dictPathC, str): sent1c_ = [list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), t)) for t in tmp1] sent2c_ = [list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), t)) for t in tmp2] else: sent1c_ = [[t_ for t_ in t] for t in tmp1] sent2c_ = [[t_ for t_ in t] for t in tmp2] sent1c_len.append([len(s) for s in sent1c_]) sent2c_len.append([len(s) for s in sent2c_]) if (modeC > 1): if isinstance(dictPathC, str): sent1c_ = [(t + (modeC * [v2i_c['<pad>']]))[:modeC] for t in sent1c_] sent2c_ = [(t + (modeC * [v2i_c['<pad>']]))[:modeC] for t in 
sent2c_] else: sent1c_ = [(t + (modeC * ['<pad>']))[:modeC] for t in sent1c_] sent2c_ = [(t + (modeC * ['<pad>']))[:modeC] for t in sent2c_] assert np.allclose(np.asarray([len(s) for s in sent1c_]), np.asarray([modeC])) assert np.allclose(np.asarray([len(s) for s in sent2c_]), np.asarray([modeC])) sent1.append(sent1_) sent2.append(sent2_) sent1_len.append(len(sent1_)) sent2_len.append(len(sent2_)) sent1c.append(sent1c_) sent2c.append(sent2c_) if with_label: label.append(int(line[3])) else: label.append(None) res = {'id': index, 'label': label, 'sent1w': sent1, 'sent2w': sent2, 'sent1w_len': sent1_len, 'sent2w_len': sent2_len, 'sent1c': sent1c, 'sent2c': sent2c, 'sent1c_len': sent1c_len, 'sent2c_len': sent2c_len} return res
对形如id 句子1 句子2 (标签)的分词后的文件进行读取 :param modeC: 0 字表示句子;1 字表示词,词表示句子;>1 modeC个数的字表示词,词表示句子
data/data_utils.py
read_cut_file
wslc1314/atec_nlp_sim_update
3
python
def read_cut_file(file_path, with_label=False, dictPathW=None, dictPathC=None, modeC=0): '对形如id 句子1 句子2 (标签)的分词后的文件进行读取\n    :param modeC: 0 字表示句子;1 字表示词,词表示句子;>1 modeC个数的字表示词,词表示句子\n    ' (index, label) = ([], []) (sent1, sent2, sent1_len, sent2_len) = ([], [], [], []) (sent1c, sent2c, sent1c_len, sent2c_len) = ([], [], [], []) (v2i_w, v2i_c) = (None, None) if isinstance(dictPathW, str): v2i_w = loadDict(dictPathW)['word']['v2i'] if isinstance(dictPathC, str): v2i_c = loadDict(dictPathC)['char']['v2i'] with codecs.open(file_path, 'r', encoding='utf-8') as f: raw_data = f.readlines() for line in raw_data: line = str_to_list(line) index.append(int(line[0])) tmp1 = [t.strip() for t in str_to_list(line[1], '|') if (len(t.strip()) > 0)] tmp2 = [t.strip() for t in str_to_list(line[2], '|') if (len(t.strip()) > 0)] if isinstance(dictPathW, str): sent1_ = list(map((lambda s: int(v2i_w.get(s, v2i_w['<unk>']))), tmp1)) sent2_ = list(map((lambda s: int(v2i_w.get(s, v2i_w['<unk>']))), tmp2)) else: sent1_ = tmp1[:] sent2_ = tmp2[:] if (modeC == 0): if isinstance(dictPathC, str): sent1c_ = list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), [_ for _ in ''.join(tmp1)])) sent2c_ = list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), [_ for _ in ''.join(tmp2)])) else: sent1c_ = list([_ for _ in ''.join(tmp1)]) sent2c_ = list([_ for _ in ''.join(tmp2)]) sent1c_len.append(len(sent1c_)) sent2c_len.append(len(sent2c_)) else: if isinstance(dictPathC, str): sent1c_ = [list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), t)) for t in tmp1] sent2c_ = [list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), t)) for t in tmp2] else: sent1c_ = [[t_ for t_ in t] for t in tmp1] sent2c_ = [[t_ for t_ in t] for t in tmp2] sent1c_len.append([len(s) for s in sent1c_]) sent2c_len.append([len(s) for s in sent2c_]) if (modeC > 1): if isinstance(dictPathC, str): sent1c_ = [(t + (modeC * [v2i_c['<pad>']]))[:modeC] for t in sent1c_] sent2c_ = [(t + (modeC * [v2i_c['<pad>']]))[:modeC] for t in sent2c_] else: 
sent1c_ = [(t + (modeC * ['<pad>']))[:modeC] for t in sent1c_] sent2c_ = [(t + (modeC * ['<pad>']))[:modeC] for t in sent2c_] assert np.allclose(np.asarray([len(s) for s in sent1c_]), np.asarray([modeC])) assert np.allclose(np.asarray([len(s) for s in sent2c_]), np.asarray([modeC])) sent1.append(sent1_) sent2.append(sent2_) sent1_len.append(len(sent1_)) sent2_len.append(len(sent2_)) sent1c.append(sent1c_) sent2c.append(sent2c_) if with_label: label.append(int(line[3])) else: label.append(None) res = {'id': index, 'label': label, 'sent1w': sent1, 'sent2w': sent2, 'sent1w_len': sent1_len, 'sent2w_len': sent2_len, 'sent1c': sent1c, 'sent2c': sent2c, 'sent1c_len': sent1c_len, 'sent2c_len': sent2c_len} return res
def read_cut_file(file_path, with_label=False, dictPathW=None, dictPathC=None, modeC=0): '对形如id 句子1 句子2 (标签)的分词后的文件进行读取\n    :param modeC: 0 字表示句子;1 字表示词,词表示句子;>1 modeC个数的字表示词,词表示句子\n    ' (index, label) = ([], []) (sent1, sent2, sent1_len, sent2_len) = ([], [], [], []) (sent1c, sent2c, sent1c_len, sent2c_len) = ([], [], [], []) (v2i_w, v2i_c) = (None, None) if isinstance(dictPathW, str): v2i_w = loadDict(dictPathW)['word']['v2i'] if isinstance(dictPathC, str): v2i_c = loadDict(dictPathC)['char']['v2i'] with codecs.open(file_path, 'r', encoding='utf-8') as f: raw_data = f.readlines() for line in raw_data: line = str_to_list(line) index.append(int(line[0])) tmp1 = [t.strip() for t in str_to_list(line[1], '|') if (len(t.strip()) > 0)] tmp2 = [t.strip() for t in str_to_list(line[2], '|') if (len(t.strip()) > 0)] if isinstance(dictPathW, str): sent1_ = list(map((lambda s: int(v2i_w.get(s, v2i_w['<unk>']))), tmp1)) sent2_ = list(map((lambda s: int(v2i_w.get(s, v2i_w['<unk>']))), tmp2)) else: sent1_ = tmp1[:] sent2_ = tmp2[:] if (modeC == 0): if isinstance(dictPathC, str): sent1c_ = list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), [_ for _ in ''.join(tmp1)])) sent2c_ = list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), [_ for _ in ''.join(tmp2)])) else: sent1c_ = list([_ for _ in ''.join(tmp1)]) sent2c_ = list([_ for _ in ''.join(tmp2)]) sent1c_len.append(len(sent1c_)) sent2c_len.append(len(sent2c_)) else: if isinstance(dictPathC, str): sent1c_ = [list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), t)) for t in tmp1] sent2c_ = [list(map((lambda s: int(v2i_c.get(s, v2i_c['<unk>']))), t)) for t in tmp2] else: sent1c_ = [[t_ for t_ in t] for t in tmp1] sent2c_ = [[t_ for t_ in t] for t in tmp2] sent1c_len.append([len(s) for s in sent1c_]) sent2c_len.append([len(s) for s in sent2c_]) if (modeC > 1): if isinstance(dictPathC, str): sent1c_ = [(t + (modeC * [v2i_c['<pad>']]))[:modeC] for t in sent1c_] sent2c_ = [(t + (modeC * [v2i_c['<pad>']]))[:modeC] for t in sent2c_] else: 
sent1c_ = [(t + (modeC * ['<pad>']))[:modeC] for t in sent1c_] sent2c_ = [(t + (modeC * ['<pad>']))[:modeC] for t in sent2c_] assert np.allclose(np.asarray([len(s) for s in sent1c_]), np.asarray([modeC])) assert np.allclose(np.asarray([len(s) for s in sent2c_]), np.asarray([modeC])) sent1.append(sent1_) sent2.append(sent2_) sent1_len.append(len(sent1_)) sent2_len.append(len(sent2_)) sent1c.append(sent1c_) sent2c.append(sent2c_) if with_label: label.append(int(line[3])) else: label.append(None) res = {'id': index, 'label': label, 'sent1w': sent1, 'sent2w': sent2, 'sent1w_len': sent1_len, 'sent2w_len': sent2_len, 'sent1c': sent1c, 'sent2c': sent2c, 'sent1c_len': sent1c_len, 'sent2c_len': sent2c_len} return res<|docstring|>对形如id 句子1 句子2 (标签)的分词后的文件进行读取 :param modeC: 0 字表示句子;1 字表示词,词表示句子;>1 modeC个数的字表示词,词表示句子<|endoftext|>
e9663fdd972d15f1889b1d61faac1e05834db78aa3e9d731b79695296f213f95
def test_match_templates(self): '\n Test with simple interface check for matching best templates with the std star flux\n ' from lvmspec.fluxcalibration import match_templates frame = get_frame_data() flux = {'b': frame.flux, 'r': (frame.flux * 1.1), 'z': (frame.flux * 1.2)} wave = {'b': frame.wave, 'r': (frame.wave + 10), 'z': (frame.wave + 20)} ivar = {'b': frame.ivar, 'r': (frame.ivar / 1.1), 'z': (frame.ivar / 1.2)} resol_data = {'b': frame.resolution_data, 'r': frame.resolution_data, 'z': frame.resolution_data} nmodels = 10 (modelwave, modelflux) = get_models(nmodels) teff = np.random.uniform(5000, 7000, nmodels) logg = np.random.uniform(4.0, 5.0, nmodels) feh = np.random.uniform((- 2.5), (- 0.5), nmodels) stdfibers = np.random.choice(9, 3, replace=False) frame.fibermap['OBJTYPE'][stdfibers] = 'STD' bestid = (- np.ones(len(stdfibers))) bestwave = np.zeros((bestid.shape[0], modelflux.shape[1])) bestflux = np.zeros((bestid.shape[0], modelflux.shape[1])) red_chisq = np.zeros(len(stdfibers)) for i in range(len(stdfibers)): stdflux = {'b': flux['b'][i], 'r': flux['r'][i], 'z': flux['z'][i]} stdivar = {'b': ivar['b'][i], 'r': ivar['r'][i], 'z': ivar['z'][i]} stdresol_data = {'b': resol_data['b'][i], 'r': resol_data['r'][i], 'z': resol_data['z'][i]} (bestid, redshift, chi2) = match_templates(wave, stdflux, stdivar, stdresol_data, modelwave, modelflux, teff, logg, feh)
Test with simple interface check for matching best templates with the std star flux
py/lvmspec/test/test_flux_calibration.py
test_match_templates
sdss/lvmspec
0
python
def test_match_templates(self): '\n \n ' from lvmspec.fluxcalibration import match_templates frame = get_frame_data() flux = {'b': frame.flux, 'r': (frame.flux * 1.1), 'z': (frame.flux * 1.2)} wave = {'b': frame.wave, 'r': (frame.wave + 10), 'z': (frame.wave + 20)} ivar = {'b': frame.ivar, 'r': (frame.ivar / 1.1), 'z': (frame.ivar / 1.2)} resol_data = {'b': frame.resolution_data, 'r': frame.resolution_data, 'z': frame.resolution_data} nmodels = 10 (modelwave, modelflux) = get_models(nmodels) teff = np.random.uniform(5000, 7000, nmodels) logg = np.random.uniform(4.0, 5.0, nmodels) feh = np.random.uniform((- 2.5), (- 0.5), nmodels) stdfibers = np.random.choice(9, 3, replace=False) frame.fibermap['OBJTYPE'][stdfibers] = 'STD' bestid = (- np.ones(len(stdfibers))) bestwave = np.zeros((bestid.shape[0], modelflux.shape[1])) bestflux = np.zeros((bestid.shape[0], modelflux.shape[1])) red_chisq = np.zeros(len(stdfibers)) for i in range(len(stdfibers)): stdflux = {'b': flux['b'][i], 'r': flux['r'][i], 'z': flux['z'][i]} stdivar = {'b': ivar['b'][i], 'r': ivar['r'][i], 'z': ivar['z'][i]} stdresol_data = {'b': resol_data['b'][i], 'r': resol_data['r'][i], 'z': resol_data['z'][i]} (bestid, redshift, chi2) = match_templates(wave, stdflux, stdivar, stdresol_data, modelwave, modelflux, teff, logg, feh)
def test_match_templates(self): '\n \n ' from lvmspec.fluxcalibration import match_templates frame = get_frame_data() flux = {'b': frame.flux, 'r': (frame.flux * 1.1), 'z': (frame.flux * 1.2)} wave = {'b': frame.wave, 'r': (frame.wave + 10), 'z': (frame.wave + 20)} ivar = {'b': frame.ivar, 'r': (frame.ivar / 1.1), 'z': (frame.ivar / 1.2)} resol_data = {'b': frame.resolution_data, 'r': frame.resolution_data, 'z': frame.resolution_data} nmodels = 10 (modelwave, modelflux) = get_models(nmodels) teff = np.random.uniform(5000, 7000, nmodels) logg = np.random.uniform(4.0, 5.0, nmodels) feh = np.random.uniform((- 2.5), (- 0.5), nmodels) stdfibers = np.random.choice(9, 3, replace=False) frame.fibermap['OBJTYPE'][stdfibers] = 'STD' bestid = (- np.ones(len(stdfibers))) bestwave = np.zeros((bestid.shape[0], modelflux.shape[1])) bestflux = np.zeros((bestid.shape[0], modelflux.shape[1])) red_chisq = np.zeros(len(stdfibers)) for i in range(len(stdfibers)): stdflux = {'b': flux['b'][i], 'r': flux['r'][i], 'z': flux['z'][i]} stdivar = {'b': ivar['b'][i], 'r': ivar['r'][i], 'z': ivar['z'][i]} stdresol_data = {'b': resol_data['b'][i], 'r': resol_data['r'][i], 'z': resol_data['z'][i]} (bestid, redshift, chi2) = match_templates(wave, stdflux, stdivar, stdresol_data, modelwave, modelflux, teff, logg, feh)<|docstring|>Test with simple interface check for matching best templates with the std star flux<|endoftext|>
8a3a16bde07fd09c5c9ae5f94eb5282bd86c4a4476ae093260095a28ccae47fc
def test_normalize_templates(self): '\n Test for normalization to a given magnitude for calibration\n ' stdwave = np.linspace(3000, 11000, 10000) stdflux = (np.cos(stdwave) + 100.0) mags = np.array((20, 21)) filters = ['SDSS_I', 'SDSS_R'] normflux = normalize_templates(stdwave, stdflux, mags, filters) self.assertEqual(stdflux.shape, normflux.shape) r = speclite.filters.load_filter('sdss2010-r') rmag = r.get_ab_magnitude((1e-17 * normflux), stdwave) self.assertAlmostEqual(rmag, mags[1])
Test for normalization to a given magnitude for calibration
py/lvmspec/test/test_flux_calibration.py
test_normalize_templates
sdss/lvmspec
0
python
def test_normalize_templates(self): '\n \n ' stdwave = np.linspace(3000, 11000, 10000) stdflux = (np.cos(stdwave) + 100.0) mags = np.array((20, 21)) filters = ['SDSS_I', 'SDSS_R'] normflux = normalize_templates(stdwave, stdflux, mags, filters) self.assertEqual(stdflux.shape, normflux.shape) r = speclite.filters.load_filter('sdss2010-r') rmag = r.get_ab_magnitude((1e-17 * normflux), stdwave) self.assertAlmostEqual(rmag, mags[1])
def test_normalize_templates(self): '\n \n ' stdwave = np.linspace(3000, 11000, 10000) stdflux = (np.cos(stdwave) + 100.0) mags = np.array((20, 21)) filters = ['SDSS_I', 'SDSS_R'] normflux = normalize_templates(stdwave, stdflux, mags, filters) self.assertEqual(stdflux.shape, normflux.shape) r = speclite.filters.load_filter('sdss2010-r') rmag = r.get_ab_magnitude((1e-17 * normflux), stdwave) self.assertAlmostEqual(rmag, mags[1])<|docstring|>Test for normalization to a given magnitude for calibration<|endoftext|>
c4185afb91727d8386e5d96fc0f8d79fe45f25f26d1b1c6054859a37904f6d11
def test_compute_fluxcalibration(self): ' Test compute_fluxcalibration interface\n ' frame = get_frame_data() (modelwave, modelflux) = get_models() stdfibers = np.random.choice(9, 3, replace=False) frame.fibermap['OBJTYPE'][stdfibers] = 'STD' input_model_wave = modelwave input_model_flux = modelflux[0:3] fluxCalib = compute_flux_calibration(frame, input_model_wave, input_model_flux, input_model_fibers=stdfibers, nsig_clipping=4.0) self.assertTrue(np.array_equal(fluxCalib.wave, frame.wave)) self.assertEqual(fluxCalib.calib.shape, frame.flux.shape) self.assertFalse(np.any(fluxCalib.mask))
Test compute_fluxcalibration interface
py/lvmspec/test/test_flux_calibration.py
test_compute_fluxcalibration
sdss/lvmspec
0
python
def test_compute_fluxcalibration(self): ' \n ' frame = get_frame_data() (modelwave, modelflux) = get_models() stdfibers = np.random.choice(9, 3, replace=False) frame.fibermap['OBJTYPE'][stdfibers] = 'STD' input_model_wave = modelwave input_model_flux = modelflux[0:3] fluxCalib = compute_flux_calibration(frame, input_model_wave, input_model_flux, input_model_fibers=stdfibers, nsig_clipping=4.0) self.assertTrue(np.array_equal(fluxCalib.wave, frame.wave)) self.assertEqual(fluxCalib.calib.shape, frame.flux.shape) self.assertFalse(np.any(fluxCalib.mask))
def test_compute_fluxcalibration(self): ' \n ' frame = get_frame_data() (modelwave, modelflux) = get_models() stdfibers = np.random.choice(9, 3, replace=False) frame.fibermap['OBJTYPE'][stdfibers] = 'STD' input_model_wave = modelwave input_model_flux = modelflux[0:3] fluxCalib = compute_flux_calibration(frame, input_model_wave, input_model_flux, input_model_fibers=stdfibers, nsig_clipping=4.0) self.assertTrue(np.array_equal(fluxCalib.wave, frame.wave)) self.assertEqual(fluxCalib.calib.shape, frame.flux.shape) self.assertFalse(np.any(fluxCalib.mask))<|docstring|>Test compute_fluxcalibration interface<|endoftext|>
7eeac7116dcc13f840503ee17ee055cedb8da1b940d7ef115c5e553f5b5b1cef
def test_outliers(self): 'Test fluxcalib when input starts with large outliers' frame = get_frame_data() (modelwave, modelflux) = get_models() nstd = 5 frame.fibermap['OBJTYPE'][0:nstd] = 'STD' nstd = np.count_nonzero((frame.fibermap['OBJTYPE'] == 'STD')) frame.flux[0] = np.mean(frame.flux[0]) fluxCalib = compute_flux_calibration(frame, modelwave, modelflux[0:nstd], input_model_fibers=np.arange(nstd))
Test fluxcalib when input starts with large outliers
py/lvmspec/test/test_flux_calibration.py
test_outliers
sdss/lvmspec
0
python
def test_outliers(self): frame = get_frame_data() (modelwave, modelflux) = get_models() nstd = 5 frame.fibermap['OBJTYPE'][0:nstd] = 'STD' nstd = np.count_nonzero((frame.fibermap['OBJTYPE'] == 'STD')) frame.flux[0] = np.mean(frame.flux[0]) fluxCalib = compute_flux_calibration(frame, modelwave, modelflux[0:nstd], input_model_fibers=np.arange(nstd))
def test_outliers(self): frame = get_frame_data() (modelwave, modelflux) = get_models() nstd = 5 frame.fibermap['OBJTYPE'][0:nstd] = 'STD' nstd = np.count_nonzero((frame.fibermap['OBJTYPE'] == 'STD')) frame.flux[0] = np.mean(frame.flux[0]) fluxCalib = compute_flux_calibration(frame, modelwave, modelflux[0:nstd], input_model_fibers=np.arange(nstd))<|docstring|>Test fluxcalib when input starts with large outliers<|endoftext|>
6319421916e107001824abffc793425f7c73419639b47de0192c3c7c609df346
def test_masked_data(self): 'Test compute_fluxcalibration with some ivar=0 data\n        ' frame = get_frame_data() (modelwave, modelflux) = get_models() nstd = 1 frame.fibermap['OBJTYPE'][2:(2 + nstd)] = 'STD' frame.ivar[2:(2 + nstd), 20:22] = 0 fluxCalib = compute_flux_calibration(frame, modelwave, modelflux[2:(2 + nstd)], input_model_fibers=np.arange(2, (2 + nstd)), debug=True) self.assertTrue(np.array_equal(fluxCalib.wave, frame.wave)) self.assertEqual(fluxCalib.calib.shape, frame.flux.shape)
Test compute_fluxcalibration with some ivar=0 data
py/lvmspec/test/test_flux_calibration.py
test_masked_data
sdss/lvmspec
0
python
def test_masked_data(self): '\n        ' frame = get_frame_data() (modelwave, modelflux) = get_models() nstd = 1 frame.fibermap['OBJTYPE'][2:(2 + nstd)] = 'STD' frame.ivar[2:(2 + nstd), 20:22] = 0 fluxCalib = compute_flux_calibration(frame, modelwave, modelflux[2:(2 + nstd)], input_model_fibers=np.arange(2, (2 + nstd)), debug=True) self.assertTrue(np.array_equal(fluxCalib.wave, frame.wave)) self.assertEqual(fluxCalib.calib.shape, frame.flux.shape)
def test_masked_data(self): '\n        ' frame = get_frame_data() (modelwave, modelflux) = get_models() nstd = 1 frame.fibermap['OBJTYPE'][2:(2 + nstd)] = 'STD' frame.ivar[2:(2 + nstd), 20:22] = 0 fluxCalib = compute_flux_calibration(frame, modelwave, modelflux[2:(2 + nstd)], input_model_fibers=np.arange(2, (2 + nstd)), debug=True) self.assertTrue(np.array_equal(fluxCalib.wave, frame.wave)) self.assertEqual(fluxCalib.calib.shape, frame.flux.shape)<|docstring|>Test compute_fluxcalibration with some ivar=0 data<|endoftext|>
b6bff0a1d0091556961cd96e41c234bfab413f81f3e679b22d17b862378e62d2
@property def saturation(self): ' The wait queue length of the sda is the 9th column in\n /sys/block/sda/stat.' wait_queue_lengths = list() with open('/proc/diskstats') as diskstats_file: for line in diskstats_file: tokens = line.split() if (tokens[2] in self._partitions): wait_queue_lengths.append((tokens[2], tokens[11])) return tuple(wait_queue_lengths)
The wait queue length of the sda is the 9th column in /sys/block/sda/stat.
use/metrics/storio.py
saturation
atsikiridis/use-tool-py
0
python
@property def saturation(self): ' The wait queue length of the sda is the 9th column in\n /sys/block/sda/stat.' wait_queue_lengths = list() with open('/proc/diskstats') as diskstats_file: for line in diskstats_file: tokens = line.split() if (tokens[2] in self._partitions): wait_queue_lengths.append((tokens[2], tokens[11])) return tuple(wait_queue_lengths)
@property def saturation(self): ' The wait queue length of the sda is the 9th column in\n /sys/block/sda/stat.' wait_queue_lengths = list() with open('/proc/diskstats') as diskstats_file: for line in diskstats_file: tokens = line.split() if (tokens[2] in self._partitions): wait_queue_lengths.append((tokens[2], tokens[11])) return tuple(wait_queue_lengths)<|docstring|>The wait queue length of the sda is the 9th column in /sys/block/sda/stat.<|endoftext|>
17d4bc43ed2fe30c6d97d99e20997a20c49f61db0456dbb73c6578feb3d05d5b
@token.setter def token(self, token: str) -> None: '\n it allows to change the token in runtime.\n Ex.: (prod to test)\n :param token:\n :return:\n ' self._token = token
it allows to change the token in runtime. Ex.: (prod to test) :param token: :return:
python_iugu/client/client.py
token
guiflemes/python_iugu
2
python
@token.setter def token(self, token: str) -> None: '\n it allows to change the token in runtime.\n Ex.: (prod to test)\n :param token:\n :return:\n ' self._token = token
@token.setter def token(self, token: str) -> None: '\n it allows to change the token in runtime.\n Ex.: (prod to test)\n :param token:\n :return:\n ' self._token = token<|docstring|>it allows to change the token in runtime. Ex.: (prod to test) :param token: :return:<|endoftext|>
81e8ea3b0b73a1197bb1cf8037ff42849cea61e76d95308040dbdc1cfef5bf90
def add(self, asset: Asset) -> None: 'Commits an asset to persistence.\n\n Args:\n asset (Asset): Object to be persisted\n\n ' asset.passport.filepath = self._create_filepath(asset=asset) self._register_asset(asset) self._io.save(asset=asset, filepath=asset.filepath)
Commits an asset to persistence. Args: asset (Asset): Object to be persisted
cvr/core/asset.py
add
john-james-ai/cvr
0
python
def add(self, asset: Asset) -> None: 'Commits an asset to persistence.\n\n Args:\n asset (Asset): Object to be persisted\n\n ' asset.passport.filepath = self._create_filepath(asset=asset) self._register_asset(asset) self._io.save(asset=asset, filepath=asset.filepath)
def add(self, asset: Asset) -> None: 'Commits an asset to persistence.\n\n Args:\n asset (Asset): Object to be persisted\n\n ' asset.passport.filepath = self._create_filepath(asset=asset) self._register_asset(asset) self._io.save(asset=asset, filepath=asset.filepath)<|docstring|>Commits an asset to persistence. Args: asset (Asset): Object to be persisted<|endoftext|>
311e6b1d932f79b29ed85c05814f2d6f9fb1dd923ce91235ec858171b2ebb181
def get(self, asset_type: str, stage: str, name: str, version: int=None) -> Asset: 'Retrieves an asset by asset_type, stage, name, and optional version\n\n Args:\n asset_type (str): The class of the asset in lower case\n stage (str): The stage in the development pipeline\n name (str): The name of the asset\n version (int): The version of the asset\n Returns:\n asset (Asset): Asset being requested.\n ' if version: registry = self._search_registry_by_version(asset_type, name, stage, version) else: registry = self._search_registry_by_asset(asset_type, name, stage) try: filepath = registry['filepath'].values[0] return self._io.load(filepath) except IndexError: return None
Retrieves an asset by asset_type, stage, name, and optional version Args: asset_type (str): The class of the asset in lower case stage (str): The stage in the development pipeline name (str): The name of the asset version (int): The version of the asset Returns: asset (Asset): Asset being requested.
cvr/core/asset.py
get
john-james-ai/cvr
0
python
def get(self, asset_type: str, stage: str, name: str, version: int=None) -> Asset: 'Retrieves an asset by asset_type, stage, name, and optional version\n\n Args:\n asset_type (str): The class of the asset in lower case\n stage (str): The stage in the development pipeline\n name (str): The name of the asset\n version (int): The version of the asset\n Returns:\n asset (Asset): Asset being requested.\n ' if version: registry = self._search_registry_by_version(asset_type, name, stage, version) else: registry = self._search_registry_by_asset(asset_type, name, stage) try: filepath = registry['filepath'].values[0] return self._io.load(filepath) except IndexError: return None
def get(self, asset_type: str, stage: str, name: str, version: int=None) -> Asset: 'Retrieves an asset by asset_type, stage, name, and optional version\n\n Args:\n asset_type (str): The class of the asset in lower case\n stage (str): The stage in the development pipeline\n name (str): The name of the asset\n version (int): The version of the asset\n Returns:\n asset (Asset): Asset being requested.\n ' if version: registry = self._search_registry_by_version(asset_type, name, stage, version) else: registry = self._search_registry_by_asset(asset_type, name, stage) try: filepath = registry['filepath'].values[0] return self._io.load(filepath) except IndexError: return None<|docstring|>Retrieves an asset by asset_type, stage, name, and optional version Args: asset_type (str): The class of the asset in lower case stage (str): The stage in the development pipeline name (str): The name of the asset version (int): The version of the asset Returns: asset (Asset): Asset being requested.<|endoftext|>
233ef457566e5f9edf114b14d46f0242897c47bced2af76403bdc215a62da4a9
def get_by_aid(self, aid: str) -> Asset: 'Retrieves an asset by aid.\n\n Args:\n aid (str): asset id\n Returns:\n asset (Asset): Asset being requested.\n ' registry = self._io.load(self._registry) item = registry.loc[(registry['aid'] == aid)] try: filepath = item['filepath'].values[0] return self._io.load(filepath) except IndexError: return None
Retrieves an asset by aid. Args: aid (str): asset id Returns: asset (Asset): Asset being requested.
cvr/core/asset.py
get_by_aid
john-james-ai/cvr
0
python
def get_by_aid(self, aid: str) -> Asset: 'Retrieves an asset by aid.\n\n Args:\n aid (str): asset id\n Returns:\n asset (Asset): Asset being requested.\n ' registry = self._io.load(self._registry) item = registry.loc[(registry['aid'] == aid)] try: filepath = item['filepath'].values[0] return self._io.load(filepath) except IndexError: return None
def get_by_aid(self, aid: str) -> Asset: 'Retrieves an asset by aid.\n\n Args:\n aid (str): asset id\n Returns:\n asset (Asset): Asset being requested.\n ' registry = self._io.load(self._registry) item = registry.loc[(registry['aid'] == aid)] try: filepath = item['filepath'].values[0] return self._io.load(filepath) except IndexError: return None<|docstring|>Retrieves an asset by aid. Args: aid (str): asset id Returns: asset (Asset): Asset being requested.<|endoftext|>
26e8baf73d357c194537e86b7411b8768d894032ef6587fcd2a6b674209a1e5f
def get_assets(self, asset_type: str=None) -> pd.DataFrame: 'Returns the registry, optionally filtered by asset_type\n\n Args:\n asset_type (str): asset type\n\n Returns:\n assets (pd.DataFrame):\n ' assets = self._io.load(self._registry) if asset_type: return assets.loc[(assets['asset_type'] == asset_type)] else: return assets
Returns the registry, optionally filtered by asset_type Args: asset_type (str): asset type Returns: assets (pd.DataFrame):
cvr/core/asset.py
get_assets
john-james-ai/cvr
0
python
def get_assets(self, asset_type: str=None) -> pd.DataFrame: 'Returns the registry, optionally filtered by asset_type\n\n Args:\n asset_type (str): asset type\n\n Returns:\n assets (pd.DataFrame):\n ' assets = self._io.load(self._registry) if asset_type: return assets.loc[(assets['asset_type'] == asset_type)] else: return assets
def get_assets(self, asset_type: str=None) -> pd.DataFrame: 'Returns the registry, optionally filtered by asset_type\n\n Args:\n asset_type (str): asset type\n\n Returns:\n assets (pd.DataFrame):\n ' assets = self._io.load(self._registry) if asset_type: return assets.loc[(assets['asset_type'] == asset_type)] else: return assets<|docstring|>Returns the registry, optionally filtered by asset_type Args: asset_type (str): asset type Returns: assets (pd.DataFrame):<|endoftext|>
3b87fce9fe779ad67f9bdf9af18d57deb7a37d0b70eab9588759379137244b81
def set_version(self, asset: Asset) -> Asset: 'Sets the version number on the asset and returns it.\n\n Args:\n asset (Asset): Asset\n\n Returns:\n asset (Asset): Asset with version property set\n\n ' matching_assets = self._search_registry_by_asset(asset_type=asset.passport.asset_type, name=asset.passport.name, stage=asset.passport.stage) if (matching_assets is not None): asset.passport.version = len(matching_assets) else: asset.passport.version = 0 return asset
Sets the version number on the asset and returns it. Args: asset (Asset): Asset Returns: asset (Asset): Asset with version property set
cvr/core/asset.py
set_version
john-james-ai/cvr
0
python
def set_version(self, asset: Asset) -> Asset: 'Sets the version number on the asset and returns it.\n\n Args:\n asset (Asset): Asset\n\n Returns:\n asset (Asset): Asset with version property set\n\n ' matching_assets = self._search_registry_by_asset(asset_type=asset.passport.asset_type, name=asset.passport.name, stage=asset.passport.stage) if (matching_assets is not None): asset.passport.version = len(matching_assets) else: asset.passport.version = 0 return asset
def set_version(self, asset: Asset) -> Asset: 'Sets the version number on the asset and returns it.\n\n Args:\n asset (Asset): Asset\n\n Returns:\n asset (Asset): Asset with version property set\n\n ' matching_assets = self._search_registry_by_asset(asset_type=asset.passport.asset_type, name=asset.passport.name, stage=asset.passport.stage) if (matching_assets is not None): asset.passport.version = len(matching_assets) else: asset.passport.version = 0 return asset<|docstring|>Sets the version number on the asset and returns it. Args: asset (Asset): Asset Returns: asset (Asset): Asset with version property set<|endoftext|>
f91aee33798bdb8df96e08ad74d77a3dfb08974543d2a12ef589306d4c733673
def delete(self, aid: str, ignore_errors: bool=True) -> None: 'Deletes an asset, parameterized by the asset id.\n\n    Args:\n        aid (str): asset id\n    Returns:\n        None\n    ' registry = self._io.load(self._registry) try: item = registry.loc[(registry['aid'] == aid)] filepath = item['filepath'].values[0] self._io.remove(filepath=filepath) registry = registry.loc[(registry['aid'] != aid)] self._io.save(registry, self._registry) except AttributeError: return None
Deletes an asset, parameterized by the asset id. Args: aid (str): asset id Returns: None
cvr/core/asset.py
delete
john-james-ai/cvr
0
python
def delete(self, aid: str, ignore_errors: bool=True) -> None: 'Deletes an asset, parameterized by the asset id.\n\n    Args:\n        aid (str): asset id\n    Returns:\n        None\n    ' registry = self._io.load(self._registry) try: item = registry.loc[(registry['aid'] == aid)] filepath = item['filepath'].values[0] self._io.remove(filepath=filepath) registry = registry.loc[(registry['aid'] != aid)] self._io.save(registry, self._registry) except AttributeError: return None
def delete(self, aid: str, ignore_errors: bool=True) -> None: 'Deletes an asset, parameterized by the asset id.\n\n    Args:\n        aid (str): asset id\n    Returns:\n        None\n    ' registry = self._io.load(self._registry) try: item = registry.loc[(registry['aid'] == aid)] filepath = item['filepath'].values[0] self._io.remove(filepath=filepath) registry = registry.loc[(registry['aid'] != aid)] self._io.save(registry, self._registry) except AttributeError: return None<|docstring|>Deletes an asset, parameterized by the asset id. Args: aid (str): asset id Returns: None<|endoftext|>
0bf72508c25baf4de18ce28b87856c08f6a51cf9d1749835a1878ebf295d24ed
def exists(self, aid: str) -> bool: 'Returns true if the asset version exists.\n\n Args:\n aid (str): asset id\n\n Returns:\n bool True if asset exists, False otherwise.\n ' registry = self._io.load(self._registry) item = registry.loc[(registry['aid'] == aid)] return (len(item) > 0)
Returns true if the asset version exists. Args: aid (str): asset id Returns: bool True if asset exists, False otherwise.
cvr/core/asset.py
exists
john-james-ai/cvr
0
python
def exists(self, aid: str) -> bool: 'Returns true if the asset version exists.\n\n Args:\n aid (str): asset id\n\n Returns:\n bool True if asset exists, False otherwise.\n ' registry = self._io.load(self._registry) item = registry.loc[(registry['aid'] == aid)] return (len(item) > 0)
def exists(self, aid: str) -> bool: 'Returns true if the asset version exists.\n\n Args:\n aid (str): asset id\n\n Returns:\n bool True if asset exists, False otherwise.\n ' registry = self._io.load(self._registry) item = registry.loc[(registry['aid'] == aid)] return (len(item) > 0)<|docstring|>Returns true if the asset version exists. Args: aid (str): asset id Returns: bool True if asset exists, False otherwise.<|endoftext|>
f418ef45e9311613507ebe7e879bedece9dd424e7c75675e70a86d9d324c45d7
def _create_filepath(self, asset: Asset, fileext='.pkl') -> str: 'Forms the filepath for an asset.' filename = (((((((asset.passport.stage + '_') + asset.passport.asset_type) + '_') + asset.passport.name) + '_v') + str(asset.passport.version).zfill(3)) + fileext) return os.path.join(self._directory, filename)
Forms the filepath for an asset.
cvr/core/asset.py
_create_filepath
john-james-ai/cvr
0
python
def _create_filepath(self, asset: Asset, fileext='.pkl') -> str: filename = (((((((asset.passport.stage + '_') + asset.passport.asset_type) + '_') + asset.passport.name) + '_v') + str(asset.passport.version).zfill(3)) + fileext) return os.path.join(self._directory, filename)
def _create_filepath(self, asset: Asset, fileext='.pkl') -> str: filename = (((((((asset.passport.stage + '_') + asset.passport.asset_type) + '_') + asset.passport.name) + '_v') + str(asset.passport.version).zfill(3)) + fileext) return os.path.join(self._directory, filename)<|docstring|>Forms the filepath for an asset.<|endoftext|>
0880a862c40da714c49c22f45a7d3fa0a38322b1907b4046f46fe0c38313169f
def _register_asset(self, asset: Asset) -> None: 'Posts the asset to the registry.' registry = self._io.load(self._registry) registry = (registry if (registry is not None) else pd.DataFrame()) item = {'aid': asset.passport.aid, 'stage': asset.passport.stage, 'asset_type': asset.passport.asset_type, 'created': asset.passport.created, 'name': asset.passport.name, 'version': asset.passport.version, 'creator': asset.passport.creator, 'filepath': asset.passport.filepath} item = pd.DataFrame(data=item, index=[0]) registry = pd.concat([registry, item], axis=0) self._io.save(registry, self._registry)
Posts the asset to the registry.
cvr/core/asset.py
_register_asset
john-james-ai/cvr
0
python
def _register_asset(self, asset: Asset) -> None: registry = self._io.load(self._registry) registry = (registry if (registry is not None) else pd.DataFrame()) item = {'aid': asset.passport.aid, 'stage': asset.passport.stage, 'asset_type': asset.passport.asset_type, 'created': asset.passport.created, 'name': asset.passport.name, 'version': asset.passport.version, 'creator': asset.passport.creator, 'filepath': asset.passport.filepath} item = pd.DataFrame(data=item, index=[0]) registry = pd.concat([registry, item], axis=0) self._io.save(registry, self._registry)
def _register_asset(self, asset: Asset) -> None: registry = self._io.load(self._registry) registry = (registry if (registry is not None) else pd.DataFrame()) item = {'aid': asset.passport.aid, 'stage': asset.passport.stage, 'asset_type': asset.passport.asset_type, 'created': asset.passport.created, 'name': asset.passport.name, 'version': asset.passport.version, 'creator': asset.passport.creator, 'filepath': asset.passport.filepath} item = pd.DataFrame(data=item, index=[0]) registry = pd.concat([registry, item], axis=0) self._io.save(registry, self._registry)<|docstring|>Posts the asset to the registry.<|endoftext|>
89c17b8f3d9e7555c7f71d449fdfcc5f027b055877b666fac94401dd0cd6321c
def _search_registry_by_version(self, asset_type: str, name: str, stage: str, version: int) -> pd.DataFrame: 'Return one-row dataframe containing version registration.' registry = self._io.load(self._registry) try: return registry.loc[((((registry['asset_type'] == asset_type) & (registry['stage'] == stage)) & (registry['name'] == name)) & (registry['version'] == version))] except AttributeError: return None
Return one-row dataframe containing version registration.
cvr/core/asset.py
_search_registry_by_version
john-james-ai/cvr
0
python
def _search_registry_by_version(self, asset_type: str, name: str, stage: str, version: int) -> pd.DataFrame: registry = self._io.load(self._registry) try: return registry.loc[((((registry['asset_type'] == asset_type) & (registry['stage'] == stage)) & (registry['name'] == name)) & (registry['version'] == version))] except AttributeError: return None
def _search_registry_by_version(self, asset_type: str, name: str, stage: str, version: int) -> pd.DataFrame: registry = self._io.load(self._registry) try: return registry.loc[((((registry['asset_type'] == asset_type) & (registry['stage'] == stage)) & (registry['name'] == name)) & (registry['version'] == version))] except AttributeError: return None<|docstring|>Return one-row dataframe containing version registration.<|endoftext|>
4030ff554b31ebfe6b98c498cf0abe0320bd0bcffb2564db02961ccaa2aecf64
def _search_registry_by_asset(self, asset_type: str, name: str, stage: str) -> pd.DataFrame: 'Returns latest version of asset.' registry = self._io.load(self._registry) try: assets = registry.loc[(((registry['asset_type'] == asset_type) & (registry['stage'] == stage)) & (registry['name'] == name))] asset = assets.loc[(assets['version'] == assets['version'].max())] return asset except AttributeError: return None
Returns latest version of asset.
cvr/core/asset.py
_search_registry_by_asset
john-james-ai/cvr
0
python
def _search_registry_by_asset(self, asset_type: str, name: str, stage: str) -> pd.DataFrame: registry = self._io.load(self._registry) try: assets = registry.loc[(((registry['asset_type'] == asset_type) & (registry['stage'] == stage)) & (registry['name'] == name))] asset = assets.loc[(assets['version'] == assets['version'].max())] return asset except AttributeError: return None
def _search_registry_by_asset(self, asset_type: str, name: str, stage: str) -> pd.DataFrame: registry = self._io.load(self._registry) try: assets = registry.loc[(((registry['asset_type'] == asset_type) & (registry['stage'] == stage)) & (registry['name'] == name))] asset = assets.loc[(assets['version'] == assets['version'].max())] return asset except AttributeError: return None<|docstring|>Returns latest version of asset.<|endoftext|>
5d8a9ad6644ef910b49ee977a821c5f119a0d86ed0a9ae9f38047add5d73019e
def __init__(self, def_output): 'Init.\n\n Args:\n def_output (Output): Default output.\n ' self.name = 'unnamed' self.def_output = def_output router = DataRouter(def_output) self.router = router self.plugins = [] self.type = None
Init. Args: def_output (Output): Default output.
swak/pluginpod.py
__init__
haje01/swak
0
python
def __init__(self, def_output): 'Init.\n\n Args:\n def_output (Output): Default output.\n ' self.name = 'unnamed' self.def_output = def_output router = DataRouter(def_output) self.router = router self.plugins = [] self.type = None
def __init__(self, def_output): 'Init.\n\n Args:\n def_output (Output): Default output.\n ' self.name = 'unnamed' self.def_output = def_output router = DataRouter(def_output) self.router = router self.plugins = [] self.type = None<|docstring|>Init. Args: def_output (Output): Default output.<|endoftext|>
b81074be601000a8b6fea0039545386af46c4ac7d2b74ee035a5230e87584291
def register_plugin(self, tag, plugin, insert_first=False): 'Register a plugin by data tag pattern.\n\n        Args:\n            tag: Tag pattern.\n            plugin: Plugin to register.\n            insert_first (bool): Do not append, insert at first.\n        ' logging.info("register_plugin - pod name '{}' tag '{}' plugin '{}' first '{}'".format(self.name, tag, plugin, insert_first)) assert (self.router is not None) assert (plugin not in self.plugins) if insert_first: self.plugins.insert(0, plugin) else: self.plugins.append(plugin) self.router.add_rule(tag, plugin, insert_first)
Register a plugin by data tag pattern. Args: tag: Tag pattern. plugin: Plugin to register. insert_first (bool): Do not append, insert at first.
swak/pluginpod.py
register_plugin
haje01/swak
0
python
def register_plugin(self, tag, plugin, insert_first=False): 'Register a plugin by data tag pattern.\n\n        Args:\n            tag: Tag pattern.\n            plugin: Plugin to register.\n            insert_first (bool): Do not append, insert at first.\n        ' logging.info("register_plugin - pod name '{}' tag '{}' plugin '{}' first '{}'".format(self.name, tag, plugin, insert_first)) assert (self.router is not None) assert (plugin not in self.plugins) if insert_first: self.plugins.insert(0, plugin) else: self.plugins.append(plugin) self.router.add_rule(tag, plugin, insert_first)
def register_plugin(self, tag, plugin, insert_first=False): 'Register a plugin by data tag pattern.\n\n        Args:\n            tag: Tag pattern.\n            plugin: Plugin to register.\n            insert_first (bool): Do not append, insert at first.\n        ' logging.info("register_plugin - pod name '{}' tag '{}' plugin '{}' first '{}'".format(self.name, tag, plugin, insert_first)) assert (self.router is not None) assert (plugin not in self.plugins) if insert_first: self.plugins.insert(0, plugin) else: self.plugins.append(plugin) self.router.add_rule(tag, plugin, insert_first)<|docstring|>Register a plugin by data tag pattern. Args: tag: Tag pattern. plugin: Plugin to register. insert_first (bool): Do not append, insert at first.<|endoftext|>
9bf1d925c252996f59c1cff563d6d9b75be6d90ced885a058aec76ed21cd359c
def init_from_commands(self, tag, cmds): 'Init agent from plugin commands.\n\n        Args:\n            tag (str): data tag.\n            cmds (list): Separated plugin commands list.\n            check_input (bool): Check for input commands.\n\n        Returns:\n            Input: Starting input plugin\n        ' logging.info('init_from_commands') input_pl = None assert (getattr(cmds, '__iter__') is not None) last_idx = (len(cmds) - 1) for (i, cmd) in enumerate(cmds): args = cmd[1:] pname = cmd[0] if (pname == 'tag'): assert (i == last_idx) break plugin = create_plugin_by_name(pname, args) self.register_plugin(tag, plugin) if ((i == 0) and isinstance(plugin, Input)): input_pl = plugin return input_pl
Init agent from plugin commands. Args: tag (str): data tag. cmds (list): Separated plugin commands list. check_input (bool): Check for input commands. Returns: Input: Starting input plugin
swak/pluginpod.py
init_from_commands
haje01/swak
0
python
def init_from_commands(self, tag, cmds): 'Init agent from plugin commands.\n\n        Args:\n            tag (str): data tag.\n            cmds (list): Separated plugin commands list.\n            check_input (bool): Check for input commands.\n\n        Returns:\n            Input: Starting input plugin\n        ' logging.info('init_from_commands') input_pl = None assert (getattr(cmds, '__iter__') is not None) last_idx = (len(cmds) - 1) for (i, cmd) in enumerate(cmds): args = cmd[1:] pname = cmd[0] if (pname == 'tag'): assert (i == last_idx) break plugin = create_plugin_by_name(pname, args) self.register_plugin(tag, plugin) if ((i == 0) and isinstance(plugin, Input)): input_pl = plugin return input_pl
def init_from_commands(self, tag, cmds): 'Init agent from plugin commands.\n\n        Args:\n            tag (str): data tag.\n            cmds (list): Separated plugin commands list.\n            check_input (bool): Check for input commands.\n\n        Returns:\n            Input: Starting input plugin\n        ' logging.info('init_from_commands') input_pl = None assert (getattr(cmds, '__iter__') is not None) last_idx = (len(cmds) - 1) for (i, cmd) in enumerate(cmds): args = cmd[1:] pname = cmd[0] if (pname == 'tag'): assert (i == last_idx) break plugin = create_plugin_by_name(pname, args) self.register_plugin(tag, plugin) if ((i == 0) and isinstance(plugin, Input)): input_pl = plugin return input_pl<|docstring|>Init agent from plugin commands. Args: tag (str): data tag. cmds (list): Separated plugin commands list. check_input (bool): Check for input commands. Returns: Input: Starting input plugin<|endoftext|>
00f44a383dbe71b8642829970eac3de4df2d2062be021f9306624ca97d350775
def iter_plugins(self): "Iterate all plugins in the agent.\n\n        Yield router's default output if no output exists.\n        " no_output = True for plugin in self.plugins: if isinstance(plugin, Output): no_output = False (yield plugin) if no_output: (yield self.router.def_output)
Iterate all plugins in the agent. Yield router's default output if no output exists.
swak/pluginpod.py
iter_plugins
haje01/swak
0
python
def iter_plugins(self): "Iterate all plugins in the agent.\n\n        Yield router's default output if no output exists.\n        " no_output = True for plugin in self.plugins: if isinstance(plugin, Output): no_output = False (yield plugin) if no_output: (yield self.router.def_output)
def iter_plugins(self): "Iterate all plugins in the agent.\n\n        Yield router's default output if no output exists.\n        " no_output = True for plugin in self.plugins: if isinstance(plugin, Output): no_output = False (yield plugin) if no_output: (yield self.router.def_output)<|docstring|>Iterate all plugins in the agent. Yield router's default output if no output exists.<|endoftext|>
b3c3aa419cc88d32ae866f9e8ca07cec01b2a24bee12656e75569f2848c5a2d7
def iter_inputs(self): 'Iterate all inputs.' for plugin in self.iter_plugins(): if isinstance(plugin, Input): (yield plugin)
Iterate all inputs.
swak/pluginpod.py
iter_inputs
haje01/swak
0
python
def iter_inputs(self): for plugin in self.iter_plugins(): if isinstance(plugin, Input): (yield plugin)
def iter_inputs(self): for plugin in self.iter_plugins(): if isinstance(plugin, Input): (yield plugin)<|docstring|>Iterate all inputs.<|endoftext|>
0956b3394a61ff1a431eab7a753627097c6c2e517c95fc8abe326164f4566198
def iter_outputs(self): 'Iterate all outputs.' for plugin in self.iter_plugins(): if isinstance(plugin, Output): (yield plugin)
Iterate all outputs.
swak/pluginpod.py
iter_outputs
haje01/swak
0
python
def iter_outputs(self): for plugin in self.iter_plugins(): if isinstance(plugin, Output): (yield plugin)
def iter_outputs(self): for plugin in self.iter_plugins(): if isinstance(plugin, Output): (yield plugin)<|docstring|>Iterate all outputs.<|endoftext|>
192ac32daed09ce953b46cfbbc58d6a6902dc6ef77b3a1aa92ddff7881d45a73
def start(self): 'Start plugins in the router.' logging.info('starting all plugins') for plugin in self.iter_plugins(): plugin.start()
Start plugins in the router.
swak/pluginpod.py
start
haje01/swak
0
python
def start(self): logging.info('starting all plugins') for plugin in self.iter_plugins(): plugin.start()
def start(self): logging.info('starting all plugins') for plugin in self.iter_plugins(): plugin.start()<|docstring|>Start plugins in the router.<|endoftext|>
2bae1c5806a81bea1f724292070b0309b05e5a3dda7e3cce1853130d58d9fe10
def stop(self): 'Stop plugins in the router.' logging.info('stopping all plugins') for plugin in self.iter_plugins(): plugin.stop()
Stop plugins in the router.
swak/pluginpod.py
stop
haje01/swak
0
python
def stop(self): logging.info('stopping all plugins') for plugin in self.iter_plugins(): plugin.stop()
def stop(self): logging.info('stopping all plugins') for plugin in self.iter_plugins(): plugin.stop()<|docstring|>Stop plugins in the router.<|endoftext|>
89bbf3acbba6446fdb8d2796ce62c41bd5556bd5ae72d8d0df1fc98e989808b5
def flush(self, flush_all=False): 'Flush all output plugins.\n\n        Args:\n            flush_all (bool): Whether to flush all or just one.\n        ' logging.debug('flushing all output plugins') for output in self.iter_outputs(): output.flush(flush_all)
Flush all output plugins. Args: flush_all (bool): Whether to flush all or just one.
swak/pluginpod.py
flush
haje01/swak
0
python
def flush(self, flush_all=False): 'Flush all output plugins.\n\n        Args:\n            flush_all (bool): Whether to flush all or just one.\n        ' logging.debug('flushing all output plugins') for output in self.iter_outputs(): output.flush(flush_all)
def flush(self, flush_all=False): 'Flush all output plugins.\n\n        Args:\n            flush_all (bool): Whether to flush all or just one.\n        ' logging.debug('flushing all output plugins') for output in self.iter_outputs(): output.flush(flush_all)<|docstring|>Flush all output plugins. Args: flush_all (bool): Whether to flush all or just one.<|endoftext|>
b60e019a2fc123bed8222778ee1cb0568a7acd7fc05888e08ebf409db5ad5481
def shutdown(self): 'Shutdown plugins in the router.' logging.info('shutting down all output plugins') for plugin in self.iter_plugins(): plugin.shutdown()
Shutdown plugins in the router.
swak/pluginpod.py
shutdown
haje01/swak
0
python
def shutdown(self): logging.info('shutting down all output plugins') for plugin in self.iter_plugins(): plugin.shutdown()
def shutdown(self): logging.info('shutting down all output plugins') for plugin in self.iter_plugins(): plugin.shutdown()<|docstring|>Shutdown plugins in the router.<|endoftext|>
c11b4b56b8954fc26d66efead5eca3ec7d58fa39e2817fa79adebab350caca90
def may_chunking(self): 'Chunking for all outputs if needed.' for output in self.iter_outputs(): output.may_chunking()
Chunking for all outputs if needed.
swak/pluginpod.py
may_chunking
haje01/swak
0
python
def may_chunking(self): for output in self.iter_outputs(): output.may_chunking()
def may_chunking(self): for output in self.iter_outputs(): output.may_chunking()<|docstring|>Chunking for all outputs if needed.<|endoftext|>
c99bcb2fd89518708973dd4fac473ee13681ba4f88976a6ff2e7c83b72229d3c
def may_flushing(self, last_flush_interval=None): 'Flushing for all outputs if needed.' logging.debug('may_flushing') for output in self.iter_outputs(): output.may_flushing(last_flush_interval)
Flushing for all outputs if needed.
swak/pluginpod.py
may_flushing
haje01/swak
0
python
def may_flushing(self, last_flush_interval=None): logging.debug('may_flushing') for output in self.iter_outputs(): output.may_flushing(last_flush_interval)
def may_flushing(self, last_flush_interval=None): logging.debug('may_flushing') for output in self.iter_outputs(): output.may_flushing(last_flush_interval)<|docstring|>Flushing for all outputs if needed.<|endoftext|>
7d48db3d3497b7b1f2b5c29e4252f8e64bffcf720ebab94f6dcd465e431ccc63
def process(self, stop_event): 'Read from input and emit through router for service.\n\n        This function is for the service agent.\n\n        Args:\n            stop_event (threading.Event): Stop event\n        ' self.start() logging.info('start processing') for (tag, ds) in self.input.read(stop_event): if (not ((tag is None) or ds.empty())): self.router.emit_stream(tag, ds, stop_event) self.may_flushing() logging.info('stop event received') self.stop() self.shutdown()
Read from input and emit through router for service. This function is for the service agent. Args: stop_event (threading.Event): Stop event
swak/pluginpod.py
process
haje01/swak
0
python
def process(self, stop_event): 'Read from input and emit through router for service.\n\n        This function is for the service agent.\n\n        Args:\n            stop_event (threading.Event): Stop event\n        ' self.start() logging.info('start processing') for (tag, ds) in self.input.read(stop_event): if (not ((tag is None) or ds.empty())): self.router.emit_stream(tag, ds, stop_event) self.may_flushing() logging.info('stop event received') self.stop() self.shutdown()
def process(self, stop_event): 'Read from input and emit through router for service.\n\n        This function is for the service agent.\n\n        Args:\n            stop_event (threading.Event): Stop event\n        ' self.start() logging.info('start processing') for (tag, ds) in self.input.read(stop_event): if (not ((tag is None) or ds.empty())): self.router.emit_stream(tag, ds, stop_event) self.may_flushing() logging.info('stop event received') self.stop() self.shutdown()<|docstring|>Read from input and emit through router for service. This function is for the service agent. Args: stop_event (threading.Event): Stop event<|endoftext|>
fa7f017879dd65666d23c515a0d6c270848521053fac6775c5c9d10d40fc0c2d
def simple_process(self, input_pl): 'Read from input and emit through router.\n\n        This function is for the test agent.\n\n        Args:\n            input_pl (swak.plugin.Input): Input plugin to read data.\n        ' ainput = (input_pl if (input_pl is not None) else self.input) for (tag, ds) in ainput.read(None): if (not ds.empty()): self.router.emit_stream(tag, ds) self.may_flushing()
Read from input and emit through router. This function is for the test agent. Args: input_pl (swak.plugin.Input): Input plugin to read data.
swak/pluginpod.py
simple_process
haje01/swak
0
python
def simple_process(self, input_pl): 'Read from input and emit through router.\n\n        This function is for the test agent.\n\n        Args:\n            input_pl (swak.plugin.Input): Input plugin to read data.\n        ' ainput = (input_pl if (input_pl is not None) else self.input) for (tag, ds) in ainput.read(None): if (not ds.empty()): self.router.emit_stream(tag, ds) self.may_flushing()
def simple_process(self, input_pl): 'Read from input and emit through router.\n\n        This function is for the test agent.\n\n        Args:\n            input_pl (swak.plugin.Input): Input plugin to read data.\n        ' ainput = (input_pl if (input_pl is not None) else self.input) for (tag, ds) in ainput.read(None): if (not ds.empty()): self.router.emit_stream(tag, ds) self.may_flushing()<|docstring|>Read from input and emit through router. This function is for the test agent. Args: input_pl (swak.plugin.Input): Input plugin to read data.<|endoftext|>
e9ca35c094da5d951bde8109b0538a07f7ee880cd5a135d1282691754e5a9426
@property def input(self): 'Return input plugin.' assert (len(self.plugins) > 0) first = self.plugins[0] assert (isinstance(first, Input) or isinstance(first, ProxyInput)) return first
Return input plugin.
swak/pluginpod.py
input
haje01/swak
0
python
@property def input(self): assert (len(self.plugins) > 0) first = self.plugins[0] assert (isinstance(first, Input) or isinstance(first, ProxyInput)) return first
@property def input(self): assert (len(self.plugins) > 0) first = self.plugins[0] assert (isinstance(first, Input) or isinstance(first, ProxyInput)) return first<|docstring|>Return input plugin.<|endoftext|>
3a6ab8105257ee86145867c6969d1515654e9e8023cf72f08527de2ad7f780b4
def makeDebugFile(self, filename, content): "This should be used instead of print() because of the dirty\n\t\tway I'm using the exec() built-in function. For debugging \n\t\tpurposes." if (filename in self.debugNames): with open(((filename + str(self.batchNumber)) + '.txt'), 'w+') as f: f.write(content) self.debugNames.append(filename) else: with open(((filename + str(self.batchNumber)) + '.txt'), 'a+') as f: f.write(content)
This should be used instead of print() because of the dirty way I'm using the exec() built-in function. For debugging purposes.
pythonFinal.py
makeDebugFile
austenstrine/RandomPyCodeGenerator
0
python
def makeDebugFile(self, filename, content): "This should be used instead of print() because of the dirty\n\t\tway I'm using the exec() built-in function. For debugging \n\t\tpurposes." if (filename in self.debugNames): with open(((filename + str(self.batchNumber)) + '.txt'), 'w+') as f: f.write(content) self.debugNames.append(filename) else: with open(((filename + str(self.batchNumber)) + '.txt'), 'a+') as f: f.write(content)
def makeDebugFile(self, filename, content): "This should be used instead of print() because of the dirty\n\t\tway I'm using the exec() built-in function. For debugging \n\t\tpurposes." if (filename in self.debugNames): with open(((filename + str(self.batchNumber)) + '.txt'), 'w+') as f: f.write(content) self.debugNames.append(filename) else: with open(((filename + str(self.batchNumber)) + '.txt'), 'a+') as f: f.write(content)<|docstring|>This should be used instead of print() because of the dirty way I'm using the exec() built-in function. For debugging purposes.<|endoftext|>
e374ad7056f5355ba58a987fb3a36e221ceb2fc26e9fc3788b6b1cca11724ee8
def randVarInScope(self): 'Returns a random variable that exists in the currently\n\t\topen scope for the generated file' upperBound = (len(self.variable_names.peek()) - 1) randomVarIndex = None rint = randint(1, 3) if (rint == 1): randomVarIndex = randint(0, upperBound) elif (rint == 2): randomVarIndex = randint(int((upperBound / 2)), upperBound) else: randomVarIndex = randint((int((upperBound / 4)) * 3), upperBound) return self.variable_names.peek()[randomVarIndex]
Returns a random variable that exists in the currently open scope for the generated file
pythonFinal.py
randVarInScope
austenstrine/RandomPyCodeGenerator
0
python
def randVarInScope(self): 'Returns a random variable that exists in the currently\n\t\topen scope for the generated file' upperBound = (len(self.variable_names.peek()) - 1) randomVarIndex = None rint = randint(1, 3) if (rint == 1): randomVarIndex = randint(0, upperBound) elif (rint == 2): randomVarIndex = randint(int((upperBound / 2)), upperBound) else: randomVarIndex = randint((int((upperBound / 4)) * 3), upperBound) return self.variable_names.peek()[randomVarIndex]
def randVarInScope(self): 'Returns a random variable that exists in the currently\n\t\topen scope for the generated file' upperBound = (len(self.variable_names.peek()) - 1) randomVarIndex = None rint = randint(1, 3) if (rint == 1): randomVarIndex = randint(0, upperBound) elif (rint == 2): randomVarIndex = randint(int((upperBound / 2)), upperBound) else: randomVarIndex = randint((int((upperBound / 4)) * 3), upperBound) return self.variable_names.peek()[randomVarIndex]<|docstring|>Returns a random variable that exists in the currently open scope for the generated file<|endoftext|>
0bd25cc08d900b7b6ca276967e31696b3fa5b087efdf632f299e27ed5694b123
def mkTab(self): 'Generates the correct number of tabs for the scope level.' return (self.TAB * self.scope_depth)
Generates the correct number of tabs for the scope level.
pythonFinal.py
mkTab
austenstrine/RandomPyCodeGenerator
0
python
def mkTab(self): return (self.TAB * self.scope_depth)
def mkTab(self): return (self.TAB * self.scope_depth)<|docstring|>Generates the correct number of tabs for the scope level.<|endoftext|>
9e8b03a622e616c39d25a66a9cc4fd5b46bade85a08d933af6cae220e3fe9cc2
def getValOrVar(self): 'Returns either a random value or a random pre-existing variable' return self.genConditionalVals()[randint(0, 1)]
Returns either a random value or a random pre-existing variable
pythonFinal.py
getValOrVar
austenstrine/RandomPyCodeGenerator
0
python
def getValOrVar(self): return self.genConditionalVals()[randint(0, 1)]
def getValOrVar(self): return self.genConditionalVals()[randint(0, 1)]<|docstring|>Returns either a random value or a random pre-existing variable<|endoftext|>
eef922049eabd09470a2725b89043e2003e9beae8912c249fa08866a9f8ef654
def genConditionalVals(self): 'Returns a tuple containing 1. a random variable in scope, and \n\t\t2. either another random variable in scope, or a random value based \n\t\toff of the input and output values' result1 = self.variable_names.peek()[randint(0, (len(self.variable_names.peek()) - 1))] igr = randint(1, 2) result2 = None if (igr == 1): result2 = str(randint(((self.DESIRED_OUTPUT - (self.DESIRED_OUTPUT * 2)) - 1), ((self.DESIRED_OUTPUT * 2) + 1))) else: result2 = self.randVarInScope() return (result1, result2)
Returns a tuple containing 1. a random variable in scope, and 2. either another random variable in scope, or a random value based off of the input and output values
pythonFinal.py
genConditionalVals
austenstrine/RandomPyCodeGenerator
0
python
def genConditionalVals(self): 'Returns a tuple containing 1. a random variable in scope, and \n\t\t2. either another random variable in scope, or a random value based \n\t\toff of the input and output values' result1 = self.variable_names.peek()[randint(0, (len(self.variable_names.peek()) - 1))] igr = randint(1, 2) result2 = None if (igr == 1): result2 = str(randint(((self.DESIRED_OUTPUT - (self.DESIRED_OUTPUT * 2)) - 1), ((self.DESIRED_OUTPUT * 2) + 1))) else: result2 = self.randVarInScope() return (result1, result2)
def genConditionalVals(self): 'Returns a tuple containing 1. a random variable in scope, and \n\t\t2. either another random variable in scope, or a random value based \n\t\toff of the input and output values' result1 = self.variable_names.peek()[randint(0, (len(self.variable_names.peek()) - 1))] igr = randint(1, 2) result2 = None if (igr == 1): result2 = str(randint(((self.DESIRED_OUTPUT - (self.DESIRED_OUTPUT * 2)) - 1), ((self.DESIRED_OUTPUT * 2) + 1))) else: result2 = self.randVarInScope() return (result1, result2)<|docstring|>Returns a tuple containing 1. a random variable in scope, and 2. either another random variable in scope, or a random value based off of the input and output values<|endoftext|>
a003fcc926ef49315d18240dddd3528498026126722b42c461e5575245efb92a
def decrementScope(self): 'To be used at the end of loops, conditionals, and any other such structures to ensure proper indentation' if self.returnDeclared: if ((self.scope_depth - 1) < self.MIN_SCOPE): self.scope_depth -= 1 self.end = True return '' if (self.scope_depth <= self.MIN_SCOPE): if self.returnDeclared: self.end = True return '' else: return self.OPTIONS[3]() self.variable_names.pop() self.scope_depth -= 1 self.returnDeclared = False return ''
To be used at the end of loops, conditionals, and any other such structures to ensure proper indentation
pythonFinal.py
decrementScope
austenstrine/RandomPyCodeGenerator
0
python
def decrementScope(self): if self.returnDeclared: if ((self.scope_depth - 1) < self.MIN_SCOPE): self.scope_depth -= 1 self.end = True return if (self.scope_depth <= self.MIN_SCOPE): if self.returnDeclared: self.end = True return else: return self.OPTIONS[3]() self.variable_names.pop() self.scope_depth -= 1 self.returnDeclared = False return
def decrementScope(self): if self.returnDeclared: if ((self.scope_depth - 1) < self.MIN_SCOPE): self.scope_depth -= 1 self.end = True return if (self.scope_depth <= self.MIN_SCOPE): if self.returnDeclared: self.end = True return else: return self.OPTIONS[3]() self.variable_names.pop() self.scope_depth -= 1 self.returnDeclared = False return <|docstring|>To be used at the end of loops, conditionals, and any other such structures to ensure proper indentation<|endoftext|>
2cfff8fecf8bc86d39d7a23237521f61e71b856aa9f18e5f09648ab9f4b23916
def genReturn(self): 'Generates a string representation of a return statement' statement = None self.returnDeclared = True r = randint(1, 3) if (r == 1): var = self.randVarInScope() statement = (((((((((self.mkTab() + self.RETURN) + var) + self.RETURN_END) + self.NEWLINE) + self.mkTab()) + self.END) + ' ') + var) + self.NEWLINE) else: var = self.genMutPhrase() statement = (((((((((self.mkTab() + self.RETURN) + var) + self.RETURN_END) + self.NEWLINE) + self.mkTab()) + self.END) + ' ') + var) + self.NEWLINE) statement += self.decrementScope() return statement
Generates a string representation of a return statement
pythonFinal.py
genReturn
austenstrine/RandomPyCodeGenerator
0
python
def genReturn(self): statement = None self.returnDeclared = True r = randint(1, 3) if (r == 1): var = self.randVarInScope() statement = (((((((((self.mkTab() + self.RETURN) + var) + self.RETURN_END) + self.NEWLINE) + self.mkTab()) + self.END) + ' ') + var) + self.NEWLINE) else: var = self.genMutPhrase() statement = (((((((((self.mkTab() + self.RETURN) + var) + self.RETURN_END) + self.NEWLINE) + self.mkTab()) + self.END) + ' ') + var) + self.NEWLINE) statement += self.decrementScope() return statement
def genReturn(self): statement = None self.returnDeclared = True r = randint(1, 3) if (r == 1): var = self.randVarInScope() statement = (((((((((self.mkTab() + self.RETURN) + var) + self.RETURN_END) + self.NEWLINE) + self.mkTab()) + self.END) + ' ') + var) + self.NEWLINE) else: var = self.genMutPhrase() statement = (((((((((self.mkTab() + self.RETURN) + var) + self.RETURN_END) + self.NEWLINE) + self.mkTab()) + self.END) + ' ') + var) + self.NEWLINE) statement += self.decrementScope() return statement<|docstring|>Generates a string representation of a return statement<|endoftext|>
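The two lines genReturn renders can be sketched like this; the END marker is assumed here to be a plain `#end` comment, which is a guess at what `self.END` holds:

```python
def gen_return(expr, depth=1):
    """Render a return statement followed by the generator's END line
    echoing the returned expression, as genReturn does."""
    tab = '\t' * depth
    return f"{tab}return {expr}\n{tab}#end {expr}\n"
```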
bec3b7600c8ea0f59ef8b1d96f7498776175de3fd15452c5ee35b67d90d9cddb
def genNewVariable(self): 'Generates a string representation of a new variable, and data to assign it to' length = randint(1, 5) string = '' LLLength = len(self.LETTER_LIST) while (length > 0): string += self.LETTER_LIST[randint(0, (LLLength - 1))] length -= 1 tabbedString = (self.mkTab() + string) tabbedString += (((self.ASSIGNMENT + ' ') + self.getValOrVar()) + self.NEWLINE) if (not (string in self.variable_names.peek())): lister = list(self.variable_names.pop()) lister.append(string) self.variable_names.push(lister) else: return self.genNewVariable() return tabbedString
Generates a string representation of a new variable, and data to assign it to
pythonFinal.py
genNewVariable
austenstrine/RandomPyCodeGenerator
0
python
def genNewVariable(self): length = randint(1, 5) string = '' LLLength = len(self.LETTER_LIST) while (length > 0): string += self.LETTER_LIST[randint(0, (LLLength - 1))] length -= 1 tabbedString = (self.mkTab() + string) tabbedString += (((self.ASSIGNMENT + ' ') + self.getValOrVar()) + self.NEWLINE) if (not (string in self.variable_names.peek())): lister = list(self.variable_names.pop()) lister.append(string) self.variable_names.push(lister) else: return self.genNewVariable() return tabbedString
def genNewVariable(self): length = randint(1, 5) string = '' LLLength = len(self.LETTER_LIST) while (length > 0): string += self.LETTER_LIST[randint(0, (LLLength - 1))] length -= 1 tabbedString = (self.mkTab() + string) tabbedString += (((self.ASSIGNMENT + ' ') + self.getValOrVar()) + self.NEWLINE) if (not (string in self.variable_names.peek())): lister = list(self.variable_names.pop()) lister.append(string) self.variable_names.push(lister) else: return self.genNewVariable() return tabbedString<|docstring|>Generates a string representation of a new variable, and data to assign it to<|endoftext|>
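The name-generation step can be sketched without the recursion genNewVariable uses for collision handling; `ascii_lowercase` stands in for the generator's LETTER_LIST:

```python
from random import choice, randint
from string import ascii_lowercase

def gen_new_variable(existing):
    """Generate a 1-5 letter name not already in `existing`, looping
    instead of recursing on a collision as genNewVariable does."""
    while True:
        name = ''.join(choice(ascii_lowercase) for _ in range(randint(1, 5)))
        if name not in existing:
            existing.append(name)
            return name
```

Because a returned name is recorded in `existing`, repeated calls never collide.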
760a67f6f3ae14d2ecd62dc464ebb2b2d22c1d472d727e479eec172c6dda9dae
def genAlterVariable(self): 'Generates a string representation of an alteration of an existing variable' existing = (self.mkTab() + self.randVarInScope()) if (randint(0, 1) == 1): existing += ((self.MUT_ASSIGN[randint(0, 1)] + ' ') + self.getValOrVar()) else: existing += ((((self.ASSIGNMENT + ' ') + self.getValOrVar()) + self.MUTATORS[randint(0, 1)]) + self.getValOrVar()) existing += self.NEWLINE return existing
Generates a string representation of an alteration of an existing variable
pythonFinal.py
genAlterVariable
austenstrine/RandomPyCodeGenerator
0
python
def genAlterVariable(self): existing = (self.mkTab() + self.randVarInScope()) if (randint(0, 1) == 1): existing += ((self.MUT_ASSIGN[randint(0, 1)] + ' ') + self.getValOrVar()) else: existing += ((((self.ASSIGNMENT + ' ') + self.getValOrVar()) + self.MUTATORS[randint(0, 1)]) + self.getValOrVar()) existing += self.NEWLINE return existing
def genAlterVariable(self): existing = (self.mkTab() + self.randVarInScope()) if (randint(0, 1) == 1): existing += ((self.MUT_ASSIGN[randint(0, 1)] + ' ') + self.getValOrVar()) else: existing += ((((self.ASSIGNMENT + ' ') + self.getValOrVar()) + self.MUTATORS[randint(0, 1)]) + self.getValOrVar()) existing += self.NEWLINE return existing<|docstring|>Generates a string representation of an alteration of an existing variable<|endoftext|>
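The two branches genAlterVariable chooses between (augmented assignment vs. reassignment to a small expression) can be sketched as follows, with the operand fixed for clarity:

```python
from random import choice, randint

MUT_ASSIGN = ['+=', '-=']
MUTATORS = [' + ', ' - ']

def gen_alter_variable(name, operand):
    """Either augment the variable in place or reassign it to a small
    binary expression, mirroring genAlterVariable above."""
    if randint(0, 1):
        return f"{name} {choice(MUT_ASSIGN)} {operand}"
    return f"{name} = {operand}{choice(MUTATORS)}{operand}"
```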
64de32a857f5543b9b0f75b96577cc0077d312bb059e8217b19dfcff5c16c32b
def genConditional(self): 'Generates a string representation of a conditional statement' if self.end: return self.decrementScope() self.variable_names.push(self.variable_names.peek()) conditionalResult = 0 operatorResult = randint(0, 6) optionsResult = None (value1Result, value2Result) = self.genConditionalVals() string = (((((((((self.mkTab() + self.CONDITIONAL_LIST[conditionalResult]) + ' ') + value1Result) + ' ') + self.COMPARATIVE_OPERATOR_LIST[operatorResult]) + ' ') + value2Result) + ':') + self.NEWLINE) self.scope_depth += 1 if (self.scope_depth < self.MAX_SCOPE): optionsResult = randint(0, 3) string += self.OPTIONS[optionsResult]() else: self.end = True if self.returnDeclared: string += self.decrementScope() else: string += self.genReturn() return string if (optionsResult != 3): string += self.decrementScope() return string
Generates a string representation of a conditional statement
pythonFinal.py
genConditional
austenstrine/RandomPyCodeGenerator
0
python
def genConditional(self): if self.end: return self.decrementScope() self.variable_names.push(self.variable_names.peek()) conditionalResult = 0 operatorResult = randint(0, 6) optionsResult = None (value1Result, value2Result) = self.genConditionalVals() string = (((((((((self.mkTab() + self.CONDITIONAL_LIST[conditionalResult]) + ' ') + value1Result) + ' ') + self.COMPARATIVE_OPERATOR_LIST[operatorResult]) + ' ') + value2Result) + ':') + self.NEWLINE) self.scope_depth += 1 if (self.scope_depth < self.MAX_SCOPE): optionsResult = randint(0, 3) string += self.OPTIONS[optionsResult]() else: self.end = True if self.returnDeclared: string += self.decrementScope() else: string += self.genReturn() return string if (optionsResult != 3): string += self.decrementScope() return string
def genConditional(self): if self.end: return self.decrementScope() self.variable_names.push(self.variable_names.peek()) conditionalResult = 0 operatorResult = randint(0, 6) optionsResult = None (value1Result, value2Result) = self.genConditionalVals() string = (((((((((self.mkTab() + self.CONDITIONAL_LIST[conditionalResult]) + ' ') + value1Result) + ' ') + self.COMPARATIVE_OPERATOR_LIST[operatorResult]) + ' ') + value2Result) + ':') + self.NEWLINE) self.scope_depth += 1 if (self.scope_depth < self.MAX_SCOPE): optionsResult = randint(0, 3) string += self.OPTIONS[optionsResult]() else: self.end = True if self.returnDeclared: string += self.decrementScope() else: string += self.genReturn() return string if (optionsResult != 3): string += self.decrementScope() return string<|docstring|>Generates a string representation of a conditional statement<|endoftext|>
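The depth guard genConditional applies (recurse while below MAX_SCOPE, otherwise force a return) can be sketched with a fixed condition:

```python
MAX_SCOPE = 4

def gen_block(depth=1):
    """Nest conditionals until MAX_SCOPE is hit, then emit a return --
    the same depth guard genConditional applies before recursing."""
    indent = '\t' * depth
    if depth >= MAX_SCOPE:
        return indent + 'return 0\n'
    return indent + 'if x > 0:\n' + gen_block(depth + 1)
```

Each level adds one tab of indentation, so the emitted text nests the way the generator's scope counter does.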
83a7e38b5943b293da97ce3458ada857761f2ac95af2f46fa4c8d40d403b979c
def size(t: T) -> int: '\n Total number of words in the dictionary\n ' return len(t)
Total number of words in the dictionary
uwb/src/dictionary.py
size
lippirk/uwb
0
python
def size(t: T) -> int: '\n \n ' return len(t)
def size(t: T) -> int: '\n \n ' return len(t)<|docstring|>Total number of words in the dictionary<|endoftext|>
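`size` works on any sized container; for example, with a dict mapping words to counts it returns the number of distinct words:

```python
def size(t) -> int:
    """Total number of words in the dictionary (len of the container)."""
    return len(t)

counts = {'the': 3, 'cat': 1, 'sat': 1}
assert size(counts) == 3
assert size({}) == 0
```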
3815e18d123f526f976a0ccd9b69d0c915cd4ba876172e541a4b297ca1693aee
def __init__(self, config, **kwargs): '\n Creates a new service client\n\n :param dict config:\n Configuration keys and values as per `SDK and Tool Configuration <https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm>`__.\n The :py:meth:`~oci.config.from_file` method can be used to load configuration from a file. Alternatively, a ``dict`` can be passed. You can validate_config\n the dict using :py:meth:`~oci.config.validate_config`\n\n :param str service_endpoint: (optional)\n The endpoint of the service to call using this client. For example ``https://iaas.us-ashburn-1.oraclecloud.com``. If this keyword argument is\n not provided then it will be derived using the region in the config parameter. You should only provide this keyword argument if you have an explicit\n need to specify a service endpoint.\n\n :param timeout: (optional)\n The connection and read timeouts for the client. The default values are connection timeout 10 seconds and read timeout 60 seconds. This keyword argument can be provided\n as a single float, in which case the value provided is used for both the read and connection timeouts, or as a tuple of two floats. If\n a tuple is provided then the first value is used as the connection timeout and the second value as the read timeout.\n :type timeout: float or tuple(float, float)\n\n :param signer: (optional)\n The signer to use when signing requests made by the service client. 
The default is to use a :py:class:`~oci.signer.Signer` based on the values\n provided in the config parameter.\n\n One use case for this parameter is for `Instance Principals authentication <https://docs.cloud.oracle.com/Content/Identity/Tasks/callingservicesfrominstances.htm>`__\n by passing an instance of :py:class:`~oci.auth.signers.InstancePrincipalsSecurityTokenSigner` as the value for this keyword argument\n :type signer: :py:class:`~oci.signer.AbstractBaseSigner`\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to all calls made by this service client (i.e. at the client level). There is no retry strategy applied by default.\n Retry strategies can also be applied at the operation level by passing a ``retry_strategy`` keyword argument as part of calling the operation.\n Any value provided at the operation level will override whatever is specified at the client level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`\n is also available. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n :param obj circuit_breaker_strategy: (optional)\n A circuit breaker strategy to apply to all calls made by this service client (i.e. at the client level).\n This client uses :py:data:`~oci.circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY` as default if no circuit breaker strategy is provided.\n The specifics of circuit breaker strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/circuit_breakers.html>`__.\n\n :param function circuit_breaker_callback: (optional)\n Callback function to receive any exceptions triggered by the circuit breaker.\n\n :param allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this client should allow control characters in the response object. 
By default, the client will not\n allow control characters to be in the response object.\n ' validate_config(config, signer=kwargs.get('signer')) if ('signer' in kwargs): signer = kwargs['signer'] elif (AUTHENTICATION_TYPE_FIELD_NAME in config): signer = get_signer_from_authentication_type(config) else: signer = Signer(tenancy=config['tenancy'], user=config['user'], fingerprint=config['fingerprint'], private_key_file_location=config.get('key_file'), pass_phrase=get_config_value_or_default(config, 'pass_phrase'), private_key_content=config.get('key_content')) base_client_init_kwargs = {'regional_client': True, 'service_endpoint': kwargs.get('service_endpoint'), 'base_path': '/oalapp/service/onesubs/proxy/20210501', 'service_endpoint_template': 'https://csaap-e.oracle.com', 'skip_deserialization': kwargs.get('skip_deserialization', False), 'circuit_breaker_strategy': kwargs.get('circuit_breaker_strategy', circuit_breaker.GLOBAL_CIRCUIT_BREAKER_STRATEGY)} if ('timeout' in kwargs): base_client_init_kwargs['timeout'] = kwargs.get('timeout') if (base_client_init_kwargs.get('circuit_breaker_strategy') is None): base_client_init_kwargs['circuit_breaker_strategy'] = circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY if ('allow_control_chars' in kwargs): base_client_init_kwargs['allow_control_chars'] = kwargs.get('allow_control_chars') self.base_client = BaseClient('computed_usage', config, signer, osub_usage_type_mapping, **base_client_init_kwargs) self.retry_strategy = kwargs.get('retry_strategy') self.circuit_breaker_callback = kwargs.get('circuit_breaker_callback')
Creates a new service client :param dict config: Configuration keys and values as per `SDK and Tool Configuration <https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm>`__. The :py:meth:`~oci.config.from_file` method can be used to load configuration from a file. Alternatively, a ``dict`` can be passed. You can validate_config the dict using :py:meth:`~oci.config.validate_config` :param str service_endpoint: (optional) The endpoint of the service to call using this client. For example ``https://iaas.us-ashburn-1.oraclecloud.com``. If this keyword argument is not provided then it will be derived using the region in the config parameter. You should only provide this keyword argument if you have an explicit need to specify a service endpoint. :param timeout: (optional) The connection and read timeouts for the client. The default values are connection timeout 10 seconds and read timeout 60 seconds. This keyword argument can be provided as a single float, in which case the value provided is used for both the read and connection timeouts, or as a tuple of two floats. If a tuple is provided then the first value is used as the connection timeout and the second value as the read timeout. :type timeout: float or tuple(float, float) :param signer: (optional) The signer to use when signing requests made by the service client. The default is to use a :py:class:`~oci.signer.Signer` based on the values provided in the config parameter. One use case for this parameter is for `Instance Principals authentication <https://docs.cloud.oracle.com/Content/Identity/Tasks/callingservicesfrominstances.htm>`__ by passing an instance of :py:class:`~oci.auth.signers.InstancePrincipalsSecurityTokenSigner` as the value for this keyword argument :type signer: :py:class:`~oci.signer.AbstractBaseSigner` :param obj retry_strategy: (optional) A retry strategy to apply to all calls made by this service client (i.e. at the client level). There is no retry strategy applied by default. 
Retry strategies can also be applied at the operation level by passing a ``retry_strategy`` keyword argument as part of calling the operation. Any value provided at the operation level will override whatever is specified at the client level. This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` is also available. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. :param obj circuit_breaker_strategy: (optional) A circuit breaker strategy to apply to all calls made by this service client (i.e. at the client level). This client uses :py:data:`~oci.circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY` as default if no circuit breaker strategy is provided. The specifics of circuit breaker strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/circuit_breakers.html>`__. :param function circuit_breaker_callback: (optional) Callback function to receive any exceptions triggered by the circuit breaker. :param allow_control_chars: (optional) allow_control_chars is a boolean to indicate whether or not this client should allow control characters in the response object. By default, the client will not allow control characters to be in the response object.
src/oci/osub_usage/computed_usage_client.py
__init__
ionaryu/oci-python-sdk
0
python
def __init__(self, config, **kwargs): '\n Creates a new service client\n\n :param dict config:\n Configuration keys and values as per `SDK and Tool Configuration <https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm>`__.\n The :py:meth:`~oci.config.from_file` method can be used to load configuration from a file. Alternatively, a ``dict`` can be passed. You can validate_config\n the dict using :py:meth:`~oci.config.validate_config`\n\n :param str service_endpoint: (optional)\n The endpoint of the service to call using this client. For example ``https://iaas.us-ashburn-1.oraclecloud.com``. If this keyword argument is\n not provided then it will be derived using the region in the config parameter. You should only provide this keyword argument if you have an explicit\n need to specify a service endpoint.\n\n :param timeout: (optional)\n The connection and read timeouts for the client. The default values are connection timeout 10 seconds and read timeout 60 seconds. This keyword argument can be provided\n as a single float, in which case the value provided is used for both the read and connection timeouts, or as a tuple of two floats. If\n a tuple is provided then the first value is used as the connection timeout and the second value as the read timeout.\n :type timeout: float or tuple(float, float)\n\n :param signer: (optional)\n The signer to use when signing requests made by the service client. 
The default is to use a :py:class:`~oci.signer.Signer` based on the values\n provided in the config parameter.\n\n One use case for this parameter is for `Instance Principals authentication <https://docs.cloud.oracle.com/Content/Identity/Tasks/callingservicesfrominstances.htm>`__\n by passing an instance of :py:class:`~oci.auth.signers.InstancePrincipalsSecurityTokenSigner` as the value for this keyword argument\n :type signer: :py:class:`~oci.signer.AbstractBaseSigner`\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to all calls made by this service client (i.e. at the client level). There is no retry strategy applied by default.\n Retry strategies can also be applied at the operation level by passing a ``retry_strategy`` keyword argument as part of calling the operation.\n Any value provided at the operation level will override whatever is specified at the client level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`\n is also available. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n :param obj circuit_breaker_strategy: (optional)\n A circuit breaker strategy to apply to all calls made by this service client (i.e. at the client level).\n This client uses :py:data:`~oci.circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY` as default if no circuit breaker strategy is provided.\n The specifics of circuit breaker strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/circuit_breakers.html>`__.\n\n :param function circuit_breaker_callback: (optional)\n Callback function to receive any exceptions triggered by the circuit breaker.\n\n :param allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this client should allow control characters in the response object. 
By default, the client will not\n allow control characters to be in the response object.\n ' validate_config(config, signer=kwargs.get('signer')) if ('signer' in kwargs): signer = kwargs['signer'] elif (AUTHENTICATION_TYPE_FIELD_NAME in config): signer = get_signer_from_authentication_type(config) else: signer = Signer(tenancy=config['tenancy'], user=config['user'], fingerprint=config['fingerprint'], private_key_file_location=config.get('key_file'), pass_phrase=get_config_value_or_default(config, 'pass_phrase'), private_key_content=config.get('key_content')) base_client_init_kwargs = {'regional_client': True, 'service_endpoint': kwargs.get('service_endpoint'), 'base_path': '/oalapp/service/onesubs/proxy/20210501', 'service_endpoint_template': 'https://csaap-e.oracle.com', 'skip_deserialization': kwargs.get('skip_deserialization', False), 'circuit_breaker_strategy': kwargs.get('circuit_breaker_strategy', circuit_breaker.GLOBAL_CIRCUIT_BREAKER_STRATEGY)} if ('timeout' in kwargs): base_client_init_kwargs['timeout'] = kwargs.get('timeout') if (base_client_init_kwargs.get('circuit_breaker_strategy') is None): base_client_init_kwargs['circuit_breaker_strategy'] = circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY if ('allow_control_chars' in kwargs): base_client_init_kwargs['allow_control_chars'] = kwargs.get('allow_control_chars') self.base_client = BaseClient('computed_usage', config, signer, osub_usage_type_mapping, **base_client_init_kwargs) self.retry_strategy = kwargs.get('retry_strategy') self.circuit_breaker_callback = kwargs.get('circuit_breaker_callback')
def __init__(self, config, **kwargs): '\n Creates a new service client\n\n :param dict config:\n Configuration keys and values as per `SDK and Tool Configuration <https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm>`__.\n The :py:meth:`~oci.config.from_file` method can be used to load configuration from a file. Alternatively, a ``dict`` can be passed. You can validate_config\n the dict using :py:meth:`~oci.config.validate_config`\n\n :param str service_endpoint: (optional)\n The endpoint of the service to call using this client. For example ``https://iaas.us-ashburn-1.oraclecloud.com``. If this keyword argument is\n not provided then it will be derived using the region in the config parameter. You should only provide this keyword argument if you have an explicit\n need to specify a service endpoint.\n\n :param timeout: (optional)\n The connection and read timeouts for the client. The default values are connection timeout 10 seconds and read timeout 60 seconds. This keyword argument can be provided\n as a single float, in which case the value provided is used for both the read and connection timeouts, or as a tuple of two floats. If\n a tuple is provided then the first value is used as the connection timeout and the second value as the read timeout.\n :type timeout: float or tuple(float, float)\n\n :param signer: (optional)\n The signer to use when signing requests made by the service client. 
The default is to use a :py:class:`~oci.signer.Signer` based on the values\n provided in the config parameter.\n\n One use case for this parameter is for `Instance Principals authentication <https://docs.cloud.oracle.com/Content/Identity/Tasks/callingservicesfrominstances.htm>`__\n by passing an instance of :py:class:`~oci.auth.signers.InstancePrincipalsSecurityTokenSigner` as the value for this keyword argument\n :type signer: :py:class:`~oci.signer.AbstractBaseSigner`\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to all calls made by this service client (i.e. at the client level). There is no retry strategy applied by default.\n Retry strategies can also be applied at the operation level by passing a ``retry_strategy`` keyword argument as part of calling the operation.\n Any value provided at the operation level will override whatever is specified at the client level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY`\n is also available. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n :param obj circuit_breaker_strategy: (optional)\n A circuit breaker strategy to apply to all calls made by this service client (i.e. at the client level).\n This client uses :py:data:`~oci.circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY` as default if no circuit breaker strategy is provided.\n The specifics of circuit breaker strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/circuit_breakers.html>`__.\n\n :param function circuit_breaker_callback: (optional)\n Callback function to receive any exceptions triggered by the circuit breaker.\n\n :param allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this client should allow control characters in the response object. 
By default, the client will not\n allow control characters to be in the response object.\n ' validate_config(config, signer=kwargs.get('signer')) if ('signer' in kwargs): signer = kwargs['signer'] elif (AUTHENTICATION_TYPE_FIELD_NAME in config): signer = get_signer_from_authentication_type(config) else: signer = Signer(tenancy=config['tenancy'], user=config['user'], fingerprint=config['fingerprint'], private_key_file_location=config.get('key_file'), pass_phrase=get_config_value_or_default(config, 'pass_phrase'), private_key_content=config.get('key_content')) base_client_init_kwargs = {'regional_client': True, 'service_endpoint': kwargs.get('service_endpoint'), 'base_path': '/oalapp/service/onesubs/proxy/20210501', 'service_endpoint_template': 'https://csaap-e.oracle.com', 'skip_deserialization': kwargs.get('skip_deserialization', False), 'circuit_breaker_strategy': kwargs.get('circuit_breaker_strategy', circuit_breaker.GLOBAL_CIRCUIT_BREAKER_STRATEGY)} if ('timeout' in kwargs): base_client_init_kwargs['timeout'] = kwargs.get('timeout') if (base_client_init_kwargs.get('circuit_breaker_strategy') is None): base_client_init_kwargs['circuit_breaker_strategy'] = circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY if ('allow_control_chars' in kwargs): base_client_init_kwargs['allow_control_chars'] = kwargs.get('allow_control_chars') self.base_client = BaseClient('computed_usage', config, signer, osub_usage_type_mapping, **base_client_init_kwargs) self.retry_strategy = kwargs.get('retry_strategy') self.circuit_breaker_callback = kwargs.get('circuit_breaker_callback')<|docstring|>Creates a new service client :param dict config: Configuration keys and values as per `SDK and Tool Configuration <https://docs.cloud.oracle.com/Content/API/Concepts/sdkconfig.htm>`__. The :py:meth:`~oci.config.from_file` method can be used to load configuration from a file. Alternatively, a ``dict`` can be passed. 
You can validate_config the dict using :py:meth:`~oci.config.validate_config` :param str service_endpoint: (optional) The endpoint of the service to call using this client. For example ``https://iaas.us-ashburn-1.oraclecloud.com``. If this keyword argument is not provided then it will be derived using the region in the config parameter. You should only provide this keyword argument if you have an explicit need to specify a service endpoint. :param timeout: (optional) The connection and read timeouts for the client. The default values are connection timeout 10 seconds and read timeout 60 seconds. This keyword argument can be provided as a single float, in which case the value provided is used for both the read and connection timeouts, or as a tuple of two floats. If a tuple is provided then the first value is used as the connection timeout and the second value as the read timeout. :type timeout: float or tuple(float, float) :param signer: (optional) The signer to use when signing requests made by the service client. The default is to use a :py:class:`~oci.signer.Signer` based on the values provided in the config parameter. One use case for this parameter is for `Instance Principals authentication <https://docs.cloud.oracle.com/Content/Identity/Tasks/callingservicesfrominstances.htm>`__ by passing an instance of :py:class:`~oci.auth.signers.InstancePrincipalsSecurityTokenSigner` as the value for this keyword argument :type signer: :py:class:`~oci.signer.AbstractBaseSigner` :param obj retry_strategy: (optional) A retry strategy to apply to all calls made by this service client (i.e. at the client level). There is no retry strategy applied by default. Retry strategies can also be applied at the operation level by passing a ``retry_strategy`` keyword argument as part of calling the operation. Any value provided at the operation level will override whatever is specified at the client level. 
This should be one of the strategies available in the :py:mod:`~oci.retry` module. A convenience :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` is also available. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. :param obj circuit_breaker_strategy: (optional) A circuit breaker strategy to apply to all calls made by this service client (i.e. at the client level). This client uses :py:data:`~oci.circuit_breaker.DEFAULT_CIRCUIT_BREAKER_STRATEGY` as default if no circuit breaker strategy is provided. The specifics of circuit breaker strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/circuit_breakers.html>`__. :param function circuit_breaker_callback: (optional) Callback function to receive any exceptions triggered by the circuit breaker. :param allow_control_chars: (optional) allow_control_chars is a boolean to indicate whether or not this client should allow control characters in the response object. By default, the client will not allow control characters to be in the response object.<|endoftext|>
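The constructor's signer selection reduces to a three-way precedence (explicit `signer` kwarg, then an authentication type declared in the config, then a default built from config keys); sketched here with plain strings standing in for real signer objects:

```python
def pick_signer(config, **kwargs):
    """Mirror the precedence used by the client constructor above: an
    explicit `signer` kwarg wins, then a config-declared authentication
    type, then a default derived from config keys. The returned strings
    are placeholders, not real signers."""
    if 'signer' in kwargs:
        return kwargs['signer']
    if 'authentication_type' in config:
        return 'signer-for-' + config['authentication_type']
    return 'default-signer-for-' + config.get('user', '?')
```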
c952536c2dfc2a3b2a801ac482e92a4cd385c2f4f43a3d7e63f0ccc27a32fa5a
def get_computed_usage(self, computed_usage_id, compartment_id, **kwargs): '\n This is an API which returns Computed Usage corresponding to the id passed\n\n\n :param str computed_usage_id: (required)\n The Computed Usage Id\n\n :param str compartment_id: (required)\n The OCID of the root compartment.\n\n :param list[str] fields: (optional)\n Partial response refers to an optimization technique offered\n by the RESTful web APIs to return only the information\n (fields) required by the client. This parameter is used to control what fields to\n return.\n\n :param str opc_request_id: (optional)\n Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.\n\n :param str x_one_origin_region: (optional)\n The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc.\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. 
This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.\n The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.\n\n :param bool allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.\n By default, the response will not allow control characters in strings\n\n :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.osub_usage.models.ComputedUsage`\n :rtype: :class:`~oci.response.Response`\n\n :example:\n Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/get_computed_usage.py.html>`__ to see an example of how to use get_computed_usage API.\n ' resource_path = '/computedUsages/{computedUsageId}' method = 'GET' expected_kwargs = ['allow_control_chars', 'retry_strategy', 'fields', 'opc_request_id', 'x_one_origin_region'] extra_kwargs = [_key for _key in six.iterkeys(kwargs) if (_key not in expected_kwargs)] if extra_kwargs: raise ValueError('get_computed_usage got unknown kwargs: {!r}'.format(extra_kwargs)) path_params = {'computedUsageId': computed_usage_id} path_params = {k: v for (k, v) in six.iteritems(path_params) if (v is not missing)} for (k, v) in six.iteritems(path_params): if ((v is None) or (isinstance(v, six.string_types) and (len(v.strip()) == 0))): raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k)) query_params = {'compartmentId': compartment_id, 'fields': self.base_client.generate_collection_format_param(kwargs.get('fields', missing), 'multi')} query_params = {k: v for (k, v) in six.iteritems(query_params) if ((v is 
not missing) and (v is not None))} header_params = {'accept': 'application/json', 'content-type': 'application/json', 'opc-request-id': kwargs.get('opc_request_id', missing), 'x-one-origin-region': kwargs.get('x_one_origin_region', missing)} header_params = {k: v for (k, v) in six.iteritems(header_params) if ((v is not missing) and (v is not None))} retry_strategy = self.base_client.get_preferred_retry_strategy(operation_retry_strategy=kwargs.get('retry_strategy'), client_retry_strategy=self.retry_strategy) if retry_strategy: if (not isinstance(retry_strategy, retry.NoneRetryStrategy)): self.base_client.add_opc_client_retries_header(header_params) retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback) return retry_strategy.make_retrying_call(self.base_client.call_api, resource_path=resource_path, method=method, path_params=path_params, query_params=query_params, header_params=header_params, response_type='ComputedUsage') else: return self.base_client.call_api(resource_path=resource_path, method=method, path_params=path_params, query_params=query_params, header_params=header_params, response_type='ComputedUsage')
This is an API which returns Computed Usage corresponding to the id passed :param str computed_usage_id: (required) The Computed Usage Id :param str compartment_id: (required) The OCID of the root compartment. :param list[str] fields: (optional) Partial response refers to an optimization technique offered by the RESTful web APIs to return only the information (fields) required by the client. This parameter is used to control what fields to return. :param str opc_request_id: (optional) Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID. :param str x_one_origin_region: (optional) The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc. :param obj retry_strategy: (optional) A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level. This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`. :param bool allow_control_chars: (optional) allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object. 
By default, the response will not allow control characters in strings :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.osub_usage.models.ComputedUsage` :rtype: :class:`~oci.response.Response` :example: Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/get_computed_usage.py.html>`__ to see an example of how to use get_computed_usage API.
src/oci/osub_usage/computed_usage_client.py
get_computed_usage
ionaryu/oci-python-sdk
0
python
def get_computed_usage(self, computed_usage_id, compartment_id, **kwargs): '\n This is an API which returns Computed Usage corresponding to the id passed\n\n\n :param str computed_usage_id: (required)\n The Computed Usage Id\n\n :param str compartment_id: (required)\n The OCID of the root compartment.\n\n :param list[str] fields: (optional)\n Partial response refers to an optimization technique offered\n by the RESTful web APIs to return only the information\n (fields) required by the client. This parameter is used to control what fields to\n return.\n\n :param str opc_request_id: (optional)\n Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.\n\n :param str x_one_origin_region: (optional)\n The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc.\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. 
This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.\n The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.\n\n :param bool allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.\n By default, the response will not allow control characters in strings\n\n :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.osub_usage.models.ComputedUsage`\n :rtype: :class:`~oci.response.Response`\n\n :example:\n Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/get_computed_usage.py.html>`__ to see an example of how to use get_computed_usage API.\n ' resource_path = '/computedUsages/{computedUsageId}' method = 'GET' expected_kwargs = ['allow_control_chars', 'retry_strategy', 'fields', 'opc_request_id', 'x_one_origin_region'] extra_kwargs = [_key for _key in six.iterkeys(kwargs) if (_key not in expected_kwargs)] if extra_kwargs: raise ValueError('get_computed_usage got unknown kwargs: {!r}'.format(extra_kwargs)) path_params = {'computedUsageId': computed_usage_id} path_params = {k: v for (k, v) in six.iteritems(path_params) if (v is not missing)} for (k, v) in six.iteritems(path_params): if ((v is None) or (isinstance(v, six.string_types) and (len(v.strip()) == 0))): raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k)) query_params = {'compartmentId': compartment_id, 'fields': self.base_client.generate_collection_format_param(kwargs.get('fields', missing), 'multi')} query_params = {k: v for (k, v) in six.iteritems(query_params) if ((v is 
not missing) and (v is not None))} header_params = {'accept': 'application/json', 'content-type': 'application/json', 'opc-request-id': kwargs.get('opc_request_id', missing), 'x-one-origin-region': kwargs.get('x_one_origin_region', missing)} header_params = {k: v for (k, v) in six.iteritems(header_params) if ((v is not missing) and (v is not None))} retry_strategy = self.base_client.get_preferred_retry_strategy(operation_retry_strategy=kwargs.get('retry_strategy'), client_retry_strategy=self.retry_strategy) if retry_strategy: if (not isinstance(retry_strategy, retry.NoneRetryStrategy)): self.base_client.add_opc_client_retries_header(header_params) retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback) return retry_strategy.make_retrying_call(self.base_client.call_api, resource_path=resource_path, method=method, path_params=path_params, query_params=query_params, header_params=header_params, response_type='ComputedUsage') else: return self.base_client.call_api(resource_path=resource_path, method=method, path_params=path_params, query_params=query_params, header_params=header_params, response_type='ComputedUsage')
def get_computed_usage(self, computed_usage_id, compartment_id, **kwargs): '\n This is an API which returns Computed Usage corresponding to the id passed\n\n\n :param str computed_usage_id: (required)\n The Computed Usage Id\n\n :param str compartment_id: (required)\n The OCID of the root compartment.\n\n :param list[str] fields: (optional)\n Partial response refers to an optimization technique offered\n by the RESTful web APIs to return only the information\n (fields) required by the client. This parameter is used to control what fields to\n return.\n\n :param str opc_request_id: (optional)\n Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.\n\n :param str x_one_origin_region: (optional)\n The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc.\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. 
This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.\n The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.\n\n :param bool allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.\n By default, the response will not allow control characters in strings\n\n :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.osub_usage.models.ComputedUsage`\n :rtype: :class:`~oci.response.Response`\n\n :example:\n Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/get_computed_usage.py.html>`__ to see an example of how to use get_computed_usage API.\n ' resource_path = '/computedUsages/{computedUsageId}' method = 'GET' expected_kwargs = ['allow_control_chars', 'retry_strategy', 'fields', 'opc_request_id', 'x_one_origin_region'] extra_kwargs = [_key for _key in six.iterkeys(kwargs) if (_key not in expected_kwargs)] if extra_kwargs: raise ValueError('get_computed_usage got unknown kwargs: {!r}'.format(extra_kwargs)) path_params = {'computedUsageId': computed_usage_id} path_params = {k: v for (k, v) in six.iteritems(path_params) if (v is not missing)} for (k, v) in six.iteritems(path_params): if ((v is None) or (isinstance(v, six.string_types) and (len(v.strip()) == 0))): raise ValueError('Parameter {} cannot be None, whitespace or empty string'.format(k)) query_params = {'compartmentId': compartment_id, 'fields': self.base_client.generate_collection_format_param(kwargs.get('fields', missing), 'multi')} query_params = {k: v for (k, v) in six.iteritems(query_params) if ((v is 
not missing) and (v is not None))} header_params = {'accept': 'application/json', 'content-type': 'application/json', 'opc-request-id': kwargs.get('opc_request_id', missing), 'x-one-origin-region': kwargs.get('x_one_origin_region', missing)} header_params = {k: v for (k, v) in six.iteritems(header_params) if ((v is not missing) and (v is not None))} retry_strategy = self.base_client.get_preferred_retry_strategy(operation_retry_strategy=kwargs.get('retry_strategy'), client_retry_strategy=self.retry_strategy) if retry_strategy: if (not isinstance(retry_strategy, retry.NoneRetryStrategy)): self.base_client.add_opc_client_retries_header(header_params) retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback) return retry_strategy.make_retrying_call(self.base_client.call_api, resource_path=resource_path, method=method, path_params=path_params, query_params=query_params, header_params=header_params, response_type='ComputedUsage') else: return self.base_client.call_api(resource_path=resource_path, method=method, path_params=path_params, query_params=query_params, header_params=header_params, response_type='ComputedUsage')<|docstring|>This is an API which returns Computed Usage corresponding to the id passed :param str computed_usage_id: (required) The Computed Usage Id :param str compartment_id: (required) The OCID of the root compartment. :param list[str] fields: (optional) Partial response refers to an optimization technique offered by the RESTful web APIs to return only the information (fields) required by the client. This parameter is used to control what fields to return. :param str opc_request_id: (optional) Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID. :param str x_one_origin_region: (optional) The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc. 
:param obj retry_strategy: (optional) A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level. This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`. :param bool allow_control_chars: (optional) allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object. By default, the response will not allow control characters in strings :return: A :class:`~oci.response.Response` object with data of type :class:`~oci.osub_usage.models.ComputedUsage` :rtype: :class:`~oci.response.Response` :example: Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/get_computed_usage.py.html>`__ to see an example of how to use get_computed_usage API.<|endoftext|>
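Both client methods in these records follow the same parameter-handling pattern: reject unexpected keyword arguments, build a params dict using a sentinel default so that "not passed" can be told apart from an explicit `None`, then filter both out before calling the API. A minimal standalone sketch of that pattern is below; note that `missing` here is a stand-in sentinel (the real SDK defines its own), the plain `.items()` replaces the source's `six.iteritems` compatibility shim, and `build_query_params` is a hypothetical helper, not part of the SDK.

```python
# Stand-in sentinel: distinguishes "argument never supplied" from an
# explicit None, mirroring the `missing` object used in the SDK methods.
missing = object()


def build_query_params(compartment_id, **kwargs):
    # Reject kwargs the operation does not understand, as the SDK does
    # with its expected_kwargs / extra_kwargs check.
    expected_kwargs = ['fields', 'opc_request_id', 'x_one_origin_region']
    extra_kwargs = [k for k in kwargs if k not in expected_kwargs]
    if extra_kwargs:
        raise ValueError('got unknown kwargs: {!r}'.format(extra_kwargs))

    # Optional parameters default to the sentinel, not None.
    query_params = {
        'compartmentId': compartment_id,
        'fields': kwargs.get('fields', missing),
    }

    # Drop entries that were never supplied (sentinel) or explicitly None,
    # mirroring the dict comprehensions in the methods above.
    return {k: v for k, v in query_params.items()
            if v is not missing and v is not None}
```

With this shape, `build_query_params('ocid1...example')` yields only `{'compartmentId': 'ocid1...example'}`, while passing `fields=['quantity']` keeps that key; the identity checks (`is not`) are what make the sentinel pattern safe even for falsy values like `[]` or `0`.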
e62207cfb0c6870dce78ecdadde4fe2ebcc8fa559693960697d397d6809cc8e8
def list_computed_usage_aggregateds(self, compartment_id, subscription_id, time_from, time_to, **kwargs): '\n This is a collection API which returns a list of aggregated computed usage details (there can be multiple Parent Products under a given SubID each of which is represented under Subscription Service Line # in SPM).\n\n\n :param str compartment_id: (required)\n The OCID of the root compartment.\n\n :param str subscription_id: (required)\n Subscription Id is an identifier associated to the service used for filter the Computed Usage in SPM.\n\n :param datetime time_from: (required)\n Initial date to filter Computed Usage data in SPM. In the case of non aggregated data the time period between of fromDate and toDate , expressed in RFC 3339 timestamp format.\n\n :param datetime time_to: (required)\n Final date to filter Computed Usage data in SPM, expressed in RFC 3339 timestamp format.\n\n :param str parent_product: (optional)\n Product part number for subscribed service line, called parent product.\n\n :param str grouping: (optional)\n Grouping criteria to use for aggregate the computed Usage, either hourly (`HOURLY`), daily (`DAILY`), monthly(`MONTHLY`) or none (`NONE`) to not follow a grouping criteria by date.\n\n Allowed values are: "HOURLY", "DAILY", "MONTHLY", "NONE"\n\n :param int limit: (optional)\n The maximum number aggregatedComputedUsages of items to return within the Subscription "List" call, this\n counts the overall count across all items\n Example: `500`\n\n :param str page: (optional)\n The value of the `opc-next-page` response header from the previous "List" call.\n\n :param str opc_request_id: (optional)\n Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.\n\n :param str x_one_origin_region: (optional)\n The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. 
ap-mumbai-1, us-phoenix-1 etc.\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.\n The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.\n\n :param bool allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.\n By default, the response will not allow control characters in strings\n\n :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.osub_usage.models.ComputedUsageAggregatedSummary`\n :rtype: :class:`~oci.response.Response`\n\n :example:\n Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/list_computed_usage_aggregateds.py.html>`__ to see an example of how to use list_computed_usage_aggregateds API.\n ' resource_path = '/computedUsages/aggregated' method = 'GET' expected_kwargs = ['allow_control_chars', 'retry_strategy', 'parent_product', 'grouping', 'limit', 'page', 'opc_request_id', 'x_one_origin_region'] extra_kwargs = [_key for _key in six.iterkeys(kwargs) if (_key not in expected_kwargs)] if extra_kwargs: raise ValueError('list_computed_usage_aggregateds got unknown kwargs: {!r}'.format(extra_kwargs)) if ('grouping' in kwargs): grouping_allowed_values = ['HOURLY', 'DAILY', 'MONTHLY', 'NONE'] if (kwargs['grouping'] not in grouping_allowed_values): raise ValueError('Invalid value for 
`grouping`, must be one of {0}'.format(grouping_allowed_values)) query_params = {'compartmentId': compartment_id, 'subscriptionId': subscription_id, 'timeFrom': time_from, 'timeTo': time_to, 'parentProduct': kwargs.get('parent_product', missing), 'grouping': kwargs.get('grouping', missing), 'limit': kwargs.get('limit', missing), 'page': kwargs.get('page', missing)} query_params = {k: v for (k, v) in six.iteritems(query_params) if ((v is not missing) and (v is not None))} header_params = {'accept': 'application/json', 'content-type': 'application/json', 'opc-request-id': kwargs.get('opc_request_id', missing), 'x-one-origin-region': kwargs.get('x_one_origin_region', missing)} header_params = {k: v for (k, v) in six.iteritems(header_params) if ((v is not missing) and (v is not None))} retry_strategy = self.base_client.get_preferred_retry_strategy(operation_retry_strategy=kwargs.get('retry_strategy'), client_retry_strategy=self.retry_strategy) if retry_strategy: if (not isinstance(retry_strategy, retry.NoneRetryStrategy)): self.base_client.add_opc_client_retries_header(header_params) retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback) return retry_strategy.make_retrying_call(self.base_client.call_api, resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageAggregatedSummary]') else: return self.base_client.call_api(resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageAggregatedSummary]')
This is a collection API which returns a list of aggregated computed usage details (there can be multiple Parent Products under a given SubID each of which is represented under Subscription Service Line # in SPM). :param str compartment_id: (required) The OCID of the root compartment. :param str subscription_id: (required) Subscription Id is an identifier associated to the service used for filter the Computed Usage in SPM. :param datetime time_from: (required) Initial date to filter Computed Usage data in SPM. In the case of non aggregated data the time period between of fromDate and toDate , expressed in RFC 3339 timestamp format. :param datetime time_to: (required) Final date to filter Computed Usage data in SPM, expressed in RFC 3339 timestamp format. :param str parent_product: (optional) Product part number for subscribed service line, called parent product. :param str grouping: (optional) Grouping criteria to use for aggregate the computed Usage, either hourly (`HOURLY`), daily (`DAILY`), monthly(`MONTHLY`) or none (`NONE`) to not follow a grouping criteria by date. Allowed values are: "HOURLY", "DAILY", "MONTHLY", "NONE" :param int limit: (optional) The maximum number aggregatedComputedUsages of items to return within the Subscription "List" call, this counts the overall count across all items Example: `500` :param str page: (optional) The value of the `opc-next-page` response header from the previous "List" call. :param str opc_request_id: (optional) Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID. :param str x_one_origin_region: (optional) The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc. :param obj retry_strategy: (optional) A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level. 
This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`. :param bool allow_control_chars: (optional) allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object. By default, the response will not allow control characters in strings :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.osub_usage.models.ComputedUsageAggregatedSummary` :rtype: :class:`~oci.response.Response` :example: Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/list_computed_usage_aggregateds.py.html>`__ to see an example of how to use list_computed_usage_aggregateds API.
src/oci/osub_usage/computed_usage_client.py
list_computed_usage_aggregateds
ionaryu/oci-python-sdk
0
python
def list_computed_usage_aggregateds(self, compartment_id, subscription_id, time_from, time_to, **kwargs): '\n This is a collection API which returns a list of aggregated computed usage details (there can be multiple Parent Products under a given SubID each of which is represented under Subscription Service Line # in SPM).\n\n\n :param str compartment_id: (required)\n The OCID of the root compartment.\n\n :param str subscription_id: (required)\n Subscription Id is an identifier associated to the service used for filter the Computed Usage in SPM.\n\n :param datetime time_from: (required)\n Initial date to filter Computed Usage data in SPM. In the case of non aggregated data the time period between of fromDate and toDate , expressed in RFC 3339 timestamp format.\n\n :param datetime time_to: (required)\n Final date to filter Computed Usage data in SPM, expressed in RFC 3339 timestamp format.\n\n :param str parent_product: (optional)\n Product part number for subscribed service line, called parent product.\n\n :param str grouping: (optional)\n Grouping criteria to use for aggregate the computed Usage, either hourly (`HOURLY`), daily (`DAILY`), monthly(`MONTHLY`) or none (`NONE`) to not follow a grouping criteria by date.\n\n Allowed values are: "HOURLY", "DAILY", "MONTHLY", "NONE"\n\n :param int limit: (optional)\n The maximum number aggregatedComputedUsages of items to return within the Subscription "List" call, this\n counts the overall count across all items\n Example: `500`\n\n :param str page: (optional)\n The value of the `opc-next-page` response header from the previous "List" call.\n\n :param str opc_request_id: (optional)\n Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.\n\n :param str x_one_origin_region: (optional)\n The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. 
ap-mumbai-1, us-phoenix-1 etc.\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.\n The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.\n\n :param bool allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.\n By default, the response will not allow control characters in strings\n\n :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.osub_usage.models.ComputedUsageAggregatedSummary`\n :rtype: :class:`~oci.response.Response`\n\n :example:\n Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/list_computed_usage_aggregateds.py.html>`__ to see an example of how to use list_computed_usage_aggregateds API.\n ' resource_path = '/computedUsages/aggregated' method = 'GET' expected_kwargs = ['allow_control_chars', 'retry_strategy', 'parent_product', 'grouping', 'limit', 'page', 'opc_request_id', 'x_one_origin_region'] extra_kwargs = [_key for _key in six.iterkeys(kwargs) if (_key not in expected_kwargs)] if extra_kwargs: raise ValueError('list_computed_usage_aggregateds got unknown kwargs: {!r}'.format(extra_kwargs)) if ('grouping' in kwargs): grouping_allowed_values = ['HOURLY', 'DAILY', 'MONTHLY', 'NONE'] if (kwargs['grouping'] not in grouping_allowed_values): raise ValueError('Invalid value for 
`grouping`, must be one of {0}'.format(grouping_allowed_values)) query_params = {'compartmentId': compartment_id, 'subscriptionId': subscription_id, 'timeFrom': time_from, 'timeTo': time_to, 'parentProduct': kwargs.get('parent_product', missing), 'grouping': kwargs.get('grouping', missing), 'limit': kwargs.get('limit', missing), 'page': kwargs.get('page', missing)} query_params = {k: v for (k, v) in six.iteritems(query_params) if ((v is not missing) and (v is not None))} header_params = {'accept': 'application/json', 'content-type': 'application/json', 'opc-request-id': kwargs.get('opc_request_id', missing), 'x-one-origin-region': kwargs.get('x_one_origin_region', missing)} header_params = {k: v for (k, v) in six.iteritems(header_params) if ((v is not missing) and (v is not None))} retry_strategy = self.base_client.get_preferred_retry_strategy(operation_retry_strategy=kwargs.get('retry_strategy'), client_retry_strategy=self.retry_strategy) if retry_strategy: if (not isinstance(retry_strategy, retry.NoneRetryStrategy)): self.base_client.add_opc_client_retries_header(header_params) retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback) return retry_strategy.make_retrying_call(self.base_client.call_api, resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageAggregatedSummary]') else: return self.base_client.call_api(resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageAggregatedSummary]')
def list_computed_usage_aggregateds(self, compartment_id, subscription_id, time_from, time_to, **kwargs): '\n This is a collection API which returns a list of aggregated computed usage details (there can be multiple Parent Products under a given SubID each of which is represented under Subscription Service Line # in SPM).\n\n\n :param str compartment_id: (required)\n The OCID of the root compartment.\n\n :param str subscription_id: (required)\n Subscription Id is an identifier associated to the service used for filter the Computed Usage in SPM.\n\n :param datetime time_from: (required)\n Initial date to filter Computed Usage data in SPM. In the case of non aggregated data the time period between of fromDate and toDate , expressed in RFC 3339 timestamp format.\n\n :param datetime time_to: (required)\n Final date to filter Computed Usage data in SPM, expressed in RFC 3339 timestamp format.\n\n :param str parent_product: (optional)\n Product part number for subscribed service line, called parent product.\n\n :param str grouping: (optional)\n Grouping criteria to use for aggregate the computed Usage, either hourly (`HOURLY`), daily (`DAILY`), monthly(`MONTHLY`) or none (`NONE`) to not follow a grouping criteria by date.\n\n Allowed values are: "HOURLY", "DAILY", "MONTHLY", "NONE"\n\n :param int limit: (optional)\n The maximum number aggregatedComputedUsages of items to return within the Subscription "List" call, this\n counts the overall count across all items\n Example: `500`\n\n :param str page: (optional)\n The value of the `opc-next-page` response header from the previous "List" call.\n\n :param str opc_request_id: (optional)\n Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.\n\n :param str x_one_origin_region: (optional)\n The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. 
ap-mumbai-1, us-phoenix-1 etc.\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.\n The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.\n\n :param bool allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.\n By default, the response will not allow control characters in strings\n\n :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.osub_usage.models.ComputedUsageAggregatedSummary`\n :rtype: :class:`~oci.response.Response`\n\n :example:\n Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/list_computed_usage_aggregateds.py.html>`__ to see an example of how to use list_computed_usage_aggregateds API.\n ' resource_path = '/computedUsages/aggregated' method = 'GET' expected_kwargs = ['allow_control_chars', 'retry_strategy', 'parent_product', 'grouping', 'limit', 'page', 'opc_request_id', 'x_one_origin_region'] extra_kwargs = [_key for _key in six.iterkeys(kwargs) if (_key not in expected_kwargs)] if extra_kwargs: raise ValueError('list_computed_usage_aggregateds got unknown kwargs: {!r}'.format(extra_kwargs)) if ('grouping' in kwargs): grouping_allowed_values = ['HOURLY', 'DAILY', 'MONTHLY', 'NONE'] if (kwargs['grouping'] not in grouping_allowed_values): raise ValueError('Invalid value for 
`grouping`, must be one of {0}'.format(grouping_allowed_values)) query_params = {'compartmentId': compartment_id, 'subscriptionId': subscription_id, 'timeFrom': time_from, 'timeTo': time_to, 'parentProduct': kwargs.get('parent_product', missing), 'grouping': kwargs.get('grouping', missing), 'limit': kwargs.get('limit', missing), 'page': kwargs.get('page', missing)} query_params = {k: v for (k, v) in six.iteritems(query_params) if ((v is not missing) and (v is not None))} header_params = {'accept': 'application/json', 'content-type': 'application/json', 'opc-request-id': kwargs.get('opc_request_id', missing), 'x-one-origin-region': kwargs.get('x_one_origin_region', missing)} header_params = {k: v for (k, v) in six.iteritems(header_params) if ((v is not missing) and (v is not None))} retry_strategy = self.base_client.get_preferred_retry_strategy(operation_retry_strategy=kwargs.get('retry_strategy'), client_retry_strategy=self.retry_strategy) if retry_strategy: if (not isinstance(retry_strategy, retry.NoneRetryStrategy)): self.base_client.add_opc_client_retries_header(header_params) retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback) return retry_strategy.make_retrying_call(self.base_client.call_api, resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageAggregatedSummary]') else: return self.base_client.call_api(resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageAggregatedSummary]')<|docstring|>This is a collection API which returns a list of aggregated computed usage details (there can be multiple Parent Products under a given SubID each of which is represented under Subscription Service Line # in SPM). :param str compartment_id: (required) The OCID of the root compartment. 
:param str subscription_id: (required) Subscription Id is an identifier associated to the service used for filter the Computed Usage in SPM. :param datetime time_from: (required) Initial date to filter Computed Usage data in SPM. In the case of non aggregated data the time period between of fromDate and toDate , expressed in RFC 3339 timestamp format. :param datetime time_to: (required) Final date to filter Computed Usage data in SPM, expressed in RFC 3339 timestamp format. :param str parent_product: (optional) Product part number for subscribed service line, called parent product. :param str grouping: (optional) Grouping criteria to use for aggregate the computed Usage, either hourly (`HOURLY`), daily (`DAILY`), monthly(`MONTHLY`) or none (`NONE`) to not follow a grouping criteria by date. Allowed values are: "HOURLY", "DAILY", "MONTHLY", "NONE" :param int limit: (optional) The maximum number aggregatedComputedUsages of items to return within the Subscription "List" call, this counts the overall count across all items Example: `500` :param str page: (optional) The value of the `opc-next-page` response header from the previous "List" call. :param str opc_request_id: (optional) Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID. :param str x_one_origin_region: (optional) The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc. :param obj retry_strategy: (optional) A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level. This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it. 
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`. :param bool allow_control_chars: (optional) allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object. By default, the response will not allow control characters in strings :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.osub_usage.models.ComputedUsageAggregatedSummary` :rtype: :class:`~oci.response.Response` :example: Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/list_computed_usage_aggregateds.py.html>`__ to see an example of how to use list_computed_usage_aggregateds API.<|endoftext|>
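The record above validates keyword arguments before building the request: unknown kwargs are rejected, and enum-like parameters such as `grouping` are checked against an allowed-values list. A minimal, dependency-free sketch of that pattern (the function name `list_usages` and its parameter set are illustrative, not part of the OCI SDK):

```python
# Sketch of the kwargs-validation pattern used by the generated OCI client
# methods: reject unknown keyword arguments, then restrict enum-like
# parameters to a fixed allowed set.

def list_usages(**kwargs):
    expected_kwargs = ['grouping', 'limit', 'page']
    extra_kwargs = [k for k in kwargs if k not in expected_kwargs]
    if extra_kwargs:
        raise ValueError(
            'list_usages got unknown kwargs: {!r}'.format(extra_kwargs))
    if 'grouping' in kwargs:
        grouping_allowed_values = ['HOURLY', 'DAILY', 'MONTHLY', 'NONE']
        if kwargs['grouping'] not in grouping_allowed_values:
            raise ValueError(
                'Invalid value for `grouping`, must be one of {0}'.format(
                    grouping_allowed_values))
    return kwargs
```

This mirrors the `expected_kwargs` / `extra_kwargs` / `grouping_allowed_values` checks in `list_computed_usage_aggregateds` without the HTTP machinery.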
0e4f5d6c8dd50273a5fc06cdc5ffe7659302900c1c75b548991ad446e9b37bdf
def list_computed_usages(self, compartment_id, subscription_id, time_from, time_to, **kwargs): '\n This is a collection API which returns a list of Computed Usages for given filters.\n\n\n :param str compartment_id: (required)\n The OCID of the root compartment.\n\n :param str subscription_id: (required)\n Subscription Id is an identifier associated to the service used for filter the Computed Usage in SPM.\n\n :param datetime time_from: (required)\n Initial date to filter Computed Usage data in SPM. In the case of non aggregated data the time period between of fromDate and toDate , expressed in RFC 3339 timestamp format.\n\n :param datetime time_to: (required)\n Final date to filter Computed Usage data in SPM, expressed in RFC 3339 timestamp format.\n\n :param str parent_product: (optional)\n Product part number for subscribed service line, called parent product.\n\n :param str computed_product: (optional)\n Product part number for Computed Usage .\n\n :param int limit: (optional)\n The maximum number of items to return in a paginated "List" call.\n\n Example: `500`\n\n :param str page: (optional)\n The value of the `opc-next-page` response header from the previous "List" call.\n\n :param str sort_order: (optional)\n The sort order to use, either ascending (`ASC`) or descending (`DESC`).\n\n Allowed values are: "ASC", "DESC"\n\n :param str sort_by: (optional)\n The field to sort by. You can provide one sort order (`sortOrder`).\n\n Allowed values are: "timeCreated", "timeOfArrival", "timeMeteredOn"\n\n :param str opc_request_id: (optional)\n Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.\n\n :param str x_one_origin_region: (optional)\n The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc.\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to this specific operation/call. 
This will override any retry strategy set at the client-level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.\n The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.\n\n :param bool allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.\n By default, the response will not allow control characters in strings\n\n :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.osub_usage.models.ComputedUsageSummary`\n :rtype: :class:`~oci.response.Response`\n\n :example:\n Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/list_computed_usages.py.html>`__ to see an example of how to use list_computed_usages API.\n ' resource_path = '/computedUsages' method = 'GET' expected_kwargs = ['allow_control_chars', 'retry_strategy', 'parent_product', 'computed_product', 'limit', 'page', 'sort_order', 'sort_by', 'opc_request_id', 'x_one_origin_region'] extra_kwargs = [_key for _key in six.iterkeys(kwargs) if (_key not in expected_kwargs)] if extra_kwargs: raise ValueError('list_computed_usages got unknown kwargs: {!r}'.format(extra_kwargs)) if ('sort_order' in kwargs): sort_order_allowed_values = ['ASC', 'DESC'] if (kwargs['sort_order'] not in sort_order_allowed_values): raise ValueError('Invalid value for `sort_order`, must be one of {0}'.format(sort_order_allowed_values)) if ('sort_by' in kwargs): sort_by_allowed_values = ['timeCreated', 'timeOfArrival', 'timeMeteredOn'] if 
(kwargs['sort_by'] not in sort_by_allowed_values): raise ValueError('Invalid value for `sort_by`, must be one of {0}'.format(sort_by_allowed_values)) query_params = {'compartmentId': compartment_id, 'subscriptionId': subscription_id, 'timeFrom': time_from, 'timeTo': time_to, 'parentProduct': kwargs.get('parent_product', missing), 'computedProduct': kwargs.get('computed_product', missing), 'limit': kwargs.get('limit', missing), 'page': kwargs.get('page', missing), 'sortOrder': kwargs.get('sort_order', missing), 'sortBy': kwargs.get('sort_by', missing)} query_params = {k: v for (k, v) in six.iteritems(query_params) if ((v is not missing) and (v is not None))} header_params = {'accept': 'application/json', 'content-type': 'application/json', 'opc-request-id': kwargs.get('opc_request_id', missing), 'x-one-origin-region': kwargs.get('x_one_origin_region', missing)} header_params = {k: v for (k, v) in six.iteritems(header_params) if ((v is not missing) and (v is not None))} retry_strategy = self.base_client.get_preferred_retry_strategy(operation_retry_strategy=kwargs.get('retry_strategy'), client_retry_strategy=self.retry_strategy) if retry_strategy: if (not isinstance(retry_strategy, retry.NoneRetryStrategy)): self.base_client.add_opc_client_retries_header(header_params) retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback) return retry_strategy.make_retrying_call(self.base_client.call_api, resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageSummary]') else: return self.base_client.call_api(resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageSummary]')
This is a collection API which returns a list of Computed Usages for given filters. :param str compartment_id: (required) The OCID of the root compartment. :param str subscription_id: (required) Subscription Id is an identifier associated to the service used for filter the Computed Usage in SPM. :param datetime time_from: (required) Initial date to filter Computed Usage data in SPM. In the case of non aggregated data the time period between of fromDate and toDate , expressed in RFC 3339 timestamp format. :param datetime time_to: (required) Final date to filter Computed Usage data in SPM, expressed in RFC 3339 timestamp format. :param str parent_product: (optional) Product part number for subscribed service line, called parent product. :param str computed_product: (optional) Product part number for Computed Usage . :param int limit: (optional) The maximum number of items to return in a paginated "List" call. Example: `500` :param str page: (optional) The value of the `opc-next-page` response header from the previous "List" call. :param str sort_order: (optional) The sort order to use, either ascending (`ASC`) or descending (`DESC`). Allowed values are: "ASC", "DESC" :param str sort_by: (optional) The field to sort by. You can provide one sort order (`sortOrder`). Allowed values are: "timeCreated", "timeOfArrival", "timeMeteredOn" :param str opc_request_id: (optional) Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID. :param str x_one_origin_region: (optional) The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc. :param obj retry_strategy: (optional) A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level. This should be one of the strategies available in the :py:mod:`~oci.retry` module. 
This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it. The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`. :param bool allow_control_chars: (optional) allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object. By default, the response will not allow control characters in strings :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.osub_usage.models.ComputedUsageSummary` :rtype: :class:`~oci.response.Response` :example: Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/list_computed_usages.py.html>`__ to see an example of how to use list_computed_usages API.
src/oci/osub_usage/computed_usage_client.py
list_computed_usages
ionaryu/oci-python-sdk
0
python
def list_computed_usages(self, compartment_id, subscription_id, time_from, time_to, **kwargs): '\n This is a collection API which returns a list of Computed Usages for given filters.\n\n\n :param str compartment_id: (required)\n The OCID of the root compartment.\n\n :param str subscription_id: (required)\n Subscription Id is an identifier associated to the service used for filter the Computed Usage in SPM.\n\n :param datetime time_from: (required)\n Initial date to filter Computed Usage data in SPM. In the case of non aggregated data the time period between of fromDate and toDate , expressed in RFC 3339 timestamp format.\n\n :param datetime time_to: (required)\n Final date to filter Computed Usage data in SPM, expressed in RFC 3339 timestamp format.\n\n :param str parent_product: (optional)\n Product part number for subscribed service line, called parent product.\n\n :param str computed_product: (optional)\n Product part number for Computed Usage .\n\n :param int limit: (optional)\n The maximum number of items to return in a paginated "List" call.\n\n Example: `500`\n\n :param str page: (optional)\n The value of the `opc-next-page` response header from the previous "List" call.\n\n :param str sort_order: (optional)\n The sort order to use, either ascending (`ASC`) or descending (`DESC`).\n\n Allowed values are: "ASC", "DESC"\n\n :param str sort_by: (optional)\n The field to sort by. You can provide one sort order (`sortOrder`).\n\n Allowed values are: "timeCreated", "timeOfArrival", "timeMeteredOn"\n\n :param str opc_request_id: (optional)\n Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.\n\n :param str x_one_origin_region: (optional)\n The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc.\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to this specific operation/call. 
This will override any retry strategy set at the client-level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.\n The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.\n\n :param bool allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.\n By default, the response will not allow control characters in strings\n\n :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.osub_usage.models.ComputedUsageSummary`\n :rtype: :class:`~oci.response.Response`\n\n :example:\n Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/list_computed_usages.py.html>`__ to see an example of how to use list_computed_usages API.\n ' resource_path = '/computedUsages' method = 'GET' expected_kwargs = ['allow_control_chars', 'retry_strategy', 'parent_product', 'computed_product', 'limit', 'page', 'sort_order', 'sort_by', 'opc_request_id', 'x_one_origin_region'] extra_kwargs = [_key for _key in six.iterkeys(kwargs) if (_key not in expected_kwargs)] if extra_kwargs: raise ValueError('list_computed_usages got unknown kwargs: {!r}'.format(extra_kwargs)) if ('sort_order' in kwargs): sort_order_allowed_values = ['ASC', 'DESC'] if (kwargs['sort_order'] not in sort_order_allowed_values): raise ValueError('Invalid value for `sort_order`, must be one of {0}'.format(sort_order_allowed_values)) if ('sort_by' in kwargs): sort_by_allowed_values = ['timeCreated', 'timeOfArrival', 'timeMeteredOn'] if 
(kwargs['sort_by'] not in sort_by_allowed_values): raise ValueError('Invalid value for `sort_by`, must be one of {0}'.format(sort_by_allowed_values)) query_params = {'compartmentId': compartment_id, 'subscriptionId': subscription_id, 'timeFrom': time_from, 'timeTo': time_to, 'parentProduct': kwargs.get('parent_product', missing), 'computedProduct': kwargs.get('computed_product', missing), 'limit': kwargs.get('limit', missing), 'page': kwargs.get('page', missing), 'sortOrder': kwargs.get('sort_order', missing), 'sortBy': kwargs.get('sort_by', missing)} query_params = {k: v for (k, v) in six.iteritems(query_params) if ((v is not missing) and (v is not None))} header_params = {'accept': 'application/json', 'content-type': 'application/json', 'opc-request-id': kwargs.get('opc_request_id', missing), 'x-one-origin-region': kwargs.get('x_one_origin_region', missing)} header_params = {k: v for (k, v) in six.iteritems(header_params) if ((v is not missing) and (v is not None))} retry_strategy = self.base_client.get_preferred_retry_strategy(operation_retry_strategy=kwargs.get('retry_strategy'), client_retry_strategy=self.retry_strategy) if retry_strategy: if (not isinstance(retry_strategy, retry.NoneRetryStrategy)): self.base_client.add_opc_client_retries_header(header_params) retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback) return retry_strategy.make_retrying_call(self.base_client.call_api, resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageSummary]') else: return self.base_client.call_api(resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageSummary]')
def list_computed_usages(self, compartment_id, subscription_id, time_from, time_to, **kwargs): '\n This is a collection API which returns a list of Computed Usages for given filters.\n\n\n :param str compartment_id: (required)\n The OCID of the root compartment.\n\n :param str subscription_id: (required)\n Subscription Id is an identifier associated to the service used for filter the Computed Usage in SPM.\n\n :param datetime time_from: (required)\n Initial date to filter Computed Usage data in SPM. In the case of non aggregated data the time period between of fromDate and toDate , expressed in RFC 3339 timestamp format.\n\n :param datetime time_to: (required)\n Final date to filter Computed Usage data in SPM, expressed in RFC 3339 timestamp format.\n\n :param str parent_product: (optional)\n Product part number for subscribed service line, called parent product.\n\n :param str computed_product: (optional)\n Product part number for Computed Usage .\n\n :param int limit: (optional)\n The maximum number of items to return in a paginated "List" call.\n\n Example: `500`\n\n :param str page: (optional)\n The value of the `opc-next-page` response header from the previous "List" call.\n\n :param str sort_order: (optional)\n The sort order to use, either ascending (`ASC`) or descending (`DESC`).\n\n Allowed values are: "ASC", "DESC"\n\n :param str sort_by: (optional)\n The field to sort by. You can provide one sort order (`sortOrder`).\n\n Allowed values are: "timeCreated", "timeOfArrival", "timeMeteredOn"\n\n :param str opc_request_id: (optional)\n Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.\n\n :param str x_one_origin_region: (optional)\n The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc.\n\n :param obj retry_strategy: (optional)\n A retry strategy to apply to this specific operation/call. 
This will override any retry strategy set at the client-level.\n\n This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it.\n The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__.\n\n To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`.\n\n :param bool allow_control_chars: (optional)\n allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object.\n By default, the response will not allow control characters in strings\n\n :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.osub_usage.models.ComputedUsageSummary`\n :rtype: :class:`~oci.response.Response`\n\n :example:\n Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/list_computed_usages.py.html>`__ to see an example of how to use list_computed_usages API.\n ' resource_path = '/computedUsages' method = 'GET' expected_kwargs = ['allow_control_chars', 'retry_strategy', 'parent_product', 'computed_product', 'limit', 'page', 'sort_order', 'sort_by', 'opc_request_id', 'x_one_origin_region'] extra_kwargs = [_key for _key in six.iterkeys(kwargs) if (_key not in expected_kwargs)] if extra_kwargs: raise ValueError('list_computed_usages got unknown kwargs: {!r}'.format(extra_kwargs)) if ('sort_order' in kwargs): sort_order_allowed_values = ['ASC', 'DESC'] if (kwargs['sort_order'] not in sort_order_allowed_values): raise ValueError('Invalid value for `sort_order`, must be one of {0}'.format(sort_order_allowed_values)) if ('sort_by' in kwargs): sort_by_allowed_values = ['timeCreated', 'timeOfArrival', 'timeMeteredOn'] if 
(kwargs['sort_by'] not in sort_by_allowed_values): raise ValueError('Invalid value for `sort_by`, must be one of {0}'.format(sort_by_allowed_values)) query_params = {'compartmentId': compartment_id, 'subscriptionId': subscription_id, 'timeFrom': time_from, 'timeTo': time_to, 'parentProduct': kwargs.get('parent_product', missing), 'computedProduct': kwargs.get('computed_product', missing), 'limit': kwargs.get('limit', missing), 'page': kwargs.get('page', missing), 'sortOrder': kwargs.get('sort_order', missing), 'sortBy': kwargs.get('sort_by', missing)} query_params = {k: v for (k, v) in six.iteritems(query_params) if ((v is not missing) and (v is not None))} header_params = {'accept': 'application/json', 'content-type': 'application/json', 'opc-request-id': kwargs.get('opc_request_id', missing), 'x-one-origin-region': kwargs.get('x_one_origin_region', missing)} header_params = {k: v for (k, v) in six.iteritems(header_params) if ((v is not missing) and (v is not None))} retry_strategy = self.base_client.get_preferred_retry_strategy(operation_retry_strategy=kwargs.get('retry_strategy'), client_retry_strategy=self.retry_strategy) if retry_strategy: if (not isinstance(retry_strategy, retry.NoneRetryStrategy)): self.base_client.add_opc_client_retries_header(header_params) retry_strategy.add_circuit_breaker_callback(self.circuit_breaker_callback) return retry_strategy.make_retrying_call(self.base_client.call_api, resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageSummary]') else: return self.base_client.call_api(resource_path=resource_path, method=method, query_params=query_params, header_params=header_params, response_type='list[ComputedUsageSummary]')<|docstring|>This is a collection API which returns a list of Computed Usages for given filters. :param str compartment_id: (required) The OCID of the root compartment. 
:param str subscription_id: (required) Subscription Id is an identifier associated to the service used for filter the Computed Usage in SPM. :param datetime time_from: (required) Initial date to filter Computed Usage data in SPM. In the case of non aggregated data the time period between of fromDate and toDate , expressed in RFC 3339 timestamp format. :param datetime time_to: (required) Final date to filter Computed Usage data in SPM, expressed in RFC 3339 timestamp format. :param str parent_product: (optional) Product part number for subscribed service line, called parent product. :param str computed_product: (optional) Product part number for Computed Usage . :param int limit: (optional) The maximum number of items to return in a paginated "List" call. Example: `500` :param str page: (optional) The value of the `opc-next-page` response header from the previous "List" call. :param str sort_order: (optional) The sort order to use, either ascending (`ASC`) or descending (`DESC`). Allowed values are: "ASC", "DESC" :param str sort_by: (optional) The field to sort by. You can provide one sort order (`sortOrder`). Allowed values are: "timeCreated", "timeOfArrival", "timeMeteredOn" :param str opc_request_id: (optional) Unique Oracle-assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID. :param str x_one_origin_region: (optional) The OCI home region name in case home region is not us-ashburn-1 (IAD), e.g. ap-mumbai-1, us-phoenix-1 etc. :param obj retry_strategy: (optional) A retry strategy to apply to this specific operation/call. This will override any retry strategy set at the client-level. This should be one of the strategies available in the :py:mod:`~oci.retry` module. This operation will not retry by default, users can also use the convenient :py:data:`~oci.retry.DEFAULT_RETRY_STRATEGY` provided by the SDK to enable retries for it. 
The specifics of the default retry strategy are described `here <https://docs.oracle.com/en-us/iaas/tools/python/latest/sdk_behaviors/retries.html>`__. To have this operation explicitly not perform any retries, pass an instance of :py:class:`~oci.retry.NoneRetryStrategy`. :param bool allow_control_chars: (optional) allow_control_chars is a boolean to indicate whether or not this request should allow control characters in the response object. By default, the response will not allow control characters in strings :return: A :class:`~oci.response.Response` object with data of type list of :class:`~oci.osub_usage.models.ComputedUsageSummary` :rtype: :class:`~oci.response.Response` :example: Click `here <https://docs.cloud.oracle.com/en-us/iaas/tools/python-sdk-examples/latest/osubusage/list_computed_usages.py.html>`__ to see an example of how to use list_computed_usages API.<|endoftext|>
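Both client methods above build `query_params` with optional values defaulting to a `missing` sentinel, then strip sentinel and `None` entries before calling the API. A small stand-in for that filtering step (here `missing` is a plain object standing in for the SDK's sentinel):

```python
# Sketch of the query-parameter filtering used by list_computed_usages:
# optional parameters default to a `missing` sentinel and are dropped,
# along with explicit None values, before the request is assembled.

missing = object()  # stand-in for the SDK's Sentinel instance

def build_query_params(**params):
    return {k: v for k, v in params.items()
            if v is not missing and v is not None}
```

Filtering on identity (`is not missing`) rather than equality keeps falsy-but-valid values such as `0` or `''` in the request.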
25aae24aa6de815c5cdb2e47f66edb0ecbef104cf4b5f0ed49b669dbcdc216a9
def __init__(self, batch_size=128, num_workers=4, augment=[]): 'Loads the CIFAR10 train and test sets.\n \n Parameters\n ----------\n batch_size: int\n num_workers: int\n augment: list of torchvision.transforms objects\n ' train_transform = Compose((augment + self.normalize_transforms)) self.train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=train_transform) self.train_loader = DataLoader(self.train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers, pin_memory=True) test_transform = Compose(self.normalize_transforms) self.test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=test_transform) self.test_loader = DataLoader(self.test_dataset, batch_size=batch_size, shuffle=False, num_workers=num_workers, pin_memory=True) print('Train examples:', data_loader_sample_count(self.train_loader)) print(' Test examples:', data_loader_sample_count(self.test_loader))
Loads the CIFAR10 train and test sets. Parameters ---------- batch_size: int num_workers: int augment: list of torchvision.transforms objects
datasets/CIFAR10.py
__init__
hollance/Ignition
12
python
def __init__(self, batch_size=128, num_workers=4, augment=[]): 'Loads the CIFAR10 train and test sets.\n \n Parameters\n ----------\n batch_size: int\n num_workers: int\n augment: list of torchvision.transforms objects\n ' train_transform = Compose((augment + self.normalize_transforms)) self.train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=train_transform) self.train_loader = DataLoader(self.train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers, pin_memory=True) test_transform = Compose(self.normalize_transforms) self.test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=test_transform) self.test_loader = DataLoader(self.test_dataset, batch_size=batch_size, shuffle=False, num_workers=num_workers, pin_memory=True) print('Train examples:', data_loader_sample_count(self.train_loader)) print(' Test examples:', data_loader_sample_count(self.test_loader))
def __init__(self, batch_size=128, num_workers=4, augment=[]): 'Loads the CIFAR10 train and test sets.\n \n Parameters\n ----------\n batch_size: int\n num_workers: int\n augment: list of torchvision.transforms objects\n ' train_transform = Compose((augment + self.normalize_transforms)) self.train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=train_transform) self.train_loader = DataLoader(self.train_dataset, batch_size=batch_size, shuffle=True, num_workers=num_workers, pin_memory=True) test_transform = Compose(self.normalize_transforms) self.test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=test_transform) self.test_loader = DataLoader(self.test_dataset, batch_size=batch_size, shuffle=False, num_workers=num_workers, pin_memory=True) print('Train examples:', data_loader_sample_count(self.train_loader)) print(' Test examples:', data_loader_sample_count(self.test_loader))<|docstring|>Loads the CIFAR10 train and test sets. Parameters ---------- batch_size: int num_workers: int augment: list of torchvision.transforms objects<|endoftext|>
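The CIFAR10 loader above builds its train transform as `Compose(augment + self.normalize_transforms)`, i.e. user-supplied augmentations run first, followed by normalization. A dependency-free stand-in for that composition (this `Compose` is illustrative, not torchvision's class, though it follows the same call-in-order contract):

```python
# Minimal transform composition: apply each transform in sequence,
# feeding the output of one into the next.

class Compose:
    def __init__(self, transforms):
        self.transforms = list(transforms)

    def __call__(self, x):
        for t in self.transforms:
            x = t(x)
        return x
```

For example, `Compose([f, g])(x)` computes `g(f(x))`, which is why augmentation transforms are concatenated ahead of the normalization transforms in the loader above.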
b06b9a7091716fdac272da3ed58e3536ab8299f9ee922ac5ee398621ca772f1f
def euclidean_norm(vct): 'Calculate the Euclidean (L-2) norm of a vector.' return (np.sum((vct ** 2)) ** 0.5)
Calculate the Euclidean (L-2) norm of a vector.
mici/solvers.py
euclidean_norm
matt-graham/mici
137
python
def euclidean_norm(vct): return (np.sum((vct ** 2)) ** 0.5)
def euclidean_norm(vct): return (np.sum((vct ** 2)) ** 0.5)<|docstring|>Calculate the Euclidean (L-2) norm of a vector.<|endoftext|>
c8b209c24be2bf718dc1960adb29ceab8954bbf1a444aa85732c5b30dc81469d
def maximum_norm(vct): 'Calculate the maximum (L-infinity) norm of a vector.' return np.max(abs(vct))
Calculate the maximum (L-infinity) norm of a vector.
mici/solvers.py
maximum_norm
matt-graham/mici
137
python
def maximum_norm(vct): return np.max(abs(vct))
def maximum_norm(vct): return np.max(abs(vct))<|docstring|>Calculate the maximum (L-infinity) norm of a vector.<|endoftext|>
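The two norm records above are one-line formulas; as a quick standalone sanity check (not part of the mici source), evaluating both on the same vector shows how they differ:

```python
import numpy as np

v = np.array([3.0, -4.0])

# Euclidean (L-2) norm: square root of the sum of squares, as in euclidean_norm
l2 = (np.sum(v ** 2)) ** 0.5   # 5.0

# Maximum (L-infinity) norm: largest absolute entry, as in maximum_norm
linf = np.max(np.abs(v))       # 4.0
```

The maximum norm is the cheaper of the two and is the default convergence norm used by the solver records that follow.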
e2f44115b1b1d4132f2938215c9b0fe15df25a4348858641efef98c92551c249
def solve_fixed_point_direct(func, x0, convergence_tol=1e-09, divergence_tol=10000000000.0, max_iters=100, norm=maximum_norm): 'Solve fixed point equation `func(x) = x` using direct iteration.\n\n Args:\n func (Callable[[array], array]): Function to find fixed point of.\n x0 (array): Initial state (function argument).\n convergence_tol (float): Convergence tolerance - solver successfully\n terminates when `norm(func(x) - x) < convergence_tol`.\n divergence_tol (float): Divergence tolerance - solver aborts if\n `norm(func(x) - x) > divergence_tol` on any iteration.\n max_iters (int): Maximum number of iterations before raising exception.\n norm (Callable[[array], float]): Norm to use to assess convergence.\n\n Returns:\n Solution to fixed point equation with\n `norm(func(x) - x) < convergence_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n ' for i in range(max_iters): try: x = func(x0) error = norm((x - x0)) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Fixed point iteration diverged on iteration {i}.Last error={error:.1e}.') if (error < convergence_tol): return x x0 = x except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of fixed point solver ({e}).') raise ConvergenceError(f'Fixed point iteration did not converge. Last error={error:.1e}.')
Solve fixed point equation `func(x) = x` using direct iteration. Args: func (Callable[[array], array]): Function to find fixed point of. x0 (array): Initial state (function argument). convergence_tol (float): Convergence tolerance - solver successfully terminates when `norm(func(x) - x) < convergence_tol`. divergence_tol (float): Divergence tolerance - solver aborts if `norm(func(x) - x) > divergence_tol` on any iteration. max_iters (int): Maximum number of iterations before raising exception. norm (Callable[[array], float]): Norm to use to assess convergence. Returns: Solution to fixed point equation with `norm(func(x) - x) < convergence_tol`. Raises: `mici.errors.ConvergenceError` if solver does not converge within `max_iters` iterations, diverges or encounters a `ValueError` during the iteration.
mici/solvers.py
solve_fixed_point_direct
matt-graham/mici
137
python
def solve_fixed_point_direct(func, x0, convergence_tol=1e-09, divergence_tol=10000000000.0, max_iters=100, norm=maximum_norm): 'Solve fixed point equation `func(x) = x` using direct iteration.\n\n Args:\n func (Callable[[array], array]): Function to find fixed point of.\n x0 (array): Initial state (function argument).\n convergence_tol (float): Convergence tolerance - solver successfully\n terminates when `norm(func(x) - x) < convergence_tol`.\n divergence_tol (float): Divergence tolerance - solver aborts if\n `norm(func(x) - x) > divergence_tol` on any iteration.\n max_iters (int): Maximum number of iterations before raising exception.\n norm (Callable[[array], float]): Norm to use to assess convergence.\n\n Returns:\n Solution to fixed point equation with\n `norm(func(x) - x) < convergence_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n ' for i in range(max_iters): try: x = func(x0) error = norm((x - x0)) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Fixed point iteration diverged on iteration {i}.Last error={error:.1e}.') if (error < convergence_tol): return x x0 = x except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of fixed point solver ({e}).') raise ConvergenceError(f'Fixed point iteration did not converge. Last error={error:.1e}.')
def solve_fixed_point_direct(func, x0, convergence_tol=1e-09, divergence_tol=10000000000.0, max_iters=100, norm=maximum_norm): 'Solve fixed point equation `func(x) = x` using direct iteration.\n\n Args:\n func (Callable[[array], array]): Function to find fixed point of.\n x0 (array): Initial state (function argument).\n convergence_tol (float): Convergence tolerance - solver successfully\n terminates when `norm(func(x) - x) < convergence_tol`.\n divergence_tol (float): Divergence tolerance - solver aborts if\n `norm(func(x) - x) > divergence_tol` on any iteration.\n max_iters (int): Maximum number of iterations before raising exception.\n norm (Callable[[array], float]): Norm to use to assess convergence.\n\n Returns:\n Solution to fixed point equation with\n `norm(func(x) - x) < convergence_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n ' for i in range(max_iters): try: x = func(x0) error = norm((x - x0)) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Fixed point iteration diverged on iteration {i}.Last error={error:.1e}.') if (error < convergence_tol): return x x0 = x except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of fixed point solver ({e}).') raise ConvergenceError(f'Fixed point iteration did not converge. Last error={error:.1e}.')<|docstring|>Solve fixed point equation `func(x) = x` using direct iteration. Args: func (Callable[[array], array]): Function to find fixed point of. x0 (array): Initial state (function argument). convergence_tol (float): Convergence tolerance - solver successfully terminates when `norm(func(x) - x) < convergence_tol`. divergence_tol (float): Divergence tolerance - solver aborts if `norm(func(x) - x) > divergence_tol` on any iteration. max_iters (int): Maximum number of iterations before raising exception. 
norm (Callable[[array], float]): Norm to use to assess convergence. Returns: Solution to fixed point equation with `norm(func(x) - x) < convergence_tol`. Raises: `mici.errors.ConvergenceError` if solver does not converge within `max_iters` iterations, diverges or encounters a `ValueError` during the iteration.<|endoftext|>
20542a8a2643bc75c7e7116496f1b97abed52ef3985bfdf4e1ca05de233fb652
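The `solve_fixed_point_direct` record above repeatedly applies `func` until successive iterates agree. A minimal standalone sketch of the same scheme (scalar case, without mici's divergence checks or error types), applied to the classic fixed point of `cos`:

```python
import math

def fixed_point_direct(func, x0, tol=1e-9, max_iters=100):
    # Repeatedly apply func until successive iterates agree to within tol.
    x = x0
    for _ in range(max_iters):
        x_new = func(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed point iteration did not converge")

# cos(x) = x has a unique fixed point near 0.739 (the Dottie number)
root = fixed_point_direct(math.cos, 1.0)
```

Direct iteration converges linearly here (contraction factor |cos'(x*)| ≈ 0.67), so it needs roughly fifty iterations to reach the 1e-9 tolerance.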
def solve_fixed_point_steffensen(func, x0, convergence_tol=1e-09, divergence_tol=10000000000.0, max_iters=100, norm=maximum_norm): "Solve fixed point equation `func(x) = x` using Steffensen's method.\n\n Steffennsen's method [1] achieves quadratic convergence but at the cost of\n two function evaluations per iteration so for functions where convergence\n is achieved in a small number of iterations, direct iteration may be\n cheaper.\n\n [1] : https://en.wikipedia.org/wiki/Steffensen%27s_method\n\n Args:\n func (Callable[[array], array]): Function to find fixed point of.\n x0 (array): Initial state (function argument).\n convergence_tol (float): Convergence tolerance - solver successfully\n terminates when `norm(func(x) - x) < convergence_tol`.\n divergence_tol (float): Divergence tolerance - solver aborts if\n `norm(func(x) - x) > divergence_tol` on any iteration.\n max_iters (int): Maximum number of iterations before raising exception.\n norm (Callable[[array], float]): Norm to use to assess convergence.\n\n Returns:\n Solution to fixed point equation with\n `norm(func(x) - x) < convergence_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n " for i in range(max_iters): try: x1 = func(x0) x2 = func(x1) denom = ((x2 - (2 * x1)) + x0) denom[(abs(denom) == 0.0)] = np.finfo(x0.dtype).eps x = (x0 - (((x1 - x0) ** 2) / denom)) error = norm((x - x0)) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Fixed point iteration diverged on iteration {i}.Last error={error:.1e}.') if (error < convergence_tol): return x x0 = x except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of fixed point solver ({e}).') raise ConvergenceError(f'Fixed point iteration did not converge. Last error={error:.1e}.')
Solve fixed point equation `func(x) = x` using Steffensen's method. Steffensen's method [1] achieves quadratic convergence but at the cost of two function evaluations per iteration, so for functions where convergence is achieved in a small number of iterations, direct iteration may be cheaper. [1] : https://en.wikipedia.org/wiki/Steffensen%27s_method Args: func (Callable[[array], array]): Function to find fixed point of. x0 (array): Initial state (function argument). convergence_tol (float): Convergence tolerance - solver successfully terminates when `norm(func(x) - x) < convergence_tol`. divergence_tol (float): Divergence tolerance - solver aborts if `norm(func(x) - x) > divergence_tol` on any iteration. max_iters (int): Maximum number of iterations before raising exception. norm (Callable[[array], float]): Norm to use to assess convergence. Returns: Solution to fixed point equation with `norm(func(x) - x) < convergence_tol`. Raises: `mici.errors.ConvergenceError` if solver does not converge within `max_iters` iterations, diverges or encounters a `ValueError` during the iteration.
mici/solvers.py
solve_fixed_point_steffensen
matt-graham/mici
137
python
def solve_fixed_point_steffensen(func, x0, convergence_tol=1e-09, divergence_tol=10000000000.0, max_iters=100, norm=maximum_norm): "Solve fixed point equation `func(x) = x` using Steffensen's method.\n\n Steffennsen's method [1] achieves quadratic convergence but at the cost of\n two function evaluations per iteration so for functions where convergence\n is achieved in a small number of iterations, direct iteration may be\n cheaper.\n\n [1] : https://en.wikipedia.org/wiki/Steffensen%27s_method\n\n Args:\n func (Callable[[array], array]): Function to find fixed point of.\n x0 (array): Initial state (function argument).\n convergence_tol (float): Convergence tolerance - solver successfully\n terminates when `norm(func(x) - x) < convergence_tol`.\n divergence_tol (float): Divergence tolerance - solver aborts if\n `norm(func(x) - x) > divergence_tol` on any iteration.\n max_iters (int): Maximum number of iterations before raising exception.\n norm (Callable[[array], float]): Norm to use to assess convergence.\n\n Returns:\n Solution to fixed point equation with\n `norm(func(x) - x) < convergence_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n " for i in range(max_iters): try: x1 = func(x0) x2 = func(x1) denom = ((x2 - (2 * x1)) + x0) denom[(abs(denom) == 0.0)] = np.finfo(x0.dtype).eps x = (x0 - (((x1 - x0) ** 2) / denom)) error = norm((x - x0)) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Fixed point iteration diverged on iteration {i}.Last error={error:.1e}.') if (error < convergence_tol): return x x0 = x except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of fixed point solver ({e}).') raise ConvergenceError(f'Fixed point iteration did not converge. Last error={error:.1e}.')
def solve_fixed_point_steffensen(func, x0, convergence_tol=1e-09, divergence_tol=10000000000.0, max_iters=100, norm=maximum_norm): "Solve fixed point equation `func(x) = x` using Steffensen's method.\n\n Steffennsen's method [1] achieves quadratic convergence but at the cost of\n two function evaluations per iteration so for functions where convergence\n is achieved in a small number of iterations, direct iteration may be\n cheaper.\n\n [1] : https://en.wikipedia.org/wiki/Steffensen%27s_method\n\n Args:\n func (Callable[[array], array]): Function to find fixed point of.\n x0 (array): Initial state (function argument).\n convergence_tol (float): Convergence tolerance - solver successfully\n terminates when `norm(func(x) - x) < convergence_tol`.\n divergence_tol (float): Divergence tolerance - solver aborts if\n `norm(func(x) - x) > divergence_tol` on any iteration.\n max_iters (int): Maximum number of iterations before raising exception.\n norm (Callable[[array], float]): Norm to use to assess convergence.\n\n Returns:\n Solution to fixed point equation with\n `norm(func(x) - x) < convergence_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n " for i in range(max_iters): try: x1 = func(x0) x2 = func(x1) denom = ((x2 - (2 * x1)) + x0) denom[(abs(denom) == 0.0)] = np.finfo(x0.dtype).eps x = (x0 - (((x1 - x0) ** 2) / denom)) error = norm((x - x0)) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Fixed point iteration diverged on iteration {i}.Last error={error:.1e}.') if (error < convergence_tol): return x x0 = x except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of fixed point solver ({e}).') raise ConvergenceError(f'Fixed point iteration did not converge. Last error={error:.1e}.')<|docstring|>Solve fixed point equation `func(x) = x` using Steffensen's method. 
Steffennsen's method [1] achieves quadratic convergence but at the cost of two function evaluations per iteration so for functions where convergence is achieved in a small number of iterations, direct iteration may be cheaper. [1] : https://en.wikipedia.org/wiki/Steffensen%27s_method Args: func (Callable[[array], array]): Function to find fixed point of. x0 (array): Initial state (function argument). convergence_tol (float): Convergence tolerance - solver successfully terminates when `norm(func(x) - x) < convergence_tol`. divergence_tol (float): Divergence tolerance - solver aborts if `norm(func(x) - x) > divergence_tol` on any iteration. max_iters (int): Maximum number of iterations before raising exception. norm (Callable[[array], float]): Norm to use to assess convergence. Returns: Solution to fixed point equation with `norm(func(x) - x) < convergence_tol`. Raises: `mici.errors.ConvergenceError` if solver does not converge within `max_iters` iterations, diverges or encounters a `ValueError` during the iteration.<|endoftext|>
93d6a92b582f0c6d5d84d09d4f002c29c977c1d8270a6a43879c8c1d88ea298c
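The Steffensen record above uses the update `x <- x0 - (x1 - x0)^2 / (x2 - 2*x1 + x0)` with `x1 = f(x0)`, `x2 = f(x1)`. A standalone scalar sketch of that acceleration (not mici's array implementation, which guards the denominator with machine epsilon instead of the early return used here):

```python
import math

def fixed_point_steffensen(func, x0, tol=1e-9, max_iters=100):
    # Steffensen acceleration: x <- x - (x1 - x)^2 / (x2 - 2*x1 + x)
    x = x0
    for _ in range(max_iters):
        x1 = func(x)
        x2 = func(x1)
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:          # numerically at the fixed point already
            return x1
        x_new = x - (x1 - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Steffensen iteration did not converge")

root = fixed_point_steffensen(math.cos, 1.0)
```

The quadratic convergence means this reaches the same fixed point of `cos` in a handful of iterations, versus ~50 for direct iteration, at the cost of two function evaluations per step.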
def solve_projection_onto_manifold_quasi_newton(state, state_prev, dt, system, constraint_tol=1e-09, position_tol=1e-08, divergence_tol=10000000000.0, max_iters=50, norm=maximum_norm): 'Solve constraint equation using quasi-Newton method.\n\n Uses a quasi-Newton iteration to solve the non-linear system of equations\n in `λ`\n\n system.constr(\n state.pos + dh2_flow_pos_dmom @\n system.jacob_constr(state_prev).T @ λ) == 0\n\n where `dh2_flow_pos_dmom = system.dh2_flow_dmom(dt)[0]` is the derivative\n of the action of the (linear) `system.h2_flow` map on the state momentum\n component with respect to the position component, `state` is a post\n (unconstrained) `system.h2_flow` update state with position component\n outside of the manifold and `state_prev` is the corresponding pre-update\n state in the co-tangent bundle.\n\n Only requires re-evaluating the constraint function `system.constr` within\n the solver loop and no recomputation of matrix decompositions on each\n iteration.\n\n Args:\n state (mici.states.ChainState): Post `h2_flow `update state to project.\n state_prev (mici.states.ChainState): Previous state in co-tangent\n bundle manifold before `h2_flow` update which defines the\n co-tangent space to perform projection in.\n dt (float): Integrator time step used in `h2_flow` update.\n system (mici.systems.ConstrainedEuclideanMetricSystem): Hamiltonian\n system defining `h2_flow` and `constr` functions used to define\n constraint equation to solve.\n constraint_tol (float): Convergence tolerance in constraint space.\n Iteration will continue until `norm(constr(pos)) < constraint_tol`\n where `pos` is the position at the current iteration.\n position_tol (float): Convergence tolerance in position space.\n Iteration will continue until `norm(delt_pos) < position_tol`\n where `delta_pos` is the change in the position in the current\n iteration.\n divergence_tol (float): Divergence tolerance - solver aborts if\n `norm(constr(pos)) > divergence_tol` on any 
iteration where `pos`\n is the position at the current iteration and raises\n `mici.errors.ConvergenceError`.\n max_iters (int): Maximum number of iterations to perform before\n aborting and raising `mici.errors.ConvergenceError`.\n norm (Callable[[array], float]): Norm to use to test for convergence.\n\n Returns:\n Updated `state` object with position component satisfying constraint\n equation to within `constraint_tol`, i.e.\n `norm(system.constr(state.pos)) < constraint_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n ' mu = np.zeros_like(state.pos) jacob_constr_prev = system.jacob_constr(state_prev) (dh2_flow_pos_dmom, dh2_flow_mom_dmom) = system.dh2_flow_dmom(abs(dt)) inv_jacob_constr_inner_product = system.jacob_constr_inner_product(jacob_constr_prev, dh2_flow_pos_dmom).inv for i in range(max_iters): try: constr = system.constr(state) error = norm(constr) delta_mu = (jacob_constr_prev.T @ (inv_jacob_constr_inner_product @ constr)) delta_pos = (dh2_flow_pos_dmom @ delta_mu) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Quasi-Newton solver diverged on iteration {i}. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos):.1e}.') elif ((error < constraint_tol) and (norm(delta_pos) < position_tol)): state.mom -= ((np.sign(dt) * dh2_flow_mom_dmom) @ mu) return state mu += delta_mu state.pos -= delta_pos except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of quasi-Newton solver ({e}).') raise ConvergenceError(f'Quasi-Newton solver did not converge with {max_iters} iterations. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos)}.')
Solve constraint equation using a quasi-Newton method. Uses a quasi-Newton iteration to solve the non-linear system of equations in `λ` system.constr( state.pos + dh2_flow_pos_dmom @ system.jacob_constr(state_prev).T @ λ) == 0 where `dh2_flow_pos_dmom = system.dh2_flow_dmom(dt)[0]` is the derivative of the action of the (linear) `system.h2_flow` map on the state momentum component with respect to the position component, `state` is a post (unconstrained) `system.h2_flow` update state with position component outside of the manifold and `state_prev` is the corresponding pre-update state in the co-tangent bundle. Only requires re-evaluating the constraint function `system.constr` within the solver loop, with no recomputation of matrix decompositions on each iteration. Args: state (mici.states.ChainState): Post `h2_flow` update state to project. state_prev (mici.states.ChainState): Previous state in co-tangent bundle manifold before `h2_flow` update which defines the co-tangent space to perform projection in. dt (float): Integrator time step used in `h2_flow` update. system (mici.systems.ConstrainedEuclideanMetricSystem): Hamiltonian system defining `h2_flow` and `constr` functions used to define constraint equation to solve. constraint_tol (float): Convergence tolerance in constraint space. Iteration will continue until `norm(constr(pos)) < constraint_tol` where `pos` is the position at the current iteration. position_tol (float): Convergence tolerance in position space. Iteration will continue until `norm(delta_pos) < position_tol` where `delta_pos` is the change in the position in the current iteration. divergence_tol (float): Divergence tolerance - solver aborts if `norm(constr(pos)) > divergence_tol` on any iteration where `pos` is the position at the current iteration and raises `mici.errors.ConvergenceError`. max_iters (int): Maximum number of iterations to perform before aborting and raising `mici.errors.ConvergenceError`.
norm (Callable[[array], float]): Norm to use to test for convergence. Returns: Updated `state` object with position component satisfying constraint equation to within `constraint_tol`, i.e. `norm(system.constr(state.pos)) < constraint_tol`. Raises: `mici.errors.ConvergenceError` if solver does not converge within `max_iters` iterations, diverges or encounters a `ValueError` during the iteration.
mici/solvers.py
solve_projection_onto_manifold_quasi_newton
matt-graham/mici
137
python
def solve_projection_onto_manifold_quasi_newton(state, state_prev, dt, system, constraint_tol=1e-09, position_tol=1e-08, divergence_tol=10000000000.0, max_iters=50, norm=maximum_norm): 'Solve constraint equation using quasi-Newton method.\n\n Uses a quasi-Newton iteration to solve the non-linear system of equations\n in `λ`\n\n system.constr(\n state.pos + dh2_flow_pos_dmom @\n system.jacob_constr(state_prev).T @ λ) == 0\n\n where `dh2_flow_pos_dmom = system.dh2_flow_dmom(dt)[0]` is the derivative\n of the action of the (linear) `system.h2_flow` map on the state momentum\n component with respect to the position component, `state` is a post\n (unconstrained) `system.h2_flow` update state with position component\n outside of the manifold and `state_prev` is the corresponding pre-update\n state in the co-tangent bundle.\n\n Only requires re-evaluating the constraint function `system.constr` within\n the solver loop and no recomputation of matrix decompositions on each\n iteration.\n\n Args:\n state (mici.states.ChainState): Post `h2_flow `update state to project.\n state_prev (mici.states.ChainState): Previous state in co-tangent\n bundle manifold before `h2_flow` update which defines the\n co-tangent space to perform projection in.\n dt (float): Integrator time step used in `h2_flow` update.\n system (mici.systems.ConstrainedEuclideanMetricSystem): Hamiltonian\n system defining `h2_flow` and `constr` functions used to define\n constraint equation to solve.\n constraint_tol (float): Convergence tolerance in constraint space.\n Iteration will continue until `norm(constr(pos)) < constraint_tol`\n where `pos` is the position at the current iteration.\n position_tol (float): Convergence tolerance in position space.\n Iteration will continue until `norm(delt_pos) < position_tol`\n where `delta_pos` is the change in the position in the current\n iteration.\n divergence_tol (float): Divergence tolerance - solver aborts if\n `norm(constr(pos)) > divergence_tol` on any 
iteration where `pos`\n is the position at the current iteration and raises\n `mici.errors.ConvergenceError`.\n max_iters (int): Maximum number of iterations to perform before\n aborting and raising `mici.errors.ConvergenceError`.\n norm (Callable[[array], float]): Norm to use to test for convergence.\n\n Returns:\n Updated `state` object with position component satisfying constraint\n equation to within `constraint_tol`, i.e.\n `norm(system.constr(state.pos)) < constraint_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n ' mu = np.zeros_like(state.pos) jacob_constr_prev = system.jacob_constr(state_prev) (dh2_flow_pos_dmom, dh2_flow_mom_dmom) = system.dh2_flow_dmom(abs(dt)) inv_jacob_constr_inner_product = system.jacob_constr_inner_product(jacob_constr_prev, dh2_flow_pos_dmom).inv for i in range(max_iters): try: constr = system.constr(state) error = norm(constr) delta_mu = (jacob_constr_prev.T @ (inv_jacob_constr_inner_product @ constr)) delta_pos = (dh2_flow_pos_dmom @ delta_mu) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Quasi-Newton solver diverged on iteration {i}. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos):.1e}.') elif ((error < constraint_tol) and (norm(delta_pos) < position_tol)): state.mom -= ((np.sign(dt) * dh2_flow_mom_dmom) @ mu) return state mu += delta_mu state.pos -= delta_pos except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of quasi-Newton solver ({e}).') raise ConvergenceError(f'Quasi-Newton solver did not converge with {max_iters} iterations. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos)}.')
def solve_projection_onto_manifold_quasi_newton(state, state_prev, dt, system, constraint_tol=1e-09, position_tol=1e-08, divergence_tol=10000000000.0, max_iters=50, norm=maximum_norm): 'Solve constraint equation using quasi-Newton method.\n\n Uses a quasi-Newton iteration to solve the non-linear system of equations\n in `λ`\n\n system.constr(\n state.pos + dh2_flow_pos_dmom @\n system.jacob_constr(state_prev).T @ λ) == 0\n\n where `dh2_flow_pos_dmom = system.dh2_flow_dmom(dt)[0]` is the derivative\n of the action of the (linear) `system.h2_flow` map on the state momentum\n component with respect to the position component, `state` is a post\n (unconstrained) `system.h2_flow` update state with position component\n outside of the manifold and `state_prev` is the corresponding pre-update\n state in the co-tangent bundle.\n\n Only requires re-evaluating the constraint function `system.constr` within\n the solver loop and no recomputation of matrix decompositions on each\n iteration.\n\n Args:\n state (mici.states.ChainState): Post `h2_flow `update state to project.\n state_prev (mici.states.ChainState): Previous state in co-tangent\n bundle manifold before `h2_flow` update which defines the\n co-tangent space to perform projection in.\n dt (float): Integrator time step used in `h2_flow` update.\n system (mici.systems.ConstrainedEuclideanMetricSystem): Hamiltonian\n system defining `h2_flow` and `constr` functions used to define\n constraint equation to solve.\n constraint_tol (float): Convergence tolerance in constraint space.\n Iteration will continue until `norm(constr(pos)) < constraint_tol`\n where `pos` is the position at the current iteration.\n position_tol (float): Convergence tolerance in position space.\n Iteration will continue until `norm(delt_pos) < position_tol`\n where `delta_pos` is the change in the position in the current\n iteration.\n divergence_tol (float): Divergence tolerance - solver aborts if\n `norm(constr(pos)) > divergence_tol` on any 
iteration where `pos`\n is the position at the current iteration and raises\n `mici.errors.ConvergenceError`.\n max_iters (int): Maximum number of iterations to perform before\n aborting and raising `mici.errors.ConvergenceError`.\n norm (Callable[[array], float]): Norm to use to test for convergence.\n\n Returns:\n Updated `state` object with position component satisfying constraint\n equation to within `constraint_tol`, i.e.\n `norm(system.constr(state.pos)) < constraint_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n ' mu = np.zeros_like(state.pos) jacob_constr_prev = system.jacob_constr(state_prev) (dh2_flow_pos_dmom, dh2_flow_mom_dmom) = system.dh2_flow_dmom(abs(dt)) inv_jacob_constr_inner_product = system.jacob_constr_inner_product(jacob_constr_prev, dh2_flow_pos_dmom).inv for i in range(max_iters): try: constr = system.constr(state) error = norm(constr) delta_mu = (jacob_constr_prev.T @ (inv_jacob_constr_inner_product @ constr)) delta_pos = (dh2_flow_pos_dmom @ delta_mu) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Quasi-Newton solver diverged on iteration {i}. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos):.1e}.') elif ((error < constraint_tol) and (norm(delta_pos) < position_tol)): state.mom -= ((np.sign(dt) * dh2_flow_mom_dmom) @ mu) return state mu += delta_mu state.pos -= delta_pos except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of quasi-Newton solver ({e}).') raise ConvergenceError(f'Quasi-Newton solver did not converge with {max_iters} iterations. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos)}.')<|docstring|>Solve constraint equation using quasi-Newton method. 
Uses a quasi-Newton iteration to solve the non-linear system of equations in `λ` system.constr( state.pos + dh2_flow_pos_dmom @ system.jacob_constr(state_prev).T @ λ) == 0 where `dh2_flow_pos_dmom = system.dh2_flow_dmom(dt)[0]` is the derivative of the action of the (linear) `system.h2_flow` map on the state momentum component with respect to the position component, `state` is a post (unconstrained) `system.h2_flow` update state with position component outside of the manifold and `state_prev` is the corresponding pre-update state in the co-tangent bundle. Only requires re-evaluating the constraint function `system.constr` within the solver loop and no recomputation of matrix decompositions on each iteration. Args: state (mici.states.ChainState): Post `h2_flow `update state to project. state_prev (mici.states.ChainState): Previous state in co-tangent bundle manifold before `h2_flow` update which defines the co-tangent space to perform projection in. dt (float): Integrator time step used in `h2_flow` update. system (mici.systems.ConstrainedEuclideanMetricSystem): Hamiltonian system defining `h2_flow` and `constr` functions used to define constraint equation to solve. constraint_tol (float): Convergence tolerance in constraint space. Iteration will continue until `norm(constr(pos)) < constraint_tol` where `pos` is the position at the current iteration. position_tol (float): Convergence tolerance in position space. Iteration will continue until `norm(delt_pos) < position_tol` where `delta_pos` is the change in the position in the current iteration. divergence_tol (float): Divergence tolerance - solver aborts if `norm(constr(pos)) > divergence_tol` on any iteration where `pos` is the position at the current iteration and raises `mici.errors.ConvergenceError`. max_iters (int): Maximum number of iterations to perform before aborting and raising `mici.errors.ConvergenceError`. norm (Callable[[array], float]): Norm to use to test for convergence. 
Returns: Updated `state` object with position component satisfying constraint equation to within `constraint_tol`, i.e. `norm(system.constr(state.pos)) < constraint_tol`. Raises: `mici.errors.ConvergenceError` if solver does not converge within `max_iters` iterations, diverges or encounters a `ValueError` during the iteration.<|endoftext|>
74edb197a6ef6e5b127c1b0daa27f1ccd71a5d300dcbf910598215b99f3cdd7a
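The quasi-Newton projection record above solves for a multiplier `λ` so that moving along the constraint-Jacobian direction from the *previous* on-manifold point restores `constr(pos) = 0`. The full version needs mici's system objects; as a much-simplified standalone illustration of the same structure (names and the unit-circle constraint are invented for this sketch, and the flow maps are replaced by the identity), projecting a point back onto the circle `q·q = 1`:

```python
import numpy as np

def project_to_circle(q, q_prev, tol=1e-9, max_iters=50):
    # Constraint c(q) = q.q - 1; move only along the Jacobian of c evaluated
    # at the previous on-manifold point q_prev, held fixed (quasi-Newton).
    jac_prev = 2.0 * q_prev
    lam = 0.0                         # accumulated multiplier (mici also uses
                                      # this to correct the momentum; omitted)
    for _ in range(max_iters):
        c = float(q @ q) - 1.0        # constraint violation
        if abs(c) < tol:
            return q
        grad = 2.0 * float(q @ jac_prev)   # linearised change in c per unit lam
        dlam = c / grad
        lam += dlam
        q = q - dlam * jac_prev
    raise RuntimeError("projection did not converge")

q = project_to_circle(np.array([1.2, 0.4]), np.array([1.0, 0.0]))
```

Only the scalar constraint value is re-evaluated inside the loop, mirroring the record's point that the quasi-Newton variant avoids recomputing Jacobians and matrix decompositions per iteration.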
def solve_projection_onto_manifold_newton(state, state_prev, dt, system, constraint_tol=1e-09, position_tol=1e-08, divergence_tol=10000000000.0, max_iters=50, norm=maximum_norm): 'Solve constraint equation using Newton method.\n\n Uses a Newton iteration to solve the non-linear system of equations in `λ`\n\n system.constr(\n state.pos + dh2_flow_pos_dmom @\n system.jacob_constr(state_prev).T @ λ) == 0\n\n where `dh2_flow_pos_dmom = system.dh2_flow_dmom(dt)[0]` is the derivative\n of the action of the (linear) `system.h2_flow` map on the state momentum\n component with respect to the position component, `state` is a post\n (unconstrained) `system.h2_flow` update state with position component\n outside of the manifold and `state_prev` is the corresponding pre-update\n state in the co-tangent bundle.\n\n Requires re-evaluating both the constraint function `system.constr` and\n constraint Jacobian `system.jacob_constr` within the solver loop and\n computation of matrix decompositions of a preconditioned matrix on each\n iteration.\n\n Args:\n state (mici.states.ChainState): Post `h2_flow `update state to project.\n state_prev (mici.states.ChainState): Previous state in co-tangent\n bundle manifold before `h2_flow` update which defines the\n co-tangent space to perform projection in.\n dt (float): Integrator time step used in `h2_flow` update.\n system (mici.systems.ConstrainedEuclideanMetricSystem): Hamiltonian\n system defining `h2_flow` and `constr` functions used to define\n constraint equation to solve.\n constraint_tol (float): Convergence tolerance in constraint space.\n Iteration will continue until `norm(constr(pos)) < constraint_tol`\n where `pos` is the position at the current iteration.\n position_tol (float): Convergence tolerance in position space.\n Iteration will continue until `norm(delt_pos) < position_tol`\n where `delta_pos` is the change in the position in the current\n iteration.\n divergence_tol (float): Divergence tolerance - solver aborts if\n 
`norm(constr(pos)) > divergence_tol` on any iteration where `pos`\n is the position at the current iteration and raises\n `mici.errors.ConvergenceError`.\n max_iters (int): Maximum number of iterations to perform before\n aborting and raising `mici.errors.ConvergenceError`.\n norm (Callable[[array], float]): Norm to use to test for convergence.\n\n Returns:\n Updated `state` object with position component satisfying constraint\n equation to within `constraint_tol`, i.e.\n `norm(system.constr(state.pos)) < constraint_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n ' mu = np.zeros_like(state.pos) jacob_constr_prev = system.jacob_constr(state_prev) (dh2_flow_pos_dmom, dh2_flow_mom_dmom) = system.dh2_flow_dmom(abs(dt)) for i in range(max_iters): try: jacob_constr = system.jacob_constr(state) constr = system.constr(state) error = norm(constr) delta_mu = (jacob_constr_prev.T @ (system.jacob_constr_inner_product(jacob_constr, dh2_flow_pos_dmom, jacob_constr_prev).inv @ constr)) delta_pos = (dh2_flow_pos_dmom @ delta_mu) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Newton solver diverged at iteration {i}. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos):.1e}.') if ((error < constraint_tol) and (norm(delta_pos) < position_tol)): state.mom -= ((np.sign(dt) * dh2_flow_mom_dmom) @ mu) return state mu += delta_mu state.pos -= delta_pos except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of Newton solver ({e}).') from e raise ConvergenceError(f'Newton solver did not converge in {max_iters} iterations. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos):.1e}.')
Solve constraint equation using Newton method. Uses a Newton iteration to solve the non-linear system of equations in `λ` system.constr( state.pos + dh2_flow_pos_dmom @ system.jacob_constr(state_prev).T @ λ) == 0 where `dh2_flow_pos_dmom = system.dh2_flow_dmom(dt)[0]` is the derivative of the action of the (linear) `system.h2_flow` map on the state momentum component with respect to the position component, `state` is a post (unconstrained) `system.h2_flow` update state with position component outside of the manifold and `state_prev` is the corresponding pre-update state in the co-tangent bundle. Requires re-evaluating both the constraint function `system.constr` and constraint Jacobian `system.jacob_constr` within the solver loop and computation of matrix decompositions of a preconditioned matrix on each iteration. Args: state (mici.states.ChainState): Post `h2_flow` update state to project. state_prev (mici.states.ChainState): Previous state in co-tangent bundle manifold before `h2_flow` update which defines the co-tangent space to perform projection in. dt (float): Integrator time step used in `h2_flow` update. system (mici.systems.ConstrainedEuclideanMetricSystem): Hamiltonian system defining `h2_flow` and `constr` functions used to define constraint equation to solve. constraint_tol (float): Convergence tolerance in constraint space. Iteration will continue until `norm(constr(pos)) < constraint_tol` where `pos` is the position at the current iteration. position_tol (float): Convergence tolerance in position space. Iteration will continue until `norm(delta_pos) < position_tol` where `delta_pos` is the change in the position in the current iteration. divergence_tol (float): Divergence tolerance - solver aborts if `norm(constr(pos)) > divergence_tol` on any iteration where `pos` is the position at the current iteration and raises `mici.errors.ConvergenceError`. 
max_iters (int): Maximum number of iterations to perform before aborting and raising `mici.errors.ConvergenceError`. norm (Callable[[array], float]): Norm to use to test for convergence. Returns: Updated `state` object with position component satisfying constraint equation to within `constraint_tol`, i.e. `norm(system.constr(state.pos)) < constraint_tol`. Raises: `mici.errors.ConvergenceError` if solver does not converge within `max_iters` iterations, diverges or encounters a `ValueError` during the iteration.
mici/solvers.py
solve_projection_onto_manifold_newton
matt-graham/mici
137
python
def solve_projection_onto_manifold_newton(state, state_prev, dt, system, constraint_tol=1e-09, position_tol=1e-08, divergence_tol=10000000000.0, max_iters=50, norm=maximum_norm): 'Solve constraint equation using Newton method.\n\n Uses a Newton iteration to solve the non-linear system of equations in `λ`\n\n system.constr(\n state.pos + dh2_flow_pos_dmom @\n system.jacob_constr(state_prev).T @ λ) == 0\n\n where `dh2_flow_pos_dmom = system.dh2_flow_dmom(dt)[0]` is the derivative\n of the action of the (linear) `system.h2_flow` map on the state momentum\n component with respect to the position component, `state` is a post\n (unconstrained) `system.h2_flow` update state with position component\n outside of the manifold and `state_prev` is the corresponding pre-update\n state in the co-tangent bundle.\n\n Requires re-evaluating both the constraint function `system.constr` and\n constraint Jacobian `system.jacob_constr` within the solver loop and\n computation of matrix decompositions of a preconditioned matrix on each\n iteration.\n\n Args:\n state (mici.states.ChainState): Post `h2_flow` update state to project.\n state_prev (mici.states.ChainState): Previous state in co-tangent\n bundle manifold before `h2_flow` update which defines the\n co-tangent space to perform projection in.\n dt (float): Integrator time step used in `h2_flow` update.\n system (mici.systems.ConstrainedEuclideanMetricSystem): Hamiltonian\n system defining `h2_flow` and `constr` functions used to define\n constraint equation to solve.\n constraint_tol (float): Convergence tolerance in constraint space.\n Iteration will continue until `norm(constr(pos)) < constraint_tol`\n where `pos` is the position at the current iteration.\n position_tol (float): Convergence tolerance in position space.\n Iteration will continue until `norm(delta_pos) < position_tol`\n where `delta_pos` is the change in the position in the current\n iteration.\n divergence_tol (float): Divergence tolerance - solver aborts if\n 
`norm(constr(pos)) > divergence_tol` on any iteration where `pos`\n is the position at the current iteration and raises\n `mici.errors.ConvergenceError`.\n max_iters (int): Maximum number of iterations to perform before\n aborting and raising `mici.errors.ConvergenceError`.\n norm (Callable[[array], float]): Norm to use to test for convergence.\n\n Returns:\n Updated `state` object with position component satisfying constraint\n equation to within `constraint_tol`, i.e.\n `norm(system.constr(state.pos)) < constraint_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n ' mu = np.zeros_like(state.pos) jacob_constr_prev = system.jacob_constr(state_prev) (dh2_flow_pos_dmom, dh2_flow_mom_dmom) = system.dh2_flow_dmom(abs(dt)) for i in range(max_iters): try: jacob_constr = system.jacob_constr(state) constr = system.constr(state) error = norm(constr) delta_mu = (jacob_constr_prev.T @ (system.jacob_constr_inner_product(jacob_constr, dh2_flow_pos_dmom, jacob_constr_prev).inv @ constr)) delta_pos = (dh2_flow_pos_dmom @ delta_mu) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Newton solver diverged at iteration {i}. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos):.1e}.') if ((error < constraint_tol) and (norm(delta_pos) < position_tol)): state.mom -= ((np.sign(dt) * dh2_flow_mom_dmom) @ mu) return state mu += delta_mu state.pos -= delta_pos except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of Newton solver ({e}).') from e raise ConvergenceError(f'Newton solver did not converge in {max_iters} iterations. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos):.1e}.')
def solve_projection_onto_manifold_newton(state, state_prev, dt, system, constraint_tol=1e-09, position_tol=1e-08, divergence_tol=10000000000.0, max_iters=50, norm=maximum_norm): 'Solve constraint equation using Newton method.\n\n Uses a Newton iteration to solve the non-linear system of equations in `λ`\n\n system.constr(\n state.pos + dh2_flow_pos_dmom @\n system.jacob_constr(state_prev).T @ λ) == 0\n\n where `dh2_flow_pos_dmom = system.dh2_flow_dmom(dt)[0]` is the derivative\n of the action of the (linear) `system.h2_flow` map on the state momentum\n component with respect to the position component, `state` is a post\n (unconstrained) `system.h2_flow` update state with position component\n outside of the manifold and `state_prev` is the corresponding pre-update\n state in the co-tangent bundle.\n\n Requires re-evaluating both the constraint function `system.constr` and\n constraint Jacobian `system.jacob_constr` within the solver loop and\n computation of matrix decompositions of a preconditioned matrix on each\n iteration.\n\n Args:\n state (mici.states.ChainState): Post `h2_flow` update state to project.\n state_prev (mici.states.ChainState): Previous state in co-tangent\n bundle manifold before `h2_flow` update which defines the\n co-tangent space to perform projection in.\n dt (float): Integrator time step used in `h2_flow` update.\n system (mici.systems.ConstrainedEuclideanMetricSystem): Hamiltonian\n system defining `h2_flow` and `constr` functions used to define\n constraint equation to solve.\n constraint_tol (float): Convergence tolerance in constraint space.\n Iteration will continue until `norm(constr(pos)) < constraint_tol`\n where `pos` is the position at the current iteration.\n position_tol (float): Convergence tolerance in position space.\n Iteration will continue until `norm(delta_pos) < position_tol`\n where `delta_pos` is the change in the position in the current\n iteration.\n divergence_tol (float): Divergence tolerance - solver aborts if\n 
`norm(constr(pos)) > divergence_tol` on any iteration where `pos`\n is the position at the current iteration and raises\n `mici.errors.ConvergenceError`.\n max_iters (int): Maximum number of iterations to perform before\n aborting and raising `mici.errors.ConvergenceError`.\n norm (Callable[[array], float]): Norm to use to test for convergence.\n\n Returns:\n Updated `state` object with position component satisfying constraint\n equation to within `constraint_tol`, i.e.\n `norm(system.constr(state.pos)) < constraint_tol`.\n\n Raises:\n `mici.errors.ConvergenceError` if solver does not converge within\n `max_iters` iterations, diverges or encounters a `ValueError` during\n the iteration.\n ' mu = np.zeros_like(state.pos) jacob_constr_prev = system.jacob_constr(state_prev) (dh2_flow_pos_dmom, dh2_flow_mom_dmom) = system.dh2_flow_dmom(abs(dt)) for i in range(max_iters): try: jacob_constr = system.jacob_constr(state) constr = system.constr(state) error = norm(constr) delta_mu = (jacob_constr_prev.T @ (system.jacob_constr_inner_product(jacob_constr, dh2_flow_pos_dmom, jacob_constr_prev).inv @ constr)) delta_pos = (dh2_flow_pos_dmom @ delta_mu) if ((error > divergence_tol) or np.isnan(error)): raise ConvergenceError(f'Newton solver diverged at iteration {i}. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos):.1e}.') if ((error < constraint_tol) and (norm(delta_pos) < position_tol)): state.mom -= ((np.sign(dt) * dh2_flow_mom_dmom) @ mu) return state mu += delta_mu state.pos -= delta_pos except (ValueError, LinAlgError) as e: raise ConvergenceError(f'{type(e)} at iteration {i} of Newton solver ({e}).') from e raise ConvergenceError(f'Newton solver did not converge in {max_iters} iterations. Last |constr|={error:.1e}, |delta_pos|={norm(delta_pos):.1e}.')<|docstring|>Solve constraint equation using Newton method. 
Uses a Newton iteration to solve the non-linear system of equations in `λ` system.constr( state.pos + dh2_flow_pos_dmom @ system.jacob_constr(state_prev).T @ λ) == 0 where `dh2_flow_pos_dmom = system.dh2_flow_dmom(dt)[0]` is the derivative of the action of the (linear) `system.h2_flow` map on the state momentum component with respect to the position component, `state` is a post (unconstrained) `system.h2_flow` update state with position component outside of the manifold and `state_prev` is the corresponding pre-update state in the co-tangent bundle. Requires re-evaluating both the constraint function `system.constr` and constraint Jacobian `system.jacob_constr` within the solver loop and computation of matrix decompositions of a preconditioned matrix on each iteration. Args: state (mici.states.ChainState): Post `h2_flow` update state to project. state_prev (mici.states.ChainState): Previous state in co-tangent bundle manifold before `h2_flow` update which defines the co-tangent space to perform projection in. dt (float): Integrator time step used in `h2_flow` update. system (mici.systems.ConstrainedEuclideanMetricSystem): Hamiltonian system defining `h2_flow` and `constr` functions used to define constraint equation to solve. constraint_tol (float): Convergence tolerance in constraint space. Iteration will continue until `norm(constr(pos)) < constraint_tol` where `pos` is the position at the current iteration. position_tol (float): Convergence tolerance in position space. Iteration will continue until `norm(delta_pos) < position_tol` where `delta_pos` is the change in the position in the current iteration. divergence_tol (float): Divergence tolerance - solver aborts if `norm(constr(pos)) > divergence_tol` on any iteration where `pos` is the position at the current iteration and raises `mici.errors.ConvergenceError`. max_iters (int): Maximum number of iterations to perform before aborting and raising `mici.errors.ConvergenceError`. 
norm (Callable[[array], float]): Norm to use to test for convergence. Returns: Updated `state` object with position component satisfying constraint equation to within `constraint_tol`, i.e. `norm(system.constr(state.pos)) < constraint_tol`. Raises: `mici.errors.ConvergenceError` if solver does not converge within `max_iters` iterations, diverges or encounters a `ValueError` during the iteration.<|endoftext|>
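The Newton projection in this record can be illustrated on a toy problem. The sketch below is a simplified stand-alone analogue, not the mici API: it assumes an identity metric so that the flow derivative `dh2_flow_pos_dmom` reduces to the identity, uses a unit-circle constraint, and the helper names `constr`, `jacob_constr` and `project_newton` are illustrative.

```python
import numpy as np

def constr(q):
    # Toy constraint: the unit circle, |q|^2 - 1 == 0.
    return np.array([q @ q - 1.0])

def jacob_constr(q):
    # 1 x n Jacobian of the constraint function.
    return 2.0 * q[None, :]

def project_newton(q, q_prev, constraint_tol=1e-9, position_tol=1e-8, max_iters=50):
    # Project q back onto the manifold, moving only along the row space of the
    # Jacobian at the previous on-manifold point q_prev (as in the solver above,
    # with dh2_flow_pos_dmom taken to be the identity).
    jac_prev = jacob_constr(q_prev)
    for _ in range(max_iters):
        c = constr(q)
        # Newton step for the multiplier: solve (J(q) @ J_prev.T) delta = c.
        delta_lam = np.linalg.solve(jacob_constr(q) @ jac_prev.T, c)
        delta_q = jac_prev.T @ delta_lam
        if np.max(np.abs(c)) < constraint_tol and np.max(np.abs(delta_q)) < position_tol:
            return q
        q = q - delta_q
    raise RuntimeError("Newton projection did not converge")

q_prev = np.array([1.0, 0.0])   # point on the manifold before the step
q_off = np.array([1.3, 0.4])    # off-manifold point after an unconstrained step
q_proj = project_newton(q_off, q_prev)
```

As in the solver above, the position is only moved along the row space of the Jacobian evaluated at the previous on-manifold point, so here the second coordinate of `q_off` is left unchanged and only the first is adjusted until the constraint holds.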