5: 20 'MBD0004D0D1/\x01Ole'
6: O 114798 'MBD0004D0D1/\x01Ole10Native'
7: 11312 'Workbook'
To get more info about the embedded object, use option -i like this:
C:\Demo>oledump.py -s 6 -i Book1-insert-object-calc-rol3.exe.xls
String 1: calc-rol3.exe
String 2: C:\Demo\ole\CALC-R~1.EXE
String 3: C:\Demo\ole\CALC-R~1.EXE
Size embedded file: 114688
MD5 embedded file: bef425b95e45c54d649a19a7c55556a0
SHA256 embedded file: 211b63ae126411545f9177ec80114883d32f7e3c7ccf81ee4e5dd6ffe3a10e2d
To extract the embedded file, use option -e and redirect the output to a file like this:
C:\Demo>oledump.py -s 6 -e Book1-insert-object-calc-rol3.exe.xls > extracted.bin
Use option --storages to display storages (by default, oledump only lists streams). Indicator . is used for storages, except for the Root Entry, which has indicator R.
Option -f can be used to find embedded OLE files. This is useful, for example, in the following scenario:
AutoCAD drawing files (.dwg) can contain VBA macros. Although the .dwg file format is a proprietary format, VBA macros are stored as an embedded OLE file. The header of a DWG file contains a pointer to the embedded OLE file, but since an OLE file starts with a MAGIC sequence (D0CF11E0), you can just scan the input file for this sequence.
This can be done using option -f (--find). This option takes a value: letter l or a positive integer.
To have an overview of embedded OLE files, use option "-f l" (letter l) like this:
C:\Demo>oledump.py -f l Drawing1vba.dwg
Position of potential embedded OLE files:
1 0x00008090
This will report the position of every (potential) embedded OLE file inside the input file. Here you can see that there is one file at position 0x8090.
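Conceptually, this scan is nothing more than a search for the magic bytes. A simplified sketch in Python (illustrative only; oledump's actual -f implementation does more validation than this):

```python
# Scan a byte sequence for (potential) embedded OLE files by searching for the
# OLE magic D0CF11E0 (the first 4 bytes of the full 8-byte compound file signature).
OLE_MAGIC = b'\xd0\xcf\x11\xe0'

def find_ole_positions(data):
    """Return the offset of every occurrence of the OLE magic in data."""
    positions = []
    index = data.find(OLE_MAGIC)
    while index != -1:
        positions.append(index)
        index = data.find(OLE_MAGIC, index + 1)
    return positions
```

A hit only marks a potential OLE file; the candidate still has to be parsed to confirm it really is one.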
You can then select this file and analyze it, using -f 1 (integer 1):
C:\Demo>oledump.py -f 1 Drawing1vba.dwg
1: 374 'VBA_Project/PROJECT'
2: 38 'VBA_Project/PROJECTwm'
3: M 1255 'VBA_Project/VBA/ThisDrawing'
4: 1896 'VBA_Project/VBA/_VBA_PROJECT'
5: 315 'VBA_Project/VBA/dir'
6: 16 'VBA_Project_Version'
And then you can use option -s to select streams and analyze them.
Analyzing the content of streams (and VBA macros) can be quite challenging. To help with the analysis, oledump provides support for plugins and YARA rules.
Plugins are Python programs that take the stream content as input and try to analyze it. Plugins can analyze the raw stream content or the decompressed VBA macro source code. Plugins analyze all streams; you don't need to select a particular stream.
VBA macro code in malicious documents is often obfuscated and hard to understand. plugin_http_heuristics is a plugin for VBA macros that tries to recover the URL used to download the trojan in a malicious Office document. This URL is often obfuscated, for example by using hexadecimal or base64 strings to represent the URL. plugin_http_heuristics tries several heuristics to recover a URL.
Example:
C:\Demo>oledump.py -p plugin_http_heuristics sample.xls
1: 104 '\x01CompObj'
2: 256 '\x05DocumentSummaryInformation'
3: 228 '\x05SummaryInformation'
4: 4372 'Workbook'
5: 583 '_VBA_PROJECT_CUR/PROJECT'
6: 83 '_VBA_PROJECT_CUR/PROJECTwm'
7: m 976 '_VBA_PROJECT_CUR/VBA/????1'
Plugin: HTTP Heuristics plugin
8: m 976 '_VBA_PROJECT_CUR/VBA/????2'
Plugin: HTTP Heuristics plugin
9: m 976 '_VBA_PROJECT_CUR/VBA/????3'
Plugin: HTTP Heuristics plugin
10: M 261251 '_VBA_PROJECT_CUR/VBA/????????'
Plugin: HTTP Heuristics plugin
http://???.???.???.??:8080/stat/lld.php
11: 8775 '_VBA_PROJECT_CUR/VBA/_VBA_PROJECT'
12: 1398 '_VBA_PROJECT_CUR/VBA/__SRP_0'
13: 212 '_VBA_PROJECT_CUR/VBA/__SRP_1'
14: 456 '_VBA_PROJECT_CUR/VBA/__SRP_2'
15: 385 '_VBA_PROJECT_CUR/VBA/__SRP_3'
16: 550 '_VBA_PROJECT_CUR/VBA/dir'
Option -q (quiet) displays only the output from the plugins and suppresses oledump's own output. This makes it easier to spot URLs:
C:\Demo>oledump.py -p plugin_http_heuristics -q sample.xls
http://???.???.???.??:8080/stat/lld.php
When specifying plugins, you do not need to give the full path nor the .py extension (though both are allowed). If you just give the filename without a path, oledump will search for the plugin in the current directory and in the directory where oledump.py is located. You can specify more than one plugin by separating their names with a comma (,), or by using an at-file. An at-file is a text file containing the names of the plugins (one per line). If plugins are located in a different directory, you can specify it with the --plugindir option. To indicate to oledump that a text file is an at-file, you prefix it with @, like this:
oledump.py -p @all-plugins.txt sample.xls
Some plugins take options too. Use --pluginoptions to specify these options.
oledump can scan the content of the streams with YARA rules (the YARA Python module must be installed). You provide the YARA rules with option -y. You can provide one file with YARA rules, an at-file (@file containing the filenames of the YARA files) or a directory. In case of a directory, all files inside the directory are read as YARA files. Or you can provide the YARA rule with the option value (an ad-hoc rule) if it starts with # (literal), #s# (string), #x# (hexadecimal string), #r# (regex string), #q# (quote), #h# (hexadecimal) or #b# (base64). Example: -y "#rule demo {strings: $a=\"demo\" condition: $a}"
Using #s#demo will instruct oledump to generate a rule to search for string demo (rule string {strings: $a = "demo" ascii wide nocase condition: $a}) and use that rule.
All streams are scanned with the provided YARA rules; you cannot use option -s to select an individual stream.
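The rule generation behind the #s# shortcut amounts to simple string templating. A sketch (the helper name is mine, not oledump's; the generated rule text is the one shown above):

```python
def adhoc_string_rule(term):
    """Build the YARA rule generated for #s#<term>: an ascii/wide,
    case-insensitive search for the literal string."""
    return 'rule string {strings: $a = "%s" ascii wide nocase condition: $a}' % term
```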
Example:
C:\Demo>oledump.py -y contains_pe_file.yara Book1-insert-object-exe.xls
1: 107 '\x01CompObj'
2: 256 '\x05DocumentSummaryInformation'
3: 216 '\x05SummaryInformation'
4: 76 'MBD0049DB15/\x01CompObj'
5: O 60326 'MBD0049DB15/\x01Ole10Native'
YARA rule: Contains_PE_File
6: 19567 'Workbook'
In this example, you use YARA rule contains_pe_file.yara to find PE files (executables) inside Microsoft Office files. The rule triggered for stream 5 because it contains an EXE file embedded as an OLE object.
If you want more information about what was detected by the YARA rule, use option --yarastrings like in this example:
C:\Demo>oledump.py -y contains_pe_file.yara --yarastrings Book1-insert-object-exe.xls
1: 107 '\x01CompObj'
2: 256 '\x05DocumentSummaryInformation'
3: 216 '\x05SummaryInformation'
4: 76 'MBD0049DB15/\x01CompObj'
5: O 60326 'MBD0049DB15/\x01Ole10Native'
YARA rule: Contains_PE_File
000064 $a:
4d5a
'MZ'
6: 19567 'Workbook'
YARA rule contains_pe_file detects PE files by finding string MZ followed by string PE at the correct offset (AddressOfNewExeHeader).
The rule looks like this:
rule Contains_PE_File
{
meta:
author = "<NAME> (https://DidierStevens.com)"
description = "Detect a PE file inside a byte sequence"
method = "Find string MZ followed by string PE at the correct offset (AddressOfNewExeHeader)"
strings:
$a = "MZ"
condition:
for any i in (1..#a): (uint32(@a[i] + uint32(@a[i] + 0x3C)) == 0x00004550)
}
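The same condition is easy to mirror in Python, which helps when verifying a hit by hand (a simplified re-implementation of the rule's logic for illustration, not part of oledump):

```python
import struct

def find_pe_offset(data):
    """Return the offset of the first MZ header whose e_lfanew field (the dword
    at MZ+0x3C) points at the 'PE\\0\\0' signature, or -1 if none is found."""
    index = data.find(b'MZ')
    while index != -1:
        if index + 0x40 <= len(data):
            # e_lfanew: AddressOfNewExeHeader, relative to the MZ header
            (e_lfanew,) = struct.unpack_from('<I', data, index + 0x3C)
            if data[index + e_lfanew:index + e_lfanew + 4] == b'PE\x00\x00':
                return index
        index = data.find(b'MZ', index + 1)
    return -1
```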
Distributed together with oledump are the YARA rules maldoc.yara. These are YARA rules to detect shellcode, based on Frank Boldewin's shellcode detector used in OfficeMalScanner.
Two external variables are declared for use in YARA rules: streamname contains the stream name, and VBA is True when the YARA engine is given VBA source code to scan.
When looking for traces of Windows executable code (PE files, shellcode, ...) with YARA rules, one must take into account the fact that the executable code might have been encoded (for example via XOR and a key) to evade detection.
To deal with this possibility, oledump supports decoders. A decoder is another type of plugin that brute-forces a type of encoding on each stream. For example, decoder_xor1 XORs each stream with a 1-byte key, so effectively 256 different encodings of the stream are scanned by the YARA rules: XOR key 0x00, XOR key 0x01, XOR key 0x02, ..., XOR key 0xFF.
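The brute force that decoder_xor1 performs can be sketched as follows (a simplified illustration of the idea, not the decoder's actual code):

```python
def xor1_encodings(stream):
    """Yield (key, transformed stream) for each of the 256 possible 1-byte XOR
    keys; each candidate is what gets handed to the YARA engine for scanning."""
    for key in range(256):
        yield key, bytes(byte ^ key for byte in stream)
```

With the yara-python module, each candidate could then be scanned with something like rules.match(data=candidate).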
Here is an example:
C:\Demo>oledump.py -y contains_pe_file.yara -D decoder_xor1 Book1-insert-object-exe-xor14.xls
1: 107 '\x01CompObj'
2: 256 '\x05DocumentSummaryInformation'
3: 216 '\x05SummaryInformation'
4: 76 'MBD0049DB15/\x01CompObj'
5: O 60326 'MBD0049DB15/\x01Ole10Native'
YARA rule (stream decoder: XOR 1 byte key 0x14): Contains_PE_File
6: 19567 'Workbook'
The YARA rule triggers on stream 5. It contains a PE file encoded via XORing each byte with 0x14.
You can specify decoders in exactly the same way as plugins, for example specifying more than one decoder separated by a comma (,).
If decoders are located in a different directory, you could specify it with the --decoderdir option.
C:\Demo>oledump.py -y contains_pe_file.yara -D decoder_xor1,decoder_rol1,decoder_add1 Book1-insert-object-exe-xor14.xls
1: 107 '\x01CompObj'
2: 256 '\x05DocumentSummaryInformation'
3: 216 '\x05SummaryInformation'
4: 76 'MBD0049DB15/\x01CompObj'
5: O 60326 'MBD0049DB15/\x01Ole10Native'
YARA rule (stream decoder: XOR 1 byte key 0x14): Contains_PE_File
6: 19567 'Workbook'
Some decoders take options, to be provided with option --decoderoptions.
OLE files contain metadata. Use option -M to display it.
Example:
C:\Demo>oledump.py -M Book1.xls
Properties SummaryInformation:
codepage: 1252 ANSI Latin 1; Western European (Windows)
author: <NAME>
last_saved_by: <NAME>
import sys
import tensorflow as tf
from tensorflow.python.ops import variable_scope as vs
from cnn import cnn, DynamicMaxPooling
jumper = tf.load_op_library('./jumper.so')
def batch_slice(batch, start, offset, pad_values=None):
bs = tf.shape(batch)[0]
max_offset = tf.reduce_max(offset)
min_last = tf.reduce_min(tf.shape(batch)[1] - start)
pad_len = tf.reduce_max([max_offset - min_last, 0])
rank = len(batch.get_shape())
remain = tf.shape(batch)[2:]
# padding
batch_pad = tf.pad(batch, [[0, 0], [0, pad_len]] + [[0, 0] for r in range(rank - 2)], 'CONSTANT',
constant_values=pad_values)
dim_len = tf.shape(batch_pad)[1]
# gather
ind_center = start + tf.range(bs) * dim_len
ind_region = tf.reshape(tf.expand_dims(ind_center, axis=-1) + tf.expand_dims(tf.range(max_offset), axis=0), [-1])
region = tf.reshape(tf.gather(tf.reshape(batch_pad, tf.concat([[-1], remain], axis=0)), ind_region),
tf.concat([[bs, max_offset], remain], axis=0))
return region
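For reference, batch_slice extracts, for each batch element i, a window of max(offset) elements starting at start[i], right-padding wherever the window runs past the end of the row. A pure-Python sketch of those semantics (names are mine, for illustration only):

```python
def batch_slice_reference(batch, start, offset, pad_value=0):
    """Plain-Python analogue of the TF batch_slice above: each row yields
    max(offset) elements beginning at its start index, padded with pad_value
    when the window runs past the end of the row."""
    max_offset = max(offset)
    regions = []
    for row, s in zip(batch, start):
        window = list(row[s:s + max_offset])
        regions.append(window + [pad_value] * (max_offset - len(window)))
    return regions
```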
def get_glimpse_location(match_matrix, dq_size, location, glimpse):
'''
get next glimpse location (g_t+1) based on last jump location (j_t)
'''
if glimpse == 'fix_hard':
gp_d_position = tf.cast(tf.floor(location[:, 0] + location[:, 2]), dtype=tf.int32)
# NOTE: glimpse_fix_size is a free variable in this excerpt (presumably a module-level constant in the original); it fixes the glimpse length on the doc axis
gp_d_offset = tf.reduce_min([tf.ones_like(dq_size[:, 0], dtype=tf.int32) * glimpse_fix_size,
dq_size[:, 0] - gp_d_position], axis=0)
glimpse_location = tf.stack([tf.cast(gp_d_position, dtype=tf.float32),
tf.zeros_like(location[:, 1]),
tf.cast(gp_d_offset, dtype=tf.float32),
tf.cast(dq_size[:, 1], dtype=tf.float32)], axis=1)
elif glimpse == 'all_next_hard':
gp_d_position = tf.cast(tf.floor(location[:, 0] + location[:, 2]), dtype=tf.int32)
gp_d_offset = dq_size[:, 0] - gp_d_position
glimpse_location = tf.stack([tf.cast(gp_d_position, dtype=tf.float32),
tf.zeros_like(location[:, 1]),
tf.cast(gp_d_offset, dtype=tf.float32),
tf.cast(dq_size[:, 1], dtype=tf.float32)], axis=1)
else:
raise NotImplementedError()
return glimpse_location
def get_jump_location(match_matrix, dq_size, location, jump, **kwargs):
'''
get next jump location (j_t+1) based on glimpse location (g_t+1)
'''
bs = tf.shape(match_matrix)[0] # batch size (a free variable in the original excerpt)
max_q_len = tf.shape(match_matrix)[2] # max query length (also free in the original excerpt)
if jump == 'max_hard':
max_d_offset = tf.cast(tf.floor(tf.reduce_max(location[:, 2])), dtype=tf.int32)
# padding
match_matrix_pad = tf.pad(match_matrix, [[0, 0], [0, max_d_offset], [0, 0]], 'CONSTANT',
constant_values=sys.float_info.min)
d_len = tf.shape(match_matrix_pad)[1]
start = tf.cast(tf.floor(location[:, 0]), dtype=tf.int32)
gp_ind_center = start + tf.range(bs) * d_len
gp_ind_region = tf.reshape(tf.expand_dims(gp_ind_center, axis=-1) +
tf.expand_dims(tf.range(max_d_offset), axis=0), [-1])
glimpse_region = tf.reshape(tf.gather(tf.reshape(match_matrix_pad, [-1, max_q_len]), gp_ind_region),
[-1, max_d_offset, max_q_len])
d_loc = tf.argmax(tf.reduce_max(tf.abs(glimpse_region), axis=2), axis=1) + start
new_location = tf.stack([tf.cast(d_loc, dtype=tf.float32),
location[:, 1], tf.ones([bs]), location[:, 3]], axis=1)
elif jump == 'min_density_hard':
#new_location = jumper.min_density(match_matrix=match_matrix, dq_size=dq_size, location=location,
# min_density=min_density)
# there is no need to use multi-thread op, because this is fast and thus not the bottleneck
new_location = jumper.min_density_multi_cpu(
match_matrix=match_matrix, dq_size=dq_size, location=location, min_density=kwargs['min_density'],
min_jump_offset=kwargs['min_jump_offset'], use_ratio=False, only_one=False)
new_location = tf.stop_gradient(new_location)
elif jump == 'all':
new_location = tf.stop_gradient(location)
elif jump == 'test':
new_location = location[:, 0] + tf.reduce_min([tf.ones_like(location[:, 1]), location[:, 1]])
new_location = tf.stack([new_location, location[:, 1], tf.ones([bs]), location[:, 3]], axis=1)
else:
raise NotImplementedError()
return new_location
def get_representation(match_matrix, dq_size, query, query_emb, doc, doc_emb, word_vector, location, \
represent, **kwargs):
'''
get the representation based on location (j_t+1)
'''
bs = tf.shape(query)[0]
word_vector_dim = word_vector.get_shape().as_list()[1]
separate = kwargs['separate']
state_ta = kwargs['state_ta']
location_ta = kwargs['location_ta']
doc_repr_ta = kwargs['doc_repr_ta']
query_repr_ta = kwargs['query_repr_ta']
time = kwargs['time']
is_stop = kwargs['is_stop']
cur_location = location_ta.read(time)
cur_next_location = location_ta.read(time + 1)
with vs.variable_scope('ReprCond'):
# use last representation if the location remains unchanged
doc_reuse = \
tf.logical_and(tf.reduce_all(tf.equal(cur_location[:, 0:4:2], cur_next_location[:, 0:4:2])),
tf.greater_equal(time, 1))
query_reuse = \
tf.logical_and(tf.reduce_all(tf.equal(cur_location[:, 1:4:2], cur_next_location[:, 1:4:2])),
tf.greater_equal(time, 1))
if represent == 'sum_hard':
state_ta = tf.cond(tf.greater(time, 0), lambda: state_ta, lambda: state_ta.write(0, tf.zeros([bs, 1])))
start = tf.cast(tf.floor(location[:, :2]), dtype=tf.int32)
end = tf.cast(tf.floor(location[:, :2] + location[:, 2:]), dtype=tf.int32)
ind = tf.constant(0)
representation_ta = tf.TensorArray(dtype=tf.float32, size=bs,
name='representation_ta', clear_after_read=False)
def body(i, m, s, e, r):
r_i = tf.reduce_sum(m[i][s[i, 0]:e[i, 0], s[i, 1]:e[i, 1]])
r = r.write(i, tf.reshape(r_i, [1]))
return i + 1, m, s, e, r
_, _, _, _, representation_ta = \
tf.while_loop(lambda i, m, s, e, r: i < bs, body,
[ind, match_matrix, start, end, representation_ta],
parallel_iterations=1000)
representation = representation_ta.stack()
elif represent == 'interaction_copy_hard':
'''
This represent method just copy the match_matrix selected by current region to state_ta.
Must guarantee that the offset of doc is the same for different step/jump. Offset on query
is not important because we select regions only based on location of doc.
Otherwise, the TensorArray will raise inconsistent shape exception.
'''
start = tf.cast(tf.floor(location[:, :2]), dtype=tf.int32)
offset = tf.cast(tf.floor(location[:, 2:]), dtype=tf.int32)
d_start, d_offset = start[:, 0], offset[:, 0]
local_match_matrix = batch_slice(match_matrix, d_start, d_offset, pad_values=0)
# initialize the first element of state_ta
state_ta = tf.cond(tf.greater(time, 0), lambda: state_ta,
lambda: state_ta.write(0, tf.zeros_like(local_match_matrix)))
representation = local_match_matrix
elif represent == 'interaction_cnn_hard_resize':
state_ta = tf.cond(tf.greater(time, 0), lambda: state_ta, lambda: state_ta.write(0, tf.zeros([bs, 200])))
# in this implementation of "interaction_cnn_hard_resize", we don't calculate similarity matrix again
if 'max_jump_offset' not in kwargs or 'max_jump_offset2' not in kwargs:
raise ValueError('max_jump_offset and max_jump_offset2 must be set when InterCNN is used')
max_jump_offset = kwargs['max_jump_offset']
max_jump_offset2 = kwargs['max_jump_offset2']
start = tf.cast(tf.floor(location[:, :2]), dtype=tf.int32)
offset = tf.cast(tf.floor(location[:, 2:]), dtype=tf.int32)
d_start, d_offset = start[:, 0], offset[:, 0]
q_start, q_offset = start[:, 1], offset[:, 1]
d_end = d_start + d_offset - 1
q_end = q_start + q_offset - 1
d_start = d_start / dq_size[:, 0]
d_end = d_end / dq_size[:, 0]
q_start = q_start / dq_size[:, 1]
q_end = q_end / dq_size[:, 1]
local_match_matrix = tf.image.crop_and_resize(
tf.expand_dims(match_matrix, -1),
boxes=tf.cast(tf.stack([d_start, q_start, d_end, q_end], axis=-1), dtype=tf.float32),
box_ind=tf.range(bs),
crop_size=[max_jump_offset, max_jump_offset2],
method='bilinear',
name='local_interaction'
)
with vs.variable_scope('InterCNN'):
inter_repr = cnn(local_match_matrix,
architecture=[(5, 5, 1, 8), (max_jump_offset // 5, max_jump_offset2 // 5)], # integer division: pooling sizes must be ints
activation='relu',
dpool_index=None)
representation = tf.reshape(inter_repr, [bs, -1])
elif represent in {'rnn_hard', 'cnn_hard', 'interaction_cnn_hard'}:
if represent in {'rnn_hard', 'cnn_hard'}:
state_ta = tf.cond(tf.greater(time, 0), lambda: state_ta, lambda: state_ta.write(0, tf.zeros([bs, 1])))
elif represent in {'interaction_cnn_hard'}:
state_ta = tf.cond(tf.greater(time, 0), lambda: state_ta, lambda: state_ta.write(0, tf.zeros([bs, 200])))
start = tf.cast(tf.floor(location[:, :2]), dtype=tf.int32)
offset = tf.cast(tf.floor(location[:, 2:]), dtype=tf.int32)
d_start, d_offset = start[:, 0], offset[:, 0]
q_start, q_offset = start[:, 1], offset[:, 1]
d_region = batch_slice(doc, d_start, d_offset, pad_values=0)
q_region = batch_slice(query, q_start, q_offset, pad_values=0)
d_region = tf.nn.embedding_lookup(word_vector, d_region)
q_region = tf.nn.embedding_lookup(word_vector, q_region)
if represent == 'interaction_cnn_hard':
# this implementation seems to be slow, so we don't use it
if 'max_jump_offset' not in kwargs or 'max_jump_offset2' not in kwargs:
raise ValueError('max_jump_offset and max_jump_offset2 must be set when InterCNN is used')
max_jump_offset = kwargs['max_jump_offset']
max_jump_offset2 = kwargs['max_jump_offset2']
local_match_matrix = tf.matmul(d_region, tf.transpose(q_region, [0, 2, 1]))
local_match_matrix = tf.pad(local_match_matrix,
[[0, 0], [0, max_jump_offset-tf.shape(local_match_matrix)[1]],
[0, max_jump_offset2-tf.shape(local_match_matrix)[2]]], 'CONSTANT', constant_values=0)
local_match_matrix.set_shape([None, max_jump_offset, max_jump_offset2])
local_match_matrix = tf.expand_dims(local_match_matrix, 3)
with vs.variable_scope('InterCNN'):
inter_dpool_index = DynamicMaxPooling.dynamic_pooling_index_2d(d_offset, q_offset,
max_jump_offset, max_jump_offset2)
inter_repr = cnn(local_match_matrix, architecture=[(5, 5, 1, 8), (5, 5)], activation='relu',
#inter_repr = cnn(local_match_matrix, architecture=[(5, 5, 1, 16), (500, 10), (5, 5, 16, 16), (1, 1), (5, 5, 16, 16), (10, 1), (5, 5, 16, 100), (25, 10)], activation='relu',
dpool_index=inter_dpool_index)
representation = tf.reshape(inter_repr, [bs, -1])
elif represent == 'rnn_hard':
#rnn_cell = tf.nn.rnn_cell.BasicRNNCell(kwargs['rnn_size'])
rnn_cell = tf.nn.rnn_cell.GRUCell(kwargs['rnn_size'])
initial_state = rnn_cell.zero_state(bs, dtype=tf.float32)
d_outputs, d_state = tf.nn.dynamic_rnn(rnn_cell, d_region, initial_state=initial_state,
sequence_length=d_offset, dtype=tf.float32)
q_outputs, q_state = tf.nn.dynamic_rnn(rnn_cell, q_region, initial_state=initial_state,
sequence_length=q_offset, dtype=tf.float32)
representation = tf.reduce_sum(d_state * q_state, axis=1, keep_dims=True)
elif represent == 'cnn_hard':
if 'max_jump_offset' not in kwargs:
raise ValueError('max_jump_offset must be set when CNN is used')
max_jump_offset = kwargs['max_jump_offset']
doc_after_pool_size = max_jump_offset
doc_arch = [[3, word_vector_dim, 4], [doc_after_pool_size]]
query_arch = [[3, word_vector_dim, 4], [max_jump_offset]]
#doc_arch, query_arch = [[3, word_vector_dim, 4], [10]], [[3, word_vector_dim, 4], [5]]
doc_repr_ta = tf.cond(tf.greater(time, 0), lambda: doc_repr_ta,
lambda: doc_repr_ta.write(0, tf.zeros([bs, 10, doc_arch[-2][-1]])))
query_repr_ta = tf.cond(tf.greater(time, 0), lambda: query_repr_ta,
lambda: query_repr_ta.write(0, tf.zeros([bs, 5, query_arch[-2][-1]])))
def get_doc_repr():
nonlocal d_region, max_jump_offset, word_vector_dim, separate, d_offset, doc_arch, doc_after_pool_size
d_region = tf.pad(d_region, [[0, 0], [0, max_jump_offset - tf.shape(d_region)[1]], [0, 0]],
'CONSTANT', constant_values=0)
d_region.set_shape([None, max_jump_offset, word_vector_dim])
with vs.variable_scope('DocCNN' if separate else 'CNN'):
doc_dpool_index = DynamicMaxPooling.dynamic_pooling_index_1d(d_offset, max_jump_offset)
doc_repr = cnn(d_region, architecture=doc_arch, activation='relu',
dpool_index=doc_dpool_index)
with vs.variable_scope('LengthOrderAwareMaskPooling'):
mask_prob = tf.minimum(tf.ceil(doc_after_pool_size ** 2 / dq_size[:, 0]), doc_after_pool_size) / 50
# length-aware mask
mask_ber = tf.distributions.Bernoulli(probs=mask_prob)
mask = tf.transpose(mask_ber.sample([doc_after_pool_size]), [1, 0])
# order-aware pooling
#mask_for_zero = tf.cast(tf.expand_dims(tf.range(doc_after_pool_size), axis=0) < \
# (doc_after_pool_size - tf.reduce_sum(mask, axis=1, keep_dims=True)), dtype=tf.int32)
#mask = tf.cast(tf.concat([mask, mask_for_zero], axis=1), dtype=tf.bool)
#doc_repr = tf.boolean_mask(tf.concat([doc_repr, tf.zeros_like(doc_repr)], axis=1), mask)
#doc_repr = tf.reshape(doc_repr, [bs, doc_after_pool_size, doc_arch[-2][-1]])
# normal pooling
doc_repr = doc_repr * tf.cast(tf.expand_dims(mask, axis=-1), dtype=tf.float32)
# pooling
doc_repr = tf.layers.max_pooling1d(doc_repr, pool_size=[5], strides=[5],
padding='SAME', name='pool')
return doc_repr
def get_query_repr():
nonlocal q_region, max_jump_offset, word_vector_dim, separate, q_offset, query_arch
q_region = tf.pad(q_region, [[0, 0], [0, max_jump_offset - tf.shape(q_region)[1]], [0, 0]],
'CONSTANT', constant_values=0)
q_region.set_shape([None, max_jump_offset, word_vector_dim])
with vs.variable_scope('QueryCNN' if separate else 'CNN'):
if not separate:
vs.get_variable_scope().reuse_variables()
query_dpool_index = DynamicMaxPooling.dynamic_pooling_index_1d(q_offset, max_jump_offset)
query_repr = cnn(q_region, architecture=query_arch, activation='relu',
dpool_index=query_dpool_index)
query_repr = tf.layers.max_pooling1d(query_repr, pool_size=[10], strides=[10],
padding='SAME', name='pool')
return query_repr
doc_repr = tf.cond(doc_reuse, lambda: doc_repr_ta.read(time), get_doc_repr)
query_repr = tf.cond(query_reuse, lambda: query_repr_ta.read(time), get_query_repr)
#doc_repr = tf.cond(tf.constant(False), lambda: doc_repr_ta.read(time), get_doc_repr)
#query_repr =
is to use CEMS data to characterize each generator unit. So if CEMS has enough information to describe a generator unit we will over-write the eGRID data. If not, we will use the eGRID data instead. (CEMS data is expected to be more accurate because it has actual hourly performance of the generator units that we can use to calculate their operational characteristics. eGRID is reported on an annual basis and might be averaged out in different ways than we would prefer.)
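The precedence rule described above boils down to: take the CEMS-derived value for a unit when one exists, otherwise fall back to eGRID. A minimal sketch of that logic (hypothetical unit names and values, not from the real data):

```python
def merge_unit_data(cems, egrid):
    """Prefer CEMS-derived values per generator unit; fall back to the annual
    eGRID value for units CEMS could not characterize."""
    merged = dict(egrid)
    merged.update(cems)  # CEMS overwrites eGRID wherever CEMS has data
    return merged
```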
print('Compiling CEMS data...')
#dictionary of which states are in which nerc region (b/c CEMS file downloads have the state in the filename)
states = {'FRCC': ['fl'],
'WECC': ['ca','or','wa', 'nv','mt','id','wy','ut','co','az','nm','tx'],
'SPP' : ['nm','ks','tx','ok','la','ar','mo'],
'RFC' : ['wi','mi','il','in','oh','ky','wv','va','md','pa','nj'],
'NPCC' : ['ny','ct','de','ri','ma','vt','nh','me'],
'SERC' : ['mo','ar','tx','la','ms','tn','ky','il','va','al','fl','ga','sc','nc'],
'MRO': ['ia','il','mi','mn','mo','mt','nd','ne','sd','wi','wy'],
'TRE': ['ok','tx']}
tz_mapping = {'ca':0,'or':0,'wa':0, 'nv':0,'mt':1,'id':1,'wy':1,'ut':1,'co':1,'az':1,'nm':1,'tx':1}
#compile the different months of CEMS files into one dataframe, df_cems. (CEMS data is downloaded by state and by month, so compiling a year of data for ERCOT / TRE, for example, requires reading in 12 Texas .csv files and 12 Oklahoma .csv files)
df_cems = pandas.DataFrame()
for s in states[self.nerc]:
for m in ['01','02','03','04','05','06','07','08','09','10','11', '12']:
print(s + ': ' + m)
df_cems_add = pandas.read_csv(self.cems_folder + '/%s/%s%s%s.csv'%(str(self.year),str(self.year),s,m))
df_cems_add = df_cems_add[['ORISPL_CODE', 'UNITID', 'OP_DATE','OP_HOUR','GLOAD (MW)', 'SO2_MASS (lbs)', 'NOX_MASS (lbs)', 'CO2_MASS (tons)', 'HEAT_INPUT (mmBtu)']].dropna()
df_cems_add.columns=['orispl', 'unit', 'date','hour','mwh', 'so2_tot', 'nox_tot', 'co2_tot', 'mmbtu']
if self.tz_aware:
if tz_mapping[s] == 1: # shift onto California Time
tmp = df_cems_add.copy(deep=True)
tmp['date'] = pandas.to_datetime(tmp['date'])
inds1 = tmp.loc[tmp['hour'].isin(numpy.arange(1, 24))].index
inds2 = tmp.loc[tmp['hour']==0].index
tmp2 = tmp.loc[(tmp['hour']==23)&(tmp['date']==datetime.datetime(self.year, 12, 31))].copy(deep=True)
tmp.loc[inds1, 'hour'] -= 1 # it is one hour earlier in California
tmp.loc[inds2, 'date'] -= datetime.timedelta(days=1)
tmp.loc[inds2, 'hour'] = 23
tmp = pandas.concat((tmp, tmp2), ignore_index=True) # fill in first hour of following year with last hour of this year
df_cems_add = tmp.loc[tmp['date'].dt.year == self.year].copy(deep=True)
df_cems_add['date'] = df_cems_add['date'].dt.strftime("%m-%d-%Y")
df_cems = pandas.concat([df_cems, df_cems_add])
#create the 'orispl_unit' column, which combines orispl and unit into a unique tag for each generation unit
df_cems['orispl_unit'] = df_cems['orispl'].map(str) + '_' + df_cems['unit'].map(str)
#bring in geography data and only keep generators within self.nerc
df_cems = df_cems.merge(df_plnt, left_index=True, how='left', on='orispl')
df_cems = df_cems[df_cems['nerc']==self.nerc]
#convert emissions to kg
df_cems.co2_tot = df_cems.co2_tot * 907.185 #tons to kg
df_cems.so2_tot = df_cems.so2_tot * 0.454 #lbs to kg
df_cems.nox_tot = df_cems.nox_tot * 0.454 #lbs to kg
#calculate the hourly heat and emissions rates. Later we will take the medians over each week to define the generators weekly heat and emissions rates.
df_cems['heat_rate'] = df_cems.mmbtu / df_cems.mwh
df_cems['co2'] = df_cems.co2_tot / df_cems.mwh
df_cems['so2'] = df_cems.so2_tot / df_cems.mwh
df_cems['nox'] = df_cems.nox_tot / df_cems.mwh
df_cems.replace([scipy.inf, -scipy.inf], scipy.nan, inplace=True) #don't want inf messing up median calculations
#drop any bogus data. For example, the smallest mmbtu we would expect to see is 25MW(smallest unit) * 0.4(smallest minimum output) * 6.0 (smallest heat rate) = 60 mmbtu. Any entries with less than 60 mmbtu fuel or less than 6.0 heat rate, let's get rid of that row of data.
df_cems = df_cems[(df_cems.heat_rate >= 6.0) & (df_cems.mmbtu >= 60)]
#calculate emissions rates and heat rate for each week and each generator
#rather than parsing the dates (which takes forever because this is such a big dataframe) we can create month and day columns for slicing the data based on time of year
df_orispl_unit = df_cems.copy(deep=True)
df_orispl_unit.date = df_orispl_unit.date.str.replace('/','-')
temp = pandas.DataFrame(df_orispl_unit.date.str.split('-').tolist(), columns=['month', 'day', 'year'], index=df_orispl_unit.index).astype(float)
df_orispl_unit['monthday'] = temp.year*10000 + temp.month*100 + temp.day
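The monthday key encodes a date as a single sortable number (year*10000 + month*100 + day), which is why the week-slicing below can use plain numeric comparisons instead of date parsing. For example:

```python
def to_monthday(year, month, day):
    """Encode a calendar date as year*10000 + month*100 + day; the numeric
    ordering of these keys matches the chronological ordering of the dates."""
    return year * 10000 + month * 100 + day
```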
###
#loop through the weeks, slice the data, and find the average heat rates and emissions rates
#first, add a column 't' that says which week of the simulation we are in
df_orispl_unit['t'] = 52
for t in scipy.arange(52)+1:
start = (datetime.datetime.strptime(str(self.year) + '-01-01', '%Y-%m-%d') + datetime.timedelta(days=7.05*(t-1)-1)).strftime('%Y-%m-%d')
end = (datetime.datetime.strptime(str(self.year) + '-01-01', '%Y-%m-%d') + datetime.timedelta(days=7.05*(t)-1)).strftime('%Y-%m-%d')
start_monthday = float(start[0:4])*10000 + float(start[5:7])*100 + float(start[8:])
end_monthday = float(end[0:4])*10000 + float(end[5:7])*100 + float(end[8:])
#slice the data for the days corresponding to the time series period, t
df_orispl_unit.loc[(df_orispl_unit.monthday >= start_monthday) & (df_orispl_unit.monthday < end_monthday), 't'] = t
#remove outlier emissions and heat rates. These happen at hours where a generator's output is very low (e.g. less than 10 MWh). To remove these, we will remove any datapoints where mwh < 10.0 and heat_rate < 30.0 (0.5% percentiles of the 2014 TRE data).
df_orispl_unit = df_orispl_unit[(df_orispl_unit.mwh >= 10.0) & (df_orispl_unit.heat_rate <= 30.0)]
#aggregate by orispl_unit and t to get the heat rate, emissions rates, and capacity for each unit at each t
temp_2 = df_orispl_unit.groupby(['orispl_unit', 't'], as_index=False).agg('median')[['orispl_unit', 't', 'heat_rate', 'co2', 'so2', 'nox']].copy(deep=True)
temp_2['mw'] = df_orispl_unit.groupby(['orispl_unit', 't'], as_index=False).agg('max')['mwh'].copy(deep=True)
#condense df_orispl_unit down to where we just have 1 row for each unique orispl_unit
df_orispl_unit = df_orispl_unit.groupby('orispl_unit', as_index=False).agg('max')[['orispl_unit', 'orispl', 'ba', 'nerc', 'egrid', 'mwh']]
df_orispl_unit.rename(columns={'mwh':'mw'}, inplace=True)
for c in ['heat_rate', 'co2', 'so2', 'nox', 'mw']:
temp_3 = temp_2.set_index(['orispl_unit', 't'])[c].unstack().reset_index()
temp_3.columns = list(['orispl_unit']) + ([c + str(a) for a in scipy.arange(52)+1])
if not self.hist_downtime:
#remove any outlier values in the 1st or 99th percentiles
max_array = temp_3.copy().drop(columns='orispl_unit').quantile(0.99, axis=1)
min_array = temp_3.copy().drop(columns='orispl_unit').quantile(0.01, axis=1)
median_array = temp_3.copy().drop(columns='orispl_unit').median(axis=1)
for i in temp_3.index:
test = temp_3.drop(columns='orispl_unit').iloc[i]
test[test > max_array[i]] = scipy.NaN
test[test < min_array[i]] = scipy.NaN
test = list(test) #had a hard time putting the results back into temp_3 without using a list
#if the first entry in test is nan, we want to fill that with the median value so that we can use ffill later
if math.isnan(test[0]):
test[0] = median_array[i]
test.insert(0, temp_3.iloc[i].orispl_unit)
temp_3.iloc[i] = test
#for any nan values (assuming these are offline generators without any output data), fill nans with a large heat_rate that will move the generator towards the end of the merit order and large-ish emissions rate, so if the generator is dispatched in the model it will jack up prices but emissions won't be heavily affected (note, previously I just replaced all nans with 99999, but I was concerned that this might lead to a few hours of the year with extremely high emissions numbers that threw off the data)
M = float(scipy.where(c=='heat_rate', 50.0, scipy.where(c=='co2', 1500.0, scipy.where(c=='so2', 4.0, scipy.where(c=='nox', 3.0, scipy.where(c=='mw', 0.0, 99.0)))))) #M here defines the heat rate and emissions data we will give to generators that were not online in the historical data
#if we are using hist_downtime, then replace scipy.NaN with M. That way offline generators can still be dispatched, but they will have high cost and high emissions.
if self.hist_downtime:
temp_3 = temp_3.fillna(M)
#if we are not using hist_downtime, then use ffill to populate the scipy.NaN values. This allows us to use the last observed value for the generator to populate data that we don't have for it. For example, if generator G had a heat rate of 8.5 during time t-1, but we don't have data for time t, then we assume that generator G has a heat rate of 8.5 for t. When we do this, we can begin to include generators that might be available for dispatch but were not turned on because prices were too low. However, we also remove any chance of capturing legitimate maintenance downtime that would impact the historical data. So, for validation purposes, we probably want to have hist_downtime = True. For future scenario analysis, we probably want to have hist_downtime = False.
if not self.hist_downtime:
temp_3 = temp_3.fillna(method='ffill', axis=1) # was erroneously filling down columns in this version of python/pandas
# temp_3.iloc[0] = temp_3.iloc[0].fillna(method='ffill') #for some reason the first row was not doing fillna(ffill)
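A minimal illustration of the axis choice above (toy data, not the model's actual generators): with weekly values laid out across columns, the fill must run along axis=1 so each generator carries its own last observation forward, rather than borrowing values from the row above. `.ffill(axis=1)` is the modern spelling of `fillna(method='ffill', axis=1)`.

```python
import pandas as pd
import numpy as np

# One row per generator, one column per week (toy data).
temp = pd.DataFrame(
    {'week1': [8.5, 10.2], 'week2': [np.nan, 10.4], 'week3': [9.0, np.nan]},
    index=['gen_A', 'gen_B'],
)

# axis=1: propagate each generator's last observed value forward in time.
filled = temp.ffill(axis=1)

print(filled.loc['gen_A', 'week2'])  # 8.5, carried forward from week1
```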
#merge temp_3 with df_orispl_unit. Now we have weekly heat rates, emissions rates, and capacities for each generator. These values depend on whether we are including hist_downtime
df_orispl_unit = df_orispl_unit.merge(temp_3, on='orispl_unit', how='left')
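A sketch of the left merge above (toy frames; the real column names come from the CEMS data): `how='left'` keeps every row of the left frame and leaves NaN in the weekly columns wherever temp_3 has no match, which is exactly the NaN pattern the next comment describes.

```python
import pandas as pd

left = pd.DataFrame({'orispl_unit': ['3_1', '3_2', '7_1'], 'fuel': ['coal', 'coal', 'gas']})
right = pd.DataFrame({'orispl_unit': ['3_1', '7_1'], 'heat_rate1': [10.1, 7.9]})

# Left merge: all three left rows survive; '3_2' gets NaN for heat_rate1.
merged = left.merge(right, on='orispl_unit', how='left')
print(int(merged['heat_rate1'].isna().sum()))  # 1
```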
#merge df_orispl_unit into df. Now we have a dataframe with weekly heat rate and emissions rates for any plants in CEMS with that data. There will be some nan values in df for those weekly columns (e.g. 'heat_rate1', 'co223', etc. that
# coding: utf-8
"""
SignRequest API
API for SignRequest.com
OpenAPI spec version: v1
Contact: <EMAIL>
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
from signrequest_client.models.required_attachment import RequiredAttachment # noqa: F401,E501
from signrequest_client.models.signer import Signer # noqa: F401,E501
class InlineSignRequest(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'from_email': 'str',
'from_email_name': 'str',
'is_being_prepared': 'bool',
'prepare_url': 'str',
'redirect_url': 'str',
'redirect_url_declined': 'str',
'required_attachments': 'list[RequiredAttachment]',
'disable_attachments': 'bool',
'disable_text_signatures': 'bool',
'disable_text': 'bool',
'disable_date': 'bool',
'disable_emails': 'bool',
'disable_upload_signatures': 'bool',
'disable_blockchain_proof': 'bool',
'text_message_verification_locked': 'bool',
'subject': 'str',
'message': 'str',
'who': 'str',
'send_reminders': 'bool',
'signers': 'list[Signer]',
'uuid': 'str'
}
attribute_map = {
'from_email': 'from_email',
'from_email_name': 'from_email_name',
'is_being_prepared': 'is_being_prepared',
'prepare_url': 'prepare_url',
'redirect_url': 'redirect_url',
'redirect_url_declined': 'redirect_url_declined',
'required_attachments': 'required_attachments',
'disable_attachments': 'disable_attachments',
'disable_text_signatures': 'disable_text_signatures',
'disable_text': 'disable_text',
'disable_date': 'disable_date',
'disable_emails': 'disable_emails',
'disable_upload_signatures': 'disable_upload_signatures',
'disable_blockchain_proof': 'disable_blockchain_proof',
'text_message_verification_locked': 'text_message_verification_locked',
'subject': 'subject',
'message': 'message',
'who': 'who',
'send_reminders': 'send_reminders',
'signers': 'signers',
'uuid': 'uuid'
}
def __init__(self, from_email=None, from_email_name=None, is_being_prepared=None, prepare_url=None, redirect_url=None, redirect_url_declined=None, required_attachments=None, disable_attachments=None, disable_text_signatures=None, disable_text=None, disable_date=None, disable_emails=None, disable_upload_signatures=None, disable_blockchain_proof=None, text_message_verification_locked=None, subject=None, message=None, who=None, send_reminders=None, signers=None, uuid=None): # noqa: E501
"""InlineSignRequest - a model defined in Swagger""" # noqa: E501
self._from_email = None
self._from_email_name = None
self._is_being_prepared = None
self._prepare_url = None
self._redirect_url = None
self._redirect_url_declined = None
self._required_attachments = None
self._disable_attachments = None
self._disable_text_signatures = None
self._disable_text = None
self._disable_date = None
self._disable_emails = None
self._disable_upload_signatures = None
self._disable_blockchain_proof = None
self._text_message_verification_locked = None
self._subject = None
self._message = None
self._who = None
self._send_reminders = None
self._signers = None
self._uuid = None
self.discriminator = None
if from_email is not None:
self.from_email = from_email
if from_email_name is not None:
self.from_email_name = from_email_name
if is_being_prepared is not None:
self.is_being_prepared = is_being_prepared
if prepare_url is not None:
self.prepare_url = prepare_url
if redirect_url is not None:
self.redirect_url = redirect_url
if redirect_url_declined is not None:
self.redirect_url_declined = redirect_url_declined
if required_attachments is not None:
self.required_attachments = required_attachments
if disable_attachments is not None:
self.disable_attachments = disable_attachments
if disable_text_signatures is not None:
self.disable_text_signatures = disable_text_signatures
if disable_text is not None:
self.disable_text = disable_text
if disable_date is not None:
self.disable_date = disable_date
if disable_emails is not None:
self.disable_emails = disable_emails
if disable_upload_signatures is not None:
self.disable_upload_signatures = disable_upload_signatures
if disable_blockchain_proof is not None:
self.disable_blockchain_proof = disable_blockchain_proof
if text_message_verification_locked is not None:
self.text_message_verification_locked = text_message_verification_locked
if subject is not None:
self.subject = subject
if message is not None:
self.message = message
if who is not None:
self.who = who
if send_reminders is not None:
self.send_reminders = send_reminders
if signers is not None:
self.signers = signers
if uuid is not None:
self.uuid = uuid
@property
def from_email(self):
"""Gets the from_email of this InlineSignRequest. # noqa: E501
Email of user sending the SignRequest (must be a validated email) # noqa: E501
:return: The from_email of this InlineSignRequest. # noqa: E501
:rtype: str
"""
return self._from_email
@from_email.setter
def from_email(self, from_email):
"""Sets the from_email of this InlineSignRequest.
Email of user sending the SignRequest (must be a validated email) # noqa: E501
:param from_email: The from_email of this InlineSignRequest. # noqa: E501
:type: str
"""
if from_email is not None and len(from_email) < 1:
raise ValueError("Invalid value for `from_email`, length must be greater than or equal to `1`") # noqa: E501
self._from_email = from_email
@property
def from_email_name(self):
"""Gets the from_email_name of this InlineSignRequest. # noqa: E501
Name to be used in the `From` email header, e.g. `{from_email_name} <<EMAIL>>` # noqa: E501
:return: The from_email_name of this InlineSignRequest. # noqa: E501
:rtype: str
"""
return self._from_email_name
@from_email_name.setter
def from_email_name(self, from_email_name):
"""Sets the from_email_name of this InlineSignRequest.
Name to be used in the `From` email header, e.g. `{from_email_name} <<EMAIL>>` # noqa: E501
:param from_email_name: The from_email_name of this InlineSignRequest. # noqa: E501
:type: str
"""
if from_email_name is not None and len(from_email_name) < 1:
raise ValueError("Invalid value for `from_email_name`, length must be greater than or equal to `1`") # noqa: E501
self._from_email_name = from_email_name
@property
def is_being_prepared(self):
"""Gets the is_being_prepared of this InlineSignRequest. # noqa: E501
Have the sender of a SignRequest prepare the document before sending the request out, see: [prepare using the web interface](#section/Preparing-a-document/Prepare-using-the-web-interface) # noqa: E501
:return: The is_being_prepared of this InlineSignRequest. # noqa: E501
:rtype: bool
"""
return self._is_being_prepared
@is_being_prepared.setter
def is_being_prepared(self, is_being_prepared):
"""Sets the is_being_prepared of this InlineSignRequest.
Have the sender of a SignRequest prepare the document before sending the request out, see: [prepare using the web interface](#section/Preparing-a-document/Prepare-using-the-web-interface) # noqa: E501
:param is_being_prepared: The is_being_prepared of this InlineSignRequest. # noqa: E501
:type: bool
"""
self._is_being_prepared = is_being_prepared
@property
def prepare_url(self):
"""Gets the prepare_url of this InlineSignRequest. # noqa: E501
:return: The prepare_url of this InlineSignRequest. # noqa: E501
:rtype: str
"""
return self._prepare_url
@prepare_url.setter
def prepare_url(self, prepare_url):
"""Sets the prepare_url of this InlineSignRequest.
:param prepare_url: The prepare_url of this InlineSignRequest. # noqa: E501
:type: str
"""
if prepare_url is not None and len(prepare_url) < 1:
raise ValueError("Invalid value for `prepare_url`, length must be greater than or equal to `1`") # noqa: E501
self._prepare_url = prepare_url
@property
def redirect_url(self):
"""Gets the redirect_url of this InlineSignRequest. # noqa: E501
URL at which SignRequest will redirect to when a document is signed # noqa: E501
:return: The redirect_url of this InlineSignRequest. # noqa: E501
:rtype: str
"""
return self._redirect_url
@redirect_url.setter
def redirect_url(self, redirect_url):
"""Sets the redirect_url of this InlineSignRequest.
URL at which SignRequest will redirect to when a document is signed # noqa: E501
:param redirect_url: The redirect_url of this InlineSignRequest. # noqa: E501
:type: str
"""
if redirect_url is not None and len(redirect_url) < 1:
raise ValueError("Invalid value for `redirect_url`, length must be greater than or equal to `1`") # noqa: E501
self._redirect_url = redirect_url
@property
def redirect_url_declined(self):
"""Gets the redirect_url_declined of this InlineSignRequest. # noqa: E501
URL at which SignRequest will redirect to when a document is declined # noqa: E501
:return: The redirect_url_declined of this InlineSignRequest. # noqa: E501
:rtype: str
"""
return self._redirect_url_declined
@redirect_url_declined.setter
def redirect_url_declined(self, redirect_url_declined):
"""Sets the redirect_url_declined of this InlineSignRequest.
URL at which SignRequest will redirect to when a document is declined # noqa: E501
:param redirect_url_declined: The redirect_url_declined of this InlineSignRequest. # noqa: E501
:type: str
"""
if redirect_url_declined is not None and len(redirect_url_declined) < 1:
raise ValueError("Invalid value for `redirect_url_declined`, length must be greater than or equal to `1`") # noqa: E501
self._redirect_url_declined = redirect_url_declined
@property
def required_attachments(self):
"""Gets the required_attachments of this InlineSignRequest. # noqa: E501
:return: The required_attachments of this InlineSignRequest. # noqa: E501
:rtype: list[RequiredAttachment]
"""
return self._required_attachments
@required_attachments.setter
def required_attachments(self, required_attachments):
"""Sets the required_attachments of this InlineSignRequest.
:param required_attachments: The required_attachments of this InlineSignRequest. # noqa: E501
:type: list[RequiredAttachment]
"""
self._required_attachments = required_attachments
@property
def disable_attachments(self):
"""Gets the disable_attachments of this InlineSignRequest. # noqa: E501
Disable uploading/adding of attachments # noqa: E501
:return: The disable_attachments of this InlineSignRequest. # noqa: E501
:rtype: bool
"""
return self._disable_attachments
@disable_attachments.setter
def disable_attachments(self, disable_attachments):
"""Sets the disable_attachments of this InlineSignRequest.
Disable uploading/adding of attachments # noqa: E501
:param disable_attachments: The disable_attachments of this InlineSignRequest. # noqa: E501
:type: bool
"""
self._disable_attachments = disable_attachments
@property
def disable_text_signatures(self):
"""Gets the disable_text_signatures of this InlineSignRequest. # noqa: E501
Disable usage of signatures generated by typing (text) # noqa: E501
:return: The disable_text_signatures of this InlineSignRequest. # noqa: E501
:rtype: bool
"""
return self._disable_text_signatures
@disable_text_signatures.setter
def disable_text_signatures(self, disable_text_signatures):
"""Sets the disable_text_signatures of this InlineSignRequest.
Disable usage of signatures generated by typing (text) # noqa: E501
:param disable_text_signatures: The disable_text_signatures of this InlineSignRequest. # noqa: E501
:type: bool
"""
self._disable_text_signatures = disable_text_signatures
@property
def disable_text(self):
"""Gets the disable_text of this InlineSignRequest. # noqa: E501
Disable adding of text # noqa: E501
:return: The disable_text of this InlineSignRequest. # noqa: E501
:rtype: bool
"""
return self._disable_text
@disable_text.setter
| |
sender_internal_identification: Optional[pulumi.Input[str]] = None,
sender_internal_sub_identification: Optional[pulumi.Input[str]] = None,
sender_reverse_routing_address: Optional[pulumi.Input[str]] = None,
transaction_set_control_number_lower_bound: Optional[pulumi.Input[int]] = None,
transaction_set_control_number_prefix: Optional[pulumi.Input[str]] = None,
transaction_set_control_number_suffix: Optional[pulumi.Input[str]] = None,
transaction_set_control_number_upper_bound: Optional[pulumi.Input[int]] = None):
"""
:param pulumi.Input[str] application_reference_id: The application reference id.
:param pulumi.Input[bool] apply_delimiter_string_advice: The value indicating whether to apply delimiter string advice.
:param pulumi.Input[str] communication_agreement_id: The communication agreement id.
:param pulumi.Input[bool] create_grouping_segments: The value indicating whether to create grouping segments.
:param pulumi.Input[bool] enable_default_group_headers: The value indicating whether to enable default group headers.
:param pulumi.Input[str] functional_group_id: The functional group id.
:param pulumi.Input[str] group_application_password: The group application password.
:param pulumi.Input[str] group_application_receiver_id: The group application receiver id.
:param pulumi.Input[str] group_application_receiver_qualifier: The group application receiver qualifier.
:param pulumi.Input[str] group_application_sender_id: The group application sender id.
:param pulumi.Input[str] group_application_sender_qualifier: The group application sender qualifier.
:param pulumi.Input[str] group_association_assigned_code: The group association assigned code.
:param pulumi.Input[int] group_control_number_lower_bound: The group control number lower bound.
:param pulumi.Input[str] group_control_number_prefix: The group control number prefix.
:param pulumi.Input[str] group_control_number_suffix: The group control number suffix.
:param pulumi.Input[int] group_control_number_upper_bound: The group control number upper bound.
:param pulumi.Input[str] group_controlling_agency_code: The group controlling agency code.
:param pulumi.Input[str] group_message_release: The group message release.
:param pulumi.Input[str] group_message_version: The group message version.
:param pulumi.Input[int] interchange_control_number_lower_bound: The interchange control number lower bound.
:param pulumi.Input[str] interchange_control_number_prefix: The interchange control number prefix.
:param pulumi.Input[str] interchange_control_number_suffix: The interchange control number suffix.
:param pulumi.Input[int] interchange_control_number_upper_bound: The interchange control number upper bound.
:param pulumi.Input[bool] is_test_interchange: The value indicating whether the message is a test interchange.
:param pulumi.Input[bool] overwrite_existing_transaction_set_control_number: The value indicating whether to overwrite existing transaction set control number.
:param pulumi.Input[str] processing_priority_code: The processing priority code.
:param pulumi.Input[str] receiver_internal_identification: The receiver internal identification.
:param pulumi.Input[str] receiver_internal_sub_identification: The receiver internal sub identification.
:param pulumi.Input[str] receiver_reverse_routing_address: The receiver reverse routing address.
:param pulumi.Input[str] recipient_reference_password_qualifier: The recipient reference password qualifier.
:param pulumi.Input[str] recipient_reference_password_value: The recipient reference password value.
:param pulumi.Input[bool] rollover_group_control_number: The value indicating whether to rollover group control number.
:param pulumi.Input[bool] rollover_interchange_control_number: The value indicating whether to rollover interchange control number.
:param pulumi.Input[bool] rollover_transaction_set_control_number: The value indicating whether to rollover transaction set control number.
:param pulumi.Input[str] sender_internal_identification: The sender internal identification.
:param pulumi.Input[str] sender_internal_sub_identification: The sender internal sub identification.
:param pulumi.Input[str] sender_reverse_routing_address: The sender reverse routing address.
:param pulumi.Input[int] transaction_set_control_number_lower_bound: The transaction set control number lower bound.
:param pulumi.Input[str] transaction_set_control_number_prefix: The transaction set control number prefix.
:param pulumi.Input[str] transaction_set_control_number_suffix: The transaction set control number suffix.
:param pulumi.Input[int] transaction_set_control_number_upper_bound: The transaction set control number upper bound.
"""
if application_reference_id is not None:
pulumi.set(__self__, "application_reference_id", application_reference_id)
if apply_delimiter_string_advice is not None:
pulumi.set(__self__, "apply_delimiter_string_advice", apply_delimiter_string_advice)
if communication_agreement_id is not None:
pulumi.set(__self__, "communication_agreement_id", communication_agreement_id)
if create_grouping_segments is not None:
pulumi.set(__self__, "create_grouping_segments", create_grouping_segments)
if enable_default_group_headers is not None:
pulumi.set(__self__, "enable_default_group_headers", enable_default_group_headers)
if functional_group_id is not None:
pulumi.set(__self__, "functional_group_id", functional_group_id)
if group_application_password is not None:
pulumi.set(__self__, "group_application_password", group_application_password)
if group_application_receiver_id is not None:
pulumi.set(__self__, "group_application_receiver_id", group_application_receiver_id)
if group_application_receiver_qualifier is not None:
pulumi.set(__self__, "group_application_receiver_qualifier", group_application_receiver_qualifier)
if group_application_sender_id is not None:
pulumi.set(__self__, "group_application_sender_id", group_application_sender_id)
if group_application_sender_qualifier is not None:
pulumi.set(__self__, "group_application_sender_qualifier", group_application_sender_qualifier)
if group_association_assigned_code is not None:
pulumi.set(__self__, "group_association_assigned_code", group_association_assigned_code)
if group_control_number_lower_bound is not None:
pulumi.set(__self__, "group_control_number_lower_bound", group_control_number_lower_bound)
if group_control_number_prefix is not None:
pulumi.set(__self__, "group_control_number_prefix", group_control_number_prefix)
if group_control_number_suffix is not None:
pulumi.set(__self__, "group_control_number_suffix", group_control_number_suffix)
if group_control_number_upper_bound is not None:
pulumi.set(__self__, "group_control_number_upper_bound", group_control_number_upper_bound)
if group_controlling_agency_code is not None:
pulumi.set(__self__, "group_controlling_agency_code", group_controlling_agency_code)
if group_message_release is not None:
pulumi.set(__self__, "group_message_release", group_message_release)
if group_message_version is not None:
pulumi.set(__self__, "group_message_version", group_message_version)
if interchange_control_number_lower_bound is not None:
pulumi.set(__self__, "interchange_control_number_lower_bound", interchange_control_number_lower_bound)
if interchange_control_number_prefix is not None:
pulumi.set(__self__, "interchange_control_number_prefix", interchange_control_number_prefix)
if interchange_control_number_suffix is not None:
pulumi.set(__self__, "interchange_control_number_suffix", interchange_control_number_suffix)
if interchange_control_number_upper_bound is not None:
pulumi.set(__self__, "interchange_control_number_upper_bound", interchange_control_number_upper_bound)
if is_test_interchange is not None:
pulumi.set(__self__, "is_test_interchange", is_test_interchange)
if overwrite_existing_transaction_set_control_number is not None:
pulumi.set(__self__, "overwrite_existing_transaction_set_control_number", overwrite_existing_transaction_set_control_number)
if processing_priority_code is not None:
pulumi.set(__self__, "processing_priority_code", processing_priority_code)
if receiver_internal_identification is not None:
pulumi.set(__self__, "receiver_internal_identification", receiver_internal_identification)
if receiver_internal_sub_identification is not None:
pulumi.set(__self__, "receiver_internal_sub_identification", receiver_internal_sub_identification)
if receiver_reverse_routing_address is not None:
pulumi.set(__self__, "receiver_reverse_routing_address", receiver_reverse_routing_address)
if recipient_reference_password_qualifier is not None:
pulumi.set(__self__, "recipient_reference_password_qualifier", recipient_reference_password_qualifier)
if recipient_reference_password_value is not None:
pulumi.set(__self__, "recipient_reference_password_value", recipient_reference_password_value)
if rollover_group_control_number is not None:
pulumi.set(__self__, "rollover_group_control_number", rollover_group_control_number)
if rollover_interchange_control_number is not None:
pulumi.set(__self__, "rollover_interchange_control_number", rollover_interchange_control_number)
if rollover_transaction_set_control_number is not None:
pulumi.set(__self__, "rollover_transaction_set_control_number", rollover_transaction_set_control_number)
if sender_internal_identification is not None:
pulumi.set(__self__, "sender_internal_identification", sender_internal_identification)
if sender_internal_sub_identification is not None:
pulumi.set(__self__, "sender_internal_sub_identification", sender_internal_sub_identification)
if sender_reverse_routing_address is not None:
pulumi.set(__self__, "sender_reverse_routing_address", sender_reverse_routing_address)
if transaction_set_control_number_lower_bound is not None:
pulumi.set(__self__, "transaction_set_control_number_lower_bound", transaction_set_control_number_lower_bound)
if transaction_set_control_number_prefix is not None:
pulumi.set(__self__, "transaction_set_control_number_prefix", transaction_set_control_number_prefix)
if transaction_set_control_number_suffix is not None:
pulumi.set(__self__, "transaction_set_control_number_suffix", transaction_set_control_number_suffix)
if transaction_set_control_number_upper_bound is not None:
pulumi.set(__self__, "transaction_set_control_number_upper_bound", transaction_set_control_number_upper_bound)
@property
@pulumi.getter(name="applicationReferenceId")
def application_reference_id(self) -> Optional[pulumi.Input[str]]:
"""
The application reference id.
"""
return pulumi.get(self, "application_reference_id")
@application_reference_id.setter
def application_reference_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "application_reference_id", value)
@property
@pulumi.getter(name="applyDelimiterStringAdvice")
def apply_delimiter_string_advice(self) -> Optional[pulumi.Input[bool]]:
"""
The value indicating whether to apply delimiter string advice.
"""
return pulumi.get(self, "apply_delimiter_string_advice")
@apply_delimiter_string_advice.setter
def apply_delimiter_string_advice(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "apply_delimiter_string_advice", value)
@property
@pulumi.getter(name="communicationAgreementId")
def communication_agreement_id(self) -> Optional[pulumi.Input[str]]:
"""
The communication agreement id.
"""
return pulumi.get(self, "communication_agreement_id")
@communication_agreement_id.setter
def communication_agreement_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "communication_agreement_id", value)
@property
@pulumi.getter(name="createGroupingSegments")
def create_grouping_segments(self) -> Optional[pulumi.Input[bool]]:
"""
The value indicating whether to create grouping segments.
"""
return pulumi.get(self, "create_grouping_segments")
@create_grouping_segments.setter
def create_grouping_segments(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "create_grouping_segments", value)
@property
@pulumi.getter(name="enableDefaultGroupHeaders")
def enable_default_group_headers(self) -> Optional[pulumi.Input[bool]]:
"""
The value indicating whether to enable default group headers.
"""
return pulumi.get(self, "enable_default_group_headers")
@enable_default_group_headers.setter
def enable_default_group_headers(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "enable_default_group_headers", value)
@property
@pulumi.getter(name="functionalGroupId")
def functional_group_id(self) -> Optional[pulumi.Input[str]]:
"""
The functional group id.
"""
return pulumi.get(self, "functional_group_id")
@functional_group_id.setter
def functional_group_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "functional_group_id", value)
@property
@pulumi.getter(name="groupApplicationPassword")
def group_application_password(self) -> Optional[pulumi.Input[str]]:
"""
The group application password.
"""
return pulumi.get(self, "group_application_password")
@group_application_password.setter
def group_application_password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_application_password", value)
@property
@pulumi.getter(name="groupApplicationReceiverId")
def group_application_receiver_id(self) -> Optional[pulumi.Input[str]]:
"""
The group application receiver id.
"""
return pulumi.get(self, "group_application_receiver_id")
@group_application_receiver_id.setter
def group_application_receiver_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_application_receiver_id", value)
@property
@pulumi.getter(name="groupApplicationReceiverQualifier")
def group_application_receiver_qualifier(self) -> Optional[pulumi.Input[str]]:
"""
The group application receiver qualifier.
"""
return pulumi.get(self, "group_application_receiver_qualifier")
@group_application_receiver_qualifier.setter
def group_application_receiver_qualifier(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_application_receiver_qualifier", value)
@property
@pulumi.getter(name="groupApplicationSenderId")
def group_application_sender_id(self) -> Optional[pulumi.Input[str]]:
"""
The group application sender id.
"""
return pulumi.get(self, "group_application_sender_id")
@group_application_sender_id.setter
def group_application_sender_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_application_sender_id", value)
@property
@pulumi.getter(name="groupApplicationSenderQualifier")
def group_application_sender_qualifier(self) -> Optional[pulumi.Input[str]]:
"""
The group application sender qualifier.
"""
return pulumi.get(self, "group_application_sender_qualifier")
@group_application_sender_qualifier.setter
def group_application_sender_qualifier(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_application_sender_qualifier", value)
@property
@pulumi.getter(name="groupAssociationAssignedCode")
def group_association_assigned_code(self) -> Optional[pulumi.Input[str]]:
"""
The group association assigned code.
"""
return pulumi.get(self, "group_association_assigned_code")
@group_association_assigned_code.setter
def group_association_assigned_code(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_association_assigned_code", value)
@property
@pulumi.getter(name="groupControlNumberLowerBound")
def group_control_number_lower_bound(self) -> Optional[pulumi.Input[int]]:
"""
The group control number lower bound.
"""
return pulumi.get(self, "group_control_number_lower_bound")
@group_control_number_lower_bound.setter
def group_control_number_lower_bound(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "group_control_number_lower_bound", value)
@property
@pulumi.getter(name="groupControlNumberPrefix")
def group_control_number_prefix(self) -> Optional[pulumi.Input[str]]:
"""
The group control number prefix.
"""
return pulumi.get(self, "group_control_number_prefix")
@group_control_number_prefix.setter
def group_control_number_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_control_number_prefix", value)
@property
@pulumi.getter(name="groupControlNumberSuffix")
def group_control_number_suffix(self) -> Optional[pulumi.Input[str]]:
"""
The group control number suffix.
"""
return pulumi.get(self, "group_control_number_suffix")
@group_control_number_suffix.setter
def group_control_number_suffix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_control_number_suffix", value)
@property
@pulumi.getter(name="groupControlNumberUpperBound")
def group_control_number_upper_bound(self) -> Optional[pulumi.Input[int]]:
"""
The group control number upper bound.
"""
return pulumi.get(self, "group_control_number_upper_bound")
@group_control_number_upper_bound.setter
def group_control_number_upper_bound(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "group_control_number_upper_bound", value)
@property
@pulumi.getter(name="groupControllingAgencyCode")
def group_controlling_agency_code(self) -> Optional[pulumi.Input[str]]:
"""
The group controlling agency code.
"""
return pulumi.get(self, "group_controlling_agency_code")
@group_controlling_agency_code.setter
def group_controlling_agency_code(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_controlling_agency_code", value)
@property
@pulumi.getter(name="groupMessageRelease")
def group_message_release(self) -> Optional[pulumi.Input[str]]:
"""
The group message release.
"""
return pulumi.get(self, "group_message_release")
@group_message_release.setter
def group_message_release(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_message_release", value)
@property
@pulumi.getter(name="groupMessageVersion")
def group_message_version(self) -> Optional[pulumi.Input[str]]:
"""
The group message version.
"""
return pulumi.get(self, "group_message_version")
@group_message_version.setter
def group_message_version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "group_message_version", value)
@property
@pulumi.getter(name="interchangeControlNumberLowerBound")
def interchange_control_number_lower_bound(self) -> Optional[pulumi.Input[int]]:
"""
The interchange control number lower bound.
"""
return pulumi.get(self, "interchange_control_number_lower_bound")
@interchange_control_number_lower_bound.setter
def interchange_control_number_lower_bound(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "interchange_control_number_lower_bound", value)
@property
@pulumi.getter(name="interchangeControlNumberPrefix")
def interchange_control_number_prefix(self) -> Optional[pulumi.Input[str]]:
"""
The interchange control number prefix.
"""
return pulumi.get(self, "interchange_control_number_prefix")
@interchange_control_number_prefix.setter
def interchange_control_number_prefix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "interchange_control_number_prefix", value)
@property
@pulumi.getter(name="interchangeControlNumberSuffix")
def interchange_control_number_suffix(self) -> Optional[pulumi.Input[str]]:
"""
The interchange control number suffix.
"""
return pulumi.get(self, "interchange_control_number_suffix")
@interchange_control_number_suffix.setter
def interchange_control_number_suffix(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "interchange_control_number_suffix", value)
@property
@pulumi.getter(name="interchangeControlNumberUpperBound")
def interchange_control_number_upper_bound(self) -> Optional[pulumi.Input[int]]:
"""
The interchange control number upper bound.
"""
return pulumi.get(self, "interchange_control_number_upper_bound")
@interchange_control_number_upper_bound.setter
def interchange_control_number_upper_bound(self, value: Optional[pulumi.Input[int]]):
        pulumi.set(self, "interchange_control_number_upper_bound", value)
        self.log.debug("Removing server from cache: %s", new_server)
try:
del self.server_details_by_id[new_server['server_id']]
except KeyError:
self.log.debug("Server: %s has already been removed from the cache", new_server['server_id'])
def filter_excluded_servers(self):
        proj_list = set()
if self.exclude_server_id_rules:
# Filter out excluded servers
for exclude_id_rule in self.exclude_server_id_rules:
for server_id in list(self.server_details_by_id):
if re.match(exclude_id_rule, server_id):
del self.server_details_by_id[server_id]
for _, server in iteritems(self.server_details_by_id):
proj_list.add(server.get('project_name'))
projects_filtered = pattern_filter(
proj_list,
whitelist=self.include_project_name_rules,
blacklist=self.exclude_project_name_rules
)
self.server_details_by_id = {
sid: server for (sid, server)
in iteritems(self.server_details_by_id)
if server.get('project_name') in projects_filtered
}
def get_stats_for_single_server(self, server_details, tags=None, use_shortname=False):
def _is_valid_metric(label):
return label in NOVA_SERVER_METRICS or any(seg in label for seg in NOVA_SERVER_INTERFACE_SEGMENTS)
def _is_interface_metric(label):
return any(seg in label for seg in NOVA_SERVER_INTERFACE_SEGMENTS)
hypervisor_hostname = server_details.get('hypervisor_hostname')
host_tags = self._get_host_aggregate_tag(hypervisor_hostname, use_shortname=use_shortname)
host_tags.append('availability_zone:{}'.format(server_details.get('availability_zone', 'NA')))
self.external_host_tags[server_details.get('server_name')] = host_tags
server_id = server_details.get('server_id')
server_name = server_details.get('server_name')
hypervisor_hostname = server_details.get('hypervisor_hostname')
project_name = server_details.get('project_name')
server_stats = {}
try:
server_stats = self.get_server_diagnostics(server_id)
        except InstancePowerOffFailure:  # 409 response code came back from nova
self.log.debug("Server %s is powered off and cannot be monitored", server_id)
del self.server_details_by_id[server_id]
except requests.exceptions.HTTPError as e:
if e.response.status_code == 404:
self.log.debug("Server %s is not in an ACTIVE state and cannot be monitored, %s", server_id, e)
del self.server_details_by_id[server_id]
else:
self.log.debug("Received HTTP Error when reaching the nova endpoint")
return
except Exception as e:
            self.warning("Unknown error when monitoring %s: %s" % (server_id, e))
return
if server_stats:
tags = tags or []
if project_name:
tags.append("project_name:{}".format(project_name))
if hypervisor_hostname:
tags.append("hypervisor:{}".format(hypervisor_hostname))
if server_name:
tags.append("server_name:{}".format(server_name))
# microversion pre 2.48
for m in server_stats:
if _is_interface_metric(m):
# Example of interface metric
# tap123456_rx_errors
metric_pre = re.split("(_rx|_tx)", m)
interface = "interface:{}".format(metric_pre[0])
self.gauge(
"openstack.nova.server.{}{}".format(metric_pre[1].replace("_", ""), metric_pre[2]),
server_stats[m],
tags=tags+host_tags+[interface],
hostname=server_id,
)
elif _is_valid_metric(m):
self.gauge(
"openstack.nova.server.{}".format(m.replace("-", "_")),
server_stats[m],
tags=tags+host_tags,
hostname=server_id,
)
def get_stats_for_single_project(self, project, tags=None):
def _is_valid_metric(label):
return label in PROJECT_METRICS
if tags is None:
tags = []
server_tags = copy.deepcopy(tags)
project_name = project.get('name')
project_id = project.get('id')
        self.log.debug("Collecting metrics for project. name: %s id: %s", project_name, project_id)
server_stats = self.get_project_limits(project['id'])
server_tags.append('tenant_id:{}'.format(project_id))
if project_name:
server_tags.append('project_name:{}'.format(project_name))
try:
for st in server_stats:
if _is_valid_metric(st):
metric_key = PROJECT_METRICS[st]
self.gauge(
"openstack.nova.limits.{}".format(metric_key),
server_stats[st],
tags=server_tags,
)
except KeyError:
            self.log.warning("Unexpected response, not submitting limits metrics for project id: {}".format(project['id']))
# Cache util
def _is_expired(self, entry):
assert entry in ["aggregates", "physical_hosts", "hypervisors"]
ttl = self.CACHE_TTL.get(entry)
last_fetch_time = getattr(self, self.FETCH_TIME_ACCESSORS.get(entry))
return datetime.now() - last_fetch_time > timedelta(seconds=ttl)
def _get_and_set_aggregate_list(self):
if not self._aggregate_list or self._is_expired("aggregates"):
self._aggregate_list = self.get_all_aggregate_hypervisors()
self._last_aggregate_fetch_time = datetime.now()
return self._aggregate_list
def _send_api_service_checks(self, project_scope, tags):
# Nova
headers = {"X-Auth-Token": project_scope.auth_token}
try:
self.log.debug("Nova endpoint: {}".format(project_scope.nova_endpoint))
requests.get(
project_scope.nova_endpoint,
headers=headers,
verify=self.ssl_verify,
timeout=DEFAULT_API_REQUEST_TIMEOUT,
proxies=self.proxy_config,
)
self.service_check(
self.COMPUTE_API_SC,
AgentCheck.OK,
tags=["keystone_server: {}".format(self.keystone_server_url)] + tags,
)
except (requests.exceptions.HTTPError, requests.exceptions.Timeout, requests.exceptions.ConnectionError):
self.service_check(
self.COMPUTE_API_SC,
AgentCheck.CRITICAL,
tags=["keystone_server: {}".format(self.keystone_server_url)] + tags,
)
# Neutron
try:
self.log.debug("Neutron endpoint: {}".format(project_scope.neutron_endpoint))
requests.get(
project_scope.neutron_endpoint,
headers=headers,
verify=self.ssl_verify,
timeout=DEFAULT_API_REQUEST_TIMEOUT,
proxies=self.proxy_config,
)
self.service_check(
self.NETWORK_API_SC,
AgentCheck.OK,
tags=["keystone_server: {}".format(self.keystone_server_url)] + tags,
)
except (requests.exceptions.HTTPError, requests.exceptions.Timeout, requests.exceptions.ConnectionError):
self.service_check(
self.NETWORK_API_SC,
AgentCheck.CRITICAL,
tags=["keystone_server: {}".format(self.keystone_server_url)] + tags,
)
def init_instance_scope_cache(self, instance):
"""
Guarantees a valid auth scope for this instance, and returns it
Communicates with the identity server and initializes a new scope when one is absent, or has been forcibly
removed due to token expiry
"""
instance_scope = None
custom_tags = instance.get('tags', [])
if custom_tags is None:
custom_tags = []
try:
instance_scope = self.get_instance_scope(instance)
except KeyError:
# We are missing the entire instance scope either because it is the first time we initialize it or because
# authentication previously failed and got removed from the cache
# Let's populate it now
try:
self.log.debug("Fetch scope for instance {}".format(instance))
instance_scope = ScopeFetcher.from_config(self.log, self.init_config, instance,
proxy_config=self.proxy_config)
# Set keystone api with proper token
self._keystone_api = KeystoneApi(self.log, self.ssl_verify, self.proxy_config,
self.keystone_server_url, instance_scope.auth_token)
self.service_check(
self.IDENTITY_API_SC,
AgentCheck.OK,
tags=["keystone_server: {}".format(self.keystone_server_url)] + custom_tags,
)
except KeystoneUnreachable as e:
                self.log.warning("The agent could not contact the specified identity server at {}. "
                                 "Are you sure it is up at that address?".format(self.keystone_server_url))
self.log.debug("Problem grabbing auth token: %s", e)
self.service_check(
self.IDENTITY_API_SC,
AgentCheck.CRITICAL,
tags=["keystone_server: {}".format(self.keystone_server_url)] + custom_tags,
)
# If Keystone is down/unreachable, we default the
# Nova and Neutron APIs to UNKNOWN since we cannot access the service catalog
self.service_check(
self.NETWORK_API_SC,
AgentCheck.UNKNOWN,
tags=["keystone_server: {}".format(self.keystone_server_url)] + custom_tags,
)
self.service_check(
self.COMPUTE_API_SC,
AgentCheck.UNKNOWN,
tags=["keystone_server: {}".format(self.keystone_server_url)] + custom_tags,
)
except MissingNovaEndpoint as e:
self.warning("The agent could not find a compatible Nova endpoint in your service catalog!")
self.log.debug("Failed to get nova endpoint for response catalog: %s", e)
self.service_check(
self.COMPUTE_API_SC,
AgentCheck.CRITICAL,
tags=["keystone_server: {}".format(self.keystone_server_url)] + custom_tags,
)
except MissingNeutronEndpoint:
self.warning("The agent could not find a compatible Neutron endpoint in your service catalog!")
self.service_check(
self.NETWORK_API_SC,
AgentCheck.CRITICAL,
tags=["keystone_server: {}".format(self.keystone_server_url)] + custom_tags,
)
if not instance_scope:
# Fast fail in the absence of an instance_scope
raise IncompleteConfig()
self.set_scopes_cache(instance, instance_scope)
return instance_scope
@traced
def check(self, instance):
# have we been backed off
if not self._backoff.should_run(instance):
self.log.info('Skipping run due to exponential backoff in effect')
return
custom_tags = instance.get("tags", [])
collect_limits_from_all_projects = is_affirmative(instance.get('collect_limits_from_all_projects', True))
collect_hypervisor_load = is_affirmative(instance.get('collect_hypervisor_load', False))
use_shortname = is_affirmative(instance.get('use_shortname', False))
projects = {}
try:
instance_name = get_instance_name(instance)
# Authenticate and add the instance scope to instance_scopes cache
self.init_instance_scope_cache(instance)
# Init instance_scope
self.instance_scope = self.get_instance_scope(instance)
project_scopes = self.get_project_scopes(instance)
for _, project_scope in iteritems(project_scopes):
self._send_api_service_checks(project_scope, custom_tags)
self.log.debug("Running check with credentials: \n")
self.log.debug("Nova Url: %s", project_scope.nova_endpoint)
self.log.debug("Neutron Url: %s", project_scope.neutron_endpoint)
self._neutron_api = NeutronApi(self.log,
self.ssl_verify,
self.proxy_config,
project_scope.neutron_endpoint,
project_scope.auth_token)
self._compute_api = ComputeApi(self.log,
self.ssl_verify,
self.proxy_config,
project_scope.nova_endpoint,
project_scope.auth_token)
project = self.get_and_update_project_details(project_scope)
if project and project.get('name'):
projects[project.get('name')] = project
if collect_limits_from_all_projects:
scope_projects = self.get_projects(project_scope.auth_token)
if scope_projects:
for proj in scope_projects:
projects[proj['name']] = proj
filtered_projects = pattern_filter([p for p in projects],
whitelist=self.include_project_name_rules,
blacklist=self.exclude_project_name_rules)
projects = {name: v for (name, v) in iteritems(projects) if name in filtered_projects}
for name, project in iteritems(projects):
self.get_stats_for_single_project(project, custom_tags)
self.get_stats_for_all_hypervisors(instance, custom_tags=custom_tags,
use_shortname=use_shortname,
collect_hypervisor_load=collect_hypervisor_load)
# This updates the server cache directly
self.get_all_servers(project_scope.auth_token, instance_name)
self.filter_excluded_servers()
# Deep copy the cache so we can remove things from the Original during the iteration
# Allows us to remove bad servers from the cache if need be
server_cache_copy = copy.deepcopy(self.server_details_by_id)
            self.log.debug("Fetch stats from %s server(s)", len(server_cache_copy))
for server in server_cache_copy:
server_tags = copy.deepcopy(custom_tags)
server_tags.append("nova_managed_server")
self.get_stats_for_single_server(server_cache_copy[server], tags=server_tags,
use_shortname=use_shortname)
# For now, monitor all networks
self.get_network_stats(custom_tags)
if set_external_tags is not None:
set_external_tags(self.get_external_host_tags())
except IncompleteConfig as e:
if isinstance(e, IncompleteIdentity):
self.warning(
"Please specify the user via the `user` variable in your init_config.\n"
+ "This is the user you would use to authenticate with Keystone v3 via password auth.\n"
+ "The user should look like:"
+ "{'password': '<PASSWORD>', 'name': 'my_name', 'domain': {'id': 'my_domain_id'}}"
)
else:
self.warning("Configuration Incomplete! Check your openstack.yaml file")
except AuthenticationNeeded:
# Delete the scope, we'll populate a new one on the next run for this instance
self.delete_instance_scope()
except (requests.exceptions.HTTPError, requests.exceptions.Timeout, requests.exceptions.ConnectionError) as e:
if isinstance(e, requests.exceptions.HTTPError) and e.response.status_code < 500:
self.warning("Error reaching nova API: %s" % e)
else:
# exponential backoff
self.do_backoff(instance)
return
self._backoff.reset_backoff(instance)
def do_backoff(self, instance):
backoff_interval, retries = self._backoff.do_backoff(instance)
instance_name = get_instance_name(instance)
tags = instance.get('tags', [])
hypervisor_name = self.hypervisor_name_cache.get(instance_name)
if hypervisor_name:
            tags.append("hypervisor:{}".format(hypervisor_name))
self.gauge("openstack.backoff.interval", backoff_interval, tags=tags)
self.gauge("openstack.backoff.retries", retries, tags=tags)
self.warning("There were some problems reaching the nova API - applying exponential backoff")
def get_and_update_project_details(self, project_scope):
"""
Returns the project that this instance of the check is scoped to
"""
try:
project_details, tenant_id, project_name = self.get_project_details(
project_scope.tenant_id,
project_scope.name,
project_scope.domain_id)
# Set the tenant_id so we won't have to fetch it next time
if project_scope.tenant_id:
project_scope.tenant_id = tenant_id
if project_scope.name:
project_scope.name = project_name
return project_details
except Exception as e:
self.warning('Unable to get the project details: {}'.format(e))
raise e
def _get_host_aggregate_tag(self, hyp_hostname, use_shortname=False):
tags = []
hyp_hostname = hyp_hostname.split('.')[0] if use_shortname else hyp_hostname
if hyp_hostname in self._get_and_set_aggregate_list():
tags.append('aggregate:{}'.format(self._aggregate_list[hyp_hostname].get('aggregate', "unknown")))
# Need to check if there is a value for availability_zone
# because it is possible to have an aggregate without an AZ
try:
if self._aggregate_list[hyp_hostname].get('availability_zone'):
tags.append('availability_zone:{}'
.format(self._aggregate_list[hyp_hostname]['availability_zone']))
except KeyError:
self.log.debug('Unable to get the availability_zone for hypervisor: {}'.format(hyp_hostname))
else:
self.log.info('Unable to find hostname %s in aggregate list. Assuming this host is unaggregated',
hyp_hostname)
return tags
# For attaching tags to hosts that are not the host running the agent
def get_external_host_tags(self):
""" Returns a list of tags for every guest server that is detected by the OpenStack
integration.
List of pairs (hostname, list_of_tags)
"""
self.log.debug("Collecting external_host_tags now")
external_host_tags = []
        for k, v in iteritems(self.external_host_tags):
            external_host_tags.append((k, v))
        return external_host_tags
class localTrabalho(GeneratedsSuper):
    subclass = None
    superclass = None
    def __init__(self, localTrabGeral=None, localTrabDom=None):
        self.original_tagname_ = None
        self.localTrabGeral = localTrabGeral
        self.localTrabDom = localTrabDom
    def factory(*args_, **kwargs_):
        if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, localTrabalho)
if subclass is not None:
return subclass(*args_, **kwargs_)
if localTrabalho.subclass:
return localTrabalho.subclass(*args_, **kwargs_)
else:
return localTrabalho(*args_, **kwargs_)
factory = staticmethod(factory)
def get_localTrabGeral(self): return self.localTrabGeral
def set_localTrabGeral(self, localTrabGeral): self.localTrabGeral = localTrabGeral
def get_localTrabDom(self): return self.localTrabDom
def set_localTrabDom(self, localTrabDom): self.localTrabDom = localTrabDom
def hasContent_(self):
if (
self.localTrabGeral is not None or
self.localTrabDom is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='localTrabalho', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('localTrabalho')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='localTrabalho')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='localTrabalho', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='localTrabalho'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='localTrabalho', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.localTrabGeral is not None:
self.localTrabGeral.export(outfile, level, namespace_, name_='localTrabGeral', pretty_print=pretty_print)
if self.localTrabDom is not None:
self.localTrabDom.export(outfile, level, namespace_, name_='localTrabDom', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'localTrabGeral':
obj_ = TLocalTrab.factory()
obj_.build(child_)
self.localTrabGeral = obj_
obj_.original_tagname_ = 'localTrabGeral'
elif nodeName_ == 'localTrabDom':
obj_ = TEnderecoBrasil.factory()
obj_.build(child_)
self.localTrabDom = obj_
obj_.original_tagname_ = 'localTrabDom'
# end class localTrabalho
class horContratual(GeneratedsSuper):
    """Information on the Worker's Contractual Working Hours. Filling this
    in is mandatory if {tpRegJor} = [1]."""
subclass = None
superclass = None
def __init__(self, qtdHrsSem=None, tpJornada=None, dscTpJorn=None, tmpParc=None, horario=None):
self.original_tagname_ = None
self.qtdHrsSem = qtdHrsSem
self.tpJornada = tpJornada
self.dscTpJorn = dscTpJorn
self.tmpParc = tmpParc
if horario is None:
self.horario = []
else:
self.horario = horario
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, horContratual)
if subclass is not None:
return subclass(*args_, **kwargs_)
if horContratual.subclass:
return horContratual.subclass(*args_, **kwargs_)
else:
return horContratual(*args_, **kwargs_)
factory = staticmethod(factory)
def get_qtdHrsSem(self): return self.qtdHrsSem
def set_qtdHrsSem(self, qtdHrsSem): self.qtdHrsSem = qtdHrsSem
def get_tpJornada(self): return self.tpJornada
def set_tpJornada(self, tpJornada): self.tpJornada = tpJornada
def get_dscTpJorn(self): return self.dscTpJorn
def set_dscTpJorn(self, dscTpJorn): self.dscTpJorn = dscTpJorn
def get_tmpParc(self): return self.tmpParc
def set_tmpParc(self, tmpParc): self.tmpParc = tmpParc
def get_horario(self): return self.horario
def set_horario(self, horario): self.horario = horario
def add_horario(self, value): self.horario.append(value)
def insert_horario_at(self, index, value): self.horario.insert(index, value)
def replace_horario_at(self, index, value): self.horario[index] = value
def hasContent_(self):
if (
self.qtdHrsSem is not None or
self.tpJornada is not None or
self.dscTpJorn is not None or
self.tmpParc is not None or
self.horario
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='horContratual', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('horContratual')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='horContratual')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='horContratual', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='horContratual'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='horContratual', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.qtdHrsSem is not None:
showIndent(outfile, level, pretty_print)
outfile.write('<%sqtdHrsSem>%s</%sqtdHrsSem>%s' % (namespace_, self.gds_format_float(self.qtdHrsSem, input_name='qtdHrsSem'), namespace_, eol_))
if self.tpJornada is not None:
showIndent(outfile, level, pretty_print)
outfile.write('<%stpJornada>%s</%stpJornada>%s' % (namespace_, self.gds_format_integer(self.tpJornada, input_name='tpJornada'), namespace_, eol_))
if self.dscTpJorn is not None:
showIndent(outfile, level, pretty_print)
outfile.write('<%sdscTpJorn>%s</%sdscTpJorn>%s' % (namespace_, self.gds_encode(self.gds_format_string(quote_xml(self.dscTpJorn), input_name='dscTpJorn')), namespace_, eol_))
if self.tmpParc is not None:
showIndent(outfile, level, pretty_print)
outfile.write('<%stmpParc>%s</%stmpParc>%s' % (namespace_, self.gds_format_integer(self.tmpParc, input_name='tmpParc'), namespace_, eol_))
for horario_ in self.horario:
horario_.export(outfile, level, namespace_, name_='horario', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'qtdHrsSem':
sval_ = child_.text
try:
fval_ = float(sval_)
except (TypeError, ValueError) as exp:
raise_parse_error(child_, 'requires float or double: %s' % exp)
fval_ = self.gds_validate_float(fval_, node, 'qtdHrsSem')
self.qtdHrsSem = fval_
elif nodeName_ == 'tpJornada':
sval_ = child_.text
try:
ival_ = int(sval_)
except (TypeError, ValueError) as exp:
raise_parse_error(child_, 'requires integer: %s' % exp)
ival_ = self.gds_validate_integer(ival_, node, 'tpJornada')
self.tpJornada = ival_
elif nodeName_ == 'dscTpJorn':
dscTpJorn_ = child_.text
dscTpJorn_ = self.gds_validate_string(dscTpJorn_, node, 'dscTpJorn')
self.dscTpJorn = dscTpJorn_
elif nodeName_ == 'tmpParc':
sval_ = child_.text
try:
ival_ = int(sval_)
except (TypeError, ValueError) as exp:
raise_parse_error(child_, 'requires integer: %s' % exp)
ival_ = self.gds_validate_integer(ival_, node, 'tmpParc')
self.tmpParc = ival_
elif nodeName_ == 'horario':
obj_ = THorario.factory()
obj_.build(child_)
self.horario.append(obj_)
obj_.original_tagname_ = 'horario'
# end class horContratual
class qtdHrsSem(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self):
self.original_tagname_ = None
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, qtdHrsSem)
if subclass is not None:
return subclass(*args_, **kwargs_)
if qtdHrsSem.subclass:
return qtdHrsSem.subclass(*args_, **kwargs_)
else:
return qtdHrsSem(*args_, **kwargs_)
factory = staticmethod(factory)
    def hasContent_(self):
        return False
def export(self, outfile, level, namespace_='', name_='qtdHrsSem', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('qtdHrsSem')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='qtdHrsSem')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='qtdHrsSem', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='qtdHrsSem'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='qtdHrsSem', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class qtdHrsSem
class tpJornada(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self):
self.original_tagname_ = None
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, tpJornada)
if subclass is not None:
return subclass(*args_, **kwargs_)
if tpJornada.subclass:
return tpJornada.subclass(*args_, **kwargs_)
else:
return tpJornada(*args_, **kwargs_)
factory = staticmethod(factory)
    def hasContent_(self):
        return False
def export(self, outfile, level, namespace_='', name_='tpJornada', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('tpJornada')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='tpJornada')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='tpJornada', pretty_print=pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='tpJornada'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='tpJornada', fromsubclass_=False, pretty_print=True):
pass
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
pass
# end class tpJornada
class dscTpJorn(GeneratedsSuper):
subclass = None
superclass = None
def __init__(self):
self.original_tagname_ = None
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, dscTpJorn)
if subclass is not None:
return subclass(*args_, **kwargs_)
if dscTpJorn.subclass:
return dscTpJorn.subclass(*args_, **kwargs_)
else:
return dscTpJorn(*args_, **kwargs_)
factory = staticmethod(factory)
    def hasContent_(self):
        return False
def export(self, outfile, level, namespace_='', name_='dscTpJorn', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('dscTpJorn')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='dscTpJorn')
| |
      # Grow tree ensemble layer by layer.
grow_op = training_ops.grow_tree_ensemble(
tree_ensemble_handle,
stamp_token=0,
next_stamp_token=1,
learning_rate=0.1,
partition_ids=[
handler1_partitions, handler2_partitions, handler3_partitions
],
gains=[handler1_gains, handler2_gains, handler3_gains],
splits=[handler1_split, handler2_split, handler3_split],
learner_config=learner_config.SerializeToString(),
dropout_seed=123,
center_bias=True,
max_tree_depth=learner_config.constraints.max_tree_depth,
weak_learner_type=learner_pb2.LearnerConfig.OBLIVIOUS_DECISION_TREE)
session.run(grow_op)
# Expect the split for partition 1 to be chosen from handler 1 and
# the split for partition 2 to be chosen from handler 2.
# The grown tree should not be finalized as max tree depth is 3 and
# it's only grown 2 layers.
# The partition 1 split weights get added to original leaf weight 7.143.
# The partition 2 split weights get added to original leaf weight -4.375.
new_stamp, serialized = session.run(
model_ops.tree_ensemble_serialize(tree_ensemble_handle))
stats = session.run(
training_ops.tree_ensemble_stats(tree_ensemble_handle, stamp_token=1))
tree_ensemble_config.ParseFromString(serialized)
expected_result = """
trees {
nodes {
oblivious_dense_float_binary_split {
feature_column: 4
threshold: 7
}
node_metadata {
gain: 7.62
original_oblivious_leaves {
}
}
}
nodes {
oblivious_dense_float_binary_split {
feature_column: 0
threshold: 0.23
}
node_metadata {
gain: 2.7
original_oblivious_leaves {
vector {
value: 7.143
}
}
original_oblivious_leaves {
vector {
value: -4.375
}
}
}
}
nodes {
leaf {
vector {
value: 6.543
}
}
}
nodes {
leaf {
vector {
value: 7.383
}
}
}
nodes {
leaf {
vector {
value: -4.075
}
}
}
nodes {
leaf {
vector {
value: -3.975
}
}
}
}
tree_weights: 0.1
tree_metadata {
num_tree_weight_updates: 1
num_layers_grown: 2
}
growing_metadata {
num_trees_attempted: 1
num_layers_attempted: 2
}
"""
self.assertEqual(new_stamp, 1)
self.assertEqual(stats.num_trees, 0)
self.assertEqual(stats.num_layers, 2)
self.assertEqual(stats.active_tree, 1)
self.assertEqual(stats.active_layer, 2)
self.assertEqual(stats.attempted_trees, 1)
self.assertEqual(stats.attempted_layers, 2)
self.assertProtoEquals(expected_result, tree_ensemble_config)
def testGrowEnsembleWithEmptyNodesMiddleCase(self):
"""Test case: The middle existing leaves don't have examples."""
with self.cached_session() as session:
tree_ensemble_config = tree_config_pb2.DecisionTreeEnsembleConfig()
text_format.Merge(
"""
trees {
nodes {
oblivious_dense_float_binary_split {
feature_column: 4
threshold: 7
}
node_metadata {
gain: 7.62
original_oblivious_leaves {
}
}
}
nodes {
oblivious_dense_float_binary_split {
feature_column: 1
threshold: 0.23
}
node_metadata {
gain: 2.7
original_oblivious_leaves {
vector {
value: 7.143
}
}
original_oblivious_leaves {
vector {
value: -4.375
}
}
}
}
nodes {
leaf {
vector {
value: 6.543
}
}
}
nodes {
leaf {
vector {
value: 7.5
}
}
}
nodes {
leaf {
vector {
value: -4.075
}
}
}
nodes {
leaf {
vector {
value: -3.975
}
}
}
}
tree_weights: 0.1
tree_metadata {
num_tree_weight_updates: 1
num_layers_grown: 2
}
growing_metadata {
num_trees_attempted: 1
num_layers_attempted: 2
}
""", tree_ensemble_config)
tree_ensemble_handle = model_ops.tree_ensemble_variable(
stamp_token=0,
tree_ensemble_config=tree_ensemble_config.SerializeToString(),
name="tree_ensemble")
resources.initialize_resources(resources.shared_resources()).run()
# Prepare learner config.
learner_config = _gen_learner_config(
num_classes=2,
l1_reg=0,
l2_reg=0,
tree_complexity=0,
max_depth=6,
min_node_weight=0,
pruning_mode=learner_pb2.LearnerConfig.PRE_PRUNE,
growing_mode=learner_pb2.LearnerConfig.LAYER_BY_LAYER)
# Prepare handler inputs.
handler1_partitions = np.array([0], dtype=np.int32)
handler1_gains = np.array([1.8], dtype=np.float32)
handler1_split = [
_gen_dense_oblivious_split_info(0, 0.9, [1.0, 2.0, 3.0, 4.0], [2, 5])
]
# The tree currently has depth 2, so the ids for the four leaves are in
# the range [2, 6). In this test case we are assuming that our examples
# only fall in leaves 2 and 5.
# Grow tree ensemble layer by layer.
grow_op = training_ops.grow_tree_ensemble(
tree_ensemble_handle,
stamp_token=0,
next_stamp_token=1,
learning_rate=0.1,
partition_ids=[handler1_partitions],
gains=[handler1_gains],
splits=[handler1_split],
learner_config=learner_config.SerializeToString(),
dropout_seed=123,
center_bias=True,
max_tree_depth=learner_config.constraints.max_tree_depth,
weak_learner_type=learner_pb2.LearnerConfig.OBLIVIOUS_DECISION_TREE)
session.run(grow_op)
new_stamp, serialized = session.run(
model_ops.tree_ensemble_serialize(tree_ensemble_handle))
stats = session.run(
training_ops.tree_ensemble_stats(tree_ensemble_handle, stamp_token=1))
tree_ensemble_config.ParseFromString(serialized)
expected_result = """
trees {
nodes {
oblivious_dense_float_binary_split {
feature_column: 4
threshold: 7
}
node_metadata {
gain: 7.62
original_oblivious_leaves {
}
}
}
nodes {
oblivious_dense_float_binary_split {
feature_column: 1
threshold: 0.23
}
node_metadata {
gain: 2.7
original_oblivious_leaves {
vector {
value: 7.143
}
}
original_oblivious_leaves {
vector {
value: -4.375
}
}
}
}
nodes {
oblivious_dense_float_binary_split {
feature_column: 0
threshold: 0.9
}
node_metadata {
gain: 1.8
original_oblivious_leaves {
vector {
value: 6.543
}
}
original_oblivious_leaves {
vector {
value: 7.5
}
}
original_oblivious_leaves {
vector {
value: -4.075
}
}
original_oblivious_leaves {
vector {
value: -3.975
}
}
}
}
nodes {
leaf {
vector {
value: 7.543
}
}
}
nodes {
leaf {
vector {
value: 8.543
}
}
}
nodes {
leaf {
vector {
value: 7.5
}
}
}
nodes {
leaf {
vector {
value: 7.5
}
}
}
nodes {
leaf {
vector {
value: -4.075
}
}
}
nodes {
leaf {
vector {
value: -4.075
}
}
}
nodes {
leaf {
vector {
value: -0.975
}
}
}
nodes {
leaf {
vector {
value: 0.025
}
}
}
}
tree_weights: 0.1
tree_metadata {
num_tree_weight_updates: 1
num_layers_grown: 3
}
growing_metadata {
num_trees_attempted: 1
num_layers_attempted: 3
}
"""
self.assertEqual(new_stamp, 1)
self.assertEqual(stats.num_trees, 0)
self.assertEqual(stats.num_layers, 3)
self.assertEqual(stats.active_tree, 1)
self.assertEqual(stats.active_layer, 3)
self.assertEqual(stats.attempted_trees, 1)
self.assertEqual(stats.attempted_layers, 3)
self.assertProtoEquals(expected_result, tree_ensemble_config)
def testGrowEnsembleWithEmptyNodesBorderCase(self):
"""Test case: The first and last existing leaves don't have examples."""
with self.cached_session() as session:
tree_ensemble_config = tree_config_pb2.DecisionTreeEnsembleConfig()
text_format.Merge(
"""
trees {
nodes {
oblivious_dense_float_binary_split {
feature_column: 4
threshold: 7
}
node_metadata {
gain: 7.62
original_oblivious_leaves {
}
}
}
nodes {
oblivious_dense_float_binary_split {
feature_column: 1
threshold: 0.23
}
node_metadata {
gain: 2.7
original_oblivious_leaves {
vector {
value: 7.143
}
}
original_oblivious_leaves {
vector {
value: -4.375
}
}
}
}
nodes {
leaf {
vector {
value: 6.543
}
}
}
nodes {
leaf {
vector {
value: 7.5
}
}
}
nodes {
leaf {
vector {
value: -4.075
}
}
}
nodes {
leaf {
vector {
value: -3.975
}
}
}
}
tree_weights: 0.1
tree_metadata {
num_tree_weight_updates: 1
num_layers_grown: 2
}
growing_metadata {
num_trees_attempted: 1
num_layers_attempted: 2
}
""", tree_ensemble_config)
tree_ensemble_handle = model_ops.tree_ensemble_variable(
stamp_token=0,
tree_ensemble_config=tree_ensemble_config.SerializeToString(),
name="tree_ensemble")
resources.initialize_resources(resources.shared_resources()).run()
# Prepare learner config.
learner_config = _gen_learner_config(
num_classes=2,
l1_reg=0,
l2_reg=0,
tree_complexity=0,
max_depth=6,
min_node_weight=0,
pruning_mode=learner_pb2.LearnerConfig.PRE_PRUNE,
growing_mode=learner_pb2.LearnerConfig.LAYER_BY_LAYER)
# Prepare handler inputs.
handler1_partitions = np.array([0], dtype=np.int32)
handler1_gains = np.array([1.8], dtype=np.float32)
handler1_split = [
_gen_dense_oblivious_split_info(0, 0.9, [1.0, 2.0, 3.0, 4.0], [3, 4])
]
# The tree currently has depth 2, so the ids for the four leaves are in
# the range [2, 6). In this test case we are assuming that our examples
# only fall in leaves 3 and 4.
# Grow tree ensemble layer by layer.
grow_op = training_ops.grow_tree_ensemble(
tree_ensemble_handle,
stamp_token=0,
next_stamp_token=1,
learning_rate=0.1,
partition_ids=[handler1_partitions],
gains=[handler1_gains],
splits=[handler1_split],
learner_config=learner_config.SerializeToString(),
dropout_seed=123,
center_bias=True,
max_tree_depth=learner_config.constraints.max_tree_depth,
weak_learner_type=learner_pb2.LearnerConfig.OBLIVIOUS_DECISION_TREE)
session.run(grow_op)
new_stamp, serialized = session.run(
model_ops.tree_ensemble_serialize(tree_ensemble_handle))
stats = session.run(
training_ops.tree_ensemble_stats(tree_ensemble_handle, stamp_token=1))
tree_ensemble_config.ParseFromString(serialized)
expected_result = """
trees {
nodes {
oblivious_dense_float_binary_split {
feature_column: 4
threshold: 7
}
node_metadata {
gain: 7.62
original_oblivious_leaves {
}
}
}
nodes {
oblivious_dense_float_binary_split {
feature_column: 1
threshold: 0.23
}
node_metadata {
gain: 2.7
original_oblivious_leaves {
vector {
value: 7.143
}
}
original_oblivious_leaves {
vector {
value: -4.375
}
}
}
}
nodes {
oblivious_dense_float_binary_split {
feature_column: 0
threshold: 0.9
}
node_metadata {
gain: 1.8
original_oblivious_leaves {
vector {
value: 6.543
}
}
original_oblivious_leaves {
vector {
value: 7.5
}
}
original_oblivious_leaves {
vector {
value: -4.075
}
}
original_oblivious_leaves {
vector {
value: -3.975
}
}
}
}
nodes {
leaf {
vector {
value: 6.543
}
}
}
nodes {
leaf {
vector {
value: 6.543
}
}
}
nodes {
leaf {
vector {
value: 8.5
}
}
}
nodes {
leaf {
vector {
value: 9.5
}
}
}
nodes {
leaf {
vector {
value: -1.075
}
}
}
nodes {
leaf {
vector {
value: -0.075
}
}
}
nodes {
leaf {
vector {
value: -3.975
}
}
}
nodes {
leaf {
vector {
value: -3.975
}
}
}
}
tree_weights: 0.1
tree_metadata {
num_tree_weight_updates: 1
num_layers_grown: 3
}
growing_metadata {
num_trees_attempted: 1
num_layers_attempted: 3
}
"""
self.assertEqual(new_stamp, 1)
self.assertEqual(stats.num_trees, 0)
self.assertEqual(stats.num_layers, 3)
self.assertEqual(stats.active_tree, 1)
self.assertEqual(stats.active_layer, 3)
self.assertEqual(stats.attempted_trees, 1)
self.assertEqual(stats.attempted_layers, 3)
self.assertProtoEquals(expected_result, tree_ensemble_config)
def testGrowExistingEnsembleTreeFinalizedWithDropout(self):
"""Test growing an existing ensemble with the last tree finalized."""
with self.cached_session() as session:
# Create existing ensemble with one root split and one bias tree.
tree_ensemble_config = tree_config_pb2.DecisionTreeEnsembleConfig()
text_format.Merge("""
trees {
nodes {
leaf {
vector {
value: -0.32
value: 0.28
}
}
}
}
trees {
nodes {
categorical_id_binary_split {
feature_column: 3
feature_id: 7
left_id: 1
right_id: 2
}
node_metadata {
gain: 1.3
}
}
nodes {
leaf {
sparse_vector {
index: 0
value: 2.3
}
}
}
hour=9),
datetime(1999, 1, 4, hour=9),
datetime(1999, 1, 5, hour=9),
datetime(1999, 1, 6, hour=9),
datetime(1999, 1, 7, hour=9),
datetime(1999, 1, 8, hour=9),
datetime(1999, 1, 9, hour=9),
datetime(1999, 1, 10, hour=9),
datetime(1999, 1, 11, hour=9),
datetime(1999, 1, 12, hour=9),
datetime(1999, 1, 13, hour=9),
datetime(1999, 1, 14, hour=9),
datetime(1999, 1, 15, hour=9),
datetime(1999, 1, 16, hour=9),
datetime(1999, 1, 17, hour=9),
datetime(1999, 1, 18, hour=9),
datetime(1999, 1, 19, hour=9),
datetime(1999, 1, 20, hour=9),
datetime(1999, 1, 21, hour=9),
datetime(1999, 1, 22, hour=9),
datetime(1999, 1, 23, hour=9),
datetime(1999, 1, 24, hour=9),
datetime(1999, 1, 25, hour=9),
datetime(1999, 1, 26, hour=9),
datetime(1999, 1, 27, hour=9),
datetime(1999, 1, 28, hour=9),
datetime(1999, 1, 29, hour=9),
datetime(1999, 1, 30, hour=9),
datetime(1999, 1, 31, hour=9),
datetime(2000, 1, 1, hour=9),
datetime(2000, 1, 2, hour=9),
datetime(2000, 1, 3, hour=9),
datetime(2000, 1, 4, hour=9),
datetime(2000, 1, 5, hour=9),
datetime(2000, 1, 6, hour=9),
datetime(2000, 1, 7, hour=9),
datetime(2000, 1, 8, hour=9),
datetime(2000, 1, 9, hour=9),
datetime(2000, 1, 10, hour=9),
datetime(2000, 1, 11, hour=9),
datetime(2000, 1, 12, hour=9),
datetime(2000, 1, 13, hour=9),
datetime(2000, 1, 14, hour=9),
datetime(2000, 1, 15, hour=9),
datetime(2000, 1, 16, hour=9),
datetime(2000, 1, 17, hour=9),
datetime(2000, 1, 18, hour=9),
datetime(2000, 1, 19, hour=9),
datetime(2000, 1, 20, hour=9),
datetime(2000, 1, 21, hour=9),
datetime(2000, 1, 22, hour=9),
datetime(2000, 1, 23, hour=9),
datetime(2000, 1, 24, hour=9),
datetime(2000, 1, 25, hour=9),
datetime(2000, 1, 26, hour=9),
datetime(2000, 1, 27, hour=9),
datetime(2000, 1, 28, hour=9),
datetime(2000, 1, 29, hour=9),
datetime(2000, 1, 30, hour=9),
datetime(2000, 1, 31, hour=9),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_daily_on_hours_and_minutes(self):
"""Daily every 20 minutes from 9:00 a.m. to 4:40 p.m.
RRULE:FREQ=DAILY;BYHOUR=9,10,11,12,13,14,15,16;BYMINUTE=0,20,40;COUNT=48
DTSTART:19970902T090000
"""
rule = RecurrenceRule(DAILY,
on_hours=(9, 10, 11, 12, 13, 14, 15, 16),
on_minutes=(0, 20, 40),
count=48)
start = datetime(1997, 9, 2, hour=9)
expected = (
datetime(1997, 9, 2, hour= 9, minute= 0),
datetime(1997, 9, 2, hour= 9, minute=20),
datetime(1997, 9, 2, hour= 9, minute=40),
datetime(1997, 9, 2, hour=10, minute= 0),
datetime(1997, 9, 2, hour=10, minute=20),
datetime(1997, 9, 2, hour=10, minute=40),
datetime(1997, 9, 2, hour=11, minute= 0),
datetime(1997, 9, 2, hour=11, minute=20),
datetime(1997, 9, 2, hour=11, minute=40),
datetime(1997, 9, 2, hour=12, minute= 0),
datetime(1997, 9, 2, hour=12, minute=20),
datetime(1997, 9, 2, hour=12, minute=40),
datetime(1997, 9, 2, hour=13, minute= 0),
datetime(1997, 9, 2, hour=13, minute=20),
datetime(1997, 9, 2, hour=13, minute=40),
datetime(1997, 9, 2, hour=14, minute= 0),
datetime(1997, 9, 2, hour=14, minute=20),
datetime(1997, 9, 2, hour=14, minute=40),
datetime(1997, 9, 2, hour=15, minute= 0),
datetime(1997, 9, 2, hour=15, minute=20),
datetime(1997, 9, 2, hour=15, minute=40),
datetime(1997, 9, 2, hour=16, minute= 0),
datetime(1997, 9, 2, hour=16, minute=20),
datetime(1997, 9, 2, hour=16, minute=40),
datetime(1997, 9, 3, hour= 9, minute= 0),
datetime(1997, 9, 3, hour= 9, minute=20),
datetime(1997, 9, 3, hour= 9, minute=40),
datetime(1997, 9, 3, hour=10, minute= 0),
datetime(1997, 9, 3, hour=10, minute=20),
datetime(1997, 9, 3, hour=10, minute=40),
datetime(1997, 9, 3, hour=11, minute= 0),
datetime(1997, 9, 3, hour=11, minute=20),
datetime(1997, 9, 3, hour=11, minute=40),
datetime(1997, 9, 3, hour=12, minute= 0),
datetime(1997, 9, 3, hour=12, minute=20),
datetime(1997, 9, 3, hour=12, minute=40),
datetime(1997, 9, 3, hour=13, minute= 0),
datetime(1997, 9, 3, hour=13, minute=20),
datetime(1997, 9, 3, hour=13, minute=40),
datetime(1997, 9, 3, hour=14, minute= 0),
datetime(1997, 9, 3, hour=14, minute=20),
datetime(1997, 9, 3, hour=14, minute=40),
datetime(1997, 9, 3, hour=15, minute= 0),
datetime(1997, 9, 3, hour=15, minute=20),
datetime(1997, 9, 3, hour=15, minute=40),
datetime(1997, 9, 3, hour=16, minute= 0),
datetime(1997, 9, 3, hour=16, minute=20),
datetime(1997, 9, 3, hour=16, minute=40),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_every_3_hours(self):
"""Every 3 hours from 9:00 a.m. to 5:00 p.m.
RRULE:FREQ=HOURLY;INTERVAL=3;UNTIL=19970902T170000
DTSTART:19970902T090000
"""
rule = RecurrenceRule(HOURLY,
interval=3,
until=datetime(1997, 9, 2, hour=17))
start = datetime(1997, 9, 2, hour=9)
expected = (
datetime(1997, 9, 2, hour= 9),
datetime(1997, 9, 2, hour=12),
datetime(1997, 9, 2, hour=15),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_every_15_minutes(self):
"""Every 15 minutes.
RRULE:FREQ=MINUTELY;INTERVAL=15;COUNT=6
DTSTART:19970902T090000
"""
rule = RecurrenceRule(MINUTELY,
interval=15,
count=6)
start = datetime(1997, 9, 2, hour=9)
expected = (
datetime(1997, 9, 2, hour= 9, minute= 0),
datetime(1997, 9, 2, hour= 9, minute=15),
datetime(1997, 9, 2, hour= 9, minute=30),
datetime(1997, 9, 2, hour= 9, minute=45),
datetime(1997, 9, 2, hour=10, minute= 0),
datetime(1997, 9, 2, hour=10, minute=15),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_every_90_minutes(self):
"""Every 90 minutes.
RRULE:FREQ=MINUTELY;INTERVAL=90;COUNT=4
DTSTART:19970902T090000
"""
rule = RecurrenceRule(MINUTELY,
interval=90,
count=4)
start = datetime(1997, 9, 2, hour=9)
expected = (
datetime(1997, 9, 2, hour= 9, minute= 0),
datetime(1997, 9, 2, hour=10, minute=30),
datetime(1997, 9, 2, hour=12, minute= 0),
datetime(1997, 9, 2, hour=13, minute=30),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_weekly_on_week_days_3(self):
"""Weekly on Thursday and Sunday.
RRULE:FREQ=WEEKLY;COUNT=35;BYDAY=SU,TH
DTSTART:20111120T100000
"""
rule = RecurrenceRule(WEEKLY,
on_week_days=(THURSDAY, SUNDAY),
count=35)
start = datetime(2011, 11, 20, hour=10)
expected = (
datetime(2011, 11, 20, hour=10),
datetime(2011, 11, 24, hour=10),
datetime(2011, 11, 27, hour=10),
datetime(2011, 12, 1, hour=10),
datetime(2011, 12, 4, hour=10),
datetime(2011, 12, 8, hour=10),
datetime(2011, 12, 11, hour=10),
datetime(2011, 12, 15, hour=10),
datetime(2011, 12, 18, hour=10),
datetime(2011, 12, 22, hour=10),
datetime(2011, 12, 25, hour=10),
datetime(2011, 12, 29, hour=10),
datetime(2012, 1, 1, hour=10),
datetime(2012, 1, 5, hour=10),
datetime(2012, 1, 8, hour=10),
datetime(2012, 1, 12, hour=10),
datetime(2012, 1, 15, hour=10),
datetime(2012, 1, 19, hour=10),
datetime(2012, 1, 22, hour=10),
datetime(2012, 1, 26, hour=10),
datetime(2012, 1, 29, hour=10),
datetime(2012, 2, 2, hour=10),
datetime(2012, 2, 5, hour=10),
datetime(2012, 2, 9, hour=10),
datetime(2012, 2, 12, hour=10),
datetime(2012, 2, 16, hour=10),
datetime(2012, 2, 19, hour=10),
datetime(2012, 2, 23, hour=10),
datetime(2012, 2, 26, hour=10),
datetime(2012, 3, 1, hour=10),
datetime(2012, 3, 4, hour=10),
datetime(2012, 3, 8, hour=10),
datetime(2012, 3, 11, hour=10),
datetime(2012, 3, 15, hour=10),
datetime(2012, 3, 18, hour=10),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_daily_on_months(self):
"""Daily on unordered months.
RRULE:FREQ=DAILY;UNTIL=20141206T000000;BYMONTH=11,12,1,2,3,4,10
DTSTART:20141030T000000
"""
rule = RecurrenceRule(DAILY,
on_months=(
NOVEMBER,
DECEMBER,
JANUARY,
FEBRUARY,
MARCH,
APRIL,
OCTOBER,
),
until=datetime(2014, 12, 6))
start = datetime(2014, 10, 30)
expected = (
datetime(2014, 10, 30),
datetime(2014, 10, 31),
datetime(2014, 11, 1),
datetime(2014, 11, 2),
datetime(2014, 11, 3),
datetime(2014, 11, 4),
datetime(2014, 11, 5),
datetime(2014, 11, 6),
datetime(2014, 11, 7),
datetime(2014, 11, 8),
datetime(2014, 11, 9),
datetime(2014, 11, 10),
datetime(2014, 11, 11),
datetime(2014, 11, 12),
datetime(2014, 11, 13),
datetime(2014, 11, 14),
datetime(2014, 11, 15),
datetime(2014, 11, 16),
datetime(2014, 11, 17),
datetime(2014, 11, 18),
datetime(2014, 11, 19),
datetime(2014, 11, 20),
datetime(2014, 11, 21),
datetime(2014, 11, 22),
datetime(2014, 11, 23),
datetime(2014, 11, 24),
datetime(2014, 11, 25),
datetime(2014, 11, 26),
datetime(2014, 11, 27),
datetime(2014, 11, 28),
datetime(2014, 11, 29),
datetime(2014, 11, 30),
datetime(2014, 12, 1),
datetime(2014, 12, 2),
datetime(2014, 12, 3),
datetime(2014, 12, 4),
datetime(2014, 12, 5),
datetime(2014, 12, 6),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_yearly_on_weeks(self):
"""Yearly on first 2 weeks.
RRULE:FREQ=YEARLY;BYWEEKNO=1,2;UNTIL=20170101T000000
DTSTART:20130101T000000
"""
rule = RecurrenceRule(YEARLY,
on_weeks=(1, 2),
until=datetime(2017, 1, 1))
start = datetime(2013, 1, 1)
expected = (
datetime(2013, 1, 1),
datetime(2013, 1, 8),
datetime(2013, 12, 31),
datetime(2014, 1, 7),
datetime(2014, 12, 30),
datetime(2015, 1, 6),
datetime(2016, 1, 5),
datetime(2016, 1, 12),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_yearly_on_weeks_and_set_pos(self):
"""Yearly on first 2 weeks.
RRULE:FREQ=YEARLY;BYWEEKNO=1,2;BYSETPOS=1;UNTIL=20170101T000000
DTSTART:20130101T000000
"""
rule = RecurrenceRule(YEARLY,
on_weeks=(1, 2),
on_set_pos=(1,),
until=datetime(2017, 1, 1))
start = datetime(2013, 1, 1)
expected = (
datetime(2013, 1, 1),
datetime(2013, 12, 31),
datetime(2014, 12, 30),
datetime(2016, 1, 5),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_yearly_on_leap_year_day_1(self):
"""Yearly on last day of leap year.
RRULE:FREQ=YEARLY;BYYEARDAY=366;UNTIL=20200101T000000
DTSTART:20121231T120000
"""
rule = RecurrenceRule(YEARLY,
on_year_days=(366,),
until=datetime(2020, 1, 1))
start = datetime(2012, 12, 31, hour=12)
expected = (
datetime(2012, 12, 31, hour=12),
datetime(2016, 12, 31, hour=12),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_yearly_on_nth_week_day_and_month_1(self):
"""Yearly on the last Friday of October.
RRULE:FREQ=YEARLY;BYDAY=-1FR;BYMONTH=10;UNTIL=20150101T000000
DTSTART:20101029T120000
"""
rule = RecurrenceRule(YEARLY,
on_months=(OCTOBER,),
on_week_days=(FRIDAY(-1),),
until=datetime(2015, 1, 1))
start = datetime(2010, 10, 29, hour=12)
expected = (
datetime(2010, 10, 29, hour=12),
datetime(2011, 10, 28, hour=12),
datetime(2012, 10, 26, hour=12),
datetime(2013, 10, 25, hour=12),
datetime(2014, 10, 31, hour=12),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_yearly_on_nth_week_day_and_month_2(self):
"""Yearly on the first Friday of April.
RRULE:FREQ=YEARLY;BYDAY=1FR;BYMONTH=4;UNTIL=20150101T000000
DTSTART:20100402T120000
"""
rule = RecurrenceRule(YEARLY,
on_months=(APRIL,),
on_week_days=(FRIDAY(1),),
until=datetime(2015, 1, 1))
start = datetime(2010, 4, 2, hour=12)
expected = (
datetime(2010, 4, 2, hour=12),
datetime(2011, 4, 1, hour=12),
datetime(2012, 4, 6, hour=12),
datetime(2013, 4, 5, hour=12),
datetime(2014, 4, 4, hour=12),
)
self.assertEqual(tuple(rule.iterate_from(start)), expected)
def test_monthly_on_month_day_2(self):
"""Monthly on the 31st.
RRULE:FREQ=MONTHLY;BYMONTHDAY=31;COUNT=12
DTSTART:20150131T000000
"""
rule = RecurrenceRule(MONTHLY,
on_month_days=(31,),
count=12)
start = datetime(2015, 1, 31)
expected = (
datetime(2015, 1, 31),
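The `test_every_15_minutes` case above can be reproduced without `RecurrenceRule` itself. Below is a minimal sketch of the `FREQ=MINUTELY;INTERVAL=15;COUNT=6` expansion using only the standard library; the real class also handles BYxxx filters and UNTIL, which this sketch omits, and the function name here is illustrative, not part of the project API.

```python
from datetime import datetime, timedelta

def expand_minutely(start, interval, count):
    # FREQ=MINUTELY;INTERVAL=n;COUNT=c: emit `count` occurrences,
    # `interval` minutes apart, starting at `start`.
    step = timedelta(minutes=interval)
    return tuple(start + i * step for i in range(count))

occurrences = expand_minutely(datetime(1997, 9, 2, hour=9), 15, 6)
```

The sixth occurrence lands at 10:15, matching the COUNT=6 expectation in the test.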
# Repository: Bloomstack/gunicorn
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
import errno
import os
import random
import select
import signal
import sys
import time
import traceback
from gunicorn.errors import HaltServer, AppImportError
from gunicorn.pidfile import Pidfile
from gunicorn import sock, systemd, util
from gunicorn import __version__, SERVER_SOFTWARE
class Arbiter(object):
"""
The Arbiter keeps the worker processes alive, launching or
killing them as needed. It also manages application reloading
via SIGHUP/USR2.
"""
# A flag indicating if a worker failed to boot. If a worker
# process exits with this error code, the arbiter will terminate.
WORKER_BOOT_ERROR = 3
# A flag indicating if an application failed to be loaded
APP_LOAD_ERROR = 4
START_CTX = {}
LISTENERS = []
WORKERS = {}
PIPE = []
# I love dynamic languages
SIG_QUEUE = []
SIGNALS = [getattr(signal, "SIG%s" % x)
for x in "HUP QUIT INT TERM TTIN TTOU USR1 USR2 WINCH".split()]
SIG_NAMES = dict(
(getattr(signal, name), name[3:].lower()) for name in dir(signal)
if name[:3] == "SIG" and name[3] != "_"
)
def __init__(self, app):
os.environ["SERVER_SOFTWARE"] = SERVER_SOFTWARE
self._num_workers = None
self._last_logged_active_worker_count = None
self.log = None
self.setup(app)
self.pidfile = None
self.systemd = False
self.worker_age = 0
self.reexec_pid = 0
self.master_pid = 0
self.master_name = "Master"
cwd = util.getcwd()
args = sys.argv[:]
args.insert(0, sys.executable)
# init start context
self.START_CTX = {
"args": args,
"cwd": cwd,
0: sys.executable
}
def _get_num_workers(self):
return self._num_workers
def _set_num_workers(self, value):
old_value = self._num_workers
self._num_workers = value
self.cfg.nworkers_changed(self, value, old_value)
num_workers = property(_get_num_workers, _set_num_workers)
def setup(self, app):
self.app = app
self.cfg = app.cfg
if self.log is None:
self.log = self.cfg.logger_class(app.cfg)
# reopen files
if 'GUNICORN_FD' in os.environ:
self.log.reopen_files()
self.worker_class = self.cfg.worker_class
self.address = self.cfg.address
self.num_workers = self.cfg.workers
self.timeout = self.cfg.timeout
self.proc_name = self.cfg.proc_name
self.log.debug('Current configuration:\n{0}'.format(
'\n'.join(
' {0}: {1}'.format(config, value.value)
for config, value
in sorted(self.cfg.settings.items(),
key=lambda setting: setting[1]))))
# set environment variables
if self.cfg.env:
for k, v in self.cfg.env.items():
os.environ[k] = v
if self.cfg.preload_app:
self.app.wsgi()
def start(self):
"""\
Initialize the arbiter. Start listening and set pidfile if needed.
"""
self.log.info("Starting gunicorn %s", __version__)
if 'GUNICORN_PID' in os.environ:
self.master_pid = int(os.environ.get('GUNICORN_PID'))
self.proc_name = self.proc_name + ".2"
self.master_name = "Master.2"
self.pid = os.getpid()
if self.cfg.pidfile is not None:
pidname = self.cfg.pidfile
if self.master_pid != 0:
pidname += ".2"
self.pidfile = Pidfile(pidname)
self.pidfile.create(self.pid)
self.cfg.on_starting(self)
self.init_signals()
if not self.LISTENERS:
fds = None
listen_fds = systemd.listen_fds()
if listen_fds:
self.systemd = True
fds = range(systemd.SD_LISTEN_FDS_START,
systemd.SD_LISTEN_FDS_START + listen_fds)
elif self.master_pid:
fds = []
for fd in os.environ.pop('GUNICORN_FD').split(','):
fds.append(int(fd))
self.LISTENERS = sock.create_sockets(self.cfg, self.log, fds)
listeners_str = ",".join([str(l) for l in self.LISTENERS])
self.log.debug("Arbiter booted")
self.log.info("Listening at: %s (%s)", listeners_str, self.pid)
self.log.info("Using worker: %s", self.cfg.worker_class_str)
systemd.sd_notify("READY=1\nSTATUS=Gunicorn arbiter booted", self.log)
# check worker class requirements
if hasattr(self.worker_class, "check_config"):
self.worker_class.check_config(self.cfg, self.log)
self.cfg.when_ready(self)
def init_signals(self):
"""\
Initialize master signal handling. Most of the signals
are queued. Child signals only wake up the master.
"""
# close old PIPE
for p in self.PIPE:
os.close(p)
# initialize the pipe
self.PIPE = pair = os.pipe()
for p in pair:
util.set_non_blocking(p)
util.close_on_exec(p)
self.log.close_on_exec()
# initialize all signals
for s in self.SIGNALS:
signal.signal(s, self.signal)
signal.signal(signal.SIGCHLD, self.handle_chld)
def signal(self, sig, frame):
if len(self.SIG_QUEUE) < 5:
self.SIG_QUEUE.append(sig)
self.wakeup()
def run(self):
"Main master loop."
self.start()
util._setproctitle("master [%s]" % self.proc_name)
try:
self.manage_workers()
while True:
self.maybe_promote_master()
sig = self.SIG_QUEUE.pop(0) if self.SIG_QUEUE else None
if sig is None:
self.sleep()
self.murder_workers()
self.manage_workers()
continue
if sig not in self.SIG_NAMES:
self.log.info("Ignoring unknown signal: %s", sig)
continue
signame = self.SIG_NAMES.get(sig)
handler = getattr(self, "handle_%s" % signame, None)
if not handler:
self.log.error("Unhandled signal: %s", signame)
continue
self.log.info("Handling signal: %s", signame)
handler()
self.wakeup()
except (StopIteration, KeyboardInterrupt):
self.halt()
except HaltServer as inst:
self.halt(reason=inst.reason, exit_status=inst.exit_status)
except SystemExit:
raise
except Exception:
self.log.info("Unhandled exception in main loop",
exc_info=True)
self.stop(False)
if self.pidfile is not None:
self.pidfile.unlink()
sys.exit(-1)
def handle_chld(self, sig, frame):
"SIGCHLD handling"
self.reap_workers()
self.wakeup()
def handle_hup(self):
"""\
HUP handling.
- Reload configuration
- Start the new worker processes with a new configuration
- Gracefully shutdown the old worker processes
"""
self.log.info("Hang up: %s", self.master_name)
self.reload()
def handle_term(self):
"SIGTERM handling"
raise StopIteration
def handle_int(self):
"SIGINT handling"
self.stop(False)
raise StopIteration
def handle_quit(self):
"SIGQUIT handling"
self.stop(False)
raise StopIteration
def handle_ttin(self):
"""\
SIGTTIN handling.
Increases the number of workers by one.
"""
self.num_workers += 1
self.manage_workers()
def handle_ttou(self):
"""\
SIGTTOU handling.
Decreases the number of workers by one.
"""
if self.num_workers <= 1:
return
self.num_workers -= 1
self.manage_workers()
def handle_usr1(self):
"""\
SIGUSR1 handling.
Kill all workers by sending them a SIGUSR1
"""
self.log.reopen_files()
self.kill_workers(signal.SIGUSR1)
def handle_usr2(self):
"""\
SIGUSR2 handling.
Creates a new arbiter/worker set as a fork of the current
arbiter without affecting old workers. Use this to do live
deployment with the ability to back out a change.
"""
self.reexec()
def handle_winch(self):
"""SIGWINCH handling"""
if self.cfg.daemon:
self.log.info("graceful stop of workers")
self.num_workers = 0
self.kill_workers(signal.SIGTERM)
else:
self.log.debug("SIGWINCH ignored. Not daemonized")
def maybe_promote_master(self):
if self.master_pid == 0:
return
if self.master_pid != os.getppid():
self.log.info("Master has been promoted.")
# reset master infos
self.master_name = "Master"
self.master_pid = 0
self.proc_name = self.cfg.proc_name
del os.environ['GUNICORN_PID']
# rename the pidfile
if self.pidfile is not None:
self.pidfile.rename(self.cfg.pidfile)
# reset proctitle
util._setproctitle("master [%s]" % self.proc_name)
def wakeup(self):
"""\
Wake up the arbiter by writing to the PIPE
"""
try:
os.write(self.PIPE[1], b'.')
except IOError as e:
if e.errno not in [errno.EAGAIN, errno.EINTR]:
raise
def halt(self, reason=None, exit_status=0):
""" halt arbiter """
self.stop()
self.log.info("Shutting down: %s", self.master_name)
if reason is not None:
self.log.info("Reason: %s", reason)
if self.pidfile is not None:
self.pidfile.unlink()
self.cfg.on_exit(self)
sys.exit(exit_status)
def sleep(self):
"""\
Sleep until PIPE is readable or we timeout.
A readable PIPE means a signal occurred.
"""
try:
ready = select.select([self.PIPE[0]], [], [], 1.0)
if not ready[0]:
return
while os.read(self.PIPE[0], 1):
pass
except (select.error, OSError) as e:
# TODO: select.error is a subclass of OSError since Python 3.3.
error_number = getattr(e, 'errno', e.args[0])
if error_number not in [errno.EAGAIN, errno.EINTR]:
raise
except KeyboardInterrupt:
sys.exit()
def stop(self, graceful=True):
"""\
Stop workers
:attr graceful: boolean, if True (the default) workers will be
killed gracefully (i.e., allowed to finish handling their current
connection)
"""
unlink = (
self.reexec_pid == self.master_pid == 0
and not self.systemd
and not self.cfg.reuse_port
)
sock.close_sockets(self.LISTENERS, unlink)
self.LISTENERS = []
sig = signal.SIGTERM
if not graceful:
sig = signal.SIGQUIT
limit = time.time() + self.cfg.graceful_timeout
# instruct the workers to exit
self.kill_workers(sig)
# wait until the graceful timeout
while self.WORKERS and time.time() < limit:
time.sleep(0.1)
self.kill_workers(signal.SIGKILL)
def reexec(self):
"""\
Relaunch the master and workers.
"""
if self.reexec_pid != 0:
self.log.warning("USR2 signal ignored. Child exists.")
return
if self.master_pid != 0:
self.log.warning("USR2 signal ignored. Parent exists.")
return
master_pid = os.getpid()
self.reexec_pid = os.fork()
if self.reexec_pid != 0:
return
self.cfg.pre_exec(self)
environ = self.cfg.env_orig.copy()
environ['GUNICORN_PID'] = str(master_pid)
if self.systemd:
environ['LISTEN_PID'] = str(os.getpid())
environ['LISTEN_FDS'] = str(len(self.LISTENERS))
else:
environ['GUNICORN_FD'] = ','.join(
str(l.fileno()) for l in self.LISTENERS)
os.chdir(self.START_CTX['cwd'])
# exec the process using the original environment
os.execvpe(self.START_CTX[0], self.START_CTX['args'], environ)
def reload(self):
old_address = self.cfg.address
# reset old environment
for k in self.cfg.env:
if k in self.cfg.env_orig:
# reset the key to the value it had before
# we launched gunicorn
os.environ[k] = self.cfg.env_orig[k]
else:
# delete the value set by gunicorn
try:
del os.environ[k]
except KeyError:
pass
# reload conf
self.app.reload()
self.setup(self.app)
# reopen log files
self.log.reopen_files()
# do we need to change the listeners?
if old_address != self.cfg.address:
# close all listeners
for l in self.LISTENERS:
l.close()
# init new listeners
self.LISTENERS = sock.create_sockets(self.cfg, self.log)
listeners_str = ",".join([str(l) for l in self.LISTENERS])
self.log.info("Listening at: %s", listeners_str)
# do some actions on reload
self.cfg.on_reload(self)
# unlink pidfile
if self.pidfile is not None:
self.pidfile.unlink()
# create new pidfile
if self.cfg.pidfile is not None:
self.pidfile = Pidfile(self.cfg.pidfile)
self.pidfile.create(self.pid)
# set new proc_name
util._setproctitle("master [%s]" % self.proc_name)
# spawn new workers
for _ in range(self.cfg.workers):
self.spawn_worker()
# manage workers
self.manage_workers()
def murder_workers(self):
"""\
Kill unused/idle workers
"""
if not self.timeout:
return
workers = list(self.WORKERS.items())
for (pid, worker) in workers:
try:
if time.time() - worker.tmp.last_update() <= self.timeout:
continue
except (OSError, ValueError):
continue
if not worker.aborted:
self.log.critical("WORKER TIMEOUT (pid:%s)", pid)
worker.aborted = True
self.kill_worker(pid, signal.SIGABRT)
else:
self.kill_worker(pid, signal.SIGKILL)
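`Arbiter.sleep()` and `Arbiter.wakeup()` above implement the classic self-pipe trick. The following is a minimal standalone sketch of that mechanism, independent of gunicorn; the variable names are illustrative only:

```python
import os
import select

# Self-pipe trick: a signal handler writes a byte to a pipe so that
# the main loop's select() wakes immediately instead of sleeping out
# its full timeout.
r, w = os.pipe()
os.set_blocking(r, False)

os.write(w, b'.')                      # what Arbiter.wakeup() does
ready, _, _ = select.select([r], [], [], 1.0)
woke_early = bool(ready)
if woke_early:
    try:
        while os.read(r, 1):           # drain the pipe, as Arbiter.sleep() does
            pass
    except BlockingIOError:
        pass
```

Because the write happens before the select(), the call returns immediately with the read end ready rather than blocking for the full one-second timeout.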
# bci_lib/Stages/FeatureExtraction/FeatureExtraction.py
from abc import ABC
from bci_lib import *
from bci_lib.DataTypes import Stage, Database
from typing import Any, Tuple, List
import numpy as np
import mne
from sklearn import decomposition
class PSD(Stage):
"""
DESCRIPTION
-----------
Compute the power spectral density (PSD) using multitapers.
Attributes
-----------
id : bci_lib.ID
id of the stage
database : bci_lib.Database
The dictionary that holds all our data; it is accessible from all stages
inputs : Tuple[ID, ...]
The tuple of ids (bci_lib.ID) of the input data objects
outputs : Tuple[ID, ...]
The tuple of ids (bci_lib.ID) of the output data objects
"""
in_out = {RawData: TwoDFeature, EpochsData: TwoDFeature}
def __init__(self, id: ID, database: Database, inputs: Tuple[ID, ...], outputs: Tuple[ID, ...]):
"""
DESCRIPTION
-----------
The Constructor for PSD
Parameters
-----------
id : bci_lib.ID
id of the stage
database : bci_lib.Database
The dictionary that holds all our data; it is accessible from all stages
inputs : Tuple[ID, ...]
The tuple of ids (bci_lib.ID) of the input data objects
outputs : Tuple[ID, ...]
The tuple of ids (bci_lib.ID) of the output data objects
-----------
"""
super(PSD, self).__init__(id, database, inputs, outputs)
if inputs[0].get_type() is RawData:
self.set_params = self._set_params_raw
elif inputs[0].get_type() is EpochsData:
self.set_params = self._set_params_epoch
else:
raise Exception('Input data type is neither RawData nor EpochsData')
def do_task(self):
"""
DESCRIPTION
-----------
Imports the input from the database, performs the task on it, and saves the output to the database
-----------
"""
labels = []
features = []
input_datas_list = self._get_input()
input_data = input_datas_list[0]
freq_set = self._params.pop('freq_set')
if isinstance(input_data, RawData):
# maybe using 'eeg' in picks in get_data
input_raw_array = input_data.get_data().get_data(
picks=self._params['picks'])
features = np.zeros((input_raw_array.shape[0], len(freq_set)))
for i in range(len(freq_set)):
psd_data, freqs = mne.time_frequency.psd_multitaper(inst=input_data.get_data(), fmin=freq_set[i][0],
fmax=freq_set[i][1], **self._params)
features[:, i] = psd_data.sum(axis=2)
labels = input_data.get_labels("Stim")
elif isinstance(input_data, EpochsData):
input_epoch_array = input_data.get_data().get_data(
picks=self._params['picks'])
features = np.zeros(
(input_epoch_array.shape[0], input_epoch_array.shape[1], len(freq_set)))
for i in range(len(freq_set)):
psd_data, freqs = mne.time_frequency.psd_multitaper(inst=input_data.get_data(), fmin=freq_set[i][0],
fmax=freq_set[i][1], **self._params)
features[:, :, i] = psd_data.sum(axis=2)
for i in range(features.shape[0]):
_, label = input_data.get_data().next(return_event_id=True)
labels.append(label)
else:
raise Exception('Input Data type is neither RawData nor EpochsData\n'
'input type={}'.format(type(input_data)))
feature_data = TwoDFeature(id=input_data.get_id(),
data=np.array(features), label=np.array(labels))
self._set_output(feature_data, self._outputs[0])
self._params['freq_set'] = freq_set
self._finish()
def _set_params_epoch(self, freq_set, tmin=None, tmax=None, bandwidth=None, adaptive=False, low_bias=True,
normalization='length', picks='data', proj=False, n_jobs=1, verbose=None):
"""
DESCRIPTION
-----------
Compute the power spectral density (PSD) using multitapers.
Calculates spectral density for orthogonal tapers, then averages them together for each channel/epoch.
Parameters
-----------
inst : instance of Epochs or Raw or Evoked
The data for PSD calculation.
fmin : float
Min frequency of interest.
fmax : float
Max frequency of interest.
tmin : float | None
Min time of interest.
tmax : float | None
Max time of interest.
bandwidth : float
The bandwidth of the multi taper windowing function in Hz. The default
value is a window half-bandwidth of 4.
adaptive : bool
Use adaptive weights to combine the tapered spectra into PSD
(slow, use n_jobs 1 to speed up computation).
low_bias : bool
Only use tapers with more than 90% spectral concentration within
bandwidth.
normalization : str
Either "full" or "length" (default). If "full", the PSD will
be normalized by the sampling rate as well as the length of
the signal (as in nitime).
picks : str | list | slice | None
Channels to include. Slices and lists of integers will be
interpreted as channel indices. In lists, channel *type* strings
(e.g., `['meg', 'eeg']`) will pick channels of those
types, channel *name* strings (e.g., `['MEG0111', 'MEG2623']`)
will pick the given channels. Can also be the string values
"all" to pick all channels, or "data" to pick :term:`data channels`.
None (default) will pick good data channels (excluding reference MEG channels).
proj : bool
Apply SSP projection vectors. If inst is ndarray this is not used.
n_jobs : int
The number of jobs to run in parallel (default 1).
Requires the joblib package.
verbose : bool, str, int, or None
If not None, override default verbose level (see :func:`mne.verbose`
and :ref:`Logging documentation <tut_logging>` for more).
Example
-----------
-----------
"""
params_dict = {'freq_set': freq_set, 'tmin': tmin, 'tmax': tmax, 'bandwidth': bandwidth,
'adaptive': adaptive, 'low_bias': low_bias, 'normalization': normalization,
'picks': picks, 'proj': proj, 'n_jobs': n_jobs, 'verbose': verbose}
self._params.update(params_dict)
def _set_params_raw(self, freq_set, tmin=None, tmax=None, bandwidth=None, adaptive=False, low_bias=True,
normalization='length', picks='data', proj=False, n_jobs=1, verbose=None):
"""
DESCRIPTION
-----------
Compute the power spectral density (PSD) using multitapers.
Calculates spectral density for orthogonal tapers, then averages them together for each channel/epoch.
Parameters
-----------
inst : instance of Epochs or Raw or Evoked
The data for PSD calculation.
fmin : float
Min frequency of interest.
fmax : float
Max frequency of interest.
tmin : float | None
Min time of interest.
tmax : float | None
Max time of interest.
bandwidth : float
The bandwidth of the multi taper windowing function in Hz. The default
value is a window half-bandwidth of 4.
adaptive : bool
Use adaptive weights to combine the tapered spectra into PSD
(slow, use n_jobs 1 to speed up computation).
low_bias : bool
Only use tapers with more than 90% spectral concentration within
bandwidth.
normalization : str
Either "full" or "length" (default). If "full", the PSD will
be normalized by the sampling rate as well as the length of
the signal (as in nitime).
picks : str | list | slice | None
Channels to include. Slices and lists of integers will be
interpreted as channel indices. In lists, channel *type* strings
(e.g., `['meg', 'eeg']`) will pick channels of those
types, channel *name* strings (e.g., `['MEG0111', 'MEG2623']`)
will pick the given channels. Can also be the string values
"all" to pick all channels, or "data" to pick :term:`data channels`.
None (default) will pick good data channels (excluding reference MEG channels).
proj : bool
Apply SSP projection vectors. If inst is ndarray this is not used.
n_jobs : int
The number of jobs to run in parallel (default 1).
Requires the joblib package.
verbose : bool, str, int, or None
If not None, override default verbose level (see :func:`mne.verbose`
and :ref:`Logging documentation <tut_logging>` for more).
Example
-----------
-----------
"""
params_dict = {'freq_set': freq_set, 'tmin': tmin, 'tmax': tmax, 'bandwidth': bandwidth,
'adaptive': adaptive, 'low_bias': low_bias, 'normalization': normalization,
'picks': picks, 'proj': proj, 'n_jobs': n_jobs, 'verbose': verbose}
self._params.update(params_dict)
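do_task above reduces the psd_multitaper output to one band-power value per (fmin, fmax) pair in freq_set by summing over the frequency axis. A minimal NumPy sketch of that reduction, with `band_power` as a hypothetical helper (not part of bci_lib) and a toy flat PSD in place of real multitaper output:

```python
import numpy as np

def band_power(psd, freqs, bands):
    """Sum PSD bins that fall inside each (fmin, fmax) band.

    psd   : (n_channels, n_freqs) array of spectral power
    freqs : (n_freqs,) array of bin frequencies in Hz
    bands : list of (fmin, fmax) pairs, like freq_set above
    """
    out = np.zeros((psd.shape[0], len(bands)))
    for i, (fmin, fmax) in enumerate(bands):
        mask = (freqs >= fmin) & (freqs <= fmax)
        out[:, i] = psd[:, mask].sum(axis=1)  # analog of psd_data.sum(axis=2)
    return out

# Toy PSD: 2 channels with flat power of 1.0 across 0..9 Hz
freqs = np.arange(10.0)
psd = np.ones((2, 10))
bp = band_power(psd, freqs, [(0, 3), (4, 9)])
```

Each feature column is then one frequency band, matching the `features[:, i]` assignment in do_task.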
class PCA(Stage):
"""
DESCRIPTION
-----------
Principal component analysis (PCA).
Attributes
-----------
id : bci_lib.ID
id of the stage
database : bci_lib.Database
The dictionary that holds all our data; it is accessible from all stages
inputs : Tuple[ID, ...]
Tuple of ids (bci_lib.ID) of the input data
outputs : Tuple[ID, ...]
Tuple of ids (bci_lib.ID) of the output data
mark_as_test : bool, default False
Whether the data is marked as train (False) or test (True)
"""
in_out = {RawData: TwoDFeature, EpochsData: TwoDFeature}
def __init__(self, id: ID, database: Database, inputs: Tuple[ID, ...], outputs: Tuple[ID, ...],
mark_as_test: bool = False):
"""
DESCRIPTION
-----------
The Constructor for PCA
Parameters
-----------
id : bci_lib.ID
id of the stage
database : bci_lib.Database
The dictionary that holds all our data; it is accessible from all stages
inputs : Tuple[ID, ...]
Tuple of ids (bci_lib.ID) of the input data
outputs : Tuple[ID, ...]
Tuple of ids (bci_lib.ID) of the output data
mark_as_test : bool, default False
Whether the data is marked as train (False) or test (True)
-----------
"""
super(PCA, self).__init__(id, database, inputs, outputs)
if not isinstance(mark_as_test, bool):
raise TypeError('mark_as_test is not bool type')
self.cache = dict()
self.cache['mark_as_test'] = mark_as_test
if not mark_as_test:
self.cache['pca_model'] = None
self.cache['train_data_shape'] = None
self.cache['train_data_type'] = None
if inputs[0].get_type() in (RawData, EpochsData):
self.set_params = self.set_params_pca
else:
raise Exception('Input Data type is neither RawData nor EpochsData\n'
'input type={}'.format(inputs[0].get_type()))
def do_task(self):
"""
DESCRIPTION
-----------
Imports input from database and performs the task on it and saves output to database
-----------
"""
input_datas_list = self._get_input()
picks = self._params.pop('picks')
if not self.cache['mark_as_test']:
input_data_train = input_datas_list[0]
labels_tr = []
self.cache['pca_model'] = decomposition.PCA(**self._params)
if isinstance(input_data_train, RawData):
self.cache['train_data_type'] = RawData
input_array_train = input_data_train.get_data().get_data(picks=picks)
output_train = np.zeros(
(input_array_train.shape[0], self._params['n_components']))
input_array_train = input_array_train.reshape(
input_array_train.shape[0], -1)
self.cache['train_data_shape'] = input_array_train.shape
self.cache['pca_model'].fit(input_array_train)
output_train = self.cache['pca_model'].transform(
input_array_train)
labels_tr = input_data_train.get_labels("Stim")
elif isinstance(input_data_train, EpochsData):
self.cache['train_data_type'] = EpochsData
input_array_train = input_data_train.get_data().get_data(picks=picks)
output_train = np.zeros(
(input_array_train.shape[0], self._params['n_components']))
input_array_train = input_array_train.reshape(
input_array_train.shape[0], -1)
self.cache['train_data_shape'] = input_array_train.shape
self.cache['pca_model'].fit(input_array_train)
output_train = self.cache['pca_model'].transform(
input_array_train)
for i in range(input_array_train.shape[0]):
_, label = input_data_train.get_data().next(return_event_id=True)
labels_tr.append(label)
else:
raise Exception('Input Data type is neither RawData nor EpochsData\n'
'input type={}'.format(type(input_data_train)))
feature_tr = TwoDFeature(id=input_data_train.get_id(), data=np.array(output_train),
label=np.array(labels_tr))
self._set_output(feature_tr, output_id=self._outputs[0])
else:
input_data_test = input_datas_list[0]
labels_te = []
input_array_test = input_data_test.get_data().get_data(picks=picks)
output_test = np.zeros(
(input_array_test.shape[0], self._params['n_components']))
input_array_test = input_array_test.reshape(
input_array_test.shape[0], -1)
if self.cache['train_data_shape'][1] == input_array_test.shape[1]:
if self.cache['train_data_type'] == RawData and isinstance(input_data_test, RawData):
output_test = self.cache['pca_model'].transform(
input_array_test)
labels_te = input_data_test.get_labels("Stim")
elif self.cache['train_data_type'] == RawData and isinstance(input_data_test, EpochsData):
input_array_test = input_data_test.get_data().get_data(picks=picks)
output_test = np.zeros(
(input_array_test.shape[0], self._params['n_components']))
input_array_test = input_array_test.reshape(
input_array_test.shape[0], -1)
output_test = self.cache['pca_model'].transform(
input_array_test)
for i in range(input_array_test.shape[0]):
_, label = input_data_test.get_data().next(return_event_id=True)
labels_te.append(label)
else:
raise Exception('Input Data type is neither RawData nor EpochsData\n'
'input type={}'.format(type(input_data_test)))
else:
raise Exception('test and train data shapes do not match\n'
'test data shape={}\ntrain data shape={}'.format(input_array_test.shape,
self.cache['train_data_shape']))
feature_te = TwoDFeature(id=input_data_test.get_id(), data=np.array(output_test),
label=np.array(labels_te))
self._set_output(feature_te, output_id=self._outputs[0])
self._params['picks'] = picks
self._finish()
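do_task above flattens each trial with reshape(n_trials, -1), fits decomposition.PCA on the training set only, and reuses the cached model to transform test data. A rough NumPy-only sketch of the same fit/transform split, with plain SVD-based PCA standing in for scikit-learn (an assumption about its internals, not its exact implementation):

```python
import numpy as np

def pca_fit_transform(X_train, X_test, n_components):
    """Flatten trials, fit PCA (via SVD) on train only, project both sets."""
    X_train = X_train.reshape(X_train.shape[0], -1)   # like reshape(n, -1) above
    X_test = X_test.reshape(X_test.shape[0], -1)
    mean = X_train.mean(axis=0)                       # PCA centers on the train data
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    components = Vt[:n_components]                    # principal axes
    return (X_train - mean) @ components.T, (X_test - mean) @ components.T

rng = np.random.default_rng(0)
train = rng.normal(size=(20, 4, 8))   # 20 trials, 4 channels, 8 samples
test = rng.normal(size=(5, 4, 8))
Ztr, Zte = pca_fit_transform(train, test, n_components=3)
```

This also makes plain why the feature dimensions of train and test must match: the cached components have one row per training feature.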
def set_params_pca(self, picks='eeg,csd', copy=True, iterated_power='auto', n_components=3, random_state=None,
svd_solver='auto', tol=0.0, whiten=False):
"""
DESCRIPTION
-----------
Principal component analysis (PCA).
Linear dimensionality reduction using Singular Value Decomposition of the data to project it to a lower dimensional space. The input data is centered but not scaled for each feature before applying the SVD.
It uses the LAPACK implementation of the full SVD or a randomized truncated SVD by the method of Halko et al. 2009, depending on the shape of the input data and the number of components to extract.
It can also use the scipy.sparse.linalg ARPACK implementation of the truncated SVD.
Notice that this class does not support sparse input. See :class:`TruncatedSVD` for an alternative with sparse data.
Parameters
-----------
picks : str | list | slice | None
[1, 0, 2, 0, 1, 1, 0, 0]
sage: [c.phi(2) for c in C]
[1, 2, 0, 1, 0, 0, 1, 0]
"""
highest_weight_crystal = self.parent()._highest_weight_crystal
positions, gi = self._gi(i)
m = max(gi)
if not highest_weight_crystal and i == 0:
raise NotImplementedError
# I think the M below should still work in this case
M = Integer(m)/2 - Integer(1)/2
return M
def epsilon(self, i):
r"""
Return the distance to the start of the `i`-string.
EXAMPLES::
sage: C = crystals.AlcovePaths(['A',2],[1,1])
sage: [c.epsilon(1) for c in C]
[0, 1, 0, 0, 1, 0, 2, 1]
sage: [c.epsilon(2) for c in C]
[0, 0, 1, 1, 0, 2, 0, 1]
"""
# crude but functional
j = 0
temp = self.e(i)
while temp is not None:
j += 1
temp = temp.e(i)
return j
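epsilon above simply applies e(i) repeatedly until it returns None and counts the steps. The same loop in isolation, with a stand-in partial operator (halving even integers) purely for illustration:

```python
def string_distance(x, op):
    """Count how many times ``op`` can be applied before it returns None."""
    j = 0
    y = op(x)
    while y is not None:
        j += 1
        y = op(y)
    return j

# Stand-in for e(i): defined (halves) on even numbers, None on odd ones.
halve = lambda n: n // 2 if n % 2 == 0 else None
d = string_distance(12, halve)   # 12 -> 6 -> 3, then halve(3) is None
```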
def weight(self):
"""
Return the weight of ``self``.
EXAMPLES::
sage: C = crystals.AlcovePaths(['A',2],[2,0])
sage: for i in C: i.weight()
2*Lambda[1]
Lambda[2]
Lambda[1] - Lambda[2]
-2*Lambda[1] + 2*Lambda[2]
-Lambda[1]
-2*Lambda[2]
sage: B = crystals.AlcovePaths(['A',2,1],[1,0,0])
sage: p = B.module_generators[0].f_string([0,1,2])
sage: p.weight()
Lambda[0] - delta
"""
root_space = self.parent().R.root_space()
weight = -self.parent().weight
for i in self.value[::-1]:
root = root_space(i.root)
weight = -i.height*root + weight.reflection(root)
return -weight
#def __repr__(self):
#return str(self.integer_sequence())
def plot(self):
r"""
Return a plot of ``self``.
.. NOTE::
Currently only implemented for types `A_2`, `B_2`, and `C_2`.
EXAMPLES::
sage: C = crystals.AlcovePaths(['A',2],[2,0])
sage: x = C( () ).f(1).f(2)
sage: x.plot() # Not tested - creates a pdf
"""
ct = self.parent()._R._cartan_type.dual()
word = self.parent()._R.word()
integer_sequence = self.integer_sequence()
foldings = [False for i in word]
for i in integer_sequence:
foldings[i] = True
affine_ambient_space = RootSystem(ct.affine()).ambient_space()
return affine_ambient_space.plot() + affine_ambient_space.plot_alcove_walk(word, foldings=foldings, labels=False)
def __eq__(self, other):
r"""
Test equality of ``self.value`` and ``other.value``.
EXAMPLES::
sage: C=crystals.AlcovePaths(['B',2],[1,0])
sage: lst=list(C)
sage: lst[2] == lst[2]
True
sage: lst[2] == lst[1]
False
"""
#note: may want to use _eq_ for coercion
try:
return self.value == other.value
except (NameError, AttributeError):
return False
def __lt__(self, other):
r"""
Test if ``self.value`` is less than ``other.value`` in dictionary order.
EXAMPLES::
sage: C = crystals.AlcovePaths(['A',2],[2,0])
sage: x = C( () )
sage: x.__lt__(x.f(1))
True
sage: a=x.f(1) ; b = x.f(1).f(1).f(2)
sage: a.__lt__(b)
False
"""
return self.value < other.value
def __gt__(self, other):
r"""
Test if ``self.value`` is greater than ``other.value`` in dictionary
order.
EXAMPLES::
sage: C = crystals.AlcovePaths(['A',2],[2,0])
sage: x = C( () )
sage: x.__gt__(x.f(1))
False
sage: a=x.f(1) ; b = x.f(1).f(1).f(2)
sage: a.__gt__(b)
True
"""
return self.value > other.value
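__lt__ and __gt__ delegate to Python's built-in sequence comparison on self.value, which is lexicographic, so the "dictionary order" in the docstrings is exactly tuple comparison:

```python
# Tuples compare element by element from the left, and a strict prefix
# sorts before any extension, so ordering by .value is dictionary order.
a = (1, 2)
b = (1, 3)
c = (1, 2, 0)
pairs = [a < b, a < c, b > c]   # lexicographic comparisons
```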
def _folding_data(self, i):
r"""
Compute information needed to build the graph `g_{\alpha_i}`.
Results of this method are sent to _gi for further processing.
INPUT:
- ``i`` -- element of the index_set of the underlying root_system.
OUTPUT:
A dictionary where the keys are of type RootsWithHeight which record
positions where `\pm \alpha_i` shows up in the folded `\lambda` chain.
The values are `1` if `\alpha_i` is in the corresponding position in
the folded `\lambda`-chain, `-1` if `-\alpha_i` is in the corresponding
position in the folded `\lambda`-chain.
.. NOTE::
*infinity* is a special key that records the "sign at infinity".
::
sage: C = crystals.AlcovePaths(['A',2],[1,1])
sage: x = C( () ).f(1)
sage: fd = x._folding_data(2); fd  # random
{(alpha[2], 0): 1, (alpha[1] + alpha[2], 1): 1, 'infinity': 1}
sage: fd['infinity']
1
sage: fd.values()
[1, 1, 1]
"""
Parent = self.parent()
#self.value contains the admissible sequence as a tuple of Element
finite_cartan_type = Parent._finite_cartan_type # bool
J = list(self.value)
#NOTE: R is a RootsWithHeight object and NOT a RootSystem object
R = Parent._R
weight = Parent.weight
signs = {}
# 0 arrows in the case of finite Cartan type
# always allow 0 arrows
if finite_cartan_type and i == 0:
Beta = R._root_lattice.highest_root()
elif i in self.index_set():
Beta = R._root_lattice.simple_root(i)
max_height_Beta = weight.scalar(Beta.associated_coroot())
if len(J) == 0:
for k in range(max_height_Beta):
x = R(Beta, k)
signs[x] = self._sign(Beta)
signs['infinity'] = self._sign(Beta)
elif len(J) > 0:
#NOTE: we assume J is sorted by order on Element of RootsWithHeight
for k in range(max_height_Beta):
x = R(Beta, k)
if x <= J[0]:
signs[x] = self._sign(Beta)
for j in range(len(J)):
Beta = Beta.reflection(J[j].root)
sign_Beta = self._sign(Beta)
max_height_Beta = weight.scalar(
(sign_Beta * Beta).associated_coroot())
# some optimization so we don't initialize too many objects
# range(c1,c2) can be replaced by range(max_height_Beta) but it
# checks unnecessary extra things
c1 = J[j]._cmp_v[0] * max_height_Beta
if j == len(J) - 1:
c2 = max_height_Beta
else:
c2 = min(max_height_Beta, J[j+1]._cmp_v[0] * max_height_Beta + 1)
for k in range(c1, c2):
x = R(sign_Beta * Beta, k)
if (
( j < len(J) - 1 and J[j] < x <= J[j+1] ) or
( j == len(J) - 1 and J[j] < x)
):
signs[x] = sign_Beta
signs['infinity'] = sign_Beta # tail sign tells something about last step
# in g_alpha
if finite_cartan_type and i == 0:
signs = {x: -signs[x] for x in signs}
return signs
def e(self, i):
r"""
Return the `i`-th crystal raising operator on ``self``.
INPUT:
- ``i`` -- element of the index set of the underlying root system.
EXAMPLES::
sage: C = crystals.AlcovePaths(['A',2],[2,0]); C
Highest weight crystal of alcove paths of type ['A', 2] and weight 2*Lambda[1]
sage: x = C( () )
sage: x.e(1)
sage: x.f(1) == x.f(1).f(2).e(2)
True
"""
Parent = self.parent()
finite_cartan_type = Parent._finite_cartan_type
J = list(self.value)
positions, gi = self._gi(i)
m = max(gi)
m_index = len(gi) - 1 - list(reversed(gi)).index(m)  # last max in gi
if finite_cartan_type and i == 0:
M = Integer(m)/2 + Integer(1)/2
else:
M = Integer(m)/2 - Integer(1)/2
KR_test = finite_cartan_type and i == 0 and m_index < len(gi) - 1
KR_test = KR_test and M >= 1
######################################################################
# NOTE:
# In the KR case we want to ensure that positions[m_index] is in J
# If m_index > 0 then it's always true
# If m_index == 0 then M >=1 guarantees this
######################################################################
if ((not finite_cartan_type or i != 0) and m_index < len(gi) - 1  # alpha_i is a simple root
) or KR_test:
J.remove(positions[m_index])
if m_index+1 < len(positions): # if m_index+1 != 'infinity'
# i.e. positions[m_index+1] makes sense
J.append(positions[m_index+1])
return_value = Parent ( tuple( sorted(J) ) )
# we attach to each admissible sequence a list
# which encodes a path (via root operators) from the () generator
# to the admissible sequence
# this is useful for investigating the crystal
try:
return_value.i_string = self.i_string + [['e',i]]
except AttributeError:
return_value.i_string = [['e',i]]
return return_value
else:
return None
@cached_method
def _gi(self, i):
r"""
Compute information needed to build the graph `g_{\alpha_i}`.
This graph is used to apply the `i`-th crystal operator.
INPUT:
- ``i`` -- element of the index_set of the underlying root_system.
OUTPUT:
A tuple ``(positions, gi)``:
- ``positions`` -- is a list of RootsWithHeight. These appear sorted in
their natural order, and record where `\pm \alpha_i` shows up in
the folded `\lambda`-chain.
- ``gi`` -- is a list of integers recording the height
(up to affine transformation) of `\pm \alpha_i`
in the folded `\lambda`-chain whose location is recorded by
``positions``.
.. NOTE::
- ``positions`` has length one less than ``gi`` since it does not
contain the position 'infinity'.
- To get the real `g_{\alpha_i}` one has to divide by 2 and add 1/2
or divide by 2 and subtract 1/2 depending on if
``self._finite_cartan_type==True and i == 0``
or not. This is done in crystal operator methods.
EXAMPLES::
sage: C=crystals.AlcovePaths(['A',2],[1,1])
sage: x=C( () ).f(1)
sage: x._gi(2)
([(alpha[2], 0), (alpha[1] + alpha[2], 1)], [1, 3, 5])
"""
signs = self._folding_data(i)
positions = sorted(x for x in signs if x != 'infinity')
if not positions:
return (positions, [signs['infinity']])
gi = [signs[positions[0]]]
for j in range(1, len(positions)):
gi.append(
gi[j-1] +
signs[positions[j-1]] * self._eps(positions[j-1]) + signs[positions[j]])
gi.append(gi[-1] +
signs[positions[-1]] * self._eps(positions[-1]) + signs['infinity'])
return (positions, gi)
def f(self, i):
r"""
Return the `i`-th crystal lowering operator on ``self``.
INPUT:
- ``i`` -- element of the index set of the underlying root system.
# Licensed under a 3-clause BSD style license - see LICENSE.rst
"""Test spectrum.py module and related functionalities that are not covered
by ``test_spectrum_source.py`` nor ``test_spectrum_bandpass.py``."""
# STDLIB
import os
import shutil
import tempfile
# THIRD-PARTY
import numpy as np
import pytest
# ASTROPY
from astropy import units as u
from astropy.io import fits
from astropy.modeling.models import Const1D, RedshiftScaleFactor
from astropy.tests.helper import assert_quantity_allclose
# LOCAL
from synphot.tests.test_units import _wave, _flux_jy, _flux_photlam
from synphot import exceptions, units
from synphot.compat import ASTROPY_LT_4_0
from synphot.models import Box1D, Empirical1D, GaussianFlux1D, get_waveset
from synphot.spectrum import SourceSpectrum, SpectralElement
def setup_module(module):
import astropy.constants as const
from astropy.constants import si, astropyconst13
const.h = si.h = astropyconst13.h
def teardown_module(module):
import astropy.constants as const
if ASTROPY_LT_4_0:
from astropy.constants import si, astropyconst20
const.h = si.h = astropyconst20.h
else:
from astropy.constants import si, astropyconst40
const.h = si.h = astropyconst40.h
class TestCheckOverlap:
"""Test spectrum overlap check. This method is only ever used
in the form of ``bp.check_overlap(sp)``, so that is what is
tested here.
"""
def setup_class(self):
self.bp = SpectralElement(
Empirical1D, points=[2999.9, 3000, 6000, 6000.1],
lookup_table=[0, 1, 1, 0])
def test_full(self):
"""As long as we don't have to extrapolate or taper
source spectrum, it's okay.
"""
sp = SourceSpectrum(
Empirical1D, points=[999.9, 1000, 9000, 9000.1],
lookup_table=[0, 1, 1, 0])
assert self.bp.check_overlap(sp) == 'full'
sp = SourceSpectrum(
Empirical1D, points=[3999.9, 4000, 4500, 4500.1],
lookup_table=[0, 1, 1, 0])
assert self.bp.check_overlap(sp) == 'full'
def test_partial_most(self):
"""99% overlap."""
sp = SourceSpectrum(
Empirical1D, points=[3005, 3005.1, 6000.1, 6000.2],
lookup_table=[0, 1, 1, 0])
assert self.bp.check_overlap(sp) == 'partial_most'
def test_partial_notmost(self):
"""Extrapolation or taper required."""
sp = SourceSpectrum(
Empirical1D, points=[3999.9, 4500.1], lookup_table=[1, 1])
assert self.bp.check_overlap(sp) == 'partial_notmost'
def test_none(self):
"""No overlap at all."""
sp = SourceSpectrum(
Empirical1D, points=[99.9, 100, 2999.8, 2999.9],
lookup_table=[0, 1, 1, 0])
assert self.bp.check_overlap(sp) == 'none'
def test_special_cases(self):
"""One of them has no waveset defined."""
# Other has no waveset
sp = SourceSpectrum(Const1D, amplitude=1)
assert self.bp.check_overlap(sp) == 'full'
# Self has no waveset
bp = SpectralElement(Const1D, amplitude=1)
sp = SourceSpectrum(Box1D, amplitude=1, x_0=5000, width=10)
assert bp.check_overlap(sp) == 'partial_notmost'
def test_exceptions(self):
"""Invalid input."""
with pytest.raises(exceptions.SynphotError):
self.bp.check_overlap(1)
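The tests above exercise synphot's overlap verdicts. A toy classifier over plain (start, end) intervals captures the basic full/partial/none distinction; synphot's real check additionally treats zero-tapered endpoints as fully defined and grades 'partial' by throughput-weighted overlap, which this sketch deliberately omits:

```python
def classify_overlap(bp, sp):
    """Classify how source interval ``sp`` covers bandpass interval ``bp``.

    Each argument is a (start, end) wavelength pair. This is only the
    coarse distinction, not synphot's actual implementation.
    """
    (b0, b1), (s0, s1) = bp, sp
    if s0 <= b0 and s1 >= b1:
        return 'full'       # source fully spans the bandpass
    if s1 < b0 or s0 > b1:
        return 'none'       # disjoint intervals
    return 'partial'

bp = (3000, 6000)
r = [classify_overlap(bp, sp)
     for sp in [(1000, 9000), (3005, 6000.2), (100, 2999.9)]]
```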
class TestForceExtrap:
"""Test forcing extrapolation on a source spectrum."""
@pytest.mark.parametrize('z', [0, 0.03])
def test_empirical(self, z):
sp = SourceSpectrum(Empirical1D, points=[1000, 2000, 3000, 4000],
lookup_table=[0.5, 0.6, 10.6, 1.5], fill_value=0)
sp.z = z
w = [900, 4300]
assert_quantity_allclose(sp(w), 0 * units.PHOTLAM) # No extrapolation
is_forced = sp.force_extrapolation() # Force extrapolation
assert is_forced
assert_quantity_allclose(sp(w), [0.5, 1.5] * units.PHOTLAM)
def test_analytical(self):
"""Forcing is not possible."""
sp = SourceSpectrum(GaussianFlux1D, mean=5500, total_flux=1, fwhm=10)
w = [100, 10000]
assert_quantity_allclose(sp(w), 0 * units.PHOTLAM)
is_forced = sp.force_extrapolation()
assert not is_forced
assert_quantity_allclose(sp(w), 0 * units.PHOTLAM)
class TestWaveset:
"""Tests related to spectrum waveset."""
def test_none(self):
sp = SourceSpectrum(Const1D, amplitude=1)
assert sp.waveset is None
def test_sampleset(self):
tf_unit = u.erg / (u.cm * u.cm * u.s)
sp = SourceSpectrum(
GaussianFlux1D, total_flux=(1 * tf_unit), mean=5000, fwhm=10)
np.testing.assert_array_equal(sp.waveset.value, sp.model.sampleset())
def test_box1d(self):
bp = SpectralElement(Box1D, x_0=2000, width=1)
w = bp.waveset.value
w_true = bp.model.sampleset()
np.testing.assert_array_equal(w, w_true)
np.testing.assert_allclose(
w[([0, 1, -2, -1], )], bp.model.sampleset(minimal=True))
# Make sure scale does not change waveset
bp2 = bp * 2
bp3 = 0.5 * bp
np.testing.assert_array_equal(bp2.waveset.value, w_true)
np.testing.assert_array_equal(bp3.waveset.value, w_true)
def test_composite_none(self):
bp1 = SpectralElement(Box1D, amplitude=1, x_0=5000, width=10)
bp2 = SpectralElement(Const1D, amplitude=2)
bp = bp1 * bp2
np.testing.assert_array_equal(bp.waveset, bp1.waveset)
def test_composite(self):
totflux = 1 * (u.erg / (u.cm * u.cm * u.s))
g1 = SourceSpectrum(
GaussianFlux1D, total_flux=totflux, mean=5000, fwhm=10)
g2 = SourceSpectrum(
GaussianFlux1D, total_flux=totflux, mean=6500, fwhm=100)
g3 = SourceSpectrum(
GaussianFlux1D, total_flux=totflux, mean=7500, fwhm=5)
sp = SpectralElement(Box1D, x_0=1000, width=1) * (g1 + g2 + g3)
assert_quantity_allclose(
sp.waveset[::100],
[999.49, 1000.49, 5019.95906231, 6699.59062307,
7509.7672007] * u.AA)
def test_redshift(self):
tf_unit = u.erg / (u.cm * u.cm * u.s)
sp = SourceSpectrum(
GaussianFlux1D, total_flux=(1 * tf_unit), mean=5000, fwhm=10)
sp.z = 1.3
m = RedshiftScaleFactor(z=1.3)
w_step25_z0 = [4978.76695499, 4989.3834775, 5000, 5010.6165225] * u.AA
assert_quantity_allclose(sp.waveset[::25], m(w_step25_z0))
def test_redshift_none(self):
sp = SourceSpectrum(Const1D, amplitude=1, z=1.3)
assert sp.waveset is None
def test_complicated_tree(self):
"""Throw everything in and insert redshift and scale in the middle."""
# On one side, we have a composite bandpass.
bp1 = SpectralElement(Const1D, amplitude=1.01)
bp2 = SpectralElement(
Empirical1D, points=[4999, 5000.001, 5030], lookup_table=[0, 1, 0])
bp = bp1 * (0.8 * bp2) # [4999, 5000.001, 5030]
# On the other side, we have composite spectrum with
# scale and redshift.
sp1 = SourceSpectrum(
Empirical1D, points=[5001, 5011, 5020], lookup_table=[0, 1, 0])
sp2 = SourceSpectrum(
Empirical1D, points=[5000, 5010, 5020], lookup_table=[0, 1, 0])
sp3 = sp2 + (sp1 * 0.5) # [5000, 5001, 5010, 5011, 5020]
sp3.z = 0.01 # [5050, 5051.01, 5060.1, 5061.11, 5070.2]
sp = sp1 + sp3 # [5001, 5011, 5020, 5050, 5051.01, 5060.1, 5061.11, 5070.2] # noqa
sp_final = sp * bp
np.testing.assert_array_equal(
sp_final.waveset.value,
[4999, 5000.001, 5001, 5011, 5020, 5030, 5050, 5051.01,
5060.1, 5061.11, 5070.2])
def test_exceptions(self):
with pytest.raises(exceptions.SynphotError):
get_waveset('foo')
class TestMathOperators:
"""Test spectrum math operators."""
def setup_class(self):
self.sp_1 = SourceSpectrum(
Empirical1D, points=[3999.9, 4000.0, 5000.0, 6000.0, 6000.1],
lookup_table=[0, 3.5e-14, 4e-14, 4.5e-14, 0] * units.FLAM)
self.sp_2 = SourceSpectrum(
Empirical1D, points=_wave, lookup_table=_flux_jy,
meta={'PHOTLAM': [9.7654e-3, 1.003896e-2, 9.78473e-3]})
self.bp_1 = SpectralElement(
Empirical1D, points=[399.99, 400.01, 500.0, 590.0, 600.1] * u.nm,
lookup_table=[0, 0.1, 0.2, 0.3, 0])
def test_source_add(self):
"""Compare with ASTROLIB PYSYNPHOT."""
ans = self.sp_1 + self.sp_2
assert_quantity_allclose(
ans(ans.waveset),
[0.00976521, 0.01681283, 0.01970276, 0.01998463, 0.0197387,
0.01985257, 0.02337638, 0.00978454] * units.PHOTLAM, rtol=1e-4)
def test_source_sub(self):
"""Compare with ASTROLIB PYSYNPHOT."""
ans = self.sp_1 - self.sp_2
assert_quantity_allclose(
ans(ans.waveset),
[-9.76520783e-03, -2.71758275e-03, 1.72346256e-04, -9.29051118e-05,
1.69629843e-04, 2.83499328e-04, 3.80731187e-03,
-9.78453651e-03] * units.PHOTLAM,
rtol=1e-4)
def test_source_addsub_circular(self):
"""sp = sp + sp - sp"""
ans = self.sp_1 + self.sp_1 - self.sp_1
assert_quantity_allclose(ans(ans.waveset), self.sp_1(ans.waveset))
def test_source_addsub_exception(self):
with pytest.raises(exceptions.IncompatibleSources):
self.sp_1 + self.bp_1
@pytest.mark.parametrize('x', [2, 2 * u.dimensionless_unscaled])
def test_source_mul_div_scalar(self, x):
w = self.sp_1.waveset
ans1 = self.sp_1 * x
assert_quantity_allclose(
ans1(w),
[0, 0.01409552, 0.02013646, 0.02718424, 0] * units.PHOTLAM,
rtol=1e-6)
# rmul does not work with Quantity
if not isinstance(x, u.Quantity):
ans2 = x * self.sp_1
assert_quantity_allclose(ans1(w), ans2(w), rtol=0)
ans3 = self.sp_1 / x
assert_quantity_allclose(
ans3(w),
[0, 0.00352388, 0.00503411, 0.00679606, 0] * units.PHOTLAM,
atol=1e-7 * units.PHOTLAM)
def test_source_mul_div_spec(self):
"""Compare mul with ASTROLIB PYSYNPHOT. Also test bp * sp."""
ans1 = self.sp_1 * self.bp_1
ans2 = self.bp_1 * self.sp_1
w = ans1.waveset[:-1]
assert_quantity_allclose(
ans1(w),
[0, 3.52381254e-04, 7.04792712e-04, 2.01360717e-03, 3.97184014e-03,
4.03718269e-05, 0] * units.PHOTLAM, rtol=1e-4)
assert_quantity_allclose(ans1(w), ans2(w), rtol=0)
ans3 = self.sp_1 / self.bp_1
assert_quantity_allclose(
ans3(w),
[0, 0.14095528, 0.07048066, 0.05034117, 0.04413243, 4.57601236,
0] * units.PHOTLAM)
ans4 = self.sp_1 / self.sp_1
assert_quantity_allclose(
ans4([4000, 5000, 6000]), 1 * u.dimensionless_unscaled)
# Dividing throughput by flux does not make sense.
with pytest.raises(exceptions.IncompatibleSources):
self.bp_1 / self.sp_1
def test_source_mul_div_exceptions(self):
"""Only mul is tested but truediv uses the same validation."""
with pytest.raises(exceptions.IncompatibleSources):
self.sp_1 * self.sp_2
with pytest.raises(exceptions.IncompatibleSources):
self.sp_1 * [1, 2]
with pytest.raises(exceptions.IncompatibleSources):
self.sp_1 * (1 - 1j)
with pytest.raises(exceptions.IncompatibleSources):
self.sp_1 * u.Quantity([1, 2])
with pytest.raises(exceptions.IncompatibleSources):
self.sp_1 * u.Quantity(1 - 1j)
with pytest.raises(exceptions.IncompatibleSources):
self.sp_1 * (1 * u.AA)
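The "rmul does not work with Quantity" comments in these tests reflect ordinary operator dispatch: for `x * sp`, Python tries `type(x).__mul__` first and only falls back to `sp.__rmul__` when that returns NotImplemented; ndarray/Quantity handle the product themselves, so `__rmul__` never runs. A minimal stand-in class (not synphot's implementation) showing the scalar path:

```python
class Spectrum:
    """Toy stand-in: scalar-only multiplication with validation."""

    def __init__(self, values):
        self.values = list(values)

    def __mul__(self, other):
        if isinstance(other, bool) or not isinstance(other, (int, float)):
            raise TypeError('can only scale by a real scalar')
        return Spectrum(v * other for v in self.values)

    # `2 * sp` reaches here only because (2).__mul__(sp) returned
    # NotImplemented; array-like operands never yield, so those
    # products must be written as `sp * quantity` instead.
    __rmul__ = __mul__

sp = Spectrum([1.0, 2.0])
doubled = (2 * sp).values
```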
def test_bandpass_addsub(self):
"""Not supported."""
with pytest.raises(NotImplementedError):
self.bp_1 + self.bp_1
with pytest.raises(NotImplementedError):
self.bp_1 + 2.0
with pytest.raises(NotImplementedError):
self.bp_1 - self.bp_1
with pytest.raises(NotImplementedError):
self.bp_1 - 2.0
@pytest.mark.parametrize('x', [2.0, 2.0 * u.dimensionless_unscaled])
def test_bandpass_mul_div_scalar(self, x):
w = self.bp_1.waveset
ans1 = self.bp_1 * x
assert_quantity_allclose(ans1(w), [0, 0.2, 0.4, 0.6, 0])
# rmul does not work with Quantity
if not isinstance(x, u.Quantity):
ans2 = x * self.bp_1
assert_quantity_allclose(ans1(w), ans2(w), rtol=0)
ans3 = self.bp_1 / x
assert_quantity_allclose(ans3(w), [0, 0.05, 0.1, 0.15, 0])
def test_bandpass_mul_div_bandpass(self):
ans1 = self.bp_1 * self.bp_1
assert_quantity_allclose(
ans1(ans1.waveset), [0, 0.01, 0.04, 0.09, 0])
w = [4000.1, 5000, 5900] # Avoid div by zero
ans2 = self.bp_1 / self.bp_1
assert_quantity_allclose(ans2(w), 1)
def test_bandpass_mul_div_exceptions(self):
"""Only mul is tested but truediv uses the same validation."""
class DummyObject:
pass
with pytest.raises(exceptions.IncompatibleSources):
self.bp_1 * DummyObject()
with pytest.raises(exceptions.IncompatibleSources):
self.bp_1 * u.Quantity([1, 2])
with pytest.raises(exceptions.IncompatibleSources):
self.bp_1 * u.Quantity(1 - 1j)
with pytest.raises(exceptions.IncompatibleSources):
self.bp_1 * (1 * u.AA)
with pytest.raises(exceptions.IncompatibleSources):
self.bp_1 * [1, 2]
with pytest.raises(exceptions.IncompatibleSources):
self.bp_1 * (1 - 1j)
class TestWriteSpec:
"""Test spectrum to_fits() method."""
def setup_class(self):
self.outdir = tempfile.mkdtemp()
self.sp = SourceSpectrum(
Empirical1D, points=_wave, lookup_table=_flux_photlam,
meta={'expr': 'Test source'})
self.bp = SpectralElement(
Empirical1D, points=_wave, lookup_table=np.ones(_wave.shape),
meta={'expr': 'Test bandpass'})
@pytest.mark.parametrize(
('is_sp', 'ext_hdr'),
[(True, None),
(True, {'foo': 'foo'}),
(False, None),
(False, {'foo': 'foo'})])
def test_write(self, is_sp, ext_hdr):
outfile = os.path.join(self.outdir, 'outspec.fits')
if is_sp:
sp1 = self.sp
else:
sp1 = self.bp
if ext_hdr is None:
sp1.to_fits(outfile, overwrite=True, trim_zero=False,
pad_zero_ends=False)
else:
sp1.to_fits(outfile,
self.tsc.save()
utils.create_test_list_membership(self.test_list, self.tsc)
upload = utils.create_test(name="file_upload", test_type=models.UPLOAD)
upload.calculation_procedure = "import json; result=json.load(FILE)"
upload.save()
utils.create_test_list_membership(self.test_list, upload)
filepath = os.path.join(settings.PROJECT_ROOT, "qa", "tests", "TESTRUNNER_test_file.json")
upload_data = open(filepath, 'r').read()
self.data['tests']['file_upload'] = {
'value': upload_data,
'filename': "tmp.json",
'encoding': "text",
'comment': "test comment",
}
response = self.client.post(self.create_url, self.data)
assert response.status_code == status.HTTP_201_CREATED
tic = models.TestInstance.objects.get(unit_test_info__test=self.tsc)
assert tic.string_value == "test"
tiu = models.TestInstance.objects.get(unit_test_info__test=upload)
assert tiu.attachment_set.count() == 1
assert tiu.attachment_set.first().finalized
assert tiu.comment == "test comment"
def test_file_upload_no_filename(self):
"""
An upload submitted without a filename should be rejected.
"""
upload = utils.create_test(name="file_upload", test_type=models.UPLOAD)
upload.calculation_procedure = "import json; result=json.load(FILE)"
upload.save()
utils.create_test_list_membership(self.test_list, upload)
filepath = os.path.join(settings.PROJECT_ROOT, "qa", "tests", "TESTRUNNER_test_file.json")
upload_data = open(filepath, 'rb').read()
self.data['tests']['file_upload'] = {
'value': base64.b64encode(upload_data),
'comment': "test comment",
}
response = self.client.post(self.create_url, self.data)
assert response.status_code == status.HTTP_400_BAD_REQUEST
def test_file_upload_not_b64(self):
"""
A value that is not valid base64, submitted without a text encoding, should be rejected.
"""
upload = utils.create_test(name="file_upload", test_type=models.UPLOAD)
upload.calculation_procedure = "import json; result=json.load(FILE)"
upload.save()
utils.create_test_list_membership(self.test_list, upload)
self.data['tests']['file_upload'] = {
'filename': "tmp.txt",
'value': 'not b64 encoded',
}
response = self.client.post(self.create_url, self.data)
assert response.status_code == status.HTTP_400_BAD_REQUEST
def test_file_upload_invalid_proc(self):
"""
An upload whose calculation procedure raises an exception should be rejected.
"""
upload = utils.create_test(name="file_upload", test_type=models.UPLOAD)
upload.calculation_procedure = "1/0"
upload.save()
utils.create_test_list_membership(self.test_list, upload)
self.data['tests']['file_upload'] = {
'filename': "tmp.txt",
'encoding': 'text',
'value': 'text',
}
response = self.client.post(self.create_url, self.data)
assert response.status_code == status.HTTP_400_BAD_REQUEST
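The upload payloads in these tests follow one convention: `value` is base64 by default, or plain text when `encoding` is `"text"`, and anything else is a 400. A hedged sketch of that decoding step (`decode_upload_value` is a hypothetical helper, not QATrack+'s serializer code):

```python
import base64
import binascii


# Hedged sketch: decode an upload payload the way these API tests expect.
# 'value' is base64 unless encoding == 'text'; invalid base64 is an error.
def decode_upload_value(payload):
    value = payload['value']
    if payload.get('encoding') == 'text':
        return value.encode() if isinstance(value, str) else value
    try:
        return base64.b64decode(value, validate=True)
    except (binascii.Error, ValueError):
        raise ValueError("value is not valid base64")
```

A non-base64 string without `encoding: "text"`, like the one in `test_file_upload_not_b64`, raises here, mirroring the 400 response.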
def test_file_upload_with_user_attached(self):
"""
Add a file upload test whose calculation procedure also writes a
user-attached file, and ensure both attachments are finalized.
"""
# FAILS on windows:
#
# File "C:\home\code\qatrackplus\qatrack\attachments\models.py", line 50, in move_tmp_file
# os.rename(start_path, new_path)
# PermissionError: [WinError 32] The process cannot access the file because it is being used by another process
self.tsc.calculation_procedure = "result = file_upload['baz']['baz1']"
self.tsc.save()
utils.create_test_list_membership(self.test_list, self.tsc)
upload = utils.create_test(name="file_upload", test_type=models.UPLOAD)
upload.calculation_procedure = "import json; result=json.load(FILE);"
upload.calculation_procedure += "UTILS.write_file('test_user_attached.txt', 'hello user')"
upload.save()
utils.create_test_list_membership(self.test_list, upload)
filepath = os.path.join(settings.PROJECT_ROOT, "qa", "tests", "TESTRUNNER_test_file.json")
upload_data = open(filepath, 'rb').read()
self.data['tests']['file_upload'] = {
'value': base64.b64encode(upload_data),
'filename': "tmp.json",
}
response = self.client.post(self.create_url, self.data)
assert response.status_code == status.HTTP_201_CREATED
tic = models.TestInstance.objects.get(unit_test_info__test=self.tsc)
assert tic.string_value == "test"
tiu = models.TestInstance.objects.get(unit_test_info__test=upload)
assert tiu.attachment_set.count() == 2
for a in tiu.attachment_set.all():
assert a.finalized
def test_basic_edit(self):
resp = self.client.post(self.create_url, self.data)
new_data = {'tests': {'test1': {'value': 99}}}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
assert models.TestInstance.objects.get(unit_test_info__test__slug="test1").value == 99
def test_composite_edit(self):
utils.create_test_list_membership(self.test_list, self.tc)
resp = self.client.post(self.create_url, self.data)
assert models.TestInstance.objects.get(unit_test_info__test__slug="testc").value == 3
new_data = {'tests': {'test1': {'value': 99}, 'test2': {'value': 101}}}
self.client.patch(resp.data['url'], new_data)
assert models.TestInstance.objects.get(unit_test_info__test__slug="testc").value == 200
def test_edit_work_completed(self):
resp = self.client.post(self.create_url, self.data)
new_data = {'work_completed': '2020-07-25 10:49:47'}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
assert models.TestListInstance.objects.first().work_completed.year == 2020
def test_edit_user_status(self):
s2 = utils.create_status(name="user status", slug="user_status", is_default=False, requires_review=False)
s2_url = reverse("testinstancestatus-detail", kwargs={'pk': s2.pk})
resp = self.client.post(self.create_url, self.data)
new_data = {'status': s2_url}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
assert models.TestInstance.objects.filter(status=s2).count() == self.ntests
assert models.TestListInstance.objects.all().count() == 1
assert models.TestListInstance.objects.unreviewed().count() == 0
def test_edit_wc_ws_error(self):
resp = self.client.post(self.create_url, self.data)
new_data = {
'work_completed': '2020-07-25 10:49:47',
'work_started': '2021-07-25 10:49:47',
}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 400
def test_different_editor(self):
"""
If a test list is created, and then modified by a different user, the
modified_by user should be set correctly.
"""
resp = self.client.post(self.create_url, self.data)
new_data = {'tests': {'test1': {'value': 99}}}
self.client.logout()
user = utils.create_user(uname="user2")
self.client.force_authenticate(user=user)
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
tli = models.TestListInstance.objects.first()
assert tli.created_by.username == "user"
assert tli.modified_by.username == "user2"
def test_edit_perms(self):
"""
A user with edit permissions should be able to edit a test list instance.
"""
resp = self.client.post(self.create_url, self.data)
self.client.logout()
user = utils.create_user(uname="user2", is_staff=False, is_superuser=False)
user.user_permissions.add(Permission.objects.get(codename="change_testlistinstance"))
self.client.force_authenticate(user=user)
new_data = {'tests': {'test1': {'value': 99}}}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
def test_no_edit_perms(self):
"""
A user without edit permissions attempting an edit should receive a 403.
"""
resp = self.client.post(self.create_url, self.data)
self.client.logout()
user = utils.create_user(uname="user2", is_staff=False, is_superuser=False)
self.client.force_authenticate(user=user)
new_data = {'tests': {'test1': {'value': 99}}}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 403
def test_tl_comments(self):
self.data['comment'] = "test list comment"
# original comment
resp = self.client.post(self.create_url, self.data)
tli = models.TestListInstance.objects.first()
assert tli.comments.first().comment == "test list comment"
# add a comment with the edit and preserve original
new_data = {'comment': 'edit comment'}
self.client.patch(resp.data['url'], new_data)
assert tli.comments.count() == 2
assert 'edit comment' in tli.comments.values_list("comment", flat=True)
def test_comment_preserved(self):
self.data['tests']['test1']['comment'] = 'original comment'
resp = self.client.post(self.create_url, self.data)
new_data = {'tests': {'test1': {'value': 99}}}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
assert models.TestInstance.objects.get(unit_test_info__test__slug="test1").comment == "original comment"
def test_comment_updated(self):
self.data['tests']['test1']['comment'] = 'original comment'
resp = self.client.post(self.create_url, self.data)
new_data = {'tests': {'test1': {'value': 99, 'comment': 'new comment'}}}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
assert models.TestInstance.objects.get(unit_test_info__test__slug="test1").comment == "new comment"
def test_skip_preserved(self):
self.data['tests']['test1'] = {'skipped': True}
resp = self.client.post(self.create_url, self.data)
new_data = {'tests': {'test2': {'value': 99}}}
self.client.patch(resp.data['url'], new_data)
assert models.TestInstance.objects.get(unit_test_info__test__slug="test1").skipped
def test_unskip(self):
self.data['tests']['test1'] = {'skipped': True}
resp = self.client.post(self.create_url, self.data)
new_data = {'tests': {'test1': {'value': 99}}}
self.client.patch(resp.data['url'], new_data)
ti = models.TestInstance.objects.get(unit_test_info__test__slug="test1")
assert not ti.skipped
assert ti.value == 99
def test_new_skip(self):
resp = self.client.post(self.create_url, self.data)
new_data = {'tests': {'test1': {'skipped': True}}}
self.client.patch(resp.data['url'], new_data)
ti = models.TestInstance.objects.get(unit_test_info__test__slug="test1")
assert ti.skipped
assert ti.value is None
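The PATCH semantics these edit tests pin down amount to a per-test merge: supplied fields override, omitted fields are preserved, and supplying a value un-skips a previously skipped test. A toy sketch of that merge (illustrative only, not QATrack+'s serializer logic):

```python
# Toy sketch of the per-test PATCH merge behaviour exercised by the
# edit tests: patch fields win, missing fields keep their old values,
# and a new value clears a prior skip.
def merge_test_data(existing, patch):
    merged = dict(existing)
    merged.update(patch)
    if 'value' in patch and 'skipped' not in patch:
        merged['skipped'] = False  # supplying a value un-skips the test
    return merged
```

This reproduces `test_comment_preserved` (comment survives a value-only patch) and `test_unskip` (a new value clears `skipped`).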
def test_complete_in_progress(self):
self.data['in_progress'] = True
resp = self.client.post(self.create_url, self.data)
assert models.TestListInstance.objects.all().first().in_progress
new_data = {'in_progress': False}
self.client.patch(resp.data['url'], new_data)
assert not models.TestListInstance.objects.all().first().in_progress
def test_no_put(self):
"""All updates should be via patch"""
resp = self.client.post(self.create_url, self.data)
new_data = {'tests': {'test1': {'value': 1}}}
edit_resp = self.client.put(resp.data['url'], new_data)
assert edit_resp.status_code == 405
def test_utc_due_date_updated_on_create(self):
assert self.utc.due_date is None
self.client.post(self.create_url, self.data)
self.utc.refresh_from_db()
assert self.utc.due_date is not None
def test_utc_due_date_updated_on_edit(self):
self.data['work_completed'] = '2019-07-25 10:49:47'
resp = self.client.post(self.create_url, self.data)
self.utc.refresh_from_db()
new_data = {'work_completed': '2020-07-25 10:49:47'}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
self.utc.refresh_from_db()
assert self.utc.due_date.year == 2020
def test_edit_with_file_upload(self):
"""
Ensure making a simple edit to a tli with a file upload test succeeds.
"""
# FAILS on windows:
#
# File "C:\home\code\qatrackplus\qatrack\attachments\models.py", line 50, in move_tmp_file
# os.rename(start_path, new_path)
# PermissionError: [WinError 32] The process cannot access the file because it is being used by another process
self.tsc.calculation_procedure = "result = file_upload['baz']['baz1']"
self.tsc.save()
utils.create_test_list_membership(self.test_list, self.tsc)
upload = utils.create_test(name="file_upload", test_type=models.UPLOAD)
upload.calculation_procedure = "import json; result=json.load(FILE)"
upload.save()
utils.create_test_list_membership(self.test_list, upload)
filepath = os.path.join(settings.PROJECT_ROOT, "qa", "tests", "TESTRUNNER_test_file.json")
upload_data = open(filepath, 'rb').read()
self.data['tests']['file_upload'] = {
'value': base64.b64encode(upload_data),
'filename': "tmp.json",
'comment': "test comment",
}
resp = self.client.post(self.create_url, self.data)
new_data = {'tests': {'test1': {'value': 99}}}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
tic = models.TestInstance.objects.get(unit_test_info__test=self.tsc)
assert tic.string_value == "test"
def test_edit_with_text_file_upload(self):
"""
Ensure making a simple edit to a tli with a text upload test succeeds.
"""
# FAILS on windows:
#
# File "C:\home\code\qatrackplus\qatrack\attachments\models.py", line 52, in move_tmp_file
# os.rename(start_path, new_path)
# PermissionError: [WinError 32] The process cannot access the file because it is being used by another process
self.tsc.calculation_procedure = "result = file_upload['baz']['baz1']"
self.tsc.save()
utils.create_test_list_membership(self.test_list, self.tsc)
upload = utils.create_test(name="file_upload", test_type=models.UPLOAD)
upload.calculation_procedure = "import json; result=json.load(FILE)"
upload.save()
utils.create_test_list_membership(self.test_list, upload)
filepath = os.path.join(settings.PROJECT_ROOT, "qa", "tests", "TESTRUNNER_test_file.json")
upload_data = json.loads(open(filepath, 'rb').read().decode())
upload_data['baz']['baz1'] = "edited content"
self.data['tests']['file_upload'] = {
'value': json.dumps(upload_data),
'encoding': 'text',
'filename': "tmp.json",
'comment': "test comment",
}
resp = self.client.post(self.create_url, self.data)
tiu = models.TestInstance.objects.get(unit_test_info__test=upload)
assert tiu.attachment_set.count() == 1
new_data = {'tests': {'test1': {'value': 99}}}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
tic = models.TestInstance.objects.get(unit_test_info__test=self.tsc)
assert tic.string_value == "edited content"
tiu.refresh_from_db()
assert tiu.attachment_set.count() == 1
def test_edit_with_file_upload_and_user_attached(self):
"""
Ensure making a simple edit to a tli with an upload test succeeds.
"""
# FAILS on windows:
#
# File "C:\home\code\qatrackplus\qatrack\attachments\models.py", line 50, in move_tmp_file
# os.rename(start_path, new_path)
# PermissionError: [WinError 32] The process cannot access the file because it is being used by another process
self.tsc.calculation_procedure = "result = file_upload['baz']['baz1']"
self.tsc.save()
utils.create_test_list_membership(self.test_list, self.tsc)
upload = utils.create_test(name="file_upload", test_type=models.UPLOAD)
upload.calculation_procedure = "import json; result=json.load(FILE);"
upload.calculation_procedure += "UTILS.write_file('test_user_attached.txt', 'hello user')"
upload.save()
utils.create_test_list_membership(self.test_list, upload)
filepath = os.path.join(settings.PROJECT_ROOT, "qa", "tests", "TESTRUNNER_test_file.json")
upload_data = open(filepath, 'rb').read()
self.data['tests']['file_upload'] = {
'value': base64.b64encode(upload_data),
'filename': "tmp.json",
'comment': "test comment",
}
resp = self.client.post(self.create_url, self.data)
tiu = models.TestInstance.objects.get(unit_test_info__test=upload)
assert tiu.attachment_set.count() == 2
new_data = {'tests': {'test1': {'value': 99}}}
edit_resp = self.client.patch(resp.data['url'], new_data)
assert edit_resp.status_code == 200
tic
cocoDt, "segm")
results = {}
for i in cocoEval.params.catIds:
cocoEval = COCOeval(cocoGt, cocoDt, "segm")
cocoEval.params.iouThrs = np.array([.5])
cocoEval.params.catIds = [i]
cocoEval.params.areaRngLbl = ["all"]
cocoEval.evaluate()
cocoEval.accumulate()
stat = list(cocoEval.summarize().values())
assert len(stat) == 1
results[i] = stat[0]
return results
def get_image_ids(pred_annList):
idList = set()
for p in pred_annList:
idList.add(p["image_id"])
return list(idList)
# def pred_for_coco2014(exp_dict, pred_annList):
# if exp_dict["dataset_name"] == "CocoDetection2014":
# train_set,_ = ut.load_trainval(exp_dict)
# for p in pred_annList:
# p["image_id"] = int(p["image_id"])
# p["category_id"] = train_set.label2category[p["category_id"]]
# return pred_annList
def test_baselines(exp_dict, reset=None):
#### Best Objectness
# pointDict = load_LCFCNPoints(exp_dict)
pred_annList = load_UpperBound(exp_dict, reset=reset)
if os.path.exists(exp_dict["path_baselines"]) and reset != "reset":
result_list = ut.load_pkl(exp_dict["path_baselines"])
return result_list
else:
gt_annDict = load_gtAnnDict(exp_dict)
pred_annList = load_BestObjectness(exp_dict, reset=reset)
# idList1 = get_image_ids(pred_annList)
# idList2 = get_image_ids(gt_annDict["annotations"])
results = get_perSizeResults(gt_annDict, pred_annList)
result_dict = results["result_dict"]
result_dict["predict_method"] = "BestObjectness"
result_list = [result_dict]
#### Upper bound
pred_annList = load_UpperBound(exp_dict, reset=reset)
results = get_perSizeResults(gt_annDict, pred_annList)
result_dict = results["result_dict"]
result_dict["predict_method"] = "UpperBound"
result_list += [result_dict]
ut.save_pkl(exp_dict["path_baselines"], result_list)
print(pd.DataFrame(result_list))
return result_list
def validate(model, dataset, predict_method, n_val=None, return_annList=False):
pred_annList = dataset2annList(
model, dataset, predict_method=predict_method, n_val=n_val)
gt_annDict = load_gtAnnDict({"dataset_name": type(dataset).__name__})
results = get_perSizeResults(gt_annDict, pred_annList)
result_dict = results["result_dict"]
result_dict["predict_method"] = predict_method
if return_annList:
return result_dict, pred_annList
return result_dict
def test_best(exp_dict, reset=None):
_, val_set = load_trainval(exp_dict)
history = ut.load_history(exp_dict)
# if reset == "reset":
try:
pred_annList = ut.load_best_annList(exp_dict)
except Exception:  # no cached annList yet; rebuild from the best model
model = ut.load_best_model(exp_dict)
pred_annList = dataset2annList(
model, val_set, predict_method="BestDice", n_val=None)
ut.save_pkl(exp_dict["path_best_annList"], pred_annList)
# else:
# pred_annList = ut.load_best_annList(exp_dict)
gt_annDict = load_gtAnnDict(exp_dict)
results = get_perSizeResults(gt_annDict, pred_annList)
result_dict = results["result_dict"]
result_dict["predict_method"] = "BestDice"
result_dict["epoch"] = history["best_model"]["epoch"]
result_list = test_baselines(exp_dict)
result_list += [result_dict]
print(pd.DataFrame(result_list))
def get_random_indices(mask, n_indices=10):
mask_ind = np.where(mask.squeeze())
n_pixels = mask_ind[0].shape[0]
P_ind = np.random.randint(0, n_pixels, n_indices)
yList = mask_ind[0][P_ind]
xList = mask_ind[1][P_ind]
return {"yList": yList, "xList": xList}
def propDict2seedList(propDict, n_neighbors=100, random_proposal=False):
seedList = []
for prop in propDict["propDict"]:
if len(prop["annList"]) == 0:
seedList += [{
"category_id": [prop["point"]["category_id"]],
"yList": [prop["point"]["y"]],
"xList": [prop["point"]["x"]],
"neigh": {
"yList": [prop["point"]["y"]],
"xList": [prop["point"]["x"]]
}
}]
else:
if random_proposal:
i = np.random.randint(0, len(prop["annList"]))
mask = prop["annList"][i]["mask"]
else:
mask = prop["annList"][0]["mask"]
seedList += [{
"category_id": [prop["point"]["category_id"]],
"yList": [prop["point"]["y"]],
"xList": [prop["point"]["x"]],
"neigh": get_random_indices(mask, n_indices=100)
}]
# Background
background = propDict["background"]
if background.sum() == 0:
y_axis = np.random.randint(0, background.shape[1], 100)
x_axis = np.random.randint(0, background.shape[2], 100)
background[0, y_axis, x_axis] = 1
bg_seeds = get_random_indices(
background, n_indices=len(propDict["propDict"]))
seedList += [{
"category_id": [0] * len(bg_seeds["yList"]),
"yList": bg_seeds["yList"].tolist(),
"xList": bg_seeds["xList"].tolist(),
"neigh": get_random_indices(background, n_indices=100)
}]
return seedList
def CombineSeeds(seedList, ind=None):
yList = []
xList = []
categoryList = []
if ind is None:
ind = range(len(seedList))
for i in ind:
yList += seedList[i]["yList"]
xList += seedList[i]["xList"]
categoryList += seedList[i]["category_id"]
assert len(categoryList) == len(yList)
return {"yList": yList, "xList": xList, "categoryList": categoryList}
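`CombineSeeds` simply concatenates the per-seed coordinate and category lists, optionally restricted to a subset of seed indices. A standalone copy with a small usage example:

```python
# Standalone copy of CombineSeeds (as defined above), for illustration:
# concatenate per-seed coordinate/category lists, optionally restricted
# to the seed indices given in `ind`.
def CombineSeeds(seedList, ind=None):
    yList, xList, categoryList = [], [], []
    if ind is None:
        ind = range(len(seedList))
    for i in ind:
        yList += seedList[i]["yList"]
        xList += seedList[i]["xList"]
        categoryList += seedList[i]["category_id"]
    assert len(categoryList) == len(yList)
    return {"yList": yList, "xList": xList, "categoryList": categoryList}


seeds = [
    {"yList": [1], "xList": [2], "category_id": [5]},
    {"yList": [3, 4], "xList": [5, 6], "category_id": [0, 0]},
]
combined = CombineSeeds(seeds)
```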
# 0. load val
def load_trainval(exp_dict):
path_datasets = "datasets"
path_transforms = 'addons/transforms.py'
dataset_dict = ut.get_module_classes(path_datasets)
transform_dict = ut.get_functions(path_transforms)
dataset_name = exp_dict["dataset_name"]
train_set, val_set = ut.load_trainval({
"dataset_name": dataset_name,
"path_datasets": path_datasets,
"trainTransformer": "Tr_WTP_NoFlip",
"testTransformer": "Te_WTP",
"dataset_options": {},
"dataset_dict": dataset_dict,
"transform_dict": transform_dict
})
annList_path = val_set.path + "/annotations/{}_gt_annList.json".format(
val_set.split)
val_set.annList_path = annList_path
return train_set, val_set
# 1. Load gtAnnDict
def load_gtAnnDict(exp_dict, reset=None):
reset = None
_, val_set = load_trainval(exp_dict)
annList_path = val_set.annList_path
if os.path.exists(annList_path) and reset != "reset":
return ut.load_json(annList_path)
else:
ann_json = {}
ann_json["categories"] = val_set.categories
ann_json["type"] = "instances"
# Images
imageList = []
annList = []
id = 1
for i in range(len(val_set)):
print("{}/{}".format(i, len(val_set)))
batch = val_set[i]
image_id = batch["name"]
height, width = batch["images"].shape[-2:]
imageList += [{
"file_name": batch["name"],
"height": height,
"width": width,
"id": batch["name"]
}]
maskObjects = batch["maskObjects"]
maskClasses = batch["maskClasses"]
n_objects = maskObjects[maskObjects != 255].max().item()
for obj_id in range(1, n_objects + 1):
binmask = (maskObjects == obj_id)
segmentation = maskUtils.encode(
np.asfortranarray(ms.t2n(binmask)))
segmentation["counts"] = segmentation["counts"].decode("utf-8")
uniques = (binmask.long() * maskClasses).unique()
uniques = uniques[uniques != 0]
assert len(uniques) == 1
category_id = uniques[0].item()
annList += [{
"segmentation": segmentation,
"iscrowd": 0,
# "bbox":maskUtils.toBbox(segmentation).tolist(),
"area": int(maskUtils.area(segmentation)),
"id": id,
"image_id": image_id,
"category_id": category_id
}]
id += 1
ann_json["annotations"] = annList
ann_json["images"] = imageList
ut.save_json(annList_path, ann_json)
# Save dummy results
anns = ut.load_json(annList_path)
fname_dummy = annList_path.replace(".json", "_best.json")
annList = anns["annotations"]
for a in annList:
a["score"] = 1
ut.save_json(fname_dummy, annList)
return ann_json
# 1. Load dummyAnnDict
def assert_gtAnnDict(exp_dict, reset=None):
_, val_set = load_trainval(exp_dict)
annList_path = val_set.annList_path
fname_dummy = annList_path.replace(".json", "_best.json")
# Test should be 100
cocoGt = pycocotools.coco.COCO(annList_path)
imgIds = sorted(cocoGt.getImgIds())
assert len(imgIds) == len(val_set)
assert len(ms.load_json(fname_dummy)) == len(
ut.load_json(annList_path)["annotations"])
assert len(ms.load_json(fname_dummy)) == len(cocoGt.anns)
imgIds = imgIds[0:100]
imgIds = np.random.choice(imgIds, min(100, len(imgIds)), replace=False)
cocoDt = cocoGt.loadRes(fname_dummy)
cocoEval = COCOeval(cocoGt, cocoDt, "segm")
# cocoEval.params.imgIds = imgIds.tolist()
cocoEval.params.iouThrs = np.array([.25, .5, .75])
cocoEval.evaluate()
cocoEval.accumulate()
stats = cocoEval.summarize()
assert stats["0.25_all"] == 1
assert stats["0.5_all"] == 1
assert stats["0.75_all"] == 1
def load_LCFCNPoints(exp_dict, reset=None):
dataset_name = exp_dict["dataset_name"]
base = "/mnt/projects/counting/Saves/main/"
if "Pascal" in dataset_name:
path = base + "dataset:Pascal2007_model:Res50FCN_metric:mRMSE_loss:water_loss_B_config:basic/"
elif "CityScapes" in dataset_name:
path = base + "dataset:CityScapes_model:Res50FCN_metric:mRMSE_loss:water_loss_B_config:basic/"
elif "CocoDetection2014" in dataset_name:
path = base + "dataset:CocoDetection2014_model:Res50FCN_metric:mRMSE_loss:water_loss_B_config:sample3000/"
elif "Kitti" in dataset_name:
path = base + "dataset:Kitti_model:Res50FCN_metric:mRMSE_loss:water_loss_B_config:basic/"
elif "Plants" in dataset_name:
path = base + "dataset:Plants_model:Res50FCN_metric:mRMSE_loss:water_loss_B_config:basic/"
else:
raise ValueError("Unknown dataset_name: {}".format(dataset_name))
fname = base + "lcfcn_points/{}.pkl".format(dataset_name)
if os.path.exists(fname):
history = ut.load_pkl(path + "history.pkl")
pointDict = ut.load_pkl(fname)
if pointDict["best_model"]["epoch"] != history["best_model"]["epoch"]:
reset = "reset"
if os.path.exists(fname) and reset != "reset":
return pointDict
else:
train_set, val_set = load_trainval(exp_dict)
# Create Model
model = exp_dict["model_dict"]["Res50FCN"](train_set)
model.load_state_dict(torch.load(path + "/State_Dicts/best_model.pth"))
history = ut.load_pkl(path + "history.pkl")
model.cuda()
loader = data.DataLoader(
val_set, batch_size=1, num_workers=1, drop_last=False)
pointDict = {}
model.eval()
for i, batch in enumerate(loader):
print(i, "/", len(loader), " - pointDict")
pointList = model.predict(
batch, predict_method="points")["pointList"]
pointDict[batch["name"][0]] = pointList
pointDict["best_model"] = history['best_model']
pointDict['exp_dict'] = history['exp_dict']
ut.save_pkl(fname, pointDict)
return pointDict
def blobs2annList(blobs, image_id):
n_classes, h, w = blobs.shape
annList = []
for i in range(n_classes):
blobs_class = blobs[i]
for u in np.unique(blobs_class):
if u == 0:
continue
binmask = (blobs_class == u).astype(int)
seg = maskUtils.encode(
np.asfortranarray(ms.t2n(binmask.squeeze())).astype("uint8"))
seg["counts"] = seg["counts"].decode("utf-8")
score = 1.0
annList += [{
"segmentation": seg,
"bbox": maskUtils.toBbox(seg).astype(int).tolist(),
"iscrowd": 0,
"area": int(maskUtils.area(seg)),
"image_id": image_id,
"category_id": i + 1,
"height": h,
"width": w,
"score": score
}]
return annList
def blobs2BestDice(blobs, categoryDict, propDict, batch):
h, w = blobs.shape
annList = []
blobs_copy = np.zeros(blobs.shape, int)
if "maskVoid" in batch:
maskVoid = batch["maskVoid"]
else:
maskVoid = None
for u in np.unique(blobs):
if u == 0:
continue
binmask = (blobs == u)
best_dice = 0.
best_mask = None
for ann in propDict['propDict'][u - 1]["annList"]:
score = dice(ann["mask"], binmask)
if score > best_dice:
best_dice = score
best_mask = ann["mask"]
prop_score = ann["score"]
if best_mask is None:
best_mask = (blobs == u).astype(int)
if maskVoid is not None:
binmask = best_mask * (ms.t2n(maskVoid).squeeze())
else:
binmask = best_mask
# best_mask is never None here: the fallback above assigned it
blobs_copy[best_mask == 1] = u
seg = maskUtils.encode(
np.asfortranarray(ms.t2n(binmask)).astype("uint8"))
seg["counts"] = seg["counts"].decode("utf-8")
score = best_dice
# if batch["dataset"] == "coco2014":
# image_id = int(batch["name"][0])
# else:
image_id = batch["name"][0]
annList += [{
"segmentation": seg,
"iscrowd": 0,
"area": int(maskUtils.area(seg)),
"image_id": image_id,
"category_id": int(categoryDict[u]),
"height": h,
"width": w,
"score": score
}]
return {"blobs": blobs_copy, "annList": annList}
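`blobs2BestDice` ranks each blob's candidate proposals by the `dice` score. That helper is assumed to be the standard Dice coefficient, 2·|A ∩ B| / (|A| + |B|); here is a minimal sketch on flat binary masks (the real helper presumably operates on numpy arrays):

```python
# Hedged sketch of the dice() score used by blobs2BestDice:
# dice(A, B) = 2 * |A intersect B| / (|A| + |B|), on flat binary masks.
def dice(mask_a, mask_b):
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 0.0
    return 2.0 * inter / total
```

A proposal identical to the blob scores 1.0; disjoint masks score 0.0, which is why a blob with no overlapping proposal falls back to its own mask.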
@torch.no_grad()
def dataset2annList(model,
dataset,
predict_method="BestObjectness",
n_val=None):
loader = data.DataLoader(
dataset, batch_size=1, num_workers=1, drop_last=False)
annList = []
for i, batch in enumerate(loader):
print(i, "/", len(loader))
pred_dict = model.predict(batch, predict_method=predict_method)
annList += pred_dict["annList"]
return annList
def pointList2mask(pointList):
if "shape" in pointList[0]:
shape = pointList[0]["shape"]
else:
shape = (1, pointList[0]["h"], pointList[0]["w"])
_, h, w = shape
mask = np.zeros(shape, "uint8")
for p in pointList:
if "category_id" in p:
category_id = p["category_id"]
else:
category_id = p["cls"]
if p["y"] < 1:
y = int(p["y"] * h)
x = int(p["x"] * w)
else:
y = int(p["y"])
x = int(p["x"])
mask[:, y, x] = int(category_id)
return {"mask": mask}
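The coordinate handling in `pointList2mask` treats a point whose `y` is below 1 as a normalized (fractional) coordinate to be scaled by the image size, and anything else as an absolute pixel index. That step in isolation:

```python
# The coordinate normalization from pointList2mask, extracted for
# illustration: fractional coordinates (y < 1) are scaled by the image
# height/width; otherwise they are used as absolute pixel indices.
def point_to_pixel(p, h, w):
    if p["y"] < 1:
        return int(p["y"] * h), int(p["x"] * w)
    return int(p["y"]), int(p["x"])
```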
def pointList2annList(pointList):
annList = []
for p in pointList:
binmask = np.zeros((p["h"], p["w"]), "uint8")
binmask[int(p["y"] * p["h"]),
int(p["x"] * p["w"])] = 1.
annList += [
mask2ann(
binmask,
p["cls"],
image_id=-1,
maskVoid=None,
score=None,
)]
return annList
def pointList2points(pointList):
return pointList2mask(pointList)
def print_results(results):
pass
def probs2blobs(probs):
annList = []
probs = ut.t2n(probs)
n, n_classes, h, w = probs.shape
counts = np.zeros((n, n_classes - 1))
# Binary case
pred_mask = ut.t2n(probs.argmax(1))
blobs = np.zeros(pred_mask.shape)
points = np.zeros(pred_mask.shape)
max_id = 0
for i in range(n):
for category_id in np.unique(pred_mask[i]):
if category_id == 0:
continue
ind = pred_mask == category_id
connected_components = morph.label(ind)
uniques
# Copyright 2018-2019, <NAME> and The Tor Project
# See LICENSE for licensing information
"""
Interaction with a Tor relay's ORPort. :class:`~stem.client.Relay` is
a wrapper for :class:`~stem.socket.RelaySocket`, much the same way as
:class:`~stem.control.Controller` provides higher level functions for
:class:`~stem.socket.ControlSocket`.
.. versionadded:: 1.7.0
::
Relay - Connection with a tor relay's ORPort.
| +- connect - Establishes a connection with a relay.
|
|- is_alive - reports if our connection is open or closed
|- connection_time - time when we last connected or disconnected
|- close - shuts down our connection
|
+- create_circuit - establishes a new circuit
Circuit - Circuit we've established through a relay.
|- send - sends a message through this circuit
+- close - closes this circuit
"""
import hashlib
import threading
import stem
import stem.client.cell
import stem.socket
import stem.util.connection
from stem.client.cell import (
CELL_TYPE_SIZE,
FIXED_PAYLOAD_LEN,
Cell,
)
from stem.client.datatype import (
ZERO,
Address,
KDF,
LinkProtocol,
RelayCommand,
split,
)
__all__ = [
'cell',
'datatype',
]
DEFAULT_LINK_PROTOCOLS = (3, 4, 5)
class Relay(object):
"""
Connection with a Tor relay's ORPort.
:var int link_protocol: link protocol version we established
"""
def __init__(self, orport, link_protocol):
# TODO: Python 3.x adds a getbuffer() method which
# lets us get the size...
#
# https://stackoverflow.com/questions/26827055/python-how-to-get-iobytes-allocated-memory-length
#
# When we drop python 2.x support we should replace
# self._orport_buffer with an io.BytesIO.
self.link_protocol = LinkProtocol(link_protocol)
self._orport = orport
self._orport_buffer = b'' # unread bytes
self._orport_lock = threading.RLock()
self._circuits = {}
@staticmethod
def connect(address, port, link_protocols = DEFAULT_LINK_PROTOCOLS):
"""
Establishes a connection with the given ORPort.
:param str address: ip address of the relay
:param int port: ORPort of the relay
:param tuple link_protocols: acceptable link protocol versions
:raises:
* **ValueError** if address or port are invalid
* :class:`stem.SocketError` if we're unable to establish a connection
"""
relay_addr = Address(address)
if not stem.util.connection.is_valid_port(port):
raise ValueError("'%s' isn't a valid port" % port)
elif not link_protocols:
raise ValueError("Connection can't be established without a link protocol.")
try:
conn = stem.socket.RelaySocket(address, port)
except stem.SocketError as exc:
if 'Connection refused' in str(exc):
raise stem.SocketError("Failed to connect to %s:%i. Maybe it isn't an ORPort?" % (address, port))
# If not an ORPort (for instance, mistakenly connecting to a ControlPort
# instead) we'll likely fail during SSL negotiation. This can result
# in a variety of responses so normalizing what we can...
#
# Debian 9.5: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:661)
# Ubuntu 16.04: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)
# Ubuntu 12.04: [Errno 1] _ssl.c:504: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
if 'unknown protocol' in str(exc) or 'wrong version number' in str(exc):
raise stem.SocketError("Failed to SSL authenticate to %s:%i. Maybe it isn't an ORPort?" % (address, port))
raise
# To negotiate our link protocol the first VERSIONS cell is expected to use
# a circuit ID field size from protocol version 1-3 for backward
# compatibility...
#
# The first VERSIONS cell, and any cells sent before the
# first VERSIONS cell, always have CIRCID_LEN == 2 for backward
# compatibility.
conn.send(stem.client.cell.VersionsCell(link_protocols).pack(2))
response = conn.recv()
# Link negotiation ends right away if we lack a common protocol
# version. (#25139)
if not response:
conn.close()
raise stem.SocketError('Unable to establish a common link protocol with %s:%i' % (address, port))
versions_reply = stem.client.cell.Cell.pop(response, 2)[0]
common_protocols = set(link_protocols).intersection(versions_reply.versions)
if not common_protocols:
conn.close()
raise stem.SocketError('Unable to find a common link protocol. We support %s but %s:%i supports %s.' % (', '.join(map(str, link_protocols)), address, port, ', '.join(map(str, versions_reply.versions))))
# Establishing connections requires sending a NETINFO, but including our
address is optional. We can revisit including it when we have a use case
# where it would help.
link_protocol = max(common_protocols)
conn.send(stem.client.cell.NetinfoCell(relay_addr, []).pack(link_protocol))
return Relay(conn, link_protocol)
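The negotiation above leans on a detail of the wire format: the first VERSIONS cell always uses the 2-byte circuit ID of link protocols 1-3, whatever gets negotiated afterwards. A minimal sketch of that packing, following tor-spec (command 7 = VERSIONS, big-endian fields; `pack_versions_cell` is a name of mine, not stem's API):

```python
import struct

def pack_versions_cell(versions):
    # The first VERSIONS cell is framed with a 2-byte circuit id (here 0),
    # a 1-byte command (7 = VERSIONS), a 2-byte payload length, then each
    # advertised link protocol version as a 2-byte big-endian integer.
    payload = b''.join(struct.pack('>H', v) for v in versions)
    return struct.pack('>HBH', 0, 7, len(payload)) + payload
```

stem's `VersionsCell(link_protocols).pack(2)` should produce an equivalent byte string for the same versions.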
def _recv(self, raw = False):
"""
Reads the next cell from our ORPort. If none is present this blocks
until one is available.
:param bool raw: provides bytes rather than parsing as a cell if **True**
:returns: next :class:`~stem.client.cell.Cell`
"""
with self._orport_lock:
# cells begin with [circ_id][cell_type][...]
circ_id_size = self.link_protocol.circ_id_size.size
while len(self._orport_buffer) < (circ_id_size + CELL_TYPE_SIZE.size):
self._orport_buffer += self._orport.recv() # read until we know the cell type
cell_type = Cell.by_value(CELL_TYPE_SIZE.pop(self._orport_buffer[circ_id_size:])[0])
if cell_type.IS_FIXED_SIZE:
cell_size = circ_id_size + CELL_TYPE_SIZE.size + FIXED_PAYLOAD_LEN
else:
# variable length, our next field is the payload size
while len(self._orport_buffer) < (circ_id_size + CELL_TYPE_SIZE.size + PAYLOAD_LEN_SIZE.size):
self._orport_buffer += self._orport.recv() # read until we know the cell size
payload_len = PAYLOAD_LEN_SIZE.pop(self._orport_buffer[circ_id_size + CELL_TYPE_SIZE.size:])[0]
cell_size = circ_id_size + CELL_TYPE_SIZE.size + PAYLOAD_LEN_SIZE.size + payload_len
while len(self._orport_buffer) < cell_size:
self._orport_buffer += self._orport.recv() # read until we have the full cell
if raw:
content, self._orport_buffer = split(self._orport_buffer, cell_size)
return content
else:
cell, self._orport_buffer = Cell.pop(self._orport_buffer, self.link_protocol)
return cell
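The framing arithmetic in `_recv` can be summarized in one helper. This is a sketch under tor-spec's constants — fixed-size cells carry 509 payload bytes, variable-size cells carry a 2-byte length field, and the circuit ID is 2 bytes below link protocol 4 and 4 bytes from version 4 on; the function name is mine:

```python
FIXED_PAYLOAD_LEN = 509  # payload bytes in every fixed-size cell (tor-spec)
CELL_TYPE_LEN = 1        # one-byte command field
PAYLOAD_LEN_FIELD = 2    # two-byte length field on variable-size cells

def expected_cell_size(link_protocol, payload_len=None):
    # Total on-wire size of a cell; payload_len=None means fixed-size.
    circ_id_len = 4 if link_protocol >= 4 else 2
    if payload_len is None:
        return circ_id_len + CELL_TYPE_LEN + FIXED_PAYLOAD_LEN
    return circ_id_len + CELL_TYPE_LEN + PAYLOAD_LEN_FIELD + payload_len
```

So a fixed-size cell is 512 bytes on link protocol 3 and 514 bytes on protocol 4, which is exactly how many bytes `_recv` buffers before popping a cell.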
def _msg(self, cell):
"""
Sends a cell on the ORPort and provides the response we receive in reply.
Unfortunately unlike control sockets, ORPorts don't have generalized rules
for predictable message IO. With control sockets...
* Each message we send receives a single reply.
* We may also receive asynchronous events marked with a 650 status.
ORPorts by contrast receive variable length cells with differing rules on
their arrival. As such we make a best-effort attempt at a send-and-receive
method in which we do the following...
* Discard any existing unread data from the socket.
* Send our request.
* Await up to a second for a reply.
It's quite possible this is a stupid approach. If so, patches welcome.
:param stem.client.cell.Cell cell: cell to be sent
:returns: **generator** with the cells received in reply
"""
self._orport.recv(timeout = 0) # discard unread data
self._orport.send(cell.pack(self.link_protocol))
response = self._orport.recv(timeout = 1)
for received_cell in stem.client.cell.Cell.pop(response, self.link_protocol):
yield received_cell
def is_alive(self):
"""
Checks if our socket is currently connected. This is a pass-through for our
socket's :func:`~stem.socket.BaseSocket.is_alive` method.
:returns: **bool** that's **True** if our socket is connected and **False** otherwise
"""
return self._orport.is_alive()
def connection_time(self):
"""
Provides the unix timestamp for when our socket was either connected or
disconnected. That is to say, the time we connected if we're currently
connected and the time we disconnected if we're not connected.
:returns: **float** for when we last connected or disconnected, zero if
we've never connected
"""
return self._orport.connection_time()
def close(self):
"""
Closes our socket connection. This is a pass-through for our socket's
:func:`~stem.socket.BaseSocket.close` method.
"""
with self._orport_lock:
return self._orport.close()
def create_circuit(self):
"""
Establishes a new circuit.
"""
with self._orport_lock:
circ_id = max(self._circuits) + 1 if self._circuits else self.link_protocol.first_circ_id
create_fast_cell = stem.client.cell.CreateFastCell(circ_id)
created_fast_cell = None
for cell in self._msg(create_fast_cell):
if isinstance(cell, stem.client.cell.CreatedFastCell):
created_fast_cell = cell
break
if not created_fast_cell:
raise ValueError('We should get a CREATED_FAST response from a CREATE_FAST request')
kdf = KDF.from_value(create_fast_cell.key_material + created_fast_cell.key_material)
if created_fast_cell.derivative_key != kdf.key_hash:
raise ValueError('Remote failed to prove that it knows our shared key')
circ = Circuit(self, circ_id, kdf)
self._circuits[circ.id] = circ
return circ
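`create_circuit` trusts `KDF.from_value` to expand the CREATE_FAST/CREATED_FAST key material. A sketch of what that expansion presumably looks like, following tor-spec's KDF-TOR construction (the function name and dict keys are mine):

```python
import hashlib

def kdf_tor(key_material):
    # K = SHA1(K0 | [00]) | SHA1(K0 | [01]) | ... where K0 is X | Y, then
    # slice out the key hash (used for the derivative_key check above),
    # the two digest seeds, and the two AES-128 keys.
    derived = b''.join(
        hashlib.sha1(key_material + bytes([i])).digest() for i in range(5))
    return {
        'key_hash': derived[:20],
        'forward_digest': derived[20:40],
        'backward_digest': derived[40:60],
        'forward_key': derived[60:76],
        'backward_key': derived[76:92],
    }
```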
def __iter__(self):
with self._orport_lock:
for circ in self._circuits.values():
yield circ
def __enter__(self):
return self
def __exit__(self, exit_type, value, traceback):
self.close()
class Circuit(object):
"""
Circuit through which requests can be made of a `Tor relay's ORPort
<https://gitweb.torproject.org/torspec.git/tree/tor-spec.txt>`_.
:var stem.client.Relay relay: relay through which this circuit has been established
:var int id: circuit id
:var hashlib.sha1 forward_digest: digest for forward integrity check
:var hashlib.sha1 backward_digest: digest for backward integrity check
:var bytes forward_key: forward encryption key
:var bytes backward_key: backward encryption key
"""
def __init__(self, relay, circ_id, kdf):
if not stem.prereq.is_crypto_available():
raise ImportError('Circuit construction requires the cryptography module')
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.backends import default_backend
ctr = modes.CTR(ZERO * (algorithms.AES.block_size // 8))
self.relay = relay
self.id = circ_id
self.forward_digest = hashlib.sha1(kdf.forward_digest)
self.backward_digest = hashlib.sha1(kdf.backward_digest)
self.forward_key = Cipher(algorithms.AES(kdf.forward_key), ctr, default_backend()).encryptor()
self.backward_key = Cipher(algorithms.AES(kdf.backward_key), ctr, default_backend()).decryptor()
def directory(self, request, stream_id = 0):
"""
Request descriptors from the relay.
:param str request: directory request to make
:param int stream_id: specific stream this concerns
:returns: **str** with the requested descriptor data
"""
with self.relay._orport_lock:
self._send(RelayCommand.BEGIN_DIR, stream_id = stream_id)
self._send(RelayCommand.DATA, request, stream_id = stream_id)
response = []
while True:
# Decrypt relay cells received in response. Our digest/key only
# updates when handled successfully.
encrypted_cell = self.relay._recv(raw = True)
decrypted_cell, backward_key, backward_digest = stem.client.cell.RelayCell.decrypt(self.relay.link_protocol, | |
"""
This module supports embedded TeX expressions in matplotlib via dvipng
and dvips for the raster and postscript backends. The tex and
dvipng/dvips information is cached in ~/.matplotlib/tex.cache for reuse between
sessions.
Requirements:
* latex
* \*Agg backends: dvipng
* PS backend: latex w/ psfrag, dvips, and Ghostscript 8.51
(older versions do not work properly)
Backends:
* \*Agg
* PS
* PDF
For raster output, you can get RGBA numpy arrays from TeX expressions
as follows::
texmanager = TexManager()
s = ('\\TeX\\ is Number '
'$\\displaystyle\\sum_{n=1}^\\infty\\frac{-e^{i\\pi}}{2^n}$!')
Z = texmanager.get_rgba(s, size=12, dpi=80, rgb=(1, 0, 0))
To enable tex rendering of all text in your matplotlib figure, set
text.usetex in your matplotlibrc file (http://matplotlib.sf.net/matplotlibrc)
or include these two lines in your script::
from matplotlib import rc
rc('text', usetex=True)
"""
from __future__ import print_function
import copy
import glob
import os
import shutil
import sys
import warnings
from hashlib import md5
import distutils.version
import numpy as np
import matplotlib as mpl
from matplotlib import rcParams
from matplotlib._png import read_png
from matplotlib.cbook import mkdirs
from matplotlib.compat.subprocess import Popen, PIPE, STDOUT
import matplotlib.dviread as dviread
import re
DEBUG = False
if sys.platform.startswith('win'):
cmd_split = '&'
else:
cmd_split = ';'
def dvipng_hack_alpha():
try:
p = Popen(['dvipng', '-version'], stdin=PIPE, stdout=PIPE,
stderr=STDOUT, close_fds=(sys.platform != 'win32'))
except OSError:
mpl.verbose.report('No dvipng was found', 'helpful')
return False
stdin, stdout = p.stdin, p.stdout
for line in stdout:
if line.startswith(b'dvipng '):
version = line.split()[-1]
mpl.verbose.report('Found dvipng version %s' % version,
'helpful')
version = version.decode('ascii')
version = distutils.version.LooseVersion(version)
return version < distutils.version.LooseVersion('1.6')
mpl.verbose.report('Unexpected response from dvipng -version', 'helpful')
return False
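`distutils.version.LooseVersion` (used above) is deprecated and removed in Python 3.12+. The same check — parse the token that `line.split()[-1]` yields and compare numeric components — can be sketched without it (`needs_alpha_hack` is a name of mine):

```python
def needs_alpha_hack(version_line):
    # Mirrors the parse above: take the last whitespace-separated token
    # (e.g. '1.15' from 'dvipng 1.15') and compare it numerically against
    # 1.6, the threshold the LooseVersion comparison uses.
    token = version_line.split()[-1]
    parts = tuple(int(p) for p in token.split('.') if p.isdigit())
    return parts < (1, 6)
```

Note that a naive string comparison would get this wrong ('1.15' < '1.6' lexically), which is why the components are compared as integers.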
class TexManager:
"""
Convert strings to dvi files using TeX, caching the results to a
working dir
"""
oldpath = mpl.get_home()
if oldpath is None:
oldpath = mpl.get_data_path()
oldcache = os.path.join(oldpath, '.tex.cache')
cachedir = mpl.get_cachedir()
if cachedir is not None:
texcache = os.path.join(cachedir, 'tex.cache')
else:
# Should only happen in a restricted environment (such as Google App
# Engine). Deal with this gracefully by not creating a cache directory.
texcache = None
if os.path.exists(oldcache):
if texcache is not None:
try:
shutil.move(oldcache, texcache)
except IOError as e:
warnings.warn('File could not be renamed: %s' % e)
else:
warnings.warn("""\
Found a TeX cache dir in the deprecated location "%s".
Moving it to the new default location "%s".""" % (oldcache, texcache))
else:
warnings.warn("""\
Could not rename old TeX cache dir "%s": a suitable configuration
directory could not be found.""" % oldcache)
if texcache is not None:
mkdirs(texcache)
_dvipng_hack_alpha = None
#_dvipng_hack_alpha = dvipng_hack_alpha()
# in-memory caches of rendered results, keyed by tex string and config
rgba_arrayd = {}
grey_arrayd = {}
postscriptd = {}
pscnt = 0
serif = ('cmr', '')
sans_serif = ('cmss', '')
monospace = ('cmtt', '')
cursive = ('pzc', r'\usepackage{chancery}')
font_family = 'serif'
font_families = ('serif', 'sans-serif', 'cursive', 'monospace')
font_info = {'new century schoolbook': ('pnc',
r'\renewcommand{\rmdefault}{pnc}'),
'bookman': ('pbk', r'\renewcommand{\rmdefault}{pbk}'),
'times': ('ptm', r'\usepackage{mathptmx}'),
'palatino': ('ppl', r'\usepackage{mathpazo}'),
'zapf chancery': ('pzc', r'\usepackage{chancery}'),
'cursive': ('pzc', r'\usepackage{chancery}'),
'charter': ('pch', r'\usepackage{charter}'),
'serif': ('cmr', ''),
'sans-serif': ('cmss', ''),
'helvetica': ('phv', r'\usepackage{helvet}'),
'avant garde': ('pag', r'\usepackage{avant}'),
'courier': ('pcr', r'\usepackage{courier}'),
'monospace': ('cmtt', ''),
'computer modern roman': ('cmr', ''),
'computer modern sans serif': ('cmss', ''),
'computer modern typewriter': ('cmtt', '')}
_rc_cache = None
_rc_cache_keys = (('text.latex.preamble', ) +
tuple(['font.' + n for n in ('family', ) +
font_families]))
def __init__(self):
if self.texcache is None:
raise RuntimeError(
('Cannot create TexManager, as there is no cache directory '
'available'))
mkdirs(self.texcache)
ff = rcParams['font.family']
if len(ff) == 1 and ff[0].lower() in self.font_families:
self.font_family = ff[0].lower()
else:
mpl.verbose.report(
'font.family must be one of (%s) when text.usetex is True. '
'serif will be used by default.' %
', '.join(self.font_families),
'helpful')
self.font_family = 'serif'
fontconfig = [self.font_family]
for font_family, font_family_attr in [(ff, ff.replace('-', '_'))
for ff in self.font_families]:
for font in rcParams['font.' + font_family]:
if font.lower() in self.font_info:
setattr(self, font_family_attr,
self.font_info[font.lower()])
if DEBUG:
print('family: %s, font: %s, info: %s' %
(font_family, font,
self.font_info[font.lower()]))
break
else:
if DEBUG:
print('%s font is not compatible with usetex' % font)
else:
mpl.verbose.report('No LaTeX-compatible font found for the '
'%s font family in rcParams. Using '
'default.' % ff, 'helpful')
setattr(self, font_family_attr, self.font_info[font_family])
fontconfig.append(getattr(self, font_family_attr)[0])
self._fontconfig = ''.join(fontconfig)
# The following packages and commands need to be included in the latex
# file's preamble:
cmd = [self.serif[1], self.sans_serif[1], self.monospace[1]]
if self.font_family == 'cursive':
cmd.append(self.cursive[1])
while r'\usepackage{type1cm}' in cmd:
cmd.remove(r'\usepackage{type1cm}')
cmd = '\n'.join(cmd)
self._font_preamble = '\n'.join([r'\usepackage{type1cm}', cmd,
r'\usepackage{textcomp}'])
def get_basefile(self, tex, fontsize, dpi=None):
"""
returns a filename based on a hash of the string, fontsize, and dpi
"""
s = ''.join([tex, self.get_font_config(), '%f' % fontsize,
self.get_custom_preamble(), str(dpi or '')])
# make sure hash is consistent for all strings, regardless of encoding:
bytes = unicode(s).encode('utf-8')
return os.path.join(self.texcache, md5(bytes).hexdigest())
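The caching scheme is content-addressing: every input that can change the rendered output is folded into one md5, so distinct requests map to distinct cache files. A Python 3 sketch of the same idea (the helper name is mine; `get_basefile` itself goes through `unicode` for Python 2 compatibility):

```python
import hashlib
import os.path

def cache_basefile(texcache, tex, fontconfig, fontsize, preamble, dpi=None):
    # Hash everything that affects the output: the tex source, the font
    # configuration, the font size, the custom preamble, and the dpi.
    key = ''.join([tex, fontconfig, '%f' % fontsize, preamble, str(dpi or '')])
    return os.path.join(texcache, hashlib.md5(key.encode('utf-8')).hexdigest())
```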
def get_font_config(self):
"""Reinitializes self if relevant rcParams on have changed."""
if self._rc_cache is None:
self._rc_cache = dict([(k, None) for k in self._rc_cache_keys])
changed = [par for par in self._rc_cache_keys
if rcParams[par] != self._rc_cache[par]]
if changed:
if DEBUG:
print('DEBUG following keys changed:', changed)
for k in changed:
if DEBUG:
print('DEBUG %-20s: %-10s -> %-10s' %
(k, self._rc_cache[k], rcParams[k]))
# deepcopy may not be necessary, but feels more future-proof
self._rc_cache[k] = copy.deepcopy(rcParams[k])
if DEBUG:
print('DEBUG RE-INIT\nold fontconfig:', self._fontconfig)
self.__init__()
if DEBUG:
print('DEBUG fontconfig:', self._fontconfig)
return self._fontconfig
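The rcParams change detection above boils down to diffing a cached snapshot against the live values, deep-copying on refresh so later in-place mutation of a list value is still noticed. A standalone sketch (function name mine):

```python
import copy

def changed_params(cache, params, keys):
    # Report keys whose live value differs from the cached snapshot, then
    # refresh the snapshot. deepcopy matters: rcParams values such as
    # font.family are lists that may be mutated in place.
    changed = [k for k in keys if params[k] != cache.get(k)]
    for k in changed:
        cache[k] = copy.deepcopy(params[k])
    return changed
```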
def get_font_preamble(self):
"""
returns a string containing font configuration for the tex preamble
"""
return self._font_preamble
def get_custom_preamble(self):
"""returns a string containing user additions to the tex preamble"""
return '\n'.join(rcParams['text.latex.preamble'])
def _get_shell_cmd(self, *args):
"""
On windows, changing directories can be complicated by the presence of
multiple drives. _get_shell_cmd deals with this issue.
"""
if sys.platform == 'win32':
command = ['%s' % os.path.splitdrive(self.texcache)[0]]
else:
command = []
command.extend(args)
return ' && '.join(command)
def make_tex(self, tex, fontsize):
"""
Generate a tex file to render the tex string at a specific font size
returns the file name
"""
basefile = self.get_basefile(tex, fontsize)
texfile = '%s.tex' % basefile
custom_preamble = self.get_custom_preamble()
fontcmd = {'sans-serif': r'{\sffamily %s}',
'monospace': r'{\ttfamily %s}'}.get(self.font_family,
r'{\rmfamily %s}')
tex = fontcmd % tex
if rcParams['text.latex.unicode']:
unicode_preamble = r"""\usepackage{ucs}
\usepackage[utf8x]{inputenc}"""
else:
unicode_preamble = ''
s = r"""\documentclass{article}
%s
%s
%s
\usepackage[papersize={72in,72in},body={70in,70in},margin={1in,1in}]{geometry}
\pagestyle{empty}
\begin{document}
\fontsize{%f}{%f}%s
\end{document}
""" % (self._font_preamble, unicode_preamble, custom_preamble,
fontsize, fontsize * 1.25, tex)
with open(texfile, 'wb') as fh:
if rcParams['text.latex.unicode']:
fh.write(s.encode('utf8'))
else:
try:
fh.write(s.encode('ascii'))
except UnicodeEncodeError as err:
mpl.verbose.report("You are using unicode and latex, but "
"have not enabled the matplotlib "
"'text.latex.unicode' rcParam.",
'helpful')
raise
return texfile
_re_vbox = re.compile(
r"MatplotlibBox:\(([\d.]+)pt\+([\d.]+)pt\)x([\d.]+)pt")
def make_tex_preview(self, tex, fontsize):
"""
Generate a tex file to render the tex string at a specific
font size. It uses preview.sty to determine the dimensions
(width, height, descent) of the output.
returns the file name
"""
basefile = self.get_basefile(tex, fontsize)
texfile = '%s.tex' % basefile
custom_preamble = self.get_custom_preamble()
fontcmd = {'sans-serif': r'{\sffamily %s}',
'monospace': r'{\ttfamily %s}'}.get(self.font_family,
r'{\rmfamily %s}')
tex = fontcmd % tex
if rcParams['text.latex.unicode']:
unicode_preamble = r"""\usepackage{ucs}
\usepackage[utf8x]{inputenc}"""
else:
unicode_preamble = ''
# newbox, setbox, immediate, etc. are used to find the box
# extent of the rendered text.
s = r"""\documentclass{article}
%s
%s
%s
\usepackage[active,showbox,tightpage]{preview}
\usepackage[papersize={72in,72in},body={70in,70in},margin={1in,1in}]{geometry}
%% we override the default showbox as it is treated as an error and makes
%% the exit status not zero
\def\showbox#1{\immediate\write16{MatplotlibBox:(\the\ht#1+\the\dp#1)x\the\wd#1}}
\begin{document}
\begin{preview}
{\fontsize{%f}{%f}%s}
\end{preview}
\end{document}
""" % (self._font_preamble, unicode_preamble, custom_preamble,
fontsize, fontsize * 1.25, tex)
with open(texfile, 'wb') as fh:
if rcParams['text.latex.unicode']:
fh.write(s.encode('utf8'))
else:
try:
fh.write(s.encode('ascii'))
except UnicodeEncodeError as err:
mpl.verbose.report("You are using unicode and latex, but "
"have not enabled the matplotlib "
"'text.latex.unicode' rcParam.",
'helpful')
raise
return texfile
def make_dvi(self, tex, fontsize):
"""
generates a dvi file containing latex's layout of tex string
returns the file name
"""
if rcParams['text.latex.preview']:
return self.make_dvi_preview(tex, fontsize)
basefile = self.get_basefile(tex, fontsize)
dvifile = '%s.dvi' % basefile
if DEBUG or not os.path.exists(dvifile):
texfile = self.make_tex(tex, fontsize)
outfile = basefile + '.output'
command = self._get_shell_cmd(
'cd "%s"' % self.texcache,
'latex -interaction=nonstopmode %s > "%s"' %
(os.path.split(texfile)[-1], outfile))
mpl.verbose.report(command, 'debug')
exit_status = os.system(command)
try:
with open(outfile) as fh:
report = fh.read()
except IOError:
report = 'No latex error report available.'
try:
os.stat(dvifile)
exists = True
except OSError:
exists = False
if exit_status or not exists:
raise RuntimeError(
('LaTeX was not able to process the following '
'string:\n%s\nHere is the full report generated by '
'LaTeX: \n\n' % repr(tex)) + report)
else:
mpl.verbose.report(report, 'debug')
for fname in glob.glob(basefile + '*'):
if fname.endswith('dvi'):
pass
elif fname.endswith('tex'):
pass
else:
try:
os.remove(fname)
except OSError:
pass
return dvifile
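The glob loop at the end of `make_dvi` keeps only the .dvi and .tex files and best-effort-deletes every other latex byproduct sharing the basename. The same cleanup as a standalone helper (name mine):

```python
import glob
import os
import tempfile

def clean_intermediates(basefile, keep=('.dvi', '.tex')):
    # Remove latex byproducts (.log, .aux, .output, ...) that share the
    # cache basename, ignoring failed removals just like make_dvi does.
    for fname in glob.glob(basefile + '*'):
        if not fname.endswith(keep):
            try:
                os.remove(fname)
            except OSError:
                pass

# usage sketch against a throwaway directory
workdir = tempfile.mkdtemp()
base = os.path.join(workdir, 'abc123')
for ext in ('.dvi', '.tex', '.log', '.aux'):
    open(base + ext, 'w').close()
clean_intermediates(base)
```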
def make_dvi_preview(self, tex, fontsize):
| |
lock)
def process(self, ctx, iprot, oprot):
args = getReverseCompactContacts_args()
args.read(iprot)
iprot.readMessageEnd()
result = getReverseCompactContacts_result()
try:
result.success = self._handler([ctx, args.ids])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "getReverseCompactContacts", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "getReverseCompactContacts", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('getReverseCompactContacts', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "getReverseCompactContacts", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
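Every generated processor in this file repeats the same `process()` skeleton; only the method name and argument unpacking differ. The shared error-handling contract, condensed with stand-in exception types (these names are mine, not the real Thrift/Frugal classes):

```python
class DomainError(Exception):
    # stand-in for TalkException: an expected, serializable service error
    pass

class FrameworkError(Exception):
    # stand-in for TApplicationException: a protocol-level error
    pass

def run_handler(handler, result, errors):
    # Condensed version of the try/except ladder repeated in process():
    # domain errors ride back inside the result, framework errors are
    # written to the client and swallowed, anything else is written to
    # the client and then re-raised for the server to log.
    try:
        result['success'] = handler()
    except DomainError as e:
        result['e'] = e
    except FrameworkError as e:
        errors.append(e)
        return False
    except Exception as e:
        errors.append(e)
        raise
    return True
```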
class _getPaidCallAdStatus(FProcessorFunction):
def __init__(self, handler, lock):
super(_getPaidCallAdStatus, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = getPaidCallAdStatus_args()
args.read(iprot)
iprot.readMessageEnd()
result = getPaidCallAdStatus_result()
try:
result.success = self._handler([ctx])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "getPaidCallAdStatus", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "getPaidCallAdStatus", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('getPaidCallAdStatus', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "getPaidCallAdStatus", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _findContactByUseridWithoutAbuseBlockForChannel(FProcessorFunction):
def __init__(self, handler, lock):
super(_findContactByUseridWithoutAbuseBlockForChannel, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = findContactByUseridWithoutAbuseBlockForChannel_args()
args.read(iprot)
iprot.readMessageEnd()
result = findContactByUseridWithoutAbuseBlockForChannel_result()
try:
result.success = self._handler([ctx, args.userid])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "findContactByUseridWithoutAbuseBlockForChannel", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "findContactByUseridWithoutAbuseBlockForChannel", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('findContactByUseridWithoutAbuseBlockForChannel', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "findContactByUseridWithoutAbuseBlockForChannel", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _getGroupMemberMids(FProcessorFunction):
def __init__(self, handler, lock):
super(_getGroupMemberMids, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = getGroupMemberMids_args()
args.read(iprot)
iprot.readMessageEnd()
result = getGroupMemberMids_result()
try:
result.success = self._handler([ctx, args.groupId])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "getGroupMemberMids", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "getGroupMemberMids", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('getGroupMemberMids', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "getGroupMemberMids", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _sendMessageWithoutRelationship(FProcessorFunction):
def __init__(self, handler, lock):
super(_sendMessageWithoutRelationship, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = sendMessageWithoutRelationship_args()
args.read(iprot)
iprot.readMessageEnd()
result = sendMessageWithoutRelationship_result()
try:
result.success = self._handler([ctx, args.message])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "sendMessageWithoutRelationship", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "sendMessageWithoutRelationship", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('sendMessageWithoutRelationship', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "sendMessageWithoutRelationship", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _displayBuddySubscriberCountInBulk(FProcessorFunction):
def __init__(self, handler, lock):
super(_displayBuddySubscriberCountInBulk, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = displayBuddySubscriberCountInBulk_args()
args.read(iprot)
iprot.readMessageEnd()
result = displayBuddySubscriberCountInBulk_result()
try:
result.success = self._handler([ctx, args.mids])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "displayBuddySubscriberCountInBulk", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "displayBuddySubscriberCountInBulk", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('displayBuddySubscriberCountInBulk', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "displayBuddySubscriberCountInBulk", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _lookupRoomMembers(FProcessorFunction):
def __init__(self, handler, lock):
super(_lookupRoomMembers, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = lookupRoomMembers_args()
args.read(iprot)
iprot.readMessageEnd()
result = lookupRoomMembers_result()
try:
result.success = self._handler([ctx, args.roomId, args.mids])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "lookupRoomMembers", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "lookupRoomMembers", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('lookupRoomMembers', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "lookupRoomMembers", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _getFavoriteMidsForChannel(FProcessorFunction):
def __init__(self, handler, lock):
super(_getFavoriteMidsForChannel, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = getFavoriteMidsForChannel_args()
args.read(iprot)
iprot.readMessageEnd()
result = getFavoriteMidsForChannel_result()
try:
result.success = self._handler([ctx])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "getFavoriteMidsForChannel", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "getFavoriteMidsForChannel", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('getFavoriteMidsForChannel', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "getFavoriteMidsForChannel", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _getAllContactIdsForChannel(FProcessorFunction):
def __init__(self, handler, lock):
super(_getAllContactIdsForChannel, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = getAllContactIdsForChannel_args()
args.read(iprot)
iprot.readMessageEnd()
result = getAllContactIdsForChannel_result()
try:
result.success = self._handler([ctx])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "getAllContactIdsForChannel", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "getAllContactIdsForChannel", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('getAllContactIdsForChannel', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "getAllContactIdsForChannel", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _displayBuddySubscriberCount(FProcessorFunction):
def __init__(self, handler, lock):
super(_displayBuddySubscriberCount, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = displayBuddySubscriberCount_args()
args.read(iprot)
iprot.readMessageEnd()
result = displayBuddySubscriberCount_result()
try:
result.success = self._handler([ctx])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "displayBuddySubscriberCount", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "displayBuddySubscriberCount", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('displayBuddySubscriberCount', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "displayBuddySubscriberCount", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _getProfileForChannel(FProcessorFunction):
def __init__(self, handler, lock):
super(_getProfileForChannel, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = getProfileForChannel_args()
args.read(iprot)
iprot.readMessageEnd()
result = getProfileForChannel_result()
try:
result.success = self._handler([ctx])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "getProfileForChannel", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "getProfileForChannel", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('getProfileForChannel', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "getProfileForChannel", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _getUserTickets(FProcessorFunction):
def __init__(self, handler, lock):
super(_getUserTickets, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = getUserTickets_args()
args.read(iprot)
iprot.readMessageEnd()
result = getUserTickets_result()
try:
result.success = self._handler([ctx, args.userMids])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "getUserTickets", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "getUserTickets", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('getUserTickets', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "getUserTickets", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
class _getOAFriendMids(FProcessorFunction):
def __init__(self, handler, lock):
super(_getOAFriendMids, self).__init__(handler, lock)
def process(self, ctx, iprot, oprot):
args = getOAFriendMids_args()
args.read(iprot)
iprot.readMessageEnd()
result = getOAFriendMids_result()
try:
result.success = self._handler([ctx])
except TalkException as e:
result.e = e
except TApplicationException as ex:
with self._lock:
_write_application_exception(ctx, oprot, "getOAFriendMids", exception=ex)
return
except Exception as e:
with self._lock:
_write_application_exception(ctx, oprot, "getOAFriendMids", ex_code=TApplicationExceptionType.INTERNAL_ERROR, message=e.message)
raise
with self._lock:
try:
oprot.write_response_headers(ctx)
oprot.writeMessageBegin('getOAFriendMids', TMessageType.REPLY, 0)
result.write(oprot)
oprot.writeMessageEnd()
oprot.get_transport().flush()
except TTransportException as e:
# catch a request too large error because the TMemoryOutputBuffer always throws that if too much data is written
if e.type == TTransportExceptionType.REQUEST_TOO_LARGE:
raise _write_application_exception(ctx, oprot, "getOAFriendMids", ex_code=TApplicationExceptionType.RESPONSE_TOO_LARGE, message=e.args[0])
else:
raise e
########################### Visualization Methods
def plotThresholds(self, conDatNum, xlim=[-.01, .5], **kwargs):
"""
Function to sample the continuous data and plot the thresholds
calculated by the SVD call, along with a histogram of detex's best
estimate of the null space (see getFAS for more details)
Parameters
------
conDatNum : int
The number of continuous data chunks to use in the sampling,
duration of chunks defined in data fetcher
xlim : list (number, number)
The x limits on the plot (often it is useful to zoom in around 0)
**kwargs are passed to the getFAS call
"""
self.getFAS(conDatNum, **kwargs)
count = 0
for station in self.ssStations:
for ind, row in self.subspaces[station].iterrows():
beta_a, beta_b = row.FAS['betadist'][0:2]
plt.figure(count)
plt.subplot(2, 1, 1)
bins = np.mean(
[row.FAS['bins'][1:], row.FAS['bins'][:-1]], axis=0)
plt.plot(bins, row.FAS['hist'])
plt.title('Station %s %s' % (station, row.Name))
plt.axvline(row.Threshold, color='g')
beta = scipy.stats.beta.pdf(bins, beta_a, beta_b)
plt.plot(bins, beta * (max(row.FAS['hist']) / max(beta)), 'k')
plt.title('%s station %s' % (row.Name, row.Station))
plt.xlim(xlim)
plt.ylabel('Count')
plt.subplot(2, 1, 2)
bins = np.mean(
[row.FAS['bins'][1:], row.FAS['bins'][:-1]], axis=0)
plt.plot(bins, row.FAS['hist'])
plt.axvline(row.Threshold, color='g')
plt.plot(bins, beta * (max(row.FAS['hist']) / max(beta)), 'k')
plt.xlabel('Detection Statistic')
plt.ylabel('Count')
plt.semilogy()
plt.ylim(ymin=10 ** -1)
plt.xlim(xlim)
count += 1
def plotFracEnergy(self):
"""
Method to plot the fractional energy captured by the subspace for
various dimensions of representation. Each event is plotted as a grey
dotted line, the average as a red solid line, and the chosen dimension
of representation as a solid green vertical line.
Similar to Harris 2006 Fig 8
"""
for a, station in enumerate(self.ssStations):
f = plt.figure(a + 1)
f.set_figheight(1.85 * len(self.subspaces[station]))
for ind, row in self.subspaces[station].iterrows():
if not isinstance(row.FracEnergy, dict):
msg = 'fractional energy not defined, call SVD'
detex.log(__name__, msg, level='error')
plt.subplot(len(self.subspaces[station]), 1, ind + 1)
for event in row.Events:
plt.plot(row.FracEnergy[event], '--', color='0.6')
plt.plot(row.FracEnergy['Average'], 'r')
plt.axvline(row.NumBasis, 0, 1, color='g')
plt.ylim([0, 1.1])
plt.title('Station %s, %s' % (row.Station, row.Name))
f.subplots_adjust(hspace=.4)
f.text(0.5, 0.06, 'Dimension of Representation', ha='center')
f.text(0.04, 0.5, 'Fraction of Energy Captured',
va='center', rotation='vertical')
plt.show()
def plotAlignedEvents(self): # plot aligned subspaces in SubSpaces object
"""
Plots the aligned events for each station in each cluster.
Will trim waveforms if trim times (by pickTimes or attachPickTimes)
are defined.
"""
for a, station in enumerate(self.ssStations):
for ind, row in self.subspaces[station].iterrows():
plt.figure(figsize=[10, .9 * len(row.Events)])
# f.set_figheight(1.85 * len(row.Events))
# plt.subplot(len(self.subspaces[station]), 1, ind + 1)
events = row.Events
stKeys = row.SampleTrims.keys() # sample trim keys
for evenum, eve in enumerate(events):
# plt.subplot(len(self.subspaces[station]), 1, evenum + 1)
aliTD = row.AlignedTD[eve] # aligned wf for event eve
if 'Starttime' in stKeys and 'Endtime' in stKeys:
start = row.SampleTrims['Starttime']
stop = row.SampleTrims['Endtime']
aliwf = aliTD[start: stop]
else:
aliwf = row.AlignedTD[eve]
plt.plot(aliwf / (2 * max(aliwf)) + 1.5 * evenum, c='k')
plt.xlim([0, len(aliwf)])
plt.ylim(-1, 1.5 * evenum + 1)
plt.xticks([])
plt.yticks([])
plt.title('Station %s, %s, %d events' % (station, row.Name, len(events)))
plt.show()
def plotBasisVectors(self, onlyused=False):
"""
Plots the basis vectors selected after performing the SVD
If SVD has not been called will throw error
Parameters
------------
onlyused : bool
If True, only the selected basis vectors are plotted. See
SVD for how detex selects basis vectors.
If False, all are plotted (used in blue, unused in grey)
"""
if not self.subspaces.values()[0].iloc[0].SVDdefined:
msg = 'SVD not performed, call SVD before plotting basis vectors'
detex.log(__name__, msg, level='error')
for subnum, station in enumerate(self.ssStations):
subsp = self.subspaces[station]
for ind, row in subsp.iterrows():
num_wfs = len(row.UsedSVDKeys) if onlyused else len(row.SVD)
keyz = row.SVD.keys()
keyz.sort(reverse=True)
keyz = keyz[:num_wfs]
plt.figure(figsize=[10, .9 * num_wfs])
for keynum, key in enumerate(keyz):
wf = row.SVD[key] / (2 * max(row.SVD[key])) - 1.5 * keynum
c = 'b' if keynum < len(row.UsedSVDKeys) else '.5'
plt.plot(wf, c=c)
plt.ylim(-1.5 * keynum - 1, 1)
plt.yticks([])
plt.xticks([])
plt.title('%s station %s' % (row.Name, row.Station))
def plotOffsetTimes(self):
"""
Function to loop through each station/subspace pair and make
histograms of offset times
"""
count = 1
for station in self.ssStations:
for ind, row in self.subspaces[station].iterrows():
if len(row.SampleTrims.keys()) < 1:
msg = 'subspaces must be trimmed before plotting offsets'
detex.log(__name__, msg, level='error')
plt.figure(count)
keys = row.Events
offsets = [row.Stats[x]['offset'] for x in keys]
plt.hist(offsets)
plt.title('%s %s' % (row.Station, row.Name))
plt.figure(count + 1)
numEvs = len(row.Events)
ranmin = np.zeros(numEvs)
ranmax = np.zeros(numEvs)
orsamps = np.zeros(numEvs)
for evenum, eve in enumerate(row.Events):
tem = self.clusters.temkey[
self.clusters.temkey.NAME == eve].iloc[0]
condat = row.AlignedTD[
eve] / max(2 * abs(row.AlignedTD[eve])) + evenum + 1
Nc, Sr = row.Stats[eve]['Nc'], row.Stats[
eve]['sampling_rate']
starTime = row.Stats[eve]['starttime']
ortime = obspy.core.UTCDateTime(tem.TIME).timestamp
orsamps[evenum] = row.SampleTrims[
'Starttime'] - (starTime - ortime) * Nc * Sr
plt.plot(condat, 'k')
plt.axvline(row.SampleTrims['Starttime'], c='g')
plt.plot(orsamps[evenum], evenum + 1, 'r*')
ran = row.SampleTrims['Endtime'] - orsamps[evenum]
ranmin[evenum] = orsamps[evenum] - ran * .1
ranmax[evenum] = row.SampleTrims['Endtime'] + ran * .1
plt.xlim(int(min(ranmin)), int(max(ranmax)))
plt.axvline(min(orsamps), c='r')
plt.axvline(max(orsamps), c='r')
count += 2
############################# Pick Times functions
def pickTimes(self, duration=30, traceLimit=15, repick=False,
subspace=True, singles=True):
"""
Calls a modified version of obspyck (https://github.com/megies/obspyck),
a GUI for picking phases, so the user can manually select start times
(trim) of unclustered and clustered events.
Trimming down each waveform group to include only event phases,
and not pre- and post-event noise, will significantly decrease the
runtime of the subspace detection (called with the detex method).
Trimming is required for singletons; any singleton without trim
times will not be used as a detector.
Parameters
--------------
duration : real number
the time after the first pick (in seconds) to trim waveforms.
The fact that the streams are multiplexed is taken into account.
If None is passed then the last pick will be used as the end time
for truncating waveforms.
traceLimit : int
Limits the number of traces that will show up to be manually
picked to traceLimit events. Avoids bogging down and/or killing
the GUI with too many events.
repick : boolean
If true repick times that already have sample trim times, else
only pick those that do not.
subspace : boolean
If true pick subspaces
singles : boolean
If true pick singletons
"""
qApp = PyQt4.QtGui.QApplication(sys.argv)
if subspace:
self._pickTimes(self.subspaces, duration, traceLimit,
qApp, repick=repick)
if singles:
self._pickTimes(self.singles, duration, traceLimit, qApp,
issubspace=False, repick=repick)
def _pickTimes(self, trdfDict, duration, traceLimit, qApp,
issubspace=True, repick=False):
"""
Function to initiate the GUI for picking, called by pickTimes
"""
for sta in trdfDict.keys():
for ind, row in trdfDict[sta].iterrows():
if not row.SampleTrims or repick: # if not picked or repick
# Make a modified obspy stream to pass to streamPick
st = self._makeOpStream(ind, row, traceLimit)
Pks = None # This is needed or it crashes OS X
Pks = detex.streamPick.streamPick(st, ap=qApp)
d1 = {}
for b in Pks._picks:
if b: # if any picks made
d1[b.phase_hint] = b.time.timestamp
if len(d1.keys()) > 0: # if any picks made
# get sample rate and number of chans
sr = row.Stats[row.Events[0]]['sampling_rate']
Nc = row.Stats[row.Events[0]]['Nc']
# get sample divisible by NC to keep traces aligned
fp = int(min(d1.values())) # first picked phase
d1['Starttime'] = fp - fp % Nc
# if the duration parameter is defined (it is usually
# better to leave it defined)
stime = d1['Starttime']
if duration:
etime = stime + int(duration * sr * Nc)
d1['Endtime'] = etime
d1['DurationSeconds'] = duration
else:
etime = int(max(d1.values()))
d1['Endtime'] = etime
dursecs = (etime - stime) / (sr * Nc)
d1['DurationSeconds'] = dursecs
trdfDict[sta].SampleTrims[ind] = d1
for event in row.Events: # update starttimes
sspa = trdfDict[sta]
stimeOld = sspa.Stats[ind][event]['starttime']
# get updated start time
stN = stimeOld + d1['Starttime'] / (Nc * sr)
ot = trdfDict[sta].Stats[ind][event]['origintime']
offset = stN - ot
trdfDict[sta].Stats[ind][event]['starttime'] = stN
trdfDict[sta].Stats[ind][event]['offset'] = offset
if not Pks.KeepGoing:
msg = 'aborting picking, progress saved'
detex.log(__name__, msg, pri=1)
return None
self._updateOffsets()
def _makeOpStream(self, ind, row, traceLimit):
"""
Make an obspy stream of the multiplexed data stored in main detex
DataFrame
"""
st = obspy.core.Stream()
count = 0
if 'AlignedTD' in row: # if this is a subspace
for key in row.Events:
if count | |
# vnetcon/curvy: win64-postgresql/pgAdmin 4/web/pgadmin/browser/server_groups/servers/databases/schemas/foreign_tables/__init__.py
##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2020, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################
"""Implements the Foreign Table Module."""
import sys
import traceback
from functools import wraps
import simplejson as json
from flask import render_template, make_response, request, jsonify, \
current_app
from flask_babelex import gettext
import pgadmin.browser.server_groups.servers.databases as databases
from config import PG_DEFAULT_DRIVER
from pgadmin.browser.server_groups.servers.databases.schemas.utils import \
SchemaChildModule, DataTypeReader
from pgadmin.browser.server_groups.servers.databases.utils import \
parse_sec_labels_from_db
from pgadmin.browser.server_groups.servers.utils import parse_priv_from_db, \
parse_priv_to_db
from pgadmin.browser.utils import PGChildNodeView
from pgadmin.utils import IS_PY2
from pgadmin.utils.ajax import make_json_response, internal_server_error, \
make_response as ajax_response, gone
from pgadmin.utils.compile_template_name import compile_template_path
from pgadmin.utils.driver import get_driver
from pgadmin.tools.schema_diff.node_registry import SchemaDiffRegistry
from pgadmin.tools.schema_diff.compare import SchemaDiffObjectCompare
# If we are in Python3
if not IS_PY2:
unicode = str
class ForeignTableModule(SchemaChildModule):
"""
class ForeignTableModule(SchemaChildModule):
This class represents The Foreign Table Module.
Methods:
-------
* __init__(*args, **kwargs)
- Initialize the Foreign Table Module.
* get_nodes(gid, sid, did, scid)
- Generate the Foreign Table collection node.
* node_inode():
- Override this property to make the Foreign Table node as leaf node.
* script_load()
- Load the module script for Foreign Table, when schema node is
initialized.
"""
NODE_TYPE = 'foreign_table'
COLLECTION_LABEL = gettext("Foreign Tables")
def __init__(self, *args, **kwargs):
super(ForeignTableModule, self).__init__(*args, **kwargs)
self.min_ver = None
self.max_ver = None
self.min_gpdbver = 1000000000
def get_nodes(self, gid, sid, did, scid):
"""
Generate the Foreign Table collection node.
"""
yield self.generate_browser_collection_node(scid)
@property
def node_inode(self):
"""
Make the node as leaf node.
"""
return False
@property
def script_load(self):
"""
Load the module script for foreign table, when the
schema node is initialized.
"""
return databases.DatabaseModule.NODE_TYPE
blueprint = ForeignTableModule(__name__)
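The module class above mainly wires collection-node generation into the browser tree: get_nodes yields one collection node per schema. A minimal stand-alone sketch of that generator pattern (FakeChildModule and its node dict are hypothetical stand-ins, not the real pgAdmin API):

```python
# Hypothetical stand-in for the SchemaChildModule pattern used above; the
# real class carries much more browser-integration machinery.
class FakeChildModule(object):
    NODE_TYPE = 'foreign_table'
    COLLECTION_LABEL = 'Foreign Tables'

    def generate_browser_collection_node(self, scid):
        # The real method builds a JSON-serializable browser-tree node.
        return {'type': self.NODE_TYPE,
                'label': self.COLLECTION_LABEL,
                'schema_id': scid}

    def get_nodes(self, gid, sid, did, scid):
        # A generator, like ForeignTableModule.get_nodes above.
        yield self.generate_browser_collection_node(scid)

nodes = list(FakeChildModule().get_nodes(1, 1, 1, 42))
```

The browser consumes the generator lazily; here it yields exactly one collection node for schema id 42.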
class ForeignTableView(PGChildNodeView, DataTypeReader,
SchemaDiffObjectCompare):
"""
class ForeignTableView(PGChildNodeView)
This class inherits PGChildNodeView to get the different routes for
the module.
The class is responsible for Create, Read, Update and Delete operations on
the Foreign Table.
Methods:
-------
* validate_request(f):
- Works as a decorator.
Validates the request data for create, update and modified SQL.
* check_precondition(f):
- Works as a decorator.
- Checks database connection status.
- Attach connection object and template path.
* list(gid, sid, did, scid):
- List the Foreign Table.
* nodes(gid, sid, did, scid):
- Returns all the Foreign Table to generate Nodes in the browser.
* properties(gid, sid, did, scid, foid):
- Returns the Foreign Table properties.
* get_collations(gid, sid, did, scid, foid=None):
- Returns Collations.
* get_types(gid, sid, did, scid, foid=None):
- Returns Data Types.
* get_foreign_servers(gid, sid, did, scid, foid=None):
- Returns the Foreign Servers.
* get_tables(gid, sid, did, scid, foid=None):
- Returns the Foreign Tables as well as Plain Tables.
* get_columns(gid, sid, did, scid, foid=None):
- Returns the Table Columns.
* create(gid, sid, did, scid):
- Creates a new Foreign Table object.
* update(gid, sid, did, scid, foid):
- Updates the Foreign Table object.
* delete(gid, sid, did, scid, foid):
- Drops the Foreign Table object.
* sql(gid, sid, did, scid, foid):
- Returns the SQL for the Foreign Table object.
* msql(gid, sid, did, scid, foid=None):
- Returns the modified SQL.
* get_sql(gid, sid, data, scid, foid=None):
- Generates the SQL statements to create/update the Foreign Table object.
* dependents(gid, sid, did, scid, foid):
- Returns the dependents for the Foreign Table object.
* dependencies(gid, sid, did, scid, foid):
- Returns the dependencies for the Foreign Table object.
* select_sql(gid, sid, did, scid, foid):
- Returns sql for Script
* insert_sql(gid, sid, did, scid, foid):
- Returns sql for Script
* update_sql(gid, sid, did, scid, foid):
- Returns sql for Script
* delete_sql(gid, sid, did, scid, foid):
- Returns sql for Script
* compare(**kwargs):
- This function will compare the foreign table nodes from two different
schemas.
"""
node_type = blueprint.node_type
parent_ids = [
{'type': 'int', 'id': 'gid'},
{'type': 'int', 'id': 'sid'},
{'type': 'int', 'id': 'did'},
{'type': 'int', 'id': 'scid'}
]
ids = [
{'type': 'int', 'id': 'foid'}
]
operations = dict({
'obj': [
{'get': 'properties', 'delete': 'delete', 'put': 'update'},
{'get': 'list', 'post': 'create', 'delete': 'delete'}
],
'delete': [{'delete': 'delete'}, {'delete': 'delete'}],
'children': [{'get': 'children'}],
'nodes': [{'get': 'node'}, {'get': 'nodes'}],
'sql': [{'get': 'sql'}],
'msql': [{'get': 'msql'}, {'get': 'msql'}],
'stats': [{'get': 'statistics'}],
'dependency': [{'get': 'dependencies'}],
'dependent': [{'get': 'dependents'}],
'get_collations': [
{'get': 'get_collations'},
{'get': 'get_collations'}
],
'get_types': [{'get': 'types'}, {'get': 'types'}],
'get_foreign_servers': [{'get': 'get_foreign_servers'},
{'get': 'get_foreign_servers'}],
'get_tables': [{'get': 'get_tables'}, {'get': 'get_tables'}],
'get_columns': [{'get': 'get_columns'}, {'get': 'get_columns'}],
'select_sql': [{'get': 'select_sql'}],
'insert_sql': [{'get': 'insert_sql'}],
'update_sql': [{'get': 'update_sql'}],
'delete_sql': [{'get': 'delete_sql'}],
'compare': [{'get': 'compare'}, {'get': 'compare'}]
})
def validate_request(f):
"""
Works as a decorator.
Validates the request data for create, update and modified SQL.
Required Args:
name: Name of the Foreign Table
ftsrvname: Foreign Server Name
Neither argument is validated in the update action.
"""
@wraps(f)
def wrap(self, **kwargs):
data = {}
if request.data:
req = json.loads(request.data, encoding='utf-8')
else:
req = request.args or request.form
if 'foid' not in kwargs:
required_args = [
'name',
'ftsrvname'
]
for arg in required_args:
if arg not in req or req[arg] == '':
return make_json_response(
status=410,
success=0,
errormsg=gettext(
"Could not find the required parameter (%s)." %
arg
)
)
try:
list_params = []
if request.method == 'GET':
list_params = ['constraints', 'columns', 'ftoptions',
'seclabels', 'inherits', 'acl']
else:
list_params = ['inherits']
for key in req:
if (
key in list_params and req[key] != '' and
req[key] is not None
):
# Converts string into python list as expected.
data[key] = []
if type(req[key]) != list or len(req[key]) != 0:
data[key] = json.loads(req[key], encoding='utf-8')
if key == 'inherits':
# Convert Table ids from unicode/string to int
# and make tuple for 'IN' query.
inherits = tuple([int(x) for x in data[key]])
if len(inherits) == 1:
# A one-element Python tuple renders with a
# trailing comma, e.g. "(42,)", which is not
# valid SQL, so build the string explicitly.
inherits = "(" + str(inherits[0]) + ")"
if inherits:
# Fetch Table Names from their respective Ids,
# as we need Table names to generate the SQL.
SQL = render_template(
"/".join([self.template_path,
'get_tables.sql']),
attrelid=inherits)
status, res = self.conn.execute_dict(SQL)
if not status:
return internal_server_error(errormsg=res)
if 'inherits' in res['rows'][0]:
data[key] = res['rows'][0]['inherits']
else:
data[key] = []
elif key == 'typnotnull':
data[key] = True if (req[key] == 'true' or req[key]
is True) else False if \
(req[key] == 'false' or req[key] is False) else ''
else:
data[key] = req[key]
except Exception as e:
return internal_server_error(errormsg=str(e))
self.request = data
return f(self, **kwargs)
return wrap
def check_precondition(f):
"""
Works as a decorator.
Checks the database connection status.
Attaches the connection object and template path to the class object.
"""
@wraps(f)
def wrap(*args, **kwargs):
self = args[0]
driver = get_driver(PG_DEFAULT_DRIVER)
self.manager = driver.connection_manager(kwargs['sid'])
# Get database connection
self.conn = self.manager.connection(did=kwargs['did'])
self.qtIdent = driver.qtIdent
# Set template path for sql scripts depending
# on the server version.
self.template_path = compile_template_path(
'foreign_tables/sql/',
self.manager.server_type,
self.manager.version
)
return f(*args, **kwargs)
return wrap
@check_precondition
def list(self, gid, sid, did, scid):
"""
List all the Foreign Tables.
Args:
gid: Server Group Id
sid: Server Id
did: Database Id
scid: Schema Id
"""
SQL = render_template("/".join([self.template_path, 'node.sql']),
scid=scid)
status, res = self.conn.execute_dict(SQL)
if not status:
return internal_server_error(errormsg=res)
return ajax_response(
response=res['rows'],
status=200
)
@check_precondition
def nodes(self, gid, sid, did, scid):
"""
Returns the Foreign Tables to generate the Nodes.
Args:
gid: Server Group Id
sid: Server Id
did: Database Id
scid: Schema Id
"""
res = []
SQL = render_template("/".join([self.template_path,
'node.sql']), scid=scid)
status, rset = self.conn.execute_2darray(SQL)
if not status:
return internal_server_error(errormsg=rset)
for row in rset['rows']:
res.append(
self.blueprint.generate_browser_node(
row['oid'],
scid,
row['name'],
icon="icon-foreign_table"
))
return make_json_response(
data=res,
status=200
)
@check_precondition
def node(self, gid, sid, did, scid, foid):
"""
Returns the Foreign Tables to generate the Nodes.
Args:
gid: Server Group Id
sid: Server Id
did: Database Id
scid: Schema Id
foid: Foreign Table Id
"""
SQL = render_template("/".join([self.template_path,
'node.sql']), foid=foid)
status, rset = self.conn.execute_2darray(SQL)
if not status:
return internal_server_error(errormsg=rset)
for row in rset['rows']:
return make_json_response(
data=self.blueprint.generate_browser_node(
row['oid'],
scid,
row['name'],
icon="icon-foreign_table"
),
status=200
)
return gone(gettext(
'Could not find the specified foreign table.'
))
@check_precondition
def | |
import os, sys, time, requests, zipfile, StringIO
import h2o_args
# from h2o_cmd import runInspect, infoFromSummary
import h2o_cmd, h2o_util
import h2o_browse as h2b
import h2o_print as h2p
from h2o_objects import H2O
from h2o_test import verboseprint, dump_json, check_sandbox_for_errors, get_sandbox_name, log
# print "h2o_methods"
def check_params_update_kwargs(params_dict, kw, function, print_params):
# only update existing keys in params_dict; don't add new ones
# raise on anything else, as it should come from the model (propagating what RF used)
for k in kw:
if k in params_dict:
params_dict[k] = kw[k]
else:
raise Exception("illegal parameter '%s' in %s" % (k, function))
if print_params:
print "%s parameters:" % function, params_dict
sys.stdout.flush()
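check_params_update_kwargs only overwrites keys that already exist in params_dict and raises on anything unknown. A Python 3 restatement of the same guard (the surrounding module is Python 2, so this is illustrative only):

```python
def check_params(params_dict, kw, function):
    # Same logic as check_params_update_kwargs above, minus the printing.
    for k in kw:
        if k in params_dict:
            params_dict[k] = kw[k]
        else:
            raise Exception("illegal parameter '%s' in %s" % (k, function))

params = {'k': 1, 'source': None}
check_params(params, {'k': 3}, 'kmeans')      # ok: 'k' already exists
assert params['k'] == 3
try:
    check_params(params, {'bogus': 1}, 'kmeans')
except Exception as e:
    print(e)  # illegal parameter 'bogus' in kmeans
```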
def get_cloud(self, noExtraErrorCheck=False, timeoutSecs=10):
# hardwire it to allow a 60 second timeout
a = self.do_json_request('Cloud.json', noExtraErrorCheck=noExtraErrorCheck, timeout=timeoutSecs)
version = a['version']
if version and version!='(unknown)' and version!='null' and version!='none':
if not version.startswith('2'):
h2p.red_print("h2o version at node[0] doesn't look like an h2o version (should start with 2): %s" % version)
consensus = a['consensus']
locked = a['locked']
cloud_size = a['cloud_size']
cloud_name = a['cloud_name']
node_name = a['node_name']
node_id = self.node_id
verboseprint('%s%s %s%s %s%s %s%s %s%s' % (
"\tnode_id: ", node_id,
"\tcloud_size: ", cloud_size,
"\tconsensus: ", consensus,
"\tlocked: ", locked,
"\tversion: ", version,
))
return a
def h2o_log_msg(self, message=None, timeoutSecs=15):
if 1 == 0:
return
if not message:
message = "\n"
message += "\n#***********************"
message += "\npython_test_name: " + h2o_args.python_test_name
message += "\n#***********************"
params = {'message': message}
self.do_json_request('2/LogAndEcho', params=params, timeout=timeoutSecs)
def get_timeline(self):
return self.do_json_request('Timeline.json')
# Shutdown url is like a reset button. It doesn't send a response before it kills stuff.
# That is safer if random things are wedged, rather than requiring a response,
# so the request library might retry and get an exception. Allow that.
def shutdown_all(self):
try:
self.do_json_request('Shutdown.json', noExtraErrorCheck=True)
except:
pass
# don't want delays between sending these to each node
# if you care, wait after you send them to each node
# Seems like it's not so good to just send to one node
# time.sleep(1) # a little delay needed?
return (True)
def put_value(self, value, key=None, repl=None):
return self.do_json_request(
'PutValue.json',
params={"value": value, "key": key, "replication_factor": repl},
extraComment=str(value) + "," + str(key) + "," + str(repl))
# {"Request2":0,"response_info":i
# {"h2o":"pytest-kevin-4530","node":"/192.168.0.37:54321","time":0,"status":"done","redirect_url":null},
# "levels":[null,null,null,null]}
# FIX! what is this for? R uses it. Get one per col? maybe something about enums
def levels(self, source=None):
return self.do_json_request(
'2/Levels2.json',
params={"source": source},
)
def export_files(self, print_params=True, timeoutSecs=60, **kwargs):
params_dict = {
'src_key': None,
'path': None,
'force': None,
}
check_params_update_kwargs(params_dict, kwargs, 'export_files', print_params)
return self.do_json_request(
'2/ExportFiles.json',
timeout=timeoutSecs,
params=params_dict,
)
def put_file(self, f, key=None, timeoutSecs=60):
if key is None:
key = os.path.basename(f)
### print "putfile specifying this key:", key
fileObj = open(f, 'rb')
resp = self.do_json_request(
'2/PostFile.json',
cmd='post',
timeout=timeoutSecs,
params={"key": key},
files={"file": fileObj},
extraComment=str(f))
verboseprint("\nput_file response: ", dump_json(resp))
fileObj.close()
return key
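put_file defaults the destination key to the file's basename and attaches the open file as multipart form data under the "file" field. A sketch of just that request-building step (build_put_file_request is a hypothetical helper, not part of the h2o API):

```python
import os

def build_put_file_request(path, key=None):
    # Mirrors put_file above: key defaults to the file's basename, and the
    # open file object rides along as multipart form data.
    if key is None:
        key = os.path.basename(path)
    params = {"key": key}
    files = {"file": open(path, "rb")}  # caller closes it, as put_file does
    return params, files
```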
# noise is a 2-tuple ("StoreView", None): a url plus args to hit during polling to create noise,
# so we can create noise with different urls, and different params to each url.
# no noise if None
def poll_url(self, response,
timeoutSecs=10, retryDelaySecs=0.5, initialDelaySecs=0, pollTimeoutSecs=180,
noise=None, benchmarkLogging=None, noPoll=False, reuseFirstPollUrl=False, noPrint=False):
verboseprint('poll_url input: response:', dump_json(response))
### print "poll_url: pollTimeoutSecs", pollTimeoutSecs
### print "at top of poll_url, timeoutSecs: ", timeoutSecs
# for the rev 2 stuff..the job_key, destination_key and redirect_url are just in the response
# look for 'response'..if not there, assume the rev 2
def get_redirect_url(response):
url = None
params = None
# StoreView has the old style, while beta_features use the new style
if 'response_info' in response:
response_info = response['response_info']
if 'redirect_url' not in response_info:
raise Exception("Response during polling must have 'redirect_url'\n%s" % dump_json(response))
if response_info['status'] != 'done':
redirect_url = response_info['redirect_url']
if redirect_url:
url = self.url(redirect_url)
params = None
else:
if response_info['status'] != 'done':
raise Exception(
"'redirect_url' during polling is null but status!='done': \n%s" % dump_json(response))
else:
if 'response' not in response:
raise Exception("'response' not in response.\n%s" % dump_json(response))
if response['response']['status'] != 'done':
if 'redirect_request' not in response['response']:
raise Exception("'redirect_request' not in response. \n%s" % dump_json(response))
url = self.url(response['response']['redirect_request'])
params = response['response']['redirect_request_args']
return (url, params)
# if we never poll
msgUsed = None
if 'response_info' in response: # trigger v2 for GBM always?
status = response['response_info']['status']
progress = response.get('progress', "")
else:
r = response['response']
status = r['status']
progress = r.get('progress', "")
doFirstPoll = status != 'done'
(url, params) = get_redirect_url(response)
# no need to recreate the string for messaging, in the loop..
if params:
paramsStr = '&'.join(['%s=%s' % (k, v) for (k, v) in params.items()])
else:
paramsStr = ''
# FIX! don't do JStack noise for tests that ask for it. JStack seems to have problems
noise_enable = noise and noise != ("JStack", None)
if noise_enable:
print "Using noise during poll_url:", noise
# noise_json should be like "Storeview"
(noise_json, noiseParams) = noise
noiseUrl = self.url(noise_json + ".json")
if noiseParams is None:
noiseParamsStr = ""
else:
noiseParamsStr = '&'.join(['%s=%s' % (k, v) for (k, v) in noiseParams.items()])
start = time.time()
count = 0
if initialDelaySecs:
time.sleep(initialDelaySecs)
# can end with status = 'redirect' or 'done'
# Update: on DRF2, the first RF redirects to progress. So we should follow that, and follow any redirect to view?
# so for v2, we'll always follow redirects?
# For v1, we're not forcing the first status to be 'poll' now..so it could be redirect or done?(NN score? if blocking)
# Don't follow the Parse redirect to Inspect, because we want parseResult['destination_key'] to be the end.
# note this doesn't affect polling with Inspect? (since it doesn't redirect ?
while status == 'poll' or doFirstPoll or (status == 'redirect' and 'Inspect' not in url):
count += 1
if ((time.time() - start) > timeoutSecs):
# show what we're polling with
emsg = "Exceeded timeoutSecs: %d secs while polling. " % timeoutSecs + \
"status: %s, url: %s?%s" % (status, url, paramsStr)
raise Exception(emsg)
if benchmarkLogging:
import h2o
h2o.cloudPerfH2O.get_log_save(benchmarkLogging)
# every other one?
create_noise = noise_enable and ((count % 2) == 0)
if create_noise:
urlUsed = noiseUrl
paramsUsed = noiseParams
paramsUsedStr = noiseParamsStr
msgUsed = "\nNoise during polling with"
else:
urlUsed = url
paramsUsed = params
paramsUsedStr = paramsStr
msgUsed = "\nPolling with"
print status, progress, urlUsed
time.sleep(retryDelaySecs)
response = self.do_json_request(fullUrl=urlUsed, timeout=pollTimeoutSecs, params=paramsUsed)
verboseprint(msgUsed, urlUsed, paramsUsedStr, "Response:", dump_json(response))
# hey, check the sandbox if we've been waiting a long time...rather than wait for timeout
if ((count % 6) == 0):
check_sandbox_for_errors(python_test_name=h2o_args.python_test_name)
if (create_noise):
# this guarantees the loop is done, so we don't need to worry about
# a 'return r' being interpreted from a noise response
status = 'poll'
progress = ''
else:
doFirstPoll = False
status = response['response_info']['status']
progress = response.get('progress', "")
# get the redirect url
if not reuseFirstPollUrl: # reuse url for all v1 stuff
(url, params) = get_redirect_url(response)
if noPoll:
return response
# won't print if we didn't poll
if msgUsed:
verboseprint(msgUsed, urlUsed, paramsUsedStr, "Response:", dump_json(response))
return response
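The polling loop above can be reduced to a small, standalone sketch: retry with a fixed delay until the status leaves 'poll', raising on timeout. The names below (`poll_until_done`, `fetch`) are illustrative, not the h2o API.

```python
import time

def poll_until_done(fetch, timeout_secs=10.0, retry_delay_secs=0.01):
    """Call fetch() repeatedly until its status is no longer 'poll'."""
    start = time.time()
    response = fetch()
    while response.get("status") == "poll":
        if time.time() - start > timeout_secs:
            raise Exception("Exceeded timeout_secs: %d while polling" % timeout_secs)
        time.sleep(retry_delay_secs)
        response = fetch()
    return response

# Fake endpoint: reports 'poll' twice, then 'done'.
responses = iter([{"status": "poll"}, {"status": "poll"}, {"status": "done"}])
print(poll_until_done(lambda: next(responses)))  # {'status': 'done'}
```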
# this is only for 2 (fvec)
def kmeans_view(self, model=None, timeoutSecs=30, **kwargs):
# defaults
params_dict = {
'_modelKey': model,
}
browseAlso = kwargs.get('browseAlso', False)
# only lets these params thru
check_params_update_kwargs(params_dict, kwargs, 'kmeans_view', print_params=True)
print "\nKMeans2ModelView params list:", params_dict
a = self.do_json_request('2/KMeans2ModelView.json', timeout=timeoutSecs, params=params_dict)
# kmeans_score doesn't need polling?
verboseprint("\nKMeans2Model View result:", dump_json(a))
if browseAlso or h2o_args.browse_json:
print "Redoing the KMeans2ModelView through the browser, no results saved though"
h2b.browseJsonHistoryAsUrlLastMatch('KMeans2ModelView')
time.sleep(5)
return a
# additional params include: cols=.
# No need to include cols in params_dict, since it doesn't need a default.
# FIX! cols should be renamed in the test for fvec.
def kmeans(self, key, key2=None,
timeoutSecs=300, retryDelaySecs=0.2, initialDelaySecs=None, pollTimeoutSecs=180,
noise=None, benchmarkLogging=None, noPoll=False, **kwargs):
# defaults
# KMeans has more params than shown here
# KMeans2 has these params?
# max_iter=100&max_iter2=1&iterations=0
params_dict = {
'initialization': 'Furthest',
'k': 1,
'source': key,
'destination_key': key2,
'seed': None,
'cols': None,
'ignored_cols': None,
'ignored_cols_by_name': None,
'max_iter': None,
'normalize': None,
'drop_na_cols': None,
}
if key2 is not None: params_dict['destination_key'] = key2
browseAlso = kwargs.get('browseAlso', False)
# only lets these params thru
check_params_update_kwargs(params_dict, kwargs, 'kmeans', print_params=True)
algo = '2/KMeans2'
print "\n%s params list:" % algo, params_dict
a1 = self.do_json_request(algo + '.json',
timeout=timeoutSecs, params=params_dict)
if noPoll:
return a1
a1 = self.poll_url(a1, timeoutSecs=timeoutSecs, retryDelaySecs=retryDelaySecs,
initialDelaySecs=initialDelaySecs, pollTimeoutSecs=pollTimeoutSecs,
noise=noise, benchmarkLogging=benchmarkLogging)
print "For now, always dumping the last polled kmeans result ..are the
import os
import sys
from bcbio.rnaseq import (featureCounts, cufflinks, oncofuse, count, dexseq,
express, variation, stringtie, sailfish, spikein, pizzly, ericscript,
kallisto, salmon)
from bcbio.rnaseq.gtf import tx2genefile
from bcbio.ngsalign import bowtie2, alignprep
from bcbio.variation import joint, multi, vardict, vcfanno, vcfutils
import bcbio.pipeline.datadict as dd
from bcbio.utils import filter_missing, flatten, to_single_data
from bcbio.log import logger
def fast_rnaseq(samples, run_parallel):
samples = run_parallel("run_salmon_index", [samples])
samples = run_parallel("run_salmon_reads", samples)
samples = run_parallel("run_counts_spikein", samples)
samples = spikein.combine_spikein(samples)
return samples
def singlecell_rnaseq(samples, run_parallel):
quantifier = dd.get_in_samples(samples, dd.get_singlecell_quantifier)
quantifier = quantifier.lower()
samples = run_parallel("run_umi_transform", samples)
demultiplexed = run_parallel("demultiplex_samples", samples)
# break demultiplexed lanes into their own samples
samples = []
for lane in demultiplexed:
for index in lane:
samples.append([index])
samples = run_parallel("run_filter_barcodes", samples)
samples = run_parallel("run_barcode_histogram", samples)
if quantifier == "rapmap":
samples = run_parallel("run_rapmap_align", samples)
samples = run_parallel("run_tagcount", samples)
samples = run_parallel("run_concatenate_sparse_counts", [samples])
elif quantifier == "kallisto":
samples = run_parallel("run_kallisto_singlecell", samples)
else:
logger.error(("%s is not supported for singlecell RNA-seq "
"quantification." % quantifier))
sys.exit(1)
return samples
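The lane-flattening step in `singlecell_rnaseq` above, shown standalone with toy data: each demultiplexed index in each lane becomes its own one-element sample entry (the index names here are made up).

```python
# Two lanes of demultiplexed indexes.
demultiplexed = [["idxA", "idxB"], ["idxC"]]

# Flatten: one single-item sample entry per index.
samples = [[index] for lane in demultiplexed for index in lane]
print(samples)  # [['idxA'], ['idxB'], ['idxC']]
```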
def rnaseq_variant_calling(samples, run_parallel):
"""
run RNA-seq variant calling using GATK
"""
samples = run_parallel("run_rnaseq_variant_calling", samples)
variantcaller = dd.get_variantcaller(to_single_data(samples[0]))
if variantcaller and ("gatk-haplotype" in variantcaller):
out = []
for d in joint.square_off(samples, run_parallel):
out.extend([[to_single_data(xs)] for xs in multi.split_variants_by_sample(to_single_data(d))])
samples = out
samples = run_parallel("run_rnaseq_ann_filter", samples)
out = []
for data in (to_single_data(xs) for xs in samples):
if "variants" not in data:
data["variants"] = []
data["variants"].append({"variantcaller": "gatk-haplotype", "vcf": data["vrn_file_orig"],
"population": {"vcf": data["vrn_file"]}})
data["vrn_file"] = data.pop("vrn_file_orig")
out.append([data])
samples = out
return samples
def run_rnaseq_variant_calling(data):
"""
run RNA-seq variant calling, variation file is stored in `vrn_file`
in the datadict
"""
variantcaller = dd.get_variantcaller(data)
if isinstance(variantcaller, list) and len(variantcaller) > 1:
logger.error("Only one variantcaller can be run for RNA-seq at "
"this time. Post an issue here "
"(https://github.com/bcbio/bcbio-nextgen/issues) "
"if this is something you need to do.")
sys.exit(1)
if variantcaller:
if "gatk-haplotype" in variantcaller:
data = variation.rnaseq_gatk_variant_calling(data)
if vardict.get_vardict_command(data):
data = variation.rnaseq_vardict_variant_calling(data)
if dd.get_vrn_file(data):
ann_file = vcfanno.run_vcfanno(dd.get_vrn_file(data), ["rnaedit"], data)
if ann_file:
data = dd.set_vrn_file(data, ann_file)
return [[data]]
def run_rnaseq_ann_filter(data):
"""Run RNA-seq annotation and filtering.
"""
ann_file = vcfanno.run_vcfanno(dd.get_vrn_file(data), ["rnaedit"], data)
if ann_file:
data = dd.set_vrn_file(data, ann_file)
filter_file = variation.gatk_filter_rnaseq(dd.get_vrn_file(data), data)
data = dd.set_vrn_file(data, filter_file)
return [[data]]
def quantitate(data):
"""CWL target for quantitation.
XXX Needs to be split and parallelized by expression caller, with merging
of multiple calls.
"""
data = to_single_data(to_single_data(data))
data = generate_transcript_counts(data)[0][0]
data["quant"] = {}
if "sailfish" in dd.get_expression_caller(data):
data = to_single_data(sailfish.run_sailfish(data)[0])
data["quant"]["tsv"] = data["sailfish"]
data["quant"]["hdf5"] = os.path.join(os.path.dirname(data["sailfish"]), "abundance.h5")
if ("kallisto" in dd.get_expression_caller(data) or "pizzly" in dd.get_fusion_caller(data, [])):
data = to_single_data(kallisto.run_kallisto_rnaseq(data)[0])
data["quant"]["tsv"] = os.path.join(data["kallisto_quant"], "abundance.tsv")
data["quant"]["hdf5"] = os.path.join(data["kallisto_quant"], "abundance.h5")
if "salmon" in dd.get_expression_caller(data):
data = to_single_data(salmon.run_salmon_reads(data)[0])
data["quant"]["tsv"] = data["salmon"]
data["quant"]["hdf5"] = os.path.join(os.path.dirname(data["salmon"]), "abundance.h5")
return [[data]]
def quantitate_expression_parallel(samples, run_parallel):
"""
quantitate expression, all programs run here should be multithreaded to
take advantage of the threaded run_parallel environment
"""
data = samples[0][0]
samples = run_parallel("generate_transcript_counts", samples)
if "cufflinks" in dd.get_expression_caller(data):
samples = run_parallel("run_cufflinks", samples)
if "stringtie" in dd.get_expression_caller(data):
samples = run_parallel("run_stringtie_expression", samples)
if ("kallisto" in dd.get_expression_caller(data) or
dd.get_fusion_mode(data) or
"pizzly" in dd.get_fusion_caller(data, [])):
samples = run_parallel("run_kallisto_index", [samples])
samples = run_parallel("run_kallisto_rnaseq", samples)
if "sailfish" in dd.get_expression_caller(data):
samples = run_parallel("run_sailfish_index", [samples])
samples = run_parallel("run_sailfish", samples)
# always run salmon
samples = run_parallel("run_salmon_index", [samples])
samples = run_parallel("run_salmon_reads", samples)
samples = run_parallel("detect_fusions", samples)
return samples
def detect_fusions(data):
# support the old style of fusion mode calling
if dd.get_fusion_mode(data, False):
data = dd.set_fusion_caller(data, ["oncofuse", "pizzly"])
logger.warning("``fusion_mode`` is deprecated in favor of turning on "
"callers with ``fusion_caller``. It will run pizzly and "
"oncofuse for now, but will eventually have support "
"dropped.")
fusion_caller = dd.get_fusion_caller(data, [])
if "oncofuse" in fusion_caller:
oncofuse_file = oncofuse.run(data)
if oncofuse_file:
data = dd.set_oncofuse_file(data, oncofuse_file)
if "pizzly" in fusion_caller:
pizzly_dir = pizzly.run_pizzly(data)
if pizzly_dir:
data = dd.set_pizzly_dir(data, pizzly_dir)
if "ericscript" in fusion_caller:
ericscript_dir = ericscript.run(data)
return [[data]]
def quantitate_expression_noparallel(samples, run_parallel):
"""
run transcript quantitation for algorithms that don't run in parallel
"""
data = samples[0][0]
if "express" in dd.get_expression_caller(data):
samples = run_parallel("run_express", samples)
if "dexseq" in dd.get_expression_caller(data):
samples = run_parallel("run_dexseq", samples)
return samples
def generate_transcript_counts(data):
"""Generate counts per transcript and per exon from an alignment"""
data["count_file"] = featureCounts.count(data)
if dd.get_fusion_mode(data, False) and not dd.get_fusion_caller(data):
oncofuse_file = oncofuse.run(data)
if oncofuse_file:
data = dd.set_oncofuse_file(data, oncofuse_file)
if dd.get_transcriptome_align(data):
# to create a disambiguated transcriptome file realign with bowtie2
if dd.get_disambiguate(data):
logger.info("Aligning to the transcriptome with bowtie2 using the "
"disambiguated reads.")
bam_path = data["work_bam"]
fastq_paths = alignprep._bgzip_from_bam(bam_path, data["dirs"], data, is_retry=False, output_infix='-transcriptome')
if len(fastq_paths) == 2:
file1, file2 = fastq_paths
else:
file1, file2 = fastq_paths[0], None
ref_file = dd.get_ref_file(data)
data = bowtie2.align_transcriptome(file1, file2, ref_file, data)
else:
file1, file2 = dd.get_input_sequence_files(data)
if not dd.get_transcriptome_bam(data):
ref_file = dd.get_ref_file(data)
logger.info("Transcriptome alignment was flagged to run, but the "
"transcriptome BAM file was not found. Aligning to the "
"transcriptome with bowtie2.")
data = bowtie2.align_transcriptome(file1, file2, ref_file, data)
data = spikein.counts_spikein(data)
return [[data]]
def run_stringtie_expression(data):
"""Calculate transcript and gene level FPKM with Stringtie"""
data = stringtie.run_stringtie_expression(data)
return [[data]]
def run_dexseq(data):
"""Quantitate exon-level counts with DEXSeq"""
if dd.get_dexseq_gff(data, None):
data = dexseq.bcbio_run(data)
return [[data]]
def run_express(data):
"""Quantitate isoform expression with eXpress"""
data = express.run(data)
return [[data]]
def combine_express(samples, combined):
"""Combine tpm, effective counts and fpkm from express results"""
to_combine = [dd.get_express_counts(x) for x in
dd.sample_data_iterator(samples) if dd.get_express_counts(x)]
gtf_file = dd.get_gtf_file(samples[0][0])
isoform_to_gene_file = os.path.join(os.path.dirname(combined), "isoform_to_gene.txt")
isoform_to_gene_file = express.isoform_to_gene_name(
gtf_file, isoform_to_gene_file, dd.sample_data_iterator(samples).next())
if len(to_combine) > 0:
eff_counts_combined_file = os.path.splitext(combined)[0] + ".isoform.express_counts"
eff_counts_combined = count.combine_count_files(to_combine, eff_counts_combined_file, ext=".counts")
to_combine = [dd.get_express_tpm(x) for x in
dd.sample_data_iterator(samples) if dd.get_express_tpm(x)]
tpm_counts_combined_file = os.path.splitext(combined)[0] + ".isoform.express_tpm"
tpm_counts_combined = count.combine_count_files(to_combine, tpm_counts_combined_file)
to_combine = [dd.get_express_fpkm(x) for x in dd.sample_data_iterator(samples)
if dd.get_express_fpkm(x)]
fpkm_counts_combined_file = os.path.splitext(combined)[0] + ".isoform.express_fpkm"
fpkm_counts_combined = count.combine_count_files(to_combine, fpkm_counts_combined_file, ext=".fpkm")
return {'counts': eff_counts_combined, 'tpm': tpm_counts_combined,
'fpkm': fpkm_counts_combined, 'isoform_to_gene': isoform_to_gene_file}
return {}
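Conceptually, `combine_express` joins per-sample count columns into one table keyed by feature. A minimal in-memory sketch of that join (the real pipeline works on files via `count.combine_count_files`; sample names and counts here are made up):

```python
# Per-sample {feature: count} maps, as if read from express output files.
sample_counts = {
    "s1": {"geneA": 10, "geneB": 0},
    "s2": {"geneA": 7, "geneB": 3},
}

# Union of features across samples, one column per sample (0 when missing).
features = sorted({f for counts in sample_counts.values() for f in counts})
table = {f: [sample_counts[s].get(f, 0) for s in sorted(sample_counts)]
         for f in features}
print(table)  # {'geneA': [10, 7], 'geneB': [0, 3]}
```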
def run_cufflinks(data):
"""Quantitate transcript expression with Cufflinks"""
if "cufflinks" in dd.get_tools_off(data):
return [[data]]
work_bam = dd.get_work_bam(data)
ref_file = dd.get_sam_ref(data)
out_dir, fpkm_file, fpkm_isoform_file = cufflinks.run(work_bam, ref_file, data)
data = dd.set_cufflinks_dir(data, out_dir)
data = dd.set_fpkm(data, fpkm_file)
data = dd.set_fpkm_isoform(data, fpkm_isoform_file)
return [[data]]
def cufflinks_assemble(data):
bam_file = dd.get_work_bam(data)
ref_file = dd.get_sam_ref(data)
out_dir = os.path.join(dd.get_work_dir(data), "assembly")
num_cores = dd.get_num_cores(data)
assembled_gtf = cufflinks.assemble(bam_file, ref_file, num_cores, out_dir, data)
dd.get_assembled_gtf(data).append(assembled_gtf)
return [[data]]
def cufflinks_merge(*samples):
to_merge = set(filter_missing(flatten([dd.get_assembled_gtf(data) for data in
dd.sample_data_iterator(samples)])))
data = samples[0][0]
ref_file = dd.get_sam_ref(data)
gtf_file = dd.get_gtf_file(data)
num_cores = dd.get_num_cores(data)
merged_gtf = cufflinks.merge(to_merge, ref_file, gtf_file, num_cores,
samples[0][0])
updated_samples = []
for data in dd.sample_data_iterator(samples):
data = dd.set_merged_gtf(data, merged_gtf)
updated_samples.append([data])
return updated_samples
def stringtie_merge(*samples):
to_merge = set(filter_missing(flatten([dd.get_assembled_gtf(data) for data in
dd.sample_data_iterator(samples)])))
data = samples[0][0]
ref_file = dd.get_sam_ref(data)
gtf_file = dd.get_gtf_file(data)
num_cores = dd.get_num_cores(data)
merged_gtf = stringtie.merge(to_merge, ref_file, gtf_file, num_cores, data)
updated_samples = []
for data in dd.sample_data_iterator(samples):
data = dd.set_merged_gtf(data, merged_gtf)
updated_samples.append([data])
return updated_samples
def assemble_transcripts(run_parallel, samples):
"""
assembly strategy rationale implemented as suggested in
http://www.nature.com/nprot/journal/v7/n3/full/nprot.2012.016.html
run Cufflinks without a reference GTF for each individual sample
merge the assemblies with Cuffmerge using a reference GTF
"""
assembler = dd.get_in_samples(samples, dd.get_transcript_assembler)
data = samples[0][0]
if assembler:
if "cufflinks" in assembler:
samples = run_parallel("cufflinks_assemble", samples)
if "stringtie" in assembler:
samples = run_parallel("run_stringtie_expression", samples)
if "stringtie" in assembler and stringtie.supports_merge(data):
samples = run_parallel("stringtie_merge", [samples])
else:
samples = run_parallel("cufflinks_merge", [samples])
return samples
def combine_files(samples):
"""
after quantitation, combine the counts/FPKM/TPM/etc into a single table with
all samples
"""
data = samples[0][0]
# prefer the supplied transcriptome gtf file
gtf_file = dd.get_transcriptome_gtf(data, None)
if not gtf_file:
gtf_file = dd.get_gtf_file(data, None)
dexseq_gff = dd.get_dexseq_gff(data)
# combine featureCount files
count_files = filter_missing([dd.get_count_file(x[0]) for x in samples])
combined = count.combine_count_files(count_files, ext=".counts")
annotated = count.annotate_combined_count_file(combined, gtf_file)
# add tx2gene file
tx2gene_file = os.path.join(dd.get_work_dir(data), "annotation", "tx2gene.csv")
if gtf_file:
tx2gene_file = tx2genefile(gtf_file, tx2gene_file, tsv=False)
# combine eXpress files
express_counts_combined = combine_express(samples, combined)
# combine Cufflinks files
fpkm_combined_file = os.path.splitext(combined)[0] + ".fpkm"
fpkm_files = filter_missing([dd.get_fpkm(x[0]) for x in samples])
if fpkm_files:
fpkm_combined = count.combine_count_files(fpkm_files, fpkm_combined_file)
else:
fpkm_combined = None
fpkm_isoform_combined_file = os.path.splitext(combined)[0] + ".isoform.fpkm"
isoform_files = filter_missing([dd.get_fpkm_isoform(x[0]) for x in samples])
if isoform_files:
fpkm_isoform_combined = count.combine_count_files(isoform_files,
fpkm_isoform_combined_file,
".isoform.fpkm")
else:
fpkm_isoform_combined = None
# combine DEXseq files
dexseq_combined_file = os.path.splitext(combined)[0] + ".dexseq"
to_combine_dexseq = filter_missing([dd.get_dexseq_counts(data[0]) for data
in samples])
if to_combine_dexseq:
dexseq_combined = count.combine_count_files(to_combine_dexseq,
dexseq_combined_file, ".dexseq")
dexseq.create_dexseq_annotation(dexseq_gff, dexseq_combined)
else:
dexseq_combined = None
samples = spikein.combine_spikein(samples)
updated_samples = []
for
# #print(query)
# else:
# return messageDetail.ReplyToChat("No user information available")
#
# #print(query)
# results = zendesk.search(query=query)
# #print(results)
#
# if str(results).startswith(
# "{'results': [], 'facets': None, 'next_page': None, 'previous_page': None, 'count': 0}"):
# return messageDetail.ReplyToChat(
# "This user does not exist on Zendesk, the name is misspelled or does not belong to this organisation")
# elif str(results).startswith(
# "{'results': [], 'facets': {'type': {'entry': 0, 'ticket': 0, 'organization': 0, 'user': 0, 'article': 0, 'group': 0}}, 'next_page': None, 'previous_page': None, 'count': 0}"):
# return messageDetail.ReplyToChat(
# "This organisation/company does not exist in Zendesk or name is misspelled.")
# else:
#
# data = json.dumps(results, indent=2)
# d = json.loads(data)
#
# for index in range(len(d["results"])):
# name = d["results"][index]["name"]
# email = str(d["results"][index]["email"])
# #print("EmailAddress from Zendesk: " + email)
#
# role = str(d["results"][index]["role"])
# botlog.LogSymphonyInfo("The calling user is a Zendesk " + role)
#
# #if role == "Administrator" or role == "admin" or role == "Agent" or role == "agent":
# if role == "Administrator" or role == "admin" or role == "Agent" or role == "agent":
# isAllowed = True
# botlog.LogSymphonyInfo("User is an " + role + " on Zendesk")
# else:
# isAllowed = False
#
# emailZendesk = email
#
# botlog.LogSymphonyInfo(firstName + " " + lastName + " (" + displayName + ") from Company/Pod name: " + str(companyName) + " with UID: " + str(userID))
# callerCheck = (firstName + " " + lastName + " - " + displayName + " - " + companyName + " - " + str(userID))
#
#
# if callerCheck in AccessFile and isAllowed:
# botlog.LogSymphonyInfo("User is an " + role + " on Zendesk and an Agent or Admin with the bot")
#
# streamType = (messageDetail.ChatRoom.Type)
# #print(streamType)
#
# showRequest = (messageDetail.Command.MessageText)
# message_split = showRequest.split()
#
# try:
# todayMinusDays = message_split[0]
# todayTicket = int(todayMinusDays)
# except:
# return messageDetail.ReplyToChat("Please use number of days to check back such as /today 1")
#
# if int(todayMinusDays) > 5:
# return messageDetail.ReplyToChat("For optimal performance, please use 5 days or less in your query")
#
# ticketdate_raw = datetime.today() - timedelta(days=int(todayTicket))
# ticketdate = str(ticketdate_raw)[:-16]
# #print(ticketdate)
#
# conn = http.client.HTTPSConnection(_configDef['zdesk_config']['zdesk_api'])
#
# headers = {
# 'username': _configDef['zdesk_config']['zdesk_email'] + "/token",
# 'password': _configDef['zdesk_config']['zdesk_password'],
# 'authorization': _configDef['zdesk_config']['zdesk_auth'],
# 'cache-control': "no-cache",
# 'Content-Type': 'application/json',
# }
#
# # base64Encoded = base64.b64encode(bytes((emailZendesk + "/token:" + _configDef['zdesk_config']['zdesk_password']), 'utf-8'))
# # base64Enc = (base64Encoded.decode("utf-8"))
# # print(str(base64Enc))
# # base = ("Basic " + base64Enc)
# # print(str(base))
# #
# # headers = {
# # 'email_address': emailZendesk +"/token",
# # 'password': (_configDef['zdesk_config']['zdesk_password']),
# # 'authorization': base,
# # 'cache-control': "no-cache",
# # 'content-type': "application/json"
# # }
#
# conn.request("GET", "/api/v2/search?query=type%3Aticket%20created%3E" + str(ticketdate), headers=headers)
#
# res = conn.getresponse()
# #data = res.read().decode("utf-8")
# data_raw = res.read()
# data = remove_emoji(data_raw)
# reply = str(data)
#
# #messageDetail.ReplyToChatV2("Loading Tickets created from " + ticketdate + " Please wait.")
#
# if str(reply).startswith("{\"results\":[],\"facets\":null,\"next_page\":null,\"previous_page\":null,\"count\":0}"):
# return messageDetail.ReplyToChatV2("No Zendesk ticket was created on " + ticketdate)
#
# data = json.dumps(reply, indent=2)
# data_dict = ast.literal_eval(data)
# d_tick = json.loads(data_dict)
# #print(d_tick)
#
# index = 1
# dataLenght = 0
# dataLenghtNew = 0
# for index in range(len(d_tick["results"])):
#
# requestid = str(d_tick["results"][index]["id"])
# #print(requestid)
# requeststatus = str(d_tick["results"][index]["status"])
# #print(requeststatus)
# requestpriority = str(d_tick["results"][index]["priority"])
# #print(requestpriority)
# requestseverity = str(d_tick["results"][index]["tags"])
#
# if (len(d_tick["results"][index]["tags"])) == 0:
# noTag = True
# else:
# noTag = False
#
# notSet = True
#
# if noTag:
# sev = "Not set"
# notSet = False
#
# for index_tags in range(len(d_tick["results"][index]["tags"])):
# tags = str((d_tick["results"][index]["tags"][index_tags]))
#
# if tags.startswith("severity_1"):
# sev = "Severity 1"
# notSet = False
# elif tags.startswith("severity_2"):
# sev = "Severity 2"
# notSet = False
# elif tags.startswith("severity_3"):
# sev = "Severity 3"
# notSet = False
# elif tags.startswith("severity_4"):
# sev = "Severity 4"
# notSet = False
#
# if notSet:
# sev = "Not Set"
# notSet = False
#
# requestseverity = sev
# assignee_flag = False
# #print(sev)
#
# requestsubject_temps = str(d_tick["results"][index]["subject"])
# requestsubject = str(requestsubject_temps).replace("&", "&amp;").replace("<", "&lt;").replace('"', "&quot;").replace("'", "&apos;").replace(">", "&gt;")
# requestdescription_temps = str(d_tick["results"][index]["description"])
# requestdescription = str(requestdescription_temps).replace("&", "&amp;").replace("<", "&lt;").replace('"', "&quot;").replace("'", "&apos;").replace(">", "&gt;").replace("\n\n \n\n \n\n \n\n", "<br/><br/>").replace("\n\n \n\n \n\n", "<br/><br/>").replace("\n\n \n\n \n", "<br/><br/>").replace("\n\n \n\n", "<br/><br/>").replace("\n\n", "<br/><br/>").replace("\n", "<br/>")
# requestorganization_id = str(d_tick["results"][index]["organization_id"])
# requestrequester_id = str(d_tick["results"][index]["requester_id"])
# #print(requestrequester_id)
# requestcreated_at = str(d_tick["results"][index]["created_at"]).replace("T", " ").replace("Z", "")
# requestupdated_at = str(d_tick["results"][index]["updated_at"]).replace("T", " ").replace("Z", "")
# requestassignee_id = str(d_tick["results"][index]["assignee_id"])
#
# request_id = str(requestid)
# #print(request_id)
# request_status = str(requeststatus)
# request_priority = str(requestpriority)
# request_severity = str(requestseverity)
# request_subject = str(requestsubject)
# request_desc = str(requestdescription)
# desc = str(request_desc)
# request_org = str(requestorganization_id)
# request_requestor = str(requestrequester_id)
# #print(request_requestor)
# request_created = str(requestcreated_at)
# request_updated = str(requestupdated_at)
#
# # To get the name of the requester given the requesterID
# conn.request("GET", "/api/v2/users/" + request_requestor, headers=headers)
# res = conn.getresponse()
# userRequesterId = res.read()
# tempUserRequester = str(userRequesterId.decode('utf-8'))
#
# data = json.dumps(tempUserRequester, indent=2)
# data_dict = ast.literal_eval(data)
# d_req = json.loads(data_dict)
# req_name = str(d_req["user"]["name"])
# requesterName = req_name
# #print(requesterName)
#
# try:
# request_assignee = str(requestassignee_id)
#
# # To get the name of the assignee given the assigneeID
# conn.request("GET", "/api/v2/users/" + request_assignee, headers=headers)
# res = conn.getresponse()
# userAssigneeId = res.read()
# tempUserAssignee = str(userAssigneeId.decode('utf-8'))
#
# data = json.dumps(tempUserAssignee, indent=2)
# data_dict = ast.literal_eval(data)
# d_user = json.loads(data_dict)
# assign_name = str(d_user["user"]["name"])
# assigneeName = assign_name
# #print(assigneeName)
#
# except:
# assigneeName = "Not assigned"
# assignee_flag = True
#
# requesterTicket = (_configDef['zdesk_config']['zdesk_link']) + str(request_id) + "/requester/requested_tickets"
# assigneeTicket = (_configDef['zdesk_config']['zdesk_url']) + "/agent/users/" + str(request_assignee) + "/assigned_tickets"
# OrgTicket = (_configDef['zdesk_config']['zdesk_link']) + str(request_id) + "/organization/tickets"
#
# # Convert the Zendesk ID to company name
# conn.request("GET", "/api/v2/users/" + requestrequester_id + "/organizations.json", headers=headers)
# res = conn.getresponse()
# companyID = res.read()
# compNameRaw = str(companyID.decode("utf-8"))
#
# data = json.dumps(compNameRaw, indent=2)
# data_dict = ast.literal_eval(data)
# d_org = json.loads(data_dict)
# try:
# org_Name = str(d_org["organizations"][0]["name"])
# org_name_temp = str(org_Name).replace("&", "&amp;").replace("<", "&lt;").replace('"', "&quot;").replace("'", "&apos;").replace(">", "&gt;")
# orgName = str(org_name_temp)
# #print(orgName)
# except:
# orgName = "Company not yet created"
#
# if assignee_flag:
#
# table_body = ""
# table_header += "<table style='border-collapse:collapse;border:2px solid black;table-layout:fixed;width:100%;box-shadow: 5px 5px'><thead><tr style='background-color:#4D94FF;color:#ffffff;font-size:1rem' class=\"tempo-text-color--white tempo-bg-color--black\">" \
# "<td style='width:15%;border:1px solid blue;border-bottom: double blue;text-align:center'>SUBJECT</td>" \
# "<td style='border:1px solid black;text-align:left'>" + request_subject + "</td></tr><tr>" \
# "<td style='border:1px solid black;text-align:left' colspan=\"2\">" + str(desc) + "</td></tr><tr>" \
# "<td style='width:2.5%;border:1px solid blue;border-bottom: double blue;text-align:center'>ID</td>" \
# "<td style='border:1px solid black;text-align:center'><a href=\"" + (_configDef['zdesk_config']['zdesk_link']) + str(request_id) + "\">" + str(request_id) + "</a></td></tr><tr>" \
# "<td style='width:4%;border:1px solid blue;border-bottom: double blue;text-align:center'>STATUS</td>" \
# "<td style='border:1px solid black;text-align:center'>" + request_status + "</td></tr><tr>" \
# "<td style='width:5%;border:1px solid blue;border-bottom: double blue;text-align:center'>PRIORITY</td>" \
# "<td style='border:1px solid black;text-align:center'>" + request_priority + "</td></tr><tr>" \
# "<td style='width:4.5%;border:1px solid blue;border-bottom: double blue;text-align:center'>SEVERITY</td>" \
# "<td style='border:1px solid black;text-align:center'>" + request_severity + "</td></tr><tr>" \
# "<td style='width:5%;border:1px solid blue;border-bottom: double blue;text-align:center'>COMPANY</td>" \
# "<td style='border:1px solid black;text-align:center'><a href=\"" + OrgTicket + "\">" + orgName + "</a></td></tr><tr>" \
# "<td style='width:7%;border:1px solid blue;border-bottom: double blue;text-align:center'>REQUESTER</td>" \
# "<td style='border:1px solid black;text-align:center'><a href=\"" + requesterTicket + "\">" + str(requesterName) + "</a></td></tr><tr>" \
# "<td style='width:5%;border:1px solid blue;border-bottom: double blue;text-align:center'>CREATED</td>" \
# "<td style='border:1px solid black;text-align:center'>" + request_created + "</td></tr><tr>" \
# "<td style='width:5%;border:1px solid blue;border-bottom: double blue;text-align:center'>UPDATED</td>" \
# "<td style='border:1px solid black;text-align:center'>" + request_updated + "</td></tr><tr>" \
# "<td style='width:7%;border:1px solid blue;border-bottom: double blue;text-align:center'>ASSIGNEE</td>" \
# "<td style='border:1px solid black;text-align:center'>" + assigneeName + "</td></tr><tr>" \
# "</tr></thead><tbody></tbody></table>"
#
# else:
#
# table_body = ""
# table_header += "<table style='border-collapse:collapse;border:2px solid black;table-layout:fixed;width:100%;box-shadow:
import re
import en_core_web_sm #English
import pt_core_news_sm #Portuguese
import ru_core_news_sm #Russian
import zh_core_web_sm #Mandarim
import ja_core_news_sm #Japanese
import es_core_news_sm #Spanish
import de_core_news_sm #German
import fr_core_news_sm #French
import it_core_news_sm #Italian
from langdetect import detect
from translate import Translator
def func_md5pattern(string):
    # MD5: 32 hex characters
    md5regex = re.compile(r"^[0-9a-fA-F]{32}$")
    return bool(md5regex.fullmatch(string))

def func_sha1pattern(string):
    # SHA-1: 40 hex characters
    sha1regex = re.compile(r"^[0-9a-fA-F]{40}$")
    return bool(sha1regex.fullmatch(string))

def func_sha256pattern(string):
    # SHA-256: 64 hex characters
    sha256regex = re.compile(r"^[0-9a-fA-F]{64}$")
    return bool(sha256regex.fullmatch(string))

def func_sha512pattern(string):
    # SHA-512: 128 hex characters
    sha512regex = re.compile(r"^[0-9a-fA-F]{128}$")
    return bool(sha512regex.fullmatch(string))

def func_emailpattern(string):
    emailregex = re.compile(r"^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+$")
    return bool(emailregex.fullmatch(string))

def func_passwordpattern(string):
    # 6-20 chars with at least one lowercase, one uppercase, one digit and one special character
    passwordregex = re.compile(r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[@$!%*#?&])[A-Za-z\d@$!#%*?&]{6,20}$")
    return bool(passwordregex.fullmatch(string))
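The hash classifiers above are pure length-and-charset checks, so they can be exercised against real digests from `hashlib`. This is a self-contained demo with minimal re-statements of two of the helpers:

```python
import hashlib
import re

# Minimal re-statements of func_md5pattern / func_sha256pattern for a
# standalone check: a digest is classified purely by hex length.
def is_md5(s):
    return bool(re.fullmatch(r"[0-9a-fA-F]{32}", s))

def is_sha256(s):
    return bool(re.fullmatch(r"[0-9a-fA-F]{64}", s))

md5_digest = hashlib.md5(b"example").hexdigest()        # 32 hex chars
sha256_digest = hashlib.sha256(b"example").hexdigest()  # 64 hex chars
print(is_md5(md5_digest))        # True
print(is_md5(sha256_digest))     # False: 64 chars, not 32
print(is_sha256(sha256_digest))  # True
```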
class ProcessString():
def __init__(self, string):
self.string = string
self.return_dict = {}
self.english_dict = {}
self.portugue_dict = {}
self.russian_dict = {}
self.chinese_dict = {}
self.japanese_dict = {}
self.spanish_dict = {}
self.german_dict = {}
self.french_dict = {}
self.italian_dict = {}
self.english_sentences = {}
self.portugue_sentences = {}
self.russian_sentences = {}
self.chinese_sentences = {}
self.japanese_sentences = {}
self.spanish_sentences = {}
self.german_sentences = {}
self.french_sentences = {}
self.italian_sentences = {}
self.translation = None
def func_start(self):
self.func_get_text_language()
if self.language == 'en':
self.func_process_english()
else:
# self.func_translate_to_english()
# self.func_process_english()
if self.language == 'pt':
self.func_process_portuguese()
elif self.language == 'ru':
self.func_process_russian()
elif self.language == 'zh':
self.func_process_chinese()
elif self.language == 'ja':
self.func_process_japanese()
elif self.language == 'es':
self.func_process_spanish()
elif self.language == 'de':
self.func_process_german()
elif self.language == 'fr':
self.func_process_french()
elif self.language == 'it':
self.func_process_italian()
else:
print("[*] Language: " + self.language + " not supported in NLP module yet.")
def get_return_dict(self):
return self.return_dict
def func_get_text_language(self):
self.language = detect(self.string)
self.return_dict['language'] = self.language
def func_translate_to_english(self):
# This function is not used because of the limitations in the Translator
translator = Translator(to_lang="en", from_lang=self.language)
split_string = self.string.split(' ')
translated_list = []
for word in split_string:
    try:
        translated_word = translator.translate(word)
    except StopIteration:
        # Fall back to the untranslated word; otherwise the first failure
        # raises NameError and later failures re-append the previous translation.
        translated_word = word
    translated_list.append(translated_word)
self.translation = ' '.join(translated_list)
self.return_dict['translation'] = translated_list
def func_process_english(self):
nlp = en_core_web_sm.load()
if self.translation is not None:
doc = nlp(self.translation)
else:
doc = nlp(self.string)
for sentence in doc.sents:
sentence = str(sentence).replace(',', ' ')
sentence = str(sentence).replace(':', ' ')
if 'password' in str(sentence).lower() and 'login' in str(sentence).lower():
self.english_sentences[sentence] = 'PASSWORD/LOGIN'
elif 'password' in str(sentence).lower():
self.english_sentences[sentence] = 'PASSWORD'
elif 'login' in str(sentence).lower():
self.english_sentences[sentence] = 'LOGIN'
else:
self.english_sentences[sentence] = ''
for word in doc:
self.english_dict[word.text] = word.ent_type_
for key, value in self.english_dict.items():
if func_md5pattern(key):
self.english_dict[key] = "MD5"
elif func_sha1pattern(key):
self.english_dict[key] = "SHA1"
elif func_sha256pattern(key):
self.english_dict[key] = "SHA256"
elif func_sha512pattern(key):
self.english_dict[key] = "SHA512"
elif func_emailpattern(key):
self.english_dict[key] = "EMAIL"
elif value == '':
if func_passwordpattern(key):
self.english_dict[key] = "PASSWORD"
else:
self.english_dict[key] = value
self.return_dict['english_sentences'] = self.english_sentences
self.return_dict['english_dict'] = self.english_dict
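The per-language methods in this class all repeat the same sentence-labelling logic with different keywords. As a hedged sketch (the keyword table and function name below are assumptions drawn from the methods shown, not part of the original module), the shared part could be factored out like this:

```python
# Sketch of a shared sentence labeller; the per-language keyword lists are
# assumptions based on the duplicated methods in this module.
KEYWORDS = {
    'en': (['password'], ['login']),
    'pt': (['senha', 'password'], ['login']),
    'ru': (['пароль', 'password'], ['логин', 'login']),
    'de': (['passwort', 'password'], ['login']),
}

def classify_sentence(sentence, language):
    """Return 'PASSWORD/LOGIN', 'PASSWORD', 'LOGIN' or '' for one sentence."""
    pw_words, login_words = KEYWORDS.get(language, KEYWORDS['en'])
    text = sentence.lower()
    has_pw = any(w in text for w in pw_words)
    has_login = any(w in text for w in login_words)
    if has_pw and has_login:
        return 'PASSWORD/LOGIN'
    if has_pw:
        return 'PASSWORD'
    if has_login:
        return 'LOGIN'
    return ''
```

Each `func_process_*` method would then reduce to loading the right spaCy model and calling the shared labeller per sentence.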
def func_process_portuguese(self):
nlp = pt_core_news_sm.load()
doc = nlp(self.string)
for sentence in doc.sents:
sentence = str(sentence).replace(',', ' ')
sentence = str(sentence).replace(':', ' ')
if ('senha' in str(sentence).lower() and 'login' in str(sentence).lower()) or ('password' in str(sentence).lower() and 'login' in str(sentence).lower()):
self.portugue_sentences[sentence] = 'PASSWORD/LOGIN'
elif ('senha' in str(sentence).lower()) or ('password' in str(sentence).lower()):
self.portugue_sentences[sentence] = 'PASSWORD'
elif 'login' in str(sentence).lower():
self.portugue_sentences[sentence] = 'LOGIN'
else:
self.portugue_sentences[sentence] = ''
for word in doc:
self.portugue_dict[word.text] = word.ent_type_
for key, value in self.portugue_dict.items():
if func_md5pattern(key):
self.portugue_dict[key] = "MD5"
elif func_sha1pattern(key):
self.portugue_dict[key] = "SHA1"
elif func_sha256pattern(key):
self.portugue_dict[key] = "SHA256"
elif func_sha512pattern(key):
self.portugue_dict[key] = "SHA512"
elif func_emailpattern(key):
self.portugue_dict[key] = "EMAIL"
elif value == '':
if func_passwordpattern(key):
self.portugue_dict[key] = "PASSWORD"
else:
self.portugue_dict[key] = value
self.return_dict['portugue_sentences'] = self.portugue_sentences
self.return_dict['portugue_dict'] = self.portugue_dict
def func_process_russian(self):
nlp = ru_core_news_sm.load()
doc = nlp(self.string)
for sentence in doc.sents:
sentence = str(sentence).replace(',', ' ')
sentence = str(sentence).replace(':', ' ')
if ('пароль' in str(sentence).lower() and 'логин' in str(sentence).lower()) or ('password' in str(sentence).lower() and 'login' in str(sentence).lower()):
self.russian_sentences[sentence] = 'PASSWORD/LOGIN'
elif ('пароль' in str(sentence).lower()) or ('password' in str(sentence).lower()):
self.russian_sentences[sentence] = 'PASSWORD'
elif ('логин' in str(sentence).lower()) or ('login' in str(sentence).lower()):
self.russian_sentences[sentence] = 'LOGIN'
else:
self.russian_sentences[sentence] = ''
for word in doc:
self.russian_dict[word.text] = word.ent_type_
for key, value in self.russian_dict.items():
if func_md5pattern(key):
self.russian_dict[key] = "MD5"
elif func_sha1pattern(key):
self.russian_dict[key] = "SHA1"
elif func_sha256pattern(key):
self.russian_dict[key] = "SHA256"
elif func_sha512pattern(key):
self.russian_dict[key] = "SHA512"
elif func_emailpattern(key):
self.russian_dict[key] = "EMAIL"
elif value == '':
if func_passwordpattern(key):
self.russian_dict[key] = "PASSWORD"
else:
self.russian_dict[key] = value
self.return_dict['russian_sentences'] = self.russian_sentences
self.return_dict['russian_dict'] = self.russian_dict
def func_process_chinese(self):
nlp = zh_core_web_sm.load()
doc = nlp(self.string)
for sentence in doc.sents:
sentence = str(sentence).replace(',', ' ')
sentence = str(sentence).replace(':', ' ')
if ('密码' in str(sentence).lower() and '登录' in str(sentence).lower()) or ('password' in str(sentence).lower() and 'login' in str(sentence).lower()):
self.chinese_sentences[sentence] = 'PASSWORD/LOGIN'
elif ('密码' in str(sentence).lower()) or ('password' in str(sentence).lower()):
self.chinese_sentences[sentence] = 'PASSWORD'
elif ('登录' in str(sentence).lower()) or ('login' in str(sentence).lower()):
self.chinese_sentences[sentence] = 'LOGIN'
else:
self.chinese_sentences[sentence] = ''
for word in doc:
self.chinese_dict[word.text] = word.ent_type_
for key, value in self.chinese_dict.items():
if func_md5pattern(key):
self.chinese_dict[key] = "MD5"
elif func_sha1pattern(key):
self.chinese_dict[key] = "SHA1"
elif func_sha256pattern(key):
self.chinese_dict[key] = "SHA256"
elif func_sha512pattern(key):
self.chinese_dict[key] = "SHA512"
elif func_emailpattern(key):
self.chinese_dict[key] = "EMAIL"
elif value == '':
if func_passwordpattern(key):
self.chinese_dict[key] = "PASSWORD"
else:
self.chinese_dict[key] = value
self.return_dict['chinese_sentences'] = self.chinese_sentences
self.return_dict['chinese_dict'] = self.chinese_dict
def func_process_japanese(self):
nlp = ja_core_news_sm.load()
doc = nlp(self.string)
for sentence in doc.sents:
sentence = str(sentence).replace(',', ' ')
sentence = str(sentence).replace(':', ' ')
if ('パスワード' in str(sentence).lower() and 'ログイン' in str(sentence).lower()) or ('password' in str(sentence).lower() and 'login' in str(sentence).lower()):
self.japanese_sentences[sentence] = 'PASSWORD/LOGIN'
elif ('パスワード' in str(sentence).lower()) or ('password' in str(sentence).lower()):
self.japanese_sentences[sentence] = 'PASSWORD'
elif ('ログイン' in str(sentence).lower()) or ('login' in str(sentence).lower()):
self.japanese_sentences[sentence] = 'LOGIN'
else:
self.japanese_sentences[sentence] = ''
for word in doc:
self.japanese_dict[word.text] = word.ent_type_
for key, value in self.japanese_dict.items():
if func_md5pattern(key):
self.japanese_dict[key] = "MD5"
elif func_sha1pattern(key):
self.japanese_dict[key] = "SHA1"
elif func_sha256pattern(key):
self.japanese_dict[key] = "SHA256"
elif func_sha512pattern(key):
self.japanese_dict[key] = "SHA512"
elif func_emailpattern(key):
self.japanese_dict[key] = "EMAIL"
elif value == '':
if func_passwordpattern(key):
self.japanese_dict[key] = "PASSWORD"
else:
self.japanese_dict[key] = value
self.return_dict['japanese_sentences'] = self.japanese_sentences
self.return_dict['japanese_dict'] = self.japanese_dict
def func_process_spanish(self):
nlp = es_core_news_sm.load()
doc = nlp(self.string)
for sentence in doc.sents:
sentence = str(sentence).replace(',', ' ')
sentence = str(sentence).replace(':', ' ')
if ('contraseña' in str(sentence).lower() and 'login' in str(sentence).lower()) or ('password' in str(sentence).lower() and 'login' in str(sentence).lower()):
self.spanish_sentences[sentence] = 'PASSWORD/LOGIN'
elif ('contraseña' in str(sentence).lower()) or ('password' in str(sentence).lower()):
self.spanish_sentences[sentence] = 'PASSWORD'
elif ('login' in str(sentence).lower()):
self.spanish_sentences[sentence] = 'LOGIN'
else:
self.spanish_sentences[sentence] = ''
for word in doc:
self.spanish_dict[word.text] = word.ent_type_
for key, value in self.spanish_dict.items():
if func_md5pattern(key):
self.spanish_dict[key] = "MD5"
elif func_sha1pattern(key):
self.spanish_dict[key] = "SHA1"
elif func_sha256pattern(key):
self.spanish_dict[key] = "SHA256"
elif func_sha512pattern(key):
self.spanish_dict[key] = "SHA512"
elif func_emailpattern(key):
self.spanish_dict[key] = "EMAIL"
elif value == '':
if func_passwordpattern(key):
self.spanish_dict[key] = "PASSWORD"
else:
self.spanish_dict[key] = value
self.return_dict['spanish_sentences'] = self.spanish_sentences
self.return_dict['spanish_dict'] = self.spanish_dict
def func_process_german(self):
nlp = de_core_news_sm.load()
doc = nlp(self.string)
for sentence in doc.sents:
sentence = str(sentence).replace(',', ' ')
sentence = str(sentence).replace(':', ' ')
if ('passwort' in str(sentence).lower() and 'login' in str(sentence).lower()) or ('password' in str(sentence).lower() and 'login' in str(sentence).lower()):
self.german_sentences[sentence] = 'PASSWORD/LOGIN'
elif ('passwort' in str(sentence).lower()) or ('password' in str(sentence).lower()):
self.german_sentences[sentence] = 'PASSWORD'
elif ('login' in str(sentence).lower()):
self.german_sentences[sentence] = 'LOGIN'
else:
self.german_sentences[sentence] = ''
for word in doc:
self.german_dict[word.text] = word.ent_type_
for key, value in self.german_dict.items():
if func_md5pattern(key):
self.german_dict[key] = "MD5"
elif func_sha1pattern(key):
self.german_dict[key] = "SHA1"
elif func_sha256pattern(key):
self.german_dict[key] = "SHA256"
elif func_sha512pattern(key):
self.german_dict[key] = "SHA512"
elif func_emailpattern(key):
self.german_dict[key] = "EMAIL"
elif value == '':
if func_passwordpattern(key):
self.german_dict[key] = "PASSWORD"
else:
self.german_dict[key] = value
self.return_dict['german_sentences'] = self.german_sentences
self.return_dict['german_dict'] = self.german_dict
def func_process_french(self):
nlp = fr_core_news_sm.load()
doc = nlp(self.string)
for sentence in doc.sents:
sentence = str(sentence).replace(',', ' ')
sentence = str(sentence).replace(':', ' ')
if ('mot de passe' in str(sentence).lower() and 'login' in str(sentence).lower()) or ('password' in str(sentence).lower() and 'login' in str(sentence).lower()):
self.french_sentences[sentence] = 'PASSWORD/LOGIN'
elif ('mot de passe' in str(sentence).lower()) or ('password' in str(sentence).lower()):
self.french_sentences[sentence] = 'PASSWORD'
elif ('login' in str(sentence).lower()):
self.french_sentences[sentence] = 'LOGIN'
else:
self.french_sentences[sentence] = ''
for word in doc:
self.french_dict[word.text] = word.ent_type_
for key, value in self.french_dict.items():
if func_md5pattern(key):
self.french_dict[key] = "MD5"
elif func_sha1pattern(key):
self.french_dict[key] = "SHA1"
elif func_sha256pattern(key):
self.french_dict[key] = "SHA256"
elif func_sha512pattern(key):
self.french_dict[key] = "SHA512"
elif func_emailpattern(key):
self.french_dict[key] = "EMAIL"
elif value == '':
if func_passwordpattern(key):
self.french_dict[key] = "PASSWORD"
else:
self.french_dict[key] = value
self.return_dict['french_sentences'] = self.french_sentences
self.return_dict['french_dict'] = self.french_dict
def func_process_italian(self):
nlp = it_core_news_sm.load()
doc = nlp(self.string)
for sentence in doc.sents:
4, 0)], Color: deeppink
Polyomino: [(1, 3, 0), (2, 3, 0), (2, 4, 0), (2, 4, 1), (2, 5, 0)], Color: deeppink
Polyomino: [(1, 4, 0), (2, 4, 0), (2, 5, 0), (2, 5, 1), (2, 6, 0)], Color: deeppink
Polyomino: [(1, 5, 0), (2, 5, 0), (2, 6, 0), (2, 6, 1), (2, 7, 0)], Color: deeppink
Polyomino: [(2, 0, 0), (3, 0, 0), (3, 1, 0), (3, 1, 1), (3, 2, 0)], Color: deeppink
Polyomino: [(2, 1, 0), (3, 1, 0), (3, 2, 0), (3, 2, 1), (3, 3, 0)], Color: deeppink
Polyomino: [(2, 2, 0), (3, 2, 0), (3, 3, 0), (3, 3, 1), (3, 4, 0)], Color: deeppink
Polyomino: [(2, 3, 0), (3, 3, 0), (3, 4, 0), (3, 4, 1), (3, 5, 0)], Color: deeppink
Polyomino: [(2, 4, 0), (3, 4, 0), (3, 5, 0), (3, 5, 1), (3, 6, 0)], Color: deeppink
Polyomino: [(2, 5, 0), (3, 5, 0), (3, 6, 0), (3, 6, 1), (3, 7, 0)], Color: deeppink
Polyomino: [(3, 0, 0), (4, 0, 0), (4, 1, 0), (4, 1, 1), (4, 2, 0)], Color: deeppink
Polyomino: [(3, 1, 0), (4, 1, 0), (4, 2, 0), (4, 2, 1), (4, 3, 0)], Color: deeppink
Polyomino: [(3, 2, 0), (4, 2, 0), (4, 3, 0), (4, 3, 1), (4, 4, 0)], Color: deeppink
Polyomino: [(3, 3, 0), (4, 3, 0), (4, 4, 0), (4, 4, 1), (4, 5, 0)], Color: deeppink
Polyomino: [(3, 4, 0), (4, 4, 0), (4, 5, 0), (4, 5, 1), (4, 6, 0)], Color: deeppink
Polyomino: [(3, 5, 0), (4, 5, 0), (4, 6, 0), (4, 6, 1), (4, 7, 0)], Color: deeppink
This method is independent of the translation of the polyomino::
sage: q = Polyomino([(0,0,0), (1,0,0)])
sage: list(q.translated_copies((2,2,1)))
[Polyomino: [(0, 0, 0), (1, 0, 0)], Color: gray, Polyomino: [(0, 1, 0), (1, 1, 0)], Color: gray]
sage: q = Polyomino([(34,7,-9), (35,7,-9)])
sage: list(q.translated_copies((2,2,1)))
[Polyomino: [(0, 0, 0), (1, 0, 0)], Color: gray, Polyomino: [(0, 1, 0), (1, 1, 0)], Color: gray]
Inside smaller boxes::
sage: list(p.translated_copies(box=(2,2,3)))
[]
sage: list(p.translated_copies(box=(2,3,2)))
[Polyomino: [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 2, 0)], Color: deeppink]
sage: list(p.translated_copies(box=(3,2,2)))
[]
sage: list(p.translated_copies(box=(1,1,1)))
[]
Using a Polyomino as input::
sage: b = Polyomino([(0,0), (0,1), (0,2), (1,0), (1,1), (1,2)])
sage: p = Polyomino([(0,0)])
sage: list(p.translated_copies(b))
[Polyomino: [(0, 0)], Color: gray,
Polyomino: [(0, 1)], Color: gray,
Polyomino: [(0, 2)], Color: gray,
Polyomino: [(1, 0)], Color: gray,
Polyomino: [(1, 1)], Color: gray,
Polyomino: [(1, 2)], Color: gray]
::
sage: p = Polyomino([(0,0), (1,0), (0,1)])
sage: b = Polyomino([(0,0), (1,0), (2,0), (0,1), (1,1), (0,2)])
sage: list(p.translated_copies(b))
[Polyomino: [(0, 0), (0, 1), (1, 0)], Color: gray,
Polyomino: [(0, 1), (0, 2), (1, 1)], Color: gray,
Polyomino: [(1, 0), (1, 1), (2, 0)], Color: gray]
"""
if not isinstance(box, Polyomino):
ranges = [range(a) for a in box]
box = Polyomino(itertools.product(*ranges))
if not box._dimension == self._dimension:
raise ValueError("Dimension of input box must match the "
"dimension of the polyomino")
minxyz, maxxyz = self.bounding_box()
minxyz, maxxyz = vector(minxyz), vector(maxxyz)
size = maxxyz - minxyz
boxminxyz, boxmaxxyz = box.bounding_box()
ranges = [range(a, b-c+1) for (a,b,c) in zip(boxminxyz,
boxmaxxyz,
size)]
cano = self.canonical()
for v in itertools.product(*ranges):
translated = cano + v
if translated <= box:
yield translated
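The enumeration above can be reproduced outside Sage; a minimal pure-Python sketch (cells given as integer tuples and the box as a tuple of side lengths — this is an illustrative standalone helper, not the Sage method itself):

```python
from itertools import product

def translated_copies(cells, box):
    """Yield all translates of `cells` that fit inside an axis-aligned box.

    `cells` is an iterable of integer tuples, `box` a tuple of side lengths.
    The piece is first normalised so its minimal corner sits at the origin,
    which makes the result independent of the input translation.
    """
    cells = [tuple(c) for c in cells]
    dim = len(box)
    mins = [min(c[i] for c in cells) for i in range(dim)]
    maxs = [max(c[i] for c in cells) for i in range(dim)]
    cano = [tuple(c[i] - mins[i] for i in range(dim)) for c in cells]
    size = [maxs[i] - mins[i] for i in range(dim)]
    # One range per axis: translations that keep the piece inside the box.
    for v in product(*(range(box[i] - size[i]) for i in range(dim))):
        yield sorted(tuple(c[i] + v[i] for i in range(dim)) for c in cano)
```

This reproduces the docstring examples, e.g. `Polyomino([(34,7,-9), (35,7,-9)])` in a `(2,2,1)` box yields the same two translates as the origin-based piece.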
def isometric_copies(self, box, orientation_preserving=True,
mod_box_isometries=False):
r"""
Return the translated and isometric images of self that lie in the box.
INPUT:
- ``box`` -- Polyomino or tuple of integers (size of a box)
- ``orientation_preserving`` -- bool (optional, default: ``True``),
If True, the group of isometries of the `n`-cube is restricted to
those that preserve the orientation, i.e. of determinant 1.
- ``mod_box_isometries`` -- bool (default: ``False``), whether to
quotient the group of isometries of the `n`-cube by the
subgroup of isometries of the `a_1\times a_2\cdots \times a_n`
rectangular box, where the `a_i` are assumed to be distinct.
EXAMPLES::
sage: from sage.combinat.tiling import Polyomino
sage: p = Polyomino([(0,0,0),(1,0,0),(1,1,0),(1,1,1),(1,2,0)], color='deeppink')
sage: L = list(p.isometric_copies(box=(5,8,2)))
sage: len(L)
360
::
sage: p = Polyomino([(0,0,0),(1,0,0),(1,1,0),(1,2,0),(1,2,1)], color='orange')
sage: L = list(p.isometric_copies(box=(5,8,2)))
sage: len(L)
180
sage: L = list(p.isometric_copies((5,8,2), False))
sage: len(L)
360
sage: L = list(p.isometric_copies((5,8,2), mod_box_isometries=True))
sage: len(L)
45
::
sage: p = Polyomino([(0,0), (1,0), (0,1)])
sage: b = Polyomino([(0,0), (1,0), (2,0), (0,1), (1,1), (0,2)])
sage: sorted(p.isometric_copies(b), key=lambda p: p.sorted_list())
[Polyomino: [(0, 0), (0, 1), (1, 0)], Color: gray,
Polyomino: [(0, 0), (0, 1), (1, 1)], Color: gray,
Polyomino: [(0, 0), (1, 0), (1, 1)], Color: gray,
Polyomino: [(0, 1), (0, 2), (1, 1)], Color: gray,
Polyomino: [(0, 1), (1, 0), (1, 1)], Color: gray,
Polyomino: [(1, 0), (1, 1), (2, 0)], Color: gray]
"""
if not isinstance(box, Polyomino):
ranges = [range(a) for a in box]
box = Polyomino(itertools.product(*ranges))
if not box._dimension == self._dimension:
raise ValueError("Dimension of input box must match the "
"dimension of the polyomino")
box_min_coords, box_max_coords = box.bounding_box()
if mod_box_isometries and len(set(b-a for (a,b) in zip(box_min_coords,
box_max_coords))) < box._dimension:
raise NotImplementedError("The code below assumes that the"
" sizes of the box (={}) are all distinct when"
" argument `mod_box_isometries` is True.".format(box))
all_distinct_cano = self.canonical_isometric_copies(orientation_preserving,
mod_box_isometries)
for cano in all_distinct_cano:
for t in cano.translated_copies(box=box):
yield t
def neighbor_edges(self):
r"""
Return an iterator over the pairs of neighbor coordinates inside
the polyomino.
Two points `P` and `Q` in the polyomino are neighbors if `P - Q` has
one coordinate equal to `+1` or `-1` and zero everywhere else.
EXAMPLES::
sage: from sage.combinat.tiling import Polyomino
sage: p = Polyomino([(0,0,0),(0,0,1)])
sage: [sorted(edge) for edge in p.neighbor_edges()]
[[(0, 0, 0), (0, 0, 1)]]
In 3d::
sage: p = Polyomino([(0,0,0),(1,0,0),(1,1,0),(1,1,1),(1,2,0)], color='deeppink')
sage: L = sorted(sorted(edge) for edge in p.neighbor_edges())
sage: for a in L: a
[(0, 0, 0), (1, 0, 0)]
[(1, 0, 0), (1, 1, 0)]
[(1, 1, 0), (1, 1, 1)]
[(1, 1, 0), (1, 2, 0)]
In 2d::
sage: p = Polyomino([(0,0),(1,0),(1,1),(1,2)])
sage: L = sorted(sorted(edge) for edge in p.neighbor_edges())
sage: for a in L: a
[(0, 0), (1, 0)]
[(1, 0), (1, 1)]
[(1, 1), (1, 2)]
"""
for P, Q in itertools.combinations(self, 2):
s = sorted(map(abs, Q-P))
firsts = s[:-1]
last = s[-1]
if last == 1 and all(f == 0 for f in firsts):
yield P, Q
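The neighbor test above (exactly one coordinate differing by 1, all others equal) is easy to check in isolation; a small standalone sketch using plain tuples instead of Sage vectors:

```python
from itertools import combinations

def neighbor_edges(cells):
    """Yield pairs of cells whose difference is +-1 in exactly one axis."""
    for p, q in combinations(cells, 2):
        diffs = sorted(abs(a - b) for a, b in zip(p, q))
        # Largest difference is 1 and every other coordinate matches.
        if diffs[-1] == 1 and all(d == 0 for d in diffs[:-1]):
            yield p, q
```

On the 2d example from the docstring this recovers the same three edges.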
def center(self):
r"""
Return the center of the polyomino.
EXAMPLES::
sage: from sage.combinat.tiling import Polyomino
sage: p = Polyomino([(0,0,0),(0,0,1)])
sage: p.center()
(0, 0, 1/2)
In 3d::
sage: p = Polyomino([(0,0,0),(1,0,0),(1,1,0),(1,1,1),(1,2,0)], color='deeppink')
sage: p.center()
(4/5, 4/5, 1/5)
In 2d::
sage: p = Polyomino([(0,0),(1,0),(1,1),(1,2)])
sage: p.center()
(3/4, 3/4)
"""
return sum(self) / len(self)
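The center is simply the coordinate-wise mean of the cells; a standalone sketch that keeps the result exact with `fractions.Fraction`, matching the rational values shown in the docstring:

```python
from fractions import Fraction

def center(cells):
    """Coordinate-wise average of a collection of integer cells, kept exact."""
    cells = list(cells)
    n = len(cells)
    dim = len(cells[0])
    return tuple(sum(Fraction(c[i]) for c in cells) / n for i in range(dim))
```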
def boundary(self):
r"""
Return the boundary of a 2d polyomino.
INPUT:
- ``self`` - a 2d polyomino
OUTPUT:
- list of edges (an edge is a pair of adjacent 2d coordinates)
EXAMPLES::
sage: from sage.combinat.tiling import Polyomino
sage: p = Polyomino([(0,0), (1,0), (0,1), (1,1)])
sage: sorted(p.boundary())
[((-0.5, -0.5), (-0.5, 0.5)), ((-0.5, -0.5), (0.5, -0.5)), ((-0.5, 0.5), (-0.5, 1.5)), ((-0.5, 1.5), (0.5, 1.5)), ((0.5, -0.5), (1.5, -0.5)), ((0.5, 1.5), (1.5, 1.5)), ((1.5, -0.5), (1.5, 0.5)), ((1.5, 0.5), (1.5, 1.5))]
sage: len(_)
8
sage: p = Polyomino([(5,5)])
sage: sorted(p.boundary())
[((4.5, 4.5), (4.5, 5.5)), ((4.5, 4.5), (5.5, 4.5)), ((4.5, 5.5), (5.5, 5.5)), ((5.5, 4.5), (5.5, 5.5))]
"""
if self._dimension != 2:
raise NotImplementedError("The method boundary is currently "
"implemented "
"only for dimension 2")
from collections import defaultdict
horizontal = defaultdict(int)
vertical = defaultdict(int)
for a in self:
x, y = a = tuple(a)
horizontal[a] += 1
vertical[a] += 1
horizontal[(x, y+1)] -= 1
vertical[(x+1, y)] -= 1
edges = []
h = 0.5
for (x, y), coeff in horizontal.items():
if coeff:
edges.append(((x-h, y-h), (x+h, y-h)))
for (x, y), coeff in vertical.items():
if coeff:
edges.append(((x-h, y-h), (x-h, y+h)))
return edges
def show3d(self, size=1):
r"""
Return a 3d Graphic object representing the polyomino.
INPUT:
- ``self`` - a polyomino of dimension 3
- ``size`` - number
import shapely
from kivy.app import App
from kivy.graphics.context_instructions import Scale, Translate, Color, PushMatrix, Rotate, PopMatrix
from kivy.graphics.fbo import Fbo
from kivy.graphics.gl_instructions import ClearColor, ClearBuffers
from kivy.graphics.transformation import Matrix
from kivy.graphics.vertex_instructions import Rectangle, Ellipse, Line
from kivy.uix.button import Button
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.gridlayout import GridLayout
from kivy.uix.scatter import Scatter
from kivy.uix.stacklayout import StackLayout
from kivy.uix.widget import Widget
from kivy.properties import ObjectProperty
from kivy.uix.image import Image
from kivy.core.window import Window
from kivy.clock import Clock
import random
from random import randint, choice, choices
from kivy.properties import NumericProperty
from kivy.vector import Vector
from shapely.geometry import Polygon
from shapely.affinity import rotate
import numpy as np
from rotabox import Rotabox
# test_poly = sympy.Polygon(p1, p2, p3, p4) #makes a polygon
# c1 = Circle((0, 0), 5)
from pipe import Pipe
class Floatlayout(FloatLayout):
def __init__(self, **kwargs):
super().__init__(**kwargs)
class arrow(GridLayout):
def __init__(self, **kwargs):
super(arrow, self).__init__(**kwargs)
class Projectile2(Image):
max_travel_distance = 100
already_travled = Vector(0, 0)
def __init__(self, start_pos, target_pos, **kwargs):
super(Projectile2, self).__init__(**kwargs)
self.pos = start_pos
self.target_pos = target_pos
self.angle = Vector(1, 0).angle(self.target_pos) * -1
self.calc_move_me()
self.source = "Shot1/shot1_4.png"
self.size_hint = (None, None)
self.size = (64, 64)
def calc_move_me(self):
fuss = Vector(self.pos)
# to get the middle of the ship, instead of topleft/topright
fuss.x = fuss.x + self.size[0] * 0.5
fuss.y = fuss.y + self.size[1] * 0.5
spitze = Vector(self.target_pos)
self.waypoint = spitze
self.travel_vec = spitze - fuss
def move_me(self):
self.pos = Vector(self.pos) + self.travel_vec.normalize()
self.already_travled += self.travel_vec.normalize()
def update_hitbox(self):
self.p1 = (self.pos[0] + 2, self.pos[1] + 16)
self.p2 = (self.pos[0] + 60, self.pos[1] + 16)
self.p3 = (self.pos[0] + 60, self.pos[1] + 44)
self.p4 = (self.pos[0] + 2, self.pos[1] + 44)
self.hitbox = Polygon([self.p1, self.p2, self.p3, self.p4])
class Projectile(Rotabox):
def __init__(self, start_pos, target_pos, **kwargs):
super(Projectile, self).__init__(**kwargs)
self.pos = start_pos
self.target_pos = target_pos
self.target_vec = Vector(self.target_pos) - Vector(self.pos)
self.angle = Vector(1, 0).angle(self.target_vec) * -1  # angle of the travel vector relative to the x-axis
self.calc_move_me()
self.max_travel_distance = 100
self.already_travled = Vector(0, 0)
self.animcounter = 0
self.source1 = "Shot1/shot1_1.png"
self.source2 = "Shot1/shot1_2.png"
self.source3 = "Shot1/shot1_3.png"
self.source4 = "Shot1/shot1_4.png"
self.source_l = [self.source1, self.source2, self.source3, self.source4]
self.add_widget(Image(source=self.source_l[0]))
self.custom_bounds = [
[(0.031, 0.469), (0.031, 0.516), (0.094, 0.594), (0.203, 0.672),
(0.328, 0.719), (0.737, 0.722), (0.938, 0.516), (0.938, 0.453),
(0.797, 0.328), (0.766, 0.344), (0.672, 0.297), (0.438, 0.281),
(0.344, 0.297), (0.328, 0.281), (0.25, 0.359)]]
self.draw_bounds = 1
#for removing it
self.i_am_done = False
def calc_move_me(self):
fuss = Vector(self.pos)
# to get the middle of the ship, instead of topleft/topright
fuss.x = fuss.x + self.size[0] * 0.5
fuss.y = fuss.y + self.size[1] * 0.5
spitze = Vector(self.target_pos)
self.waypoint = spitze
self.travel_vec = spitze - fuss
def move_me(self):
self.pos = Vector(self.pos) + self.travel_vec.normalize()
self.already_travled += self.travel_vec.normalize()
if abs(self.already_travled.x) >= abs(self.travel_vec.normalize().x * 300) and abs(self.already_travled.y) >= abs(self.travel_vec.normalize().y * 300):
#print(self.already_travled)
#print(self.travel_vec.normalize() * 300)
if self.parent:
self.parent.ids["ship"].projectile_list.remove(self)
self.parent.remove_widget(self)
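`move_me` advances the projectile by the normalised travel vector each tick; the same idea can be sketched without any Kivy dependency (the function name and fixed step length are illustrative assumptions):

```python
import math

def step_toward(pos, target, speed=1.0):
    """Advance `pos` by one step of length `speed` toward `target` (2-D tuples)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return pos  # already at the target; avoid dividing by zero
    return (pos[0] + dx / dist * speed, pos[1] + dy / dist * speed)
```

Note the zero-distance guard: `Vector.normalize()` on a zero vector would otherwise be a corner case when the touch lands exactly on the projectile.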
def update_hitbox(self):
self.p1 = (self.pos[0] + 2, self.pos[1] + 16)
self.p2 = (self.pos[0] + 60, self.pos[1] + 16)
self.p3 = (self.pos[0] + 60, self.pos[1] + 44)
self.p4 = (self.pos[0] + 2, self.pos[1] + 44)
self.hitbox = Polygon([self.p1, self.p2, self.p3, self.p4])
def animate_ontime(self, time_passed): # needs to have that to be scheduled
if self.animcounter < len(self.source_l) - 1:
self.animcounter += 1
self.children[0].source = self.source_l[self.animcounter]
class Background(Widget):
bg0 = ObjectProperty(None)
bg1 = ObjectProperty(None)
bg2 = ObjectProperty(None)
bg3 = ObjectProperty(None)
bg4 = ObjectProperty(None)
cloud_texture = ObjectProperty(None)
floor_texture = ObjectProperty(None)
def __init__(self, **kwargs):
super(Background, self).__init__(**kwargs)
# keyboard
# self._keyboard = Window.request_keyboard(
# self._keyboard_closed, self, 'text')
# if self._keyboard.widget:
# # If it exists, this widget is a VKeyboard object which you can use
# # to change the keyboard layout.
# pass
# self._keyboard.bind(on_key_down=self._on_keyboard_down)
# background texture ?
self.bg0 = Image(source="bg/bg0.png").texture
self.bg1 = Image(source="bg/bg1.png").texture
self.bg2 = Image(source="bg/bg2.png").texture
self.bg3 = Image(source="bg/bg3.png").texture
self.bg4 = Image(source="bg/bg4.png").texture
self.bgtextures_avail_list = [self.bg0, self.bg1]
# self.bg0.uvsize = self.bg0.size
self.bg0.wrap = "repeat"
# Create textures
self.cloud_texture = Image(source="cloud.png").texture
self.cloud_texture.wrap = 'repeat'
self.cloud_texture.uvsize = (Window.width / self.cloud_texture.width, -1)
self.floor_texture = Image(source="floor.png").texture
self.floor_texture.wrap = 'repeat'
self.floor_texture.uvsize = (Window.width / self.floor_texture.width, -1)
self.bg_list = self.make_bg()
self.draw_bg(self.bg_list)
def on_size(self, *args):
self.bg_list = self.make_bg()
self.animate_bg(self.bg_list)
self.draw_bg(self.bg_list)
self.cloud_texture.uvsize = (self.width / self.cloud_texture.width, -1)
self.floor_texture.uvsize = (self.width / self.floor_texture.width, -1)
def scroll_textures(self, time_passed):
# Update the uvpos of the texture
self.cloud_texture.uvpos = (
(self.cloud_texture.uvpos[0] + time_passed / 2.0) % Window.width, self.cloud_texture.uvpos[1])
self.floor_texture.uvpos = (
(self.floor_texture.uvpos[0] + time_passed) % Window.width, self.floor_texture.uvpos[1])
# Redraw the texture
texture = self.property('cloud_texture')
texture.dispatch(self)
texture = self.property('floor_texture')
texture.dispatch(self)
def make_bg(self):
# make rectangles for background
bg_l = []
for w in range(int(Window.width / self.bg0.size[0]) + 1):
for h in range(int(Window.height / self.bg0.size[1]) + 1):
bg_l.append(Rectangle(pos=(0 + (w * 200), 0 + (h * 200)), size=(200, 200), texture=self.bg0))
return bg_l
# add them to the bg canvas
def draw_bg(self, bg_l):
self.canvas.clear()
with self.canvas.before:
for rect in bg_l:
self.canvas.add(rect)
self.canvas.ask_update()
def animate_bg(self, bg_list): # needs to have that to be scheduled
# randomly swap textures on some of the background tiles
for rect in bg_list:
if random.randint(0, 10) > 5:
rect.texture = random.choice(self.bgtextures_avail_list)
def change_bg_on_time(self, timepassed): # needs to have that to be scheduled
self.animate_bg(self.bg_list)
self.draw_bg(self.bg_list)
# def _keyboard_closed(self):
# #print('My keyboard have been closed!')
# self._keyboard.unbind(on_key_down=self._on_keyboard_down)
# self._keyboard = None
#
# def _on_keyboard_down(self, keyboard, keycode, text, modifiers):
# #print('The key', keycode, 'have been pressed')
# #print(' - text is %r' % text)
# #print(' - modifiers are %r' % modifiers)
#
# # Keycode is composed of an integer + a string
# # If we hit escape, release the keyboard
# if keycode[1] == 'escape':
# keyboard.release()
#
# # Return True to accept the key. Otherwise, it will be used by
# # the system.
# return True
#
# def draw_projectiles(self):
# pass
# for p in self.parent.ids["ship"].projectile_list:
# self.parent.add_widget(p)
#
#
# with self.canvas.before:
# self.canvas.add(p)
#
#
# self.canvas.ask_update()
class Ship(Image):
velocity = NumericProperty(0)
acceleration = NumericProperty(5)
waypoint = (0, 0)
travelvec = (0, 0)
traveled_already = Vector(0, 0)
up = NumericProperty(1)
#movement
move_up_boolean = False
move_down_boolean = False
move_left_boolean = False
move_right_boolean = False
#projectiles
projectile_list =[]
def __init__(self, **kwargs):
super(Ship, self).__init__(**kwargs)
self.p1 = (self.pos[0] + 2, self.pos[1] + 16)
self.p2 = (self.pos[0] + 60, self.pos[1] + 16)
self.p3 = (self.pos[0] + 60, self.pos[1] + 44)
self.p4 = (self.pos[0] + 2, self.pos[1] + 44)
self.hitbox = Polygon([self.p1, self.p2, self.p3, self.p4])
self.source = "Ship1.png"
self.size_hint = (None, None)
self.size = (64, 64)
#self.hitbox = sympy.Polygon(self.p1, self.p2, self.p3, self.p4)
def update_hitbox(self):
self.p1 = (self.pos[0] + 2, self.pos[1] + 16)
self.p2 = (self.pos[0] + 60, self.pos[1] + 16)
self.p3 = (self.pos[0] + 60, self.pos[1] + 44)
self.p4 = (self.pos[0] + 2, self.pos[1] + 44)
self.hitbox = Polygon([self.p1, self.p2, self.p3, self.p4])
self.hitbox = shapely.affinity.rotate(self.hitbox, angle=90)  # note: re-rotates the rebuilt box by 90 degrees on every call
#print(self.children)
def rotate(self):
#kivy part
with self.canvas.before:
PushMatrix()
Rotate(origin=self.center, angle=90)
with self.canvas.after:
PopMatrix()
self.hitbox = shapely.affinity.rotate(self.hitbox, angle=90)
#print(self.children)
def on_touch_down(self, touch):
if len(self.projectile_list) < 100:
self.projectile_list.append(Projectile(self.center, touch.pos))
self.parent.add_widget(self.projectile_list[-1])
Clock.schedule_interval(self.projectile_list[-1].animate_ontime, 0.2)
#p_ = Vector(self.pos)
#print(p_.angle(touch.pos))
self.source = "Ship1.png"
self.velocity = 150
super().on_touch_down(touch)
def on_touch_up(self, touch):
self.source = "Ship1.png"
super().on_touch_up(touch)
def move_up(self, widget):
if widget.state == "down":
self.move_up_boolean = True
else:
self.move_up_boolean = False
def move_down(self, widget):
if widget.state == "down":
self.move_down_boolean = True
else:
self.move_down_boolean = False
def move_left(self, widget):
if widget.state == "down":
self.move_left_boolean = True
else:
self.move_left_boolean = False
def move_right(self, widget):
if widget.state == "down":
self.move_right_boolean = True
else:
self.move_right_boolean = False
def move_topleft(self, widget):
if widget.state == "down":
self.move_left_boolean = True
self.move_up_boolean = True
else:
self.move_left_boolean = False
self.move_up_boolean = False
def move_downleft(self, widget):
if widget.state == "down":
self.move_left_boolean = True
self.move_down_boolean = True
else:
self.move_left_boolean = False
self.move_down_boolean = False
def move_topright(self,widget):
if widget.state == "down":
self.move_right_boolean = True
self.move_up_boolean = True
else:
self.move_right_boolean = False
self.move_up_boolean = False
def move_downright(self,widget):
if widget.state == "down":
self.move_right_boolean = True
self.move_down_boolean = True
else:
self.move_right_boolean = False
self.move_down_boolean = False
class MyKeyboardListener(Widget):
def __init__(self, **kwargs):
super(MyKeyboardListener, self).__init__(**kwargs)
self._keyboard = Window.request_keyboard(
self._keyboard_closed, self, 'text')
if self._keyboard.widget:
# If it exists, this widget is a VKeyboard object which you can use
# to change the keyboard layout.
pass
self._keyboard.bind(on_key_down=self._on_keyboard_down)
def _keyboard_closed(self):
#print('My keyboard have been closed!')
self._keyboard.unbind(on_key_down=self._on_keyboard_down)
self._keyboard = None
def _on_keyboard_down(self, keyboard, keycode, text, modifiers):
#print('The key', keycode, 'have been pressed')
#print(' - text is %r' % text)
#print(' - modifiers are %r' % modifiers)
valid.
This must be entered as a sequence of two temperature values.
Required.
:param coeffs:
List of nine coefficients :math:`(a_0, \ldots , a_8)`
:param p0:
The reference-state pressure, usually 1 atm or 1 bar. If omitted,
the default value is used, which is set by the ``standard_pressure``
directive.
"""
self.model = 'NASA9'
self.T_range = Trange
self.pref = p0
if len(coeffs) != 9:
raise InputError('NASA9 coefficient list must have length = 9')
self.coeffs = coeffs
class MultiPolyThermo(thermo):
def __init__(self, regions):
regions = sorted(regions, key=lambda r: r.T_range[0])
self.pref = regions[0].pref
self.Tranges = [regions[0].T_range[0]]
self.model = regions[0].model
self.data = []
for r in regions:
self.Tranges.append(r.T_range[1])
self.data.append(r.coeffs)
def get_yaml(self, out):
super().get_yaml(out)
out['temperature-ranges'] = FlowList(self.Tranges)
out['data'] = [FlowList(coeffs) for coeffs in self.data]
class Shomate(thermo):
"""Shomate polynomial parameterization."""
def __init__(self, Trange=(0.0, 0.0), coeffs=(), p0=None):
r"""
:param Trange:
The temperature range over which the parameterization is valid.
This must be entered as a sequence of two temperature values.
Required input.
:param coeffs:
Sequence of seven coefficients :math:`(A, \ldots , G)`
:param p0:
The reference-state pressure, usually 1 atm or 1 bar. If omitted,
the default value set by the ``standard_pressure`` directive is used.
"""
self.model = 'Shomate'
self.T_range = Trange
self.pref = p0
if len(coeffs) != 7:
raise InputError('Shomate coefficient list must have length = 7')
self.coeffs = coeffs
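The `Shomate` class only stores the seven coefficients; evaluation is left to the consumer. Purely as an illustration (this helper is not part of the converter), the standard NIST Shomate form for the molar heat capacity can be sketched as:

```python
def shomate_cp(T, coeffs):
    """Molar heat capacity from the NIST Shomate form:
    cp(T) = A + B*t + C*t**2 + D*t**3 + E/t**2 with t = T/1000 (T in K).
    coeffs = (A, B, C, D, E, F, G); F and G only enter H and S."""
    A, B, C, D, E, F, G = coeffs
    t = T / 1000.0
    return A + B * t + C * t ** 2 + D * t ** 3 + E / t ** 2
```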
class const_cp(thermo):
"""Constant specific heat."""
def __init__(self, t0=None, cp0=None, h0=None, s0=None, tmax=None,
tmin=None):
"""
:param t0:
Temperature parameter T0. Default: 298.15 K.
:param cp0:
Reference-state molar heat capacity (constant). Default: 0.0.
:param h0:
Reference-state molar enthalpy at temperature T0. Default: 0.0.
:param s0:
Reference-state molar entropy at temperature T0. Default: 0.0.
"""
self.model = 'constant-cp'
self.pref = None
self.t0 = t0
self.h0 = h0
self.s0 = s0
self.cp0 = cp0
def get_yaml(self, out):
super().get_yaml(out)
if self.t0 is not None:
out['T0'] = applyUnits(self.t0)
if self.h0 is not None:
out['h0'] = applyUnits(self.h0)
if self.s0 is not None:
out['s0'] = applyUnits(self.s0)
if self.cp0 is not None:
out['cp0'] = applyUnits(self.cp0)
class gas_transport:
"""
Species-specific transport coefficients for gas-phase transport models.
"""
def __init__(self, geom, diam, well_depth, dipole=0.0, polar=0.0,
rot_relax=0.0, acentric_factor=None, disp_coeff=0.0,
quad_polar=0.0):
"""
:param geom:
A string specifying the molecular geometry. One of ``atom``,
``linear``, or ``nonlinear``. Required.
:param diam:
The Lennard-Jones collision diameter in Angstroms. Required.
:param well_depth:
The Lennard-Jones well depth in Kelvin. Required.
:param dipole:
The permanent dipole moment in Debye. Default: 0.0
:param polar:
The polarizability in A^3. Default: 0.0
:param rot_relax:
The rotational relaxation collision number at 298 K. Dimensionless.
Default: 0.0
:param acentric_factor:
Pitzer's acentric factor. Dimensionless.
Default: ``None``
:param disp_coeff:
The dispersion coefficient in A^5
Default: 0.0
:param quad_polar:
The quadrupole polarizability
Default: 0.0
"""
self.geometry = geom
self.diameter = diam
self.well_depth = well_depth
self.dipole = dipole
self.polarizability = polar
self.rot_relax = rot_relax
self.acentric_factor = acentric_factor
self.disp_coeff = disp_coeff
self.quad_polar = quad_polar
@classmethod
def to_yaml(cls, representer, node):
out = BlockMap([('model', 'gas'),
('geometry', node.geometry),
('diameter', node.diameter),
('well-depth', node.well_depth)])
if node.dipole:
out['dipole'] = node.dipole
if node.polarizability:
out['polarizability'] = node.polarizability
if node.rot_relax:
out['rotational-relaxation'] = node.rot_relax
if node.acentric_factor:
out['acentric-factor'] = node.acentric_factor
if node.disp_coeff:
out['dispersion-coefficient'] = node.disp_coeff
if node.quad_polar:
out['quadrupole-polarizability'] = node.quad_polar
return representer.represent_dict(out)
class Arrhenius:
def __init__(self, A=0.0, b=0.0, E=0.0, coverage=()):
"""
:param A:
The pre-exponential coefficient. Required input. If entered without
units, the units will be computed considering all factors that
affect the units. The resulting units string is written to the CTML
file individually for each reaction pre-exponential coefficient.
:param b:
The temperature exponent. Dimensionless. Default: 0.0.
:param E:
Activation energy. Default: 0.0.
:param coverage:
For a single coverage dependency, a list with four elements: the
species name followed by the three coverage parameters. For multiple
coverage dependencies, a list of lists containing the individual
sets of coverage parameters. Only used for surface and edge
reactions.
"""
self.A = A
self.b = b
self.E = E
if coverage:
if isinstance(coverage[0], str):
self.coverage = [coverage]
else:
self.coverage = coverage
for cov in self.coverage:
if len(cov) != 4:
raise InputError("Incorrect number of coverage parameters")
else:
self.coverage = None
@classmethod
def to_yaml(cls, representer, node):
out = FlowMap([('A', applyUnits(node.A)),
('b', applyUnits(node.b)),
('Ea', applyUnits(node.E))])
return representer.represent_dict(out)
class stick(Arrhenius):
"""
A rate expression for a surface reaction given as a sticking probability,
parameterized using a modified Arrhenius expression.
"""
def __init__(self, *args, **kwargs):
"""
:param motz_wise:
``True`` if the Motz & Wise correction should be used, ``False`` if
not. If unspecified, use the mechanism default (set using the
functions `enable_motz_wise` or `disable_motz_wise`).
"""
self.motz_wise = kwargs.pop('motz_wise', None)
Arrhenius.__init__(self, *args, **kwargs)
class reaction:
"""
A homogeneous chemical reaction with pressure-independent rate coefficient
and mass-action kinetics.
"""
def __init__(self, equation, kf, id='', order='', options=()):
r"""
:param equation:
A string specifying the chemical equation.
:param kf:
The rate coefficient for the forward direction. If a sequence of
three numbers is given, these will be interpreted as [A, b, E] in
the modified Arrhenius function :math:`A T^b \exp(-E/\hat{R}T)`.
:param id:
An optional identification string.
:param order:
Override the default reaction orders implied by the reactant
stoichiometric coefficients. Given as a string of key:value pairs,
e.g., ``"CH4:0.25 O2:1.5"``.
:param options:
Processing options, as described in
`Options <https://cantera.org/tutorials/cti/reactions.html#options>`__.
May be one or more (as a list) of the following: ``'duplicate'``,
``'negative_A'``, ``'negative_orders'``, ``'nonreactant_orders'``.
"""
self.equation = equation
self.order = get_composition(order)
self.number = len(_reactions['reactions']) + 1
self.id = id
self.options = [options] if isinstance(options, str) else options
self.kf = Arrhenius(*kf) if isinstance(kf, (list, tuple)) else kf
self.type = 'elementary'
_reactions['reactions'].append(self)
@classmethod
def to_yaml(cls, representer, node):
out = BlockMap()
node.get_yaml(out)
return representer.represent_dict(out)
def get_yaml(self, out):
out['equation'] = self.equation
out.yaml_add_eol_comment('Reaction {}'.format(self.number), 'equation')
if self.type not in ('elementary', 'edge', 'surface'):
out['type'] = self.type
if self.id:
out['id'] = self.id
if self.type in ('elementary', 'three-body', 'edge', 'surface'):
out['rate-constant'] = self.kf
if 'duplicate' in self.options:
out['duplicate'] = True
if 'negative_A' in self.options:
out['negative-A'] = True
if self.order:
out['orders'] = FlowMap(self.order.items())
if 'negative_orders' in self.options:
out['negative-orders'] = True
if 'nonreactant_orders' in self.options:
out['nonreactant-orders'] = True
class three_body_reaction(reaction):
"""
A three-body reaction.
"""
def __init__(self, equation, kf, efficiencies='', id='', options=()):
"""
:param equation:
A string specifying the chemical equation. The reaction can be
written in either the association or dissociation directions, and
may be reversible or irreversible.
:param kf:
The rate coefficient for the forward direction. If a sequence of
three numbers is given, these will be interpreted as [A, b, E] in
the modified Arrhenius function.
:param efficiencies:
A string specifying the third-body collision efficiencies.
The efficiencies for unspecified species are set to 1.0.
:param id:
An optional identification string.
:param options: Processing options, as described in
`Options <https://cantera.org/tutorials/cti/reactions.html#options>`__.
"""
super().__init__(equation, kf, id, '', options)
self.type = 'three-body'
self.efficiencies = get_composition(efficiencies)
def get_yaml(self, out):
super().get_yaml(out)
if self.efficiencies:
out['efficiencies'] = FlowMap(self.efficiencies)
class falloff_base(reaction):
""" Base class for falloff_reaction and chemically_activated_reaction """
def __init__(self, equation, klow, khigh, efficiencies, falloff, id, options):
super().__init__(equation, None, id, '', options)
self.k_low = Arrhenius(*klow) if isinstance(klow, (list, tuple)) else klow
self.k_high = Arrhenius(*khigh) if isinstance(khigh, (list, tuple)) else khigh
self.falloff = falloff
self.efficiencies = get_composition(efficiencies)
def get_yaml(self, out):
super().get_yaml(out)
out['low-P-rate-constant'] = self.k_low
out['high-P-rate-constant'] = self.k_high
if self.falloff:
self.falloff.get_yaml(out)
if self.efficiencies:
out['efficiencies'] = FlowMap(self.efficiencies)
class falloff_reaction(falloff_base):
""" A gas-phase falloff reaction. """
def __init__(self, equation, kf0, kf, efficiencies='', falloff=None, id='',
options=()):
"""
:param equation:
A string specifying the chemical equation.
:param kf:
The rate coefficient for the forward direction in the high-pressure
limit. If a sequence of three numbers is given, these will be
interpreted as [A, b, E] in the modified Arrhenius function.
:param kf0:
The rate coefficient for the forward direction in the low-pressure
limit. If a sequence of three numbers is given, these will be
interpreted as [A, b, E] in the modified Arrhenius function.
:param efficiencies:
A string specifying the third-body collision efficiencies. The
efficiency for unspecified species is set to 1.0.
:param falloff:
An embedded entry specifying a falloff function. If omitted, a
unity falloff function (Lindemann form) will be used.
:param id:
An optional identification string.
:param options:
Processing options, as described in
`Options <https://cantera.org/tutorials/cti/reactions.html#options>`__.
#!/usr/bin/env python3.5
# -*- mode: python -*-
# =============================================================================
# @@-COPYRIGHT-START-@@
#
# Copyright (c) 2021, Qualcomm Innovation Center, Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# SPDX-License-Identifier: BSD-3-Clause
#
# @@-COPYRIGHT-END-@@
# =============================================================================
"""Cross Layer Equalization"""
import collections
import typing
import numpy as np
import tensorflow as tf
import libpymo
from aimet_common.utils import AimetLogger
from aimet_tensorflow.keras.batch_norm_fold import fold_all_batch_norms
from aimet_tensorflow.keras.connectedgraph import ConnectedGraph
from aimet_tensorflow.keras.utils import model_transform_utils
from aimet_tensorflow.keras.utils.weight_tensor_utils import WeightTensorUtils
_logger = AimetLogger.get_area_logger(AimetLogger.LogAreas.CrosslayerEqualization)
BatchNormFoldedPair = typing.Union[typing.Tuple[tf.keras.layers.Conv2D,
tf.keras.layers.BatchNormalization],
typing.Tuple[tf.keras.layers.Dense,
tf.keras.layers.BatchNormalization]]
ClsSet = typing.Union[typing.Tuple[tf.keras.layers.Conv2D,
tf.keras.layers.Conv2D],
typing.Tuple[tf.keras.layers.Conv2D,
tf.keras.layers.DepthwiseConv2D,
tf.keras.layers.Conv2D]]
ScaleFactor = typing.Union[np.ndarray, typing.Tuple[np.ndarray, np.ndarray]]
ReluFlag = typing.Union[bool, typing.Tuple[bool, bool]]
cls_supported_layers = (tf.keras.layers.Conv2D, tf.keras.layers.Conv1D)
zero_padding_layers = (tf.keras.layers.ZeroPadding2D, tf.keras.layers.ZeroPadding1D)
cls_supported_activations = (tf.keras.layers.ReLU, tf.keras.layers.PReLU)
class ClsSetInfo:
"""
This class holds information about the layers in a CLS set, along with the corresponding scaling factors
and other information, such as whether there is a ReLU activation function between the CLS set layers
"""
class ClsSetLayerPairInfo:
"""
Models a pair of layers that were scaled using CLS, along with related information.
"""
def __init__(self, layer1: tf.keras.layers.Conv2D, layer2: tf.keras.layers.Conv2D, scale_factor: np.ndarray,
relu_activation_between_layers: bool):
"""
:param layer1: Layer whose bias is folded
:param layer2: Layer to which the previous layer's bias is folded
:param scale_factor: Scale Factor found from Cross Layer Scaling to scale BN parameters
:param relu_activation_between_layers: If the activation between layer1 and layer2 is Relu
"""
self.layer1 = layer1
self.layer2 = layer2
self.scale_factor = scale_factor
self.relu_activation_between_layers = relu_activation_between_layers
def __eq__(self, other):
if isinstance(self, other.__class__):
return self.layer1 == other.layer1 and \
self.layer2 == other.layer2 and \
np.allclose(self.scale_factor, other.scale_factor) and \
self.relu_activation_between_layers == other.relu_activation_between_layers
return False
def __init__(self, cls_pair_1: ClsSetLayerPairInfo, cls_pair_2: ClsSetLayerPairInfo = None):
"""
Constructor takes two pairs if a depthwise separable layer is being folded
:param cls_pair_1: Pair between two conv or conv and depth-wise conv
:param cls_pair_2: Pair between depth-wise conv and point-wise conv
"""
if cls_pair_2:
self.cls_pair_info_list = [cls_pair_1, cls_pair_2]
else:
self.cls_pair_info_list = [cls_pair_1]
def __eq__(self, other):
if isinstance(self, other.__class__):
return self.cls_pair_info_list == other.cls_pair_info_list
return False
class GraphSearchUtils:
"""Implements graph search utils required by CLE feature"""
def __init__(self,
model: tf.keras.Model,
input_shapes: typing.Union[None, typing.Tuple,
typing.List[typing.Tuple]]):
"""
:param model: Keras Model (Sequential, Functional, Subclassing)
:param input_shapes: Input shape tuple or list of input tuple shape
"""
self._connected_graph = ConnectedGraph(model, input_shapes)
self._ordered_module_list = self._get_ordered_list_of_conv_modules()
def _get_ordered_list_of_conv_modules(self):
"""
Finds order of nodes in graph
:return: List of name, layer tuples in graph in order
"""
result = []
for op in self._connected_graph.ordered_ops:
layer = op.get_module()
if isinstance(layer, cls_supported_layers):
result.append([layer.name, layer])
return result
def find_layer_groups_to_scale(self) -> typing.List[typing.List[tf.keras.layers.Conv2D]]:
"""
Find layer groups to scale
:return: List of groups of layers. Each group can be independently equalized
"""
# Find the input node(s) in the graph
input_nodes = []
for op in self._connected_graph.get_all_ops().values():
if op.inputs and op.inputs[0].is_model_input:
input_nodes.append(op)
layer_groups = []
for op in input_nodes:
self.find_downstream_layer_groups_to_scale(op, layer_groups)
# Sort the layer groups in order of occurrence in the model
ordered_layer_groups = []
for _, module in self._ordered_module_list:
for layer_group in layer_groups:
if layer_group[0] is module:
ordered_layer_groups.append(layer_group)
return ordered_layer_groups
@staticmethod
def find_downstream_layer_groups_to_scale(op, layer_groups, current_group=None, visited_nodes=None):
"""
Recursive function to find cls layer groups downstream from a given op
:param op: Starting op to search from
:param layer_groups: Running list of layer groups
:param current_group: Running current layer group
:param visited_nodes: Running list of visited nodes (to short-circuit recursion)
:return: None
"""
if not visited_nodes:
visited_nodes = []
if not current_group:
current_group = []
if op in visited_nodes:
return
visited_nodes.append(op)
current_layer = op.get_module()
# Conv2D, Conv1D or its subclass is added to the current group
if current_layer and isinstance(current_layer, cls_supported_layers):
current_group.append(current_layer)
# Terminating condition for current group
if not current_layer or not GraphSearchUtils._is_supported_layer_case(current_layer):
if (len(current_group) > 1) and (current_group not in layer_groups):
layer_groups.append(current_group)
current_group = []
if op.output:
for consumer in op.output.consumers:
GraphSearchUtils.find_downstream_layer_groups_to_scale(consumer, layer_groups,
current_group, visited_nodes)
# Reached a leaf. See if the current group has something to grab
if (len(current_group) > 1) and (current_group not in layer_groups):
layer_groups.append(current_group)
@staticmethod
def _is_supported_layer_case(layer: tf.keras.layers.Layer) -> bool:
"""
Check whether the given layer is a CLS-supported layer, a zero-padding layer,
a supported activation layer, or a folded batch normalization
:param layer: tf.keras.layers.Layer
:return: True if the layer is one of the supported cases, False otherwise
"""
return isinstance(layer, (cls_supported_layers + zero_padding_layers)) or \
GraphSearchUtils._is_supported_activations(layer) or \
GraphSearchUtils.is_folded_batch_normalization(layer)
@staticmethod
def is_folded_batch_normalization(layer: tf.keras.layers.Layer) -> bool:
"""
Method to check if layer is folded batchnorm or not
:param layer: layer to check if it is folded batch norm
:return: True if it is folded batch norm, False if not
"""
if not isinstance(layer, tf.keras.layers.BatchNormalization):
return False
return np.all(layer.beta == 0.0) and np.all(layer.gamma == 1.0)
@staticmethod
def _is_supported_activations(layer: tf.keras.layers.Layer) -> bool:
"""
Check if the current layer is a supported activation layer
:param layer: tf.keras.layers.Layer
:return: True if layer is ReLU, PReLU or Activation with supported type
"""
# Case of explicit layer such as tf.keras.layers.ReLU
if isinstance(layer, cls_supported_activations):
return True
# Case of implicit layer such as tf.keras.layers.Activation(tf.nn.relu)
# Note: PReLU is not supported by implicit approach until TF 2.4
layer_config = layer.get_config()
activation = layer_config.get("activation")
if activation is None:
return False
return activation in ["relu", "relu6"]
@staticmethod
def convert_layer_group_to_cls_sets(layer_group: typing.List[tf.keras.layers.Conv2D]) \
-> typing.List[ClsSet]:
"""
Helper function to convert a layer group to a list of cls sets
:param layer_group: Given layer group to convert
:return: List of cls sets
"""
cls_sets = []
layer_group = collections.deque(layer_group)
prev_layer_to_scale = layer_group.popleft()
while layer_group:
next_layer_to_scale = layer_group.popleft()
if isinstance(next_layer_to_scale, tf.keras.layers.DepthwiseConv2D):
next_non_depthwise_conv_layer = layer_group.popleft()
# DepthwiseConv layer right after DepthwiseConv layer is not currently supported
if isinstance(next_non_depthwise_conv_layer, tf.keras.layers.DepthwiseConv2D):
_logger.error("Consecutive DepthwiseConv layer not currently supported")
raise NotImplementedError
cls_sets.append(
(prev_layer_to_scale, next_layer_to_scale, next_non_depthwise_conv_layer))
prev_layer_to_scale = next_non_depthwise_conv_layer
else:
cls_sets.append((prev_layer_to_scale, next_layer_to_scale))
prev_layer_to_scale = next_layer_to_scale
return cls_sets
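The deque walk in `convert_layer_group_to_cls_sets` can be illustrated with plain strings standing in for Keras layers (the `'dw'` prefix below is a stand-in marker for a `DepthwiseConv2D`, used purely for illustration):

```python
import collections

def pair_layers(layer_group):
    """Sketch of the CLS-set pairing above: a depthwise layer (marked
    here by a 'dw' name prefix) is grouped into a triple with its
    neighbours; everything else forms consecutive overlapping pairs."""
    group = collections.deque(layer_group)
    prev = group.popleft()
    cls_sets = []
    while group:
        nxt = group.popleft()
        if nxt.startswith('dw'):
            # depthwise conv: grab the following pointwise conv as well
            pointwise = group.popleft()
            cls_sets.append((prev, nxt, pointwise))
            prev = pointwise
        else:
            cls_sets.append((prev, nxt))
            prev = nxt
    return cls_sets
```

For example, `['conv1', 'conv2', 'dw1', 'pw1']` yields `[('conv1', 'conv2'), ('conv2', 'dw1', 'pw1')]`.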
@staticmethod
def is_relu_activation_present_in_cls_sets(cls_sets: typing.List[ClsSet]) \
-> typing.List[typing.Union[bool, typing.Tuple[bool, bool]]]:
"""
Check if there is ReLU or PReLU activation between cls sets
:param cls_sets: List of ClsSet to find ReLU activation in
:return: List of ReLU activation present flags (bool or tuple of bool) corresponding to the input cls_sets param
"""
is_relu_activation_in_cls_sets = []
for cls_set in cls_sets:
cls_set = cls_set[:-1]
is_relu_activation_in_cls_set = []
for layer in cls_set:
has_relu_activation = GraphSearchUtils._does_layer_have_relu_activation(layer)
is_relu_activation_in_cls_set.append(has_relu_activation)
if len(is_relu_activation_in_cls_set) == 1:
is_relu_activation_in_cls_sets.append(is_relu_activation_in_cls_set[0])
else:
is_relu_activation_in_cls_sets.append(tuple(is_relu_activation_in_cls_set))
return is_relu_activation_in_cls_sets
@staticmethod
def _does_layer_have_relu_activation(layer: tf.keras.layers.Conv2D) -> bool:
"""
Check if layer has ReLU or PReLU activation function
:param layer: Conv2D or its subclass whose activation function is checked
:return: True if the layer has a ReLU or PReLU activation, otherwise False
"""
activation_info = tf.keras.activations.serialize(layer.activation)
if isinstance(activation_info, str):
# Instantiating like tf.keras.layers.Conv2D(8, kernel_size=3, activation=tf.keras.activations.relu)
# has the result of serialization as str type
activation_type = activation_info
elif isinstance(activation_info, dict):
# Instantiating like tf.keras.layers.Conv2D(8, kernel_size=3, activation=tf.keras.layers.ReLU())
# has the result of serialization as dict type
activation_type = activation_info["class_name"].lower()
else:
raise NotImplementedError("Not supported format")
# If activation parameter is not set or None, default activation_type is linear
if activation_type == "linear" and layer.outbound_nodes:
assert len(layer.outbound_nodes) ==
of the first argument to the function relative to the
position of the opening bracket and the number of spaces between the opening
bracket and the function name.
The two values will be used to align the other arguments in the subsequent line
"""
args = parse_args(args)
spaces_before_func = 0
subline = curr_line[bracket_offset + 1:]
if re.search('^[ \t]*($|\r)', subline):
# whitespace extending to the end of the line means there's no
# function in this line. The indentation level defaults to one.
arg_pos = 1
else:
if bracket_offset != len(curr_line) - 1 and curr_line[bracket_offset + 1] == ' ':
# control reaches here if we are not at the end of the line
# and whitespace follows. We must first find the position of the
# function and then the arguments position
match = re.search(' +[^)\]]| \)', subline) # Find the first non whitespace/bracket character
if match:
spaces_before_func = match.end() - match.start() - 1
end = match.end()
else:
end = 0
# Then use the end of the whitespace group as the first argument
arg_pos = re.search(' +([^)])|( *(\(|\[))', subline[end:])
if arg_pos:
arg_pos = arg_pos.end() + spaces_before_func + 1
else:
arg_pos = spaces_before_func + 1
if re.match('^[ \t]*(#\||;|$|\r)',
subline[(end - 1 + subline[end - 1:].find(' ')):]):
# But if a comment is found after the function name, the
# indent level becomes one
arg_pos = spaces_before_func + args.default_indent
else:
# If there's no space after the bracket, simply find the end of the
# whitespace group
match = re.search(' +([^)}\n\r])|( *(\(|\[|{))', subline)
if match: # found the argument
arg_pos = match.end()
else: # Either empty list or argument is in the next line
arg_pos = 1
if re.match('^[\t ]*(;|$|\r)', subline[subline.find(' '):]):
# Again if a comment is found after the function name, the
# indent level defaults to 1
arg_pos = spaces_before_func + args.default_indent
return [arg_pos, spaces_before_func]
def _pop_from_list(bracket, lst, line, real_pos, offset, msg_stack):
""" _pop_from_list(bracket : str, lst : [dict], line : str,
real_pos : int, offset : int, msg_stack : [dict])
The function is called when a closing bracket is encountered. The function
simply pops the last pushed item and issues a warning if an error is
encountered.
"""
# Try to spot a case when a square bracket is used to close a round bracket
# block
if bracket == ']':
correct_closer = '['
elif bracket == ')':
correct_closer = '('
else:
correct_closer = '{'
if lst != []:
popped = lst.pop()
popped_char = popped['character']
popped_pos = popped['line_number']
popped_offset = popped['bracket_pos']
if popped_char != correct_closer:
message = "Bracket `%s' does not match `%s' at (%d, %d)"
message = message % (bracket, popped_char, popped_pos, popped_offset)
warning_info = {
'msg': message,
'line': line,
'column': real_pos
}
msg_stack.append(warning_info)
else:
# If the list is empty and a closing bracket is found, it means we have
# excess brackets. That warning is issued here. The coordinates used
# will be slightly or largely off target depending on how much your
# code was modified when used with compact mode
message = "Unmatched closing bracket `%s'" % bracket
warning_info = {
'msg': message,
'line': line,
'column': offset + 1
}
msg_stack.append(warning_info)
return lst
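The push/pop bookkeeping that `_pop_from_list` participates in amounts to stack-based bracket matching. A compact, self-contained sketch of the same idea (not the indenter's actual data structures):

```python
def unmatched_brackets(code):
    """Minimal stack-based bracket checker mirroring the push/pop logic
    above: returns a list of (position, message) entries for mismatched,
    unmatched, or unclosed brackets."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack, problems = [], []
    for pos, ch in enumerate(code):
        if ch in '([{':
            stack.append((ch, pos))          # remember opener and position
        elif ch in ')]}':
            if not stack:
                problems.append((pos, "unmatched closing bracket %r" % ch))
            else:
                opener, opos = stack.pop()
                if opener != pairs[ch]:
                    problems.append((pos, "bracket %r does not match %r at %d"
                                     % (ch, opener, opos)))
    # anything still on the stack was never closed
    problems.extend((pos, "unclosed bracket %r" % ch) for ch, pos in stack)
    return problems
```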
def _push_to_list(lst, func_name, char, line, offset,
first_arg_pos, first_item, in_list_literal,
lead_spaces, args=None):
""" _push_to_list(lst : [str], func_name : str, char : str, line : int, offset : int,
first_arg_pos :int , first_item : int, in_list_literal : bool,
lead_spaces : int, args : str)
Called when an opening bracket is encountered. A hash containing the
necessary data to pin point errors and the indentation level is stored in
the list and the list returned.
"""
args = parse_args(args)
keywords = add_keywords(args)
pos_hash = {'character': char,
'line_number': line,
'bracket_pos': offset,
'indent_level': offset + first_arg_pos, # the default value, e.g in normal function
'func_name': func_name,
'spaces': 0}
is_macro = is_macro_name(func_name, args.dialect)
two_spacer = is_macro or keywords[func_name] in [KEYWORD1, KEYWORD4]
if in_list_literal or char == '{' or (char == '[' and args.dialect == 'clojure'):
# found quoted list or clojure hashmap/vector
pos_hash['indent_level'] = first_item
elif keywords[func_name] == KEYWORD2:
# We only make the if-clause stand out if not in uniform mode
pos_hash['indent_level'] = lead_spaces + ((offset + args.indent_size * 2)
if not args.uniform
else (offset + args.indent_size))
elif func_name != '':
if two_spacer:
pos_hash['indent_level'] = lead_spaces + offset + args.indent_size
elif keywords[func_name] == KEYWORD3:
pos_hash['indent_level'] = lead_spaces + offset + (2 * args.indent_size)
lst.append(pos_hash)
try:
# A hack to make flets and labels in Lisp not indent like
# functions. The 'labels' indentation may not be exactly
# perfect.
parent_func = lst[-3]['func_name']
# Make 'special' indentation occur only in a Clojure binding block([]) for
# letfns
non_bind_block = args.dialect == 'clojure' and lst[-2]['character'] != '['
if keywords[parent_func] == KEYWORD4 and not non_bind_block:
lst[-1]['indent_level'] = offset + args.indent_size
except IndexError:
pass
return lst
def indent_code(original_code, args=None):
""" indent_code(original_code : str, args) -> dict
Arguments:
original_code: The code to be indented
args: Parsed options; among other things, used in formatting the warning messages
>>> indent_code("(print\n'Hello)")
{'bracket_locations': [],
'comment_locations': [],
'in_comment': 0,
'in_newlisp_tag_string': False,
'in_string': False,
'in_symbol_with_space': False,
'indented_code': ['(print\n', " 'Hello)"],
'last_quote_location': (),
'last_symbol_location': (),
'message_stack': [],
'newlisp_brace_locations': [],
'original_code': ['(print\n', "'Hello)"],
'first_tag_string': ()}
The last entry in the list is the indented string.
"""
args = parse_args(args)
keywords = add_keywords(args)
# Safeguards against processing brackets inside strings
in_string = False
# newLISP use curly brackets as a syntax for multiline strings
# this variable here tries to keep track of that
in_newlisp_string = 0
in_newlisp_tag_string = False
newlisp_brace_locations = []
first_tag_string = ()
# zero_level helps us get the same results as Sitaram's indenter when in
# --no-compact mode.
zero_level = 0
# The two variables prevent formatting comment regions or symbols with whitespace
in_comment = 0
in_symbol_with_space = False
comment_locations = []
last_symbol_location = ()
# An in_symbol_region is the region between pipes (| |) or in strings. This
# includes the comment region. This region is not to be messed with.
in_symbol_region = in_string or in_comment or in_symbol_with_space or \
in_newlisp_string or in_newlisp_tag_string
# we need to know the line number in order to issue almost accurate messages about
# unclosed brackets and string
line_number = 1
# Stores the last position a quote was encountered so that in case there are
# any unclosed strings, we can pinpoint them
last_quote_location = ()
line_ending = find_line_ending(original_code)
code_lines = split_preserve(original_code, line_ending)
indented_code = []
bracket_locations = []
# List of warnings from errors in the code
message_stack = []
for line in code_lines:
escaped = False
curr_line = line
# Get the indent level and the indented line
zero_level, curr_line, indent_level = indent_line(zero_level,
bracket_locations,
line, in_comment,
in_symbol_region, args)
# Build up the indented string.
indented_code.append(curr_line)
regex = '^[ \t]*'
lead_spaces = re.findall(regex, curr_line)
if lead_spaces:
curr_line = re.sub(regex, detabify(lead_spaces[0], args), curr_line)
offset = 0
for curr_char in curr_line:
next_char = curr_line[offset + 1:offset + 2]
prev_char = curr_line[offset - 1:offset]
substr = curr_line[offset + 1:] # slice to the end
if escaped:
# Move to the next character if the current one has been escaped
escaped = False
offset += 1
continue
if curr_char == '\\' and not in_newlisp_string and not in_newlisp_tag_string:
# the next character has been escaped
escaped = True
if (curr_char == ';' or (curr_char == '#' and args.dialect == 'newlisp'))\
and not in_symbol_region and not \
(prev_char == '#' and args.dialect == 'scheme'):
# a comment has been found, go to the next line
# A sharp sign(#) before a semi-colon in Scheme is used to
# comment out sections of code. We don't treat it as a comment
break
# ----------------------------------------------------------
# Comments are dealt with here. Clojure and newLISP don't have Lisp
# style multiline comments so don't include them.
if args.dialect not in ['clojure', 'newlisp'] and curr_char == '|' \
and not in_string:
if prev_char == '#' and not in_symbol_with_space:
| |
dframe_2018['file'].apply(lambda x: int(str(x)[9:11]))
dframe_2018['Second'] = dframe_2018['file'].apply(lambda x: int(str(x)[11:13]))
base_date = datetime(year=2018, month=1, day=1)
dframe_2018['date'] = [base_date + timedelta(days=int(row[0] - 1), hours=int(row[1]), minutes=int(row[2]),
seconds=int(row[3])
) for row in
dframe_2018[[
'Yearly_Day',
'Hour',
'Minute',
'Second']].values]
dframe_2018.drop(columns=['file', 'Year', 'Hour', 'Minute', 'Yearly_Day', 'Second'], inplace=True)
dframe_2018 = dframe_2018[dframe_2018['date'] < end_date]
dframe_2018 = dframe_2018[dframe_2018['date'] > start_date]
dframe_2018.set_index('date', inplace=True)
dframe_2018.rename(columns={compound: f'{compound}_mr'}, inplace=True)
if end_year >= 2019:
dframe_2019 = pd.read_excel(r'C:\Users\ARL\Desktop\Summit_GC_2019\NMHC_results\Blanks_2019.xlsx',
header=None)
dframe_2019.set_index(0, inplace=True)
dframe_transposed = dframe_2019.T
dframe_2019 = dframe_transposed.loc[:, [compound]]
dframe_2019 = dframe_2019.iloc[:, [j for j, c in enumerate(dframe_2019.columns) if j not in [0, 2, 3]]]
dframe_2019['file'] = dframe_transposed.iloc[:, 0]
dframe_2019['decimal_date'] = dframe_transposed.iloc[:, 39]
dframe_2019.dropna(inplace=True, subset=['file'])
dframe_2019['decimal_date_year'] = [(2019 + (float(row[0]) - 1) / 365) for row in
dframe_2019[['decimal_date']].values]
dframe_2019['Year'] = dframe_2019['file'].apply(lambda x: int(str(x)[0:4]))
dframe_2019['Yearly_Day'] = dframe_2019['file'].apply(lambda x: int(str(x)[4:7]))
dframe_2019['Hour'] = dframe_2019['file'].apply(lambda x: int(str(x)[7:9]))
dframe_2019['Minute'] = dframe_2019['file'].apply(lambda x: int(str(x)[9:11]))
dframe_2019['Second'] = dframe_2019['file'].apply(lambda x: int(str(x)[11:13]))
base_date = datetime(year=2019, month=1, day=1)
dframe_2019['date'] = [
base_date + timedelta(days=int(row[0] - 1), hours=int(row[1]), minutes=int(row[2]),
seconds=int(row[3])
) for row in
dframe_2019[[
'Yearly_Day',
'Hour',
'Minute',
'Second']].values]
dframe_2019.drop(columns=['file', 'Year', 'Hour', 'Minute', 'Yearly_Day', 'Second'], inplace=True)
dframe_2019 = dframe_2019[dframe_2019['date'] < end_date]
dframe_2019 = dframe_2019[dframe_2019['date'] > start_date]
dframe_2019.set_index('date', inplace=True)
dframe_2019.rename(columns={compound: f'{compound}_mr'}, inplace=True)
elif start_year == 2019:
dframe_2019 = pd.read_excel(r'C:\Users\ARL\Desktop\Summit_GC_2019\NMHC_results\Blanks_2019.xlsx', header=None)
dframe_2019.set_index(0, inplace=True)
dframe_transposed = dframe_2019.T
dframe_2019 = dframe_transposed.loc[:, [compound]]
dframe_2019 = dframe_2019.iloc[:, [j for j, c in enumerate(dframe_2019.columns) if j not in [0, 2, 3]]]
dframe_2019['file'] = dframe_transposed.iloc[:, 0]
dframe_2019['decimal_date'] = dframe_transposed.iloc[:, 39]
dframe_2019.dropna(inplace=True, subset=['file'])
dframe_2019['decimal_date_year'] = [(2019 + (float(row[0]) - 1) / 365) for row in
dframe_2019[['decimal_date']].values]
dframe_2019['Year'] = dframe_2019['file'].apply(lambda x: int(str(x)[0:4]))
dframe_2019['Yearly_Day'] = dframe_2019['file'].apply(lambda x: int(str(x)[4:7]))
dframe_2019['Hour'] = dframe_2019['file'].apply(lambda x: int(str(x)[7:9]))
dframe_2019['Minute'] = dframe_2019['file'].apply(lambda x: int(str(x)[9:11]))
dframe_2019['Second'] = dframe_2019['file'].apply(lambda x: int(str(x)[11:13]))
base_date = datetime(year=2019, month=1, day=1)
dframe_2019['date'] = [base_date + timedelta(days=int(row[0] - 1), hours=int(row[1]), minutes=int(row[2]),
seconds=int(row[3])
) for row in
dframe_2019[[
'Yearly_Day',
'Hour',
'Minute',
'Second']].values]
dframe_2019.drop(columns=['file', 'Year', 'Hour', 'Minute', 'Yearly_Day', 'Second'], inplace=True)
dframe_2019 = dframe_2019[dframe_2019['date'] < end_date]
dframe_2019 = dframe_2019[dframe_2019['date'] > start_date]
dframe_2019.set_index('date', inplace=True)
dframe_2019.rename(columns={compound: f'{compound}_mr'}, inplace=True)
dframe = pd.concat([dframe_2017, dframe_2018, dframe_2019])
dframe = dframe.loc[dframe.index < end_date]
dframe = dframe.loc[dframe.index > start_date]
dframe.fillna(value=99999, inplace=True)
return dframe
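Each per-year block above decodes the 13-digit file stamp (YYYYDDDHHMMSS: year, day-of-year, hour, minute, second) field by field with five separate `apply` calls. A minimal sketch of that conversion as one reusable helper (the helper name is hypothetical, not part of the original module):

```python
from datetime import datetime, timedelta

def gc_file_to_datetime(file_stamp):
    """Convert a 'YYYYDDDHHMMSS' file stamp to a datetime."""
    s = str(file_stamp)
    # day-of-year is 1-based, so subtract one before offsetting from Jan 1
    return (datetime(year=int(s[0:4]), month=1, day=1)
            + timedelta(days=int(s[4:7]) - 1, hours=int(s[7:9]),
                        minutes=int(s[9:11]), seconds=int(s[11:13])))
```

Each `dframe_XXXX['date']` column above could then be built with a single `apply` over the 'file' column instead of the intermediate Year/Yearly_Day/Hour/Minute/Second columns.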
def excel_trap_blanks(compound, start, end):
start_date = datetime.strptime(start, '%Y-%m-%d %H:%M:%S')
end_date = datetime.strptime(end, '%Y-%m-%d %H:%M:%S')
start_year = int(start_date.year)
end_year = int(end_date.year)
dframe_2017 = pd.DataFrame()
dframe_2018 = pd.DataFrame()
dframe_2019 = pd.DataFrame()
if start_year == 2017:
if end_year >= 2017:
dframe_2017 = pd.read_excel(r'Z:\Data\Summit_GC\Summit_GC_2017\NMHC_results\TrapBlanks_2017.xlsx',
header=None)
dframe_2017.set_index(0, inplace=True)
dframe_transposed = dframe_2017.T
dframe_2017 = dframe_transposed.loc[:, [compound]]
dframe_2017 = dframe_2017.iloc[:, [j for j, c in enumerate(dframe_2017.columns) if j not in [0, 2, 3]]]
dframe_2017['file'] = dframe_transposed.iloc[:, 0]
dframe_2017['decimal_date'] = dframe_transposed.iloc[:, 39]
dframe_2017.dropna(inplace=True, subset=['file'])
dframe_2017['decimal_date_year'] = [(2017 + (float(row[0]) - 1) / 365) for row in
dframe_2017[['decimal_date']].values]
dframe_2017['Year'] = dframe_2017['file'].apply(lambda x: int(str(x)[0:4]))
dframe_2017['Yearly_Day'] = dframe_2017['file'].apply(lambda x: int(str(x)[4:7]))
dframe_2017['Hour'] = dframe_2017['file'].apply(lambda x: int(str(x)[7:9]))
dframe_2017['Minute'] = dframe_2017['file'].apply(lambda x: int(str(x)[9:11]))
dframe_2017['Second'] = dframe_2017['file'].apply(lambda x: int(str(x)[11:13]))
base_date = datetime(year=2017, month=1, day=1)
dframe_2017['date'] = [base_date + timedelta(days=int(row[0] - 1), hours=int(row[1]), minutes=int(row[2]),
seconds=int(row[3])
) for row in
dframe_2017[[
'Yearly_Day',
'Hour',
'Minute',
'Second']].values]
dframe_2017.drop(columns=['file', 'Year', 'Hour', 'Minute', 'Yearly_Day', 'Second'], inplace=True)
dframe_2017 = dframe_2017[dframe_2017['date'] < end_date]
dframe_2017 = dframe_2017[dframe_2017['date'] > start_date]
dframe_2017.set_index('date', inplace=True)
dframe_2017.rename(columns={compound: f'{compound}_mr'}, inplace=True)
if end_year >= 2018:
dframe_2018 = pd.read_excel(r'Z:\Data\Summit_GC\Summit_GC_2018\NMHC_results\TrapBlanks_2018.xlsx',
header=None)
dframe_2018.set_index(0, inplace=True)
dframe_transposed = dframe_2018.T
dframe_2018 = dframe_transposed.loc[:, [compound]]
dframe_2018 = dframe_2018.iloc[:, [j for j, c in enumerate(dframe_2018.columns) if j not in [0, 2, 3]]]
dframe_2018['file'] = dframe_transposed.iloc[:, 0]
dframe_2018['decimal_date'] = dframe_transposed.iloc[:, 39]
dframe_2018.dropna(inplace=True, subset=['file'])
dframe_2018['decimal_date_year'] = [(2018 + (float(row[0]) - 1) / 365) for row in
dframe_2018[['decimal_date']].values]
dframe_2018['Year'] = dframe_2018['file'].apply(lambda x: int(str(x)[0:4]))
dframe_2018['Yearly_Day'] = dframe_2018['file'].apply(lambda x: int(str(x)[4:7]))
dframe_2018['Hour'] = dframe_2018['file'].apply(lambda x: int(str(x)[7:9]))
dframe_2018['Minute'] = dframe_2018['file'].apply(lambda x: int(str(x)[9:11]))
dframe_2018['Second'] = dframe_2018['file'].apply(lambda x: int(str(x)[11:13]))
base_date = datetime(year=2018, month=1, day=1)
dframe_2018['date'] = [
base_date + timedelta(days=int(row[0] - 1), hours=int(row[1]), minutes=int(row[2]),
seconds=int(row[3])
) for row in
dframe_2018[[
'Yearly_Day',
'Hour',
'Minute',
'Second']].values]
dframe_2018.drop(columns=['file', 'Year', 'Hour', 'Minute', 'Yearly_Day', 'Second'], inplace=True)
dframe_2018 = dframe_2018[dframe_2018['date'] < end_date]
dframe_2018 = dframe_2018[dframe_2018['date'] > start_date]
dframe_2018.set_index('date', inplace=True)
dframe_2018.rename(columns={compound: f'{compound}_mr'}, inplace=True)
if end_year >= 2019:
dframe_2019 = pd.read_excel(
r'C:\Users\ARL\Desktop\Summit_GC_2019\NMHC_results\TrapBlanks_2019.xlsx',
header=None)
dframe_2019.set_index(0, inplace=True)
dframe_transposed = dframe_2019.T
dframe_2019 = dframe_transposed.loc[:, [compound]]
dframe_2019 = dframe_2019.iloc[:,
[j for j, c in enumerate(dframe_2019.columns) if j not in [0, 2, 3]]]
dframe_2019['file'] = dframe_transposed.iloc[:, 0]
dframe_2019['decimal_date'] = dframe_transposed.iloc[:, 39]
dframe_2019.dropna(inplace=True, subset=['file'])
dframe_2019['decimal_date_year'] = [(2019 + (float(row[0]) - 1) / 365) for row in
dframe_2019[['decimal_date']].values]
dframe_2019['Year'] = dframe_2019['file'].apply(lambda x: int(str(x)[0:4]))
dframe_2019['Yearly_Day'] = dframe_2019['file'].apply(lambda x: int(str(x)[4:7]))
dframe_2019['Hour'] = dframe_2019['file'].apply(lambda x: int(str(x)[7:9]))
dframe_2019['Minute'] = dframe_2019['file'].apply(lambda x: int(str(x)[9:11]))
dframe_2019['Second'] = dframe_2019['file'].apply(lambda x: int(str(x)[11:13]))
base_date = datetime(year=2019, month=1, day=1)
dframe_2019['date'] = [
base_date + timedelta(days=int(row[0] - 1), hours=int(row[1]), minutes=int(row[2]),
seconds=int(row[3])
) for row in
dframe_2019[[
'Yearly_Day',
'Hour',
'Minute',
'Second']].values]
dframe_2019.drop(columns=['file', 'Year', 'Hour', 'Minute', 'Yearly_Day', 'Second'], inplace=True)
dframe_2019 = dframe_2019[dframe_2019['date'] < end_date]
dframe_2019 = dframe_2019[dframe_2019['date'] > start_date]
dframe_2019.set_index('date', inplace=True)
dframe_2019.rename(columns={compound: f'{compound}_mr'}, inplace=True)
elif start_year == 2018:
if end_year >= 2018:
dframe_2018 = pd.read_excel(r'Z:\Data\Summit_GC\Summit_GC_2018\NMHC_results\TrapBlanks_2018.xlsx',
header=None)
dframe_2018.set_index(0, inplace=True)
dframe_transposed = dframe_2018.T
dframe_2018 = dframe_transposed.loc[:, [compound]]
dframe_2018 = dframe_2018.iloc[:, [j for j, c in enumerate(dframe_2018.columns) if j not in [0, 2, 3]]]
dframe_2018['file'] = dframe_transposed.iloc[:, 0]
dframe_2018['decimal_date'] = dframe_transposed.iloc[:, 39]
dframe_2018.dropna(inplace=True, subset=['file'])
dframe_2018['decimal_date_year'] = [(2018 + (float(row[0]) - 1) / 365) for row in
dframe_2018[['decimal_date']].values]
dframe_2018['Year'] = dframe_2018['file'].apply(lambda x: int(str(x)[0:4]))
dframe_2018['Yearly_Day'] = dframe_2018['file'].apply(lambda x: int(str(x)[4:7]))
dframe_2018['Hour'] = dframe_2018['file'].apply(lambda x: int(str(x)[7:9]))
dframe_2018['Minute'] = dframe_2018['file'].apply(lambda x: int(str(x)[9:11]))
dframe_2018['Second'] = dframe_2018['file'].apply(lambda x: int(str(x)[11:13]))
base_date = datetime(year=2018, month=1, day=1)
dframe_2018['date'] = [base_date + timedelta(days=int(row[0] - 1), hours=int(row[1]), minutes=int(row[2]),
seconds=int(row[3])
) for row in
dframe_2018[[
'Yearly_Day',
'Hour',
'Minute',
'Second']].values]
dframe_2018.drop(columns=['file', 'Year', 'Hour', 'Minute', 'Yearly_Day', 'Second'], inplace=True)
dframe_2018 = dframe_2018[dframe_2018['date'] < end_date]
dframe_2018 = dframe_2018[dframe_2018['date'] > start_date]
dframe_2018.set_index('date', inplace=True)
dframe_2018.rename(columns={compound: f'{compound}_mr'}, inplace=True)
if end_year >= 2019:
dframe_2019 = pd.read_excel(r'C:\Users\ARL\Desktop\Summit_GC_2019\NMHC_results\TrapBlanks_2019.xlsx',
header=None)
dframe_2019.set_index(0, inplace=True)
dframe_transposed = dframe_2019.T
dframe_2019 = dframe_transposed.loc[:, [compound]]
dframe_2019 = dframe_2019.iloc[:, [j for j, c in enumerate(dframe_2019.columns) if j not in [0, 2, 3]]]
dframe_2019['file'] = dframe_transposed.iloc[:, 0]
dframe_2019['decimal_date'] = dframe_transposed.iloc[:, 39]
dframe_2019.dropna(inplace=True, subset=['file'])
dframe_2019['decimal_date_year'] = [(2019 + (float(row[0]) - 1) / 365) for row in
dframe_2019[['decimal_date']].values]
dframe_2019['Year'] = dframe_2019['file'].apply(lambda x: int(str(x)[0:4]))
dframe_2019['Yearly_Day'] = dframe_2019['file'].apply(lambda x: int(str(x)[4:7]))
dframe_2019['Hour'] = dframe_2019['file'].apply(lambda x: int(str(x)[7:9]))
dframe_2019['Minute'] = dframe_2019['file'].apply(lambda x: int(str(x)[9:11]))
dframe_2019['Second'] = dframe_2019['file'].apply(lambda x: int(str(x)[11:13]))
base_date = datetime(year=2019, month=1, day=1)
dframe_2019['date'] = [
base_date + timedelta(days=int(row[0] - 1), hours=int(row[1]), minutes=int(row[2]),
seconds=int(row[3])
) for row in
dframe_2019[[
'Yearly_Day',
'Hour',
'Minute',
'Second']].values]
dframe_2019.drop(columns=['file', 'Year', 'Hour', 'Minute', 'Yearly_Day', 'Second'], inplace=True)
dframe_2019 = dframe_2019[dframe_2019['date'] < end_date]
dframe_2019 = dframe_2019[dframe_2019['date'] > start_date]
dframe_2019.set_index('date', inplace=True)
dframe_2019.rename(columns={compound: f'{compound}_mr'}, inplace=True)
elif start_year == 2019:
dframe_2019 = pd.read_excel(r'C:\Users\ARL\Desktop\Summit_GC_2019\NMHC_results\TrapBlanks_2019.xlsx',
header=None)
dframe_2019.set_index(0, inplace=True)
dframe_transposed = dframe_2019.T
dframe_2019 = dframe_transposed.loc[:, [compound]]
dframe_2019 = dframe_2019.iloc[:, [j for j, c in enumerate(dframe_2019.columns) if j not in [0, 2, 3]]]
dframe_2019['file'] = dframe_transposed.iloc[:, 0]
dframe_2019['decimal_date'] = dframe_transposed.iloc[:, 39]
dframe_2019.dropna(inplace=True, subset=['file'])
dframe_2019['decimal_date_year'] = [(2019 + (float(row[0]) - 1) / 365) for row in
dframe_2019[['decimal_date']].values]
dframe_2019['Year'] = dframe_2019['file'].apply(lambda x: int(str(x)[0:4]))
dframe_2019['Yearly_Day'] = dframe_2019['file'].apply(lambda x: int(str(x)[4:7]))
dframe_2019['Hour'] = dframe_2019['file'].apply(lambda x: int(str(x)[7:9]))
dframe_2019['Minute'] = dframe_2019['file'].apply(lambda x: int(str(x)[9:11]))
dframe_2019['Second'] = dframe_2019['file'].apply(lambda x: int(str(x)[11:13]))
base_date = datetime(year=2019, month=1, day=1)
dframe_2019['date'] = [base_date + timedelta(days=int(row[0] - 1), hours=int(row[1]), minutes=int(row[2]),
seconds=int(row[3])
) for row in
dframe_2019[[
'Yearly_Day',
'Hour',
'Minute',
'Second']].values]
dframe_2019.drop(columns=['file', 'Year', 'Hour', 'Minute', 'Yearly_Day', 'Second'], inplace=True)
dframe_2019 = dframe_2019[dframe_2019['date'] < end_date]
dframe_2019 = dframe_2019[dframe_2019['date'] > start_date]
dframe_2019.set_index('date', inplace=True)
dframe_2019.rename(columns={compound: f'{compound}_mr'}, inplace=True)
dframe = pd.concat([dframe_2017, dframe_2018, dframe_2019])
dframe = dframe.loc[dframe.index < end_date]
dframe = dframe.loc[dframe.index > start_date]
dframe.fillna(value=99999, inplace=True)
return dframe
def excel_rf_BH(compound, start, end):
start_date = datetime.strptime(start, '%Y-%m-%d %H:%M:%S')
end_date = datetime.strptime(end, '%Y-%m-%d %H:%M:%S')
start_year = int(start_date.year)
end_year = int(end_date.year)
dframe_2017 = pd.DataFrame()
dframe_2018 = pd.DataFrame()
dframe_2019 = pd.DataFrame()
if start_year == 2017:
if end_year >= 2017:
dframe_2017 = pd.read_excel(r'Z:\Data\Summit_GC\Summit_GC_2017\NMHC_results\BH_STD_2017.xlsx',
header=None)
dframe_2017.set_index(0, inplace=True)
dframe_transposed = dframe_2017.T
dframe_2017 = dframe_transposed.loc[:, [compound]]
dframe_2017 = dframe_2017.iloc[:, [j for j, c in enumerate(dframe_2017.columns) if j not in [0, 2, 3]]]
dframe_2017['file'] = dframe_transposed.iloc[:, 0]
dframe_2017['decimal_date'] = dframe_transposed.iloc[:, 39]
dframe_2017.dropna(inplace=True, subset=['file'])
dframe_2017['decimal_date_year'] = [(2017 + (float(row[0]) - 1) / 365) for row in
dframe_2017[['decimal_date']].values]
dframe_2017['Year'] = dframe_2017['file'].apply(lambda x: int(str(x)[0:4]))
dframe_2017['Yearly_Day'] = dframe_2017['file'].apply(lambda x: int(str(x)[4:7]))
dframe_2017['Hour'] = dframe_2017['file'].apply(lambda x: int(str(x)[7:9]))
dframe_2017['Minute'] = dframe_2017['file'].apply(lambda x: int(str(x)[9:11]))
dframe_2017['Second'] = dframe_2017['file'].apply(lambda x: int(str(x)[11:13]))
base_date = datetime(year=2017, month=1, day=1)
dframe_2017['date'] = [base_date + timedelta(days=int(row[0] - 1), hours=int(row[1]), minutes=int(row[2]),
seconds=int(row[3])
                                   ) for row in
                                   dframe_2017[[
                                       'Yearly_Day',
                                       'Hour',
                                       'Minute',
                                       'Second']].values]
#!/usr/bin/env python
"""Script that parses CATME peer evaluation data and plots summary plots and
statistics.
The CATME Peer evaluation results are provided in a CSV file which contains
more than one table and mixed in metadata. The data are separated by double
line returns and are as follows:
1. Extraneous metadata
2. Table of answers to the per team member rating questions (it has two header
lines)
3. An aggregation table of the data in part 2
The next sections are optional: each is a list of question definitions
followed by a table of the responses to those questions. Here are the answer
options for these sets of questions, each tied to a score of 1, 2, 3, 4, or 5.
Team Conflict
=============
1. None or Not at all
2. Little or Rarely
3. Some
4. Much or Often
5. Very Much or Very Often
Team Satisfaction and Team Perspectives
=======================================
1. Strongly Disagree
2. Disagree
3. Neither Agree Nor Disagree
4. Agree
5. Strongly Agree
The final section is the private comments that the students provide.
"""
import os
import textwrap
from io import StringIO
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
import seaborn as sns
def load_main_table(table_text):
"""Returns a data frame with the peer to peer ratings for a single CATME
peer evaluation given the text from the CSV export."""
lines = table_text.split('\n')
i = 1
cols = []
for thing in lines[1].split('","'):
if thing in ['C ', 'I ', 'K ', 'E ', 'H ']:
cols.append(thing.strip() + str(i) + ' ')
if thing == 'H ':
i += 1
else:
cols.append(thing)
lines[1] = '","'.join(cols)
text = "\n".join(lines[1:])
df = pd.read_csv(StringIO(text))
df.index = df['Student ID']
return df
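The header loop in `load_main_table` above de-duplicates the repeated per-rater rating cells ('C ', 'I ', 'K ', 'E ', 'H ') by appending a rater number that advances after each 'H '. A standalone sketch of just that renaming step (the sample header cells are illustrative, not taken from a real export):

```python
# Sample header cells as they appear on the export's second line; the five
# rating columns repeat once per rater.
header_cells = ['Student Name', 'C ', 'I ', 'K ', 'E ', 'H ',
                'C ', 'I ', 'K ', 'E ', 'H ']
cols, i = [], 1
for cell in header_cells:
    if cell in ['C ', 'I ', 'K ', 'E ', 'H ']:
        # Tag the cell with the current rater number; 'H ' is the last
        # rating column, so seeing it moves on to the next rater.
        cols.append(cell.strip() + str(i) + ' ')
        if cell == 'H ':
            i += 1
    else:
        cols.append(cell)
```

`cols` then reads `['Student Name', 'C1 ', 'I1 ', 'K1 ', 'E1 ', 'H1 ', 'C2 ', ...]`, giving every column a unique name before `pd.read_csv` sees it.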
def find_delinquent_students(df):
"""Returns a list of student names who did not fill out the survey."""
    # TODO : Set up to print results with each student's name and email so an
    # email can quickly be sent to all students.
def is_int(s):
try:
int(s)
return True
except ValueError:
return False
delinquent_students = []
for name, group in df.groupby('Team ID'):
na_cols = group.columns[group.isna().any()].tolist()
num_members = len(group)
delinquent_rater_nums = set([int(name.strip()[-1]) for name in na_cols
if is_int(name.strip()[-1])])
delinquent_students += [
group['Student Name'][group['Rater #'] == num].values[0]
for num in delinquent_rater_nums if num <= num_members]
return delinquent_students
def merge_adjustment_factor(*dataframes, with_self=True):
"""Returns a data frame with student id as the index and the peer
evaluation instances as the columns. The entry is the adjustment factor
value. A numerical value is also computed in the column 'Improvement' that
shows whether they were ranked higher or lower as time progressed.
Parameters
==========
with_self : boolean, optional
If True the adjustment factor that includes the students self rating is
returned.
"""
    # TODO : Maybe it would be better to select the better of the two
    # adjustment factors. For students rated low by their team, the with-self
    # rating would improve the score, and students who rated themselves lower
    # than their team did would get a boost too.
if with_self:
col = 'Adj Factor (w/ Self)'
else:
col = 'Adj Factor (w/o Self)'
data = {}
for i, df in enumerate(dataframes):
data['P{}'.format(i + 1)] = df[col]
data['Student Name'] = df['Student Name']
data = pd.DataFrame(data)
data = data.dropna() # if student drops and is deleted after peer eval
    # calculate a slope value, an improvement metric, that characterizes whether
    # a student's ranking improved over time or not; positive values are
    # improvements and negative values mean they got rated worse over time
x_vals = list(range(len(dataframes)))
slopes = []
means = []
stds = []
adjusted_scores = []
    # I weight the later evals more than the earlier ones because the later
    # ones tend to be more serious; the stakes become more real as the class
    # progresses.
eval_names = ['P1', 'P2', 'P3', 'P4']
weights = [0.85, 0.90, 0.95, 1.0]
for idx, row in data.iterrows():
y_vals = row[eval_names[:len(dataframes)]].values.astype(float)
# Weight the latter reviews more than the earlier reviews.
mean = np.average(y_vals, weights=weights[:len(dataframes)])
# Calculate a "slope" val that indicates how little or how much
# improvement there was.
opt, _ = curve_fit(lambda x, slope, intercept: slope * x + intercept,
x_vals, y_vals)
improvement = opt[0]
# If the student was rated low but improved over time, bump their
        # factor up based on the improvement. Also, don't allow any factor
        # lower than 0.75. A factor of 0.75 can drop the grade two letter
# grades (based on 85).
if mean < 0.95 and improvement > 0.0:
adjusted_score = mean + 1.5 * improvement
else:
adjusted_score = max([0.75, mean])
means.append(mean)
stds.append(y_vals.std())
slopes.append(improvement)
adjusted_scores.append(adjusted_score)
data['Improvement'] = slopes
data['Mean Adj Factor'] = means
data['STD Adj Factor'] = stds
data['Final Adj Factor'] = adjusted_scores
return data
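The improvement metric computed above is a least-squares slope over a student's successive adjustment factors, and the mean is weighted toward the later evaluations. A standalone sketch with illustrative numbers (`np.polyfit` stands in for the `curve_fit` call; both fit a straight line):

```python
import numpy as np

scores = [0.88, 0.92, 0.97, 1.00]   # one student's four adjustment factors
weights = [0.85, 0.90, 0.95, 1.0]   # later evals weighted more heavily
weighted_mean = np.average(scores, weights=weights)
# Slope of a straight-line fit over eval index: positive means the
# student's ratings improved over the course of the class.
improvement, _intercept = np.polyfit(range(len(scores)), scores, 1)
```

For a low-rated but improving student, `merge_adjustment_factor` would then bump `weighted_mean` by `1.5 * improvement` before flooring the result at 0.75.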
def plot_student_adj(df, with_self=True):
"""Returns three axes. The first is a bar plot of the adjustment factor for
each student. The second is a bar plot showing the improvement value. And
the third is a bar plot of the adjustment factor modified by the
improvement score."""
fig, axes = plt.subplots(3, sharex=True)
df = df.sort_values('Final Adj Factor')
df.plot(x='Student Name', y='Mean Adj Factor', kind='bar',
yerr='STD Adj Factor', ylim=(0.6, 1.1), ax=axes[0])
df.plot(x='Student Name', y='Improvement', kind='bar', ax=axes[1])
df.plot(x='Student Name', y='Final Adj Factor', kind='bar',
ylim=(0.70, 1.1), ax=axes[2])
return axes
def load_catme_data_sections(path_to_file):
"""Returns a list of text sections from the CATME csv export."""
with open(path_to_file, 'r') as f:
text = f.read()
sections = text.split('\n\n')
return sections
def create_team_factor(df):
# NOTE : This is not complete and not used anywhere. Needs more work.
# TODO : What to do about note="over"
df['Team Factor'] = df['Adj Factor (w/ Self)']
unders = df['Note'] == 'Under'
df.loc[unders, 'Team Factor'] = df.loc[unders, 'Adj Factor (w/o Self)']
df.loc[df['Team Factor'] > 1.05, 'Team Factor'] = 1.05
df.loc[(df['Team Factor'] >= 0.95) & (df['Team Factor'] < 1.0), 'Team Factor'] = 1.0
if (df['Note'] == 'Manip').any():
    df.loc[df['Team ID'] == df.loc[df['Note'] == 'Manip', 'Team ID'].values[0], 'Team Factor'] = 1.0
return df
def parse_team_questions(question_map_text, score_text):
"""Returns a data frame with each asked question in a row.
Team Conflict
=============
Example text that maps an ID to the actual question::
"T1","How much conflict of ideas is there in your work group? (Task Conflict)"
"T2","How frequently do you have disagreements within your work group about the task of the project you are working on? (Task Conflict)"
"T3","How often do people in your work group have conflicting opinions about the project you are working on? (Task Conflict)"
"R1","How much relationship tension is there in your work group? (Relationship Conflict)"
"R2","How often do people get angry while working in your group? (Relationship Conflict)"
"R3","How much emotional conflict is there in your work group? (Relationship Conflict)"
"P1","How often are there disagreements about who should do what in your work group? (Process Conflict)"
"P2","How much conflict is there in your group about task responsibilities? (Process Conflict)"
"P3","How often do you disagree about resource allocation in your work group? (Process Conflict)"
This text is then followed by the scores for those questions::
,,,"Relationship Conflict",,,,,"Task Conflict",,,,,"Process Conflict",,,,,"Overall",,
"Student Name","Student ID","Team ID","R1","R2","R3","Mn","SD","T1","T2","T3","Mn","SD","P1","P2","P3","Mn","SD","Mn","SD"
"Surname01, Firstname01","12345","team01","1","1","1","1.00","0.00","1","1","1","1.00","0.00","1","1","1","1.00","0.00","1.00","0.00"
"Surname02, Firstname02","12346","team01","2","1","1","1.33","0.58","3","2","3","2.67","0.58","2","3","2","2.33","0.58","2.11","0.78"
"Surname03, Firstname03","12347","team01","1","1","1","1.00","0.00","2","1","1","1.33","0.58","1","1","1","1.00","0.00","1.11","0.33"
"Surname04, Firstname04","12348","team01","1","1","1","1.00","0.00","2","2","2","2.00","0.00","2","2","1","1.67","0.58","1.56","0.53"
Team Satisfaction
=================
"Q1","I am satisfied with my present teammates"
"Q2","I am pleased with the way my teammates and I work together"
"Q3","I am very satisfied with working in this team"
,,,"Team Satisfaction",,,,,
"Student Name","Student ID","Team ID","Q1","Q2","Q3","Mn","SD"
"Surname01, Firstname01","12345","team01","4","4","4","4.00","0.00"
"Surname02, Firstname02","12346","team01","4","4","3","3.67","0.58"
Team Perspectives
=================
"TA1","Being part of the team allows team members to do enjoyable work (Task Attraction)"
"TA2","Team members get to participate in enjoyable activities (Task Attraction)"
"TA3","Team members like the work that the group does (Task Attraction)"
"IC1","Team members like each other (Interpersonal Cohesiveness)"
"IC2","Team members get along well (Interpersonal Cohesiveness)"
"IC3","Team members enjoy spending time together (Interpersonal Cohesiveness)"
"TC1","Our team is united in trying to reach its goals for performance (Task Commitment)"
"TC2","I'm unhappy with my team's level of commitment to the task (Task Commitment) [scale reversed]"
"TC3","Our team members have conflicting aspirations for the team's performance (Task Commitment) [scale reversed]"
,,,"Interpersonal Cohesiveness",,,,,"Task Commitment",,,,,"Task Attraction",,,,,"Overall",,
"Student Name","Student ID","Team ID","IC1","IC2","IC3","Mn","SD","TC1","TC2","TC3","Mn","SD","TA1","TA2","TA3","Mn","SD","Mn","SD"
"Surname01, Firstname01","12345","team01","5","5","4","4.67","0.58","5","1","2","4.67","0.58","5","4","4","4.33","0.58","4.56","0.53"
"Surname02, Firstname02","12346","team01","4","4","3","3.67","0.58","4","3","4","3.00","1.00","4","3","4","3.67","0.58","3.44","0.73"
"Surname03, Firstname03","12347","team01","5","5","5","5.00","0.00","5","1","2","4.67","0.58","5","5","5","5.00","0.00","4.89","0.33"
"""
# need
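The body of `parse_team_questions` is cut off above. A minimal sketch of the two parsing steps its docstring describes, assuming the quoted CSV formats shown there (both helper names are hypothetical):

```python
import re
from io import StringIO

import pandas as pd

def parse_question_map(question_map_text):
    """Map question IDs like 'T1' or 'Q2' to their full question text."""
    return dict(re.findall(r'"([A-Z]+\d)","([^"]+)"', question_map_text))

def parse_score_table(score_text):
    """Read the score rows, skipping the category banner line on top."""
    lines = score_text.strip().split('\n')
    return pd.read_csv(StringIO('\n'.join(lines[1:])))
```

The question map and the score table could then be joined into one long-format frame with one asked question per row.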
+= ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateAgentGroupRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.CreateAgentGroup(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeTaskDetail(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CatClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeTaskDetailRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.DescribeTaskDetail(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doCreateProbeTasks(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CatClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.CreateProbeTasksRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.CreateProbeTasks(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDescribeAlarmTopic(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CatClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeAlarmTopicRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.DescribeAlarmTopic(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
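# The polling loop above implements a generic "waiter": keep re-issuing the
# request until a JMESPath expression evaluated on the response equals the
# target value, or fail once the timeout elapses. A standalone sketch of that
# pattern (all names below are illustrative, not part of tccli):
def _wait_until(fetch, predicate, timeout, interval, clock=time.time, sleep=time.sleep):
    start = clock()
    while True:
        value = fetch()
        if predicate(value):
            return value
        if clock() - start >= timeout:
            raise TimeoutError('waiter timed out; last value: %r' % (value,))
        sleep(interval)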
def doDescribeAlarms(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CatClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DescribeAlarmsRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.DescribeAlarms(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doGetReturnCodeHistory(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CatClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.GetReturnCodeHistoryRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.GetReturnCodeHistory(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doDeleteTasks(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CatClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.DeleteTasksRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.DeleteTasks(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doGetDailyAvailRatio(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CatClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.GetDailyAvailRatioRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.GetDailyAvailRatio(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doResumeProbeTask(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred = credential.Credential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.Token]
)
http_profile = HttpProfile(
reqTimeout=60 if g_param[OptionsDefine.Timeout] is None else int(g_param[OptionsDefine.Timeout]),
reqMethod="POST",
endpoint=g_param[OptionsDefine.Endpoint],
proxy=g_param[OptionsDefine.HttpsProxy.replace('-', '_')]
)
profile = ClientProfile(httpProfile=http_profile, signMethod="HmacSHA256")
mod = CLIENT_MAP[g_param[OptionsDefine.Version]]
client = mod.CatClient(cred, g_param[OptionsDefine.Region], profile)
client._sdkVersion += ("_CLI_" + __version__)
models = MODELS_MAP[g_param[OptionsDefine.Version]]
model = models.ResumeProbeTaskRequest()
model.from_json_string(json.dumps(args))
start_time = time.time()
while True:
rsp = client.ResumeProbeTask(model)
result = rsp.to_json_string()
try:
json_obj = json.loads(result)
except TypeError as e:
json_obj = json.loads(result.decode('utf-8')) # python3.3
if not g_param[OptionsDefine.Waiter] or search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj) == g_param['OptionsDefine.WaiterInfo']['to']:
break
cur_time = time.time()
if cur_time - start_time >= g_param['OptionsDefine.WaiterInfo']['timeout']:
raise ClientError('Request timeout, wait `%s` to `%s` timeout, last request is %s' %
(g_param['OptionsDefine.WaiterInfo']['expr'], g_param['OptionsDefine.WaiterInfo']['to'],
search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj)))
else:
print('Inquiry result is %s.' % search(g_param['OptionsDefine.WaiterInfo']['expr'], json_obj))
time.sleep(g_param['OptionsDefine.WaiterInfo']['interval'])
FormatOutput.output("action", json_obj, g_param[OptionsDefine.Output], g_param[OptionsDefine.Filter])
def doModifyTaskEx(args, parsed_globals):
g_param = parse_global_arg(parsed_globals)
if g_param[OptionsDefine.UseCVMRole.replace('-', '_')]:
cred = credential.CVMRoleCredential()
elif g_param[OptionsDefine.RoleArn.replace('-', '_')] and g_param[OptionsDefine.RoleSessionName.replace('-', '_')]:
cred = credential.STSAssumeRoleCredential(
g_param[OptionsDefine.SecretId], g_param[OptionsDefine.SecretKey], g_param[OptionsDefine.RoleArn.replace('-', '_')],
g_param[OptionsDefine.RoleSessionName.replace('-', '_')]
)
else:
cred | |
keys:
        * "brightness": Brightness setting
        * "contrast": Contrast setting
.. versionadded:: 2.0
"""
raise NotImplementedError
@abstractmethod
def set_stem_detector_param(self, name, values, ignore_errors=False):
"""
Set parameters for STEM detector *name*. The parameters should be given as a dictionary.
Allowed keys are described in the :meth:`get_stem_detector_param` method.
        If *ignore_errors* is set and setting a parameter fails, no error is raised.
        An unknown detector *name*, however, still raises an error.
:raises KeyError: If an unknown detector *name* is used.
.. versionadded:: 2.0
"""
raise NotImplementedError
@abstractmethod
def get_stem_acquisition_param(self):
"""
Return dictionary with STEM acquisition parameters.
The returned dictionary has the following keys:
* "image_size": Size of image (see :class:`AcqImageSize`): "FULL", "HALF", ...
* "binning": Binning
* "dwell_time(s)": Dwell time in seconds
.. versionadded:: 2.0
.. note::
        On Titan 1.1 software, reading the parameters fails if STEM is not available. See :ref:`restrictions`.
"""
raise NotImplementedError
@abstractmethod
def set_stem_acquisition_param(self, values, ignore_errors=False):
"""
Set parameters for STEM acquisition. The parameters should be given as a dictionary.
Allowed keys are described in the :meth:`get_stem_acquisition_param` method.
        If *ignore_errors* is set and setting a parameter fails, no error is raised.
.. versionadded:: 2.0
"""
raise NotImplementedError
def get_detector_param(self, name):
"""
        Return parameters for detector *name* as dictionary.
For "CAMERA" detectors the dict will have the following keys:
* "image_size": Size of image (see :class:`AcqImageSize`): "FULL", "HALF", ...
* "binning": Binning
* "exposure(s)": Exposure time in seconds
* "correction": Correction mode (see :class:`AcqImageCorrection`)
* "exposure_mode": Exposure mode (see :class:`AcqExposureMode`)
* "shutter_mode": Shutter mode (see :class:`AcqShutterMode`)
* "pre_exposure(s)": Pre exposure time in seconds
* "pre_exposure_pause(s)": Pre exposure pause time in seconds
For "STEM_DETECTORS" the dict will have the following keys:
        * "brightness": Brightness setting
        * "contrast": Contrast setting
* "image_size": Size of image (see :class:`AcqImageSize`): "FULL", "HALF", ...
* "binning": Binning
* "dwell_time(s)": Dwell time in seconds
.. versionchanged:: 2.0
Key returning dwell time renamed from 'dwelltime(s)' to 'dwell_time(s)'
.. deprecated:: 2.0
        Use the methods :meth:`get_camera_param`, :meth:`get_stem_detector_param`, or
        :meth:`get_stem_acquisition_param` instead.
"""
import warnings
warnings.warn("Microscope.get_detector_param() is deprecated. Please use get_stem_detector_param() or "
"get_camera_param() instead.", DeprecationWarning)
try:
param = self.get_camera_param(name)
except KeyError:
try:
param = self.get_stem_detector_param(name)
except KeyError:
raise KeyError("No detector with name %s" % name)
else:
param.update(self.get_stem_acquisition_param())
return param
def set_detector_param(self, name, param):
"""
Set parameters for detector *name*. The parameters should be given as a dictionary.
Allowed keys are described in the :meth:`get_detector_param` method.
If setting a parameter fails, no error is given. Unknown parameters are ignored.
.. versionchanged:: 2.0
Dwell time can be set by parameters 'dwelltime(s)' and 'dwell_time(s)'.
.. deprecated:: 2.0
Use the methods :meth:`set_camera_param`, :meth:`set_stem_detector_param`,
or :meth:`set_stem_acquisition_param` instead.
"""
import warnings
        warnings.warn("Microscope.set_detector_param() is deprecated. Please use set_stem_detector_param(), "
                      "set_camera_param(), or set_stem_acquisition_param() instead.", DeprecationWarning)
try:
param = self.set_camera_param(name, param, ignore_errors=True)
except KeyError:
try:
param = self.set_stem_detector_param(name, param, ignore_errors=True)
except KeyError:
raise KeyError("No detector with name %s" % name)
else:
if ('dwelltime(s)' in param) and not ('dwell_time(s)' in param):
param = dict(param)
param['dwell_time(s)'] = param.pop('dwelltime(s)')
self.set_stem_acquisition_param(param, ignore_errors=True)
return param
@abstractmethod
def acquire(self, *args):
"""
Acquire images for all detectors given as argument.
        The images are returned in a dict indexed by detector name.
"""
raise NotImplementedError
@abstractmethod
def get_image_shift(self):
"""
Return image shift as (x,y) tuple in meters.
.. note::
The accuracy of this value depends on the accuracy of the calibration within the microscope.
On FEI microscopes this corresponds to the state of "User Image Shift" (in different units though).
"""
raise NotImplementedError
@abstractmethod
def set_image_shift(self, pos):
"""
Set image shift to position `pos`, which should be an (x, y) tuple, as returned for instance by
:meth:`get_image_shift`.
"""
raise NotImplementedError
@abstractmethod
def get_beam_shift(self):
"""
Return beam shift as (x,y) tuple in meters.
.. note::
The accuracy of this value depends on the accuracy of the calibration within the microscope.
On FEI microscopes this corresponds to the state of "User Beam Shift" (in different units though).
"""
raise NotImplementedError
@abstractmethod
def set_beam_shift(self, shift):
"""
Set beam shift to position `shift`, which should be an (x, y) tuple, as returned for instance by
:meth:`get_beam_shift`.
:param shift: Shift value
:type shift: Tuple[float, float]
"""
raise NotImplementedError
@abstractmethod
def get_beam_tilt(self):
"""
Return beam tilt as (x, y) tuple.
        The tilt is returned in radians. The accuracy of this value depends on the accuracy of the
        calibration within the microscope, so it should not be trusted blindly.
        On FEI microscopes this corresponds to the state of "DF Tilt"; however, the tilt is always returned in
        Cartesian coordinates.
"""
raise NotImplementedError
@abstractmethod
def set_beam_tilt(self, tilt):
"""
Set beam tilt to position `tilt`, which should be an (x, y) tuple, as returned for instance by
:meth:`get_beam_tilt`.
        On FEI microscopes, this will turn on dark field mode unless (0, 0) is set.
        If (0, 0) is set, dark field mode is turned off.
"""
raise NotImplementedError
@abstractmethod
def normalize(self, mode="ALL"):
"""
Normalize some or all lenses.
        Possible values for *mode* are:
* "SPOTSIZE": The C1 lens
* "INTENSITY": The C2 and - if present - the C3 lens
* "CONDENSER": C1, C2, and - if present - the C3 lens
* "MINI_CONDENSER": The mini condenser lens
* "OBJECTIVE": Objective and mini condenser lens
* "PROJECTOR": Projective lenses (DL, IL, P1, P2)
* "OBJECTIVE_CONDENSER": Objective and condenser system
* "OBJECTIVE_PROJECTOR": Objective and projector system
* "ALL": All lenses
:param mode: What to normalize. If omitted, all lenses are normalized.
:type mode: str
.. versionadded:: 1.0.9
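        Example (illustrative; assumes a concrete instance named ``microscope``)::

            microscope.normalize("PROJECTOR")   # projector system only
            microscope.normalize()              # all lenses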
"""
raise NotImplementedError
@abstractmethod
def get_projection_sub_mode(self):
"""
Return current projection sub mode.
.. versionadded:: 1.0.10
"""
raise NotImplementedError
@abstractmethod
def get_projection_mode(self):
"""
Return current projection mode.
.. versionadded:: 1.0.9
"""
raise NotImplementedError
@abstractmethod
def set_projection_mode(self, mode):
"""
Set projection mode.
:param mode: Projection mode: "DIFFRACTION" or "IMAGING"
:type mode: Literal['DIFFRACTION', 'IMAGING']
.. versionadded:: 1.0.9
.. note::
On Titan 1.1 software changing the mode from IMAGING to DIFFRACTION and back again changes the
magnification. See :ref:`restrictions`.
"""
raise NotImplementedError
@abstractmethod
def get_projection_mode_string(self):
"""
        Return description of current projection mode. Possible return values are: "LM", "Mi", "SA", "Mh", "LAD", and "D".
.. versionadded:: 1.0.9
"""
raise NotImplementedError
@abstractmethod
def get_magnification_index(self):
"""
Return index of current magnification/camera length.
.. versionadded:: 1.0.9
"""
raise NotImplementedError
@abstractmethod
def set_magnification_index(self, index):
"""
Set magnification / camera length index.
:param index: Magnification/Camera length index
:type index: int
.. versionadded:: 1.0.9
"""
raise NotImplementedError
@abstractmethod
def get_indicated_camera_length(self):
"""
Return (indicated) camera length in meters in diffraction modes.
        If the microscope is in imaging mode, 0 is returned.
.. versionadded:: 1.0.9
"""
raise NotImplementedError
@abstractmethod
def get_indicated_magnification(self):
"""
Return (indicated) magnification in imaging modes.
        If the microscope is in diffraction mode, 0 is returned.
.. note::
On Titan 1.1 software this method returns 0.0 regardless of used mode. See :ref:`restrictions`.
.. versionadded:: 1.0.9
"""
raise NotImplementedError
@abstractmethod
def get_defocus(self):
"""
Return defocus value. The returned value is in arbitrary units.
        Positive values go into overfocus direction, negative values into underfocus direction.
.. note::
On Titan 1.1 software the defocus value is the actual defocus relative to the eucentric defocus in meters.
The accuracy of this value depends on the accuracy of the calibration within the microscope.
.. versionadded:: 1.0.9
"""
raise NotImplementedError
@abstractmethod
def set_defocus(self, value):
"""
        Set defocus value. The value is in arbitrary units. Positive values go into overfocus direction, negative
        values into underfocus direction.
.. note::
On Titan 1.1 software the defocus value is the actual defocus relative to the eucentric defocus in meters.
The accuracy of this value depends on the accuracy of the calibration within the microscope.
:param value: Defocus to set
:type value: float
.. versionadded:: 1.0.9
"""
raise NotImplementedError
@abstractmethod
def get_objective_excitation(self):
"""
Return excitation of objective lens.
.. versionadded:: 1.0.9
"""
raise NotImplementedError
@abstractmethod
def get_intensity(self):
"""
Return intensity value.
The returned value is in arbitrary units.
Increasing values go into overfocus direction, negative values into | |
# BERT/run_pipeline.py
import argparse
import torch.multiprocessing as mp
import math
import sys
import time
import os
import psutil
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.nn import RemoteModule
from torch.utils.data import DataLoader
import torch.distributed.rpc as rpc
from torch.distributed.optim import DistributedOptimizer
import torch.distributed.autograd as dist_autograd
from .model import MLMTask, MLMTask2, MLMTaskEmbedding, MLMTaskEncoder, MLMTaskHead
from fairscale.experimental.nn.distributed_pipeline import DistributedLoss, DistributedPipeline, PipelineModulesGraph
from fairscale.experimental.nn.distributed_pipeline.trace import make_graph
def collate_batch(batch_data, args, mask_id, cls_id):
batch_data = torch.tensor(batch_data).long().view(args.batch_size, -1).t().contiguous()
# Generate masks with args.mask_frac
data_len = batch_data.size(0)
ones_num = int(data_len * args.mask_frac)
zeros_num = data_len - ones_num
lm_mask = torch.cat([torch.zeros(zeros_num), torch.ones(ones_num)])
lm_mask = lm_mask[torch.randperm(data_len)]
batch_data = torch.cat((torch.tensor([[cls_id] * batch_data.size(1)]).long(), batch_data))
lm_mask = torch.cat((torch.tensor([0.0]), lm_mask))
#targets = torch.stack([batch_data[i] for i in range(lm_mask.size(0)) if lm_mask[i]]).view(-1)
targets = torch.stack([batch_data[i] for i in range(lm_mask.size(0))]).view(-1)
batch_data = batch_data.masked_fill(lm_mask.bool().unsqueeze(1), mask_id)
return batch_data, lm_mask, targets
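# Sketch of the mask construction in collate_batch above: a fixed fraction of
# row positions is flagged for masking, shuffled, and the prepended CLS row is
# never masked (illustrative pure-Python helper, not used by the script).
def _make_lm_mask(data_len, mask_frac, rng):
    ones = int(data_len * mask_frac)
    mask = [0] * (data_len - ones) + [1] * ones
    rng.shuffle(mask)
    return [0] + mask  # index 0 is the CLS row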
def process_raw_data(raw_data, args):
_num = raw_data.size(0) // (args.batch_size * args.bptt)
raw_data = raw_data[:(_num * args.batch_size * args.bptt)]
return raw_data
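# Pure-Python sketch of the trimming above: process_raw_data keeps only a whole
# number of (batch_size * bptt)-sized chunks and drops the tail tokens, so the
# data reshapes evenly into batches (illustrative helper, not used by the script).
def _trim_to_multiple(seq, batch_size, bptt):
    num = len(seq) // (batch_size * bptt)
    return seq[: num * batch_size * bptt]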
class Loss(nn.Module):
def __init__(self, criterion, ntokens):
super().__init__()
self.ntokens = ntokens
self.criterion = criterion
#self.criterion = nn.CrossEntropyLoss()
def forward(self, input, target):
#print("INPUT:", input.sum().item())
return self.criterion(input.view(-1, self.ntokens), target.to(input.device))
class Timer(object):
ALL = []
def __init__(self, name):
self._name = name
self._children = {}
self._start_time = None
self.reset()
Timer.ALL.append(self)
def reset(self):
self._elapsed = 0.0
self._elapsed_sqr = 0.0
self._count = 0
def __enter__(self):
self._start_time = time.time()
self._count += 1
return self
def __exit__(self, *_):
delta = time.time() - self._start_time
self._elapsed += delta
self._elapsed_sqr += delta * delta
self._start_time = None
@property
def name(self):
return self._name
def avg(self):
if self._count == 0: return 0
return self._elapsed / self._count
def std_dev(self):
if self._count == 0: return
avg = self._elapsed / self._count
return math.sqrt(self._elapsed_sqr / self._count - avg * avg)
@classmethod
def report(cls):
r = {}
s = {}
for t in cls.ALL:
r[t.name] = t.avg()
s[t.name] = t.std_dev()
print({"avg": r, "var": s})
@classmethod
def reset_all(cls):
for t in cls.ALL:
t.reset()
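# Timer.std_dev above relies on the identity Var[x] = E[x^2] - (E[x])^2 over the
# recorded durations; the same computation on an explicit list of samples
# (illustrative helper, not used by the script):
def _std_dev(samples):
    n = len(samples)
    mean = sum(samples) / n
    return math.sqrt(sum(x * x for x in samples) / n - mean * mean)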
timer_fwd = Timer('forward')
timer_loss = Timer('loss')
timer_bwd = Timer('backward')
timer_sync = Timer('sync')
timer_step = Timer('step')
timer_sync2 = Timer('sync2')
def get_item(rref, is_remote):
if is_remote:
return rpc.rpc_sync(rref.owner(), get_item, (rref, False))
return rref.local_value().view(-1)[0].item()
def run_batch(args, optimizer, model, loss_module, data, lm_mask, targets):
with dist_autograd.context() as context_id:
data = data.transpose(0, 1)
with timer_fwd:
output = model(data)
output_item = get_item(output, True)
with timer_loss:
loss = loss_module(output, rpc.RRef(targets)).to_here()
loss_item = loss.item()
with timer_bwd:
dist_autograd.backward(context_id, [loss])
with timer_sync:
sync_devices(args)
with timer_step:
optimizer.step(context_id)
with timer_sync2:
sync_devices(args)
return loss.item()
def train(model, vocab, train_loss_log, train_data,
optimizer, criterion, ntokens, epoch, args):
#model.train()
total_loss = 0.
start_time = time.time()
mask_id = vocab.stoi['<MASK>']
cls_id = vocab.stoi['<cls>']
train_loss_log.append(0.0)
dataloader = DataLoader(train_data, batch_size=args.batch_size * args.bptt,
shuffle=False, collate_fn=lambda b: collate_batch(b, args, mask_id, cls_id))
loss_module = DistributedLoss(Loss, criterion, ntokens)
num_words = 0
for batch, (data, lm_mask, targets) in enumerate(dataloader):
num_words += targets.numel()
try:
loss = run_batch(args, optimizer, model, loss_module, data, lm_mask, targets)
except:
#print(rpc.rpc_sync("w3", torch.cuda.memory_stats, args=(3,)))
#time.sleep(60)
raise
#rpc.rpc_sync(f"w3", torch.cuda.empty_cache)
print("LOSS:", "%0.3f" % (loss,))
total_loss += loss
if batch % args.log_interval == (args.log_interval - 1) and batch > 0:
Timer.report()
Timer.reset_all()
cur_loss = total_loss / args.log_interval
elapsed = time.time() - start_time
train_loss_log[-1] = cur_loss
print(' wps: {:0.2f} | {:5d}/{:5d} batches | s/batch {:5.2f} | loss {:5.2f}'
.format(num_words / elapsed, batch,
len(train_data) // (args.bptt * args.batch_size),
elapsed / args.log_interval,
cur_loss))
if args.num_batch > 0 and batch >= args.num_batch:
break
total_loss = 0
num_words = 0
start_time = time.time()
class NoOp(nn.Module):
def forward(self, input):
#import math; print(input.shape,"=",math.prod(input.shape))
return input
import threading, sys, traceback, signal
def dumpstacks(signal, frame):
print("\n============================= dumpstacks", os.getpid(), "=============================")
id2name = dict([(th.ident, th.name) for th in threading.enumerate()])
code = []
for threadId, stack in sys._current_frames().items():
code.append("\n# Thread: %s(%d)" % (id2name.get(threadId,""), threadId))
for filename, lineno, name, line in traceback.extract_stack(stack):
code.append('File: "%s", line %d, in %s' % (filename, lineno, name))
if line:
code.append(" %s" % (line.strip()))
print("\n".join(code))
def sync_devices(args):
ngpus = args.world_size // args.num_workers
futs = []
for i in range(args.world_size):
futs.append(rpc.rpc_async(f"w{i}", torch.cuda.synchronize, (i%ngpus,)))
torch.futures.wait_all(futs)
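# Sketch of the rank-to-device layout assumed by sync_devices and run_worker:
# world_size ranks are spread over num_workers hosts, each holding
# world_size // num_workers GPUs, and rank i drives local GPU i % ngpus
# (illustrative helper, not used by the script).
def _local_gpu(rank, world_size, num_workers):
    ngpus = world_size // num_workers
    return rank % ngpus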
def run_main(args):
import torchtext
if args.dataset == 'WikiText103':
from torchtext.experimental.datasets import WikiText103 as WLMDataset
elif args.dataset == 'WikiText2':
from torchtext.experimental.datasets import WikiText2 as WLMDataset
elif args.dataset == 'WMTNewsCrawl':
from torchtext.experimental.datasets import WMTNewsCrawl as WLMDataset
elif args.dataset == 'EnWik9':
from torchtext.datasets import EnWik9
elif args.dataset == 'BookCorpus':
from data import BookCorpus
else:
print("dataset for MLM task is not supported")
    try:
        vocab = torch.load(args.save_vocab)
    except (OSError, RuntimeError):  # vocab file missing or unreadable; rebuild it below
print(f"WLMDataset = {WLMDataset}")
train_dataset, valid_dataset, test_dataset = WLMDataset()
old_vocab = train_dataset.vocab
print(f"len(old_vocab) = {len(old_vocab)}")
vocab = torchtext.vocab.Vocab(counter=old_vocab.freqs,
specials=['<unk>', '<pad>', '<MASK>'])
with open(args.save_vocab, 'wb') as f:
torch.save(vocab, f)
if args.dataset == 'WikiText103' or args.dataset == 'WikiText2':
train_dataset, valid_dataset, test_dataset = WLMDataset(vocab=vocab)
train_dataset.data = torch.cat(tuple(filter(lambda t: t.numel() > 0, train_dataset)))
valid_dataset.data = torch.cat(tuple(filter(lambda t: t.numel() > 0, valid_dataset)))
test_dataset.data = torch.cat(tuple(filter(lambda t: t.numel() > 0, test_dataset)))
elif args.dataset == 'WMTNewsCrawl':
from torchtext.experimental.datasets import WikiText2
test_dataset, valid_dataset = WikiText2(vocab=vocab, split=('test', 'valid'))
valid_dataset.data = torch.cat(tuple(filter(lambda t: t.numel() > 0, valid_dataset)))
test_dataset.data = torch.cat(tuple(filter(lambda t: t.numel() > 0, test_dataset)))
train_dataset = WLMDataset(vocab=vocab, split='train')
train_dataset.data = torch.cat(tuple(filter(lambda t: t.numel() > 0, train_dataset)))
elif args.dataset == 'EnWik9':
enwik9 = EnWik9()
idx1, idx2 = int(len(enwik9) * 0.8), int(len(enwik9) * 0.9)
train_data = torch.tensor([vocab.stoi[_id]
for _id in enwik9[0:idx1]]).long()
val_data = torch.tensor([vocab.stoi[_id]
for _id in enwik9[idx1:idx2]]).long()
test_data = torch.tensor([vocab.stoi[_id]
for _id in enwik9[idx2:]]).long()
from torchtext.experimental.datasets import LanguageModelingDataset
train_dataset = LanguageModelingDataset(train_data, vocab, lambda x: x)
valid_dataset = LanguageModelingDataset(val_data, vocab, lambda x: x)
test_dataset = LanguageModelingDataset(test_data, vocab, lambda x: x)
elif args.dataset == 'BookCorpus':
train_dataset, valid_dataset, test_dataset = BookCorpus(vocab)
train_data = process_raw_data(train_dataset.data, args)
val_data = process_raw_data(valid_dataset.data, args)
test_data = process_raw_data(test_dataset.data, args)
ntokens = len(train_dataset.get_vocab())
print(f"Vocabulary size = {ntokens}")
ngpus = args.world_size // args.num_workers
def get_remote_device(i):
return f"w{i}/cuda:{i % ngpus}"
layers = [RemoteModule(get_remote_device(0), MLMTaskEmbedding, (ntokens, args.emsize))]
n_encoders = args.nlayers
skip_start_layers = int(args.ep_embedding)
skip_end_layers = int(args.ep_head) + int(args.ep_noop)
num_parts = min(n_encoders, args.world_size-skip_start_layers-skip_end_layers)
for di, device in enumerate([get_remote_device(i) for i in range(skip_start_layers, num_parts+skip_start_layers)]):
this_encoders = n_encoders // (num_parts-di)
layers.append(RemoteModule(device, MLMTaskEncoder, (args.emsize, args.nhead, args.nhid, this_encoders, args.dropout)))
n_encoders -= this_encoders
next_layer = num_parts + skip_start_layers - 1
if args.ep_head:
next_layer += 1
layers.append(RemoteModule(get_remote_device(next_layer), MLMTaskHead, (ntokens, args.emsize)))
if args.ep_noop:
next_layer += 1
layers.append(RemoteModule(get_remote_device(next_layer), NoOp, ()))
org_model = nn.Sequential(*layers)
graph = make_graph(org_model)
#for node in graph.nodes: print(node.module.on, node.get_name())
model = DistributedPipeline(graph, chunks=args.num_chunks if args.num_chunks else min(args.world_size, args.batch_size))
params = sum([torch.prod(torch.tensor(p.rpc_sync().size())).item() for p in model.parameter_rrefs()])
print(f'Total parameters = {int(params // 1e6)}M')
criterion = nn.CrossEntropyLoss()
optimizer = DistributedOptimizer(
torch.optim.SGD,
model.parameter_rrefs(),
lr=args.lr,
)
best_val_loss = None
train_loss_log, val_loss_log = [], []
for epoch in range(1, args.epochs + 1):
epoch_start_time = time.time()
train(model, train_dataset.vocab, train_loss_log, train_data,
optimizer, criterion, ntokens, epoch, args)
def run_worker(rank, args):
n_gpu = args.world_size // args.num_workers
"""
if rank < 1:
psutil.Process().cpu_affinity([0,1])
"""
n_cpu = psutil.cpu_count()
aff = [(rank + 1 + i) % n_cpu for i in range(n_cpu // 3)]
if rank < 0:
rank = 0
is_master = True
else:
is_master = False
signal.signal(signal.SIGUSR1, dumpstacks)
first_rank = n_gpu * int(os.environ.get('SLURM_PROCID', '0'))
rank += first_rank
if True:# rank==first_rank or is_master:
print("rank:", -1 if is_master else rank, "pid", os.getpid())
torch.cuda.set_per_process_memory_fraction(0.9, rank - first_rank)
torch.manual_seed(args.seed)
os.environ['MASTER_ADDR'] = os.environ.get("MASTER_ADDR", "127.0.0.1")
os.environ['MASTER_PORT'] = os.environ.get("MASTER_PORT", '29500')
options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=256, rpc_timeout=3600)
for i in range(args.world_size):
options.set_device_map(f"w{i}", {rank - first_rank: i % (args.world_size // args.num_workers)})
options.set_device_map("master", {rank - first_rank: 0})
rpc.init_rpc(
"master" if is_master else f"w{rank}",
rank=args.world_size if is_master else rank,
world_size=args.world_size + 1,
rpc_backend_options=options
)
if is_master:
run_main(args)
rpc.shutdown()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Pipeline experiments')
parser.add_argument('--emsize', type=int, default=768,
help='size of word embeddings')
parser.add_argument('--nhid', type=int, default=3072,
help='number of hidden units per layer')
parser.add_argument('--nlayers', type=int, default=12,
help='number of layers')
parser.add_argument('--nhead', type=int, default=12,
help='the number of heads in the encoder/decoder of the transformer model')
parser.add_argument('--lr', type=float, default=0.1,
help='initial learning rate')
parser.add_argument('--clip', type=float, default=0.1,
help='gradient clipping')
parser.add_argument('--epochs', type=int, default=8,
help='upper epoch limit')
parser.add_argument('--batch_size', type=int, default=32, metavar='N',
help='batch size')
parser.add_argument('--bptt', type=int, default=128,
help='sequence length')
parser.add_argument('--dropout', type=float, default=0.2,
help='dropout applied to layers (0 = no dropout)')
parser.add_argument('--seed', type=int, default=5431916812,
help='random seed')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='report interval')
parser.add_argument('--checkpoint', type=str, default='None',
help='path to load the checkpoint')
# parser.add_argument('--save', type=str, default='mlm_bert.pt',
# help='path to save the final model')
parser.add_argument('--save-vocab', type=str, default='torchtext_bert_vocab.pt',
help='path to save the vocab')
parser.add_argument('--mask_frac', type=float, default=0.15,
help='the fraction of masked tokens')
parser.add_argument('--dataset', type=str, default='WikiText2',
help='dataset used for MLM task')
# parser.add_argument('--parallel', type=str, default='None',
# help='Use DataParallel to train model')
parser.add_argument('--world_size', type=int, default=8,
help='the world size
# Copyright 2016 <NAME>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from gpflow import settings
from functools import reduce
float_type = settings.dtypes.float_type
import numpy as np
class BlockDiagMat_many:
def __init__(self, mats):
self.mats = mats
@property
def shape(self):
return (sum([m.shape[0] for m in self.mats]), sum([m.shape[1] for m in self.mats]))
@property
def sqrt_dims(self):
return sum([m.sqrt_dims for m in mats])
def _get_rhs_slices(self, X):
ret = []
start = 0
for m in self.mats:
ret.append(tf.slice(X, begin=tf.stack([start, 0]), size=tf.stack([m.shape[1], -1])))
start = start + m.shape[1]
return ret
def _get_rhs_blocks(self, X):
"""
X is a solid matrix, same size as this one. Get the blocks of X that
correspond to the structure of this matrix
"""
ret = []
start1 = 0
start2 = 0
for m in self.mats:
ret.append(tf.slice(X, begin=tf.stack([start1, start2]), size=m.shape))
start1 = start1 + m.shape[0]
start2 = start2 + m.shape[1]
return ret
def get(self):
ret = self.mats[0].get()
for m in self.mats[1:]:
tr_shape = tf.stack([tf.shape(ret)[0], m.shape[1]])
bl_shape = tf.stack([m.shape[0], tf.shape(ret)[1]])
top = tf.concat([ret, tf.zeros(tr_shape, float_type)], axis=1)
bottom = tf.concat([tf.zeros(bl_shape, float_type), m.get()], axis=1)
ret = tf.concat([top, bottom], axis=0)
return ret
def logdet(self):
return reduce(tf.add, [m.logdet() for m in self.mats])
def matmul(self, X):
return tf.concat([m.matmul(Xi) for m, Xi in zip(self.mats, self._get_rhs_slices(X))], axis=0)
def solve(self, X):
return tf.concat([m.solve(Xi) for m, Xi in zip(self.mats, self._get_rhs_slices(X))], axis=0)
def inv(self):
return BlockDiagMat_many([mat.inv() for mat in self.mats])
def trace_KiX(self, X):
"""
X is a square matrix of the same size as this one.
if self is K, compute tr(K^{-1} X)
"""
return reduce(tf.add, [m.trace_KiX(Xi) for m, Xi in zip(self.mats, self._get_rhs_blocks(X))])
def get_diag(self):
return tf.concat([m.get_diag() for m in self.mats], axis=0)
def inv_diag(self):
return tf.concat([m.inv_diag() for m in self.mats], axis=0)
def matmul_sqrt(self, X):
return tf.concat([m.matmul_sqrt(Xi) for m, Xi in zip(self.mats, self._get_rhs_slices(X))], axis=0)
def matmul_sqrt_transpose(self, X):
ret = []
start = np.zeros((2,), np.int32)
for m in self.mats:
ret.append(m.matmul_sqrt_transpose(tf.slice(X, begin=start, size=tf.stack([m.sqrt_dims, -1]))))
start[0] += m.sqrt_dims
return tf.concat(ret, axis=0)
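BlockDiagMat_many exploits the fact that determinants, solves, and inverses of a block-diagonal matrix decompose blockwise. A small NumPy check of those identities (illustrative only, not part of this module):

```python
import numpy as np

rng = np.random.default_rng(0)
# two symmetric positive-definite blocks
A = rng.standard_normal((3, 3)); A = A @ A.T + 3 * np.eye(3)
B = rng.standard_normal((2, 2)); B = B @ B.T + 3 * np.eye(2)
K = np.zeros((5, 5))
K[:3, :3] = A
K[3:, 3:] = B

# logdet adds over blocks
assert np.allclose(np.linalg.slogdet(K)[1],
                   np.linalg.slogdet(A)[1] + np.linalg.slogdet(B)[1])

# solve decomposes over the matching row slices of the right-hand side,
# mirroring _get_rhs_slices above
X = rng.standard_normal((5, 4))
blockwise = np.vstack([np.linalg.solve(A, X[:3]), np.linalg.solve(B, X[3:])])
assert np.allclose(np.linalg.solve(K, X), blockwise)
```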
class BlockDiagMat:
def __init__(self, A, B):
self.A, self.B = A, B
@property
def shape(self):
mats = [self.A, self.B]
return (sum([m.shape[0] for m in mats]), sum([m.shape[1] for m in mats]))
@property
def sqrt_dims(self):
mats = [self.A, self.B]
return sum([m.sqrt_dims for m in mats])
def _get_rhs_slices(self, X):
# X1 = X[:self.A.shape[1], :]
X1 = tf.slice(X, begin=tf.zeros((2,), tf.int32), size=tf.stack([self.A.shape[1], -1]))
# X2 = X[self.A.shape[1]:, :]
X2 = tf.slice(X, begin=tf.stack([self.A.shape[1], 0]), size=-tf.ones((2,), tf.int32))
return X1, X2
def get(self):
tl_shape = tf.stack([self.A.shape[0], self.B.shape[1]])
br_shape = tf.stack([self.B.shape[0], self.A.shape[1]])
top = tf.concat([self.A.get(), tf.zeros(tl_shape, float_type)], axis=1)
bottom = tf.concat([tf.zeros(br_shape, float_type), self.B.get()], axis=1)
return tf.concat([top, bottom], axis=0)
def logdet(self):
return self.A.logdet() + self.B.logdet()
def matmul(self, X):
X1, X2 = self._get_rhs_slices(X)
top = self.A.matmul(X1)
bottom = self.B.matmul(X2)
return tf.concat([top, bottom], axis=0)
def solve(self, X):
X1, X2 = self._get_rhs_slices(X)
top = self.A.solve(X1)
bottom = self.B.solve(X2)
return tf.concat([top, bottom], axis=0)
def inv(self):
return BlockDiagMat(self.A.inv(), self.B.inv())
def trace_KiX(self, X):
"""
X is a square matrix of the same size as this one.
if self is K, compute tr(K^{-1} X)
"""
X1, X2 = tf.slice(X, [0, 0], self.A.shape), tf.slice(X, self.A.shape, [-1, -1])
top = self.A.trace_KiX(X1)
bottom = self.B.trace_KiX(X2)
return top + bottom
def get_diag(self):
return tf.concat([self.A.get_diag(), self.B.get_diag()], axis=0)
def inv_diag(self):
return tf.concat([self.A.inv_diag(), self.B.inv_diag()], axis=0)
def matmul_sqrt(self, X):
X1, X2 = self._get_rhs_slices(X)
top = self.A.matmul_sqrt(X1)
bottom = self.B.matmul_sqrt(X2)
return tf.concat([top, bottom], axis=0)
def matmul_sqrt_transpose(self, X):
X1 = tf.slice(X, begin=tf.zeros((2,), tf.int32), size=tf.stack([self.A.sqrt_dims, -1]))
X2 = tf.slice(X, begin=tf.stack([self.A.sqrt_dims, 0]), size=-tf.ones((2,), tf.int32))
top = self.A.matmul_sqrt_transpose(X1)
bottom = self.B.matmul_sqrt_transpose(X2)
return tf.concat([top, bottom], axis=0)
class LowRankMat:
def __init__(self, d, W):
"""
A matrix of the form
diag(d) + W W^T
"""
self.d = d
self.W = W
@property
def shape(self):
return (tf.size(self.d), tf.size(self.d))
@property
def sqrt_dims(self):
return tf.size(self.d) + tf.shape(self.W)[1]
def get(self):
return tf.diag(self.d) + tf.matmul(self.W, tf.transpose(self.W))
def logdet(self):
part1 = tf.reduce_sum(tf.log(self.d))
I = tf.eye(tf.shape(self.W)[1], dtype=float_type)
M = I + tf.matmul(tf.transpose(self.W) / self.d, self.W)
part2 = 2*tf.reduce_sum(tf.log(tf.diag_part(tf.cholesky(M))))
return part1 + part2
def matmul(self, B):
WTB = tf.matmul(tf.transpose(self.W), B)
WWTB = tf.matmul(self.W, WTB)
DB = tf.reshape(self.d, [-1, 1]) * B
return DB + WWTB
def get_diag(self):
return self.d + tf.reduce_sum(tf.square(self.W), 1)
def solve(self, B):
d_col = tf.expand_dims(self.d, 1)
DiB = B / d_col
DiW = self.W / d_col
WTDiB = tf.matmul(tf.transpose(DiW), B)
M = tf.eye(tf.shape(self.W)[1], dtype=float_type) + tf.matmul(tf.transpose(DiW), self.W)
L = tf.cholesky(M)
tmp1 = tf.matrix_triangular_solve(L, WTDiB, lower=True)
tmp2 = tf.matrix_triangular_solve(tf.transpose(L), tmp1, lower=False)
return DiB - tf.matmul(DiW, tmp2)
def inv(self):
di = tf.reciprocal(self.d)
d_col = tf.expand_dims(self.d, 1)
DiW = self.W / d_col
M = tf.eye(tf.shape(self.W)[1], dtype=float_type) + tf.matmul(tf.transpose(DiW), self.W)
L = tf.cholesky(M)
v = tf.transpose(tf.matrix_triangular_solve(L, tf.transpose(DiW), lower=True))
return LowRankMatNeg(di, v)
def trace_KiX(self, X):
"""
X is a square matrix of the same size as this one.
if self is K, compute tr(K^{-1} X)
"""
d_col = tf.expand_dims(self.d, 1)
R = self.W / d_col
RTX = tf.matmul(tf.transpose(R), X)
RTXR = tf.matmul(RTX, R)
M = tf.eye(tf.shape(self.W)[1], dtype=float_type) + tf.matmul(tf.transpose(R), self.W)
Mi = tf.matrix_inverse(M)
return tf.reduce_sum(tf.diag_part(X) * 1./self.d) - tf.reduce_sum(RTXR * Mi)
def inv_diag(self):
d_col = tf.expand_dims(self.d, 1)
WTDi = tf.transpose(self.W / d_col)
M = tf.eye(tf.shape(self.W)[1], dtype=float_type) + tf.matmul(WTDi, self.W)
L = tf.cholesky(M)
tmp1 = tf.matrix_triangular_solve(L, WTDi, lower=True)
return 1./self.d - tf.reduce_sum(tf.square(tmp1), 0)
def matmul_sqrt(self, B):
"""
There's a non-square sqrt of this matrix given by
[ D^{1/2}]
[ W^T ]
This method right-multiplies the sqrt by the matrix B
"""
DB = tf.expand_dims(tf.sqrt(self.d), 1) * B
VTB = tf.matmul(tf.transpose(self.W), B)
return tf.concat([DB, VTB], axis=0)
def matmul_sqrt_transpose(self, B):
"""
There's a non-square sqrt of this matrix given by
[ D^{1/2}]
[ W^T ]
This method right-multiplies the transposed-sqrt by the matrix B
"""
B1 = tf.slice(B, tf.zeros((2,), tf.int32), tf.stack([tf.size(self.d), -1]))
B2 = tf.slice(B, tf.stack([tf.size(self.d), 0]), -tf.ones((2,), tf.int32))
return tf.expand_dims(tf.sqrt(self.d), 1) * B1 + tf.matmul(self.W, B2)
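LowRankMat relies on three standard low-rank identities: the matrix determinant lemma for `logdet`, the Woodbury identity for `solve`, and the non-square square root S = [D^{1/2}; W^T] with S^T S = diag(d) + W W^T. A NumPy sanity check of all three (illustrative only, not part of the class):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
d = rng.uniform(1.0, 2.0, n)
W = rng.standard_normal((n, k))
K = np.diag(d) + W @ W.T

# matrix determinant lemma: log|D + W W^T| = log|D| + log|I + W^T D^-1 W|
M = np.eye(k) + (W.T / d) @ W
logdet = np.sum(np.log(d)) + np.linalg.slogdet(M)[1]
assert np.allclose(logdet, np.linalg.slogdet(K)[1])

# Woodbury: (D + W W^T)^-1 B = D^-1 B - D^-1 W M^-1 W^T D^-1 B
B = rng.standard_normal((n, 3))
DiB = B / d[:, None]
DiW = W / d[:, None]
woodbury = DiB - DiW @ np.linalg.solve(M, W.T @ DiB)
assert np.allclose(woodbury, np.linalg.solve(K, B))

# non-square sqrt: S = [D^{1/2}; W^T] satisfies S^T S = K
S = np.vstack([np.diag(np.sqrt(d)), W.T])
assert np.allclose(S.T @ S, K)
```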
class LowRankMatNeg:
def __init__(self, d, W):
"""
A matrix of the form
diag(d) - W W^T
(note the minus sign)
"""
self.d = d
self.W = W
@property
def shape(self):
return (tf.size(self.d), tf.size(self.d))
def get(self):
return tf.diag(self.d) - tf.matmul(self.W, tf.transpose(self.W))
class Rank1Mat:
def __init__(self, d, v):
"""
A matrix of the form
diag(d) + v v^T
"""
self.d = d
self.v = v
@property
def shape(self):
return (tf.size(self.d), tf.size(self.d))
@property
def sqrt_dims(self):
return tf.size(self.d) + 1
def get(self):
V = tf.expand_dims(self.v, 1)
return tf.diag(self.d) + tf.matmul(V, tf.transpose(V))
def logdet(self):
return tf.reduce_sum(tf.log(self.d)) +\
tf.log(1. + tf.reduce_sum(tf.square(self.v) / self.d))
def matmul(self, B):
V = tf.expand_dims(self.v, 1)
return tf.expand_dims(self.d, 1) * B +\
tf.matmul(V, tf.matmul(tf.transpose(V), B))
def solve(self, B):
div = self.v / self.d
c = 1. + tf.reduce_sum(div * self.v)
div = tf.expand_dims(div, 1)
return B / tf.expand_dims(self.d, 1) -\
tf.matmul(div/c, tf.matmul(tf.transpose(div), B))
def inv(self):
di = tf.reciprocal(self.d)
Div = self.v * di
M = 1. + tf.reduce_sum(Div * self.v)
v_new = Div / tf.sqrt(M)
return Rank1MatNeg(di, v_new)
def trace_KiX(self, X):
"""
X is a square matrix of the same size as this one.
if self is K, compute tr(K^{-1} X)
"""
R = tf.expand_dims(self.v / self.d, 1)
RTX = tf.matmul(tf.transpose(R), X)
RTXR = tf.matmul(RTX, R)
M = 1 + tf.reduce_sum(tf.square(self.v) / self.d)
return tf.reduce_sum(tf.diag_part(X) / self.d) - tf.squeeze(RTXR) / M
def get_diag(self):
return self.d + tf.square(self.v)
def inv_diag(self):
div = self.v / self.d
c = 1. + tf.reduce_sum(div * self.v)
return 1./self.d - tf.square(div) / c
def matmul_sqrt(self, B):
"""
There's a non-square sqrt of this matrix given by
[ D^{1/2}]
[ V^T ]
This method right-multiplies the sqrt by the matrix B
"""
DB = tf.expand_dims(tf.sqrt(self.d), 1) * B
VTB = tf.matmul(tf.expand_dims(self.v, 0), B)
return tf.concat([DB, VTB], axis=0)
def matmul_sqrt_transpose(self, B):
"""
There's a non-square sqrt of this matrix given by
[ D^{1/2}]
[ v^T ]
This method right-multiplies the transposed-sqrt by
def Tight_Legend_4ShrinkAxesH(fig,ncolumn=4,fontsize=12,numpoints=1):
"""
Purpose: To put a legend on the right part of the figure. To be used when one figure has 4 subplots and the subplots are shrunk in height.
Note: The advantage of this method is that you can choose which lines are labeled (it extracts labels automatically); see the example below.
Example:
>>> fig,ax=g.Create_4ShrinkAxes('h',0.7)
>>> for i,axt in enumerate(ax):
axt.plot(np.arange(10),color=g.pcolor[i],label='lab'+str(i))
axt.plot(np.arange(10)+i+1,color=g.pcolor[i+4])
>>> g.Tight_Legend_4ShrinkAxesH(fig)
"""
ax=fig.get_axes()
d=[]
pline=[]
plab=[]
for axt in ax:
box=axt.get_position()
d.append([box.x0,box.y1])
lines=axt.get_lines() #from here get the lines with label
for line in lines:
if '_line' in line.get_label():
pass
else:
pline.append(line)
line_col_dic=dict((line.get_color(),line) for line in pline)
plab=[line.get_label() for line in line_col_dic.values()]
if len(plab)==0:
raise ValueError('Cannot find any line with label, please check if any label has been set.')
else:
fig.legend(tuple(line_col_dic.values()),tuple(plab),loc=(min(d)[0],max(d)[1]+0.02),borderpad=0.3,labelspacing=0.2,handletextpad=0.2,handlelength=2,columnspacing=0.4,ncol=ncolumn,prop={'size':fontsize})
def Set_Fontsize_Figure(fig,fontsize):
def match(artist):
return artist.__module__ == "matplotlib.text"
for textobj in fig.findobj(match=match):
textobj.set_fontsize(fontsize)
def Plot_1Horizontal_Zero_Line():
print("Deprecation warning: Plot_1Horizontal_Zero_Line")
axc=plt.gca()
axc.plot(np.arange(axc.get_xlim()[1]),np.zeros(axc.get_xlim()[1]),'r--')
def Plot_1Horizontal_Line(pos=0,lp='r--'):
"""
pos is the position where a horizontal line is needed; "lp" receives the linestyle string, as in the plt.plot() function.
"""
print("Deprecation warning: Plot_1Horizontal_Line")
axc=plt.gca()
axc.plot(list(axc.get_xlim()),[pos,pos],lp)
def Plot_1Vertical_Line(pos=0,lp='k--',**kwargs):
"""
pos is the position where a vertical line is needed
"""
print("Deprecation warning: Plot_1Vertical_Line")
axc=plt.gca()
axc.set_autoscale_on(False)
axc.plot([pos,pos],list(axc.get_ylim()),lp,**kwargs)
axc.set_autoscale_on(True)
def Plot_Vertical_Lines(ax,pos=[0],lp='k--',**kwargs):
"""
pos is the position where a vertical line is needed
"""
lines=[]
ax.set_autoscale_on(False)
for pos1 in pos:
lines.append(ax.plot([pos1,pos1],list(ax.get_ylim()),lp,**kwargs))
ax.set_autoscale_on(True)
return lines
def Plot_Horizontal_Lines(ax,pos=[0],lp='k--',**kwargs):
"""
pos is the position where a horizontal line is needed
"""
lines=[]
ax.set_autoscale_on(False)
for pos1 in pos:
lines.append(ax.plot(list(ax.get_xlim()),[pos1,pos1],lp,**kwargs))
ax.set_autoscale_on(True)
return lines
def plot_mean_std(data,move_ave=None):
"""
Purpose: plot for a 2D array the average with std
Definition: plt.fill_between(np.arange(len(dmean)),dmean-dstd,dmean+dstd,color=c2['lb'])
Note:
1. the rows are variation of data and column number as length of data
2. move_ave applies only to non-masked arrays. For masked arrays a moving average is usually not used, so move_ave can be set to None to skip it.
3. moving average is done with mathex.move_ave
"""
if data.ndim !=2:
raise ValueError('please provide a 2D array!')
if move_ave is not None:
data=mathex.move_ave2d(data,move_ave)
dmean=np.ma.mean(data,axis=0)
dstd=np.ma.std(data,axis=0)
plt.plot(dmean,color=c2['db'])
plt.fill_between(np.arange(len(dmean)),dmean-dstd,dmean+dstd,color=c2['lb'])
def plot_mean_std_ax(ax,data,move_ave=None,lab='label',colmean=c2['db'],colstd=c2['lb'],alph=0.4):
"""
Purpose: plot for a 2D array the average with std
Definition:
Note:
1. the rows are variation of data and column number as length of data
2. move_ave applies only to non-masked arrays. For masked arrays a moving average is usually not used, so move_ave can be set to None to skip it.
3. moving average is done with mathex.move_ave
"""
if data.ndim !=2:
raise ValueError('please provide a 2D array!')
if move_ave is not None:
data=mathex.move_ave2d(data,move_ave)
dmean=np.ma.average(data,axis=0)
dstd=np.ma.std(data,axis=0)
line=ax.plot(dmean,color=colmean,label=lab)
poly=ax.fill_between(np.arange(len(dmean)),dmean-dstd,dmean+dstd,color=colstd,alpha=alph)
return line,poly
def pfread(filename):
fob=open(filename,'rb')
data=pk.load(fob)
fob.close()
return data
def pfdump(data,filename):
"""
Purpose: use python pickle to dump data object to a file
Definition: pfdump(data,filename)
"""
fob=open(filename,'wb')
pk.dump(data,fob)
fob.close()
def ipsearch(string):
"""
[str(l) for l in _ih if l.startswith(string)]
"""
return [str(l) for l in _ih if l.startswith(string)]
class cm(object):
#red2greendict={'red': ((0.0, 0.0, 1.0),
# (0.5, 0.3, 0.0),
# (1.0, 1.0, 1.0)),
# 'green': ((0.0, 0.0, 0.0),
# (0.5, 0.0, 0.3),
# (1.0, 1.0, 1.0)),
# 'blue': ((0.0, 0.0, 0.0),
# (0.5, 0.0, 0.0),
# (1.0, 1.0, 0.0))}
_red2green_rgblist=[(242,6,6),(246,248,159),(207,249,144),(66,128,25)]
_red2greendict=rgb2cmdic(_red2green_rgblist,[0,0.5,0.5,1])
red2green = mat.colors.LinearSegmentedColormap('red2green',_red2greendict,256)
#
red2bluedict= {'blue':[(0.0, 0.0, 0.0),
(0.5, 0.0, 0.0),
(0.5, 0.5, 0.5),
(1.0, 1.0, 1.0)],
'green': [(0.0, 0.0, 0.0),
(0.5, 1.0, 1.0),
(0.5, 0.0, 0.0),
(1.0, 1.0, 1.0)],
'red': [(0.0, 1.0, 1.0),
(0.5, 1.0, 1.0),
(0.5, 0.0, 0.0),
(1.0, 0.0, 0.0)]}
red2blue = mat.colors.LinearSegmentedColormap('red2blue',red2bluedict,256)
red2blue_rgblist=[(255,51,0),(255,255,153),(175,238,238),(0,0,250)]
red2bluedict2=rgb2cmdic(red2blue_rgblist,[0,0.5,0.5,1])
red2blue2 = mat.colors.LinearSegmentedColormap('red2blue2',red2bluedict2,256)
red2bluedict2_r=rgb2cmdic(red2blue_rgblist[::-1],[0,0.5,0.5,1])
red2blue2_r = mat.colors.LinearSegmentedColormap('red2blue2_r',red2bluedict2_r,256)
###
cdict = {'red': [(0.0, 0.0, 0.0),
(0.5, 1.0, 1.0),
(1.0, 1.0, 1.0)],
'green': [(0.0, 0.0, 0.0),
(0.25, 0.0, 0.0),
(0.75, 1.0, 1.0),
(1.0, 1.0, 1.0)],
'blue': [(0.0, 0.0, 0.0),
(0.5, 0.0, 0.0),
(1.0, 1.0, 1.0)]}
cm1 = mat.colors.LinearSegmentedColormap('cm1',cdict,256)
###
clist=[(112,0,0),(148,0,148),(255,0,102),(245,245,0),(107,214,0),(0,43,87),(0,51,204)]
PurpleGreenBlueDic=rgb2cmdic(clist,[0,0.1,0.2,0.49,0.5,0.9,1])
PurpleGreenBlue = mat.colors.LinearSegmentedColormap('PurpleGreenBlue',PurpleGreenBlueDic,256)
PurpleGreenBlueDic_rev=rgb2cmdic(clist[::-1],[0,0.1,0.2,0.49,0.5,0.9,1])
PurpleGreenBlue_rev = mat.colors.LinearSegmentedColormap('PurpleGreenBlue_rev',PurpleGreenBlueDic_rev,256)
### this bar is from Dai et al. 2011
clist=[(0.062745098039215685, 0.3843137254901961, 0.68235294117647061),
(0.0039215686274509803, 0.58431372549019611, 0.83529411764705885),
(0.0, 0.70980392156862748, 0.88627450980392153),
(0.0, 0.70588235294117652, 0.80000000000000004),
(0.0, 0.70588235294117652, 0.50980392156862742),
(0.38039215686274508, 0.74509803921568629, 0.29411764705882354),
(0.94509803921568625, 0.92941176470588238, 0.40392156862745099),
(1.0, 0.77647058823529413, 0.26666666666666666),
(0.97647058823529409, 0.59999999999999998, 0.30196078431372547),
(0.94901960784313721, 0.20784313725490197, 0.43529411764705883),
(0.87058823529411766, 0.50980392156862742, 0.70980392156862748),
(0.55686274509803924, 0.38823529411764707, 0.6705882352941176)]
precip=mat.colors.ListedColormap(clist[::-1],'precip')
precip_rev=mat.colors.ListedColormap(clist,'precip_rev')
##temp; the same sequence of colors as precip_rev; the levels can be manipulated with templevel.
##the twelve colors; yellow: 7; green:6.
templevel=[0.0, 0.09, 0.18, 0.27, 0.36, 0.53, 0.58, 0.64, 0.73, 0.82, 0.91, 1.0]
tempcolor=clist[:]
tempcolor=np.array(tempcolor)*255.
tempc=[tuple(i) for i in tempcolor]
tempcmdic=rgb2cmdic(tempc,templevel)
tempcm=mat.colors.LinearSegmentedColormap('tempcm',tempcmdic,256)
class mapb(object):
caal=(40,75,-170,-50)
def plot_OLS_reg(ax,x,y,c='k',ls='--',PosEquation='uc',
precision_slope=3, precision_inter=3,
textcolor='r',txtkw={},**kwargs):
"""
Purpose: plot OLS regression line for y~x on axes ax.
Note:
1. automatically masks np.nan values
Return:
return line[0],[slope, intercept, r_value, p_value, std_err]
Parameters:
----------
kwargs: kwargs for axes.plot
"""
y=np.ma.masked_invalid(y)
x=np.ma.masked_invalid(x)
xnew,ynew=pb.shared_unmask_data(x,y)
[slope, intercept, r_value, p_value, std_err] = sp.stats.mstats.linregress(xnew,ynew)
xnew_plot=pb.linspace_array(xnew)
line=ax.plot(xnew_plot,xnew_plot*slope+intercept,color=c,linestyle=ls,**kwargs)
if PosEquation == False:
pass
else:
equtext = 'y = {0:.{2}f}*x + {1:.{3}f}'.format(slope,
intercept, precision_slope, precision_inter)+\
'\n'+\
'R2={0:.2f}, p={1:.2f}'.format(r_value**2,float(p_value))
equtext = equtext.replace('+ -','- ') # handle negative intercept
Set_AxText(ax,equtext,pos=PosEquation,color=textcolor,**txtkw)
return line[0],[slope, intercept, r_value, p_value, std_err, len(xnew)]
def plot_RTO_OLSreg2var(ax,x,y,c='k',ls='--'):
"""
Purpose: plot OLS regression line for y~x through origin on axes ax. RTO is short for "Regression Through Origin"
Note:
1. automatically masks np.nan values
2. only 2 variables are tested.
Return:
return line[0],[slope, r_value, p_value, std_err]
"""
#extract data that are not NaN or masked
xnew=pb.MaskArrayByNan(x)
ynew=pb.MaskArrayByNan(y)
x_reg,y_reg=pb.shared_unmask_data(xnew,ynew)
#use R to do regression
rpy.r.assign('x_reg',x_reg)
rpy.r.assign('y_reg',y_reg)
lmRTO=rpy.r('lmRTO=lm(y_reg ~ x_reg-1)') #Regression through origin
summary_lmRTO=rpy.r('summary_lmRTO=summary(lmRTO)')
#note the 1st, 2nd, 3rd, 4th elements for summary_lmRTO['coefficients'] is Estimate; Std. Error; t value; Pr(>|t|)
slope=summary_lmRTO['coefficients'][0][0]
p_value=summary_lmRTO['coefficients'][0][3]
std_err=summary_lmRTO['coefficients'][0][1]
r_value=np.sqrt(summary_lmRTO['r.squared'])
#create a new array for plotting the regression line
xnew_plot=pb.linspace_array(xnew)
line=ax.plot(xnew_plot,xnew_plot*slope,color=c,linestyle=ls)
return line[0],[slope, r_value, p_value, std_err]
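plot_RTO_OLSreg2var delegates the regression-through-origin fit to R via rpy. For a single predictor the RTO slope also has the closed form slope = Σxy / Σx², which a plain NumPy sketch (hypothetical helper, not used by this module) reproduces:

```python
import numpy as np

def rto_slope(x, y):
    """OLS slope of y ~ x with the intercept forced to zero: sum(x*y)/sum(x^2)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    return np.sum(x * y) / np.sum(x * x)

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.isclose(rto_slope(x, 2.0 * x), 2.0)  # exact line through the origin
```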
def legpro_points(colorlist,labellist):
"""
Purpose: Create a group of points (Line2D) used as proxies for a legend; returns a tuple of (linelist, labellist).
Use:
proleg=g.legpro_points(['r','g'],['red','green'])
ax.legend(proleg[0],proleg[1],**kwarg)
"""
point_list=[mat.lines.Line2D([],[],marker='o',ms=5,mfc=c,mew=0,ls='none',color=c) for c in colorlist]
return (point_list,labellist)
def legpro_lines(colorlist,labellist,ls='-',**kwargs):
"""
Purpose: Create a group of lines (Line2D) used as proxies for a legend; returns a tuple of (linelist, labellist).
Use:
proleg=g.legpro_lines(['r','g'],['red','green'])
ax.legend(proleg[0],proleg[1],**kwargs)
"""
point_list=[mat.lines.Line2D([],[],color=c,ls=ls,**kwargs) for c in colorlist]
return (point_list,labellist)
def colorbar_set_label_parallel(cbar,label_list,hpos=1.2,vpos=-0.3,
ha='left',va='center',
force_position=None,
**kwargs):
"""
Set colorbar labels beside the colorbar.
Parameters:
-----------
cbar: the colorbar used to set.
hpos: the left position of labels, used in vertical colorbar.
vpos: the below position of labels, used in horizontal colorbar.
force_position:
1. In case of a tuple, should be the fraction of the first small one
and the number of remaining equal-length sections. Eg., (0.3,12)
2. In case of a np.ndarray or list with values in the unit of axes
fraction, will be directly used to position the texts.
Example:
--------
/homel/ychao/python/script/set_label_parallel_colorbar.py
"""
def get_yloc(first,num):
"""
first is the fraction of the first small downward arrow; num is the
number of remaining equal-length sections on the colorbar.
"""
first_pos = first/2.
second_pos = np.arange(first + 0.5,num,1)
all_pos = np.array([first_pos] + list(second_pos))
return all_pos/(first+num)
cbar.set_ticklabels([])
cbar.ax.tick_params(right='off',left='off')
#get the text position.
yloc=(cbar.values-cbar.boundaries[0])/(cbar.boundaries[-1]-cbar.boundaries[0])
if force_position is not None:
if isinstance(force_position,(tuple)) and len(force_position) == 2:
yloc = get_yloc(*force_position)
elif isinstance(force_position,(np.ndarray,list)):
yloc = force_position
else:
raise ValueError("Cannot understand force_position")
if len(label_list) != len(yloc):
raise ValueError("the length of the cbar segments and the label list are not equal!")
else:
if cbar.orientation == 'vertical':
for label,ypos in zip(label_list,yloc):
cbar.ax.text(hpos,ypos,label,ha=ha,va=va,**kwargs)
elif cbar.orientation == 'horizontal':
for label,ypos in zip(label_list,yloc):
cbar.ax.text(ypos,vpos,label,ha=ha,va=va,**kwargs)
def setp(*artist,**kwargs):
"""
Purpose: set artist properties by kwargs pairs in an easy and flexible way.
Note:
1. artists will be flattened using pb.iteflat, so you can mix matplotlib artist types as long as they share the same keyword properties.
2. when artist is a tuple or list, kwargs[key] can also be a tuple or list; when kwargs[key] is a single value, it is broadcast
to the same length as artist automatically.
"""
if len(artist)==1 and isinstance(artist[0],(tuple,list)):
artist_list=pb.iteflat(artist[0])
else:
artist_list=pb.iteflat(artist)
for key in kwargs:
value=kwargs[key]
if not isinstance(value,Iterable) or isinstance(value,str):
value_list=[value]*len(artist_list)
else:
if len(value)==1:
value_list=value*len(artist_list)
else:
value_list=pb.iteflat(value)
if len(artist_list)!=len(value_list):
raise ValueError('artist list length {0} is not equal to value list length {1}'.format(len(artist_list),len(value_list)))
else:
for art,val in zip(artist_list,value_list):
plt.setp(art,key,val)
#print key,value_list,'has been set'
return artist_list,[key]*len(artist_list),value_list
class ProxyLegend(object):
"""
Tags are used as labels for proxy legend.
"""
def | |
about accidentally picking something we'll regret
while new in self.constraints or new in self.variables:
new = new_label()
mapping[u] = new
self.objective.add_variable(vartype, new, lower_bound=lb, upper_bound=ub)
# we don't add the constraint yet because we don't want
# to modify self.constraints
else:
new = mapping[u]
qm.add_variable(vartype, new, lower_bound=lb, upper_bound=ub)
qm.add_quadratic(u, new, bias)
qm.remove_interaction(u, u)
def substitute_self_loops(self) -> Dict[Variable, Variable]:
"""Replace any integer self-loops in the objective or constraints.
Self-loop :math:`i^2` is removed by introducing a new variable
:math:`j` with interaction :math:`i*j` and adding constraint
:math:`j == i`.
Acts on the objective and constraints in-place.
Returns:
Mapping from the integer variable labels to their introduced
counterparts. The constraint enforcing :math:`j == i` uses
the same label.
Examples:
>>> from dimod import Integer, ConstrainedQuadraticModel
>>> i = Integer('i')
>>> cqm = ConstrainedQuadraticModel()
>>> cqm.add_constraint(i*i <=3, label='i squared')
'i squared'
>>> cqm.substitute_self_loops() # doctest: +IGNORE_RESULT
>>> cqm.constraints # doctest: +IGNORE_RESULT
{'i squared': QuadraticModel({'i': 0.0, 'cf651f3d-bdf8-4735-9139-eee0a32e217f': 0.0}, {('cf651f3d-bdf8-4735-9139-eee0a32e217f', 'i'): 1.0}, 0.0, {'i': 'INTEGER', 'cf651f3d-bdf8-4735-9139-eee0a32e217f': 'INTEGER'}, dtype='float64') <= 3,
'cf651f3d-bdf8-4735-9139-eee0a32e217f': QuadraticModel({'i': 1.0, 'cf651f3d-bdf8-4735-9139-eee0a32e217f': -1.0}, {}, 0.0, {'i': 'INTEGER', 'cf651f3d-bdf8-4735-9139-eee0a32e217f': 'INTEGER'}, dtype='float64') == 0}
"""
mapping: Dict[Variable, Variable] = dict()
self._substitute_self_loops_from_model(self.objective, mapping)
for comparison in self.constraints.values():
self._substitute_self_loops_from_model(comparison.lhs, mapping)
# finally add the constraints for the variables
for v, new in mapping.items():
self.add_constraint([(v, 1), (new, -1)], rhs=0, sense='==', label=new)
return mapping
def to_file(self, *, spool_size: int = int(1e9)) -> tempfile.SpooledTemporaryFile:
"""Serialize to a file-like object.
Args:
spool_size: Defines the `max_size` passed to the constructor of
:class:`tempfile.SpooledTemporaryFile`. Determines whether
the returned file-like's contents will be kept on disk or in
memory.
Format Specification (Version 1.1):
This format is inspired by the `NPY format`_
The first 8 bytes are a magic string: exactly "DIMODCQM".
The next 1 byte is an unsigned byte: the major version of the file
format.
The next 1 byte is an unsigned byte: the minor version of the file
format.
The next 4 bytes form a little-endian unsigned int, the length of
the header data HEADER_LEN.
The next HEADER_LEN bytes form the header data. This is a
json-serialized dictionary. The dictionary is exactly:
.. code-block:: python
dict(num_variables=len(cqm.variables),
num_constraints=len(cqm.constraints),
num_biases=cqm.num_biases(),
num_quadratic_variables=cqm.num_quadratic_variables(),
)
it is terminated by a newline character and padded with spaces to
make the entire length of the header divisible by 64.
The constraint quadratic model data comes after the header. It is
encoded as a zip file. The zip file will contain one file
named `objective`, containing the objective as encoded as a file
view. It will also contain a directory called `constraints`. The
`constraints` directory will contain one subdirectory for each
constraint, each containing `lhs`, `rhs` and `sense` encoding
the `lhs` as a fileview, the `rhs` as a float and the sense
as a string. Each directory will also contain a `discrete` file,
encoding whether the constraint represents a discrete variable.
Format Specification (Version 1.0):
This format is the same as Version 1.1, except that the data dict
does not have `num_quadratic_variables`.
.. _NPY format: https://numpy.org/doc/stable/reference/generated/numpy.lib.format.html
"""
file = SpooledTemporaryFile(max_size=spool_size)
data = dict(num_variables=len(self.variables),
num_constraints=len(self.constraints),
num_biases=self.num_biases(),
num_quadratic_variables=self.num_quadratic_variables(),
)
write_header(file, CQM_MAGIC_PREFIX, data, version=(1, 1))
# write the values
with zipfile.ZipFile(file, mode='a') as zf:
try:
zf.writestr(
'objective', self.objective.to_file(spool_size=int(1e12))._file.getbuffer())
except AttributeError:
# no objective to write
pass
for label, constraint in self.constraints.items():
# put everything in a constraints/label/ directory
lstr = json.dumps(serialize_variable(label))
lhs = constraint.lhs.to_file(spool_size=int(1e12))._file.getbuffer()
zf.writestr(f'constraints/{lstr}/lhs', lhs)
rhs = np.float64(constraint.rhs).tobytes()
zf.writestr(f'constraints/{lstr}/rhs', rhs)
sense = bytes(constraint.sense.value, 'ascii')
zf.writestr(f'constraints/{lstr}/sense', sense)
discrete = bytes((label in self.discrete,))
zf.writestr(f'constraints/{lstr}/discrete', discrete)
file.seek(0)
return file
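The byte layout described in the format specification can be read back without dimod; a minimal sketch (the helper name `read_cqm_header` is illustrative, not part of the dimod API):

```python
import json
import struct


def read_cqm_header(fp):
    """Parse the CQM file header laid out in the format specification.

    Layout: an 8-byte magic string "DIMODCQM", two unsigned version
    bytes (major, minor), a little-endian uint32 HEADER_LEN, then
    HEADER_LEN bytes of JSON-serialized header data.
    """
    magic = fp.read(8)
    if magic != b"DIMODCQM":
        raise ValueError("not a CQM file")
    major, minor = struct.unpack("<BB", fp.read(2))
    header_len, = struct.unpack("<I", fp.read(4))
    # json.loads tolerates the trailing newline/space padding
    data = json.loads(fp.read(header_len).decode("ascii"))
    return (major, minor), data
```

Given a file produced by `to_file`, this returns the `(major, minor)` version tuple and the header dict (`num_variables`, `num_constraints`, and so on).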
def upper_bound(self, v: Variable) -> Bias:
"""Return the upper bound on the specified variable."""
return self.objective.upper_bound(v)
def vartype(self, v: Variable) -> Vartype:
"""The vartype of the given variable."""
return self.objective.vartype(v)
CQM = ConstrainedQuadraticModel
class _Vartypes(abc.Sequence):
"""Support deprecated attribute on ``CQM.variables``"""
def __init__(self, cqm: ConstrainedQuadraticModel):
self.cqm: ConstrainedQuadraticModel = cqm
def __getitem__(self, index: int) -> Vartype:
warnings.warn(
"cqm.variables.vartypes[i] is deprecated and will be removed in dimod 0.11.0, "
"use cqm.vartype(cqm.variables[i]) instead.", DeprecationWarning, stacklevel=3)
return self.cqm.vartype(self.cqm.variables[index])
def __len__(self) -> int:
warnings.warn(
"cqm.variables.vartypes is deprecated and will be removed in dimod 0.11.0",
DeprecationWarning, stacklevel=3)
return len(self.cqm.variables)
class _LowerBounds(abc.Mapping):
"""Support deprecated attribute on ``CQM.variables``"""
def __init__(self, cqm: ConstrainedQuadraticModel):
self.cqm: ConstrainedQuadraticModel = cqm
def __getitem__(self, key: Variable) -> float:
warnings.warn(
"cqm.variables.lower_bounds[v] is deprecated and will be removed in dimod 0.11.0, "
"use cqm.lower_bound(v) instead.", DeprecationWarning, stacklevel=3)
return self.cqm.lower_bound(key)
def __iter__(self) -> Iterator[Variable]:
warnings.warn(
"cqm.variables.lower_bounds is deprecated and will be removed in dimod 0.11.0",
DeprecationWarning, stacklevel=3)
yield from self.cqm.variables
def __len__(self) -> int:
warnings.warn(
"cqm.variables.lower_bounds is deprecated and will be removed in dimod 0.11.0",
DeprecationWarning, stacklevel=3)
return len(self.cqm.variables)
class _UpperBounds(abc.Mapping):
"""Support deprecated attribute on ``CQM.variables``"""
def __init__(self, cqm: ConstrainedQuadraticModel):
self.cqm: ConstrainedQuadraticModel = cqm
def __getitem__(self, key: Variable) -> float:
warnings.warn(
"cqm.variables.upper_bounds[v] is deprecated and will be removed in dimod 0.11.0, "
"use cqm.upper_bound(v) instead.", DeprecationWarning, stacklevel=3)
return self.cqm.upper_bound(key)
def __iter__(self) -> Iterator[Variable]:
warnings.warn(
"cqm.variables.upper_bounds is deprecated and will be removed in dimod 0.11.0",
DeprecationWarning, stacklevel=3)
yield from self.cqm.variables
def __len__(self) -> int:
warnings.warn(
"cqm.variables.upper_bounds is deprecated and will be removed in dimod 0.11.0",
DeprecationWarning, stacklevel=3)
return len(self.cqm.variables)
def _qm_to_bqm(qm: QuadraticModel, integers: MutableMapping[Variable, BinaryQuadraticModel],
) -> BinaryQuadraticModel:
# dev note: probably we'll want to make this function or something similar
# public facing at some point, but right now the interface is pretty weird
# and it only returns BINARY bqms
if any(qm.vartype(v) is Vartype.SPIN for v in qm.variables):
# the target bqm is BINARY, so convert any SPIN variables to BINARY first
qm = qm.spin_to_binary(inplace=False)
bqm = BinaryQuadraticModel(Vartype.BINARY)
for v in qm.variables:
if v in integers:
bqm += qm.get_linear(v) * integers[v]
else:
bqm.add_linear(v, qm.get_linear(v))
for u, v, bias in qm.iter_quadratic():
if u in integers:
if v in integers:
bqm += integers[u] * integers[v] * bias
else:
bqm += Binary(v) * integers[u] * bias
elif v in integers:
bqm += Binary(u) * integers[v] * bias
else:
bqm.add_quadratic(u, v, bias)
bqm.offset += qm.offset
return bqm
class CQMToBQMInverter:
"""Invert a sample from a binary quadratic model constructed by :func:`cqm_to_bqm`."""
__slots__ = ('_binary', '_integers')
def __init__(self,
binary: Mapping[Variable, Vartype],
integers: Mapping[Variable, BinaryQuadraticModel]):
self._binary = binary
self._integers = integers
def __call__(self, sample: Mapping[Variable, int]) -> Mapping[Variable, int]:
new = {}
for v, vartype in self._binary.items():
if vartype is Vartype.BINARY:
new[v] = sample[v]
elif vartype is Vartype.SPIN:
new[v] = 2*sample[v] - 1
else:
raise RuntimeError("unexpected vartype")
for v, bqm in self._integers.items():
# binary variables encoding integer v are labeled (v, coefficient)
new[v] = 0
for u in bqm.variables:
new[v] += sample[u] * u[1]
return new
@classmethod
def from_dict(cls, doc: Dict[str, Dict[Variable, Any]]) -> 'CQMToBQMInverter':
"""Construct an inverter from a serialized representation."""
integers = {}
for v, variables in doc['integers'].items():
v = deserialize_variable(v)
bqm = BinaryQuadraticModel(Vartype.BINARY)
bqm.add_linear_from((deserialize_variable(u), u[1]) for u in variables)
integers[v] = bqm
return cls(
dict((deserialize_variable(v), as_vartype(vartype))
for v, vartype in doc['binary'].items()),
integers,
)
def to_dict(self) -> Dict[str, Dict[Variable, Any]]:
"""Return a json-serializable encoding of the inverter."""
# todo: in 3.8 we can use TypedDict for the typing
return dict(
binary=dict((serialize_variable(v), vartype.name)
for v, vartype in self._binary.items()),
integers=dict((serialize_variable(v), bqm.variables.to_serializable())
for v, bqm in self._integers.items()),
)
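The recovery step in the inverter's `__call__` relies on the labeling convention that each binary variable encoding an integer `v` is a 2-tuple `(v, coefficient)`, so `u[1]` is that bit's weight. A dimod-free sketch of the same arithmetic:

```python
def recover_integer(sample, encoding_variables):
    """Recover an integer's value from its binary encoding.

    Each binary variable is labeled (name, coefficient); the integer
    value is the coefficient-weighted sum of the sampled bits.
    """
    return sum(sample[u] * u[1] for u in encoding_variables)
```

For example, with bits weighted 1, 2 and 4, a sample with the first and last bits on recovers the value 5.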
# Developer note: This function is *super* ad hoc. In the future, we may want
# a BQM.from_cqm method or similar, but for now I think it makes sense to
# expose that functionality as a function for easier later deprecation.
def cqm_to_bqm(cqm: ConstrainedQuadraticModel, lagrange_multiplier: Optional[Bias] = None,
) -> Tuple[BinaryQuadraticModel, CQMToBQMInverter]:
"""Construct a binary quadratic model from a constrained quadratic model.
Args:
cqm: A constrained quadratic model. All constraints must be linear
and all integer variables must have a lower bound of 0.
lagrange_multiplier: The penalty strength used when converting
constraints into penalty models. Defaults to 10x the largest
bias in the objective.
Returns:
A 2-tuple containing:
A binary quadratic model
A function that converts samples over the binary quadratic model
back into samples for the constrained quadratic model.
Example:
Start with a constrained quadratic model
>>> num_widget_a = dimod.Integer('num_widget_a', upper_bound=7)
>>> num_widget_b = dimod.Integer('num_widget_b', upper_bound=3)
>>> cqm = dimod.ConstrainedQuadraticModel()
>>> cqm.set_objective(-3 * num_widget_a - 4 * num_widget_b)
>>> cqm.add_constraint(num_widget_a + num_widget_b <= 5, label='total widgets')
'total widgets'
Convert
# -*- coding: utf-8 -*-
# Created on Feb 17, 2013
# Last updated on September 22, 2016
# @author: <NAME>
import os
import re
import _file_utils as utils
import inspect
import unicodedata
from copy import deepcopy
from itertools import chain
class HierarchyManager:
def __init__(self, text_path, header_regex, tag_path=None, preamble_level=0, case_sensitive=False,
tag_format='ccp'):
"""
Regular expression-based tagger. Wraps the _Parser segmenter and uses its outputs to apply content tags to hierarchy levels.
Also generates a few reports and formats output.
:param text_path: path to document being analyzed. Assumed to have been cleaned appropriately.
:param header_regex: list of regular expressions corresponding to organizational headers.
:param tag_path: optionally, path to hierarchical tags. If not given, tagging is not conducted.
:param preamble_level: optionally, highest-level organizational tag following the document's preamble.
:param case_sensitive: indicator for whether the header regex matches should be case-sensitive.
:param tag_format: format in which tag data are given.
:return:
"""
# read raw text data, get tags, set flags, create containers for outputs
self.pwd = os.path.dirname(inspect.getfile(inspect.currentframe()))
self.file_name = re.sub(r'\..+', '', os.path.basename(text_path))
self.header_regex = header_regex
self.text = None
self.parsed = None
self.skeleton = None
self.tag_data = None
if case_sensitive:
self.case_flags = re.M
else:
self.case_flags = re.I | re.M
self.text = utils.TextLoader(text_path).content
# read reference data, if any
self.tag_data = utils.TagLoader(tag_path, tag_format).data
if self.tag_data:
self.tag_report = []
else:
self.tag_report = None
# initialize _Parser()
self.parser = _Parser(self.text, self.header_regex, self.case_flags, preamble_level)
def parse(self):
"""
Parse the document. This function largely wraps the _Parser class, and creates a skeleton of the
organizational hierarchy that can be used as a diagnostic tool.
"""
def create_skeleton(obj, out=None, depth=0):
if not out:
out = []
for entry in obj:
if entry['header']:
header_to_write = entry['header']
out.append(depth * '\t' + header_to_write + os.linesep)
if entry['children']:
if entry['children'][0]['header']:
out = create_skeleton(entry['children'], out, depth+1)
else:
out = create_skeleton(entry['children'], out, depth)
return out
self.parser.segment()
self.parsed = self.parser.parsed
self.skeleton = create_skeleton(self.parsed)
def apply_tags(self):
"""
Apply the actual content tags to the text. Tags are assumed to come in the form "75.4", which indicates that the
tag should be applied to section 4 of section 75. The only requirement imposed here is that a potential match end
with the tag reference (so "75.4" would match "3.75.4" but not "75.4.3" or "75.3.4"). Unmatched tags or tags that
match more than one section are added to the tag_report container.
"""
def create_stub_table(obj, out=None, current_header=None):
"""
Helper function to recursively create a "stub" table, consisting of a mapping between all possible header
stubs and the index combination used to reach that header stub in the parsed object. Used later for matching
and tag application.
"""
def format_header(h):
"""
Helper function to strip sequences not used for matching from headers (e.g. "Title" or "Article")
"""
h = h.lower()
h = ''.join(e for e in h if unicodedata.category(e)[0] not in ['P', 'C'])
if h != 'preamble':
h = re.sub(r'[a-zA-Z]{3,}|\s+', '', h)
return h
if not out:
out = {}
for i in range(len(obj)):
entry = obj[i]
header = entry['header']
if current_header:
updated_header = deepcopy(current_header)
if header:
updated_header['header'].append(format_header(header))
updated_header['key'].append(i)
else:
updated_header = {'header': [format_header(header)], 'key': [i]}
if entry['text_type'] != 'body':
joined_header = '.'.join(h for h in updated_header['header'] if h)
out[joined_header] = updated_header['key']
if entry['children']:
create_stub_table(entry['children'], out, updated_header)
return out
def apply_tag(obj, index_seq, tag):
"""
Helper function to apply tags to the parsed object.
"""
entry = obj[index_seq.pop(0)]
if index_seq:
entry['children'] = apply_tag(entry['children'], index_seq, tag)
else:
entry['tags'].append(tag)
return obj
stub_table = create_stub_table(self.parsed)
# check for tag matches and apply tags to the parsed object
if self.tag_data:
for tag_entry in self.tag_data:
tag_name = tag_entry['tag']
tag_reference = tag_entry['article']
matches = [s for s in stub_table if re.search(r'^' + tag_reference + r'$|\.' + tag_reference + r'$', s,
self.case_flags)]
if len(matches) == 1:
key_sequence = deepcopy(stub_table[matches[0]])
self.parsed = apply_tag(self.parsed, key_sequence, tag_name)
else:
self.tag_report.append(tag_entry)
# output a quick summary of number of tags matched
if self.tag_report is not None:
print('{0} out of {1} tags not matched. See reports for details.'.format(len(self.tag_report),
len(self.tag_data)))
else:
print('All tags successfully matched.')
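The matching rule used by `apply_tags` is a plain suffix test on the dotted stub: the tag must be the whole stub, or follow the final dot. A standalone sketch of that pattern (note the tag reference is interpolated unescaped, exactly as in the code above, so a `.` inside the tag matches any character):

```python
import re


def matches_tag(stub, tag_reference, flags=re.I):
    # same shape as the apply_tags pattern: an exact match, or a match
    # preceded by '.' that runs to the end of the stub
    pattern = '^' + tag_reference + '$|\\.' + tag_reference + '$'
    return bool(re.search(pattern, stub, flags))
```

So "75.4" matches the stubs "75.4" and "3.75.4", but not "75.4.3" or "75.3.4".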
def create_output(self, output_format='ccp'):
"""
Format the parsed object for easier output.
:param output_format: format to use. Only CCP format currently implemented.
"""
def format_ccp(obj, out=None, parent_index=0):
"""
CCP format setup. Outputs a CSV with document hierarchy expressed using parent/child index columns.
"""
if not out:
out = []
for i in range(len(obj)):
entry = obj[i]
if entry['header']:
header_to_write = entry['header']
else:
header_to_write = ''
split_text = re.split('[\n\r]+', entry['text'])
for line in split_text:
current_index = len(out)+1
if entry['text_type'] != 'body' or line:
out.append([str(current_index), str(parent_index), header_to_write, '',
entry['text_type'], line] + entry['tags'])
if entry['children']:
out = format_ccp(entry['children'], out, parent_index=len(out))
return out
if 'ccp' in output_format:
out_data = format_ccp(self.parsed)
# rectangularize
max_cols = max(len(row) for row in out_data)
out_data = [row + ['']*(max_cols - len(row)) for row in out_data]
if 'multilingual' in output_format:
out_data = [row[0:2] + 3*[row[2]] + row[3:5] + 3*[row[5]] + row[6:] for row in out_data]
return out_data
else:
print('Only CCP output format currently implemented.')
class _Parser:
def __init__(self, text, header_regex, case_flags, preamble_level):
"""
Segmenter class, which does actual document segmentation work. Intended to be called through HierarchyManager.
:param text: text to be segmented
:param header_regex: list of header regex to be used for segmentation
:param preamble_level: highest-level organizational tag following the document's preamble.
:param case_flags: indicator for whether the header regex matches should be case-sensitive.
"""
self.text = text
self.header_regex = ['^' + unicode(h, encoding='utf8').replace('|', '|^') for h in header_regex]
self.preamble_level = preamble_level
self.case_flags = case_flags
self.parsed, self.list_table = self._pre_process()
def segment(self):
"""
Set up organizational headers, using regex list provided in self.header_regex. Text is pre-processed, then
segmented into a hierarchical structure. Regular text and auxiliary list table are segmented separately, and
then reassembled into a single output.
"""
def shatter(obj, header_tag, case_flags):
"""
Recursive function to segment a given object, using a given organizational tag. Segmented items are
placed under the "children" key of the object, and then recursively segmented if any additional headers
matching the same tag are present.
:param obj: Dictionary object to be segmented. Expected to be tabulated text or list container object.
:param header_tag: Regex tag for a particular header.
:param case_flags: Flags for case sensitivity.
:return: segmented obj
"""
# iterate over object (note that object may change size during iteration)
entry_counter = 0
while entry_counter < len(obj):
entry = obj[entry_counter]
header_matches = list(re.finditer(header_tag, entry['text'], flags=case_flags))
# if a header match is found, split the text into pre-match start_stub and post-match content
if len(header_matches) > 0:
header_starts = [header.start() for header in header_matches]
header_starts.append(len(entry['text']))
start_stub = entry['text'][:header_starts[0]].strip('\t\n\r ')
new_entries = []
# for all header matches in post-match content, extract titles and text and format an entry
for j, header_match in enumerate(header_matches):
text = entry['text'][header_match.end():header_starts[j+1]].strip('\t\n\r ')
first_line_index = re.search('[\n\r]', text)
if not first_line_index:
first_line_index = len(text)
else:
first_line_index = first_line_index.end()
first_line = text[:first_line_index]
first_line = first_line.strip('\t\n\r ')
if '<title>' in first_line and '</title>' in first_line:
title = re.search('<title>.*?</title>', first_line)
elif '<title>' in first_line:
title = re.search('.*<title>.*', first_line)
else:
title = None
header = header_match.group(0).strip('\t\n\r ')
header = re.sub('[,|;^#*]', '', header)
header = re.sub('[-.:](?![A-Za-z0-9])', '', header)
if title:
text = first_line[:title.start()] + first_line[title.end():] + text[first_line_index:]
title_text = re.sub('</?title>', '', title.group(0)).strip('\t\n\r ')
else:
title_text = ''
new_entry = {'header': header,
'text': title_text,
'children': deepcopy(entry['children']),
'text_type': u'title',
'tags': []}
new_entry['children'].insert(0, {'header': None,
'text': text,
'children': [],
'text_type': u'body',
'tags': []
}
)
new_entries.append(new_entry)
# handle case where organization "skips" a level
# if we look to shatter content and children are already present, then that implies:
# - carry-over children from new entries are duplicates by definition (except for first one)
# - new entries should be on the same level as existing children
# this section deletes duplicate children and adds new entries to same level as existing
if entry['children']:
for j in range(len(new_entries)):
new_entries[j]['children'] = new_entries[j]['children'][0:1]
entry['text'] = start_stub
entry['children'] = new_entries + entry['children']
# if there is a start_stub, then add new header matches as children of the current entry
elif start_stub:
entry['children'].insert(0, {'header': None,
'text':
f_attr, g_attr)
else:
lm = lambda i, j, alg='encrypt' : \
self.compose(f, g(i, j, alg), alg, f_attr, g_attr)
return RijndaelGF.Round_Component_Poly_Constr(lm, self)
def add_round_key_poly_constr(self):
r"""
Return the ``Round_Component_Poly_Constr`` object corresponding to
AddRoundKey.
EXAMPLES::
sage: from sage.crypto.mq.rijndael_gf import RijndaelGF
sage: rgf = RijndaelGF(4, 4)
sage: ark_pc = rgf.add_round_key_poly_constr()
sage: ark_pc
A polynomial constructor for the function 'Add Round Key' of Rijndael-GF block cipher with block length 4, key length 4, and 10 rounds.
sage: ark_pc(0, 1)
a01 + k001
When invoking the returned object's ``__call__`` method, changing the
value of ``algorithm='encrypt'`` does nothing, since the AddRoundKey
round component function is its own inverse. ::
sage: with_encrypt = ark_pc(1, 1, algorithm='encrypt')
sage: with_decrypt = ark_pc(1, 1, algorithm='decrypt')
sage: with_encrypt == with_decrypt
True
When invoking the returned object's ``__call__`` method, one can change
the round subkey used in the returned polynomial by changing the
``round=0`` keyword. ::
sage: ark_pc(2, 1, round=7)
a21 + k721
When passing the returned object to methods such as ``apply_poly`` and
``compose``, we can make these methods use a non-default value for
``round=0`` by passing in a dictionary mapping ``round`` to a different
value. ::
sage: rgf.apply_poly(rgf.state_vrs, ark_pc,
....: poly_constr_attr={'round' : 6})
[a00 + k600 a01 + k601 a02 + k602 a03 + k603]
[a10 + k610 a11 + k611 a12 + k612 a13 + k613]
[a20 + k620 a21 + k621 a22 + k622 a23 + k623]
[a30 + k630 a31 + k631 a32 + k632 a33 + k633]
::
sage: rcpc = rgf.compose(ark_pc, ark_pc,
....: f_attr={'round' : 3}, g_attr={'round' : 5})
sage: rcpc(3, 1)
a31 + k331 + k531
"""
return self._add_round_key_rcpc
def _add_round_key_pc(self, row, col, algorithm='encrypt', round=0):
r"""
Return a polynomial representing an element of a round-key addition.
INPUT:
- ``row`` -- The row number of the entry represented by this method's
output.
- ``col`` -- The column number of the entry represented by this
method's output.
- ``algorithm`` -- (default: "encrypt") Whether to return the
polynomial as an encryption or as a decryption. The encryption flag
is "encrypt" and the decryption flag is "decrypt".
- ``round`` -- (default: 0) The round number of the entry represented
by this method's output.
OUTPUT:
- A polynomial representing the ``row,col`` th entry of a state matrix
after a round-key addition in terms of entries of the input state
matrix and entries of the ``round`` th round key.
EXAMPLES::
sage: from sage.crypto.mq.rijndael_gf import RijndaelGF
sage: rgf = RijndaelGF(4, 4)
sage: rgf._add_round_key_pc(1, 2, round=7)
a12 + k712
As expected, since the encryption and decryption transformations are
identical, changing ``algorithm`` has no effect. ::
sage: with_encrypt = rgf._add_round_key_pc(3, 2,
....: 'encrypt')
sage: with_decrypt = rgf._add_round_key_pc(3, 2,
....: 'decrypt')
sage: with_encrypt == with_decrypt
True
"""
if round not in range(self._Nr):
msg = "keyword 'round' must be between 0 and {0}"
raise ValueError(msg.format(self._Nr - 1))
state_var = self.state_vrs[row, col]
key_var = self.subkey_vrs[round][row, col]
return state_var + key_var
def add_round_key(self, state, round_key):
r"""
Return the round-key addition of matrices ``state`` and ``round_key``.
INPUT:
- ``state`` -- The state matrix to have ``round_key`` added to.
- ``round_key`` -- The round key to add to ``state``.
OUTPUT:
- A state matrix which is the round key addition of ``state`` and
``round_key``. This transformation is simply the entrywise addition
of these two matrices.
EXAMPLES::
sage: from sage.crypto.mq.rijndael_gf import RijndaelGF
sage: rgf = RijndaelGF(4, 4)
sage: state = rgf._hex_to_GF('36339d50f9b539269f2c092dc4406d23')
sage: key = rgf._hex_to_GF('7CC78D0E22754E667E24573F454A6531')
sage: key_schedule = rgf.expand_key(key)
sage: result = rgf.add_round_key(state, key_schedule[0])
sage: rgf._GF_to_hex(result)
'4af4105edbc07740e1085e12810a0812'
"""
self._check_valid_PRmatrix(state, 'state')
self._check_valid_PRmatrix(round_key, 'round_key')
# We don't use apply_poly here since that would require giving this
# method an extra argument of a round number
return state + round_key
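Since the state entries are elements of `GF(2^8)`, the entrywise addition above amounts to XORing the two 16-byte blocks. A quick Sage-free check of the doctest values, using the hex strings from the example:

```python
def xor_hex_states(state_hex, key_hex):
    """Entrywise addition over GF(2^8) is XOR on the underlying bytes."""
    pairs = zip(bytes.fromhex(state_hex), bytes.fromhex(key_hex))
    return bytes(a ^ b for a, b in pairs).hex()
```

XORing the example state with the key reproduces the round-key-addition result shown in the doctest, since round key 0 of the schedule is the key itself.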
def sub_bytes_poly_constr(self):
r"""
Return the ``Round_Component_Poly_Constr`` object corresponding to
SubBytes.
EXAMPLES::
sage: from sage.crypto.mq.rijndael_gf import RijndaelGF
sage: rgf = RijndaelGF(4, 4)
sage: sb_pc = rgf.sub_bytes_poly_constr()
sage: sb_pc
A polynomial constructor for the function 'SubBytes' of Rijndael-GF block cipher with block length 4, key length 4, and 10 rounds.
sage: sb_pc(2, 3)
(x^2 + 1)*a23^254 +
(x^3 + 1)*a23^253 +
(x^7 + x^6 + x^5 + x^4 + x^3 + 1)*a23^251 +
(x^5 + x^2 + 1)*a23^247 +
(x^7 + x^6 + x^5 + x^4 + x^2)*a23^239 +
a23^223 +
(x^7 + x^5 + x^4 + x^2 + 1)*a23^191 +
(x^7 + x^3 + x^2 + x + 1)*a23^127 +
(x^6 + x^5 + x + 1)
The returned object's ``__call__`` method has an additional keyword
of ``no_inversion=False``, which causes the returned polynomial to
represent only the affine transformation step of SubBytes. ::
sage: sb_pc(1, 0, no_inversion=True)
(x^7 + x^3 + x^2 + x + 1)*a10^128 +
(x^7 + x^5 + x^4 + x^2 + 1)*a10^64 +
a10^32 +
(x^7 + x^6 + x^5 + x^4 + x^2)*a10^16 +
(x^5 + x^2 + 1)*a10^8 +
(x^7 + x^6 + x^5 + x^4 + x^3 + 1)*a10^4 +
(x^3 + 1)*a10^2 +
(x^2 + 1)*a10 +
(x^6 + x^5 + x + 1)
We can build a polynomial representing the inverse transformation
by setting the keyword ``algorithm='decrypt'``. However, the order of
the affine transformation and the inversion step in SubBytes means that
this polynomial has thousands of terms and is very slow to compute.
Hence, if one wishes to build the decryption polynomial with the
intention of evaluating that polynomial for a particular input, it is
strongly recommended to first call
``sb_pc(i, j, algorithm='decrypt', no_inversion=True)`` to build a
polynomial representing only the inverse affine transformation,
evaluate this polynomial for your intended input, then finally
calculate the inverse of the result. ::
sage: poly = sb_pc(1, 2, algorithm='decrypt', no_inversion=True)
sage: state = rgf._hex_to_GF('39daee38f4f1a82aaf432410c36d45b9')
sage: result = poly(state.list())
sage: rgf._GF_to_hex(result^-1)
'49'
When passing the returned object to ``apply_poly`` and ``compose``, we
can make those methods change the keyword ``no_inversion`` of this
object's ``__call__`` method by passing the dictionary
``{'no_inversion' : True}`` to them. ::
sage: result = rgf.apply_poly(state, sb_pc,
....: poly_constr_attr={'no_inversion' : True})
sage: rgf._GF_to_hex(result)
'961c72894526f746aa85fc920adcc719'
::
sage: rcpc = rgf.compose(sb_pc, rgf.shift_rows_poly_constr(),
....: f_attr={'no_inversion' : True})
Note that if we set ``algorithm='decrypt'`` for ``apply_poly``, it
will perform the necessary performance enhancement described above
automatically. The structure of ``compose``, however, unfortunately
does not allow this enhancement to be employed.
"""
return self._sub_bytes_rcpc
def _sub_bytes_pc(self, row, col, algorithm='encrypt', no_inversion=False):
r"""
Return a polynomial representing `SubBytes(A)_{\textit{row, col}}`.
INPUT:
- ``row`` -- The row number of the entry represented by this method's
output.
- ``col`` -- The column number of the entry represented by this
method's output.
- ``algorithm`` -- (default: "encrypt") Whether to return the
polynomial as an encryption or as a decryption. The encryption flag
is "encrypt" and the decryption flag is "decrypt".
- ``no_inversion`` -- (default: ``False``) Don't perform the inversion
step, only perform the affine transformation. Primarily intended
to increase performance during decryption, as is shown in the
below example.
OUTPUT:
- A polynomial representing the ``row,col`` th entry of a state matrix
after the SubBytes method has been applied to it.
EXAMPLES::
sage: from sage.crypto.mq.rijndael_gf import RijndaelGF
sage: rgf = RijndaelGF(4, 4)
sage: rgf._sub_bytes_pc(2, 3)
(x^2 + 1)*a23^254 + (x^3 + 1)*a23^253 + (x^7 + x^6 + x^5 + x^4 + x^3 + 1)*a23^251 + (x^5 + x^2 + 1)*a23^247 + (x^7 + x^6 + x^5 + x^4 + x^2)*a23^239 + a23^223 + (x^7 + x^5 + x^4 + x^2 + 1)*a23^191 + (x^7 + x^3 + x^2 + x + 1)*a23^127 + (x^6 + x^5 + x + 1)
We can use this polynomial to calculate individual entries of the
output matrix for any given state as such::
sage: state = rgf._hex_to_GF('6385b79ffc538df997be478e7547d691')
sage: poly = rgf._sub_bytes_pc(2, 3)
sage: poly(state.list())
x^7 + x^6 + x^5 + x^4 + x^2 + x
We can set ``no_inversion`` to ``True`` to get a polynomial
representation of solely the affine transformation. ::
sage: rgf._sub_bytes_pc(0, 2, no_inversion=True)
(x^7 + x^3 + x^2 +
is unbound in the current context.
Raises:
NameError: If requested `name` is not found among the bound names
currently available in `self`.
"""
py_typecheck.check_type(name, six.string_types)
comp = self.active_node
while not isinstance(comp.payload, OuterContextPointer):
if name == comp.payload.name:
return comp.payload
if comp.older_sibling is not None:
comp = comp.older_sibling
elif comp.parent is not None:
comp = comp.parent
raise NameError('Name {} is not available in {}'.format(name, self))
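The scope-resolution order implemented above (nearest older sibling first, then the enclosing parent) can be sketched on a toy node type; `Node` and `resolve` here are illustrative stand-ins, not the actual `SequentialBindingNode` API:

```python
class Node:
    """Toy stand-in for a symbol-tree node with sibling/parent links."""

    def __init__(self, name, value, parent=None, older_sibling=None):
        self.name, self.value = name, value
        self.parent, self.older_sibling = parent, older_sibling


def resolve(node, name):
    """Walk older siblings, then parent scopes, until `name` is found."""
    comp = node
    while comp is not None:
        if comp.name == name:
            return comp.value
        # prefer the nearest older sibling; fall back to the parent scope
        comp = comp.older_sibling if comp.older_sibling is not None else comp.parent
    raise NameError(name)
```

A binding in an older sibling shadows one of the same name in an outer (parent) scope, matching the traversal order of the method above.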
def update_payload_tracking_reference(self, ref):
"""Calls `update` if it finds its Reference arg among the available symbols.
If there is no such available symbol, simply does nothing.
Args:
ref: Instance of `computation_building_blocks.Reference`; generally, this
is the variable a walker has encountered in a TFF AST, and which it is
relying on `SymbolTable` to address correctly.
Raises:
NameError: If `ref` is not found among the bound names currently
available in `self`.
"""
py_typecheck.check_type(ref, computation_building_blocks.Reference)
comp = self.active_node
while not isinstance(comp.payload, OuterContextPointer):
if ref.name == comp.payload.name:
comp.payload.update(ref)
break
if comp.older_sibling is not None:
comp = comp.older_sibling
elif comp.parent is not None:
comp = comp.parent
if isinstance(comp.payload, OuterContextPointer):
raise NameError('The reference {} is not available in {}'.format(
ref, self))
def ingest_variable_binding(self, name, value, mode, comp_id=None):
"""Constructs or updates node in symbol tree as AST is walked.
Passes `name` and `value` onto the symbol tree's node constructor, with
`mode` determining how the node being constructed or updated
relates to the symbol tree's `active_node`.
If there is no preexisting node in the symbol tree bearing the
requested relationship to the active node, a new one will be constructed and
initialized. If there is an existing node, `ingest_variable_binding` checks
that this node has the correct `payload.name`, and overwrites its
`payload.value` with the `value` argument.
Args:
name: The string name of the `CompTracker` instance we are constructing or
updating.
value: Instance of `computation_building_blocks.ComputationBuildingBlock`
or `None`, as in the `value` to pass to symbol tree's node payload
constructor.
mode: Enum indicating the relationship the desired node should bear to the
symbol tree's active node. Can be either CHILD or SIBLING.
comp_id: Integer `comp_id` generated by walking the tree, used to address
children of nodes in the symbol tree. Only necessary if `mode` is
'child'.
Raises:
ValueError: If we are passed a name-mode pair such that a
preexisting node in the symbol tree bears this relationship with
the active node, but has a different name. This is an indication
that either a transformation has failed to happen in the symbol tree
or that we have a symbol tree instance that does not match the
computation we are currently processing.
"""
py_typecheck.check_type(mode, MutationMode)
if mode == MutationMode.CHILD:
py_typecheck.check_type(comp_id, int)
py_typecheck.check_type(name, six.string_types)
if value is not None:
py_typecheck.check_type(
value, computation_building_blocks.ComputationBuildingBlock)
node = SequentialBindingNode(self.payload_type(name=name, value=value))
if mode == MutationMode.SIBLING:
if self.active_node.younger_sibling is None:
self._add_younger_sibling(node)
self._move_to_younger_sibling()
else:
if self.active_node.younger_sibling.payload.name != name:
raise ValueError(
'You have a mismatch between your symbol tree and the '
'computation you are trying to process; your symbol tree is {} '
'and you are looking for a BoundVariableTracker with name {} '
'and value {}'.format(self, name, value))
self._move_to_younger_sibling()
self.active_node.payload.value = value
else:
if self.active_node.children.get(comp_id) is None:
self._add_child(comp_id, node)
self._move_to_child(comp_id)
else:
if self.active_node.children[comp_id].payload.name != name:
raise ValueError(
'You have a mismatch between your symbol tree and the '
'computation you are trying to process; your symbol tree is {} '
'and you are looking for a BoundVariableTracker with name {} '
'and value {}'.format(self, name, value))
self._move_to_child(comp_id)
self.active_node.payload.value = value
def move_to_parent_context(self):
"""Moves `active_node` to the parent of current active node.
Of the `active_node` manipulation methods, this is the only one exposed.
This is because the parent-child relationship corresponds directly to
passing through a scope-introducing TFF AST node in a postorder traversal;
therefore it is convenient to expose this as a mechanism to a TFF AST
traversal function. The rest of these manipulation methods are more easily
exposed via `ingest_variable_binding`.
Raises:
Raises ValueError if the active node has no parent.
"""
if self.active_node.parent:
self.active_node = self.active_node.parent
else:
raise ValueError('You have tried to move to a nonexistent parent.')
def _add_younger_sibling(self, comp_tracker):
"""Appends comp as younger sibling of current `active_node`."""
py_typecheck.check_type(comp_tracker, SequentialBindingNode)
if self._node_ids.get(id(comp_tracker)):
raise ValueError(
'Each instance of {} can only appear once in a given symbol tree.'
.format(self.payload_type))
if self.active_node.younger_sibling is not None:
raise ValueError('Ambiguity in adding a younger sibling')
if self.active_node.parent is not None:
comp_tracker.set_parent(self.active_node.parent)
comp_tracker.set_older_sibling(self.active_node)
self.active_node.set_younger_sibling(comp_tracker)
self._node_ids[id(comp_tracker)] = 1
def _move_to_younger_sibling(self):
"""Moves `active_node` to the younger sibling of the current active node.
Raises:
Raises ValueError if the active node has no younger sibling.
"""
if self.active_node.younger_sibling:
self.active_node = self.active_node.younger_sibling
else:
raise ValueError('You have tried to move to a '
'nonexistent younger sibling in ' + str(self))
def _move_to_older_sibling(self):
"""Moves `active_node` to the older sibling of the current active node.
Raises:
ValueError: If the active node has no older sibling.
"""
if self.active_node.older_sibling:
self.active_node = self.active_node.older_sibling
else:
raise ValueError('You have tried to move to a '
'nonexistent older sibling in ' + str(self))
def _add_child(self, constructing_comp_id, comp_tracker):
"""Writes `comp_tracker` to children of active node.
Each `SequentialBindingNode` keeps a `dict` of its children; `_add_child`
updates the value of this `dict` with key `constructing_comp_id` to be
`comp_tracker`.
Notice that `constructing_comp_id` is simply a way of addressing the
children in this dict; it is not necessarily globally unique, as long
as it is sufficient to address child scopes.
Args:
constructing_comp_id: Key to identify child being constructed from the
parent scope.
comp_tracker: Instance of `SequentialBindingNode`, the node to add as a
child of `active_node`.
"""
py_typecheck.check_type(comp_tracker, SequentialBindingNode)
if self._node_ids.get(id(comp_tracker)):
raise ValueError('Each node can only appear once in a given '
'symbol tree. You have tried to add {} '
'twice.'.format(comp_tracker.payload))
comp_tracker.set_parent(self.active_node)
self.active_node.add_child(constructing_comp_id, comp_tracker)
self._node_ids[id(comp_tracker)] = 1
def _move_to_child(self, comp_id):
"""Moves `active_node` to child of current active node with key `comp_id`.
Args:
comp_id: Integer representing the position of the child we wish to update
`active_node` to point to in a preorder traversal of the AST.
Raises:
ValueError: If the active node has no child with the correct id.
"""
if self.active_node.children.get(comp_id) is not None:
self.active_node = self.active_node.get_child(comp_id)
else:
raise ValueError('You have tried to move to a nonexistent child.')
def _equal_under_node(self, self_node, other_node):
"""Recursive helper function to check equality of `SymbolTree`s."""
if self_node is None and other_node is None:
return True
if self_node is None or other_node is None:
return False
if self_node.payload != other_node.payload:
return False
if len(self_node.children) != len(other_node.children):
return False
for (key_1, val_1), (key_2, val_2) in zip(
six.iteritems(self_node.children), six.iteritems(other_node.children)):
if key_1 != key_2:
return False
if not self._equal_under_node(val_1, val_2):
return False
return self._equal_under_node(self_node.younger_sibling,
other_node.younger_sibling)
def __eq__(self, other):
"""Walks to root of `self` and `other` before testing equality of subtrees.
Args:
other: Instance of `SymbolTree` to test for equality with `self`.
Returns:
Returns `True` if and only if `self` and `other` are the same
structurally (each node has the same number of children and siblings) and
each node of `self` compares as equal with the node in the corresponding
position of `other`.
"""
if self is other:
return True
if not isinstance(other, SymbolTree):
return NotImplemented
self_node = _walk_to_root(self.active_node)
other_node = _walk_to_root(other.active_node)
return self._equal_under_node(self_node, other_node)
def __ne__(self, other):
return not self == other
def _string_under_node(self, node):
"""Recursive helper function to generate string reps of `SymbolTree`s."""
py_typecheck.check_type(node, SequentialBindingNode)
if node is self.active_node:
active_node_indicator = '*'
else:
active_node_indicator = ''
symbol_tree_string = '[' + str(node.payload) + active_node_indicator + ']'
if node.children:
symbol_tree_string += '->{'
for _, child_node in six.iteritems(node.children):
if not child_node.older_sibling:
symbol_tree_string += '('
symbol_tree_string += self._string_under_node(child_node)
symbol_tree_string += '),('
symbol_tree_string = symbol_tree_string[:-2]
symbol_tree_string += '}'
if node.younger_sibling:
symbol_tree_string += '-' + self._string_under_node(node.younger_sibling)
return symbol_tree_string
def __str__(self):
"""Generates a string representation of this `SymbolTree`.
First we walk up to the root node, then we walk down the tree to generate
the string representation of this symbol tree.
Returns:
Returns a string representation of the current `SymbolTree`, with the
active node identified by a *.
"""
node = self.active_node
root_node = _walk_to_root(node)
py_typecheck.check_type(root_node.payload,
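The child/sibling bookkeeping above can be sketched with a toy node class; `Node` and `add_child` here are hypothetical stand-ins for `SequentialBindingNode` and `_add_child`, shown only to illustrate the navigation invariants:

```python
class Node:
    """Toy stand-in for SequentialBindingNode: parent link and keyed children."""
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = {}

def add_child(active, comp_id, child):
    # Mirrors _add_child: record the parent link and key the child
    # under the id used to address its scope.
    child.parent = active
    active.children[comp_id] = child
    return child

root = Node('root')
let_scope = add_child(root, 0, Node('let x'))
lambda_scope = add_child(root, 1, Node('lambda y'))

# Mirrors _move_to_child: moving requires the key to exist.
assert root.children.get(0) is let_scope
assert root.children.get(2) is None  # SymbolTree would raise ValueError here
assert let_scope.parent is root
```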
import requests
from .auth import CallHubAuth
from ratelimit import limits, sleep_and_retry
from .bulk_upload_tools import csv_and_mapping_create
from requests.structures import CaseInsensitiveDict
import types
import math
from requests_futures.sessions import FuturesSession
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor
import traceback
import time
class CallHub:
API_LIMIT = {
"GENERAL": {"calls": 13, "period": 1},
"BULK_CREATE": {"calls": 1, "period": 70},
}
def __init__(self, api_domain, api_key=None, rate_limit=API_LIMIT):
"""
Instantiates a new CallHub instance
>>> callhub = CallHub("https://api-na1.callhub.io")
With built-in rate limiting disabled:
>>> callhub = CallHub(rate_limit=False)
Args:
api_domain (``str``): Domain to access API (eg: api.callhub.io, api-na1.callhub.io), this varies by account
Keyword Args:
api_key (``str``, optional): Optional API key. If not provided,
it will attempt to use ``os.environ['CALLHUB_API_KEY']``
rate_limit (``dict``, optional): Enabled by default with settings that respect CallHub's API limits.
Setting this to False disables rate limiting, or you can set your own limits by following the example
below. Please don't abuse! :)
>>> callhub = CallHub(rate_limit={"GENERAL": {"calls": 13, "period": 1},
>>> "BULK_CREATE": {"calls": 1, "period": 70}})
- Default limits bulk_create to 1 per 70 seconds (CallHub states their limit is every 60s, but in
practice a delay of exactly 60s can trip their rate limiter anyway)
- Default limits all other API requests to 13 per second (CallHub support states their limit is 20/s, but
this stays on the safe side because their rate limiters seem a little sensitive)
"""
self.session = FuturesSession(max_workers=43)
# Attempt 3 retries for failed connections
adapter = requests.adapters.HTTPAdapter(max_retries=3)
self.session.mount('https://', adapter)
self.session.mount('http://', adapter)
# Truncate final '/' off of API domain if it was provided
if api_domain[-1] == "/":
self.api_domain = api_domain[:-1]
else:
self.api_domain = api_domain
if rate_limit:
# Apply general rate limit to self.session.get
rate_limited_get = sleep_and_retry(limits(**rate_limit["GENERAL"])(FuturesSession.get))
self.session.get = types.MethodType(rate_limited_get, self.session)
# Apply general rate limit to self.session.post
rate_limited_post = sleep_and_retry(limits(**rate_limit["GENERAL"])(FuturesSession.post))
self.session.post = types.MethodType(rate_limited_post, self.session)
# Apply bulk rate limit to self.bulk_create
self.bulk_create = sleep_and_retry(limits(**rate_limit["BULK_CREATE"])(self.bulk_create))
self.session.auth = CallHubAuth(api_key=api_key)
# validate_api_key returns administrator email on success
self.admin_email = self.validate_api_key()
# cache for do-not-contact number/list to id mapping
self.dnc_cache = {}
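The wrapping above depends on the third-party `ratelimit` package (`sleep_and_retry(limits(calls, period)(func))`). As a rough stdlib-only sketch of the same idea, with a hypothetical `rate_limited` decorator rather than the real library:

```python
import functools
import time

def rate_limited(calls, period):
    """Stdlib-only stand-in for ratelimit's limits + sleep_and_retry:
    allow at most `calls` invocations per `period`-second window,
    sleeping until the window resets once the budget is spent."""
    def decorator(func):
        state = {"window_start": time.monotonic(), "count": 0}
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            if now - state["window_start"] >= period:
                state["window_start"], state["count"] = now, 0
            if state["count"] >= calls:
                time.sleep(period - (now - state["window_start"]))
                state["window_start"], state["count"] = time.monotonic(), 0
            state["count"] += 1
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rate_limited(calls=2, period=0.2)
def ping():
    return time.monotonic()

start = time.monotonic()
for _ in range(4):
    ping()
# The third call exhausts the 2-per-window budget and sleeps, so four
# calls take at least one full period.
elapsed = time.monotonic() - start
```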
def __repr__(self):
return "<CallHub admin: {}>".format(self.admin_email)
def _collect_fields(self, contacts):
""" Internal Function to get all fields used in a list of contacts """
fields = set()
for contact in contacts:
for key in contact:
fields.add(key)
return fields
def _assert_fields_exist(self, contacts):
"""
Internal function to check if fields in a list of contacts exist in CallHub account
If fields do not exist, raises LookupError.
"""
# Note: CallHub fields are implemented funkily. They can contain capitalization but "CUSTOM_FIELD"
# and "custom_field" cannot exist together in the same account. For that reason, for the purposes of API work,
# fields are treated as case insensitive despite capitalization being allowed. Attempting to upload a contact
# with "CUSTOM_FIELD" will match to "custom_field" in a CallHub account.
fields_in_contacts = self._collect_fields(contacts)
fields_in_callhub = self.fields()
# Ensure case insensitivity and convert to set
fields_in_contact = set([field.lower() for field in fields_in_contacts])
fields_in_callhub = set([field.lower() for field in fields_in_callhub.keys()])
if fields_in_contact.issubset(fields_in_callhub):
return True
else:
raise LookupError("Attempted to upload contact(s) that contain fields that haven't been "
"created in CallHub. Fields present in upload: {} Fields present in "
"account: {}".format(fields_in_contact, fields_in_callhub))
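The case-insensitivity note boils down to a lowercased subset test; `fields_exist` below is a hypothetical standalone version of the check, not part of the client:

```python
def fields_exist(contact_fields, account_fields):
    # CallHub treats "CUSTOM_FIELD" and "custom_field" as the same field,
    # so compare lowercased sets: every field used by the contacts must
    # already exist in the account.
    wanted = {field.lower() for field in contact_fields}
    have = {field.lower() for field in account_fields}
    return wanted.issubset(have)

assert fields_exist({"First Name", "PHONE NUMBER"},
                    {"first name", "phone number", "email"})
assert not fields_exist({"nickname"}, {"first name", "phone number"})
```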
def validate_api_key(self):
"""
Returns the admin email address if the API key is valid. In rare cases, it may be unable to find the admin
email address and returns a warning string in that case. If the API key is invalid, raises ValueError. If the
CallHub API returns unexpected information, raises RuntimeError.
Returns:
username (``str``): Email of administrator account
"""
response = self.session.get("{}/v1/agents/".format(self.api_domain)).result()
if response.json().get("detail") in ['User inactive or deleted.', 'Invalid token.']:
raise ValueError("Bad API Key")
elif "count" in response.json():
if response.json()["count"]:
return response.json()["results"][0]["owner"][0]["username"]
else:
return "Cannot deduce admin account. No agent accounts (not even the default account) exist."
else:
raise RuntimeError("CallHub API is not returning expected values, but your api_key is fine. Their API "
"specifies that https://callhub-api-domain/v1/agents returns a 'count' field, but this was "
"not returned. Please file an issue on GitHub for this project if an issue for this does not "
"already exist.")
def agent_leaderboard(self, start, end):
params = {"start_date": start, "end_date": end}
response = self.session.get("{}/v1/analytics/agent-leaderboard/".format(self.api_domain), params=params).result()
return response.json().get("plot_data")
def fields(self):
"""
Returns a list of fields configured in the CallHub account and their ids
Returns:
fields (``dict``): dictionary of fields and ids
>>> {"first name": 0, "last name": 1}
"""
response = self.session.get('{}/v1/contacts/fields/'.format(self.api_domain)).result()
return {field['name']: field["id"] for field in response.json()["results"]}
def bulk_create(self, phonebook_id, contacts, country_iso):
"""
Leverages CallHub's bulk-upload feature to create many contacts. Supports custom fields.
>>> contacts = [{'first name': 'Sumiya', 'phone number':'5555555555', 'mobile number': '5555555555'},
>>> {'first name': 'Joe', 'phone number':'5555555555', 'mobile number':'5555555555'}]
>>> callhub.bulk_create(885473, contacts, 'CA')
Args:
phonebook_id(``int``): ID of phonebank to insert contacts into.
contacts(``list``): Contacts to insert (phone number is a MANDATORY field in all contacts)
country_iso(``str``): ISO 3166 two-char country code,
see https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2
"""
# Step 1. Get all fields from CallHub account
# Step 2. Check if all fields provided for contacts exist in CallHub account
# Step 3. Turn list of dictionaries into a CSV file and create a column mapping for the file
# Step 4. Upload the CSV and column mapping to CallHub
contacts = [CaseInsensitiveDict(contact) for contact in contacts]
if self._assert_fields_exist(contacts):
# Create CSV file in memory in a way that pleases CallHub and generate column mapping
csv_file, mapping = csv_and_mapping_create(contacts, self.fields())
# Upload CSV
data = {
'phonebook_id': phonebook_id,
'country_choice': 'custom',
'country_ISO': country_iso,
'mapping': mapping
}
response = self.session.post('{}/v1/contacts/bulk_create/'.format(self.api_domain), data=data,
files={'contacts_csv': csv_file}).result()
if "Import in progress" in response.json().get("message", ""):
return True
elif 'Request was throttled' in response.json().get("detail", ""):
raise RuntimeError("Bulk_create request was throttled because rate limit was exceeded.",
response.json())
else:
raise RuntimeError("CallHub did not report that import was successful: ", response.json())
def create_contact(self, contact):
"""
Creates single contact. Supports custom fields.
>>> contact = {'first name': 'Sumiya', 'phone number':'5555555555', 'mobile number': '5555555555'}
>>> callhub.create_contact(contact)
Args:
contact(``dict``): Contact to insert.
Note that country_code and phone_number are MANDATORY
Returns:
(``str``): ID of created contact or None if contact not created
"""
if self._assert_fields_exist([contact]):
url = "{}/v1/contacts/".format(self.api_domain)
responses, errors = self._handle_requests([{
"func": self.session.post,
"func_params": {"url": url, "data": {"name": contact}},
"expected_status": 201
}])
if errors:
raise RuntimeError(errors)
return responses[0].json().get("id")
def get_contacts(self, limit):
"""
Gets all contacts.
Args:
limit (``int``): Maximum number of contacts to get. If a limit is not
provided, the first 100 contacts are returned.
Returns:
contact_list (``list``): List of contacts, where each contact is a dict of key value pairs.
"""
contacts_url = "{}/v1/contacts/".format(self.api_domain)
return self._get_paged_data(contacts_url, limit)
def _get_paged_data(self, url, limit=math.inf):
"""
Internal function. Leverages _bulk_requests to aggregate paged data and return it quickly.
Args:
url (``str``): API endpoint to get paged data from.
Keyword Args:
limit (``float or int``): Limit of paged data to get. Default is infinity.
Returns:
paged_data (``list``): All of the paged data as a single list of dicts, where each dict contains key-value
pairs that represent each individual item in a page.
"""
first_page = self.session.get(url).result()
if first_page.status_code != 200:
raise RuntimeError("Status code {} when making request to: "
"{}, expected 200. Details: {}".format(first_page.status_code,
url,
first_page.text))
first_page = first_page.json()
# Handle either limit of 0 or no results
if first_page["count"] == 0 or limit == 0:
return []
# Set limit to the smallest of either the count or the limit
limit = min(first_page["count"], limit)
# Calculate number of pages
page_size = len(first_page["results"])
num_pages = math.ceil(limit/page_size)
requests = []
for i in range(1, num_pages+1):
requests.append({"func": self.session.get,
"func_params": {"url": url, "params": {"page": i}},
"expected_status": 200})
responses_list, errors = self._handle_requests(requests)
if errors:
raise RuntimeError(errors)
# Turn list of responses into aggregated data from all pages
paged_data = []
for response in responses_list:
paged_data += response.json()["results"]
paged_data = paged_data[:limit]
return paged_data
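The paging arithmetic above (clamp the requested limit to the server-reported count, then fetch `ceil(limit / page_size)` pages and trim) can be sketched in isolation; `plan_pages` is a hypothetical helper for illustration:

```python
import math

def plan_pages(total_count, page_size, limit=math.inf):
    # Mirrors _get_paged_data: nothing to fetch if there are no results
    # or the caller asked for zero items.
    if total_count == 0 or limit == 0:
        return 0, 0
    # Never request more items than the server reports existing.
    limit = min(total_count, limit)
    return math.ceil(limit / page_size), limit

# 130 contacts on the server, 25 per page, caller wants at most 60:
num_pages, effective_limit = plan_pages(130, 25, 60)
assert (num_pages, effective_limit) == (3, 60)
# No explicit limit: fetch every page.
assert plan_pages(130, 25) == (6, 130)
```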
def _handle_requests(self, requests_list, aggregate_json_value=None, retry=False, current_retry_count=0):
"""
Internal function. Executes a list of requests in batches, asynchronously. Allows fast execution of many requests.
>>> requests_list = [{"func": session.get,
>>> "func_params": {"url": "https://callhub-api-domain/v1/contacts/", "params": {"page": "1"}},
>>> "expected_status": 200}]
>>> _handle_requests(requests_list)
Args:
requests_list (``list``): List of dicts that each include
atom.stereochemistry == "S":
central_carbon_stereo_specified = True
assert central_carbon_stereo_specified
# Populate bond core property fields
fractional_bond_orders = [float(val) for val in range(18)]
for fbo, bond in zip(fractional_bond_orders, molecule.bonds):
bond.fractional_bond_order = fbo
# Do a first conversion to/from oemol
rdmol = molecule.to_rdkit()
molecule2 = Molecule.from_rdkit(rdmol)
# Test that properties survived first conversion
# assert molecule.to_dict() == molecule2.to_dict()
assert molecule.name == molecule2.name
# NOTE: This expects the same indexing scheme in the original and new molecule
central_carbon_stereo_specified = False
for atom in molecule2.atoms:
if (atom.atomic_number == 6) and atom.stereochemistry == "S":
central_carbon_stereo_specified = True
assert central_carbon_stereo_specified
for atom1, atom2 in zip(molecule.atoms, molecule2.atoms):
assert atom1.to_dict() == atom2.to_dict()
for bond1, bond2 in zip(molecule.bonds, molecule2.bonds):
assert bond1.to_dict() == bond2.to_dict()
assert (molecule.conformers[0] == molecule2.conformers[0]).all()
for pc1, pc2 in zip(molecule._partial_charges, molecule2._partial_charges):
pc1_ul = pc1.value_in_unit(unit.elementary_charge)
pc2_ul = pc2.value_in_unit(unit.elementary_charge)
assert_almost_equal(pc1_ul, pc2_ul, decimal=6)
assert (
molecule2.to_smiles(toolkit_registry=toolkit_wrapper)
== expected_output_smiles
)
# TODO: This should be its own test
def test_to_from_rdkit_core_props_unset(self):
"""Test RDKitToolkitWrapper to_rdkit() and from_rdkit() when given empty core property fields"""
toolkit_wrapper = RDKitToolkitWrapper()
# Replacing with a simple molecule with stereochemistry
input_smiles = r"C\C(F)=C(/F)C[C@](C)(Cl)Br"
expected_output_smiles = r"[H][C]([H])([H])/[C]([F])=[C](\[F])[C]([H])([H])[C@]([Cl])([Br])[C]([H])([H])[H]"
molecule = Molecule.from_smiles(input_smiles, toolkit_registry=toolkit_wrapper)
assert (
molecule.to_smiles(toolkit_registry=toolkit_wrapper)
== expected_output_smiles
)
# Ensure one atom has its stereochemistry specified
central_carbon_stereo_specified = False
for atom in molecule.atoms:
if (atom.atomic_number == 6) and atom.stereochemistry == "R":
central_carbon_stereo_specified = True
assert central_carbon_stereo_specified
# Do a first conversion to/from rdmol
rdmol = molecule.to_rdkit()
molecule2 = Molecule.from_rdkit(rdmol)
# Test that properties survived first conversion
assert molecule.name == molecule2.name
# NOTE: This expects the same indexing scheme in the original and new molecule
central_carbon_stereo_specified = False
for atom in molecule2.atoms:
if (atom.atomic_number == 6) and atom.stereochemistry == "R":
central_carbon_stereo_specified = True
assert central_carbon_stereo_specified
for atom1, atom2 in zip(molecule.atoms, molecule2.atoms):
assert atom1.to_dict() == atom2.to_dict()
for bond1, bond2 in zip(molecule.bonds, molecule2.bonds):
assert bond1.to_dict() == bond2.to_dict()
# The molecule was initialized from SMILES, so mol.conformers arrays should be None for both
assert molecule.conformers is None
assert molecule2.conformers is None
# The molecule was initialized from SMILES, so mol.partial_charges arrays should be None for both
assert molecule.partial_charges is None
assert molecule2.partial_charges is None
assert (
molecule2.to_smiles(toolkit_registry=toolkit_wrapper)
== expected_output_smiles
)
def test_from_rdkit_implicit_hydrogens(self):
"""
Test that hydrogens are inferred from hydrogen-less RDKit molecules,
unless the option is turned off.
"""
from rdkit import Chem
rdmol = Chem.MolFromSmiles("CC")
offmol = Molecule.from_rdkit(rdmol)
assert any([a.atomic_number == 1 for a in offmol.atoms])
offmol_no_h = Molecule.from_rdkit(rdmol, hydrogens_are_explicit=True)
assert not any([a.atomic_number == 1 for a in offmol_no_h.atoms])
@pytest.mark.parametrize(
"smiles, expected_map", [("[Cl:1][Cl]", {0: 1}), ("[Cl:1][Cl:2]", {0: 1, 1: 2})]
)
def test_from_rdkit_atom_map(self, smiles, expected_map):
"""
Test that Molecule.from_rdkit preserves atom-map indices from mapped
SMILES in the molecule's "atom_map" property.
"""
from rdkit import Chem
off_molecule = Molecule.from_rdkit(Chem.MolFromSmiles(smiles))
assert off_molecule.properties["atom_map"] == expected_map
def test_file_extension_case(self):
"""
Test round-trips of some file extensions when called directly from the toolkit wrappers,
including lower- and uppercase file extensions. Note that this test does not check
accuracy; it only checks that reading/writing completes without raising an exception.
"""
mols_in = RDKitToolkitWrapper().from_file(
file_path=get_data_file_path("molecules/ethanol.sdf"), file_format="sdf"
)
assert len(mols_in) > 0
mols_in = RDKitToolkitWrapper().from_file(
file_path=get_data_file_path("molecules/ethanol.sdf"), file_format="SDF"
)
assert len(mols_in) > 0
def test_get_sdf_coordinates(self):
"""Test RDKitToolkitWrapper for importing a single set of coordinates from a sdf file"""
toolkit_wrapper = RDKitToolkitWrapper()
filename = get_data_file_path("molecules/toluene.sdf")
molecule = Molecule.from_file(filename, toolkit_registry=toolkit_wrapper)
assert len(molecule.conformers) == 1
assert molecule.conformers[0].shape == (15, 3)
assert_almost_equal(
molecule.conformers[0][5][1].value_in_unit(unit.angstrom), 2.0104, decimal=4
)
def test_read_sdf_charges(self):
"""Test RDKitToolkitWrapper for importing a charges from a sdf file"""
toolkit_wrapper = RDKitToolkitWrapper()
filename = get_data_file_path("molecules/ethanol_partial_charges.sdf")
molecule = Molecule.from_file(filename, toolkit_registry=toolkit_wrapper)
assert molecule.partial_charges is not None
assert molecule.partial_charges[0] == -0.4 * unit.elementary_charge
assert molecule.partial_charges[-1] == 0.4 * unit.elementary_charge
def test_write_sdf_charges(self):
"""Test RDKitToolkitWrapper for writing partial charges to a sdf file"""
from io import StringIO
toolkit_wrapper = RDKitToolkitWrapper()
ethanol = create_ethanol()
sio = StringIO()
ethanol.to_file(sio, "SDF", toolkit_registry=toolkit_wrapper)
sdf_text = sio.getvalue()
# The output lines of interest here will look like
# > <atom.dprop.PartialCharge> (1)
# -0.40000000000000002 -0.29999999999999999 -0.20000000000000001 -0.10000000000000001 0.01 0.10000000000000001 0.20000000000000001 0.29999999999999999 0.40000000000000002
# Parse the SDF text, grabbing the numeric line above
sdf_split = sdf_text.split("\n")
charge_line_found = False
for line in sdf_split:
if charge_line_found:
charges = [float(i) for i in line.split()]
break
if "> <atom.dprop.PartialCharge>" in line:
charge_line_found = True
# Make sure that a charge line was ever found
assert charge_line_found
# Make sure that the charges found were correct
assert_almost_equal(
charges, [-0.4, -0.3, -0.2, -0.1, 0.00001, 0.1, 0.2, 0.3, 0.4]
)
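The flag-then-next-line scan in the test above generalizes to a small helper; `parse_partial_charges` is hypothetical, shown only to make the parsing strategy explicit:

```python
def parse_partial_charges(sdf_text, tag="> <atom.dprop.PartialCharge>"):
    # Find the SDF property tag line, then read the charges off the
    # immediately following line, as the test's loop does.
    lines = sdf_text.split("\n")
    for i, line in enumerate(lines):
        if tag in line:
            return [float(value) for value in lines[i + 1].split()]
    return None  # no partial charge block present

sample = "> <atom.dprop.PartialCharge>  (1)\n-0.4 0.0 0.4\n\n$$$$"
assert parse_partial_charges(sample) == [-0.4, 0.0, 0.4]
assert parse_partial_charges("no charges here\n$$$$") is None
```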
def test_sdf_properties_roundtrip(self):
"""Test RDKitToolkitWrapper for performing a round trip of a molecule with defined partial charges
and entries in the properties dict to and from a sdf file"""
toolkit_wrapper = RDKitToolkitWrapper()
ethanol = create_ethanol()
# Write ethanol to a temporary file, and then immediately read it.
with NamedTemporaryFile(suffix=".sdf") as iofile:
ethanol.to_file(
iofile.name, file_format="SDF", toolkit_registry=toolkit_wrapper
)
ethanol2 = Molecule.from_file(
iofile.name, file_format="SDF", toolkit_registry=toolkit_wrapper
)
assert (ethanol.partial_charges == ethanol2.partial_charges).all()
# Now test with no properties or charges
ethanol = create_ethanol()
ethanol.partial_charges = None
# Write ethanol to a temporary file, and then immediately read it.
with NamedTemporaryFile(suffix=".sdf") as iofile:
ethanol.to_file(
iofile.name, file_format="SDF", toolkit_registry=toolkit_wrapper
)
ethanol2 = Molecule.from_file(
iofile.name, file_format="SDF", toolkit_registry=toolkit_wrapper
)
assert ethanol2.partial_charges is None
assert ethanol2.properties == {}
def test_write_sdf_no_charges(self):
"""Test RDKitToolkitWrapper for writing an SDF file with no charges"""
from io import StringIO
toolkit_wrapper = RDKitToolkitWrapper()
ethanol = create_ethanol()
ethanol.partial_charges = None
sio = StringIO()
ethanol.to_file(sio, "SDF", toolkit_registry=toolkit_wrapper)
sdf_text = sio.getvalue()
# In our current configuration, if the OFFMol doesn't have partial charges, we DO NOT want a partial charge
# block to be written. For reference, it's possible to indicate that a partial charge is not known by writing
# out "n/a" (or another placeholder) in the partial charge block for atoms without charges.
assert "> <atom.dprop.PartialCharge>" not in sdf_text
def test_read_ethene_sdf(self):
"""
Test that RDKitToolkitWrapper can load an ethene molecule without complaining about bond stereo.
See https://github.com/openforcefield/openff-toolkit/issues/785
"""
ethene_file_path = get_data_file_path("molecules/ethene_rdkit.sdf")
toolkit_wrapper = RDKitToolkitWrapper()
toolkit_wrapper.from_file(ethene_file_path, file_format="sdf")
def test_load_multiconformer_sdf_as_separate_molecules(self):
"""
Test RDKitToolkitWrapper for reading a "multiconformer" SDF, which the OFF
Toolkit should treat as separate molecules
"""
toolkit_wrapper = RDKitToolkitWrapper()
filename = get_data_file_path("molecules/methane_multiconformer.sdf")
molecules = Molecule.from_file(filename, toolkit_registry=toolkit_wrapper)
assert len(molecules) == 2
assert len(molecules[0].conformers) == 1
assert len(molecules[1].conformers) == 1
assert molecules[0].conformers[0].shape == (5, 3)
def test_load_multiconformer_sdf_as_separate_molecules_properties(self):
"""
Test RDKitToolkitWrapper for reading a "multiconformer" SDF, which the OFF
Toolkit should treat as separate molecules
"""
toolkit_wrapper = RDKitToolkitWrapper()
filename = get_data_file_path("molecules/methane_multiconformer_properties.sdf")
molecules = Molecule.from_file(filename, toolkit_registry=toolkit_wrapper)
assert len(molecules) == 2
assert len(molecules[0].conformers) == 1
assert len(molecules[1].conformers) == 1
assert molecules[0].conformers[0].shape == (5, 3)
# The first molecule in the SDF has the following properties and charges:
assert molecules[0].properties["test_property_key"] == "test_property_value"
np.testing.assert_allclose(
molecules[0].partial_charges.value_in_unit(unit.elementary_charge),
[-0.108680, 0.027170, 0.027170, 0.027170, 0.027170],
)
# The second molecule in the SDF has the following properties and charges:
assert molecules[1].properties["test_property_key"] == "test_property_value2"
assert (
molecules[1].properties["another_test_property_key"]
== "another_test_property_value"
)
np.testing.assert_allclose(
molecules[1].partial_charges.value_in_unit(unit.elementary_charge),
[0.027170, 0.027170, 0.027170, 0.027170, -0.108680],
)
def test_write_multiconformer_mol_as_sdf(self):
"""
Test RDKitToolkitWrapper for writing a multiconformer molecule to SDF. The OFF toolkit should only
save the first conformer
"""
from io import StringIO
toolkit_wrapper = RDKitToolkitWrapper()
filename = get_data_file_path("molecules/ethanol.sdf")
ethanol = Molecule.from_file(filename, toolkit_registry=toolkit_wrapper)
ethanol.partial_charges = (
np.array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
* unit.elementary_charge
)
ethanol.properties["test_prop"] = "test_value"
new_conf = ethanol.conformers[0] + (
np.ones(ethanol.conformers[0].shape) * unit.angstrom
)
ethanol.add_conformer(new_conf)
sio = StringIO()
ethanol.to_file(sio, "sdf", toolkit_registry=toolkit_wrapper)
data = sio.getvalue()
# In SD format, each molecule ends with "$$$$"
assert data.count("$$$$") == 1
# A basic SDF for ethanol would be 27 lines, though the properties add three more
assert len(data.split("\n")) == 30
assert "test_prop" in data
assert "<atom.dprop.PartialCharge>" in data
# Ensure the first conformer's first atom's X coordinate is in the file
assert str(ethanol.conformers[0][0][0].value_in_unit(unit.angstrom))[:5] in data
# Ensure the SECOND conformer's first atom's X coordinate is NOT in the file
assert (
str(ethanol.conformers[1][0][0].in_units_of(unit.angstrom))[:5] not in data
)
def test_write_multiconformer_pdb(self):
"""
Make sure RDKit can write multi conformer PDB files.
"""
from io import StringIO
toolkit = RDKitToolkitWrapper()
# load up a multiconformer pdb file and condense down the conformers
molecules = Molecule.from_file(
get_data_file_path("molecules/butane_multi.sdf"), toolkit_registry=toolkit
)
butane
Processes a removed ticket, with optional reason given.
"""
if self.current_ticket and self.current_ticket.id == ticket.id:
# Post the reason
user = await self.fetch_user()
await user.send(reason or "Ticket #{} was removed from your queue!".format(ticket.id))
# Deliver next ticket
await self.deliver()
async def deliver(self) -> None:
"""
Deliver the current ticket and refresh the prompt.
"""
# TODO: Scheduling logic to handle delivery failure
# TODO: Logic to handle non-existent user
self.current_ticket = self.get_current_ticket()
if self.current_ticket:
logger.debug("Delivering ticket #{} to mod {}".format(self.current_ticket.id, self.modid))
try:
user = await self.find_user()
self.current_msg = await user.send(
content="Please comment on the following:",
embed=self.current_ticket.embed)
except discord.HTTPException:
# Reschedule
pass
else:
# Set current ticket to being delivered
self.current_ticket.update(stage=TicketStage.DELIVERED, delivered_id=self.current_msg.id)
# Update the last prompt message
self.last_prompt_msgid = self.current_msg.id
# (Re-)schedule the next prompt update
await self.schedule_prompt()
async def process_message(self, message: discord.Message) -> None:
"""
Process a non-command message from the moderator.
If there is a current active ticket, treat it as a comment.
Either way, update the last handled message in data.
"""
prefix = plugins.commands.conf.prefix
if not prefix or not message.content.startswith(prefix):
content = message.content
if ticket := self.current_ticket:
logger.info(
"Processing message from moderator {} as comment to ticket #{}: {}".format(
self.modid, ticket.id, repr(content)))
# Parse the message as a comment to the current ticket
duration, comment, msg = parse_ticket_comment(ticket, content)
# Update the ticket
with ticket.batch_update():
ticket.stage = TicketStage.COMMENTED
ticket.comment = comment
ticket.modified_by = self.modid
ticket.duration = duration
if ticket.status == TicketStatus.NEW:
ticket.status = TicketStatus.IN_EFFECT
self.last_read_msgid = message.id
user = await self.find_user()
await user.send("Ticket comment set! " + msg)
# Publish the ticket
# Implicitly triggers update of the last ticket message
await ticket.publish()
# Deliver the next ticket
await self.deliver()
else:
self.last_read_msgid = message.id
async def reload_mods() -> None:
"""
Reload all moderators from data.
"""
global ticketmods
logger.debug("Loading ticket moderators.")
# Unload mods
for mod in ticketmods.values():
mod.unload()
# Rebuild ticketmod list
ticketmods = {row[0]: TicketMod(row) for row in TicketMod._select_where()}
# Load mods
for mod in ticketmods.values():
await mod.load()
logger.info("Loaded {} ticket moderators.".format(len(ticketmods)))
def get_or_create_mod(modid: int) -> TicketMod:
"""
Get a single TicketMod by modid, or create it if it doesn't exist.
"""
mod = ticketmods.get(modid, None)
if not mod:
mod = TicketMod(TicketMod._insert(modid=modid))
ticketmods[modid] = mod
return mod
# ------------ Commands ------------
def resolve_ticket(msg: discord.Message, args: plugins.commands.ArgParser) -> Optional[Ticket]:
"""
Resolves a ticket from the given message and command args, if possible.
Ticket is extracted from either the referenced message or the first arg.
"""
ticket = None
if ref := msg.reference:
if (ref_msg := ref.resolved) and isinstance(ref_msg, discord.Message):
if ref_msg.author == discord_client.client.user and ref_msg.embeds:
embed = ref_msg.embeds[0]
name = embed.author.name
if isinstance(name, str) and name.startswith("Ticket #"):
ticket_id = int(name[8:].split(' ', maxsplit=1)[0])
ticket = get_ticket(ticket_id)
if ticket is None:
ticketarg = args.next_arg()
if ticketarg is not None and isinstance(ticketarg, plugins.commands.StringArg):
maybe_id = int(ticketarg.text)
# This is either a message snowflake (a big number) or a ticket id (small number). The leading 42 bits of a
# snowflake are the timestamp and we assume that if all of those are zero, it's probably not a snowflake as
# that would imply an epoch time of 0 milliseconds.
if maybe_id < 2**(10+12):
tickets = fetch_tickets_where(id=maybe_id)
else:
tickets = fetch_tickets_where(list_msgid=maybe_id)
ticket = next(tickets, None)
return ticket
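The size heuristic in the comment above can be isolated: Discord snowflakes carry a millisecond timestamp in their upper 42 bits, so any real snowflake is at least 2**22, while ticket ids are small integers. A sketch of that classification (the example ids are made up):

```python
# Threshold from resolve_ticket above: the lower 22 bits of a snowflake
# are worker/process/increment, the upper 42 bits are a timestamp.
SNOWFLAKE_THRESHOLD = 2 ** (10 + 12)

def classify(maybe_id):
    # A zero timestamp would mean epoch time 0 ms, so treat small
    # numbers as ticket ids and everything else as message snowflakes.
    return "ticket" if maybe_id < SNOWFLAKE_THRESHOLD else "snowflake"

assert classify(1374) == "ticket"                 # hypothetical ticket id
assert classify(799356783242903562) == "snowflake"  # Discord-sized id
```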
def summarise_tickets(*tickets: Ticket, title: str = "Tickets", fmt: Optional[str] = None
) -> Optional[Iterator[discord.Embed]]:
"""
Create paged embeds of ticket summaries from the provided list of tickets.
"""
if not tickets:
return None
lines = [ticket.summary(fmt=fmt) for ticket in tickets]
blocks = ['\n'.join(lines[i:i+10]) for i in range(0, len(lines), 10)]
page_count = len(blocks)
embeds = (discord.Embed(description=blocks[i], title=title) for i in range(page_count))
if page_count > 1:
embeds = (embed.set_footer(text="Page {}/{}".format(i+1, page_count)) for i, embed in enumerate(embeds))
return embeds
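The pagination in `summarise_tickets` is plain stride slicing: ten summary lines per embed. The chunking idiom on its own, with made-up line content:

```python
def chunk(lines, size=10):
    # Same slicing idiom as summarise_tickets: one joined block per page.
    return ['\n'.join(lines[i:i + size]) for i in range(0, len(lines), size)]

blocks = chunk([f"ticket {n}" for n in range(23)])
assert len(blocks) == 3                 # pages of 10 + 10 + 3
assert blocks[-1].count('\n') == 2      # last page holds 3 lines
```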
Page = collections.namedtuple('Page', ('content', 'embed'), defaults=(None, None))
async def pager(dest: discord.abc.Messageable, pages: List[Page]) -> None:
"""
Page a sequence of pages.
"""
next_reaction = '\u23ED'
prev_reaction = '\u23EE'
all_reaction = '\U0001F4DC'
reactions = (prev_reaction, all_reaction, next_reaction)
pages = list(pages)
# Sanity check
if not pages:
raise ValueError("Cannot page with no pages!")
# Send first page
msg = await dest.send(**pages[0]._asdict())
if len(pages) == 1:
return
# Add reactions
for r in reactions:
await msg.add_reaction(r)
index = 0
with plugins.reactions.ReactionMonitor(channel_id=msg.channel.id, message_id=msg.id, event='add',
filter=lambda _, p: p.emoji.name in reactions, timeout_each=120) as mon:
try:
while True:
_, payload = await mon
if str(payload.emoji) == next_reaction:
index += 1
elif str(payload.emoji) == prev_reaction:
index -= 1
elif str(payload.emoji) == all_reaction:
await msg.delete()
for page in pages:
await dest.send(**page._asdict())
break
index %= len(pages)
await msg.edit(**pages[index]._asdict())
try:
await msg.remove_reaction(payload.emoji, discord.Object(payload.user_id)) # type: ignore
except discord.HTTPException:
pass
else:
# Remove the reactions
try:
for r in reactions:
await msg.clear_reaction(r)
except discord.HTTPException:
pass
except asyncio.TimeoutError:
pass
except asyncio.CancelledError:
pass
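The navigation in `pager` wraps the page index with `index %= len(pages)`, so stepping past either end cycles around. That arithmetic in isolation:

```python
def step(index, delta, page_count):
    # Reaction-driven navigation as in the pager loop above:
    # modulo keeps the index inside [0, page_count).
    return (index + delta) % page_count

assert step(0, -1, 5) == 4  # "previous" from the first page wraps to the last
assert step(4, 1, 5) == 0   # "next" from the last page wraps to the first
```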
@plugins.commands.command("note")
@plugins.privileges.priv("mod")
async def cmd_note(msg: discord.Message, args: plugins.commands.ArgParser) -> None:
"""
Create a note on the target user.
"""
if not isinstance(target_arg := args.next_arg(), plugins.commands.UserMentionArg):
# TODO: Usage
return
targetid = target_arg.id
note = args.get_rest()
if not note:
# Request the note dynamically
prompt = await msg.channel.send("Please enter the note:")
del_reaction = '\u274C'
await prompt.add_reaction(del_reaction)
with plugins.reactions.ReactionMonitor(channel_id=msg.channel.id, message_id=prompt.id, author_id=msg.author.id,
event="add", filter=lambda _, p: p.emoji.name == del_reaction) as mon:
msg_task = asyncio.create_task(
discord_client.client.wait_for('message',
check=lambda msg_: msg_.channel == msg.channel and msg_.author == msg.author))
reaction_task = asyncio.ensure_future(mon)
# asyncio.wait() never raises TimeoutError; on timeout it returns with an empty `done` set,
# so the timeout must be detected by checking `done` rather than with try/except.
done, pending = await asyncio.wait((msg_task, reaction_task),
timeout=300, return_when=asyncio.FIRST_COMPLETED)
if not done:
await msg.channel.send("Note prompt timed out, please try again.")
elif msg_task in done:
note = msg_task.result().content
elif reaction_task in done:
await msg.channel.send("Note prompt cancelled, no note was created.")
msg_task.cancel()
reaction_task.cancel()
if note:
# Create the note ticket
ticket = await create_ticket(
type=TicketType.NOTE,
modid=msg.author.id,
targetid=targetid,
created_at=datetime.datetime.utcnow(),
created_by=msg.author.id,
stage=TicketStage.COMMENTED,
status=TicketStatus.IN_EFFECT,
comment=note)
# Ack note creation
await msg.channel.send(embed=discord.Embed(
description="[#{}]({}): Note created!".format(ticket.id, ticket.jump_link)))
@plugins.commands.command("tickets")
@plugins.commands.command("ticket")
@plugins.privileges.priv("mod")
async def cmd_ticket(msg: discord.Message, args: plugins.commands.ArgParser) -> None:
user = msg.author
reply = msg.channel.send
no_mentions = discord.AllowedMentions.none()
S_Arg = plugins.commands.StringArg
UM_Arg = plugins.commands.UserMentionArg
tickets: Iterable[Ticket]
embeds: Optional[Iterable[discord.Embed]]
cmd_arg = args.next_arg()
if not isinstance(cmd_arg, S_Arg):
return
cmd = cmd_arg.text.lower()
if cmd == "top":
"""
Usage: ticket top
Re-deliver the ticket at the top of your queue (if any) to your DMs.
"""
mod = get_or_create_mod(user.id)
if not mod.current_ticket:
await reply("Your queue is empty, good job!")
else:
await mod.deliver()
if msg.channel.type != discord.ChannelType.private:
await reply("Ticket #{} has been delivered to your DMs.".format(mod.current_ticket.id))
elif cmd == "queue":
"""
Usage: ticket queue [modmention]
Show the specified moderator's (or your own) ticket queue.
"""
modarg = args.next_arg()
if modarg is None or isinstance(modarg, UM_Arg):
modid = modarg.id if modarg is not None else user.id
embeds = None
if modid in ticketmods:
mod = ticketmods[modid]
tickets = mod.queue
embeds = summarise_tickets(
*tickets, title='Queue for {}'.format(modid),
fmt="[#{id}]({jump_link}): ({status}) **{type}** for {targetid!m}")
if embeds:
await pager(msg.channel, [Page(embed=embed) for embed in embeds])
else:
await reply(util.discord.format("{!m} has an empty queue!", modid),
allowed_mentions=no_mentions)
elif cmd == "take":
"""
Usage: ticket take <ticket>
Claim a ticket (i.e. set the responsible moderator to yourself).
"""
if not (ticket := resolve_ticket(msg, args)):
await reply("No ticket referenced or ticket could not be found.")
elif ticket.modid == msg.author.id:
await reply("This is already your ticket!")
else:
ticket.update(modid=msg.author.id)
await ticket.mod.ticket_removed(ticket,
"Ticket #{} has been claimed by {}.".format(ticket.id, msg.author.mention))
await ticket.publish()
await reply("You have claimed ticket #{}.".format(ticket.id))
elif cmd == "assign":
"""
Usage: ticket assign <ticket> <modmention>
Assign the specified ticket to the specified moderator.
"""
if (ticket := resolve_ticket(msg, args)) is None:
await reply("No ticket referenced or ticket could not be found.")
elif not isinstance((mod_arg := args.next_arg()), UM_Arg):
await reply("Please provide a moderator mention!")
else:
if mod_arg.id == ticket.modid:
await reply(util.discord.format("Ticket #{} is already assigned to {!m}", ticket.id, mod_arg.id),
allowed_mentions=no_mentions)
else:
old_mod = ticket.mod
new_mod = get_or_create_mod(mod_arg.id)
with ticket.batch_update():
ticket.modid = new_mod.modid
if ticket.stage != TicketStage.COMMENTED:
ticket.delivered_id = None
ticket.stage = TicketStage.NEW
await old_mod.ticket_removed(ticket,
reason=util.discord.format("Ticket #{} has been assigned to {!m}.", ticket.id, new_mod.modid))
await ticket.publish()
elif cmd == "set":
"""
Set or reset the duration and comment for a ticket.
"""
if (ticket := resolve_ticket(msg, args)) is None:
await reply("No ticket referenced or ticket could not be found.")
# Copyright 2021 The TensorFlow Probability Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Utilities for constructing trainable linear operators."""
import functools
import numpy as np
import tensorflow.compat.v2 as tf
from tensorflow_probability.python.bijectors import bijector as bijector_lib
from tensorflow_probability.python.bijectors import fill_scale_tril
from tensorflow_probability.python.internal import dtype_util
from tensorflow_probability.python.internal import nest_util
from tensorflow_probability.python.internal import prefer_static as ps
from tensorflow_probability.python.internal import samplers
from tensorflow_probability.python.internal import trainable_state_util
from tensorflow.python.util import nest # pylint: disable=g-direct-tensorflow-import
def _trainable_linear_operator_block(
operators,
block_dims=None,
batch_shape=(),
dtype=None,
name=None):
"""Builds a trainable blockwise `tf.linalg.LinearOperator`.
This function returns a trainable blockwise `LinearOperator`. If `operators`
is a flat list, it is interpreted as blocks along the diagonal of the
structure and an instance of `tf.linalg.LinearOperatorBlockDiag` is returned.
If `operators` is a doubly nested list, then a
`tf.linalg.LinearOperatorBlockLowerTriangular` instance is returned, with
the block in row `i` column `j` (`i >= j`) given by `operators[i][j]`.
The `operators` list may contain `LinearOperator` instances, `LinearOperator`
subclasses, or callables defining custom constructors (see example below).
The dimensions of the blocks are given by `block_dims`; this argument may be
omitted if `operators` contains only `LinearOperator` instances.
Args:
operators: A list or tuple containing `LinearOperator` subclasses,
`LinearOperator` instances, and/or callables returning
`(init_fn, apply_fn)` pairs. If the list is flat, a
`tf.linalg.LinearOperatorBlockDiag` instance is returned. Otherwise, the
list must be singly nested, with the
first element of length 1, second element of length 2, etc.; the
elements of the outer list are interpreted as rows of a lower-triangular
block structure, and a `tf.linalg.LinearOperatorBlockLowerTriangular`
instance is returned. Callables contained in the lists must take two
arguments -- `shape`, the shape of the parameter instantiating the
`LinearOperator`, and `dtype`, the `tf.dtype` of the `LinearOperator` --
and return a further pair of callables representing a stateless trainable
operator (see example below).
block_dims: List or tuple of integers, representing the sizes of the blocks
along one dimension of the (square) blockwise `LinearOperator`. If
`operators` contains only `LinearOperator` instances, `block_dims` may be
`None` and the dimensions are inferred.
batch_shape: Batch shape of the `LinearOperator`.
dtype: `tf.dtype` of the `LinearOperator`.
name: str, name for `tf.name_scope`.
Yields:
*parameters: sequence of `trainable_state_util.Parameter` namedtuples.
These are intended to be consumed by
`trainable_state_util.as_stateful_builder` and
`trainable_state_util.as_stateless_builder` to define stateful and
stateless variants respectively.
### Examples
To build a 5x5 trainable `LinearOperatorBlockDiag` given `LinearOperator`
subclasses and `block_dims`:
```python
op = build_trainable_linear_operator_block(
operators=(tf.linalg.LinearOperatorDiag,
tf.linalg.LinearOperatorLowerTriangular),
block_dims=[3, 2],
dtype=tf.float32)
```
If `operators` contains only `LinearOperator` instances, the `block_dims`
argument is not necessary:
```python
# Builds a 6x6 `LinearOperatorBlockDiag` with batch shape `(4,)`.
op = build_trainable_linear_operator_block(
operators=(tf.linalg.LinearOperatorDiag(tf.Variable(tf.ones((4, 3)))),
tf.linalg.LinearOperatorFullMatrix([4.]),
tf.linalg.LinearOperatorIdentity(2)))
```
A custom operator constructor may be specified as a callable taking
arguments `shape` and `dtype`, and returning a pair of callables
`(init_fn, apply_fn)` describing a parameterized operator, with the following
signatures:
```python
raw_parameters = init_fn(seed)
linear_operator = apply_fn(raw_parameters)
```
For example, to define a custom initialization for a diagonal operator:
```python
import functools
def diag_operator_with_uniform_initialization(shape, dtype):
init_fn = functools.partial(
samplers.uniform, shape, maxval=2., dtype=dtype)
apply_fn = lambda scale_diag: tf.linalg.LinearOperatorDiag(
scale_diag, is_non_singular=True)
return init_fn, apply_fn
# Build an 8x8 `LinearOperatorBlockLowerTriangular`, with our custom diagonal
# operator in the upper left block, and `LinearOperator` subclasses in the
# lower two blocks.
op = build_trainable_linear_operator_block(
operators=(diag_operator_with_uniform_initialization,
(tf.linalg.LinearOperatorFullMatrix,
tf.linalg.LinearOperatorLowerTriangular)),
block_dims=[4, 4],
dtype=tf.float64)
```
"""
with tf.name_scope(name or 'trainable_linear_operator_block'):
operator_instances = [op for op in nest.flatten(operators)
if isinstance(op, tf.linalg.LinearOperator)]
if (block_dims is None
and len(operator_instances) < len(nest.flatten(operators))):
# If `operator_instances` contains fewer elements than `operators`,
# then some elements of `operators` are not instances of `LinearOperator`.
raise ValueError('Argument `block_dims` must be defined unless '
'`operators` contains only `tf.linalg.LinearOperator` '
'instances.')
batch_shape = ps.cast(batch_shape, tf.int32)
if dtype is None:
dtype = dtype_util.common_dtype(operator_instances)
def convert_operator(path, op):
if isinstance(op, tf.linalg.LinearOperator):
return op
if len(set(path)) == 1: # for operators on the diagonal
shape = ps.concat([batch_shape, [block_dims[path[0]]]], axis=0)
else:
shape = ps.concat([batch_shape,
[block_dims[path[0]], block_dims[path[1]]]], axis=0)
if op in _OPERATOR_COROUTINES:
operator = yield from _OPERATOR_COROUTINES[op](shape=shape, dtype=dtype)
else: # Custom stateless constructor.
init_fn, apply_fn = op(shape=shape, dtype=dtype)
raw_params = yield trainable_state_util.Parameter(init_fn)
operator = apply_fn(raw_params)
return operator
# Build a structure of component trainable LinearOperators.
operator_blocks = yield from nest_util.map_structure_coroutine(
convert_operator,
operators,
_with_tuple_paths=True)
paths = nest.yield_flat_paths(operators)
if all(len(p) == 1 for p in paths):
return tf.linalg.LinearOperatorBlockDiag(
operator_blocks, is_non_singular=True)
elif all(len(p) == 2 for p in paths):
return tf.linalg.LinearOperatorBlockLowerTriangular(
operator_blocks, is_non_singular=True)
else:
raise ValueError(
'Argument `operators` must be a flat or singly-nested sequence.')
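The final dispatch above keys on the length of the flattened structure paths: all length-1 paths mean a flat list (block-diagonal), all length-2 paths mean a list of rows (block lower-triangular). A pure-Python sketch of that classification, with a simplified stand-in for `nest.yield_flat_paths` that only covers the flat and singly nested lists this function accepts:

```python
def yield_flat_paths(structure, prefix=()):
    # Simplified stand-in for tf.nest.yield_flat_paths, covering only
    # flat and singly nested lists/tuples of leaf objects.
    for i, item in enumerate(structure):
        if isinstance(item, (list, tuple)):
            yield from yield_flat_paths(item, prefix + (i,))
        else:
            yield prefix + (i,)

def block_structure(operators):
    paths = list(yield_flat_paths(operators))
    if all(len(p) == 1 for p in paths):
        return "block_diag"
    if all(len(p) == 2 for p in paths):
        return "block_lower_triangular"
    raise ValueError("flat or singly-nested sequence expected")

assert block_structure(["d1", "d2"]) == "block_diag"
assert block_structure([["a"], ["b", "c"]]) == "block_lower_triangular"
```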
def _trainable_linear_operator_tril(
shape,
scale_initializer=1e-2,
diag_bijector=None,
dtype=None,
name=None):
"""Build a trainable `LinearOperatorLowerTriangular` instance.
Args:
shape: Shape of the `LinearOperator`, equal to `[b0, ..., bn, d]`, where
`b0...bn` are batch dimensions and `d` is the length of the diagonal.
scale_initializer: Variables are initialized with samples from
`Normal(0, scale_initializer)`.
diag_bijector: Bijector to apply to the diagonal of the operator.
dtype: `tf.dtype` of the `LinearOperator`.
name: str, name for `tf.name_scope`.
Yields:
*parameters: sequence of `trainable_state_util.Parameter` namedtuples.
These are intended to be consumed by
`trainable_state_util.as_stateful_builder` and
`trainable_state_util.as_stateless_builder` to define stateful and
stateless variants respectively.
"""
with tf.name_scope(name or 'trainable_linear_operator_tril'):
if dtype is None:
dtype = dtype_util.common_dtype(
[scale_initializer], dtype_hint=tf.float32)
scale_initializer = tf.convert_to_tensor(scale_initializer, dtype=dtype)
diag_bijector = diag_bijector or _DefaultScaleDiagonal()
batch_shape, dim = ps.split(shape, num_or_size_splits=[-1, 1])
scale_tril_bijector = fill_scale_tril.FillScaleTriL(
diag_bijector, diag_shift=tf.zeros([], dtype=dtype))
scale_tril = yield trainable_state_util.Parameter(
init_fn=lambda seed: scale_tril_bijector( # pylint: disable=g-long-lambda
samplers.normal(
mean=0.,
stddev=scale_initializer,
shape=ps.concat([batch_shape, dim * (dim + 1) // 2], axis=0),
seed=seed,
dtype=dtype)),
name='scale_tril',
constraining_bijector=scale_tril_bijector)
return tf.linalg.LinearOperatorLowerTriangular(
tril=scale_tril, is_non_singular=True)
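The parameter vector drawn above has length `dim * (dim + 1) // 2`, exactly the number of entries in a `dim x dim` lower triangle. A plain-Python illustration of packing such a vector into a triangular matrix; note this uses a simple row-by-row order for clarity, whereas TFP's `fill_triangular` (inside `FillScaleTriL`) uses a different element ordering and also constrains the diagonal:

```python
def fill_lower_triangular(flat, dim):
    # Packs a length dim*(dim+1)//2 vector into a lower-triangular
    # matrix, row by row. Illustrative only: TFP's fill_triangular
    # places elements in a different order.
    assert len(flat) == dim * (dim + 1) // 2
    out, k = [], 0
    for i in range(dim):
        out.append(flat[k:k + i + 1] + [0.0] * (dim - i - 1))
        k += i + 1
    return out

m = fill_lower_triangular([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 3)
assert m == [[1.0, 0.0, 0.0], [2.0, 3.0, 0.0], [4.0, 5.0, 6.0]]
```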
def _trainable_linear_operator_diag(
shape,
scale_initializer=1e-2,
diag_bijector=None,
dtype=None,
name=None):
"""Build a trainable `LinearOperatorDiag` instance.
Args:
shape: Shape of the `LinearOperator`, equal to `[b0, ..., bn, d]`, where
`b0...bn` are batch dimensions and `d` is the length of the diagonal.
scale_initializer: Variables are initialized with samples from
`Normal(0, scale_initializer)`.
diag_bijector: Bijector to apply to the diagonal of the operator.
dtype: `tf.dtype` of the `LinearOperator`.
name: str, name for `tf.name_scope`.
Yields:
*parameters: sequence of `trainable_state_util.Parameter` namedtuples.
These are intended to be consumed by
`trainable_state_util.as_stateful_builder` and
`trainable_state_util.as_stateless_builder` to define stateful and
stateless variants respectively.
"""
with tf.name_scope(name or 'trainable_linear_operator_diag'):
if dtype is None:
dtype = dtype_util.common_dtype(
[scale_initializer], dtype_hint=tf.float32)
scale_initializer = tf.convert_to_tensor(scale_initializer, dtype=dtype)
diag_bijector = diag_bijector or _DefaultScaleDiagonal()
scale_diag = yield trainable_state_util.Parameter(
init_fn=lambda seed: diag_bijector( # pylint: disable=g-long-lambda
samplers.normal(
mean=0.,
stddev=scale_initializer,
shape=shape,
dtype=dtype,
seed=seed)),
name='scale_diag',
constraining_bijector=diag_bijector)
return tf.linalg.LinearOperatorDiag(scale_diag, is_non_singular=True)
def _trainable_linear_operator_full_matrix(
shape,
scale_initializer=1e-2,
dtype=None,
name=None):
"""Build a trainable `LinearOperatorFullMatrix` instance.
Args:
shape: Shape of the `LinearOperator`, equal to `[b0, ..., bn, h, w]`, where
`b0...bn` are batch dimensions and `h` and `w` are the height and width of
the matrix represented by the `LinearOperator`.
scale_initializer: Variables are initialized with samples from
`Normal(0, scale_initializer)`.
dtype: `tf.dtype` of the `LinearOperator`.
name: str, name for `tf.name_scope`.
Yields:
*parameters: sequence of `trainable_state_util.Parameter` namedtuples.
These are intended to be consumed by
`trainable_state_util.as_stateful_builder` and
`trainable_state_util.as_stateless_builder` to define stateful and
stateless variants respectively.
"""
with tf.name_scope(
name or 'trainable_linear_operator_full_matrix'):
if dtype is None:
dtype = dtype_util.common_dtype([scale_initializer],
dtype_hint=tf.float32)
scale_initializer = tf.convert_to_tensor(scale_initializer, dtype)
scale_matrix = yield trainable_state_util.Parameter(
init_fn=functools.partial(
samplers.normal,
mean=0.,
stddev=scale_initializer,
shape=shape,
dtype=dtype),
name='scale_matrix')
return tf.linalg.LinearOperatorFullMatrix(matrix=scale_matrix)
def _linear_operator_zeros(shape, dtype=None, name=None):
"""Build an instance of `LinearOperatorZeros`.
Args:
shape: Shape of the `LinearOperator`, equal to `[b0, ..., bn, h, w]`, where
`b0...bn` are batch dimensions and `h` and `w` are the height and width of
the matrix represented by the `LinearOperator`.
dtype: `tf.dtype` of the `LinearOperator`.
name: str, name for `tf.name_scope`.
Yields:
*parameters: sequence of `trainable_state_util.Parameter` namedtuples.
These are intended to be consumed by
`trainable_state_util.as_stateful_builder` and
`trainable_state_util.as_stateless_builder` to define stateful and
stateless variants respectively.
"""
with tf.name_scope(name or 'linear_operator_zeros'):
batch_shape, rows, cols = ps.split(
shape, num_or_size_splits=[-1, 1, 1])
num_rows, num_cols = rows[0], cols[0]
is_square = num_rows == num_cols
return tf.linalg.LinearOperatorZeros(
num_rows, num_cols, batch_shape=batch_shape, is_square=is_square,
is_self_adjoint=is_square, dtype=dtype)
# Tell Python that this fn is really a (trivial) generator.
yield | |
model = keras.models.Model([input_1, input_2], [output1, output2])
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
[np.zeros((batch, t, i1)), np.zeros((batch, t, i2, i3))],
[np.zeros((batch, t, o21)), np.zeros((batch, t, o22, o23))],
)
self.assertEqual(
model.output_shape, [(None, t, o21), (None, t, o22, o23)]
)
# test 2: use_tuple=True
cells = [
NestedCell(o11, o12, o13, use_tuple=True),
NestedCell(o21, o22, o23),
]
rnn = keras.layers.RNN(cells, return_sequences=True, return_state=True)
input_1 = keras.Input((t, i1))
input_2 = keras.Input((t, i2, i3))
output1, output2, state1, state2 = rnn(
NestedInput(t1=input_1, t2=input_2)
)
s11, s12 = state1
s21, s22 = state2
self.assertEqual(output1.shape.as_list(), [None, t, o21])
self.assertEqual(output2.shape.as_list(), [None, t, o22, o23])
self.assertEqual(s11.shape.as_list(), [None, o11])
self.assertEqual(s12.shape.as_list(), [None, o12, o13])
self.assertEqual(s21.shape.as_list(), [None, o21])
self.assertEqual(s22.shape.as_list(), [None, o22, o23])
model = keras.models.Model([input_1, input_2], [output1, output2])
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
[np.zeros((batch, t, i1)), np.zeros((batch, t, i2, i3))],
[np.zeros((batch, t, o21)), np.zeros((batch, t, o22, o23))],
)
self.assertEqual(
model.output_shape, [(None, t, o21), (None, t, o22, o23)]
)
def test_trackable_dependencies(self):
rnn = keras.layers.SimpleRNN
x = np.random.random((2, 2, 2))
y = np.random.random((2, 2))
model = keras.models.Sequential()
model.add(rnn(2))
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.fit(x, y, epochs=1, batch_size=1)
# check whether the model variables are present in the
# trackable list of objects
checkpointed_objects = {
id(o) for o in trackable_util.list_objects(model)
}
for v in model.variables:
self.assertIn(id(v), checkpointed_objects)
def test_high_dimension_RNN(self):
# Basic test case.
unit_a = 10
unit_b = 20
input_a = 5
input_b = 10
batch = 32
time_step = 4
cell = Minimal2DRNNCell(unit_a, unit_b)
x = keras.Input((None, input_a, input_b))
layer = keras.layers.RNN(cell)
y = layer(x)
self.assertEqual(cell.state_size.as_list(), [unit_a, unit_b])
if not tf.executing_eagerly():
init_state = layer.get_initial_state(x)
self.assertEqual(len(init_state), 1)
self.assertEqual(
init_state[0].shape.as_list(), [None, unit_a, unit_b]
)
model = keras.models.Model(x, y)
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
np.zeros((batch, time_step, input_a, input_b)),
np.zeros((batch, unit_a, unit_b)),
)
self.assertEqual(model.output_shape, (None, unit_a, unit_b))
# Test stacking.
cells = [
Minimal2DRNNCell(unit_a, unit_b),
Minimal2DRNNCell(unit_a * 2, unit_b * 2),
Minimal2DRNNCell(unit_a * 4, unit_b * 4),
]
layer = keras.layers.RNN(cells)
y = layer(x)
model = keras.models.Model(x, y)
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
np.zeros((batch, time_step, input_a, input_b)),
np.zeros((batch, unit_a * 4, unit_b * 4)),
)
self.assertEqual(model.output_shape, (None, unit_a * 4, unit_b * 4))
def test_high_dimension_RNN_with_init_state(self):
unit_a = 10
unit_b = 20
input_a = 5
input_b = 10
batch = 32
time_step = 4
# Basic test case.
cell = Minimal2DRNNCell(unit_a, unit_b)
x = keras.Input((None, input_a, input_b))
s = keras.Input((unit_a, unit_b))
layer = keras.layers.RNN(cell)
y = layer(x, initial_state=s)
model = keras.models.Model([x, s], y)
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
[
np.zeros((batch, time_step, input_a, input_b)),
np.zeros((batch, unit_a, unit_b)),
],
np.zeros((batch, unit_a, unit_b)),
)
self.assertEqual(model.output_shape, (None, unit_a, unit_b))
# Bad init state shape.
bad_shape_a = unit_a * 2
bad_shape_b = unit_b * 2
cell = Minimal2DRNNCell(unit_a, unit_b)
x = keras.Input((None, input_a, input_b))
s = keras.Input((bad_shape_a, bad_shape_b))
layer = keras.layers.RNN(cell)
with self.assertRaisesWithPredicateMatch(
ValueError, "however `cell.state_size` is"
):
layer(x, initial_state=s)
def test_inconsistent_output_state_size(self):
batch = 32
time_step = 4
state_size = 5
input_size = 6
cell = PlusOneRNNCell(state_size)
x = keras.Input((None, input_size))
layer = keras.layers.RNN(cell)
y = layer(x)
self.assertEqual(cell.state_size, state_size)
if not tf.executing_eagerly():
init_state = layer.get_initial_state(x)
self.assertEqual(len(init_state), 1)
self.assertEqual(init_state[0].shape.as_list(), [None, state_size])
model = keras.models.Model(x, y)
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
np.zeros((batch, time_step, input_size)),
np.zeros((batch, input_size)),
)
self.assertEqual(model.output_shape, (None, input_size))
def test_get_initial_state(self):
cell = keras.layers.SimpleRNNCell(5)
with self.assertRaisesRegex(
ValueError, "batch_size and dtype cannot be None"
):
cell.get_initial_state(None, None, None)
if not tf.executing_eagerly():
inputs = keras.Input((None, 10))
initial_state = cell.get_initial_state(inputs, None, None)
self.assertEqual(initial_state.shape.as_list(), [None, 5])
self.assertEqual(initial_state.dtype, inputs.dtype)
batch = tf.shape(inputs)[0]
dtype = inputs.dtype
initial_state = cell.get_initial_state(None, batch, dtype)
self.assertEqual(initial_state.shape.as_list(), [None, 5])
self.assertEqual(initial_state.dtype, inputs.dtype)
else:
batch = 8
inputs = np.random.random((batch, 10))
initial_state = cell.get_initial_state(inputs, None, None)
self.assertEqual(initial_state.shape.as_list(), [8, 5])
self.assertEqual(initial_state.dtype, inputs.dtype)
dtype = inputs.dtype
initial_state = cell.get_initial_state(None, batch, dtype)
self.assertEqual(initial_state.shape.as_list(), [batch, 5])
self.assertEqual(initial_state.dtype, inputs.dtype)
@parameterized.parameters([True, False])
def test_nested_input_output(self, stateful):
batch = 10
t = 5
i1, i2, i3 = 3, 4, 5
o1, o2, o3 = 2, 3, 4
cell = NestedCell(o1, o2, o3)
rnn = keras.layers.RNN(cell, stateful=stateful)
batch_size = batch if stateful else None
input_1 = keras.Input((t, i1), batch_size=batch_size)
input_2 = keras.Input((t, i2, i3), batch_size=batch_size)
outputs = rnn((input_1, input_2))
self.assertEqual(len(outputs), 2)
self.assertEqual(outputs[0].shape.as_list(), [batch_size, o1])
self.assertEqual(outputs[1].shape.as_list(), [batch_size, o2, o3])
model = keras.models.Model((input_1, input_2), outputs)
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
[np.zeros((batch, t, i1)), np.zeros((batch, t, i2, i3))],
[np.zeros((batch, o1)), np.zeros((batch, o2, o3))],
)
self.assertEqual(
model.output_shape, [(batch_size, o1), (batch_size, o2, o3)]
)
cell = NestedCell(o1, o2, o3, use_tuple=True)
rnn = keras.layers.RNN(cell, stateful=stateful)
input_1 = keras.Input((t, i1), batch_size=batch_size)
input_2 = keras.Input((t, i2, i3), batch_size=batch_size)
outputs = rnn(NestedInput(t1=input_1, t2=input_2))
self.assertEqual(len(outputs), 2)
self.assertEqual(outputs[0].shape.as_list(), [batch_size, o1])
self.assertEqual(outputs[1].shape.as_list(), [batch_size, o2, o3])
model = keras.models.Model([input_1, input_2], outputs)
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
[np.zeros((batch, t, i1)), np.zeros((batch, t, i2, i3))],
[np.zeros((batch, o1)), np.zeros((batch, o2, o3))],
)
self.assertEqual(
model.output_shape, [(batch_size, o1), (batch_size, o2, o3)]
)
def test_nested_input_output_with_state(self):
batch = 10
t = 5
i1, i2, i3 = 3, 4, 5
o1, o2, o3 = 2, 3, 4
cell = NestedCell(o1, o2, o3)
rnn = keras.layers.RNN(cell, return_sequences=True, return_state=True)
input_1 = keras.Input((t, i1))
input_2 = keras.Input((t, i2, i3))
output1, output2, s1, s2 = rnn((input_1, input_2))
self.assertEqual(output1.shape.as_list(), [None, t, o1])
self.assertEqual(output2.shape.as_list(), [None, t, o2, o3])
self.assertEqual(s1.shape.as_list(), [None, o1])
self.assertEqual(s2.shape.as_list(), [None, o2, o3])
model = keras.models.Model([input_1, input_2], [output1, output2])
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
[np.zeros((batch, t, i1)), np.zeros((batch, t, i2, i3))],
[np.zeros((batch, t, o1)), np.zeros((batch, t, o2, o3))],
)
self.assertEqual(model.output_shape, [(None, t, o1), (None, t, o2, o3)])
cell = NestedCell(o1, o2, o3, use_tuple=True)
rnn = keras.layers.RNN(cell, return_sequences=True, return_state=True)
input_1 = keras.Input((t, i1))
input_2 = keras.Input((t, i2, i3))
output1, output2, s1, s2 = rnn(NestedInput(t1=input_1, t2=input_2))
self.assertEqual(output1.shape.as_list(), [None, t, o1])
self.assertEqual(output2.shape.as_list(), [None, t, o2, o3])
self.assertEqual(s1.shape.as_list(), [None, o1])
self.assertEqual(s2.shape.as_list(), [None, o2, o3])
model = keras.models.Model([input_1, input_2], [output1, output2])
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
[np.zeros((batch, t, i1)), np.zeros((batch, t, i2, i3))],
[np.zeros((batch, t, o1)), np.zeros((batch, t, o2, o3))],
)
self.assertEqual(model.output_shape, [(None, t, o1), (None, t, o2, o3)])
def test_nest_input_output_with_init_state(self):
batch = 10
t = 5
i1, i2, i3 = 3, 4, 5
o1, o2, o3 = 2, 3, 4
cell = NestedCell(o1, o2, o3)
rnn = keras.layers.RNN(cell, return_sequences=True, return_state=True)
input_1 = keras.Input((t, i1))
input_2 = keras.Input((t, i2, i3))
init_s1 = keras.Input((o1,))
init_s2 = keras.Input((o2, o3))
output1, output2, s1, s2 = rnn(
(input_1, input_2), initial_state=(init_s1, init_s2)
)
self.assertEqual(output1.shape.as_list(), [None, t, o1])
self.assertEqual(output2.shape.as_list(), [None, t, o2, o3])
self.assertEqual(s1.shape.as_list(), [None, o1])
self.assertEqual(s2.shape.as_list(), [None, o2, o3])
model = keras.models.Model(
[input_1, input_2, init_s1, init_s2], [output1, output2]
)
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
[
np.zeros((batch, t, i1)),
np.zeros((batch, t, i2, i3)),
np.zeros((batch, o1)),
np.zeros((batch, o2, o3)),
],
[np.zeros((batch, t, o1)), np.zeros((batch, t, o2, o3))],
)
self.assertEqual(model.output_shape, [(None, t, o1), (None, t, o2, o3)])
cell = NestedCell(o1, o2, o3, use_tuple=True)
rnn = keras.layers.RNN(cell, return_sequences=True, return_state=True)
input_1 = keras.Input((t, i1))
input_2 = keras.Input((t, i2, i3))
init_s1 = keras.Input((o1,))
init_s2 = keras.Input((o2, o3))
init_state = NestedState(s1=init_s1, s2=init_s2)
output1, output2, s1, s2 = rnn(
NestedInput(t1=input_1, t2=input_2), initial_state=init_state
)
self.assertEqual(output1.shape.as_list(), [None, t, o1])
self.assertEqual(output2.shape.as_list(), [None, t, o2, o3])
self.assertEqual(s1.shape.as_list(), [None, o1])
self.assertEqual(s2.shape.as_list(), [None, o2, o3])
model = keras.models.Model(
[input_1, input_2, init_s1, init_s2], [output1, output2]
)
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
model.train_on_batch(
[
np.zeros((batch, t, i1)),
np.zeros((batch, t, i2, i3)),
np.zeros((batch, o1)),
np.zeros((batch, o2, o3)),
],
[np.zeros((batch, t, o1)), np.zeros((batch, t, o2, o3))],
)
self.assertEqual(model.output_shape, [(None, t, o1), (None, t, o2, o3)])
def test_masking_rnn_with_output_and_states(self):
class Cell(keras.layers.Layer):
def __init__(self):
self.state_size = None
self.output_size = None
super().__init__()
def build(self, input_shape):
self.state_size = input_shape[-1]
self.output_size = input_shape[-1]
def call(self, inputs, states):
return inputs, [s + 1 for s in states]
x = keras.Input((3, 1), name="x")
x_masked = keras.layers.Masking()(x)
s_0 = keras.Input((1,), name="s_0")
y, s = keras.layers.RNN(Cell(), return_state=True)(
x_masked, initial_state=s_0
)
model = keras.models.Model([x, s_0], [y, s])
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
# last time step masked
x_np = np.array([[[1.0], [2.0], [0.0]]])
s_0_np = np.array([[10.0]])
y_np, s_np = model.predict([x_np, s_0_np])
# 1 is added to initial state two times
self.assertAllClose(s_np, s_0_np + 2)
# Expect last output to be the same as last output before masking
self.assertAllClose(y_np, x_np[:, 1, :])
def test_zero_output_for_masking(self):
for unroll in [True, False]:
cell = keras.layers.SimpleRNNCell(5)
x = keras.Input((5, 5))
0x6126: 'kuì',
0x6127: 'kuì',
0x6128: 'què',
0x6129: 'gōng,gòng,hǒng',
0x612A: 'yún',
0x612B: 'sù',
0x612C: 'sù,shuò',
0x612D: 'qí',
0x612E: 'yáo,yào',
0x612F: 'sǒng',
0x6130: 'huàng',
0x6131: 'jí',
0x6132: 'gǔ',
0x6133: 'jù',
0x6134: 'chuàng',
0x6135: 'nì',
0x6136: 'xié',
0x6137: 'kǎi',
0x6138: 'zhěng',
0x6139: 'yǒng',
0x613A: 'cǎo',
0x613B: 'xùn',
0x613C: 'shèn',
0x613D: 'bó',
0x613E: 'kài,xì',
0x613F: 'yuàn',
0x6140: 'xì,xié',
0x6141: 'hùn',
0x6142: 'yǒng',
0x6143: 'yǎng',
0x6144: 'lì',
0x6145: 'cǎo,sāo',
0x6146: 'tāo',
0x6147: 'yīn',
0x6148: 'cí',
0x6149: 'xù,chù',
0x614A: 'qiàn,qiè',
0x614B: 'tài',
0x614C: 'huāng',
0x614D: 'yùn',
0x614E: 'shèn',
0x614F: 'mǐng',
0x6150: 'gōng,gòng,hǒng',
0x6151: 'shè',
0x6152: 'cáo,cóng',
0x6153: 'piāo',
0x6154: 'mù',
0x6155: 'mù',
0x6156: 'guó',
0x6157: 'chì',
0x6158: 'cǎn',
0x6159: 'cán',
0x615A: 'cán',
0x615B: 'cuī',
0x615C: 'mín',
0x615D: 'tè',
0x615E: 'zhāng',
0x615F: 'tòng',
0x6160: 'ào,áo',
0x6161: 'shuǎng',
0x6162: 'màn',
0x6163: 'guàn',
0x6164: 'què',
0x6165: 'zào',
0x6166: 'jiù',
0x6167: 'huì',
0x6168: 'kǎi',
0x6169: 'lián,liǎn',
0x616A: 'òu',
0x616B: 'sǒng',
0x616C: 'qín,jìn,jǐn',
0x616D: 'yìn',
0x616E: 'lǜ',
0x616F: 'shāng',
0x6170: 'wèi',
0x6171: 'tuán',
0x6172: 'mán',
0x6173: 'qiān',
0x6174: 'shè',
0x6175: 'yōng',
0x6176: 'qìng',
0x6177: 'kāng',
0x6178: 'dì,chì',
0x6179: 'zhí,zhé',
0x617A: 'lóu,lǚ',
0x617B: 'juàn',
0x617C: 'qī',
0x617D: 'qī',
0x617E: 'yù',
0x617F: 'píng',
0x6180: 'liáo',
0x6181: 'còng',
0x6182: 'yōu',
0x6183: 'chōng',
0x6184: 'zhī,zhì',
0x6185: 'tòng',
0x6186: 'chēng',
0x6187: 'qì',
0x6188: 'qū',
0x6189: 'péng',
0x618A: 'bèi',
0x618B: 'biē',
0x618C: 'qióng',
0x618D: 'jiāo',
0x618E: 'zēng',
0x618F: 'chì',
0x6190: 'lián',
0x6191: 'píng',
0x6192: 'kuì',
0x6193: 'huì',
0x6194: 'qiáo',
0x6195: 'chéng,dèng,zhèng',
0x6196: 'yìn',
0x6197: 'yìn',
0x6198: 'xǐ,xī',
0x6199: 'xǐ',
0x619A: 'dàn,dá',
0x619B: 'tán',
0x619C: 'duò',
0x619D: 'duì',
0x619E: 'duì,dùn,tūn',
0x619F: 'sù',
0x61A0: 'jué',
0x61A1: 'cè',
0x61A2: 'xiāo,jiāo',
0x61A3: 'fān',
0x61A4: 'fèn',
0x61A5: 'láo',
0x61A6: 'lào,láo',
0x61A7: 'chōng',
0x61A8: 'hān',
0x61A9: 'qì',
0x61AA: 'xián,xiàn',
0x61AB: 'mǐn',
0x61AC: 'jǐng',
0x61AD: 'liǎo,liáo',
0x61AE: 'wǔ',
0x61AF: 'cǎn',
0x61B0: 'jué',
0x61B1: 'cù',
0x61B2: 'xiàn',
0x61B3: 'tǎn',
0x61B4: 'shéng',
0x61B5: 'pī',
0x61B6: 'yì',
0x61B7: 'chù',
0x61B8: 'xiān',
0x61B9: 'náo,nǎo,náng',
0x61BA: 'dàn',
0x61BB: 'tǎn',
0x61BC: 'jǐng,jìng',
0x61BD: 'sōng',
0x61BE: 'hàn',
0x61BF: 'jiǎo,jǐ',
0x61C0: 'wèi',
0x61C1: 'xuān,huān',
0x61C2: 'dǒng',
0x61C3: 'qín',
0x61C4: 'qín',
0x61C5: 'jù',
0x61C6: 'cǎo,sāo,sào',
0x61C7: 'kěn',
0x61C8: 'xiè',
0x61C9: 'yīng,yìng',
0x61CA: 'ào',
0x61CB: 'mào',
0x61CC: 'yì',
0x61CD: 'lǐn',
0x61CE: 'sè',
0x61CF: 'jùn',
0x61D0: 'huái',
0x61D1: 'mèn',
0x61D2: 'lǎn',
0x61D3: 'ài',
0x61D4: 'lǐn',
0x61D5: 'yān',
0x61D6: 'guō',
0x61D7: 'xià',
0x61D8: 'chì',
0x61D9: 'yǔ,yú',
0x61DA: 'yìn',
0x61DB: 'dāi',
0x61DC: 'mèng,méng,měng',
0x61DD: 'ài,yì,nǐ',
0x61DE: 'méng,měng',
0x61DF: 'duì',
0x61E0: 'qí,jī,jì',
0x61E1: 'mǒ',
0x61E2: 'lán,xiàn',
0x61E3: 'mèn',
0x61E4: 'chóu',
0x61E5: 'zhì',
0x61E6: 'nuò',
0x61E7: 'nuò',
0x61E8: 'yān',
0x61E9: 'yǎng',
0x61EA: 'bó',
0x61EB: 'zhì',
0x61EC: 'kuàng',
0x61ED: 'kuǎng',
0x61EE: 'yōu,yǒu',
0x61EF: 'fū',
0x61F0: 'liú,liǔ',
0x61F1: 'miè',
0x61F2: 'chéng',
0x61F3: 'huì',
0x61F4: 'chàn',
0x61F5: 'měng',
0x61F6: 'lǎn',
0x61F7: 'huái',
0x61F8: 'xuán',
0x61F9: 'ràng',
0x61FA: 'chàn',
0x61FB: 'jì',
0x61FC: 'jù',
0x61FD: 'huān',
0x61FE: 'shè',
0x61FF: 'yì',
0x6200: 'liàn',
0x6201: 'nǎn',
0x6202: 'mí,mó',
0x6203: 'tǎng',
0x6204: 'jué',
0x6205: 'gàng,zhuàng',
0x6206: 'gàng,zhuàng',
0x6207: 'gàng,zhuàng',
0x6208: 'gē',
0x6209: 'yuè',
0x620A: 'wù',
0x620B: 'jiān',
0x620C: 'xū',
0x620D: 'shù',
0x620E: 'róng',
0x620F: 'xì,hū',
0x6210: 'chéng',
0x6211: 'wǒ',
0x6212: 'jiè',
0x6213: 'gē',
0x6214: 'jiān',
0x6215: 'qiāng',
0x6216: 'huò',
0x6217: 'qiāng,qiàng',
0x6218: 'zhàn',
0x6219: 'dòng',
0x621A: 'qī',
0x621B: 'jiá',
0x621C: 'dié',
0x621D: 'zéi',
0x621E: 'jiá',
0x621F: 'jǐ',
0x6220: 'zhí',
0x6221: 'kān',
0x6222: 'jí',
0x6223: 'kuí',
0x6224: 'gài',
0x6225: 'děng',
0x6226: 'zhàn',
0x6227: 'qiāng,qiàng',
0x6228: 'gē',
0x6229: 'jiǎn',
0x622A: 'jié',
0x622B: 'yù',
0x622C: 'jiǎn',
0x622D: 'yǎn',
0x622E: 'lù',
0x622F: 'xì,hū',
0x6230: 'zhàn',
0x6231: 'xì,hū',
0x6232: 'xì,hū',
0x6233: 'chuō',
0x6234: 'dài',
0x6235: 'qú',
0x6236: 'hù',
0x6237: 'hù',
0x6238: 'hù',
0x6239: 'è',
0x623A: 'shì',
0x623B: 'tì',
0x623C: 'mǎo',
0x623D: 'hù',
0x623E: 'lì',
0x623F: 'fáng,páng',
0x6240: 'suǒ',
0x6241: 'biǎn,piān',
0x6242: 'diàn',
0x6243: 'jiōng',
0x6244: 'shǎng,jiōng',
0x6245: 'yí',
0x6246: 'yǐ',
0x6247: 'shàn,shān',
0x6248: 'hù',
0x6249: 'fēi',
0x624A: 'yǎn',
0x624B: 'shǒu',
0x624C: 'shǒu',
0x624D: 'cái',
0x624E: 'zā,zhā,zhá',
0x624F: 'qiú',
0x6250: 'lè,lì,cái',
0x6251: 'pū',
0x6252: 'bā,pá',
0x6253: 'dǎ,dá',
0x6254: 'rēng',
0x6255: 'fǎn,fú',
0x6256: 'rù',
0x6257: 'zài',
0x6258: 'tuō',
0x6259: 'zhàng',
0x625A: 'diǎo,dí,yuē,lì',
0x625B: 'káng,gāng',
0x625C: 'yū,wū',
0x625D: 'yū,wū,kū',
0x625E: 'hàn',
0x625F: 'shēn',
0x6260: 'chā',
0x6261: 'tuō,chǐ,yǐ',
0x6262: 'gǔ,xì,gē,jié',
0x6263: 'kòu',
0x6264: 'wù',
0x6265: 'dèn',
0x6266: 'qiān',
0x6267: 'zhí',
0x6268: 'rèn',
0x6269: 'kuò',
0x626A: 'mén',
0x626B: 'sǎo,sào',
0x626C: 'yáng',
0x626D: 'niǔ',
0x626E: 'bàn',
0x626F: 'chě',
0x6270: 'rǎo',
0x6271: 'xī,chā,qì',
0x6272: 'qián,qín',
0x6273: 'bān',
0x6274: 'jiá',
0x6275: 'yú',
0x6276: 'fú',
0x6277: 'bā,ào',
0x6278: 'xī,zhé',
0x6279: 'pī',
0x627A: 'zhǐ',
0x627B: 'zhì,sǔn,kǎn',
0x627C: 'è',
0x627D: 'dèn',
0x627E: 'zhǎo',
0x627F: 'chéng',
0x6280: 'jì',
0x6281: 'yǎn',
0x6282: 'kuáng,wǎng,zài',
0x6283: 'biàn',
0x6284: 'chāo',
0x6285: 'jū',
0x6286: 'wěn',
0x6287: 'hú,gǔ',
0x6288: 'yuè',
0x6289: 'jué',
0x628A: 'bǎ,bà',
0x628B: 'qìn',
0x628C: 'dǎn,shěn',
0x628D: 'zhěng',
0x628E: 'yǔn',
0x628F: 'wán',
0x6290: 'nè,nì,ruì,nà',
0x6291: 'yì',
0x6292: 'shū',
0x6293: 'zhuā',
0x6294: 'póu',
0x6295: 'tóu',
0x6296: 'dǒu',
0x6297: 'kàng',
0x6298: 'zhē,zhé,shé',
0x6299: 'póu,pōu,fū',
0x629A: 'fǔ',
0x629B: 'pāo',
0x629C: 'bá',
0x629D: 'ǎo,ào,niù',
0x629E: 'zé',
0x629F: 'tuán',
0x62A0: 'kōu',
0x62A1: 'lūn,lún',
0x62A2: 'qiāng,qiǎng,chēng',
0x62A3: 'yún',
0x62A4: 'hù',
0x62A5: 'bào',
0x62A6: 'bǐng',
0x62A7: 'zhǐ,zhǎi',
0x62A8: 'pēng',
0x62A9: 'nán',
0x62AA: 'bù,pū',
0x62AB: 'pī',
0x62AC: 'tái',
0x62AD: 'yǎo,tāo',
0x62AE: 'zhěn',
0x62AF: 'zhā',
0x62B0: 'yāng',
0x62B1: 'bào',
0x62B2: 'hē,hè,qiā',
0x62B3: 'nǐ,ní',
0x62B4: 'yè',
0x62B5: 'dǐ',
0x62B6: 'chì',
0x62B7: 'pī,pēi',
0x62B8: 'jiā',
0x62B9: 'mǒ,mò,mā',
0x62BA: 'mèi',
0x62BB: 'chēn',
0x62BC: 'yā',
0x62BD: 'chōu',
0x62BE: 'qū',
0x62BF: 'mǐn',
0x62C0: 'zhù',
0x62C1: 'jiā,yá',
0x62C2: 'fú,bì',
0x62C3: 'zhǎ',
0x62C4: 'zhǔ',
0x62C5: 'dān,dàn,dǎn',
0x62C6: 'chāi,cā',
0x62C7: 'mǔ',
0x62C8: 'niān',
0x62C9: 'lā,lá',
0x62CA: 'fǔ',
0x62CB: 'pāo',
0x62CC: 'bàn,pàn',
0x62CD: 'pāi',
0x62CE: 'līn',
0x62CF: 'ná',
0x62D0: 'guǎi',
0x62D1: 'qián',
0x62D2: 'jù',
0x62D3: 'tuò,tà,zhí',
0x62D4: 'bá',
0x62D5: 'tuō',
0x62D6: 'tuō',
0x62D7: 'ǎo,ào,niù',
0x62D8: 'jū,gōu',
0x62D9: 'zhuō',
0x62DA: 'pàn,pīn,fān',
0x62DB: 'zhāo',
0x62DC: 'bài',
0x62DD: 'bài',
0x62DE: 'dǐ',
0x62DF: 'nǐ',
0x62E0: 'jù',
0x62E1: 'kuò',
0x62E2: 'lǒng',
0x62E3: 'jiǎn',
0x62E4: 'qiǎ',
0x62E5: 'yōng',
0x62E6: 'lán',
0x62E7: 'níng,nǐng,nìng',
0x62E8: 'bō',
0x62E9: 'zé,zhái',
0x62EA: 'qiān',
0x62EB: 'hén',
0x62EC: 'kuò,guā',
0x62ED: 'shì',
0x62EE: 'jié,jiá',
0x62EF: 'zhěng',
0x62F0: 'nǐn',
0x62F1: 'gǒng',
0x62F2: 'gǒng',
0x62F3: 'quán',
0x62F4: 'shuān',
0x62F5: 'cún,zùn',
0x62F6: 'zā,zǎn',
0x62F7: 'kǎo',
0x62F8: 'yí,chǐ,hài',
0x62F9: 'xié',
0x62FA: 'cè,sè,chuò',
0x62FB: 'huī',
0x62FC: 'pīn',
0x62FD: 'zhuài,zhuāi,yè',
0x62FE: 'shí,shè',
0x62FF: 'ná',
0x6300: 'bāi',
0x6301: 'chí',
0x6302: 'guà',
0x6303: 'zhì',
0x6304: 'kuò,guāng',
0x6305: 'duò',
0x6306: 'duǒ,duò',
0x6307: 'zhǐ',
0x6308: 'qiè',
0x6309: 'àn',
0x630A: 'nòng',
0x630B: 'zhèn',
0x630C: 'gé',
0x630D: 'jiào',
0x630E: 'kuà,kū',
0x630F: 'dòng',
0x6310: 'rú,ná',
0x6311: 'tiāo,tiǎo',
0x6312: 'liè',
0x6313: 'zhā',
0x6314: 'lǚ',
0x6315: 'dié,shè',
0x6316: 'wā',
0x6317: 'jué',
0x6318: 'liě',
0x6319: 'jǔ',
0x631A: 'zhì',
0x631B: 'luán',
0x631C: 'yà,yǎ',
0x631D: 'zhuā,wō',
0x631E: 'tà',
0x631F: 'xié,jiā',
0x6320: 'náo',
0x6321: 'dǎng,dàng',
0x6322: 'jiǎo',
0x6323: 'zhèng,zhēng',
0x6324: 'jǐ',
0x6325: 'huī',
0x6326: 'xián',
0x6327: 'yǔ',
0x6328: 'āi,ái',
0x6329: 'tuō,shuì',
0x632A: 'nuó',
0x632B: 'cuò',
0x632C: 'bó',
0x632D: 'gěng',
0x632E: 'tǐ,tì',
0x632F: 'zhèn',
0x6330: 'chéng',
0x6331: 'suō,shā',
0x6332: 'suō,shā',
0x6333: 'kēng,qiān',
0x6334: 'měi',
0x6335: 'nòng',
0x6336: 'jú',
0x6337: 'bàng,péng',
0x6338: 'jiǎn',
0x6339: 'yì',
0x633A: 'tǐng',
0x633B: 'shān',
0x633C: 'ruó',
0x633D: 'wǎn',
0x633E: 'xié,jiā',
0x633F: 'chā',
0x6340: 'péng,féng',
0x6341: 'jiǎo,kù',
0x6342: 'wǔ',
0x6343: 'jùn',
0x6344: 'jiù,jū,qiú',
0x6345: 'tǒng',
0x6346: 'kǔn',
0x6347: 'huò,chì',
0x6348: 'tú,shū,chá',
0x6349: 'zhuō',
0x634A: 'póu,pōu,fū',
0x634B: 'luō,lǚ',
0x634C: 'bā',
0x634D: 'hàn',
0x634E: 'shāo,shào',
0x634F: 'niē',
0x6350: 'juān',
0x6351: 'zè',
0x6352: 'shù,sǒng,sōu',
0x6353: 'yé,yú',
0x6354: 'jué,zhuó',
0x6355: 'bǔ',
0x6356: 'wán',
0x6357: 'bù,pú,zhì',
0x6358: 'zùn',
0x6359: 'yè',
0x635A: 'zhāi',
0x635B: 'lǚ',
0x635C: 'sōu',
0x635D: 'tuō,shuì',
0x635E: 'lāo',
0x635F: 'sǔn',
0x6360: 'bāng',
0x6361: 'jiǎn',
0x6362: 'huàn',
0x6363: 'dǎo',
0x6364: 'wěi',
0x6365: 'wàn,wǎn,wān,yù',
0x6366: 'qín',
0x6367: 'pěng',
0x6368: 'shě',
0x6369: 'liè',
0x636A: 'mín',
0x636B: 'mén',
0x636C: 'fǔ,fù,bǔ',
0x636D: 'bǎi',
0x636E: 'jù,jū',
0x636F: 'dáo',
0x6370: 'wǒ,luò,luǒ',
0x6371: 'ái',
0x6372: 'juǎn,quán',
0x6373: 'yuè',
0x6374: 'zǒng',
0x6375: 'chēn',
0x6376: 'chuí',
0x6377: 'jié',
0x6378: 'tū',
0x6379: 'bèn',
0x637A: 'nà',
0x637B: 'niǎn,niē',
0x637C: 'ruó,wěi,ré',
0x637D: 'zuó',
0x637E: 'wò,xiá',
0x637F: 'qī',
0x6380: 'xiān',
0x6381: 'chéng',
0x6382: 'diān',
0x6383: 'sǎo,sào',
0x6384: 'lūn,lún',
0x6385: 'qìng,qiàn',
0x6386: 'gāng',
0x6387: 'duō',
0x6388: 'shòu',
0x6389: 'diào',
0x638A: 'pǒu,póu',
0x638B: 'dǐ',
0x638C: 'zhǎng',
0x638D: 'hùn',
0x638E: 'jǐ',
0x638F: 'tāo',
0x6390: 'qiā',
0x6391: 'qí',
0x6392: 'pái,pǎi',
0x6393: 'shū',
0x6394: 'qiān,wàn',
0x6395: 'líng',
0x6396: 'yè,yē',
0x6397: 'yà,yǎ',
0x6398: 'jué',
0x6399: 'zhēng,zhèng',
0x639A: 'liǎng',
0x639B: 'guà',
0x639C: 'nǐ,niè,yì',
0x639D: 'huò,xù',
0x639E: 'shàn,yàn,yǎn',
0x639F: 'zhěng,dìng',
0x63A0: 'lüè',
0x63A1: 'cǎi',
0x63A2: 'tàn',
0x63A3: 'chè',
0x63A4: 'bīng',
0x63A5: 'jiē',
0x63A6:
#! /usr/bin/env python
# Copyright <NAME>, 2015. www.sovic.org
#
# Creates a pileup from a given SAM/BAM file, and calls consensus bases (or variants).
import os
import sys
import operator
import subprocess
def increase_in_dict(dict_counter, value):
try:
dict_counter[value] += 1
except KeyError:
dict_counter[value] = 1
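As a hedged, self-contained illustration (not part of the original script), the symbol grammar that process_mpileup_line() below walks through — '^' plus a mapping-quality character opens a read, '$' closes one, '*' marks a deleted base, and '+N<seq>'/'-N<seq>' are composite indel events — can be tokenized in isolation; the sample string in the trailing comment is made up:

```python
def parse_pileup_bases_sketch(bases):
    """Count read starts/ends, indel events, and base calls in one mpileup base column."""
    counts = {"bases": {}, "insertions": 0, "deletions": 0, "starts": 0, "ends": 0}
    i = 0
    while i < len(bases):
        c = bases[i]
        if c == "^":
            counts["starts"] += 1
            i += 2  # skip '^' and the mapping-quality character that follows it
            continue
        elif c == "$":
            counts["ends"] += 1
        elif c in "+-":
            # Composite event: sign, decimal length, then that many event bases.
            j = i + 1
            while bases[j].isdigit():
                j += 1
            num_bases = int(bases[i + 1 : j])
            counts["insertions" if c == "+" else "deletions"] += 1
            i = j + num_bases  # skip past the digits and the event sequence
            continue
        elif c != "*":  # '*' placeholders are not counted here, as in the main loop
            key = c.upper()
            counts["bases"][key] = counts["bases"].get(key, 0) + 1
        i += 1
    return counts

# parse_pileup_bases_sketch("^F..,+2ATa$*") ->
#   starts=1, ends=1, insertions=1, bases {'.': 2, ',': 1, 'A': 1}
```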
def process_mpileup_line(
line,
line_number,
ret_variant_list,
ret_vcf_list,
ret_snp_count,
ret_insertion_count,
ret_deletion_count,
ret_num_undercovered_bases,
ret_num_called_bases,
ret_num_correct_bases,
ret_coverage_sum,
coverage_threshold,
verbose=False,
):
# Split the line, and perform a sanity check.
split_line = line.strip().split("\t")
if len(split_line) < 5 or len(split_line) > 6:
sys.stderr.write(line + "\n")
return 0
ref_name = split_line[0]
position = split_line[1]
ref_base = split_line[2]
coverage = split_line[3]
original_bases = split_line[4]
if len(split_line) == 6:
qualities = split_line[5]
bases = ""
# Replace the '.' and ',' signs with the actual reference base.
i = 0
while i < len(original_bases):
if original_bases[i] == "." or original_bases[i] == ",":
bases += ref_base
else:
bases += original_bases[i]
i += 1
base_counts = {}
insertion_count = 0
current_base_deletion_count = 0
deletion_count = 0
insertion_event_counts = {}
deletion_event_counts = {}
end_counts = 0
# print 'position: %s' % position;
# print 'bases: "%s"' % bases;
# print 'line_number: %d' % line_number;
# print line;
# print '';
# sys.stdout.flush();
i = 0
while i < len(bases):
base = bases[i]
if base == r"^":
# This is the starting position of a read. It encodes two
# symbols: '^' marking the read start and a char marking the
# mapping quality of the read.
# increase_in_dict(base_counts, bases[i + 1].upper());
i += 1
# Increase only by 1, because we have i += 1 down there.
elif base == r"$":
# This marks the end of a read.
end_counts += 1
elif base == r"*":
# This is a deletion, just count it.
current_base_deletion_count += 1
elif base == r"-":
# This marks the occurrence of deletions. It is a composite object
# consisting of: the special character '-', the number of the deleted bases
# and the actual bases that are deleted (these bases follow the current position).
# In our approach, we ignore this case, because we count deletions one by one
# through the '*' character.
# Get the number of bases that need to be skipped in the string.
j = i + 1
while bases[j] in "0123456789":
j += 1
num_bases = int(bases[(i + 1) : j])
skip_bases = (j - i) + num_bases - 1
deletion_count += 1
deletion = bases[j : (j + num_bases)].upper()
increase_in_dict(deletion_event_counts, deletion)
# Skip the length of the numeric entry plus the actual number of bases
# that need to be skipped.
i += skip_bases
elif base == r"+":
# This marks the occurrence of an insertion. It is a composite object
# consisting of: the special character '+', the number of the inserted bases
# and the actual bases that are inserted (these bases follow the current position).
# Similar to the deletion marking, but here we actually care about the bases,
# and we need to make an allele aware count.
# Get the number of bases that are inserted;
j = i + 1
while bases[j] in "0123456789":
j += 1
num_bases = int(bases[(i + 1) : j])
skip_bases = (j - i) + num_bases - 1
insertion_count += 1
insertion = bases[j : (j + num_bases)].upper()
increase_in_dict(insertion_event_counts, insertion)
i += skip_bases
else:
increase_in_dict(base_counts, bases[i].upper())
i += 1
# TODO: An additional problematic case, discovered this on 03.11.2014., when analyzing BWA-MEM's mpileup.
# There are pileup bases that do not have any actual bases, but only the '*' symbols. How should this be handled properly?
# Example line from the mpileup file:
# gi|48994873|gb|U00096.2|_Escherichia_coli_str._K-12_substr._MG1655,_complete_genome 1938202 T 20 ******************** 8,2*#-;)$B>2$1&D-
# I chose to handle them as undercovered bases.
non_indel_coverage_current_base = int(coverage) - current_base_deletion_count
if verbose:
sys.stdout.write("%s\nbase_counts: %s\n" % (line.strip(), str(base_counts)))
# EDIT: Previously I compared the total coverage of the current base with the coverage threshold.
# However, the total coverage also accounts for the deletions denoted with the '*' sign, which I think
# isn't relevant, as deletions are counted prior to occurring, and at that point it is already decided whether there
# is going to be a deletion event. If we wound up at this base (i.e. this base didn't get skipped because of a
# deletion consensus), then the deletions on this base are ignored.
# if (int(coverage) < coverage_threshold or int(coverage) == current_base_deletion_count):
# if (non_indel_coverage_current_base < coverage_threshold):
if int(coverage) < coverage_threshold:
ret_num_undercovered_bases[0] += 1
# ret_coverage_sum[0] += 0;
ret_coverage_sum[0] += int(coverage)
# TODO: Should I count total coverage of this base, or the non_indel_coverage_current_base?
sorted_base_counts = [["A", 0], ["C", 0], ["T", 0], ["G", 0]]
sorted_base_counts = sorted(
list(base_counts.items()), key=operator.itemgetter(1)
)
try:
most_common_base_count = sorted_base_counts[-1][1]
except IndexError:
most_common_base_count = 0
# variant_line = 'undercovered1\tpos = %s\tcoverage = %d\tnon_indel_cov_curr = %d\tmost_common_base_count = %d\tref_base = %s\tcons_base = %s\tbase_counts = %s\tinsertion_counts = %s\tdeletion_counts = %s\t%s' % (position, int(coverage), non_indel_coverage_current_base, most_common_base_count, ref_base, sorted_base_counts[-1][0], str(sorted_base_counts), str(insertion_event_counts), str(deletion_event_counts), line.strip());
# ret_variant_list.append(variant_line);
variant_line = (
"undercovered1\tpos = %s\tref = %s\tcoverage = %d\tbase_counts = %s\tinsertion_counts = %s\tdeletion_counts = %s"
% (
position,
ref_name,
int(coverage),
str(sorted_base_counts),
str(insertion_event_counts),
str(deletion_event_counts),
)
)
ret_variant_list.append(variant_line)
### VCF output ###
qual = 1000
info = "DP=%s;TYPE=snp" % (coverage)
ref_field = ref_base
alt_field = "N"
vcf_line = "%s\t%s\t.\t%s\t%s\t%d\tPASS\t%s" % (
ref_name,
position,
ref_field,
alt_field,
qual,
info,
)
ret_vcf_list.append(vcf_line)
##################
else:
ret_num_called_bases[0] += 1
ret_coverage_sum[0] += int(coverage)
# TODO: Should I count total coverage of this base, or the non_indel_coverage_current_base?
most_common_base_count = 0
### Handling base consensus.
sorted_base_counts = sorted(
list(base_counts.items()), key=operator.itemgetter(1)
)
try:
most_common_base_count = sorted_base_counts[-1][1]
except Exception as e:
pass
# sys.stderr.write(str(e) + '\n');
# sys.stderr.write('sorted_base_counts:\n');
# sys.stderr.write(str(sorted_base_counts) + '\n');
# sys.stderr.write('base_counts:\n');
# sys.stderr.write(str(base_counts) + '\n');
# sys.stderr.write('original_bases:\n');
# sys.stderr.write(str(original_bases) + '\n');
# sys.stderr.write('line:\n');
# sys.stderr.write(line.strip() + '\n');
# most_common_base_count = 0;
# Allow for the case where there are multiple equally good choices.
# In this case, we prefer the choice which is equal to the reference.
is_good = False
for base_count in sorted_base_counts:
if base_count[1] == most_common_base_count:
if base_count[0] == ref_base:
is_good = True
break
if not is_good:
if len(sorted_base_counts) > 0:
ret_snp_count[0] += 1
# ret_variant_list.append(line_number);
variant_line = (
"SNP\tpos = %s\tref = %s\tcoverage = %d\tnon_indel_cov_curr = %d\tmost_common_base_count = %d\tref_base = %s\tcons_base = %s\tbase_counts = %s\tinsertion_counts = %s\tdeletion_counts = %s\t%s"
% (
position,
ref_name,
int(coverage),
non_indel_coverage_current_base,
most_common_base_count,
ref_base,
("{}")
if (len(sorted_base_counts) == 0)
else (str(sorted_base_counts[-1][0])),
str(sorted_base_counts),
str(insertion_event_counts),
str(deletion_event_counts),
line.strip(),
)
)
ret_variant_list.append(variant_line)
### VCF output ###
alt_base = (
("{}")
if (len(sorted_base_counts) == 0)
else (str(sorted_base_counts[-1][0]))
)
qual = 1000
info = "DP=%s;TYPE=snp" % (coverage)
ref_field = ref_base
alt_field = alt_base
vcf_line = "%s\t%s\t.\t%s\t%s\t%d\tPASS\t%s" % (
ref_name,
position,
ref_field,
alt_field,
qual,
info,
)
ret_vcf_list.append(vcf_line)
##################
else:
sys.stderr.write(
"\nWarning: a SNP was detected, but there were no bases in the sorted_base_counts!"
)
variant_line = (
"SNP\tpos = %s\tref = %s\tcoverage = %d\tnon_indel_cov_curr = %d\tmost_common_base_count = %d\tref_base = %s\tcons_base = %s\tbase_counts = %s\tinsertion_counts = %s\tdeletion_counts = %s\t%s"
% (
position,
ref_name,
int(coverage),
non_indel_coverage_current_base,
most_common_base_count,
ref_base,
("{}")
if (len(sorted_base_counts) == 0)
else (str(sorted_base_counts[-1][0])),
str(sorted_base_counts),
str(insertion_event_counts),
str(deletion_event_counts),
line.strip(),
)
)
sys.stderr.write("\n")
else:
ret_num_correct_bases[0] += 1
if verbose:
sys.stdout.write("Reference base: %s\n" % (ref_base))
sys.stdout.write("Consensus base: %s\n\n" % (base_count[0]))
# if (int(position) == 100000 or int(position) == 1000000 or int(position) == 2000000 or int(position) == 3000000 or int(position) == 4000000):
# print '\nTEST\tpos = %s\tcoverage = %d\tnon_indel_cov_curr = %d\tmost_common_base_count = %d\tref_base = %s\tcons_base = %s\tbase_counts = %s\tinsertion_counts = %s\tdeletion_counts = %s\t%s\n' % (position, int(coverage), non_indel_coverage_current_base, most_common_base_count, ref_base, sorted_base_counts[-1][0], str(sorted_base_counts), str(insertion_event_counts), str(deletion_event_counts), line.strip());
### Handling indel consensus.
### Put a different coverage threshold. Here we are interested even in the reads
### which had a '*' at the current position (because we don't know where it ends).
non_indel_coverage_next_base = (
int(coverage) - end_counts - deletion_count - insertion_count
)
if (
non_indel_coverage_next_base + deletion_count + insertion_count
) > coverage_threshold:
# Sanity
import hashlib
import json
import os
import re
import time
import uuid
from typing import List, Union, Optional
from urllib.parse import urljoin
from zipfile import ZipFile
from requests import Session, get, Response
from requests.adapters import HTTPAdapter
from requests.cookies import cookiejar_from_dict
from urllib3.util.retry import Retry
from notion.block.basic import Block
from notion.block.collection.basic import (
CollectionBlock,
TemplateBlock,
CollectionRowBlock,
)
from notion.block.collection.view import CollectionView
from notion.block.types import get_block_type, get_collection_view_type
from notion.logger import logger
from notion.monitor import Monitor
from notion.operations import operation_update_last_edited, build_operations
from notion.settings import API_BASE_URL
from notion.space import NotionSpace
from notion.store import RecordStore
from notion.user import NotionUser
from notion.utils import extract_id, now, to_list
class NotionApiError(Exception):
def __init__(self, message: str, **extra):
dumped_data = json.dumps(extra, indent=2)
logger.error(f"Exception: {dumped_data}")
super().__init__(message)
class InvalidCollectionViewUrl(NotionApiError):
pass
class NotionValidationError(NotionApiError):
pass
class NotionUnauthorizedError(NotionApiError):
pass
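A standalone sketch (hypothetical helper, not part of notion-py) of the URL shape that get_collection_view() further below matches with a regex: a database URL carries a 32-hex-digit block ID followed by "?v=" and a 32-hex-digit view ID.

```python
import re

_VIEW_URL_RE = re.compile(r"([a-f0-9]{32})\?v=([a-f0-9]{32})")


def split_view_url(url):
    """Return (block_id, view_id) extracted from a database URL, or None."""
    match = _VIEW_URL_RE.search(url)
    return match.groups() if match else None
```

Note that get_collection_view() raises InvalidCollectionViewUrl rather than returning None when the pattern is absent.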
class Transaction:
"""
Transaction object.
"""
_is_nested = False
def __init__(self, client):
"""
Create Transaction object.
Arguments
---------
client : NotionClient
Client object to use for transaction.
"""
self.client = client
def __enter__(self):
if hasattr(self.client, "_transaction_operations"):
# client is already in a transaction, so we'll just
# make this one a no-op and let the outer one handle it
self._is_nested = True
return
self.client._transaction_operations = []
self.client._pages_to_refresh = []
self.client._blocks_to_refresh = []
def __exit__(self, exc_type, exc_value, traceback):
if self._is_nested:
return
operations = getattr(self.client, "_transaction_operations")
delattr(self.client, "_transaction_operations")
if not exc_type:
# submit the transaction if there was no exception
self.client.submit_transaction(operations=operations)
self.client._store.handle_post_transaction_refreshing()
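The contract Transaction implements above — operations queue on the client while a transaction is open, nested transactions fold into the outermost one, and everything is submitted once on a clean exit — can be exercised with stand-in classes (the _Demo* names are illustrative only, and the post-transaction store refresh is omitted):

```python
class _DemoClient:
    def __init__(self):
        self.submitted = []

    def submit_transaction(self, operations):
        self.submitted.append(list(operations))


class _DemoTransaction:
    _is_nested = False

    def __init__(self, client):
        self.client = client

    def __enter__(self):
        if hasattr(self.client, "_transaction_operations"):
            self._is_nested = True  # outer transaction will submit for us
            return
        self.client._transaction_operations = []

    def __exit__(self, exc_type, exc_value, traceback):
        if self._is_nested:
            return
        operations = self.client._transaction_operations
        delattr(self.client, "_transaction_operations")
        if not exc_type:
            self.client.submit_transaction(operations=operations)


demo = _DemoClient()
with _DemoTransaction(demo):
    demo._transaction_operations.append("set-title")
    with _DemoTransaction(demo):  # nested: becomes a no-op
        demo._transaction_operations.append("set-icon")
# demo.submitted is now [["set-title", "set-icon"]]
```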
class NotionClient:
"""
This is the entry point to using the API.
Create an instance of this class, passing it
the value of the "token_v2" cookie from a logged-in
browser session on Notion.so.
"""
def __init__(
self,
token_v2: str = "",
enable_monitoring: bool = False,
start_monitoring: bool = False,
enable_caching: bool = False,
cache_key: str = "",
):
"""
Create NotionClient object and fill its fields.
Arguments
---------
token_v2 : str, optional
The cookie from logged-in browser session on notion.so.
If not provided then all operations will be run as if the
user was not logged in.
Defaults to empty string.
enable_monitoring : bool, optional
Whether or not to monitor the records managed by NotionClient.
Defaults to False.
start_monitoring : bool, optional
Whether or not to start monitoring immediately upon logging in.
This option takes effect only when `enable_monitoring` is True.
Defaults to False.
enable_caching : bool, optional
Whether or not to enable caching of fetched data to file.
Defaults to False.
cache_key : str, optional
The key string used for storing all cached data in file.
This option takes effect only when `enable_caching` is True.
Defaults to SHA256 of token_v2.
"""
self.session = self._create_session(token_v2)
# noinspection InsecureHash
cache_key = cache_key or hashlib.sha256(token_v2.encode()).hexdigest()
cache_key = cache_key if enable_caching else None
self._store = RecordStore(self, cache_key=cache_key)
self._monitor = None
if enable_monitoring:
self._monitor = Monitor(self)
if start_monitoring:
self.start_monitoring()
if token_v2:
self._update_user_info()
@staticmethod
def _create_session(token_v2: str = "") -> Session:
"""
Helper method for creating a session object for API requests.
Arguments
---------
token_v2 : str, optional
Token to use for creating User session.
Defaults to empty string.
Returns
-------
Session
initialised Session object.
"""
retry = Retry(
total=5,
backoff_factor=0.3,
status_forcelist=(502, 503),
method_whitelist=(
"POST",
"HEAD",
"TRACE",
"GET",
"PUT",
"OPTIONS",
"DELETE",
),
)
session = Session()
session.mount("https://", HTTPAdapter(max_retries=retry))
session.cookies = cookiejar_from_dict({"token_v2": token_v2})
return session
@staticmethod
def _download_url(url: str, save_path: str, chunk_size: int = 128):
"""
Download the zip file and save it to a file.
Arguments
---------
url : str
URL from which to download.
save_path : str
File name to output the zip file into.
chunk_size : int, optional
Size of the downloaded chunk.
If set to 0 then the data will be read as it arrives,
in whatever size the chunks are received.
Defaults to 128.
https://requests.readthedocs.io/en/master/user/quickstart/#raw-response-content
"""
r = get(url, stream=True)
with open(save_path, "wb") as fd:
for chunk in r.iter_content(chunk_size=chunk_size or None):
fd.write(chunk)
@staticmethod
def _unzip_file(file_name: str, delete: bool = True):
"""
Helper method to unzip the zipped file.
Arguments
---------
file_name : str
File name of the ZIP to unpack.
delete : bool, optional
Whether or not to remove the file after unpacking.
Defaults to True.
"""
with ZipFile(file_name) as f:
f.extractall()
if delete:
os.remove(file_name)
@staticmethod
def _maybe_prefix_url(endpoint: str) -> str:
if endpoint.startswith("http"):
return endpoint
return urljoin(API_BASE_URL, endpoint)
def _update_user_info(self) -> dict:
"""
Reload information about a Notion User.
Returns
-------
dict
User data.
"""
data = self.post("loadUserContent").json()
data = self._store.store_record_map(data)
first_user = list(data["notion_user"].keys())[0]
first_space = list(data["space"].keys())[0]
self.current_user = self.get_user(first_user)
self.current_space = self.get_space(first_space)
return data
def get_top_level_pages(self) -> list:
"""
Get list of top level pages defined in Notion Workspace.
Returns
-------
list of Block
Top level pages.
"""
blocks = self._update_user_info()["block"].keys()
return [self.get_block(bid) for bid in blocks]
def get_record_data(
self, table: str, url_or_id: str, force_refresh: bool = False
) -> dict:
"""
Get record data.
Arguments
---------
table : str
A "block type" in notion.so terminology.
url_or_id : str
Path or ID to block.
force_refresh : bool, optional
Whether or not to force a refresh of data.
Defaults to False.
Returns
-------
dict
Record data.
"""
return self._store.get(
table=table, url_or_id=url_or_id, force_refresh=force_refresh
)
def get_block(self, url_or_id: str, force_refresh: bool = False) -> Optional[Block]:
"""
Retrieve an instance of a subclass of Block that maps to
the block/page identified by the URL or ID passed in.
Arguments
---------
url_or_id : str
Path or ID to block.
force_refresh : bool, optional
Whether or not to force a refresh of data.
Defaults to False.
Returns
-------
Block or None
Found block or None.
"""
block_id = extract_id(url_or_id)
block = self.get_record_data("block", block_id, force_refresh)
if not block:
return None
if block.get("parent_table") == "collection":
if block.get("is_template"):
klass = TemplateBlock
else:
klass = CollectionRowBlock
else:
klass = get_block_type(block.get("type"))
return klass(client=self, block_id=block_id)
def get_collection(
self, collection_id: str, force_refresh: bool = False
) -> Optional[CollectionBlock]:
"""
Retrieve an instance of Collection that maps to
the collection identified by the ID passed in.
Arguments
---------
collection_id : str
ID of searched collection.
force_refresh : bool, optional
Whether or not to force a refresh of data.
Defaults to False.
Returns
-------
CollectionBlock
Found collection or None.
"""
record_data = self.get_record_data(
"collection", collection_id, force_refresh=force_refresh
)
if record_data:
return CollectionBlock(self, collection_id)
def get_collection_view(
self,
url_or_id: str,
collection: CollectionBlock = None,
force_refresh: bool = False,
) -> Optional[CollectionView]:
"""
Retrieve an instance of a subclass of CollectionView
that maps to the appropriate type.
The `url_or_id` argument can either be the URL
for a database page, or the ID of a collection_view
(in which case you must pass the collection)
Arguments
---------
url_or_id : str
ID of searched collection view.
collection : CollectionBlock, optional
The collection that the view belongs to; required when
`url_or_id` is a bare view ID rather than a URL.
force_refresh : bool, optional
Whether or not to force a refresh of data.
Defaults to False.
Raises
------
InvalidCollectionViewUrl
When passed in URL is invalid.
Returns
-------
CollectionView
Found collectionView or None.
"""
if url_or_id.startswith("http"):
# if it's a URL for a database page,
# try extracting the collection and view IDs
match = re.search(r"([a-f0-9]{32})\?v=([a-f0-9]{32})", url_or_id)
if not match:
raise InvalidCollectionViewUrl(
f"Could not find valid ID in URL '{url_or_id}'"
)
collection_id, view_id = match.groups()
collection_id = self.get_record_data(
table="block",
url_or_id=collection_id,
force_refresh=force_refresh,
)["collection_id"]
collection = self.get_collection(collection_id, force_refresh)
else:
view_id = url_or_id
if collection is None:
raise ValueError(
"If 'url_or_id' is an ID (not a URL), "
"you must also pass the 'collection'"
)
view = self.get_record_data(
table="collection_view",
url_or_id=view_id,
force_refresh=force_refresh,
)
if view:
klass = get_collection_view_type(view.get("type", ""))
return klass(self, view_id, collection=collection)
def get_user(
self, user_id: str, force_refresh: bool = False
) -> Optional[NotionUser]:
"""
Retrieve an instance of User that maps to
the notion_user identified by the ID passed in.
Arguments
---------
user_id : str
ID of searched user.
force_refresh : bool, optional
Whether or not to force a refresh of data.
Defaults to False.
Returns
-------
NotionUser
Found user or None.
"""
user = self.get_record_data("notion_user", user_id, force_refresh)
if user:
return NotionUser(self, user_id)
def get_space(
self, space_id: str, force_refresh: bool = False
) -> Optional[NotionSpace]:
"""
Retrieve an instance of Space that maps to
the space identified by the ID passed in.
Arguments
---------
space_id : str
ID of searched space.
force_refresh : bool, optional
Whether or not to force a refresh of data.
from copy import deepcopy
import numpy as np
from shapely import geometry, validation
from article_separation.image_segmentation.net_post_processing.region_to_page_writer import RegionToPageWriter
from python_util.parser.xml.page.page_constants import sSEPARATORREGION, sTEXTREGION
from python_util.parser.xml.page.page_objects import SeparatorRegion
class SeparatorRegionToPageWriter(RegionToPageWriter):
"""
A RegionToPageWriter specifically for the separator detection task. Has some additional methods to handle the
detected separator regions.
"""
def __init__(self, path_to_page, path_to_image=None, fixed_height=None, scaling_factor=None, region_dict=None):
super().__init__(path_to_page, path_to_image, fixed_height, scaling_factor)
self.region_dict = region_dict
def remove_separator_regions_from_page(self):
"""
Removes all current separator regions from the Page object.
:return:
"""
self.page_object.remove_regions(sSEPARATORREGION)
def convert_polygon_with_holes(self, polygon_sh):
"""
Given a shapely polygon ``polygon_sh``, which can have holes (i.e. it is defined by more than one ring;
e.g. an annulus, the ring-shaped region between two concentric circles), convert it into one or more
polygons without holes by splitting the polygon at some point(s).
:param polygon_sh: Shapely polygon.
:return:
"""
def split_horiz_by_point(polygon, point):
""""""
assert polygon.geom_type == "Polygon" and point.geom_type == "Point"
nx, ny, xx, xy = polygon.bounds
if point.x < nx or point.x > xx:
return [polygon]
lEnv = geometry.LineString([(nx, ny), (point.x, xy)]).envelope
rEnv = geometry.LineString([(point.x, ny), (xx, xy)]).envelope
try:
return [polygon.intersection(lEnv), polygon.intersection(rEnv)]
except Exception:
print("Geometry error: %s" % validation.explain_validity(polygon))
return [polygon.buffer(0)]
parts = []
if polygon_sh.geom_type == "MultiPolygon":
for p in polygon_sh.geoms:
parts.extend(self.convert_polygon_with_holes(p))
elif polygon_sh.geom_type == "Polygon":
if len(polygon_sh.interiors):
pt = polygon_sh.interiors[0].centroid
halves = split_horiz_by_point(polygon_sh, pt)
for p in halves:
parts.extend(self.convert_polygon_with_holes(p))
else:
parts = [polygon_sh]
return parts
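The core of ``split_horiz_by_point`` is clipping the polygon's bounding box into a left and a right envelope at the point's x-coordinate. A dependency-free sketch of just that bounding-box split (plain tuples instead of shapely geometries, so this illustrates only the envelope logic, not the final intersection step):

```python
def split_bounds_at_x(bounds, x):
    """Split an axis-aligned bounding box (minx, miny, maxx, maxy) at x.

    Mirrors the envelope construction in ``split_horiz_by_point``: if x lies
    outside the box, the box is returned unchanged; otherwise a left and a
    right box are returned.
    """
    nx, ny, xx, xy = bounds
    if x < nx or x > xx:
        return [bounds]
    return [(nx, ny, x, xy), (x, ny, xx, xy)]

left, right = split_bounds_at_x((0.0, 0.0, 10.0, 5.0), 4.0)
```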
def convert_polygon_with_holes2(self, polygon_sh):
"""
Comparable to ``convert_polygon_with_holes``, but less efficient.
:param polygon_sh:
:return:
"""
def closest_pt(pt, ptset):
""""""
dist2 = np.sum((ptset - pt) ** 2, 1)
minidx = np.argmin(dist2)
return minidx
def cw_perpendicular(pt, norm=None):
""""""
d = np.sqrt((pt ** 2).sum()) or 1
if norm is None:
return np.array([pt[1], -pt[0]])
return np.array([pt[1], -pt[0]]) / d * norm
def lazy_short_join_gap(exter, inter, refpt, gap=0.000001):
""""""
exIdx = closest_pt(refpt, exter)
inIdx = closest_pt(exter[exIdx], inter)
print(exter[exIdx], inter[inIdx])
excwgap = exter[exIdx] + cw_perpendicular(inter[inIdx] - exter[exIdx], gap)
incwgap = inter[inIdx] + cw_perpendicular(exter[exIdx] - inter[inIdx], gap)
out = np.vstack((exter[:exIdx], excwgap, inter[inIdx:-1], inter[:inIdx], incwgap, exter[exIdx:]))
out[-1] = out[0]
return out
if len(polygon_sh.interiors):
ex = np.asarray(polygon_sh.exterior)
for inter in polygon_sh.interiors:
inArr = np.asarray(inter)
ex = lazy_short_join_gap(ex, inArr, np.asarray(inter.centroid))
poly = geometry.Polygon(ex)
print(len(list(poly.interiors)))
return poly
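The ring-joining in ``convert_polygon_with_holes2`` starts from a nearest-point search between the exterior and interior rings. The same search written without numpy (a pure-Python illustration of ``closest_pt``, not a drop-in replacement):

```python
def closest_pt_index(pt, ptset):
    """Index of the point in ``ptset`` with minimal squared distance to ``pt``."""
    return min(
        range(len(ptset)),
        key=lambda i: (ptset[i][0] - pt[0]) ** 2 + (ptset[i][1] - pt[1]) ** 2,
    )

# A square ring; the query point (3.5, 3.0) is nearest to the corner (4, 4).
ring = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
idx = closest_pt_index((3.5, 3.0), ring)
```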
def merge_regions(self, remove_holes=True):
"""
For every (vertical) separator iterate over all text lines and split them if the separator goes through them or
cut them off depending on where the separator goes through the text lines. Finally, write the separator regions
to the Page object.
:param remove_holes: If True, remove holes of polygons.
:return:
"""
def _split_shapely_polygon(region_to_split_sh, region_compare_sh):
# region_to_split_sh = region_to_split_sh.buffer(0)
# region_compare_sh = region_compare_sh.buffer(0)
difference = region_to_split_sh.difference(region_compare_sh)
if type(difference) == geometry.MultiPolygon or type(difference) == geometry.MultiLineString:
new_region_polys_sh = list(difference)
else:
new_region_polys_sh = [difference]
return new_region_polys_sh
def _create_page_objects(region_to_split, new_region_polys):
new_region_objects = [deepcopy(region_to_split) for _ in range(len(new_region_polys))]
for j, (new_region_poly, new_region_object) in enumerate(
zip(new_region_polys, new_region_objects)):
new_region_object.set_points(new_region_poly)
if len(new_region_polys) > 1:
new_region_object.id = region_to_split.id + "_" + str(j + 1)
return new_region_objects
def _delete_region_from_page(region_id):
region_to_delete = self.page_object.get_child_by_id(self.page_object.page_doc, region_id)
if len(region_to_delete) == 0:
return
region_to_delete = region_to_delete[0]
self.page_object.remove_page_xml_node(region_to_delete)
def _add_regions_to_page(region_object_list):
for region_object in region_object_list:
self.page_object.add_region(region_object)
def _get_parent_region(child_split_sh, parent_splits_sh):
for j, parent_split_sh in enumerate(parent_splits_sh):
if child_split_sh.intersects(parent_split_sh):
return j, parent_split_sh
return None, None
def _split_text_lines(text_lines_dict, sep_poly):
"""
Given a separator polygon ``sep_poly``, split just the text lines (and their baselines) given by
``text_lines_dict``. ``sep_poly`` is a list of lists of polygon coordinates. If the separator polygon is only
described via one exterior polygon, the list of lists has length 1. Otherwise, there are also inner
polygons, i.e. the list of lists has a length > 1.
:param text_lines_dict:
:param sep_poly:
:return:
"""
sep_poly_sh = geometry.Polygon(sep_poly[0], sep_poly[1:]).buffer(0)
if type(sep_poly_sh) == geometry.MultiPolygon:
sep_poly_sh = sep_poly_sh[np.argmax([poly.area for poly in list(sep_poly_sh)])]
for tl_id, text_lines in text_lines_dict.items():
for text_line in list(text_lines):  # iterate over a copy; the list is mutated below
text_line_sh = geometry.Polygon(text_line.surr_p.points_list).buffer(0)
# If text line is contained completely in the vertical separator polygon delete it
if sep_poly_sh.contains(text_line_sh):
text_lines_dict[tl_id].remove(text_line)
continue
if text_line_sh.intersects(sep_poly_sh):
text_line_splits_sh = _split_shapely_polygon(text_line_sh, sep_poly_sh)
text_line_splits = [list(poly.exterior.coords) for poly in text_line_splits_sh]
new_text_line_objects = _create_page_objects(text_line, text_line_splits)
for new_text_line_object in new_text_line_objects:
new_text_line_object.set_baseline(None)
if len(new_text_line_objects) != 1:
new_text_line_object.words = []
if len(new_text_line_objects) != 1:
for word in text_line.words:
# Assumes that the words are in the right order
word_polygon_sh = geometry.Polygon(word.surr_p.points_list).buffer(0)
matching_textline_idx = np.argmax([word_polygon_sh.intersection(text_line_split_sh).area
for text_line_split_sh in text_line_splits_sh])
corr_textline = new_text_line_objects[matching_textline_idx]
corr_textline.words.append(word)
if len(text_line.words) > 0:
for new_text_line_object in new_text_line_objects:
new_text_line_object.text = " ".join([word.text for word in new_text_line_object.words])
baseline_sh = geometry.LineString(
text_line.baseline.points_list) if text_line.baseline is not None else None
if baseline_sh is not None and baseline_sh.intersects(sep_poly_sh):
baseline_splits = _split_shapely_polygon(baseline_sh, sep_poly_sh)
elif baseline_sh is not None:
baseline_splits = [baseline_sh]
else:
baseline_splits = []
# baseline split -> text line split
used_idx = set()
for baseline_split in baseline_splits:
idx, parent_text_line = _get_parent_region(baseline_split,
text_line_splits_sh)
if idx is None:
continue
used_idx.add(idx)
new_text_line_objects[idx].set_baseline(list(baseline_split.coords))
# Remove all text line splits that don't have an associated baseline split
# TODO: Maybe rather consider the word elements instead?
new_text_line_objects = [new_text_line_objects[idx] for idx in sorted(used_idx)]
text_lines_dict[tl_id].extend(new_text_line_objects)
text_lines_dict[tl_id].remove(text_line)
return text_lines_dict
def _split_regions(region_dict, sep_poly):
"""
Given a SeparatorRegion, split regions in region_dict if possible/necessary. Returns False if one of the
regions in ``region_dict`` contains the SeparatorRegion. Then don't write it to the PAGE file.
This function assumes that the text lines lie completely within the text regions and the baselines lie
completely within the text lines.
:param region_dict:
:param sep_poly:
:return:
"""
sep_poly_sh = geometry.Polygon(sep_poly).buffer(0)
if type(sep_poly_sh) == geometry.MultiPolygon:
sep_poly_sh = sep_poly_sh[np.argmax([poly.area for poly in list(sep_poly_sh)])]
# sep_poly_sh = sep_poly_sh[max(range(len(list(sep_poly_sh))), key=lambda i: list(sep_poly_sh)[i].area)]
for region_type, region_list in region_dict.items():
updated_region_list = deepcopy(region_list)
all_new_region_objects = []
for i, region in enumerate(region_list):
region_polygon_sh = geometry.Polygon(region.points.points_list)
if region_polygon_sh.intersects(sep_poly_sh):
if region_polygon_sh.contains(sep_poly_sh) or sep_poly_sh.contains(region_polygon_sh):
# don't need to check the other regions, provided that we don't have overlapping regions
return False
new_region_polys_sh = _split_shapely_polygon(region_polygon_sh, sep_poly_sh)
new_region_polys = [list(poly.exterior.coords) for poly in new_region_polys_sh]
new_region_objects = _create_page_objects(region, new_region_polys)
# if the region is a TextRegion we also need to take care of the baselines and text lines
if region_type == sTEXTREGION:
for new_region_object in new_region_objects:
new_region_object.text_lines = []
text_lines = region.text_lines
for text_line in text_lines:
text_line_sh = geometry.Polygon(text_line.surr_p.points_list).buffer(0)
if sep_poly_sh.contains(text_line_sh):
return False
if text_line_sh.intersects(sep_poly_sh):
text_line_splits_sh = _split_shapely_polygon(text_line_sh, sep_poly_sh)
text_line_splits = [list(poly.exterior.coords) for poly in text_line_splits_sh]
new_text_line_objects = _create_page_objects(text_line, text_line_splits)
for new_text_line_object in new_text_line_objects:
new_text_line_object.set_baseline(None)
new_text_line_object.words = []
# word_idx = np.argmax(
# [geometry.Polygon(word.surr_p.points_list).buffer(0).distance(sep_poly_sh)
# for word in text_line.words])
for word in text_line.words:
word_polygon_sh = geometry.Polygon(word.surr_p.points_list).buffer(0)
matching_textline_idx = np.argmax([word_polygon_sh.intersection(text_line_split_sh).area
for text_line_split_sh in text_line_splits_sh])
corr_textline = new_text_line_objects[matching_textline_idx]
corr_textline.words.append(word)
if len(text_line.words) > 0:
for new_text_line_object in new_text_line_objects:
new_text_line_object.text = " ".join([word.text for word in new_text_line_object.words])
baseline_sh = geometry.LineString(
text_line.baseline.points_list) if text_line.baseline is not None else None
if baseline_sh is not None and baseline_sh.intersects(sep_poly_sh):
baseline_splits = _split_shapely_polygon(baseline_sh, sep_poly_sh)
# baseline split -> text line split
for baseline_split in baseline_splits:
idx, parent_text_line = _get_parent_region(baseline_split,
text_line_splits_sh)
if idx is None:
continue
new_text_line_objects[idx].set_baseline(list(baseline_split.coords))
else:
text_line_splits_sh = [text_line_sh]
new_text_line_objects = [text_line]
# text line split -> region split
for text_line_split, new_text_line_object in zip(text_line_splits_sh,
new_text_line_objects):
idx, parent_region = _get_parent_region(text_line_split, new_region_polys_sh)
if idx is None:
continue
new_region_objects[idx].text_lines.append(new_text_line_object)
_delete_region_from_page(region.id)
offset = len(region_list) - len(updated_region_list)
updated_region_list.pop(i - offset)
# updated_region_list[i:i + 1] = new_region_objects
all_new_region_objects.extend(new_region_objects)
_add_regions_to_page(new_region_objects)
updated_region_list.extend(all_new_region_objects)
region_dict[region_type] = updated_region_list
# _add_regions_to_page(all_new_region_objects)
return True
def _add_separator_regions_to_page(separator_polygons, remove_holes=False):
for separator_polygon in separator_polygons:
if remove_holes and len(separator_polygon) > 1:
separator_polygon_ext = separator_polygon[0]
separator_polygon_int = separator_polygon[1:]
separator_polygon_int = [int_poly for int_poly in separator_polygon_int
if geometry.Polygon(int_poly).area > 1000]
separator_polygon_sh = geometry.Polygon(separator_polygon_ext, separator_polygon_int).buffer(0)
separator_polygon_parts_sh = self.convert_polygon_with_holes(separator_polygon_sh)
separator_polygon_parts = [list(sep_part.exterior.coords) for sep_part in separator_polygon_parts_sh]
# separator_polygon = list(separator_polygon_sh.exterior.coords)
for separator_polygon_part in separator_polygon_parts:
separator_id = self.page_object.get_unique_id(sSEPARATORREGION)
custom_tag_dict = None
if separator_type != sSEPARATORREGION:
custom_tag_dict = {"structure": {"orientation": separator_type.lstrip(sSEPARATORREGION + "_")}}
separator_region = SeparatorRegion(separator_id, points=separator_polygon_part,
custom=custom_tag_dict)
self.page_object.add_region(separator_region)
else:
# Ignore the inner polygons and only write the outer ones
separator_polygon = separator_polygon[0]
separator_id = self.page_object.get_unique_id(sSEPARATORREGION)
custom_tag_dict = None
if separator_type != sSEPARATORREGION:
custom_tag_dict = {"structure": {"orientation": separator_type.lstrip(sSEPARATORREGION + "_")}}
separator_region = SeparatorRegion(separator_id, points=separator_polygon, custom=custom_tag_dict)
self.page_object.add_region(separator_region)
text_regions = self.page_object.get_text_regions()
# For now we are only interested in the SeparatorRegion information
for separator_type in [sSEPARATORREGION, sSEPARATORREGION + "_horizontal", sSEPARATORREGION + "_vertical"]:
try:
separator_polygons
SELECT x := (
Issue.number,
(Issue.time_spent_log ?? DUMMY).spent_time
) ORDER BY x.0 THEN x.1;
''',
[
['1', 60],
['2', 90],
['3', 30],
['3', 60],
['4', -1],
['5', -1],
['6', -1],
],
)
async def test_edgeql_coalesce_object_03(self):
await self.assert_query_result(
r'''
WITH
DUMMY := (SELECT LogEntry FILTER LogEntry.body = 'Dummy')
SELECT x := (Issue.time_spent_log ?? DUMMY) {
spent_time
}
ORDER BY x.spent_time;
''',
[
{'spent_time': 30},
{'spent_time': 60},
{'spent_time': 60},
{'spent_time': 90},
],
sort=lambda x: x['spent_time']
)
async def test_edgeql_coalesce_object_04(self):
await self.assert_query_result(
r'''
WITH
DUMMY := (SELECT LogEntry FILTER LogEntry.body = 'Dummy')
SELECT (
(SELECT Issue
FILTER Issue.status.name = 'Open').time_spent_log
??
DUMMY
) {
id,
spent_time
};
''',
[
{'spent_time': -1},
],
)
async def test_edgeql_coalesce_object_05(self):
await self.assert_query_result(
r'''
WITH
DUMMY := (SELECT LogEntry FILTER LogEntry.body = 'Dummy'),
I := (
SELECT Issue
FILTER Issue.status.name = 'Open'
)
SELECT (I.time_spent_log ?? DUMMY) {
id,
spent_time
};
''',
[
{'spent_time': -1},
],
)
async def test_edgeql_coalesce_object_06(self):
await self.assert_query_result(
r'''
WITH
LOG1 := (SELECT LogEntry FILTER LogEntry.body = 'Log1')
SELECT Issue {
number,
log1 := Issue.time_spent_log ?= LOG1
} ORDER BY Issue.number;
''',
[
{
'number': '1',
'log1': [True],
}, {
'number': '2',
'log1': [False],
}, {
'number': '3',
'log1': [False, False]
}, {
'number': '4',
'log1': [False],
}, {
'number': '5',
'log1': [False],
}, {
'number': '6',
'log1': [False],
},
],
)
async def test_edgeql_coalesce_object_07(self):
await self.assert_query_result(
r'''
WITH
LOG1 := (SELECT LogEntry FILTER LogEntry.body = 'Log1')
SELECT (
Issue.number, Issue.time_spent_log ?= LOG1
) ORDER BY Issue.number;
''',
[
['1', True],
['2', False],
['3', False],
['3', False],
['4', False],
['5', False],
['6', False],
],
)
async def test_edgeql_coalesce_object_08(self):
await self.assert_query_result(
r'''
WITH
LOG1 := (SELECT LogEntry FILTER LogEntry.body = 'Log1')
SELECT Issue.time_spent_log ?!= LOG1;
''',
[
False,
True,
True,
True,
],
sort=True
)
async def test_edgeql_coalesce_object_09(self):
await self.assert_query_result(
r'''
WITH
DUMMY := (SELECT LogEntry FILTER LogEntry.body = 'Dummy')
SELECT (
SELECT Issue
FILTER Issue.status.name = 'Open'
).time_spent_log ?= DUMMY;
''',
[
False,
],
)
async def test_edgeql_coalesce_object_10(self):
await self.assert_query_result(
r'''
WITH
DUMMY := (SELECT LogEntry FILTER LogEntry.body = 'Dummy'),
I := (
SELECT Issue
FILTER Issue.status.name = 'Open'
)
SELECT I.time_spent_log ?!= DUMMY;
''',
[
True,
],
)
async def test_edgeql_coalesce_object_11(self):
await self.assert_query_result(
r'''
SELECT
(
(SELECT Issue FILTER .number = '1')
??
(SELECT Issue FILTER .number = '2')
) {
number
}
''',
[{
'number': '1',
}]
)
async def test_edgeql_coalesce_object_12(self):
await self.assert_query_result(
r'''
SELECT
(
(SELECT Issue FILTER .number = '100')
??
(SELECT Issue FILTER .number = '2')
) {
number
}
''',
[{
'number': '2',
}]
)
async def test_edgeql_coalesce_wrapping_optional(self):
await self.con.execute(
r'''
CREATE FUNCTION optfunc(
a: std::str, b: OPTIONAL std::str) -> OPTIONAL std::str
USING EdgeQL $$
SELECT b IF a = 'foo' ELSE a
$$;
'''
)
await self.assert_query_result(
r'''
SELECT optfunc('foo', <str>{}) ?? 'N/A';
''',
['N/A'],
)
await self.assert_query_result(
r'''
SELECT optfunc('foo', 'b') ?? 'N/A';
''',
['b'],
)
await self.assert_query_result(
r'''
SELECT optfunc('a', <str>{}) ?? 'N/A';
''',
['a'],
)
async def test_edgeql_coalesce_set_of_01(self):
await self.assert_query_result(
r'''
SELECT <str>Publication.id ?? <str>count(Publication)
''',
['0'],
)
async def test_edgeql_coalesce_set_of_02(self):
await self.assert_query_result(
r'''
SELECT Publication.title ?? <str>count(Publication)
''',
['0'],
)
async def test_edgeql_coalesce_set_of_03(self):
await self.assert_query_result(
r'''
SELECT <str>Publication.id ?= <str>count(Publication)
''',
[False],
)
async def test_edgeql_coalesce_set_of_04(self):
await self.assert_query_result(
r'''
SELECT Publication.title ?= <str>count(Publication)
''',
[False],
)
async def test_edgeql_coalesce_set_of_05(self):
await self.assert_query_result(
r'''
SELECT (Publication.title ?? <str>count(Publication))
?? Publication.title
''',
['0'],
)
async def test_edgeql_coalesce_set_of_06(self):
await self.assert_query_result(
r'''
SELECT (Publication.title ?= <str>count(Publication),
Publication)
''',
[],
)
async def test_edgeql_coalesce_set_of_07(self):
await self.assert_query_result(
r'''
SELECT (Publication.title ?= '0',
(Publication.title ?? <str>count(Publication)));
''',
[[False, '0']],
)
async def test_edgeql_coalesce_set_of_08(self):
await self.assert_query_result(
r'''
SELECT ("1" if Publication.title ?= "foo" else "2") ++
(Publication.title ?? <str>count(Publication))
''',
['20'],
)
async def test_edgeql_coalesce_set_of_09(self):
await self.assert_query_result(
r'''
SELECT (Publication.title ?= "Foo", Publication.title ?= "bar")
''',
[[False, False]],
)
async def test_edgeql_coalesce_set_of_10(self):
await self.assert_query_result(
r'''
SELECT (Publication.title++Publication.title ?= "Foo",
Publication.title ?= "bar")
''',
[[False, False]],
)
async def test_edgeql_coalesce_set_of_11(self):
await self.assert_query_result(
r'''
SELECT (Publication.title ?= "", count(Publication))
''',
[[False, 0]],
)
await self.assert_query_result(
r'''
SELECT (count(Publication), Publication.title ?= "")
''',
[[0, False]],
)
async def test_edgeql_coalesce_set_of_12(self):
await self.assert_query_result(
r'''
SELECT (
Publication ?= Publication,
(Publication.title++Publication.title
?= Publication.title) ?=
(Publication ?!= Publication)
)
''',
[[True, False]]
)
async def test_edgeql_coalesce_set_of_13(self):
await self.assert_query_result(
r'''
SELECT (Publication ?= Publication, Publication)
''',
[],
)
async def test_edgeql_coalesce_set_of_nonempty_01(self):
await self.con.execute(
'''INSERT Publication { title := "1" }''')
await self.con.execute(
'''INSERT Publication { title := "asdf" }''')
await self.assert_query_result(
r'''
SELECT Publication.title ?= <str>count(Publication)
''',
[True, False],
)
async def test_edgeql_coalesce_self_01(self):
await self.assert_query_result(
r'''
SELECT Publication ?? Publication
''',
[],
)
async def test_edgeql_coalesce_self_02(self):
await self.assert_query_result(
r'''
WITH Z := (SELECT Comment FILTER .owner.name = "Yury")
SELECT (Z.parent ?? Z);
''',
[],
)
async def test_edgeql_coalesce_pointless_01(self):
# This is pointless but it should work.
await self.assert_query_result(
r'''
SELECT 'a' ?? (SELECT {'a', 'b'})
''',
["a"],
)
async def test_edgeql_coalesce_correlation_01(self):
await self.assert_query_result(
r'''
SELECT _ := (
SELECT (Issue.name ++ <str>Issue.time_estimate)) ?? 'n/a'
ORDER BY _;
''',
["Issue 160", "Issue 290", "Issue 390"],
)
async def test_edgeql_coalesce_correlation_02(self):
await self.assert_query_result(
r'''
WITH X := (SELECT (Issue.name ++ <str>Issue.time_estimate)),
SELECT _ := X ?? 'n/a'
ORDER BY _;
''',
["Issue 160", "Issue 290", "Issue 390"],
)
async def test_edgeql_coalesce_correlation_03(self):
# TODO: add this to the schema if we want more like it
await self.con.execute('''
CREATE FUNCTION opts(x: OPTIONAL str) -> OPTIONAL str {
USING (x) };
''')
await self.assert_query_result(
r'''
SELECT _ := (
count(Issue),
opts((SELECT (<str>Issue.time_estimate))),
) ORDER BY _;
''',
[[6, "60"], [6, "90"], [6, "90"]],
)
async def test_edgeql_coalesce_tuple_01(self):
await self.assert_query_result(
r'''
SELECT (SELECT ('no', 'no') FILTER false) ?? ('a', 'b');
''',
[
['a', 'b'],
]
)
async def test_edgeql_coalesce_tuple_02(self):
await self.assert_query_result(
r'''
SELECT _ := (Issue.name, (Issue.name, <str>Issue.time_estimate)
?? ('hm', 'n/a')) ORDER BY _;
''',
[
["Issue 1", ["Issue 1", "60"]],
["Issue 2", ["Issue 2", "90"]],
["Issue 3", ["Issue 3", "90"]],
["Issue 4", ["hm", "n/a"]],
["Issue 5", ["hm", "n/a"]],
["Issue 6", ["hm", "n/a"]],
]
)
async def test_edgeql_coalesce_tuple_03(self):
await self.assert_query_result(
r'''
SELECT _ := (Issue.name, (Issue.name, Issue.time_estimate)
?? (Issue.name, -1)) ORDER BY _;
''',
[
["Issue 1", ["Issue 1", 60]],
["Issue 2", ["Issue 2", 90]],
["Issue 3", ["Issue 3", 90]],
["Issue 4", ["Issue 4", -1]],
["Issue 5", ["Issue 5", -1]],
["Issue 6", ["Issue 6", -1]],
]
)
async def test_edgeql_coalesce_tuple_04(self):
await self.assert_query_result(
r'''
SELECT _ := (Issue.name, Issue.time_estimate)
?? (Issue.name, -1) ORDER BY _;
''',
[
["Issue 1", 60],
["Issue 2", 90],
["Issue 3", 90],
["Issue 4", -1],
["Issue 5", -1],
["Issue 6", -1],
],
)
async def test_edgeql_coalesce_tuple_05(self):
await self.assert_query_result(
r'''
WITH X := (Issue.name, Issue.time_estimate),
SELECT _ := X ?? ('hm', -1) ORDER BY _;
''',
[
["Issue 1", 60],
["Issue 2", 90],
["Issue 3", 90],
],
)
async def test_edgeql_coalesce_tuple_06(self):
await self.assert_query_result(
r'''
SELECT (SELECT ((), 'no') FILTER false) ?? ((), 'b');
''',
[
[[], 'b'],
],
)
async def test_edgeql_coalesce_tuple_07(self):
await self.assert_query_result(
r'''
SELECT (SELECT () FILTER false) ?? {(), ()};
''',
[
[], []
],
)
await self.assert_query_result(
r'''
SELECT (SELECT () FILTER true) ?? {(), ()};
''',
[
[]
],
)
await self.assert_query_result(
r'''
SELECT (SELECT ((), ()) FILTER true) ?? {((), ()), ((), ())}
''',
[
[[], []]
],
)
async def test_edgeql_coalesce_tuple_08(self):
await self.con.execute('''
CREATE TYPE Foo {
CREATE PROPERTY bar -> tuple<int64, int64>;
CREATE PROPERTY baz -> tuple<tuple<int64, int64>, str>;
};
''')
await self.assert_query_result(
r'''
SELECT Foo.bar ?? (1, 2)
''',
[[1, 2]],
)
await self.assert_query_result(
r'''
SELECT Foo.bar UNION (1, 2)
''',
[[1, 2]],
)
await self.assert_query_result(
r'''
SELECT (Foo.bar ?? (1, 2)).0
''',
[1],
)
await self.assert_query_result(
r'''
SELECT (Foo.bar UNION (1, 2)).0
''',
[1],
)
await self.assert_query_result(
r'''
SELECT (Foo.baz ?? ((1, 2), 'huh')).0.1
''',
[2],
)
# Insert some data and mess around some more
await self.con.execute('''
INSERT Foo { bar := (3, 4), baz := ((3, 4), 'test') }
''')
await self.assert_query_result(
r'''
SELECT
# coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
from ._enums import *
from ._inputs import *
__all__ = ['FlowOutputArgs', 'FlowOutput']
@pulumi.input_type
class FlowOutputArgs:
def __init__(__self__, *,
flow_arn: pulumi.Input[str],
protocol: pulumi.Input['FlowOutputProtocol'],
cidr_allow_list: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
description: Optional[pulumi.Input[str]] = None,
destination: Optional[pulumi.Input[str]] = None,
encryption: Optional[pulumi.Input['FlowOutputEncryptionArgs']] = None,
max_latency: Optional[pulumi.Input[int]] = None,
min_latency: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
port: Optional[pulumi.Input[int]] = None,
remote_id: Optional[pulumi.Input[str]] = None,
smoothing_latency: Optional[pulumi.Input[int]] = None,
stream_id: Optional[pulumi.Input[str]] = None,
vpc_interface_attachment: Optional[pulumi.Input['FlowOutputVpcInterfaceAttachmentArgs']] = None):
"""
The set of arguments for constructing a FlowOutput resource.
:param pulumi.Input[str] flow_arn: The Amazon Resource Name (ARN), a unique identifier for any AWS resource, of the flow.
:param pulumi.Input['FlowOutputProtocol'] protocol: The protocol that is used by the source or output.
:param pulumi.Input[Sequence[pulumi.Input[str]]] cidr_allow_list: The range of IP addresses that should be allowed to initiate output requests to this flow. These IP addresses should be in the form of a Classless Inter-Domain Routing (CIDR) block; for example, 10.0.0.0/16.
:param pulumi.Input[str] description: A description of the output.
:param pulumi.Input[str] destination: The address where you want to send the output.
:param pulumi.Input['FlowOutputEncryptionArgs'] encryption: The type of key used for the encryption. If no keyType is provided, the service will use the default setting (static-key).
:param pulumi.Input[int] max_latency: The maximum latency in milliseconds. This parameter applies only to RIST-based and Zixi-based streams.
:param pulumi.Input[int] min_latency: The minimum latency in milliseconds.
:param pulumi.Input[str] name: The name of the output. This value must be unique within the current flow.
:param pulumi.Input[int] port: The port to use when content is distributed to this output.
:param pulumi.Input[str] remote_id: The remote ID for the Zixi-pull stream.
:param pulumi.Input[int] smoothing_latency: The smoothing latency in milliseconds for RIST, RTP, and RTP-FEC streams.
:param pulumi.Input[str] stream_id: The stream ID that you want to use for this transport. This parameter applies only to Zixi-based streams.
:param pulumi.Input['FlowOutputVpcInterfaceAttachmentArgs'] vpc_interface_attachment: The name of the VPC interface attachment to use for this output.
"""
pulumi.set(__self__, "flow_arn", flow_arn)
pulumi.set(__self__, "protocol", protocol)
if cidr_allow_list is not None:
pulumi.set(__self__, "cidr_allow_list", cidr_allow_list)
if description is not None:
pulumi.set(__self__, "description", description)
if destination is not None:
pulumi.set(__self__, "destination", destination)
if encryption is not None:
pulumi.set(__self__, "encryption", encryption)
if max_latency is not None:
pulumi.set(__self__, "max_latency", max_latency)
if min_latency is not None:
pulumi.set(__self__, "min_latency", min_latency)
if name is not None:
pulumi.set(__self__, "name", name)
if port is not None:
pulumi.set(__self__, "port", port)
if remote_id is not None:
pulumi.set(__self__, "remote_id", remote_id)
if smoothing_latency is not None:
pulumi.set(__self__, "smoothing_latency", smoothing_latency)
if stream_id is not None:
pulumi.set(__self__, "stream_id", stream_id)
if vpc_interface_attachment is not None:
pulumi.set(__self__, "vpc_interface_attachment", vpc_interface_attachment)
@property
@pulumi.getter(name="flowArn")
def flow_arn(self) -> pulumi.Input[str]:
"""
The Amazon Resource Name (ARN), a unique identifier for any AWS resource, of the flow.
"""
return pulumi.get(self, "flow_arn")
@flow_arn.setter
def flow_arn(self, value: pulumi.Input[str]):
pulumi.set(self, "flow_arn", value)
@property
@pulumi.getter
def protocol(self) -> pulumi.Input['FlowOutputProtocol']:
"""
The protocol that is used by the source or output.
"""
return pulumi.get(self, "protocol")
@protocol.setter
def protocol(self, value: pulumi.Input['FlowOutputProtocol']):
pulumi.set(self, "protocol", value)
@property
@pulumi.getter(name="cidrAllowList")
def cidr_allow_list(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
The range of IP addresses that should be allowed to initiate output requests to this flow. These IP addresses should be in the form of a Classless Inter-Domain Routing (CIDR) block; for example, 10.0.0.0/16.
"""
return pulumi.get(self, "cidr_allow_list")
@cidr_allow_list.setter
def cidr_allow_list(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "cidr_allow_list", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
A description of the output.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def destination(self) -> Optional[pulumi.Input[str]]:
"""
The address where you want to send the output.
"""
return pulumi.get(self, "destination")
@destination.setter
def destination(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "destination", value)
@property
@pulumi.getter
def encryption(self) -> Optional[pulumi.Input['FlowOutputEncryptionArgs']]:
"""
The type of key used for the encryption. If no keyType is provided, the service will use the default setting (static-key).
"""
return pulumi.get(self, "encryption")
@encryption.setter
def encryption(self, value: Optional[pulumi.Input['FlowOutputEncryptionArgs']]):
pulumi.set(self, "encryption", value)
@property
@pulumi.getter(name="maxLatency")
def max_latency(self) -> Optional[pulumi.Input[int]]:
"""
The maximum latency in milliseconds. This parameter applies only to RIST-based and Zixi-based streams.
"""
return pulumi.get(self, "max_latency")
@max_latency.setter
def max_latency(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_latency", value)
@property
@pulumi.getter(name="minLatency")
def min_latency(self) -> Optional[pulumi.Input[int]]:
"""
The minimum latency in milliseconds.
"""
return pulumi.get(self, "min_latency")
@min_latency.setter
def min_latency(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "min_latency", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the output. This value must be unique within the current flow.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def port(self) -> Optional[pulumi.Input[int]]:
"""
The port to use when content is distributed to this output.
"""
return pulumi.get(self, "port")
@port.setter
def port(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "port", value)
@property
@pulumi.getter(name="remoteId")
def remote_id(self) -> Optional[pulumi.Input[str]]:
"""
The remote ID for the Zixi-pull stream.
"""
return pulumi.get(self, "remote_id")
@remote_id.setter
def remote_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "remote_id", value)
@property
@pulumi.getter(name="smoothingLatency")
def smoothing_latency(self) -> Optional[pulumi.Input[int]]:
"""
The smoothing latency in milliseconds for RIST, RTP, and RTP-FEC streams.
"""
return pulumi.get(self, "smoothing_latency")
@smoothing_latency.setter
def smoothing_latency(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "smoothing_latency", value)
@property
@pulumi.getter(name="streamId")
def stream_id(self) -> Optional[pulumi.Input[str]]:
"""
The stream ID that you want to use for this transport. This parameter applies only to Zixi-based streams.
"""
return pulumi.get(self, "stream_id")
@stream_id.setter
def stream_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "stream_id", value)
@property
@pulumi.getter(name="vpcInterfaceAttachment")
def vpc_interface_attachment(self) -> Optional[pulumi.Input['FlowOutputVpcInterfaceAttachmentArgs']]:
"""
The name of the VPC interface attachment to use for this output.
"""
return pulumi.get(self, "vpc_interface_attachment")
@vpc_interface_attachment.setter
def vpc_interface_attachment(self, value: Optional[pulumi.Input['FlowOutputVpcInterfaceAttachmentArgs']]):
pulumi.set(self, "vpc_interface_attachment", value)
class FlowOutput(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
cidr_allow_list: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
description: Optional[pulumi.Input[str]] = None,
destination: Optional[pulumi.Input[str]] = None,
encryption: Optional[pulumi.Input[pulumi.InputType['FlowOutputEncryptionArgs']]] = None,
flow_arn: Optional[pulumi.Input[str]] = None,
max_latency: Optional[pulumi.Input[int]] = None,
min_latency: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
port: Optional[pulumi.Input[int]] = None,
protocol: Optional[pulumi.Input['FlowOutputProtocol']] = None,
remote_id: Optional[pulumi.Input[str]] = None,
smoothing_latency: Optional[pulumi.Input[int]] = None,
stream_id: Optional[pulumi.Input[str]] = None,
vpc_interface_attachment: Optional[pulumi.Input[pulumi.InputType['FlowOutputVpcInterfaceAttachmentArgs']]] = None,
__props__=None):
"""
Resource schema for AWS::MediaConnect::FlowOutput
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[Sequence[pulumi.Input[str]]] cidr_allow_list: The range of IP addresses that should be allowed to initiate output requests to this flow. These IP addresses should be in the form of a Classless Inter-Domain Routing (CIDR) block; for example, 10.0.0.0/16.
:param pulumi.Input[str] description: A description of the output.
:param pulumi.Input[str] destination: The address where you want to send the output.
:param pulumi.Input[pulumi.InputType['FlowOutputEncryptionArgs']] encryption: The type of key used for the encryption. If no keyType is provided, the service will use the default setting (static-key).
:param pulumi.Input[str] flow_arn: The Amazon Resource Name (ARN), a unique identifier for any AWS resource, of the flow.
:param pulumi.Input[int] max_latency: The maximum latency in milliseconds. This parameter applies only to RIST-based and Zixi-based streams.
:param pulumi.Input[int] min_latency: The minimum latency in milliseconds.
:param pulumi.Input[str] name: The name of the output. This value must be unique within the current flow.
:param pulumi.Input[int] port: The port to use when content is distributed to this output.
:param pulumi.Input['FlowOutputProtocol'] protocol: The protocol that is used by the source or output.
:param pulumi.Input[str] remote_id: The remote ID for the Zixi-pull stream.
:param pulumi.Input[int] smoothing_latency: The smoothing latency in milliseconds for RIST, RTP, and RTP-FEC streams.
:param pulumi.Input[str] stream_id: The stream ID that you want to use for this transport. This parameter applies only to Zixi-based streams.
:param pulumi.Input[pulumi.InputType['FlowOutputVpcInterfaceAttachmentArgs']] vpc_interface_attachment: The name of the VPC interface attachment to use for this output.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: FlowOutputArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Resource schema for AWS::MediaConnect::FlowOutput
:param str resource_name: The name of the resource.
:param FlowOutputArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
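The docstrings above encode several protocol-specific constraints (max_latency applies only to RIST-based and Zixi-based streams, stream_id only to Zixi-based streams, remote_id only to Zixi-pull). A hypothetical standalone checker illustrating those rules, assuming lowercase protocol names such as 'rist', 'zixi-push' and 'zixi-pull' (this helper is not part of the Pulumi SDK):

```python
def validate_flow_output(protocol, max_latency=None, stream_id=None,
                         remote_id=None):
    """Illustrative helper (not part of the Pulumi SDK) checking the
    parameter constraints documented above: max_latency applies only to
    RIST-based and Zixi-based streams, stream_id only to Zixi-based
    streams, and remote_id only to the Zixi-pull protocol."""
    errors = []
    zixi = protocol in ('zixi-push', 'zixi-pull')
    if max_latency is not None and not (zixi or protocol == 'rist'):
        errors.append('max_latency applies only to RIST/Zixi streams')
    if stream_id is not None and not zixi:
        errors.append('stream_id applies only to Zixi-based streams')
    if remote_id is not None and protocol != 'zixi-pull':
        errors.append('remote_id applies only to zixi-pull streams')
    return errors

print(validate_flow_output('rtp', max_latency=2000))
```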
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(FlowOutputArgs, pulumi.ResourceOptions, *args, **kwargs)
# venv/Lib/site-packages/pygments/cmdline.py
# -*- coding: utf-8 -*-
"""
pygments.cmdline
~~~~~~~~~~~~~~~~
Command line interface.
:copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import os
import sys
import getopt
from textwrap import dedent
from pygments import __version__, highlight
from pygments.util import ClassNotFound, OptionError, docstring_headline, \
guess_decode, guess_decode_from_terminal, terminal_encoding, \
UnclosingTextIOWrapper
from pygments.lexers import get_all_lexers, get_lexer_by_name, guess_lexer, \
load_lexer_from_file, get_lexer_for_filename, find_lexer_class_for_filename
from pygments.lexers.special import TextLexer
from pygments.formatters.latex import LatexEmbeddedLexer, LatexFormatter
from pygments.formatters import get_all_formatters, get_formatter_by_name, \
load_formatter_from_file, get_formatter_for_filename, find_formatter_class
from pygments.formatters.terminal import TerminalFormatter
from pygments.formatters.terminal256 import Terminal256Formatter
from pygments.filters import get_all_filters, find_filter_class
from pygments.styles import get_all_styles, get_style_by_name
USAGE = """\
Usage: %s [-l <lexer> | -g] [-F <filter>[:<options>]] [-f <formatter>]
[-O <options>] [-P <option=value>] [-s] [-v] [-x] [-o <outfile>] [<infile>]
%s -S <style> -f <formatter> [-a <arg>] [-O <options>] [-P <option=value>]
%s -L [<which> ...]
%s -N <filename>
%s -H <type> <name>
%s -h | -V
Highlight the input file and write the result to <outfile>.
If no input file is given, use stdin, if -o is not given, use stdout.
If -s is passed, lexing will be done in "streaming" mode, reading and
highlighting one line at a time. This will only work properly with
lexers that have no constructs spanning multiple lines!
<lexer> is a lexer name (query all lexer names with -L). If -l is not
given, the lexer is guessed from the extension of the input file name
(this obviously doesn't work if the input is stdin). If -g is passed,
attempt to guess the lexer from the file contents, or pass through as
plain text if this fails (this can work for stdin).
Likewise, <formatter> is a formatter name, and will be guessed from
the extension of the output file name. If no output file is given,
the terminal formatter will be used by default.
The additional option -x allows custom lexers and formatters to be
loaded from a .py file relative to the current working directory. For
example, ``-l ./customlexer.py -x``. By default, this option expects a
file with a class named CustomLexer or CustomFormatter; you can also
specify your own class name with a colon (``-l ./lexer.py:MyLexer``).
Users should be very careful not to use this option with untrusted files,
because it will import and run them.
With the -O option, you can give the lexer and formatter a comma-
separated list of options, e.g. ``-O bg=light,python=cool``.
The -P option adds lexer and formatter options like the -O option, but
you can only give one option per -P. That way, the option value may
contain commas and equals signs, which it can't with -O, e.g.
``-P "heading=Pygments, the Python highlighter"``.
With the -F option, you can add filters to the token stream, you can
give options in the same way as for -O after a colon (note: there must
not be spaces around the colon).
The -O, -P and -F options can be given multiple times.
With the -S option, print out style definitions for style <style>
for formatter <formatter>. The argument given by -a is formatter
dependent.
The -L option lists lexers, formatters, styles or filters -- set
`which` to the thing you want to list (e.g. "styles"), or omit it to
list everything.
The -N option guesses and prints out a lexer name based solely on
the given filename. It does not take input or highlight anything.
If no specific lexer can be determined "text" is returned.
The -H option prints detailed help for the object <name> of type <type>,
where <type> is one of "lexer", "formatter" or "filter".
The -s option processes lines one at a time until EOF, rather than
waiting to process the entire file. This only works for stdin, and
is intended for streaming input such as you get from 'tail -f'.
Example usage: "tail -f sql.log | pygmentize -s -l sql"
The -v option prints a detailed traceback on unhandled exceptions,
which is useful for debugging and bug reports.
The -h option prints this help.
The -V option prints the package version.
"""
def _parse_options(o_strs):
opts = {}
if not o_strs:
return opts
for o_str in o_strs:
if not o_str.strip():
continue
o_args = o_str.split(',')
for o_arg in o_args:
o_arg = o_arg.strip()
try:
o_key, o_val = o_arg.split('=', 1)
o_key = o_key.strip()
o_val = o_val.strip()
except ValueError:
opts[o_arg] = True
else:
opts[o_key] = o_val
return opts
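The comma-splitting behaviour above can be exercised in isolation; a minimal standalone sketch mirroring `_parse_options` (tokens without '=' become boolean True flags):

```python
def parse_options(o_strs):
    """Parse comma-separated key=value option strings into a dict;
    tokens without '=' become boolean True flags, mirroring
    _parse_options above."""
    opts = {}
    for o_str in o_strs or []:
        for o_arg in o_str.split(','):
            o_arg = o_arg.strip()
            if not o_arg:
                continue
            key, sep, val = o_arg.partition('=')
            if sep:
                opts[key.strip()] = val.strip()
            else:
                opts[o_arg] = True
    return opts

print(parse_options(['bg=light,python=cool', 'linenos']))
# -> {'bg': 'light', 'python': 'cool', 'linenos': True}
```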
def _parse_filters(f_strs):
filters = []
if not f_strs:
return filters
for f_str in f_strs:
if ':' in f_str:
fname, fopts = f_str.split(':', 1)
filters.append((fname, _parse_options([fopts])))
else:
filters.append((f_str, {}))
return filters
def _print_help(what, name):
try:
if what == 'lexer':
cls = get_lexer_by_name(name)
print("Help on the %s lexer:" % cls.name)
print(dedent(cls.__doc__))
elif what == 'formatter':
cls = find_formatter_class(name)
print("Help on the %s formatter:" % cls.name)
print(dedent(cls.__doc__))
elif what == 'filter':
cls = find_filter_class(name)
print("Help on the %s filter:" % name)
print(dedent(cls.__doc__))
return 0
except (AttributeError, ValueError):
print("%s not found!" % what, file=sys.stderr)
return 1
def _print_list(what):
if what == 'lexer':
print()
print("Lexers:")
print("~~~~~~~")
info = []
for fullname, names, exts, _ in get_all_lexers():
tup = (', '.join(names)+':', fullname,
exts and '(filenames ' + ', '.join(exts) + ')' or '')
info.append(tup)
info.sort()
for i in info:
print(('* %s\n %s %s') % i)
elif what == 'formatter':
print()
print("Formatters:")
print("~~~~~~~~~~~")
info = []
for cls in get_all_formatters():
doc = docstring_headline(cls)
tup = (', '.join(cls.aliases) + ':', doc, cls.filenames and
'(filenames ' + ', '.join(cls.filenames) + ')' or '')
info.append(tup)
info.sort()
for i in info:
print(('* %s\n %s %s') % i)
elif what == 'filter':
print()
print("Filters:")
print("~~~~~~~~")
for name in get_all_filters():
cls = find_filter_class(name)
print("* " + name + ':')
print(" %s" % docstring_headline(cls))
elif what == 'style':
print()
print("Styles:")
print("~~~~~~~")
for name in get_all_styles():
cls = get_style_by_name(name)
print("* " + name + ':')
print(" %s" % docstring_headline(cls))
def main_inner(popts, args, usage):
opts = {}
O_opts = []
P_opts = []
F_opts = []
for opt, arg in popts:
if opt == '-O':
O_opts.append(arg)
elif opt == '-P':
P_opts.append(arg)
elif opt == '-F':
F_opts.append(arg)
opts[opt] = arg
if opts.pop('-h', None) is not None:
print(usage)
return 0
if opts.pop('-V', None) is not None:
print('Pygments version %s, (c) 2006-2021 by <NAME>.' % __version__)
return 0
# handle ``pygmentize -L``
L_opt = opts.pop('-L', None)
if L_opt is not None:
if opts:
print(usage, file=sys.stderr)
return 2
# print version
main(['', '-V'])
if not args:
args = ['lexer', 'formatter', 'filter', 'style']
for arg in args:
_print_list(arg.rstrip('s'))
return 0
# handle ``pygmentize -H``
H_opt = opts.pop('-H', None)
if H_opt is not None:
if opts or len(args) != 2:
print(usage, file=sys.stderr)
return 2
what, name = args # pylint: disable=unbalanced-tuple-unpacking
if what not in ('lexer', 'formatter', 'filter'):
print(usage, file=sys.stderr)
return 2
return _print_help(what, name)
# parse -O options
parsed_opts = _parse_options(O_opts)
opts.pop('-O', None)
# parse -P options
for p_opt in P_opts:
try:
name, value = p_opt.split('=', 1)
except ValueError:
parsed_opts[p_opt] = True
else:
parsed_opts[name] = value
opts.pop('-P', None)
# encodings
inencoding = parsed_opts.get('inencoding', parsed_opts.get('encoding'))
outencoding = parsed_opts.get('outencoding', parsed_opts.get('encoding'))
# handle ``pygmentize -N``
infn = opts.pop('-N', None)
if infn is not None:
lexer = find_lexer_class_for_filename(infn)
if lexer is None:
lexer = TextLexer
print(lexer.aliases[0])
return 0
# handle ``pygmentize -S``
S_opt = opts.pop('-S', None)
a_opt = opts.pop('-a', None)
if S_opt is not None:
f_opt = opts.pop('-f', None)
if not f_opt:
print(usage, file=sys.stderr)
return 2
if opts or args:
print(usage, file=sys.stderr)
return 2
try:
parsed_opts['style'] = S_opt
fmter = get_formatter_by_name(f_opt, **parsed_opts)
except ClassNotFound as err:
print(err, file=sys.stderr)
return 1
print(fmter.get_style_defs(a_opt or ''))
return 0
# if no -S is given, -a is not allowed
if a_opt is not None:
print(usage, file=sys.stderr)
return 2
# parse -F options
F_opts = _parse_filters(F_opts)
opts.pop('-F', None)
allow_custom_lexer_formatter = False
# -x: allow custom (eXternal) lexers and formatters
if opts.pop('-x', None) is not None:
allow_custom_lexer_formatter = True
# select lexer
lexer = None
# given by name?
lexername = opts.pop('-l', None)
if lexername:
# custom lexer, located relative to user's cwd
if allow_custom_lexer_formatter and '.py' in lexername:
try:
filename = None
name = None
if ':' in lexername:
filename, name = lexername.rsplit(':', 1)
if '.py' in name:
# This can happen on Windows: If the lexername is
# C:\lexer.py -- return to normal load path in that case
name = None
#!/usr/bin/env python
# -*- coding: utf-8 -*-
###############################################################################
# $Id$
#
# Project: GDAL/OGR Test Suite
# Purpose: Test basic read support for all datatypes from a TIFF file.
# Author: <NAME> <<EMAIL>>
#
###############################################################################
# Copyright (c) 2003, <NAME> <<EMAIL>>
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Library General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Library General Public License for more details.
#
# You should have received a copy of the GNU Library General Public
# License along with this library; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place - Suite 330,
# Boston, MA 02111-1307, USA.
###############################################################################
import os
import sys
import string
import shutil
sys.path.append( '../pymod' )
import gdaltest
from osgeo import gdal, osr
###############################################################################
# When imported build a list of units based on the files available.
gdaltest_list = []
init_list = [ \
('byte.tif', 1, 4672, None),
('int10.tif', 1, 4672, None),
('int12.tif', 1, 4672, None),
('int16.tif', 1, 4672, None),
('uint16.tif', 1, 4672, None),
('int24.tif', 1, 4672, None),
('int32.tif', 1, 4672, None),
('uint32.tif', 1, 4672, None),
('float16.tif', 1, 4672, None),
('float24.tif', 1, 4672, None),
('float32.tif', 1, 4672, None),
('float32_minwhite.tif', 1, 1, None),
('float64.tif', 1, 4672, None),
('cint16.tif', 1, 5028, None),
('cint32.tif', 1, 5028, None),
('cfloat32.tif', 1, 5028, None),
('cfloat64.tif', 1, 5028, None),
# The following four related partial final strip/tiles (#1179)
('separate_tiled.tif', 2, 15234, None),
('seperate_strip.tif', 2, 15234, None),
('contig_tiled.tif', 2, 15234, None),
('contig_strip.tif', 2, 15234, None),
('empty1bit.tif', 1, 0, None)]
###############################################################################
# Test absolute/offset && index directory access
def tiff_read_off():
# Test absolute/offset directory access
ds = gdal.Open('GTIFF_DIR:off:408:data/byte.tif')
if ds.GetRasterBand(1).Checksum() != 4672:
return 'fail'
# Test index directory access
ds = gdal.Open('GTIFF_DIR:1:data/byte.tif')
if ds.GetRasterBand(1).Checksum() != 4672:
return 'fail'
# Check that georeferencing is read properly when accessing "GTIFF_DIR" subdatasets (#3478)
gt = ds.GetGeoTransform()
if gt != (440720.0, 60.0, 0.0, 3751320.0, 0.0, -60.0):
gdaltest.post_reason('did not get expected geotransform')
print(gt)
return 'fail'
return 'success'
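tiff_read_off above uses GDAL's "GTIFF_DIR" subdataset syntax in two forms: by absolute byte offset ('GTIFF_DIR:off:408:...') and by directory index ('GTIFF_DIR:1:...'). A small helper sketching how such connection strings are built (the helper itself is hypothetical, only the string formats come from the test above):

```python
def gtiff_dir_name(filename, directory=1, offset=None):
    """Build a GTIFF_DIR connection string for opening a specific TIFF
    directory, as exercised in tiff_read_off above: either by absolute
    offset ('GTIFF_DIR:off:408:file.tif') or by index
    ('GTIFF_DIR:1:file.tif')."""
    if offset is not None:
        return 'GTIFF_DIR:off:%d:%s' % (offset, filename)
    return 'GTIFF_DIR:%d:%s' % (directory, filename)

print(gtiff_dir_name('data/byte.tif', offset=408))
# -> GTIFF_DIR:off:408:data/byte.tif
```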
###############################################################################
# Confirm we interpret bands as alpha when we should, and not when we
# should not.
def tiff_check_alpha():
# Grey + alpha
ds = gdal.Open('data/stefan_full_greyalpha.tif')
if ds.GetRasterBand(2).GetRasterColorInterpretation()!= gdal.GCI_AlphaBand:
gdaltest.post_reason( 'Wrong color interpretation (stefan_full_greyalpha).')
print(ds.GetRasterBand(2).GetRasterColorInterpretation())
return 'fail'
ds = None
# RGB + alpha
ds = gdal.Open('data/stefan_full_rgba.tif')
if ds.GetRasterBand(4).GetRasterColorInterpretation()!= gdal.GCI_AlphaBand:
gdaltest.post_reason( 'Wrong color interpretation (stefan_full_rgba).')
print(ds.GetRasterBand(4).GetRasterColorInterpretation())
return 'fail'
ds = None
# RGB + undefined
ds = gdal.Open('data/stefan_full_rgba_photometric_rgb.tif')
if ds.GetRasterBand(4).GetRasterColorInterpretation()!= gdal.GCI_Undefined:
gdaltest.post_reason( 'Wrong color interpretation (stefan_full_rgba_photometric_rgb).')
print(ds.GetRasterBand(4).GetRasterColorInterpretation())
return 'fail'
ds = None
return 'success'
###############################################################################
# Test reading a CMYK tiff as RGBA image
def tiff_read_cmyk_rgba():
ds = gdal.Open('data/rgbsmall_cmyk.tif')
md = ds.GetMetadata('IMAGE_STRUCTURE')
if 'SOURCE_COLOR_SPACE' not in md or md['SOURCE_COLOR_SPACE'] != 'CMYK':
print('bad value for IMAGE_STRUCTURE[SOURCE_COLOR_SPACE]')
return 'fail'
if ds.GetRasterBand(1).GetRasterColorInterpretation()!= gdal.GCI_RedBand:
gdaltest.post_reason( 'Wrong color interpretation.')
print(ds.GetRasterBand(1).GetRasterColorInterpretation())
return 'fail'
if ds.GetRasterBand(4).GetRasterColorInterpretation()!= gdal.GCI_AlphaBand:
gdaltest.post_reason( 'Wrong color interpretation (alpha).')
print(ds.GetRasterBand(4).GetRasterColorInterpretation())
return 'fail'
if ds.GetRasterBand(1).Checksum() != 23303:
print('Expected checksum = %d. Got = %d' % (23303, ds.GetRasterBand(1).Checksum()))
return 'fail'
return 'success'
###############################################################################
# Test reading a CMYK tiff as a raw image
def tiff_read_cmyk_raw():
ds = gdal.Open('GTIFF_RAW:data/rgbsmall_cmyk.tif')
if ds.GetRasterBand(1).GetRasterColorInterpretation()!= gdal.GCI_CyanBand:
gdaltest.post_reason( 'Wrong color interpretation.')
print(ds.GetRasterBand(1).GetRasterColorInterpretation())
return 'fail'
if ds.GetRasterBand(1).Checksum() != 29430:
print('Expected checksum = %d. Got = %d' % (29430, ds.GetRasterBand(1).Checksum()))
return 'fail'
return 'success'
###############################################################################
# Test reading an OJPEG image
def tiff_read_ojpeg():
md = gdal.GetDriverByName('GTiff').GetMetadata()
if md['DMD_CREATIONOPTIONLIST'].find('JPEG') == -1:
return 'skip'
gdal.PushErrorHandler('CPLQuietErrorHandler')
ds = gdal.Open('data/zackthecat.tif')
gdal.PopErrorHandler()
if ds is None:
if gdal.GetLastErrorMsg().find('Cannot open TIFF file due to missing codec') == 0:
return 'skip'
else:
print(gdal.GetLastErrorMsg())
return 'fail'
gdal.PushErrorHandler('CPLQuietErrorHandler')
got_cs = ds.GetRasterBand(1).Checksum()
gdal.PopErrorHandler()
expected_cs = 61570
if got_cs != expected_cs:
print('Expected checksum = %d. Got = %d' % (expected_cs, got_cs))
return 'fail'
return 'success'
###############################################################################
# Read a .tif.gz file
def tiff_read_gzip():
try:
os.remove('data/byte.tif.gz.properties')
except:
pass
ds = gdal.Open('/vsigzip/./data/byte.tif.gz')
if ds.GetRasterBand(1).Checksum() != 4672:
print('Expected checksum = %d. Got = %d' % (4672, ds.GetRasterBand(1).Checksum()))
return 'fail'
ds = None
try:
os.stat('data/byte.tif.gz.properties')
gdaltest.post_reason('did not expect data/byte.tif.gz.properties')
return 'fail'
except:
return 'success'
###############################################################################
# Read a .tif.zip file (with explicit filename)
def tiff_read_zip_1():
ds = gdal.Open('/vsizip/./data/byte.tif.zip/byte.tif')
if ds.GetRasterBand(1).Checksum() != 4672:
print('Expected checksum = %d. Got = %d' % (4672, ds.GetRasterBand(1).Checksum()))
return 'fail'
ds = None
return 'success'
###############################################################################
# Read a .tif.zip file (with implicit filename)
def tiff_read_zip_2():
ds = gdal.Open('/vsizip/./data/byte.tif.zip')
if ds.GetRasterBand(1).Checksum() != 4672:
print('Expected checksum = %d. Got = %d' % (4672, ds.GetRasterBand(1).Checksum()))
return 'fail'
ds = None
return 'success'
###############################################################################
# Read a .tif.zip file with a single file in a subdirectory (with explicit filename)
def tiff_read_zip_3():
ds = gdal.Open('/vsizip/./data/onefileinsubdir.zip/onefileinsubdir/byte.tif')
if ds.GetRasterBand(1).Checksum() != 4672:
print('Expected checksum = %d. Got = %d' % (4672, ds.GetRasterBand(1).Checksum()))
return 'fail'
ds = None
return 'success'
###############################################################################
# Read a .tif.zip file with a single file in a subdirectory (with implicit filename)
def tiff_read_zip_4():
ds = gdal.Open('/vsizip/./data/onefileinsubdir.zip')
if ds.GetRasterBand(1).Checksum() != 4672:
print('Expected checksum = %d. Got = %d' % (4672, ds.GetRasterBand(1).Checksum()))
return 'fail'
ds = None
return 'success'
###############################################################################
# Read a .tif.zip file with 2 files in a subdirectory
def tiff_read_zip_5():
ds = gdal.Open('/vsizip/./data/twofileinsubdir.zip/twofileinsubdir/byte.tif')
if ds.GetRasterBand(1).Checksum() != 4672:
print('Expected checksum = %d. Got = %d' % (4672, ds.GetRasterBand(1).Checksum()))
return 'fail'
ds = None
return 'success'
###############################################################################
# Read a .tar file (with explicit filename)
def tiff_read_tar_1():
ds = gdal.Open('/vsitar/./data/byte.tar/byte.tif')
if ds.GetRasterBand(1).Checksum() != 4672:
print('Expected checksum = %d. Got = %d' % (4672, ds.GetRasterBand(1).Checksum()))
return 'fail'
ds = None
return 'success'
###############################################################################
# Read a .tar file (with implicit filename)
def tiff_read_tar_2():
ds = gdal.Open('/vsitar/./data/byte.tar')
if ds.GetRasterBand(1).Checksum() != 4672:
print('Expected checksum = %d. Got = %d' % (4672, ds.GetRasterBand(1).Checksum()))
return 'fail'
ds = None
return 'success'
###############################################################################
# Read a .tgz file (with explicit filename)
def tiff_read_tgz_1():
ds = gdal.Open('/vsitar/./data/byte.tgz/byte.tif')
if ds.GetRasterBand(1).Checksum() != 4672:
print('Expected checksum = %d. Got = %d' % (4672, ds.GetRasterBand(1).Checksum()))
return 'fail'
ds = None
return 'success'
###############################################################################
# Read a .tgz file (with implicit filename)
def tiff_read_tgz_2():
ds = gdal.Open('/vsitar/./data/byte.tgz')
if ds.GetRasterBand(1).Checksum() != 4672:
print('Expected checksum = %d. Got = %d' % (4672, ds.GetRasterBand(1).Checksum()))
return 'fail'
ds = None
return 'success'
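The /vsizip, /vsitar and /vsigzip tests above all repeat the same open-and-checksum comparison against band 1 of data/byte.tif. The shared pattern can be factored out (a hypothetical helper, not part of gdaltest):

```python
def check_checksum(test_name, got_cs, expected_cs=4672):
    """Compare a band checksum against the expected value, mirroring the
    pattern repeated in the tiff_read_zip_* / tar_* / tgz_* tests above
    (4672 is the checksum of band 1 of data/byte.tif)."""
    if got_cs != expected_cs:
        print('%s: Expected checksum = %d. Got = %d'
              % (test_name, expected_cs, got_cs))
        return 'fail'
    return 'success'

print(check_checksum('tiff_read_zip_1', 4672))  # -> success
```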
###############################################################################
# Check handling of non-degree angular units (#601)
def tiff_grads():
ds = gdal.Open('data/test_gf.tif')
srs = ds.GetProjectionRef()
if srs.find('PARAMETER["latitude_of_origin",46.8]') == -1:
print(srs)
gdaltest.post_reason( 'Did not get expected latitude of origin.' )
return 'fail'
return 'success'
###############################################################################
# Check Erdas Citation Parsing for coordinate system.
def tiff_citation():
build_info = gdal.VersionInfo('BUILD_INFO')
if build_info.find('ESRI_BUILD=YES') == -1:
return 'skip'
ds = gdal.Open('data/citation_mixedcase.tif')
wkt = ds.GetProjectionRef()
expected_wkt = """PROJCS["NAD_1983_HARN_StatePlane_Oregon_North_FIPS_3601_Feet_Intl",GEOGCS["GCS_North_American_1983_HARN",DATUM["NAD83_High_Accuracy_Reference_Network",SPHEROID["GRS_1980",6378137.0,298.257222101]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]],PROJECTION["Lambert_Conformal_Conic_2SP"],PARAMETER["False_Easting",8202099.737532808],PARAMETER["False_Northing",0.0],PARAMETER["Central_Meridian",-120.5],PARAMETER["Standard_Parallel_1",44.33333333333334],PARAMETER["Standard_Parallel_2",46.0],PARAMETER["Latitude_Of_Origin",43.66666666666666],UNIT["Foot",0.3048]]"""
if wkt != expected_wkt:
print('got: ', wkt)
gdaltest.post_reason( 'Erdas citation processing failing?' )
return 'fail'
return 'success'
###############################################################################
# Check that we can read linear projection parameters properly (#3901)
def tiff_linearparmunits():
# Test the file with the correct formulation.
ds = gdal.Open('data/spaf27_correct.tif')
wkt = ds.GetProjectionRef()
ds = None
srs = osr.SpatialReference( wkt )
fe = srs.GetProjParm(osr.SRS_PP_FALSE_EASTING)
if abs(fe-2000000.0) > 0.001:
gdaltest.post_reason( 'did not get expected false easting (1)' )
return 'fail'
# Test the file with the old (broken) GDAL formulation.
ds = gdal.Open('data/spaf27_brokengdal.tif')
wkt = ds.GetProjectionRef()
ds = None
srs = osr.SpatialReference( wkt )
fe = srs.GetProjParm(osr.SRS_PP_FALSE_EASTING)
if abs(fe-609601.219202438) > 0.001:
gdaltest.post_reason( 'did not get expected false easting (2)' )
return 'fail'
# Test the file when using an EPSG code.
ds = gdal.Open('data/spaf27_epsg.tif')
wkt = ds.GetProjectionRef()
ds = None
srs = osr.SpatialReference( wkt )
fe = srs.GetProjParm(osr.SRS_PP_FALSE_EASTING)
if abs(fe-2000000.0) > 0.001:
gdaltest.post_reason( 'did not get expected false easting (3)' )
return 'fail'
return 'success'
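The two expected false eastings above are the same quantity expressed in different linear units: 2,000,000 US survey feet is 609601.219202438 metres (1 US survey foot is defined as exactly 1200/3937 m). A quick arithmetic check of that conversion:

```python
# 1 US survey foot is defined as exactly 1200/3937 metres.
US_SURVEY_FOOT_IN_M = 1200.0 / 3937.0

fe_feet = 2000000.0
fe_m = fe_feet * US_SURVEY_FOOT_IN_M
print(fe_m)  # ~609601.2192024384, matching the test's expected value
```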
###############################################################################
# Check that the GTIFF_LINEAR_UNITS handling works properly (#3901)
def tiff_linearparmunits2():
gdal.SetConfigOption( 'GTIFF_LINEAR_UNITS', 'BROKEN' )
# Test the file with the correct formulation.
ds = gdal.Open('data/spaf27_correct.tif')
wkt = ds.GetProjectionRef()
ds = None
srs = osr.SpatialReference( wkt )
fe = srs.GetProjParm(osr.SRS_PP_FALSE_EASTING)
if abs(fe-6561666.66667) > 0.001:
gdaltest.post_reason( 'did not get expected false easting (1)' )
return 'fail'
# Test the file with the correct formulation that is marked as correct.
ds = gdal.Open('data/spaf27_markedcorrect.tif')
wkt = ds.GetProjectionRef()
ds = None
srs = osr.SpatialReference( wkt )
fe = srs.GetProjParm(osr.SRS_PP_FALSE_EASTING)
if abs(fe-2000000.0) > 0.001:
gdaltest.post_reason( 'did not get expected false easting (2)' )
| |
squeezed = tf.tanh(rewards / 5.0)
# Negative rewards are given less weight than positive rewards.
clipped_rewards = tf.where(rewards < 0, .3 * squeezed, squeezed) * 5.
discounts = tf.to_float(~done) * FLAGS.discounting
# Compute V-trace returns and weights.
# Note, this is put on the CPU because it's faster than on GPU. It can be
# improved further with XLA-compilation or with a custom TensorFlow operation.
with tf.device('/cpu'):
return vtrace.from_logits(
behaviour_policy_logits=agent_outputs.policy_logits,
target_policy_logits=learner_outputs.policy_logits,
actions=agent_outputs.action,
discounts=discounts,
rewards=clipped_rewards,
values=learner_outputs.baseline,
bootstrap_value=bootstrap_value), learner_outputs, agent_outputs, done, infos
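The reward transformation feeding the V-trace computation above (a tanh squash into roughly (-5, 5), with negative rewards additionally scaled by 0.3) can be sketched in plain Python:

```python
import math

def clip_reward(r, scale=5.0, neg_weight=0.3):
    """Soft-clip a reward via tanh and down-weight negatives, mirroring
    the squeezed/clipped_rewards TF ops above."""
    squeezed = math.tanh(r / scale)
    if r < 0:
        squeezed *= neg_weight
    return squeezed * scale

print(clip_reward(10.0))   # saturates just below 5
print(clip_reward(-10.0))  # additionally scaled by 0.3
```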
def compute_loss(agent, agent_state, env_outputs, agent_outputs,
env_outputs_vr=None, agent_outputs_vr=None,
env_outputs_pc=None, agent_outputs_pc=None,
env_outputs_rp=None, buffer_full=None):
# Compute loss as a weighted sum of the baseline loss, the policy gradient
# loss and an entropy regularization term.
# AC baseline
learner_outputs, _ = agent.unroll(agent_outputs.action, env_outputs, agent_state)
aux_rewards = None
# ICM module/Curiosity
if FLAGS.curiosity:
curiosity_outputs = agent.icm_unroll(agent_outputs.action, env_outputs)
icm_inverse_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=agent_outputs.action[1:],
logits=curiosity_outputs.icm_inverse,
name='sparse_softmax_curiosity'
)
aux_rewards = curiosity_outputs.icm_forward
icm_forward_loss = tf.reduce_sum(aux_rewards)
vtrace_returns, learner_outputs, agent_outputs, done, infos = \
compute_vtrace(learner_outputs, env_outputs, agent_outputs, aux_rewards=aux_rewards)
if FLAGS.qr:
num_atoms = FLAGS.num_atoms
total_loss = compute_policy_gradient_loss(
learner_outputs.policy_logits, agent_outputs.action,
tf.reduce_sum(vtrace_returns.pg_advantages, axis=-1) / num_atoms)
v_t_selected = tf.transpose(
tf.reshape(tf.tile(vtrace_returns.vs, [1, 1, num_atoms]),
[-1, FLAGS.batch_size, num_atoms, num_atoms]),
[0, 1, 2, 3])
v_t_selected_target = tf.reshape(
tf.tile(learner_outputs.baseline, [1, 1, num_atoms]),
[-1, FLAGS.batch_size, num_atoms, num_atoms])
rho = tf.range(0, num_atoms + 1) / num_atoms
rho = tf.get_variable('rho', trainable=False, initializer=tf.cast(tf.reshape(tf.tile(
tf.slice(rho, [0], [num_atoms]) + tf.slice(rho, [1], [num_atoms]) / 2,
[num_atoms]), [num_atoms, num_atoms]), tf.float32))
total_loss += FLAGS.baseline_cost * compute_huber_quantile_loss(v_t_selected_target - v_t_selected,
rho)
else:
total_loss = compute_policy_gradient_loss(
learner_outputs.policy_logits, agent_outputs.action,
vtrace_returns.pg_advantages)
total_loss += FLAGS.baseline_cost * compute_baseline_loss(
vtrace_returns.vs - learner_outputs.baseline)
total_loss += FLAGS.entropy_cost * compute_entropy_loss(
learner_outputs.policy_logits)
# Replay buffer
if buffer_full is not None:
is_full = tf.where(
tf.equal(tf.reduce_sum(tf.cast(buffer_full, tf.int32)),
tf.constant(FLAGS.batch_size, dtype=tf.int32)),
tf.constant(1.0, dtype=tf.float32), tf.constant(0, dtype=tf.float32), name="is_full")
# Value replay
if FLAGS.value_replay:
learner_outputs_vr, _ = agent.unroll(agent_outputs_vr.action, env_outputs_vr,
agent.initial_state(FLAGS.batch_size))
vtrace_returns_vr, learner_outputs_vr, _, _, _ = compute_vtrace(learner_outputs_vr, env_outputs_vr,
agent_outputs_vr)
# Value replay loss
total_loss += is_full * compute_baseline_loss(
vtrace_returns_vr.vs - learner_outputs_vr.baseline)
# Pixel change
if FLAGS.pixel_change:
first_env_outputs_pc = nest.map_structure(lambda t: t[0], env_outputs_pc)
agent_outputs_pc = nest.map_structure(lambda t: t[1:], agent_outputs_pc)
env_outputs_pc = nest.map_structure(lambda t: t[1:], env_outputs_pc)
learner_outputs_pc = agent.decov_unroll(agent_outputs_pc.action, env_outputs_pc,
agent.initial_state(FLAGS.batch_size))
env_outputs_pc = compute_pc(first_env_outputs_pc, env_outputs_pc)
vs_pc, learner_outputs_pc, agent_outputs_pc, _, _ = compute_vs(learner_outputs_pc, env_outputs_pc,
agent_outputs_pc)
# Extract Q for taken action
q_pc = tf.transpose(learner_outputs_pc.q, [0, 1, 4, 2, 3])
q_pc = tf.reshape(q_pc, [-1] + q_pc.get_shape().as_list()[2:])
idx = tf.stack(
[tf.range(0, q_pc.get_shape().as_list()[0]), tf.reshape(agent_outputs_pc.action, [-1])], axis=1)
q_pc = tf.reshape(tf.gather_nd(q_pc, idx), [-1, FLAGS.batch_size] + q_pc.get_shape().as_list()[2:])
# Pixel change loss - TD target for Q
total_loss += is_full * .05 * compute_baseline_loss(vs_pc - q_pc)
# Reward prediction
if FLAGS.reward_prediction:
labels = tf.sign(tf.cast(env_outputs_rp.reward[-1], tf.int32)) + 1
env_outputs_rp = nest.map_structure(lambda t: t[:3], env_outputs_rp)
logits_rp = tf.squeeze(agent.rp(env_outputs_rp))
# Reward prediction loss
rp_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels,
logits=logits_rp,
name='sparse_softmax_rp'
)
total_loss += is_full * tf.reduce_sum(rp_loss)
# Depth prediction
if FLAGS.depth:
total_loss += compute_depth_loss(learner_outputs.depth_logits, env_outputs.observation)
# ICM module/Curiosity
if FLAGS.curiosity:
total_loss = total_loss * FLAGS.lmbda + (1 - FLAGS.beta) * icm_inverse_loss + FLAGS.beta * icm_forward_loss
return total_loss, done, infos
def build_learner(agent, agent_state, env_outputs, agent_outputs,
env_outputs_vr=None, agent_outputs_vr=None,
env_outputs_pc=None, agent_outputs_pc=None,
env_outputs_rp=None, buffer_full=None, **kwargs):
"""Builds the learner loop.
Args:
agent: A snt.RNNCore module outputting `AgentOutput` named tuples, with an
`unroll` call for computing the outputs for a whole trajectory.
agent_state: The initial agent state for each sequence in the batch.
env_outputs: A `StepOutput` namedtuple where each field is of shape
[T+1, ...].
agent_outputs: An `AgentOutput` namedtuple where each field is of shape
[T+1, ...].
Returns:
A tuple of (done, infos, and environment frames) where
the environment frames tensor causes an update.
"""
# Estimate loss and retrieve additional information
total_loss, done, infos = compute_loss(agent, agent_state, env_outputs, agent_outputs,
env_outputs_vr=env_outputs_vr, agent_outputs_vr=agent_outputs_vr,
env_outputs_pc=env_outputs_pc, agent_outputs_pc=agent_outputs_pc,
env_outputs_rp=env_outputs_rp, buffer_full=buffer_full)
# Optimization
num_env_frames = tf.train.get_global_step()
learning_rate = tf.train.polynomial_decay(FLAGS.learning_rate, num_env_frames,
FLAGS.total_environment_frames)
optimizer = tf.train.RMSPropOptimizer(learning_rate, FLAGS.decay,
FLAGS.momentum, FLAGS.epsilon)
train_op = optimizer.minimize(total_loss)
# Merge updating the network and environment frames into a single tensor.
with tf.control_dependencies([train_op]):
num_env_frames_and_train = num_env_frames.assign_add(
FLAGS.batch_size * FLAGS.unroll_length * FLAGS.num_action_repeats)
# Adding a few summaries.
tf.summary.scalar('learning_rate', learning_rate)
tf.summary.scalar('total_loss', total_loss)
return done, infos, num_env_frames_and_train
def create_environment(level_name, seed, is_test=False):
"""Creates an environment wrapped in a `FlowEnvironment`."""
if level_name in dmlab30.ALL_LEVELS:
level_name = 'contributed/dmlab30/' + level_name
# Note, you may want to use a level cache to speed up compilation of
# environment maps. See the documentation for the Python interface of DeepMind
# Lab.
config = {
'width': FLAGS.width,
'height': FLAGS.height,
'datasetPath': FLAGS.dataset_path,
'logLevel': 'WARN',
}
if is_test:
config['allowHoldOutLevels'] = 'true'
# Mixer seed for evaluation, see
# https://github.com/deepmind/lab/blob/master/docs/users/python_api.md
config['mixerSeed'] = 0x600D5EED
p = py_process.PyProcess(environment.PyProcessDmLab, level_name, config,
FLAGS.num_action_repeats, seed)
return environment.FlowEnvironment(p.proxy)
@contextlib.contextmanager
def pin_global_variables(device):
"""Pins global variables to the specified device."""
def getter(getter, *args, **kwargs):
var_collections = kwargs.get('collections', None)
if var_collections is None:
var_collections = [tf.GraphKeys.GLOBAL_VARIABLES]
if tf.GraphKeys.GLOBAL_VARIABLES in var_collections:
with tf.device(device):
return getter(*args, **kwargs)
else:
return getter(*args, **kwargs)
with tf.variable_scope('', custom_getter=getter) as vs:
yield vs
def train(action_set, level_names):
"""Train."""
if is_single_machine():
local_job_device = ''
shared_job_device = ''
is_actor_fn = lambda i: True
is_learner = True
global_variable_device = '/gpu'
server = tf.train.Server.create_local_server()
filters = []
else:
local_job_device = '/job:%s/task:%d' % (FLAGS.job_name, FLAGS.task)
shared_job_device = '/job:learner/task:0'
is_actor_fn = lambda i: FLAGS.job_name == 'actor' and i == FLAGS.task
is_learner = FLAGS.job_name == 'learner'
# Placing the variable on the CPU makes it cheaper to send it to all the
# actors. Continually copying the variables from the GPU is slow.
global_variable_device = shared_job_device + '/cpu'
cluster = tf.train.ClusterSpec({
'actor': ['localhost:%d' % (8001 + i) for i in range(FLAGS.num_actors)],
'learner': ['localhost:8000']
})
server = tf.train.Server(cluster, job_name=FLAGS.job_name,
task_index=FLAGS.task)
filters = [shared_job_device, local_job_device]
# Only used to find the actor output structure.
with tf.Graph().as_default():
agent = Agent(len(action_set))
env = create_environment(level_names[0], seed=0)
structure = build_actor(agent, env, level_names[0], action_set, -1)
flattened_structure = nest.flatten(structure)
dtypes = [t.dtype for t in flattened_structure]
shapes = [t.shape.as_list() for t in flattened_structure]
with tf.Graph().as_default(), \
tf.device(local_job_device + '/cpu'), \
pin_global_variables(global_variable_device):
tf.set_random_seed(FLAGS.seed) # Makes initialization deterministic.
# Create Queue and Agent on the learner.
with tf.device(shared_job_device):
queue = tf.FIFOQueue(1, dtypes, shapes, shared_name='buffer')
agent = Agent(len(action_set))
if is_single_machine() and 'dynamic_batching' in sys.modules:
# For single machine training, we use dynamic batching for improved GPU
# utilization. The semantics of single machine training are slightly
# different from the distributed setting because within a single unroll
# of an environment, the actions may be computed using different weights
# if an update happens within the unroll.
old_build = agent._build
@dynamic_batching.batch_fn
def build(*args):
with tf.device('/gpu'):
return old_build(*args)
tf.logging.info('Using dynamic batching.')
agent._build = build
# Build actors and ops to enqueue their output.
enqueue_ops = []
for i in range(FLAGS.num_actors):
if is_actor_fn(i):
level_name = level_names[i % len(level_names)]
tf.logging.info('Creating actor %d with level %s', i, level_name)
env = create_environment(level_name, seed=i + 1)
actor_output = build_actor(agent, env, level_name, action_set, i)
with tf.device(shared_job_device):
enqueue_ops.append(queue.enqueue(nest.flatten(actor_output)))
# If running in a single machine setup, run actors with QueueRunners
# (separate threads).
if is_learner and enqueue_ops:
tf.train.add_queue_runner(tf.train.QueueRunner(queue, enqueue_ops))
# Build learner.
if is_learner:
# Create global step, which is the number of environment frames processed.
tf.get_variable(
'num_environment_frames',
initializer=tf.zeros_initializer(),
shape=[],
dtype=tf.int64,
trainable=False,
collections=[tf.GraphKeys.GLOBAL_STEP, tf.GraphKeys.GLOBAL_VARIABLES])
# Create batch (time major) and recreate structure.
dequeued = queue.dequeue_many(FLAGS.batch_size)
dequeued = nest.pack_sequence_as(structure, dequeued)
def make_time_major(s):
return nest.map_structure(
lambda t: tf.transpose(t, [1, 0] + list(range(t.shape.ndims))[2:]), s)
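The transpose in `make_time_major` swaps the leading batch and time axes while leaving all trailing dimensions untouched. The same index arithmetic, as a minimal numpy sketch (shapes chosen for illustration only):

```python
import numpy as np

# Batch-major [B, T, ...] -> time-major [T, B, ...]; the permutation
# [1, 0] + range(ndim)[2:] swaps only the first two axes.
x = np.arange(24).reshape(4, 3, 2)          # B=4, T=3, features=2
perm = [1, 0] + list(range(x.ndim))[2:]     # [1, 0, 2]
y = np.transpose(x, perm)
```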
dict_inputs = {
'env_outputs': make_time_major(dequeued.env_outputs),
'agent_outputs': make_time_major(dequeued.agent_outputs)
}
if FLAGS.value_replay:
dict_inputs['env_outputs_vr'] = make_time_major(dequeued.env_outputs_vr)
dict_inputs['agent_outputs_vr'] = make_time_major(dequeued.agent_outputs_vr)
if FLAGS.pixel_change:
dict_inputs['env_outputs_pc'] = make_time_major(dequeued.env_outputs_pc)
dict_inputs['agent_outputs_pc'] = make_time_major(dequeued.agent_outputs_pc)
if FLAGS.reward_prediction:
dict_inputs['env_outputs_rp'] = make_time_major(dequeued.env_outputs_rp)
dequeued = dequeued._replace(**dict_inputs)
with tf.device('/gpu'):
# Using StagingArea allows us to prepare the next batch and send it to
# the GPU while we're performing a training step. This adds up to 1 step
# policy lag.
flattened_output = nest.flatten(dequeued)
area = tf.contrib.staging.StagingArea(
[t.dtype for t in flattened_output],
[t.shape for t in flattened_output])
stage_op = area.put(flattened_output)
data_from_actors = nest.pack_sequence_as(structure, area.get())
# Unroll agent on sequence, create losses and update ops.
output = build_learner(agent, **data_from_actors._asdict())
# Create MonitoredSession (to run the graph, checkpoint and log).
tf.logging.info('Creating MonitoredSession, is_chief %s', is_learner)
config = tf.ConfigProto(allow_soft_placement=True, device_filters=filters)
with tf.train.MonitoredTrainingSession(
server.target,
is_chief=is_learner,
checkpoint_dir=FLAGS.logdir,
save_checkpoint_secs=900,
save_summaries_steps=400000,
log_step_count_steps=400000,
config=config,
hooks=[py_process.PyProcessHook()]) as session:
if is_learner:
# Logging.
level_returns = {level_name: [] for level_name in level_names}
summary_writer
# produce_scenes_audio.py
# CLEAR Dataset
# >> Scene audio produce
#
# Author : <NAME>
# Year : 2018-2019
# Affiliations: Universite de Sherbrooke - Electrical and Computer Engineering faculty
# KTH Stockholm Royal Institute of Technology
# IGLU - CHIST-ERA
import sys, os, argparse, random
from multiprocessing import Process, Queue
from shutil import rmtree as rm_dir
from datetime import datetime
import time
import gc
import json
from pydub import AudioSegment
from pydub.utils import get_array_type
import numpy as np
import matplotlib
# Matplotlib options to reduce memory usage
matplotlib.interactive(False)
matplotlib.use('agg')
import matplotlib.pyplot as plt
from utils.audio_processing import add_reverberation, generate_random_noise
from utils.misc import init_random_seed, pydub_audiosegment_to_float_array, float_array_to_pydub_audiosegment
from utils.misc import save_arguments
"""
Arguments definition
"""
parser = argparse.ArgumentParser(fromfile_prefix_chars='@')
# Inputs
parser.add_argument('--elementary_sounds_folder', default='../elementary_sounds', type=str,
help='Folder containing all the elementary sounds and the JSON listing them')
parser.add_argument('--elementary_sounds_definition_filename', default='elementary_sounds.json', type=str,
help='Filename of the JSON file listing the attributes of the elementary sounds')
# Options
parser.add_argument('--with_background_noise', action='store_true',
help='Use this setting to include a background noise in the scenes')
parser.add_argument('--background_noise_gain_range', default="-100,-20", type=str,
help='Range for the gain applied to the background noise. '
'Should be written as 0,100 for a range from 0 to 100')
parser.add_argument('--no_background_noise', action='store_true',
help='Override the --with_background_noise setting. If this is set, there will be no background noise.')
parser.add_argument('--with_reverb', action='store_true',
help='Use this setting to include random reverberations in the scenes')
parser.add_argument('--reverb_room_scale_range', default="0,100", type=str,
help='Range for the reverberation room scale. Should be written as 0,100 for a range from 0 to 100')
parser.add_argument('--reverb_delay_range', default="0,500", type=str,
help='Range for the reverberation pre-delay. Should be written as 0,500 for a range from 0 to 500')
parser.add_argument('--no_reverb', action='store_true',
help='Override the --with_reverb setting. If this is set, there will be no reverberation.')
parser.add_argument('--no_audio_files', action='store_true',
help='If set, audio files won\'t be produced. '
'The --produce_spectrograms switch will also be activated')
parser.add_argument('--produce_spectrograms', action='store_true',
help='If set, produce the spectrograms for each scene')
parser.add_argument('--spectrogram_freq_resolution', default=21, type=int,
help='Resolution of the Y axis in Freq/px ')
parser.add_argument('--spectrogram_time_resolution', default=3, type=int,
help='Resolution of the X axis in ms/px')
parser.add_argument('--spectrogram_window_length', default=1024, type=int,
help='Number of samples used in the FFT window')
parser.add_argument('--spectrogram_window_overlap', default=512, type=int,
help='Number of samples that are overlapped in the FFT window')
# Outputs
parser.add_argument('--output_folder', default='../output', type=str,
help='Folder where the audio and images will be saved')
parser.add_argument('--set_type', default='train', type=str,
help="Specify the set type (train/val/test)")
parser.add_argument('--clear_existing_files', action='store_true',
help='If set, will delete all files in the output folder before starting the generation.')
parser.add_argument('--output_filename_prefix', default='CLEAR', type=str,
help='Prefix used for produced files')
parser.add_argument('--output_frame_rate', default=22050, type=int,
help='Frame rate of the output audio file')
parser.add_argument('--do_resample', action='store_true',
help='If set, will use the --output_frame_rate to resample the audio')
parser.add_argument('--output_version_nb', default='0.1', type=str,
help='Version number that will be appended to the produced file')
parser.add_argument('--produce_specific_scenes', default="", type=str,
help='Range of scene ids to produce. Should be written as 0,100 to produce scenes 0 to 100')
# Misc
parser.add_argument('--random_nb_generator_seed', default=None, type=int,
help='Set the random number generator seed to reproduce results')
parser.add_argument('--nb_process', default=4, type=int,
help='Number of process allocated for the production')
"""
Produce audio recording from scene JSON definition
Can also produce spectrograms of the scene if the correct option is provided
- Load scenes JSON definition from file
- Calculate random silence duration (Silence between sounds)
- Concatenate Elementary Sounds (In the order defined by the scene JSON)
- Generate random white noise and overlay on the scene
- Apply reverberation effect
- Write audio scene to file (either as a FLAC file, a spectrogram/PNG or both)
The production is distributed across {nb_process} processes
"""
class AudioSceneProducer:
def __init__(self,
outputFolder,
version_nb,
spectrogramSettings,
withBackgroundNoise,
backgroundNoiseGainSetting,
withReverb,
reverbSettings,
produce_audio_files,
produce_spectrograms,
clear_existing_files,
elementarySoundsJsonFilename,
elementarySoundFolderPath,
setType,
outputPrefix,
outputFrameRate,
randomSeed):
# Paths
self.outputFolder = outputFolder
self.elementarySoundFolderPath = elementarySoundFolderPath
self.version_nb = version_nb
self.outputPrefix = outputPrefix
self.setType = setType
self.produce_audio_files = produce_audio_files
self.produce_spectrograms = produce_spectrograms
experiment_output_folder = os.path.join(self.outputFolder, self.version_nb)
# Loading elementary sounds definition from json definition file
with open(os.path.join(self.elementarySoundFolderPath, elementarySoundsJsonFilename)) as file:
self.elementarySounds = json.load(file)
# Loading scenes definition
sceneFilename = '%s_%s_scenes.json' % (self.outputPrefix, self.setType)
sceneFilepath = os.path.join(experiment_output_folder, 'scenes', sceneFilename)
with open(sceneFilepath) as scenesJson:
self.scenes = json.load(scenesJson)['scenes']
self.spectrogramSettings = spectrogramSettings
self.withBackgroundNoise = withBackgroundNoise
self.backgroundNoiseGainSetting = backgroundNoiseGainSetting
self.withReverb = withReverb
self.reverbSettings = reverbSettings
self.outputFrameRate = outputFrameRate
root_images_output_folder = os.path.join(experiment_output_folder, 'images')
root_audio_output_folder = os.path.join(experiment_output_folder, 'audio')
if not os.path.isdir(experiment_output_folder):
# Should never happen: the scenes definition was already loaded from this folder above
os.mkdir(experiment_output_folder)
self.images_output_folder = os.path.join(root_images_output_folder, self.setType)
self.audio_output_folder = os.path.join(root_audio_output_folder, self.setType)
if self.produce_audio_files:
if not os.path.isdir(root_audio_output_folder):
os.mkdir(root_audio_output_folder)
os.mkdir(self.audio_output_folder)
else:
if not os.path.isdir(self.audio_output_folder):
os.mkdir(self.audio_output_folder)
elif clear_existing_files:
rm_dir(self.audio_output_folder)
os.mkdir(self.audio_output_folder)
if self.produce_spectrograms:
if not os.path.isdir(root_images_output_folder):
os.mkdir(root_images_output_folder)
os.mkdir(self.images_output_folder)
else:
if not os.path.isdir(self.images_output_folder):
os.mkdir(self.images_output_folder)
elif clear_existing_files:
rm_dir(self.images_output_folder)
os.mkdir(self.images_output_folder)
self.currentSceneIndex = -1 # We start at -1 since nextScene() will increment idx at the start of the fct
self.nbOfLoadedScenes = len(self.scenes)
if self.nbOfLoadedScenes == 0:
print("[ERROR] Must have at least 1 scene in '" + sceneFilepath + "'", file=sys.stderr)
exit(1)
self.show_status_every = int(self.nbOfLoadedScenes / 10)
self.show_status_every = self.show_status_every if self.show_status_every > 0 else 1
self.loadedSounds = []
self.randomSeed = randomSeed
def loadAllElementarySounds(self):
print("Loading elementary sounds")
for sound in self.elementarySounds:
# Creating the audio segment (assumes WAV format)
soundFilepath = os.path.join(self.elementarySoundFolderPath, sound['filename'])
soundAudioSegment = AudioSegment.from_wav(soundFilepath)
if self.outputFrameRate and soundAudioSegment.frame_rate != self.outputFrameRate:
soundAudioSegment = soundAudioSegment.set_frame_rate(self.outputFrameRate)
self.loadedSounds.append({
'name': sound['filename'],
'audioSegment': soundAudioSegment
})
print("Done loading elementary sounds")
def _getLoadedAudioSegmentByName(self, name):
filterResult = list(filter(lambda sound: sound['name'] == name, self.loadedSounds))
if len(filterResult) == 1:
return filterResult[0]['audioSegment']
else:
print('[ERROR] Could not retrieve loaded audio segment \'' + name + '\' from memory.')
exit(1)
def produceSceneProcess(self, queue, emptyQueueTimeout=5):
# Wait 1 sec for the main thread to fill up the queue
time.sleep(1)
emptyQueueCount = 0
while emptyQueueCount < emptyQueueTimeout:
if not queue.empty():
# Reset empty queue count
emptyQueueCount = 0
# Retrieve Id and produce scene
idToProcess = queue.get()
self.produceScene(idToProcess)
else:
emptyQueueCount += 1
time.sleep(random.random())
return
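The polling loop in `produceSceneProcess` gives up after `emptyQueueTimeout` consecutive empty polls and resets the counter whenever work arrives. The same control flow, isolated with a thread-safe `queue.Queue` (all names here are illustrative, not from this file):

```python
import queue
import time

def drain_with_timeout(q, process, empty_timeout=5, poll_delay=0.0):
    # Mirror the worker loop: reset the empty counter on work,
    # stop after `empty_timeout` consecutive empty polls.
    empty_count, handled = 0, []
    while empty_count < empty_timeout:
        try:
            item = q.get_nowait()
        except queue.Empty:
            empty_count += 1
            time.sleep(poll_delay)
        else:
            empty_count = 0
            handled.append(process(item))
    return handled
```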
def produceScene(self, sceneId):
# Since this function is run by different processes, we must set the same seed in every process
init_random_seed(self.randomSeed)
if sceneId < self.nbOfLoadedScenes:
scene = self.scenes[sceneId]
if sceneId % self.show_status_every == 0:
print('Producing scene ' + str(sceneId), flush=True)
sceneAudioSegment = self.assembleAudioScene(scene)
if self.outputFrameRate and sceneAudioSegment.frame_rate != self.outputFrameRate:
sceneAudioSegment = sceneAudioSegment.set_frame_rate(self.outputFrameRate)
if self.produce_audio_files:
audioFilename = '%s_%s_%06d.flac' % (self.outputPrefix, self.setType, sceneId)
sceneAudioSegment.export(os.path.join(self.audio_output_folder, audioFilename), format='flac')
if self.produce_spectrograms:
spectrogram = AudioSceneProducer.createSpectrogram(sceneAudioSegment,
self.spectrogramSettings['freqResolution'],
self.spectrogramSettings['timeResolution'],
self.spectrogramSettings['window_length'],
self.spectrogramSettings['window_overlap'])
imageFilename = '%s_%s_%06d.png' % (self.outputPrefix, self.setType, sceneId)
spectrogram.savefig(os.path.join(self.images_output_folder, imageFilename), dpi=100)
AudioSceneProducer.clearSpectrogram(spectrogram)
else:
print("[ERROR] The scene specified by id '%d' couln't be found" % sceneId)
def assembleAudioScene(self, scene):
sceneAudioSegment = AudioSegment.empty()
sceneAudioSegment += AudioSegment.silent(duration=scene['silence_before'])
for sound in scene['objects']:
newAudioSegment = self._getLoadedAudioSegmentByName(sound['filename'])
sceneAudioSegment += newAudioSegment
# Insert a silence padding after the sound
sceneAudioSegment += AudioSegment.silent(duration=sound['silence_after'])
if self.withBackgroundNoise:
gain = random.randrange(self.backgroundNoiseGainSetting['min'], self.backgroundNoiseGainSetting['max'])
sceneAudioSegment = AudioSceneProducer.overlayBackgroundNoise(sceneAudioSegment, gain)
if self.withReverb:
roomScale = random.randrange(self.reverbSettings['roomScale']['min'],
self.reverbSettings['roomScale']['max'])
delay = random.randrange(self.reverbSettings['delay']['min'], self.reverbSettings['delay']['max'])
sceneAudioSegment = AudioSceneProducer.applyReverberation(sceneAudioSegment, roomScale, delay)
# Make sure everything is in mono (if stereo, convert to mono)
sceneAudioSegment = sceneAudioSegment.set_channels(1)
return sceneAudioSegment
@staticmethod
def applyReverberation(audioSegment, roomScale, delay):
floatArray = pydub_audiosegment_to_float_array(audioSegment, audioSegment.frame_rate, audioSegment.sample_width)
floatArrayWithReverb = add_reverberation(floatArray, room_scale=roomScale, pre_delay=delay)
return float_array_to_pydub_audiosegment(floatArrayWithReverb, audioSegment.frame_rate,
audioSegment.sample_width)
@staticmethod
def overlayBackgroundNoise(sceneAudioSegment, noiseGain):
backgroundNoise = generate_random_noise(sceneAudioSegment.duration_seconds * 1000,
noiseGain,
sceneAudioSegment.frame_width,
sceneAudioSegment.frame_rate)
sceneAudioSegment = backgroundNoise.overlay(sceneAudioSegment)
return sceneAudioSegment
@staticmethod
def createSpectrogram(sceneAudioSegment, freqResolution, timeResolution, windowLength, windowOverlap):
highestFreq = sceneAudioSegment.frame_rate/2
height = highestFreq // freqResolution
width = sceneAudioSegment.duration_seconds * 1000 // timeResolution
# Set figure settings to remove all axis
spectrogram = plt.figure(frameon=False)
spectrogram.set_size_inches(width/100, height/100)
ax = plt.Axes(spectrogram, [0., 0., 1., 1.])
ax.set_axis_off()
spectrogram.add_axes(ax)
# Generate the spectrogram
# See https://matplotlib.org/api/_as_gen/matplotlib.pyplot.specgram.html?highlight=matplotlib%20pyplot%20specgram#matplotlib.pyplot.specgram
Pxx, freqs, bins, im = ax.specgram(x=np.frombuffer(sceneAudioSegment._data,
dtype=get_array_type(8*sceneAudioSegment.frame_width)),
Fs=sceneAudioSegment.frame_rate,
window=matplotlib.mlab.window_hanning,
NFFT=windowLength,
noverlap=windowOverlap,
scale='dB')
return spectrogram
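With `dpi=100` and `set_size_inches(px/100)`, the figure saved by `createSpectrogram` comes out to exactly the pixel counts implied by the resolution flags: the Y axis spans the Nyquist frequency divided by the Hz-per-pixel setting, the X axis the duration divided by the ms-per-pixel setting. The arithmetic, using the default flag values and an assumed 3-second scene:

```python
frame_rate = 22050      # default --output_frame_rate
freq_resolution = 21    # Hz per pixel (--spectrogram_freq_resolution)
time_resolution = 3     # ms per pixel (--spectrogram_time_resolution)
duration_ms = 3000      # example 3-second scene, not a fixed value

height_px = (frame_rate / 2) // freq_resolution   # Nyquist / Hz-per-pixel
width_px = duration_ms // time_resolution         # ms / ms-per-pixel
```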
@staticmethod
def clearSpectrogram(spectrogram):
# Close and Clear the figure
plt.close(spectrogram)
spectrogram.clear()
gc.collect()
def mainPool():
args = parser.parse_args()
# Setting & Saving the random seed
assert args.random_nb_generator_seed is not None, "The seed must be specified in the arguments."
init_random_seed(args.random_nb_generator_seed)
# If not producing audio, we will produce spectrograms
if args.no_audio_files and not args.produce_spectrograms:
args.produce_spectrograms = True
# Preparing settings
reverbRoomScaleRange = args.reverb_room_scale_range.split(',')
reverbDelayRange = args.reverb_delay_range.split(',')
reverbSettings = {
'roomScale': {
'min': int(reverbRoomScaleRange[0]),
'max': int(reverbRoomScaleRange[1])
},
'delay': {
'min': int(reverbDelayRange[0]),
'max': int(reverbDelayRange[1])
}
}
backgroundNoiseGainRange = args.background_noise_gain_range.split(',')
backgroundNoiseGainSetting = {
'min': int(backgroundNoiseGainRange[0]),
'max': int(backgroundNoiseGainRange[1])
}
args.with_background_noise = args.with_background_noise and not args.no_background_noise
args.with_reverb = args.with_reverb and not args.no_reverb
# Creating the producer
producer = AudioSceneProducer(outputFolder=args.output_folder,
version_nb=args.output_version_nb,
elementarySoundsJsonFilename=args.elementary_sounds_definition_filename,
elementarySoundFolderPath=args.elementary_sounds_folder,
setType=args.set_type,
randomSeed=args.random_nb_generator_seed,
outputFrameRate=args.output_frame_rate if args.do_resample else None,
outputPrefix=args.output_filename_prefix,
produce_audio_files=not args.no_audio_files,
produce_spectrograms=args.produce_spectrograms,
clear_existing_files=args.clear_existing_files,
withBackgroundNoise=args.with_background_noise,
backgroundNoiseGainSetting=backgroundNoiseGainSetting,
withReverb=args.with_reverb,
reverbSettings=reverbSettings,
spectrogramSettings={
'freqResolution': args.spectrogram_freq_resolution,
'timeResolution': args.spectrogram_time_resolution,
'window_length': args.spectrogram_window_length,
'window_overlap': args.spectrogram_window_overlap,
})
# Save arguments
save_arguments(args, f"{args.output_folder}/{args.output_version_nb}/arguments",
f"produce_scenes_audio_{args.set_type}.args")
# Setting ids of scenes to produce
if args.produce_specific_scenes == '':
is not None:
self._categories = self.categories
# categories not set -> infer if we need legacy mode or not
elif self.n_values is not None and self.n_values != 'auto':
msg = (
"Passing 'n_values' is deprecated in version 0.20 and will be "
"removed in 0.22. You can use the 'categories' keyword "
"instead. 'n_values=n' corresponds to 'categories=[range(n)]'."
)
warnings.warn(msg, DeprecationWarning)
self._legacy_mode = True
else: # n_values = 'auto'
if self.handle_unknown == 'ignore':
# no change in behaviour, no need to raise deprecation warning
self._legacy_mode = False
self._categories = 'auto'
if self.n_values == 'auto':
# user manually specified this
msg = (
"Passing 'n_values' is deprecated in version 0.20 and "
"will be removed in 0.22. n_values='auto' can be "
"replaced with categories='auto'."
)
warnings.warn(msg, DeprecationWarning)
else:
# check if we have integer or categorical input
try:
X = check_array(X, dtype=np.int)
except ValueError:
self._legacy_mode = False
self._categories = 'auto'
else:
msg = (
"The handling of integer data will change in version "
"0.22. Currently, the categories are determined "
"based on the range [0, max(values)], while in the "
"future they will be determined based on the unique "
"values.\nIf you want the future behaviour and "
"silence this warning, you can specify "
"\"categories='auto'\".\n"
"In case you used a LabelEncoder before this "
"OneHotEncoder to convert the categories to integers, "
"then you can now use the OneHotEncoder directly."
)
warnings.warn(msg, FutureWarning)
self._legacy_mode = True
self.n_values = 'auto'
# if user specified categorical_features -> always use legacy mode
if self.categorical_features is not None:
if (isinstance(self.categorical_features, six.string_types)
and self.categorical_features == 'all'):
warnings.warn(
"The 'categorical_features' keyword is deprecated in "
"version 0.20 and will be removed in 0.22. The passed "
"value of 'all' is the default and can simply be removed.",
DeprecationWarning)
else:
if self.categories is not None:
raise ValueError(
"The 'categorical_features' keyword is deprecated, "
"and cannot be used together with specifying "
"'categories'.")
warnings.warn(
"The 'categorical_features' keyword is deprecated in "
"version 0.20 and will be removed in 0.22. You can "
"use the ColumnTransformer instead.", DeprecationWarning)
self._legacy_mode = True
self._categorical_features = self.categorical_features
else:
self._categorical_features = 'all'
def fit(self, X, y=None):
"""Fit OneHotEncoder to X.
Parameters
----------
X : array-like, shape [n_samples, n_features]
The data to determine the categories of each feature.
Returns
-------
self
"""
if self.handle_unknown not in ('error', 'ignore'):
msg = ("handle_unknown should be either 'error' or 'ignore', "
"got {0}.".format(self.handle_unknown))
raise ValueError(msg)
self._handle_deprecations(X)
if self._legacy_mode:
_transform_selected(X, self._legacy_fit_transform, self.dtype,
self._categorical_features,
copy=True)
return self
else:
self._fit(X, handle_unknown=self.handle_unknown)
return self
def _legacy_fit_transform(self, X):
"""Assumes X contains only categorical features."""
dtype = getattr(X, 'dtype', None)
X = check_array(X, dtype=np.int)
if np.any(X < 0):
raise ValueError("OneHotEncoder in legacy mode cannot handle "
"categories encoded as negative integers. "
"Please set categories='auto' explicitly to "
"be able to use arbitrary integer values as "
"category identifiers.")
n_samples, n_features = X.shape
if (isinstance(self.n_values, six.string_types) and
self.n_values == 'auto'):
n_values = np.max(X, axis=0) + 1
elif isinstance(self.n_values, numbers.Integral):
if (np.max(X, axis=0) >= self.n_values).any():
raise ValueError("Feature out of bounds for n_values=%d"
% self.n_values)
n_values = np.empty(n_features, dtype=np.int)
n_values.fill(self.n_values)
else:
try:
n_values = np.asarray(self.n_values, dtype=int)
except (ValueError, TypeError):
raise TypeError("Wrong type for parameter `n_values`. Expected"
" 'auto', int or array of ints, got %r"
% type(X))
if n_values.ndim < 1 or n_values.shape[0] != X.shape[1]:
raise ValueError("Shape mismatch: if n_values is an array,"
" it has to be of shape (n_features,).")
self._n_values_ = n_values
self.categories_ = [np.arange(n_val - 1, dtype=dtype)
for n_val in n_values]
n_values = np.hstack([[0], n_values])
indices = np.cumsum(n_values)
self._feature_indices_ = indices
column_indices = (X + indices[:-1]).ravel()
row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
n_features)
data = np.ones(n_samples * n_features)
out = sparse.coo_matrix((data, (row_indices, column_indices)),
shape=(n_samples, indices[-1]),
dtype=self.dtype).tocsr()
if (isinstance(self.n_values, six.string_types) and
self.n_values == 'auto'):
mask = np.array(out.sum(axis=0)).ravel() != 0
active_features = np.where(mask)[0]
out = out[:, active_features]
self._active_features_ = active_features
self.categories_ = [
np.unique(X[:, i]).astype(dtype) if dtype
else np.unique(X[:, i]) for i in range(n_features)]
return out if self.sparse else out.toarray()
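The COO construction in `_legacy_fit_transform` scatters a single 1 per (sample, feature) pair, offsetting each feature's integer codes by the cumulative category counts of the earlier features. A self-contained miniature of that technique (the function name is illustrative):

```python
import numpy as np
from scipy import sparse

def one_hot_coo(X):
    # Offset each column's integer codes by the cumulative number of
    # categories in earlier columns, then scatter ones via a COO matrix.
    X = np.asarray(X)
    n_samples, n_features = X.shape
    n_values = np.max(X, axis=0) + 1
    indices = np.cumsum(np.hstack([[0], n_values]))
    column_indices = (X + indices[:-1]).ravel()
    row_indices = np.repeat(np.arange(n_samples), n_features)
    data = np.ones(n_samples * n_features)
    return sparse.coo_matrix((data, (row_indices, column_indices)),
                             shape=(n_samples, indices[-1])).tocsr()
```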
def fit_transform(self, X, y=None):
"""Fit OneHotEncoder to X, then transform X.
Equivalent to fit(X).transform(X) but more convenient.
Parameters
----------
X : array-like, shape [n_samples, n_features]
The data to encode.
Returns
-------
X_out : sparse matrix if sparse=True else a 2-d array
Transformed input.
"""
if self.handle_unknown not in ('error', 'ignore'):
msg = ("handle_unknown should be either 'error' or 'ignore', "
"got {0}.".format(self.handle_unknown))
raise ValueError(msg)
self._handle_deprecations(X)
if self._legacy_mode:
return _transform_selected(
X, self._legacy_fit_transform, self.dtype,
self._categorical_features, copy=True)
else:
return self.fit(X).transform(X)
def _legacy_transform(self, X):
"""Assumes X contains only categorical features."""
X = check_array(X, dtype=np.int)
if np.any(X < 0):
raise ValueError("OneHotEncoder in legacy mode cannot handle "
"categories encoded as negative integers. "
"Please set categories='auto' explicitly to "
"be able to use arbitrary integer values as "
"category identifiers.")
n_samples, n_features = X.shape
indices = self._feature_indices_
if n_features != indices.shape[0] - 1:
raise ValueError("X has different shape than during fitting."
" Expected %d, got %d."
% (indices.shape[0] - 1, n_features))
# We use only those categorical features of X that were seen during fit,
# i.e. less than n_values_, using mask.
# This means, if self.handle_unknown is "ignore", the row_indices and
# col_indices corresponding to the unknown categorical feature are
# ignored.
mask = (X < self._n_values_).ravel()
if np.any(~mask):
if self.handle_unknown not in ['error', 'ignore']:
raise ValueError("handle_unknown should be either error or "
"unknown got %s" % self.handle_unknown)
if self.handle_unknown == 'error':
raise ValueError("unknown categorical feature present %s "
"during transform." % X.ravel()[~mask])
column_indices = (X + indices[:-1]).ravel()[mask]
row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
n_features)[mask]
data = np.ones(np.sum(mask))
out = sparse.coo_matrix((data, (row_indices, column_indices)),
shape=(n_samples, indices[-1]),
dtype=self.dtype).tocsr()
if (isinstance(self.n_values, six.string_types) and
self.n_values == 'auto'):
out = out[:, self._active_features_]
return out if self.sparse else out.toarray()
def _transform_new(self, X):
"""New implementation assuming categorical input"""
X_temp = check_array(X, dtype=None)
if not hasattr(X, 'dtype') and np.issubdtype(X_temp.dtype, np.str_):
X = check_array(X, dtype=np.object)
else:
X = X_temp
n_samples, n_features = X.shape
X_int, X_mask = self._transform(X, handle_unknown=self.handle_unknown)
mask = X_mask.ravel()
n_values = [cats.shape[0] for cats in self.categories_]
n_values = np.array([0] + n_values)
feature_indices = np.cumsum(n_values)
indices = (X_int + feature_indices[:-1]).ravel()[mask]
indptr = X_mask.sum(axis=1).cumsum()
indptr = np.insert(indptr, 0, 0)
data = np.ones(n_samples * n_features)[mask]
out = sparse.csr_matrix((data, indices, indptr),
shape=(n_samples, feature_indices[-1]),
dtype=self.dtype)
if not self.sparse:
return out.toarray()
else:
return out
def transform(self, X):
"""Transform X using one-hot encoding.
Parameters
----------
X : array-like, shape [n_samples, n_features]
The data to encode.
Returns
-------
X_out : sparse matrix if sparse=True else a 2-d array
Transformed input.
"""
if self._legacy_mode:
return _transform_selected(X, self._legacy_transform, self.dtype,
self._categorical_features,
copy=True)
else:
return self._transform_new(X)
def inverse_transform(self, X):
"""Convert the back data to the original representation.
In case unknown categories are encountered (all zero's in the
one-hot encoding), ``None`` is used to represent this category.
Parameters
----------
X : array-like or sparse matrix, shape [n_samples, n_encoded_features]
The transformed data.
Returns
-------
X_tr : array-like, shape [n_samples, n_features]
Inverse transformed array.
"""
# if self._legacy_mode:
# raise ValueError("only supported for categorical features")
check_is_fitted(self, 'categories_')
X = check_array(X, accept_sparse='csr')
n_samples, _ = X.shape
n_features = len(self.categories_)
n_transformed_features = sum([len(cats) for cats in self.categories_])
# validate shape of passed X
msg = ("Shape of the passed X data is not correct. Expected {0} "
"columns, got {1}.")
if X.shape[1] != n_transformed_features:
raise ValueError(msg.format(n_transformed_features, X.shape[1]))
# create resulting array of appropriate dtype
dt = np.find_common_type([cat.dtype for cat in self.categories_], [])
X_tr = np.empty((n_samples, n_features), dtype=dt)
j = 0
found_unknown = {}
for i in range(n_features):
n_categories = len(self.categories_[i])
sub = X[:, j:j + n_categories]
# for sparse X argmax returns 2D matrix, ensure 1D array
labels = np.asarray(_argmax(sub, axis=1)).flatten()
X_tr[:, i] = self.categories_[i][labels]
if self.handle_unknown == 'ignore':
# ignored unknown categories: we have a row of all zeros
unknown = np.asarray(sub.sum(axis=1) == 0).flatten()
if unknown.any():
found_unknown[i] = unknown
j += n_categories
# if ignored are found: potentially need to upcast result to
# insert None values
if found_unknown:
if X_tr.dtype != object:
X_tr = X_tr.astype(object)
# insert None in place of the ignored unknown categories
for idx, mask in found_unknown.items():
X_tr[mask, idx] = None
return X_tr
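The `_transform_new` method above assembles the one-hot matrix directly from CSR pieces (`data`, `indices`, `indptr`). The helper below is a hypothetical, pure-Python sketch of that assembly, independent of scikit-learn and NumPy; the name `one_hot_csr` and the list-based types are illustrative, not part of the library:

```python
def one_hot_csr(X_int, n_values):
    """Assemble CSR-style (data, indices, indptr) for integer category codes.

    X_int: list of rows, each a list of per-feature category codes.
    n_values: number of categories per feature.
    Returns (data, indices, indptr, n_columns).
    """
    # feature j owns the column block [feature_indices[j], feature_indices[j + 1])
    feature_indices = [0]
    for n in n_values:
        feature_indices.append(feature_indices[-1] + n)
    data, indices, indptr = [], [], [0]
    for row in X_int:
        for j, code in enumerate(row):
            # global column = block offset + within-feature code
            indices.append(feature_indices[j] + code)
            data.append(1)
        indptr.append(len(indices))
    return data, indices, indptr, feature_indices[-1]
```

For two features with 2 and 3 categories, rows `[0, 2]` and `[1, 0]` land in columns 0 and 4, then 1 and 2 — exactly the `X_int + feature_indices[:-1]` offset trick used above.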
# A repository of PredefinedField objects.
predefinedFields: dict[str, PredefinedField] = dict[str, PredefinedField]()
# A repository of Interaction objects.
interactions: dict[str, Interaction] = dict[str, Interaction]()
# A repository of InteractionProperty objects.
interactionProperties: dict[str, ContactProperty] = dict[str, ContactProperty]()
# A repository of ContactControl objects.
contactControls: dict[str, ContactControl] = dict[str, ContactControl]()
# A repository of ContactInitialization objects.
contactInitializations: dict[str, ContactInitialization] = dict[str, ContactInitialization]()
# A repository of ContactStabilization objects.
contactStabilizations: dict[str, ContactStabilization] = dict[str, ContactStabilization]()
# A tuple of tuples of Strings specifying the linked child PartInstance name in the
# current model to the corresponding parent PartInstance name in a different model.
linkedInstances: tuple = ()
# A tuple of tuples of Strings specifying the linked child Part name in the current model
# to the corresponding parent Part name in a different model.
linkedParts: tuple = ()
# A repository of Load objects.
loads: dict[str, Load] = dict[str, Load]()
# A repository of Material objects.
materials: dict[str, Material] = dict[str, Material]()
# A repository of Calibration objects.
calibrations: dict[str, Calibration] = dict[str, Calibration]()
# A repository of Section objects.
sections: dict[str, Section] = dict[str, Section]()
# A repository of RemeshingRule objects.
remeshingRules: dict[str, RemeshingRule] = dict[str, RemeshingRule]()
# A repository of ConstrainedSketch objects.
sketches: dict[str, ConstrainedSketch] = dict[str, ConstrainedSketch]()
# A repository of Part objects.
parts: dict[str, Part] = dict[str, Part]()
# A repository of Step objects.
steps: dict[str, Step] = dict[str, Step]()
# A FeatureOptions object.
featureOptions: FeatureOptions = FeatureOptions()
# A repository of AdaptiveMeshConstraint objects.
adaptiveMeshConstraints: dict[str, AdaptiveMeshConstraint] = dict[str, AdaptiveMeshConstraint]()
# A repository of AdaptiveMeshControl objects.
adaptiveMeshControls: dict[str, AdaptiveMeshControl] = dict[str, AdaptiveMeshControl]()
# A repository of TimePoint objects.
timePoints: dict[str, TimePoint] = dict[str, TimePoint]()
# A repository of Filter objects.
filters: dict[str, Filter] = dict[str, Filter]()
# A repository of IntegratedOutputSection objects.
integratedOutputSections: dict[str, IntegratedOutputSection] = dict[str, IntegratedOutputSection]()
# A repository of FieldOutputRequest objects.
fieldOutputRequests: dict[str, FieldOutputRequest] = dict[str, FieldOutputRequest]()
# A repository of HistoryOutputRequest objects.
historyOutputRequests: dict[str, HistoryOutputRequest] = dict[str, HistoryOutputRequest]()
# A repository of OptimizationTask objects.
optimizationTasks: dict[str, OptimizationTask] = dict[str, OptimizationTask]()
# A repository of TableCollection objects.
tableCollections: dict[str, TableCollection] = dict[str, TableCollection]()
# A repository of EventSeriesType objects.
eventSeriesTypes: dict[str, EventSeriesType] = dict[str, EventSeriesType]()
# A repository of EventSeriesData objects.
eventSeriesDatas: dict[str, EventSeriesData] = dict[str, EventSeriesData]()
def __init__(self, name: str, description: str = '', stefanBoltzmann: float = None,
absoluteZero: float = None, waveFormulation: SymbolicConstant = NOT_SET,
modelType: SymbolicConstant = STANDARD_EXPLICIT, universalGas: float = None,
copyConstraints: Boolean = ON, copyConnectors: Boolean = ON,
copyInteractions: Boolean = ON):
"""This method creates a Model object.
Notes
-----
This function can be accessed by:
.. code-block:: python
mdb.Model
Parameters
----------
name
A String specifying the repository key.
description
A String specifying the purpose and contents of the Model object. The default value is
an empty string.
stefanBoltzmann
None or a Float specifying the Stefan-Boltzmann constant. The default value is None.
absoluteZero
None or a Float specifying the absolute zero constant. The default value is None.
waveFormulation
A SymbolicConstant specifying the type of incident wave formulation to be used in
acoustic problems. Possible values are NOT_SET, SCATTERED, and TOTAL. The default value
is NOT_SET.
modelType
A SymbolicConstant specifying the analysis model type. Possible values are
STANDARD_EXPLICIT and ELECTROMAGNETIC. The default is STANDARD_EXPLICIT.
universalGas
None or a Float specifying the universal gas constant. The default value is None.
copyConstraints
A boolean specifying whether to copy the constraints created in the model to the model
that instances this model. The default value is ON.
copyConnectors
A boolean specifying whether to copy the connectors created in the model to the model
that instances this model. The default value is ON.
copyInteractions
A boolean specifying whether to copy the interactions created in the model to the model
that instances this model. The default value is ON.
Returns
-------
A Model object.
"""
self.steps['Initial'] = InitialStep()
def ModelFromInputFile(self, name: str, inputFileName: str):
"""This method creates a Model object by reading the keywords in an input file and creating
the corresponding Abaqus/CAE objects.
Notes
-----
This function can be accessed by:
.. code-block:: python
mdb.Model
Parameters
----------
name
A String specifying the repository key.
inputFileName
A String specifying the name of the input file (including the .inp extension) to be
parsed into the new model. This String can also be the full path to the input file if it
is located in another directory.
Returns
-------
A Model object.
"""
pass
def ModelFromOdbFile(self, name: str, odbFileName: str):
"""This method creates a Model object by reading an output database and creating any
corresponding Abaqus/CAE objects.
Notes
-----
This function can be accessed by:
.. code-block:: python
mdb.Model
Parameters
----------
name
A String specifying the repository key.
odbFileName
A String specifying the name of the output database file (including the .odb extension)
to be read into the new model. This String can also be the full path to the output
database file if it is located in another directory.
Returns
-------
A Model object.
"""
pass
def ModelFromNastranFile(self, modelName: str, inputFileName: str,
sectionConsolidation: SymbolicConstant = PRESERVE_SECTION,
preIntegratedShell: Boolean = OFF, weightMassScaling: Boolean = ON,
loadCases: Boolean = ON, coupleBeamOffsets: Boolean = ON, cbar: str = B31,
cquad4: str = S4, chexa: str = C3D8I, ctetra: str = C3D10,
keepTranslatedFiles: Boolean = ON):
"""This method creates a Model object by reading the keywords in a Nastran bulk data file
or Nastran input file and creating any corresponding Abaqus/CAE objects. The default
values is discussed in following and can be defined alternatively in the Abaqus
environment file as the one used for the translator from Nastran to Abaqus. For more
information, see Translating Nastran data to Abaqus files.
Notes
-----
This function can be accessed by:
.. code-block:: python
mdb.Model
Parameters
----------
modelName
A String specifying the repository key.
inputFileName
A String specifying the name of the Nastran input file (including the .bdf, .dat, .nas,
.nastran, .blk, .bulk extension) to be read into the new model. This String can also be
the full path to the Nastran input file if it is located in another directory.
sectionConsolidation
A SymbolicConstant specifying the method used to create shell section. Possible values
are PRESERVE_SECTION, GROUP_BY_MATERIAL, and NONE. If PRESERVE_SECTION is used, an
Abaqus section is created corresponding to each shell property ID. If GROUP_BY_MATERIAL
is used, a single Abaqus section is created for all homogeneous elements referencing the
same material. In both cases, material orientations and offsets are created using
discrete fields. If NONE is used, a separate shell section is created for each
combination of orientation, material offset, and/or thickness. The default is
PRESERVE_SECTION.
preIntegratedShell
A Boolean specifying whether a pre-integrated shell section is created by default for
shell elements. The default value is OFF.
weightMassScaling
A Boolean specifying whether the value on the Nastran data line PARAM, WTMASS is used as
a multiplier for all density, mass, and rotary inertia values created in the Abaqus
input file. The default value is ON.
loadCases
A Boolean specifying whether each SUBCASE for linear static analyses is translated to a
LOAD CASE option, and all such LOAD CASE options are grouped in a single STEP option.
The default value is ON.
coupleBeamOffsets
A Boolean specifying whether to translate the beam element connectivity to newly created
nodes at the offset location and rigidly couple the new and original nodes. If not,
beam element offsets are processed using the beam section offset. The default value is ON.
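A hedged usage sketch of the constructors documented above. These calls run only inside Abaqus/CAE, where `mdb` and symbolic constants such as `STANDARD_EXPLICIT` are predefined, so they are shown as comments; the model names and file names are illustrative only:

```python
# model = mdb.Model(name='Model-1', modelType=STANDARD_EXPLICIT)
# inp_model = mdb.ModelFromInputFile(name='FromInput', inputFileName='job1.inp')
# odb_model = mdb.ModelFromOdbFile(name='FromOdb', odbFileName='job1.odb')
# nas_model = mdb.ModelFromNastranFile(modelName='FromNastran', inputFileName='wing.bdf')
```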
author = self.work.authors.first()
if author:
# use the last name of the first author
author = author.authorized_name.split(',')[0]
else:
# otherwise, set it to an empty string
author = ''
# truncate the title to its first several words
title = ' '.join(self.work.primary_title.split()[:9])
# use copyright year if available, with fallback to work year if
year = self.copyright_year or self.work.year or ''
# return a slug (not unique for multiple copies of same instance)
return slugify('%s %s %s' % (unidecode(author), unidecode(title), year))
def generate_safe_slug(self):
'''Generate a unique slug. Checks for duplicates and calculates
an appropriate copy letter if needed.
:rtype str: String in the format `lastname-title-of-work-year-copy`
'''
# base slug, without any copy letter
base_slug = self.generate_base_slug()
if self.copy:
slug = '-'.join([base_slug, self.copy])
else:
slug = base_slug
# check for any copies with the same base slug
duplicates = Instance.objects.filter(
slug__icontains=base_slug).order_by('-slug')
# exclude current record if it has already been saved
if self.pk:
duplicates = duplicates.exclude(pk=self.pk)
# any new copies should start with 'B' since 'A' is implicit in already
# saved slug for original
new_copy_letter = 'B'
# check for duplicates
if duplicates.exists():
# get the list of matching slugs
slugs = duplicates.values_list('slug', flat=True)
# if slug with specified copy is already unique, use that without
# further processing
if slug not in slugs:
return slug
# otherwise, calculate the appropriate copy letter to use
# collect copy suffixes from the slugs
# (trailing single uppercase letters only)
letters = [ltr for slug in slugs
for ltr in slug.rsplit('-', 1)[1]
if len(ltr) == 1 and ltr in string.ascii_uppercase]
# if existing copies letters are found, increment from the
# highest one (already sorted properly from queryset return)
if letters:
next_copy = chr(ord(letters[0]) + 1)
else:
# otherwise, default next copy is B (first is assumed to be A)
next_copy = 'B'
slug = '-'.join([base_slug, next_copy])
# also store the new copy letter as instance copy
self.copy = next_copy
return slug
def display_title(self):
'''display title - alternate title or work short title'''
return self.alternate_title or self.work.short_title or '[no title]'
display_title.short_description = 'Title'
def is_digitized(self):
'''boolean indicator if there is an associated digital edition'''
return bool(self.digital_edition) or \
bool(self.collected_in and self.collected_in.digital_edition)
# technically sorts on the foreign key, but that effectively filters
# instances with/without digital editions
is_digitized.admin_order_field = 'digital_edition'
is_digitized.boolean = True
def primary_language(self):
'''Primary :class:`Language` for this work instance. Use the only
or primary language for the instance if available; fall back
to the only or primary language for the associated work.'''
langs = self.languages.all()
# if instance has only one language, use that
# (whether or not marked as primary)
if langs.exists():
# if more than one, filter to just primary
if langs.count() > 1:
langs = langs.filter(instancelanguage__is_primary=True)
# otherwise, return language for the work
if not langs and self.work.languages.exists():
langs = self.work.languages.all()
# filter by primary if more than one
if langs.count() > 1:
langs = langs.filter(worklanguage__is_primary=True)
if langs:
return langs.first()
@property
def location(self):
'''Location in Derrida's library (currently only available for
digitized books).'''
# NOTE: PUL digital editions from the Finding Aid include the
# location in the item title
if self.is_digitized():
# Split manifest label on dashes; at most we want the first two
location_parts = self.digital_edition.label.split(' - ')[:2]
# some volumes include a "Gift Books" notation we don't care about
if location_parts[-1].startswith('Gift Books'):
location_parts = location_parts[:-1]
return ', '.join(location_parts)
@property
def item_type(self):
'''item type: book, book section, or journal article'''
if self.journal:
return 'Journal Article'
if self.collected_in:
return 'Book Section'
return 'Book'
def author_names(self):
'''Display Work author names; convenience access for display in admin'''
return self.work.author_names()
author_names.short_description = 'Authors'
author_names.admin_order_field = 'work__authors__authorized_name'
def catalogue_call_numbers(self):
'''Convenience access to catalogue call numbers, for display in admin'''
return ', '.join([c.call_number for c in self.instancecatalogue_set.all()
if c.call_number])
catalogue_call_numbers.short_description = 'Call Numbers'
catalogue_call_numbers.admin_order_field = 'catalogue__call_number'
def print_year(self):
'''Year from :attr:`print_date` if year is known'''
if self.print_date and self.print_date_year_known:
return self.print_date.year
@property
def year(self):
'''year for indexing and display; :attr:`print_date` if known,
otherwise :attr:`copyright_year`'''
return self.print_year() or self.copyright_year
def images(self):
'''Queryset containing all :class:`djiffy.models.Canvas` objects
associated with the digital edition for this item.'''
if self.digital_edition:
return self.digital_edition.canvases.all()
return Canvas.objects.none()
#: terms in an image label that indicate a canvas should be
#: considered an overview image (e.g., cover & outside views)
overview_labels = ['cover', 'spine', 'back', 'edge', 'view']
def overview_images(self):
'''Overview images for this book - cover, spine, etc.
Filtered based on canvas label naming conventions.'''
label_query = models.Q()
for overview_label in self.overview_labels:
label_query |= models.Q(label__icontains=overview_label)
return self.images().filter(label_query) \
.exclude(label__icontains='insertion')
def annotated_pages(self):
'''Annotated pages for this book. Filtered based on the presence
of a documented :class:`~derrida.interventions.models.Intervention`
in the database.'''
return self.images().filter(intervention__isnull=False).distinct()
def insertion_images(self):
'''Insertion images for this book.
Filtered based on canvas label naming conventions.'''
# NOTE: using Insertion because of possible case-sensitive
# search on mysql even when icontains is used
return self.images().filter(label__icontains='Insertion')
@classmethod
def allow_canvas_detail(cls, canvas):
'''Check if canvas detail view is allowed. Allows insertion images,
overview images, and pages with documented interventions.'''
return any([
'insertion' in canvas.label.lower(),
any(label in canvas.label.lower()
for label in cls.overview_labels),
canvas.intervention_set.exists()
])
def allow_canvas_large_image(self, canvas):
'''Check if canvas large image view is allowed. Always allows
insertion images and overview images; other pages with documented
interventions are allowed as long as they are not suppressed,
either via :attr:`suppress_all_images` or specific
:attr:`suppressed_images`.'''
# insertion & overview always allowed
if any(['insertion' in canvas.label.lower(),
any(label in canvas.label.lower()
for label in self.overview_labels)]):
# allow
return True
# if all other images are suppressed, deny without checking further
if self.suppress_all_images:
return False
# if image has interventions, check if it is suppressed
if canvas.intervention_set.exists():
# deny if suppressed, otherwise allow
return canvas not in self.suppressed_images.all()
@property
def related_instances(self):
'''Find related works; for now, this means works by the
same author. For a work that collects items, include
works by any book section authors.'''
authors = list(self.work.authors.all())
if self.collected_set.exists():
for instance in self.collected_set.all():
authors.extend(instance.work.authors.all())
return Instance.objects.filter(work__authors__in=authors) \
.exclude(pk=self.pk) \
.exclude(digital_edition__isnull=True)
class WorkSubject(Notable):
'''Through-model for work-subject relationship, to allow designating
a particular subject as primary or adding notes.'''
#: :class:`Subject`
subject = models.ForeignKey(Subject)
#: :class:`Work`
work = models.ForeignKey(Work)
#: boolean flag indicating if this subject is primary for this work
is_primary = models.BooleanField(default=False)
class Meta:
unique_together = ('subject', 'work')
verbose_name = 'Subject'
def __str__(self):
return '%s %s%s' % (self.work, self.subject,
' (primary)' if self.is_primary else '')
class WorkLanguage(Notable):
'''Through-model for work-language relationship, to allow designating
one language as primary or adding notes.'''
#: :class:`Language`
language = models.ForeignKey(Language)
#: :class:`Work`
work = models.ForeignKey(Work)
#: boolean flag indicating if this language is primary for this work
is_primary = models.BooleanField()
class Meta:
unique_together = ('work', 'language')
verbose_name = 'Language'
def __str__(self):
return '%s %s%s' % (self.work, self.language,
' (primary)' if self.is_primary else '')
class InstanceLanguage(Notable):
'''Through-model for instance-language relationship, to allow designating
one language as primary or adding notes.'''
#: :class:`Language`
language = models.ForeignKey(Language)
#: :class:`Instance`
instance = models.ForeignKey(Instance)
#: boolean flag indicating if this language is primary for this instance
is_primary = models.BooleanField()
class Meta:
unique_together = ('instance', 'language')
verbose_name = 'Language'
def __str__(self):
return '%s %s%s' % (self.instance, self.language,
' (primary)' if self.is_primary else '')
class InstanceCatalogue(Notable, DateRange):
'''Location of a work instance in the real world, associating it with an
owning institution.'''
institution = models.ForeignKey(OwningInstitution)
instance = models.ForeignKey(Instance)
is_current = models.BooleanField()
# using char instead of int because assuming call numbers may contain
# strings as well as numbers
call_number = models.CharField(max_length=255, blank=True, null=True,
help_text='Used for Derrida shelf mark')
class Meta:
verbose_name = 'Catalogue'
def __str__(self):
dates = ''
if self.dates:
dates = ' (%s)' % self.dates
return '%s / %s%s' % (self.instance, self.institution, dates)
class CreatorType(Named, Notable):
'''Type of creator role a person can have to a book - author,
editor, translator, etc.'''
uri = models.URLField(blank=True, null=True)
class InstanceCreator(Notable):
creator_type = models.ForeignKey(CreatorType)
# technically should disallow author here, but can clean that up later
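The copy-letter arithmetic in `generate_safe_slug` above can be isolated in a small, hypothetical helper. The name `next_copy_letter` is illustrative and the Django queryset handling is omitted; the original (suffix-less) slug is treated as the implicit copy 'A':

```python
import string

def next_copy_letter(existing_slugs, base_slug):
    """Compute the next copy letter for a duplicate slug.

    Trailing single uppercase letters on a slug are copy suffixes;
    a bare base slug is implicitly copy 'A', so the first duplicate gets 'B'.
    """
    letters = sorted(
        s.rsplit('-', 1)[-1]
        for s in existing_slugs
        if s.startswith(base_slug)
        and len(s.rsplit('-', 1)[-1]) == 1
        and s.rsplit('-', 1)[-1] in string.ascii_uppercase
    )
    # increment from the highest existing suffix, or start at 'B'
    return chr(ord(letters[-1]) + 1) if letters else 'B'
```

With an existing `smith-essay-1920` and `smith-essay-1920-B`, the next copy becomes `C`, mirroring the `chr(ord(...) + 1)` step in the method above.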
if len(self.arguments) >= 2:
try:
int(self.arguments[1])
except ValueError:
core.debug_print('Exception in Cloud command <damage> argument parsing:')
if core.config('debug', False) or core.state.get('is_daemon', False):
traceback.print_exc(file=sys.stdout)
return '<damage> must be a number'
return True
@handle_exceptions
def run(self):
import api
import bottle
if len(self.arguments) < 2:
damage = 0
else:
damage = int(self.arguments[1])
try:
item = api.api_item_by_damage(self.arguments[0], damage)
except bottle.HTTPError as e:
self.reply(str(e.body))
return
item_name = str(item['name'] if 'name' in item else item['stringID'])
paths_config = core.config('paths')
if 'json' in paths_config:
with open(os.path.join(paths_config['json'], 'cloud.json')) as cloud_json:
cloud = json.load(cloud_json)
else:
self.warning('Could not find cloud.json because config item .paths.json is missing')
return
for x, corridor, y, floor, z, chest in self.cloud_iter(cloud):
if chest['id'] == item['stringID'] and chest['damage'] == damage:
break
else:
self.reply(item_name + ' is not available in the Cloud')
return
if x == 0:
corridor_name = 'central corridor'
elif x == 1 and '2' not in floor:
corridor_name = 'left corridor'
elif x == -1 and '-2' not in floor:
corridor_name = 'right corridor'
else:
if x < 0:
direction = 'right'
x *= -1
else:
direction = 'left'
corridor_name = '{}{} corridor to the {}'.format(x, self.ordinal(x), direction)
row = z // 2 + 1
chest_wall = 'right' if z % 2 else 'left'
self.reply('{}: {}{} floor, {}, {}{} chest to the {}'.format(item_name, y, self.ordinal(y), corridor_name, row, self.ordinal(row), chest_wall))
class Command(BaseCommand):
"""perform a Minecraft server command"""
usage = '<command> [<arguments>...]'
def parse_args(self):
if len(self.arguments) == 0:
return False
return True
def permission_level(self):
return 4
@handle_exceptions
def run(self):
for line in minecraft.command(self.arguments[0], self.arguments[1:]).splitlines():
self.reply(line)
class DeathGames(BaseCommand):
"""record an assassination attempt in the Death Games log"""
usage = '(win | fail) [<attacker>] <target>'
def parse_args(self):
if len(self.arguments) not in range(2, 4):
return False
if self.arguments[0].lower() not in ('win', 'fail'):
return False
try:
nicksub.Person(self.arguments[1])
except nicksub.PersonNotFoundError:
try:
nicksub.Person(self.arguments[1], context=self.context)
except:
return '<attacker> must be a person'
if len(self.arguments) == 3:
try:
nicksub.Person(self.arguments[2])
except nicksub.PersonNotFoundError:
try:
nicksub.Person(self.arguments[2], context=self.context)
except:
return '<target> must be a person'
return True
def permission_level(self):
return 3
@handle_exceptions
def run(self):
success = self.arguments[0].lower() == 'win'
if len(self.arguments) == 3:
try:
attacker = nicksub.Person(self.arguments[1])
except nicksub.PersonNotFoundError:
attacker = nicksub.Person(self.arguments[1], context=self.context)
try:
target = nicksub.Person(self.arguments[2])
except nicksub.PersonNotFoundError:
target = nicksub.Person(self.arguments[2], context=self.context)
else:
attacker = self.sender
try:
target = nicksub.Person(self.arguments[1])
except nicksub.PersonNotFoundError:
target = nicksub.Person(self.arguments[1], context=self.context)
core.death_games_log(attacker, target, success)
class DeathTweet(BaseCommand):
"""toggle death message tweeting"""
usage = '[on | off [<time>]]'
def parse_args(self):
if len(self.arguments) > 2:
return False
elif len(self.arguments) >= 1:
if self.arguments[0] not in ('on', 'off'):
return False
if len(self.arguments) == 2:
if self.arguments[0] == 'on':
return False
try:
core.parse_timedelta(self.arguments[1])
except:
return '<time> must be a time interval, like 2h16m30s'
return True
def permission_level(self):
if len(self.arguments) == 2 and core.parse_timedelta(self.arguments[1]) > timedelta(days=1):
return 4
return 3
def reenable_death_tweets(self):
core.state['death_tweets'] = True
self.reply('Death tweets are back on')
@handle_exceptions
def run(self):
if len(self.arguments) == 0:
self.reply('Deathtweeting is currently ' + ('enabled' if core.state['death_tweets'] else 'disabled'))
elif self.arguments[0] == 'on':
core.state['death_tweets'] = True
self.reply('Deathtweeting is now enabled')
else:
if len(self.arguments) == 2:
number = core.parse_timedelta(self.arguments[1])
threading.Timer(number.total_seconds(), self.reenable_death_tweets).start()
core.state['death_tweets'] = False
self.reply('Deathtweeting is now disabled')
class EnableWorld(BaseCommand):
"""switch to a different Minecraft world"""
usage = '<world_name>'
def parse_args(self):
if len(self.arguments) != 1:
return False
return True
def permission_level(self):
return 4
def run(self):
if not core.state['server_control_lock'].acquire():
self.warning('Server access is locked. Not switching worlds.')
return
core.update_topic(special_status='Switching to {} world…'.format(self.arguments[0]))
if minecraft.enable_world(self.arguments[0], reply=self.reply, log_path=os.path.join(core.config('paths')['logs'], 'logins.log')):
self.reply('Server restarted.')
else:
self.reply('Something went wrong while enabling the {} world!'.format(self.arguments[0]))
core.update_topic(special_status=None)
core.state['server_control_lock'].release()
class FixStatus(BaseCommand):
"""update the server status on the website and in the channel topic"""
def parse_args(self):
if len(self.arguments):
return False
return True
@handle_exceptions
def run(self):
core.update_all(force=True)
class Help(BaseCommand):
"""get help on a command"""
usage = '[aliases | commands | <alias> | <command>]'
def parse_args(self):
if len(self.arguments) > 2:
return False
return True
def reply(self, irc_reply, tellraw_reply=None):
if self.context == 'irc':
if self.addressing is None:
core.state['bot'].say(self.sender.irc_nick(respect_highlight_option=False), irc_reply)
else:
core.state['bot'].say(self.addressing.irc_nick(respect_highlight_option=False), '(from ' + self.sender.irc_nick(respect_highlight_option=False) + ') ' + irc_reply)
else:
return super().reply(irc_reply, tellraw_reply)
@handle_exceptions
def run(self):
if len(self.arguments) == 0:
self.reply('Hello, I am ' + ('wurstminebot' if core.config('irc').get('nick', 'wurstminebot') == 'wurstminebot' else core.config('irc')['nick'] + ', a wurstminebot') + '. I sync messages between IRC and Minecraft, and respond to various commands.')
self.reply('Execute “Help commands” for a list of commands, or “Help <command>” (replace <command> with a command name) for help on a specific command.', 'Execute "Help commands" for a list of commands, or "Help <command>" (replace <command> with a command name) for help on a specific command.')
help_text = 'To execute a command, send it to me in private chat (here) or address me in a channel (like this: “' + core.config('irc').get('nick', 'wurstminebot') + ': <command>...”). You can also execute commands in a channel or in Minecraft like this: “!<command>...”.'
help_text_tellraw = 'You can execute a command by typing "!<command>..." in the in-game chat or an IRC channel. You can also send the command to me in a private IRC query (without the "!") or address me in a channel (like this: "' + core.config('irc').get('nick', 'wurstminebot') + ': <command>...").'
self.reply(help_text, help_text_tellraw)
elif self.arguments[0].lower() == 'aliases':
num_aliases = len(list(core.config('aliases').keys()))
if num_aliases > 0:
help_text = 'Currently defined aliases: ' + ', '.join(sorted(list(core.config('aliases').keys()))) + '. For more information, execute'
else:
help_text = 'No aliases are currently defined. For more information, execute'
self.reply(help_text + ' “Help alias”.', help_text + ' "Help alias".')
elif self.arguments[0].lower() == 'commands':
num_aliases = len(list(core.config('aliases').keys()))
self.reply('Available commands: ' + ', '.join(sorted([command_class.__name__ for command_class in classes])) + (', and ' + str(num_aliases) + ' aliases.' if num_aliases > 0 else '.'))
elif self.arguments[0].lower() in core.config('aliases'):
alias_dict = core.config('aliases')[self.arguments[0].lower()]
if alias_dict.get('type') == 'command':
self.reply(self.arguments[0].lower() + ' is ' + ('an alias of ' + alias_dict['command_name'] if 'command_name' in alias_dict else 'a broken alias') + '.')
elif alias_dict.get('type') == 'disable':
self.reply(self.arguments[0].lower() + ' is disabled.')
elif alias_dict.get('type') == 'reply':
self.reply(self.arguments[0].lower() + ' is an echo alias. Execute it to see what the reply is.')
elif alias_dict.get('type') == 'say':
self.reply(self.arguments[0].lower() + ' is an alias. Execute it to see what it stands for.')
else:
self.reply(self.arguments[0] + ' is a broken alias.')
else:
for command_class in classes:
if command_class.__name__.lower() == self.arguments[0].lower():
self.reply(command_class.__name__ + ': ' + command_class.__doc__)
self.reply('Usage: ' + command_class.__name__ + ('' if command_class.usage is None else (' ' + command_class.usage)))
self.reply('More info: http://wiki.wurstmineberg.de/Commands#' + command_class.__name__, {'text': 'More info', 'clickEvent': {'action': 'open_url', 'value': 'http://wiki.wurstmineberg.de/Commands#' + command_class.__name__}})
break
else:
self.reply(core.ErrorMessage.unknown(self.arguments[0]))
class Invite(BaseCommand):
"""invite a new player"""
usage = '<unique_id> <minecraft_name> [<twitter_username>]'
def parse_args(self):
if len(self.arguments) not in range(2, 4):
return False
if not re.match('[a-z][0-9a-z]{1,15}$', self.arguments[0].lower()):
            return '<unique_id> must be a valid Wurstmineberg ID: alphanumeric, 2 to 16 characters, and start with a letter'
try:
nicksub.Person(self.arguments[0], strict=False)
        except Exception:
pass # person with this ID does not exist
else:
return 'a person with this Wurstmineberg ID already exists'
if not re.match(minecraft.regexes.player, self.arguments[1]):
return '<minecraft_name> is not a valid Minecraft nickname'
if len(self.arguments) > 2 and not re.match('@?[A-Za-z0-9_]{1,15}$', self.arguments[2]):
return '<twitter_username> is invalid, see https://support.twitter.com/articles/101299'
return True
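The `<unique_id>` rule above can be exercised in isolation. A minimal sketch (the helper name is made up; the regex mirrors the one in `parse_args`, which allows one leading letter plus 1 to 15 more characters):

```python
import re

def valid_wurstmineberg_id(candidate):
    """Alphanumeric, starts with a letter, 2 to 16 characters total."""
    return bool(re.match('[a-z][0-9a-z]{1,15}$', candidate.lower()))
```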
def permission_level(self):
response = requests.get('https://s3.amazonaws.com/MinecraftSkins/' + self.arguments[1] + '.png')
if response.status_code != 200:
return 4
return 3
@handle_exceptions
def run(self):
if len(self.arguments) == 3 and self.arguments[2] is not None and len(self.arguments[2]):
screen_name = self.arguments[2][1:] if self.arguments[2].startswith('@') else self.arguments[2]
else:
screen_name = None
minecraft.whitelist_add(self.arguments[0].lower(), minecraft_nick=self.arguments[1], people_file=core.config('paths').get('people'), person_status='invited', invited_by=self.sender)
self.reply('A new person with id ' + self.arguments[0].lower() + ' is now invited. The !Whitelist command must be run by a bot op.')
if screen_name is not None:
core.set_twitter(nicksub.Person(self.arguments[0]), screen_name)
self.reply('@' + core.config('twitter')['screen_name'] + ' is now following @' + screen_name)
class Join(BaseCommand):
"""make the bot join a channel"""
usage = '<channel>'
def parse_args(self):
if len(self.arguments) != 1:
return False
if not self.arguments[0].startswith('#'):
return '<channel> is not a valid IRC channel name'
return True
def permission_level(self):
return 4
@handle_exceptions
def run(self):
chans = sorted(core.config('irc').get('channels', []))
        if str(self.arguments[0])
          # We need to check the line forward for NOLINT
raw_lines = clean_lines.raw_lines
ParseNolintSuppressions(filename, raw_lines[endlinenum-1], endlinenum-1,
error)
ParseNolintSuppressions(filename, raw_lines[endlinenum], endlinenum,
error)
error(filename, endlinenum, 'readability/braces', 4,
"You don't need a ; after a }")
def CheckEmptyBlockBody(filename, clean_lines, linenum, error):
"""Look for empty loop/conditional body with only a single semicolon.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
# Search for loop keywords at the beginning of the line. Because only
# whitespaces are allowed before the keywords, this will also ignore most
# do-while-loops, since those lines should start with closing brace.
#
# We also check "if" blocks here, since an empty conditional block
# is likely an error.
line = clean_lines.elided[linenum]
matched = Match(r'\s*(for|while|if)\s*\(', line)
if matched:
# Find the end of the conditional expression.
(end_line, end_linenum, end_pos) = CloseExpression(
clean_lines, linenum, line.find('('))
# Output warning if what follows the condition expression is a semicolon.
# No warning for all other cases, including whitespace or newline, since we
# have a separate check for semicolons preceded by whitespace.
if end_pos >= 0 and Match(r';', end_line[end_pos:]):
if matched.group(1) == 'if':
error(filename, end_linenum, 'whitespace/empty_conditional_body', 5,
'Empty conditional bodies should use {}')
else:
error(filename, end_linenum, 'whitespace/empty_loop_body', 5,
'Empty loop bodies should use {} or continue')
# Check for if statements that have completely empty bodies (no comments)
# and no else clauses.
if end_pos >= 0 and matched.group(1) == 'if':
# Find the position of the opening { for the if statement.
# Return without logging an error if it has no brackets.
opening_linenum = end_linenum
opening_line_fragment = end_line[end_pos:]
# Loop until EOF or find anything that's not whitespace or opening {.
while not Search(r'^\s*\{', opening_line_fragment):
if Search(r'^(?!\s*$)', opening_line_fragment):
# Conditional has no brackets.
return
opening_linenum += 1
if opening_linenum == len(clean_lines.elided):
# Couldn't find conditional's opening { or any code before EOF.
return
opening_line_fragment = clean_lines.elided[opening_linenum]
# Set opening_line (opening_line_fragment may not be entire opening line).
opening_line = clean_lines.elided[opening_linenum]
# Find the position of the closing }.
opening_pos = opening_line_fragment.find('{')
if opening_linenum == end_linenum:
# We need to make opening_pos relative to the start of the entire line.
opening_pos += end_pos
(closing_line, closing_linenum, closing_pos) = CloseExpression(
clean_lines, opening_linenum, opening_pos)
if closing_pos < 0:
return
# Now construct the body of the conditional. This consists of the portion
# of the opening line after the {, all lines until the closing line,
# and the portion of the closing line before the }.
if (clean_lines.raw_lines[opening_linenum] !=
CleanseComments(clean_lines.raw_lines[opening_linenum])):
# Opening line ends with a comment, so conditional isn't empty.
return
if closing_linenum > opening_linenum:
# Opening line after the {. Ignore comments here since we checked above.
body = list(opening_line[opening_pos+1:])
# All lines until closing line, excluding closing line, with comments.
body.extend(clean_lines.raw_lines[opening_linenum+1:closing_linenum])
# Closing line before the }. Won't (and can't) have comments.
body.append(clean_lines.elided[closing_linenum][:closing_pos-1])
body = '\n'.join(body)
else:
# If statement has brackets and fits on a single line.
body = opening_line[opening_pos+1:closing_pos-1]
# Check if the body is empty
if not _EMPTY_CONDITIONAL_BODY_PATTERN.search(body):
return
# The body is empty. Now make sure there's not an else clause.
current_linenum = closing_linenum
current_line_fragment = closing_line[closing_pos:]
# Loop until EOF or find anything that's not whitespace or else clause.
while Search(r'^\s*$|^(?=\s*else)', current_line_fragment):
if Search(r'^(?=\s*else)', current_line_fragment):
# Found an else clause, so don't log an error.
return
current_linenum += 1
if current_linenum == len(clean_lines.elided):
break
current_line_fragment = clean_lines.elided[current_linenum]
# The body is empty and there's no else clause until EOF or other code.
error(filename, end_linenum, 'whitespace/empty_if_body', 4,
('If statement had no body and no else clause'))
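The core of the empty-body check, stripped down to one line of input: this sketch only handles conditions that open and close on the same line (the real code above uses `CloseExpression` to cross lines), and reports the keyword when the closing `)` is followed directly by a semicolon.

```python
import re

def empty_conditional_body(line):
    """Return 'for'/'while'/'if' when the condition is followed by ';',
    else None. Single-line simplification of CheckEmptyBlockBody."""
    matched = re.match(r'\s*(for|while|if)\s*\(', line)
    if not matched:
        return None
    depth = 0
    for pos, ch in enumerate(line):
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth == 0:
                # Warn only when a semicolon immediately follows the ')'.
                return matched.group(1) if re.match(r';', line[pos + 1:]) else None
    return None
```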
def FindCheckMacro(line):
"""Find a replaceable CHECK-like macro.
Args:
line: line to search on.
Returns:
(macro name, start position), or (None, -1) if no replaceable
macro is found.
"""
for macro in _CHECK_MACROS:
i = line.find(macro)
if i >= 0:
# Find opening parenthesis. Do a regular expression match here
# to make sure that we are matching the expected CHECK macro, as
# opposed to some other macro that happens to contain the CHECK
# substring.
matched = Match(r'^(.*\b' + macro + r'\s*)\(', line)
if not matched:
continue
return (macro, len(matched.group(1)))
return (None, -1)
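A self-contained sketch of the same lookup, assuming a plain `re`-based match and a small hardcoded macro list in place of cpplint's `_CHECK_MACROS`. The word boundary in the regex is what keeps e.g. `MY_CHECK(...)` from matching `CHECK`:

```python
import re

_DEMO_CHECK_MACROS = ['DCHECK', 'CHECK', 'EXPECT_TRUE', 'ASSERT_TRUE']

def find_check_macro(line):
    """Return (macro name, position of its opening '('), or (None, -1)."""
    for macro in _DEMO_CHECK_MACROS:
        if macro not in line:
            continue
        # Anchor on a word boundary so substrings of other macros don't match.
        matched = re.match(r'^(.*\b' + macro + r'\s*)\(', line)
        if matched:
            return (macro, len(matched.group(1)))
    return (None, -1)
```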
def CheckCheck(filename, clean_lines, linenum, error):
"""Checks the use of CHECK and EXPECT macros.
Args:
filename: The name of the current file.
clean_lines: A CleansedLines instance containing the file.
linenum: The number of the line to check.
error: The function to call with any errors found.
"""
# Decide the set of replacement macros that should be suggested
lines = clean_lines.elided
(check_macro, start_pos) = FindCheckMacro(lines[linenum])
if not check_macro:
return
# Find end of the boolean expression by matching parentheses
(last_line, end_line, end_pos) = CloseExpression(
clean_lines, linenum, start_pos)
if end_pos < 0:
return
# If the check macro is followed by something other than a
# semicolon, assume users will log their own custom error messages
# and don't suggest any replacements.
if not Match(r'\s*;', last_line[end_pos:]):
return
if linenum == end_line:
expression = lines[linenum][start_pos + 1:end_pos - 1]
else:
expression = lines[linenum][start_pos + 1:]
for i in xrange(linenum + 1, end_line):
expression += lines[i]
expression += last_line[0:end_pos - 1]
# Parse expression so that we can take parentheses into account.
# This avoids false positives for inputs like "CHECK((a < 4) == b)",
# which is not replaceable by CHECK_LE.
lhs = ''
rhs = ''
operator = None
while expression:
matched = Match(r'^\s*(<<|<<=|>>|>>=|->\*|->|&&|\|\||'
r'==|!=|>=|>|<=|<|\()(.*)$', expression)
if matched:
token = matched.group(1)
if token == '(':
# Parenthesized operand
expression = matched.group(2)
(end, _) = FindEndOfExpressionInLine(expression, 0, ['('])
if end < 0:
return # Unmatched parenthesis
lhs += '(' + expression[0:end]
expression = expression[end:]
elif token in ('&&', '||'):
# Logical and/or operators. This means the expression
# contains more than one term, for example:
# CHECK(42 < a && a < b);
#
# These are not replaceable with CHECK_LE, so bail out early.
return
elif token in ('<<', '<<=', '>>', '>>=', '->*', '->'):
# Non-relational operator
lhs += token
expression = matched.group(2)
else:
# Relational operator
operator = token
rhs = matched.group(2)
break
else:
# Unparenthesized operand. Instead of appending to lhs one character
# at a time, we do another regular expression match to consume several
# characters at once if possible. Trivial benchmark shows that this
# is more efficient when the operands are longer than a single
# character, which is generally the case.
matched = Match(r'^([^-=!<>()&|]+)(.*)$', expression)
if not matched:
matched = Match(r'^(\s*\S)(.*)$', expression)
if not matched:
break
lhs += matched.group(1)
expression = matched.group(2)
# Only apply checks if we got all parts of the boolean expression
if not (lhs and operator and rhs):
return
# Check that rhs do not contain logical operators. We already know
# that lhs is fine since the loop above parses out && and ||.
if rhs.find('&&') > -1 or rhs.find('||') > -1:
return
# At least one of the operands must be a constant literal. This is
# to avoid suggesting replacements for unprintable things like
# CHECK(variable != iterator)
#
# The following pattern matches decimal, hex integers, strings, and
# characters (in that order).
lhs = lhs.strip()
rhs = rhs.strip()
match_constant = r'^([-+]?(\d+|0[xX][0-9a-fA-F]+)[lLuU]{0,3}|".*"|\'.*\')$'
if Match(match_constant, lhs) or Match(match_constant, rhs):
pass
# Note: since we know both lhs and rhs, we can provide a more
# descriptive error message like:
# Consider using CHECK_EQ(x, 42) instead of CHECK(x == 42)
# Instead of:
# Consider using CHECK_EQ instead of CHECK(a == b)
#
# We are still keeping the less descriptive message because if lhs
# or rhs gets long, the error message might become unreadable.
# error(filename, linenum, 'readability/check', 2,
# 'Consider using %s instead of %s(a %s b)' % (
# _CHECK_REPLACEMENT[check_macro][operator],
# check_macro, operator))
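For the simple cases, the parsing loop above reduces to this sketch (parenthesized operands and shift/member operators are omitted here; the real code handles both):

```python
import re

def split_comparison(expression):
    """Split a boolean expression such as 'a == 42' into (lhs, operator, rhs).
    Returns (None, None, None) when the expression is not a single
    top-level relational comparison."""
    lhs, operator, rhs = '', None, ''
    while expression:
        matched = re.match(r'^\s*(&&|\|\||==|!=|>=|>|<=|<)(.*)$', expression)
        if matched:
            token = matched.group(1)
            if token in ('&&', '||'):
                return (None, None, None)  # more than one term: not replaceable
            operator, rhs = token, matched.group(2)
            break
        # Consume a run of ordinary operand characters at once.
        matched = re.match(r'^([^-=!<>()&|]+)(.*)$', expression)
        if not matched:
            matched = re.match(r'^(\s*\S)(.*)$', expression)
            if not matched:
                break
        lhs += matched.group(1)
        expression = matched.group(2)
    if not (lhs and operator and rhs):
        return (None, None, None)
    if '&&' in rhs or '||' in rhs:
        return (None, None, None)  # rhs contains logical operators
    return (lhs.strip(), operator, rhs.strip())
```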
def CheckAltTokens(filename, clean_lines, linenum, error):
"""Check alternative | |
import socket
from collections import OrderedDict
from contextlib import suppress, contextmanager
from fcntl import LOCK_EX, flock, LOCK_UN
from functools import reduce
from http.client import HTTPException, BadStatusLine
from itertools import chain
from os.path import join
from os import remove
from urllib.error import URLError, HTTPError
from logging import getLogger
from shutil import rmtree
from consul_kv import Connection, map_dictionary, dictionary_map
from consul_kv.utils import dict_merge
from raptiformica.settings import conf
from raptiformica.utils import load_json, write_json, list_all_files_with_extension_in_directory, ensure_directory, \
retry, group_n_elements
import raptiformica.distributed.proxy
log = getLogger(__name__)
consul_conn = Connection(
endpoint=conf().KEY_VALUE_ENDPOINT,
timeout=conf().KEY_VALUE_TIMEOUT
)
API_EXCEPTIONS = (HTTPError, HTTPException, URLError,
ConnectionRefusedError, ConnectionResetError,
BadStatusLine, OSError, ValueError,
socket.timeout)
@contextmanager
def config_cache_lock():
"""
Obtain the config cache lock, perform the code
in the context and then let the lock go.
:yield None
:return None:
"""
with open(conf().CONFIG_CACHE_LOCK, 'w+') as lock:
try:
log.debug(
"Getting config cache lock. "
"If this blocks forever, try deleting file "
"{} and restart the process.".format(conf().CONFIG_CACHE_LOCK)
)
flock(lock, LOCK_EX) # Blocks until lock becomes available
yield
finally:
log.debug("Releasing the config cache lock")
flock(lock, LOCK_UN)
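The locking pattern on its own, assuming a POSIX system since it relies on `fcntl.flock`; the lock file path is whatever the caller supplies:

```python
from contextlib import contextmanager
from fcntl import LOCK_EX, LOCK_UN, flock

@contextmanager
def file_lock(path):
    """Hold an exclusive advisory lock on `path` for the duration of
    the context. flock blocks until the lock becomes available."""
    with open(path, 'w+') as lock:
        try:
            flock(lock, LOCK_EX)
            yield
        finally:
            flock(lock, LOCK_UN)
```

Note the lock is advisory: it only coordinates processes that also call `flock` on the same file.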
def write_config_mapping(config, config_file):
"""
Write the config to a config file
:param dict config: The config to be dumped to json
:param str config_file: The mutable config file
:return None:
"""
ensure_directory(conf().USER_ARTIFACTS_DIR)
# Lock the config cache file so two processes can't
# write to the file at the same time and corrupt the json
with config_cache_lock():
write_json(config, config_file)
def load_module_config(modules_dir=None):
"""
    Find all configuration files in the modules_dir and return them as a list of parsed configs
:param str modules_dir: path to look for .json config files in
:return list configs: list of parsed configs
"""
modules_dir = modules_dir or conf().MODULES_DIR
file_names = list_all_files_with_extension_in_directory(
modules_dir, 'json'
)
def try_load_module(filename):
log.debug("Loading module config from {}".format(filename))
try:
config = load_json(filename)
if 'raptiformica_api_version' not in config:
raise ValueError(
"Not a raptiformica config file. Skipping.."
)
return config
except ValueError:
log.debug("Failed to parse module config in {}, "
"skipping..".format(filename))
return filter(lambda x: x is not None, map(try_load_module, file_names))
def load_module_configs(module_dirs=None):
"""
Load the module configs for all the specified modules dirs and
return a flattened list containing the configs
:param iterable module_dirs: directories to look for module configs in
:return list configs: list of parsed configs
"""
module_dirs = module_dirs or (conf().MODULES_DIR, conf().USER_MODULES_DIR)
return chain.from_iterable(
map(load_module_config, module_dirs)
)
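`chain.from_iterable` flattens exactly one level of nesting, lazily, which is why callers that need a list must materialize the result. With illustrative per-directory config lists:

```python
from itertools import chain

per_directory = [
    [{'name': 'docker'}, {'name': 'vagrant'}],  # configs from the first modules dir
    [{'name': 'qemu'}],                         # configs from the second modules dir
]
flattened = list(chain.from_iterable(per_directory))
```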
def try_config_request(func):
"""
Try the config request on the local Consul instance's API port,
if that fails attempt the same request on the API port of one of
the locally known neighbours.
:param func func: Function to attempt
:return dict mapping: The function result
"""
try:
log.debug("Attempting API call on local Consul instance")
return func()
except API_EXCEPTIONS:
if conf().FORWARDED_CONSUL_ONCE_ALREADY:
log.debug(
"Not attempting to forward any remote Consul instance, "
"already attempted that once. Working from the most recent "
"available cached config now"
)
else:
log.debug("Attempting API call on remote Consul instance")
conf().set_forwarded_remote_consul_once()
with suppress(RuntimeError):
# Absolute import because the distributed proxy
# imports from settings as well
with raptiformica.distributed.proxy.forward_any_port(
source_port=8500, predicate=[
'consul', 'kv', 'get', '/raptiformica/raptiformica_api_version'
]
):
return func()
raise
def upload_config_mapping(mapped):
"""
Upload a mapped config to the distributed key value store
:param dict[key, value] mapped: dict of key value pairs
:return None:
"""
batches = [
# todo: replace OrderedDict with a regular dict,
# insertion order is not significant here but it made it
# easier to unit test this function. That could also be
# solved by reworking the unit tests to accept batches of
# any order.
OrderedDict(sub) for sub in group_n_elements(
list(mapped.items()), n=32
)
]
batch_count = len(batches)
log.debug(
"Uploading local configs to "
"distributed key value store in {}"
"".format(
"one batch" if batch_count == 1
else "{} batches".format(batch_count)
)
)
for batch in batches:
# Note that this only uses transactions for every
        # 32 key value pairs in the mapping. If you need a
# transactional guarantee then upload a subset of the
# mapping smaller than the batch threshold. Presumably
# all sections of the shared config mapping are commutative.
try_config_request(lambda: consul_conn.put_mapping(batch))
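`group_n_elements` is imported from `raptiformica.utils`; a stand-in with the assumed chunk-of-n semantics shows how the batches are formed:

```python
def group_n_elements(items, n):
    """Assumed semantics of raptiformica.utils.group_n_elements:
    split a list into consecutive chunks of at most n elements."""
    return [items[i:i + n] for i in range(0, len(items), n)]

# 70 key/value pairs with a batch size of 32 yields three batches.
batches = group_n_elements(list(range(70)), n=32)
```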
def download_config_mapping():
"""
Get the entire config from the distributed key value store
:return dict mapping: all registered key value pairs
"""
log.debug(
"Attempting to retrieve the shared config "
"from the distributed key value store"
)
mapping = try_config_request(
lambda: consul_conn.get_mapping(conf().KEY_VALUE_PATH)
)
if not mapping:
raise ValueError(
"Retrieved empty data from distributed key "
"value store. Not accepting."
)
if not validate_config_mapping(mapping):
raise ValueError(
"Retrieved corrupted data from distributed key "
"value store. Not accepting."
)
return mapping
def on_disk_mapping(module_dirs=None):
"""
Retrieve the on disk config mapping
:param iterable module_dirs: directories to look for module configs in
:return dict mapping: retrieved key value mapping with config data
"""
module_dirs = module_dirs or (conf().MODULES_DIR, conf().USER_MODULES_DIR)
configs = load_module_configs(module_dirs=module_dirs)
return {
join(conf().KEY_VALUE_PATH, k): v for k, v in
reduce(dict_merge, map(map_dictionary, configs), dict()).items()
}
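A rough standalone illustration of the flatten-and-merge reduction performed here; the real `map_dictionary` and `dict_merge` from `consul_kv` may use different separators and merge rules:

```python
from functools import reduce

def flatten(d, prefix=''):
    """Flatten a nested dict into {'a/b/c': value} style key paths."""
    out = {}
    for k, v in d.items():
        key = prefix + '/' + k if prefix else k
        if isinstance(v, dict):
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out

def merge(a, b):
    """Merge flat mappings left to right; later values win."""
    merged = dict(a)
    merged.update(b)
    return merged

configs = [
    {'server': {'docker': {'name': 'docker'}}},
    {'compute': {'vagrant': {'name': 'vagrant'}}},
]
mapping = reduce(merge, map(flatten, configs), dict())
```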
def try_update_config_mapping(mapping):
"""
    If no consul cluster has been established yet, there is no
    distributed key value store; in that case, write the mapping
    to the local cache so it can be copied by rsync to new hosts
    until at least three can be linked together and form consensus.
:param dict mapping: key value mapping with config data
:return dict mapping: retrieved key value mapping with config data
"""
try:
mapping = update_config_mapping(mapping)
except API_EXCEPTIONS:
cached_mapping = get_config_mapping()
cached_mapping.update(mapping)
cache_config_mapping(cached_mapping)
mapping = cached_mapping
return mapping
def try_delete_config(key, recurse=False):
"""
Try to delete a key in the distributed key value store,
if there is no distributed key value store yet or we can't
connect, then remove the key from the local config.
:param str key: key to remove
:param bool recurse: recurse the path and delete all entries
:return:
"""
# todo: in the case of an offline delete those changes will
# never be synced back to the distributed k v store but
# should instead be fixed by some form of eventual consistency
try:
consul_conn.delete(key, recurse=recurse)
sync_shared_config_mapping()
except API_EXCEPTIONS:
log.debug(
"Could not connect to the distributed key value store to "
"delete the key. Only deleting from local cache for now."
)
cached_mapping = get_config_mapping()
mapping = {
k: v for k, v in cached_mapping.items() if
# find all keys starting with the key if recurse,
# else only filter away the key with an exact match
not (k.startswith(key) if recurse else k == key)
}
cache_config_mapping(mapping)
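The cache-side delete semantics can be isolated: exact match by default, prefix match with `recurse=True` (the helper name here is illustrative; the logic mirrors the dict comprehension above):

```python
def delete_from_cache(cached_mapping, key, recurse=False):
    """Drop `key` from a flat mapping; with recurse=True, drop every
    key under that path prefix as well."""
    return {
        k: v for k, v in cached_mapping.items()
        if not (k.startswith(key) if recurse else k == key)
    }

cache = {'a/b': 1, 'a/b/c': 2, 'a/d': 3}
```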
@retry(attempts=5, expect=API_EXCEPTIONS)
def update_config_mapping(mapping):
"""
Upload a new mapping to the distributed key value store and
retrieve the latest mapping
:param dict mapping: the mapping to PUT to the k v API
:return dict mapping: retrieved key value mapping with config data
"""
upload_config_mapping(mapping)
return download_config_mapping()
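The `@retry` decorator is imported from `raptiformica.utils`; a minimal sketch with the assumed semantics (re-invoke on the expected exceptions, re-raise after the final attempt):

```python
import functools

def retry(attempts, expect):
    """Assumed semantics of raptiformica.utils.retry."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return func(*args, **kwargs)
                except expect:
                    if attempt == attempts - 1:
                        raise
        return wrapper
    return decorator

calls = []

@retry(attempts=3, expect=(ValueError,))
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError('not yet')
    return 'ok'
```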
def sync_shared_config_mapping():
"""
Retrieve the remote config mapping or upload
the local configuration mapping if none exists in
the distributed key value store yet. Also caches
the downloaded result.
:return dict mapping: retrieved key value mapping with config data
"""
try:
mapping = download_config_mapping()
except (ValueError, HTTPError):
mapping = get_local_config_mapping()
mapping = update_config_mapping(mapping)
cache_config_mapping(mapping)
return mapping
def cache_config_mapping(mapping):
"""
Write the retrieved config to disk so the mutations are retained
even in case of network failure
:param dict mapping: the cached k v mapping
:return None:
"""
if not mapping:
raise RuntimeError(
"Passed key value mapping was null. "
"Refusing to cache empty mapping!"
)
if not validate_config_mapping(mapping):
raise RuntimeError(
"Missing all required service types from config. "
"It might have gotten corrupted. "
"Refusing to cache this mapping!"
)
write_config_mapping(mapping, conf().MUTABLE_CONFIG)
def cached_config_mapping():
"""
Retrieve the cached config of the last successful config download
from distributed key value store. Waits with reading the cached
config until the exclusive lock can be acquired to prevent reading
truncated json because another process could be updating the cache.
:return dict mapping: the k v config mapping
"""
with config_cache_lock():
return load_json(conf().MUTABLE_CONFIG)
def validate_config_mapping(mapping):
"""
Validate a config mapping. If for some reason the retrieved mapping is
corrupted we need to act accordingly.
:param dict mapping: key value mapping with config data
:return bool valid: True if valid, False if not
"""
mapping_as_dict = dictionary_map(mapping)
for config_type in ('server', 'compute', 'platform'):
        # verify the required parameter 'datacenter_id' is set
        if self.api_client.client_side_validation and ('datacenter_id' not in local_var_params or  # noqa: E501
local_var_params['datacenter_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `datacenter_id` when calling `datacenters_volumes_labels_get`") # noqa: E501
# verify the required parameter 'volume_id' is set
if self.api_client.client_side_validation and ('volume_id' not in local_var_params or # noqa: E501
local_var_params['volume_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `volume_id` when calling `datacenters_volumes_labels_get`") # noqa: E501
if self.api_client.client_side_validation and 'depth' in local_var_params and local_var_params['depth'] > 10: # noqa: E501
raise ApiValueError("Invalid value for parameter `depth` when calling `datacenters_volumes_labels_get`, must be a value less than or equal to `10`") # noqa: E501
if self.api_client.client_side_validation and 'depth' in local_var_params and local_var_params['depth'] < 0: # noqa: E501
raise ApiValueError("Invalid value for parameter `depth` when calling `datacenters_volumes_labels_get`, must be a value greater than or equal to `0`") # noqa: E501
if self.api_client.client_side_validation and 'offset' in local_var_params and local_var_params['offset'] < 0: # noqa: E501
raise ApiValueError("Invalid value for parameter `offset` when calling `datacenters_volumes_labels_get`, must be a value greater than or equal to `0`") # noqa: E501
if self.api_client.client_side_validation and 'limit' in local_var_params and local_var_params['limit'] > 10000: # noqa: E501
raise ApiValueError("Invalid value for parameter `limit` when calling `datacenters_volumes_labels_get`, must be a value less than or equal to `10000`") # noqa: E501
if self.api_client.client_side_validation and 'limit' in local_var_params and local_var_params['limit'] < 1: # noqa: E501
raise ApiValueError("Invalid value for parameter `limit` when calling `datacenters_volumes_labels_get`, must be a value greater than or equal to `1`") # noqa: E501
collection_formats = {}
path_params = {}
if 'datacenter_id' in local_var_params:
path_params['datacenterId'] = local_var_params['datacenter_id'] # noqa: E501
if 'volume_id' in local_var_params:
path_params['volumeId'] = local_var_params['volume_id'] # noqa: E501
query_params = []
if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501
query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501
if 'depth' in local_var_params and local_var_params['depth'] is not None: # noqa: E501
query_params.append(('depth', local_var_params['depth'])) # noqa: E501
if 'offset' in local_var_params and local_var_params['offset'] is not None: # noqa: E501
query_params.append(('offset', local_var_params['offset'])) # noqa: E501
if 'limit' in local_var_params and local_var_params['limit'] is not None: # noqa: E501
query_params.append(('limit', local_var_params['limit'])) # noqa: E501
header_params = {}
if 'x_contract_number' in local_var_params:
header_params['X-Contract-Number'] = local_var_params['x_contract_number'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['Basic Authentication', 'Token Authentication'] # noqa: E501
response_type = 'LabelResources'
if 'response_type' in kwargs:
response_type = kwargs['response_type']
return self.api_client.call_api(
'/datacenters/{datacenterId}/volumes/{volumeId}/labels', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=response_type, # noqa: E501
auth_settings=auth_settings,
async_req=local_var_params.get('async_req'),
_return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501
_preload_content=local_var_params.get('_preload_content', True),
_request_timeout=local_var_params.get('_request_timeout'),
collection_formats=collection_formats,
_request_auth=local_var_params.get('_request_auth'))
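The query-parameter assembly above repeats one pattern: append a parameter only when it is set. As a standalone illustration (not a helper that exists in the generated client):

```python
def build_query_params(**params):
    """Collect non-None query parameters as (name, value) tuples, the
    way the generated method assembles `query_params` above."""
    return [(name, value) for name, value in params.items() if value is not None]

qp = build_query_params(pretty=True, depth=None, limit=100)
```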
def datacenters_volumes_labels_post(self, datacenter_id, volume_id, label, **kwargs): # noqa: E501
"""Add a Label to Volume # noqa: E501
This will add a label to the volume. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.datacenters_volumes_labels_post(datacenter_id, volume_id, label, async_req=True)
>>> result = thread.get()
:param datacenter_id: The unique ID of the Datacenter (required)
:type datacenter_id: str
:param volume_id: The unique ID of the Volume (required)
:type volume_id: str
:param label: Label to be added (required)
:type label: LabelResource
:param pretty: Controls whether response is pretty-printed (with indentation and new lines)
:type pretty: bool
:param depth: Controls the details depth of response objects. Eg. GET /datacenters/[ID] - depth=0: only direct properties are included. Children (servers etc.) are not included - depth=1: direct properties and children references are included - depth=2: direct properties and children properties are included - depth=3: direct properties and children properties and children's children are included - depth=... and so on
:type depth: int
:param x_contract_number: Users having more than 1 contract need to provide contract number, against which all API requests should be executed
:type x_contract_number: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: LabelResource
"""
kwargs['_return_http_data_only'] = True
return self.datacenters_volumes_labels_post_with_http_info(datacenter_id, volume_id, label, **kwargs) # noqa: E501
def datacenters_volumes_labels_post_with_http_info(self, datacenter_id, volume_id, label, **kwargs): # noqa: E501
"""Add a Label to Volume # noqa: E501
This will add a label to the volume. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.datacenters_volumes_labels_post_with_http_info(datacenter_id, volume_id, label, async_req=True)
>>> result = thread.get()
:param datacenter_id: The unique ID of the Datacenter (required)
:type datacenter_id: str
:param volume_id: The unique ID of the Volume (required)
:type volume_id: str
:param label: Label to be added (required)
:type label: LabelResource
:param pretty: Controls whether response is pretty-printed (with indentation and new lines)
:type pretty: bool
:param depth: Controls the details depth of response objects. Eg. GET /datacenters/[ID] - depth=0: only direct properties are included. Children (servers etc.) are not included - depth=1: direct properties and children references are included - depth=2: direct properties and children properties are included - depth=3: direct properties and children properties and children's children are included - depth=... and so on
:type depth: int
:param x_contract_number: Users having more than 1 contract need to provide contract number, against which all API requests should be executed
:type x_contract_number: int
:param async_req: Whether to execute the request asynchronously.
:type async_req: bool, optional
:param _return_http_data_only: response data without head status code
and headers
:type _return_http_data_only: bool, optional
:param _preload_content: if False, the urllib3.HTTPResponse object will
be returned without reading/decoding response
data. Default is True.
:type _preload_content: bool, optional
:param _request_timeout: timeout setting for this request. If one
number provided, it will be total request
timeout. It can also be a pair (tuple) of
(connection, read) timeouts.
:param _request_auth: set to override the auth_settings for an a single
request; this effectively ignores the authentication
in the spec for a single request.
:type _request_auth: dict, optional
:return: Returns the result object.
If the method is called asynchronously,
returns the request thread.
:rtype: tuple(LabelResource, status_code(int), headers(HTTPHeaderDict))
"""
local_var_params = locals()
all_params = [
'datacenter_id',
'volume_id',
'label',
'pretty',
'depth',
'x_contract_number'
]
all_params.extend(
[
'async_req',
'_return_http_data_only',
'_preload_content',
'_request_timeout',
'_request_auth',
'response_type'
]
)
for local_var_params_key, local_var_params_val in six.iteritems(local_var_params['kwargs']):
if local_var_params_key not in all_params:
raise ApiTypeError(
"Got an unexpected keyword argument '%s'"
" to method datacenters_volumes_labels_post" % local_var_params_key
)
local_var_params[local_var_params_key] = local_var_params_val
del local_var_params['kwargs']
# verify the required parameter 'datacenter_id' is set
if self.api_client.client_side_validation and ('datacenter_id' not in local_var_params or # noqa: E501
local_var_params['datacenter_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `datacenter_id` when calling `datacenters_volumes_labels_post`") # noqa: E501
# verify the required parameter 'volume_id' is set
if self.api_client.client_side_validation and ('volume_id' not in local_var_params or # noqa: E501
local_var_params['volume_id'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `volume_id` when calling `datacenters_volumes_labels_post`") # noqa: E501
# verify the required parameter 'label' is set
if self.api_client.client_side_validation and ('label' not in local_var_params or # noqa: E501
local_var_params['label'] is None): # noqa: E501
raise ApiValueError("Missing the required parameter `label` when calling `datacenters_volumes_labels_post`") # noqa: E501
if self.api_client.client_side_validation and 'depth' in local_var_params and local_var_params['depth'] > 10: # noqa: E501
raise ApiValueError("Invalid value for parameter `depth` when calling `datacenters_volumes_labels_post`, must be a value less than or equal to `10`") # noqa: E501
if self.api_client.client_side_validation and 'depth' in local_var_params and local_var_params['depth'] < 0: # noqa: E501
raise ApiValueError("Invalid value for parameter `depth` when calling `datacenters_volumes_labels_post`, must be a value greater than or equal to `0`") # noqa: E501
collection_formats = {}
path_params = {}
if 'datacenter_id' in local_var_params:
path_params['datacenterId'] = local_var_params['datacenter_id'] # noqa: E501
if 'volume_id' in local_var_params:
path_params['volumeId'] = local_var_params['volume_id'] # noqa: E501
query_params = []
if 'pretty' in local_var_params and local_var_params['pretty'] is not None:  # noqa: E501
if path is not None:
pulumi.set(__self__, "path", path)
if success_threshold is not None:
pulumi.set(__self__, "success_threshold", success_threshold)
if timeout is not None:
pulumi.set(__self__, "timeout", timeout)
@property
@pulumi.getter(name="appStartTimeout")
def app_start_timeout(self) -> Optional[pulumi.Input[str]]:
"""
A maximum time limit on application initialization, measured from moment the application successfully replies to a healthcheck until it is ready to serve traffic.
"""
return pulumi.get(self, "app_start_timeout")
@app_start_timeout.setter
def app_start_timeout(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "app_start_timeout", value)
@property
@pulumi.getter(name="checkInterval")
def check_interval(self) -> Optional[pulumi.Input[str]]:
"""
Interval between health checks.
"""
return pulumi.get(self, "check_interval")
@check_interval.setter
def check_interval(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "check_interval", value)
@property
@pulumi.getter(name="failureThreshold")
def failure_threshold(self) -> Optional[pulumi.Input[int]]:
"""
Number of consecutive failed checks required before removing traffic.
"""
return pulumi.get(self, "failure_threshold")
@failure_threshold.setter
def failure_threshold(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "failure_threshold", value)
@property
@pulumi.getter
def host(self) -> Optional[pulumi.Input[str]]:
"""
Host header to send when performing a HTTP Readiness check. Example: "myapp.appspot.com"
"""
return pulumi.get(self, "host")
@host.setter
def host(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "host", value)
@property
@pulumi.getter
def path(self) -> Optional[pulumi.Input[str]]:
"""
The request path.
"""
return pulumi.get(self, "path")
@path.setter
def path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "path", value)
@property
@pulumi.getter(name="successThreshold")
def success_threshold(self) -> Optional[pulumi.Input[int]]:
"""
Number of consecutive successful checks required before receiving traffic.
"""
return pulumi.get(self, "success_threshold")
@success_threshold.setter
def success_threshold(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "success_threshold", value)
@property
@pulumi.getter
def timeout(self) -> Optional[pulumi.Input[str]]:
"""
Time before the check is considered failed.
"""
return pulumi.get(self, "timeout")
@timeout.setter
def timeout(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "timeout", value)
@pulumi.input_type
class RequestUtilizationArgs:
def __init__(__self__, *,
target_concurrent_requests: Optional[pulumi.Input[int]] = None,
target_request_count_per_second: Optional[pulumi.Input[int]] = None):
"""
Target scaling by request utilization. Only applicable in the App Engine flexible environment.
:param pulumi.Input[int] target_concurrent_requests: Target number of concurrent requests.
:param pulumi.Input[int] target_request_count_per_second: Target requests per second.
"""
if target_concurrent_requests is not None:
pulumi.set(__self__, "target_concurrent_requests", target_concurrent_requests)
if target_request_count_per_second is not None:
pulumi.set(__self__, "target_request_count_per_second", target_request_count_per_second)
@property
@pulumi.getter(name="targetConcurrentRequests")
def target_concurrent_requests(self) -> Optional[pulumi.Input[int]]:
"""
Target number of concurrent requests.
"""
return pulumi.get(self, "target_concurrent_requests")
@target_concurrent_requests.setter
def target_concurrent_requests(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "target_concurrent_requests", value)
@property
@pulumi.getter(name="targetRequestCountPerSecond")
def target_request_count_per_second(self) -> Optional[pulumi.Input[int]]:
"""
Target requests per second.
"""
return pulumi.get(self, "target_request_count_per_second")
@target_request_count_per_second.setter
def target_request_count_per_second(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "target_request_count_per_second", value)
@pulumi.input_type
class ResourcesArgs:
def __init__(__self__, *,
cpu: Optional[pulumi.Input[float]] = None,
disk_gb: Optional[pulumi.Input[float]] = None,
kms_key_reference: Optional[pulumi.Input[str]] = None,
memory_gb: Optional[pulumi.Input[float]] = None,
volumes: Optional[pulumi.Input[Sequence[pulumi.Input['VolumeArgs']]]] = None):
"""
Machine resources for a version.
:param pulumi.Input[float] cpu: Number of CPU cores needed.
:param pulumi.Input[float] disk_gb: Disk size (GB) needed.
:param pulumi.Input[str] kms_key_reference: The name of the encryption key that is stored in Google Cloud KMS. Only should be used by Cloud Composer to encrypt the vm disk
:param pulumi.Input[float] memory_gb: Memory (GB) needed.
:param pulumi.Input[Sequence[pulumi.Input['VolumeArgs']]] volumes: User specified volumes.
"""
if cpu is not None:
pulumi.set(__self__, "cpu", cpu)
if disk_gb is not None:
pulumi.set(__self__, "disk_gb", disk_gb)
if kms_key_reference is not None:
pulumi.set(__self__, "kms_key_reference", kms_key_reference)
if memory_gb is not None:
pulumi.set(__self__, "memory_gb", memory_gb)
if volumes is not None:
pulumi.set(__self__, "volumes", volumes)
@property
@pulumi.getter
def cpu(self) -> Optional[pulumi.Input[float]]:
"""
Number of CPU cores needed.
"""
return pulumi.get(self, "cpu")
@cpu.setter
def cpu(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "cpu", value)
@property
@pulumi.getter(name="diskGb")
def disk_gb(self) -> Optional[pulumi.Input[float]]:
"""
Disk size (GB) needed.
"""
return pulumi.get(self, "disk_gb")
@disk_gb.setter
def disk_gb(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "disk_gb", value)
@property
@pulumi.getter(name="kmsKeyReference")
def kms_key_reference(self) -> Optional[pulumi.Input[str]]:
"""
The name of the encryption key that is stored in Google Cloud KMS. Only should be used by Cloud Composer to encrypt the vm disk
"""
return pulumi.get(self, "kms_key_reference")
@kms_key_reference.setter
def kms_key_reference(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "kms_key_reference", value)
@property
@pulumi.getter(name="memoryGb")
def memory_gb(self) -> Optional[pulumi.Input[float]]:
"""
Memory (GB) needed.
"""
return pulumi.get(self, "memory_gb")
@memory_gb.setter
def memory_gb(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "memory_gb", value)
@property
@pulumi.getter
def volumes(self) -> Optional[pulumi.Input[Sequence[pulumi.Input['VolumeArgs']]]]:
"""
User specified volumes.
"""
return pulumi.get(self, "volumes")
@volumes.setter
def volumes(self, value: Optional[pulumi.Input[Sequence[pulumi.Input['VolumeArgs']]]]):
pulumi.set(self, "volumes", value)
@pulumi.input_type
class ScriptHandlerArgs:
def __init__(__self__, *,
script_path: Optional[pulumi.Input[str]] = None):
"""
Executes a script to handle the request that matches the URL pattern.
:param pulumi.Input[str] script_path: Path to the script from the application root directory.
"""
if script_path is not None:
pulumi.set(__self__, "script_path", script_path)
@property
@pulumi.getter(name="scriptPath")
def script_path(self) -> Optional[pulumi.Input[str]]:
"""
Path to the script from the application root directory.
"""
return pulumi.get(self, "script_path")
@script_path.setter
def script_path(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "script_path", value)
@pulumi.input_type
class SslSettingsArgs:
def __init__(__self__, *,
certificate_id: Optional[pulumi.Input[str]] = None,
ssl_management_type: Optional[pulumi.Input['SslSettingsSslManagementType']] = None):
"""
SSL configuration for a DomainMapping resource.
:param pulumi.Input[str] certificate_id: ID of the AuthorizedCertificate resource configuring SSL for the application. Clearing this field will remove SSL support.By default, a managed certificate is automatically created for every domain mapping. To omit SSL support or to configure SSL manually, specify SslManagementType.MANUAL on a CREATE or UPDATE request. You must be authorized to administer the AuthorizedCertificate resource to manually map it to a DomainMapping resource. Example: 12345.
:param pulumi.Input['SslSettingsSslManagementType'] ssl_management_type: SSL management type for this domain. If AUTOMATIC, a managed certificate is automatically provisioned. If MANUAL, certificate_id must be manually specified in order to configure SSL for this domain.
"""
if certificate_id is not None:
pulumi.set(__self__, "certificate_id", certificate_id)
if ssl_management_type is not None:
pulumi.set(__self__, "ssl_management_type", ssl_management_type)
@property
@pulumi.getter(name="certificateId")
def certificate_id(self) -> Optional[pulumi.Input[str]]:
"""
ID of the AuthorizedCertificate resource configuring SSL for the application. Clearing this field will remove SSL support.By default, a managed certificate is automatically created for every domain mapping. To omit SSL support or to configure SSL manually, specify SslManagementType.MANUAL on a CREATE or UPDATE request. You must be authorized to administer the AuthorizedCertificate resource to manually map it to a DomainMapping resource. Example: 12345.
"""
return pulumi.get(self, "certificate_id")
@certificate_id.setter
def certificate_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "certificate_id", value)
@property
@pulumi.getter(name="sslManagementType")
def ssl_management_type(self) -> Optional[pulumi.Input['SslSettingsSslManagementType']]:
"""
SSL management type for this domain. If AUTOMATIC, a managed certificate is automatically provisioned. If MANUAL, certificate_id must be manually specified in order to configure SSL for this domain.
"""
return pulumi.get(self, "ssl_management_type")
@ssl_management_type.setter
def ssl_management_type(self, value: Optional[pulumi.Input['SslSettingsSslManagementType']]):
pulumi.set(self, "ssl_management_type", value)
@pulumi.input_type
class StandardSchedulerSettingsArgs:
def __init__(__self__, *,
max_instances: Optional[pulumi.Input[int]] = None,
min_instances: Optional[pulumi.Input[int]] = None,
target_cpu_utilization: Optional[pulumi.Input[float]] = None,
target_throughput_utilization: Optional[pulumi.Input[float]] = None):
"""
Scheduler settings for standard environment.
:param pulumi.Input[int] max_instances: Maximum number of instances to run for this version. Set to zero to disable max_instances configuration.
:param pulumi.Input[int] min_instances: Minimum number of instances to run for this version. Set to zero to disable min_instances configuration.
:param pulumi.Input[float] target_cpu_utilization: Target CPU utilization ratio to maintain when scaling.
:param pulumi.Input[float] target_throughput_utilization: Target throughput utilization ratio to maintain when scaling
"""
if max_instances is not None:
pulumi.set(__self__, "max_instances", max_instances)
if min_instances is not None:
pulumi.set(__self__, "min_instances", min_instances)
if target_cpu_utilization is not None:
pulumi.set(__self__, "target_cpu_utilization", target_cpu_utilization)
if target_throughput_utilization is not None:
pulumi.set(__self__, "target_throughput_utilization", target_throughput_utilization)
@property
@pulumi.getter(name="maxInstances")
def max_instances(self) -> Optional[pulumi.Input[int]]:
"""
Maximum number of instances to run for this version. Set to zero to disable max_instances configuration.
"""
return pulumi.get(self, "max_instances")
@max_instances.setter
def max_instances(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "max_instances", value)
@property
@pulumi.getter(name="minInstances")
def min_instances(self) -> Optional[pulumi.Input[int]]:
"""
Minimum number of instances to run for this version. Set to zero to disable min_instances configuration.
"""
return pulumi.get(self, "min_instances")
@min_instances.setter
def min_instances(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "min_instances", value)
@property
@pulumi.getter(name="targetCpuUtilization")
def target_cpu_utilization(self) -> Optional[pulumi.Input[float]]:
"""
Target CPU utilization ratio to maintain when scaling.
"""
return pulumi.get(self, "target_cpu_utilization")
@target_cpu_utilization.setter
def target_cpu_utilization(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "target_cpu_utilization", value)
@property
@pulumi.getter(name="targetThroughputUtilization")
def target_throughput_utilization(self) -> Optional[pulumi.Input[float]]:
"""
Target throughput utilization ratio to maintain when scaling
"""
return pulumi.get(self, "target_throughput_utilization")
@target_throughput_utilization.setter
def target_throughput_utilization(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "target_throughput_utilization", value)
@pulumi.input_type
class StaticFilesHandlerArgs:
def __init__(__self__, *,
application_readable: Optional[pulumi.Input[bool]] = None,
expiration: Optional[pulumi.Input[str]] = None,
http_headers: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
mime_type: Optional[pulumi.Input[str]] = None,
path: Optional[pulumi.Input[str]] = None,
require_matching_file: Optional[pulumi.Input[bool]] = None,
upload_path_regex: Optional[pulumi.Input[str]] = None):
"""
Files served directly to the user for a given URL, such as images, CSS stylesheets, or JavaScript source files. Static file handlers describe which files in the application directory are static files, and which URLs serve them.
:param pulumi.Input[bool] application_readable: Whether files should also be uploaded as code data. By default, files declared in static file handlers are uploaded as static data and are only served to end users; they cannot be read by the application. If enabled, uploads are charged against both your code and static data storage resource quotas.
reverse=True)
return sorted_executions
@app.route('/workflow/execution/{Id}', cors=True, methods=['GET'], authorizer=authorizer)
def get_workflow_execution_by_id(Id):
""" Get a workflow execution by id
Returns:
A dictionary containing the workflow execution.
Raises:
200: Workflow execution returned successfully.
404: Not found
500: ChaliceViewError - internal server error
"""
table = DYNAMO_RESOURCE.Table(WORKFLOW_EXECUTION_TABLE_NAME)
workflow_execution = None
response = table.get_item(
Key={
'Id': Id
},
ConsistentRead=True)
if "Item" in response:
workflow_execution = response["Item"]
else:
raise NotFoundError(
"Exception: workflow execution '%s' not found" % Id)
return workflow_execution
@app.route('/workflow/execution/{Id}', cors=True, methods=['DELETE'], authorizer=authorizer)
def delete_workflow_execution(Id):
""" Delete a workflow executions
Returns:
Raises:
200: Workflow execution deleted successfully.
404: Not found
500: ChaliceViewError - internal server error
"""
table = DYNAMO_RESOURCE.Table(WORKFLOW_EXECUTION_TABLE_NAME)
try:
workflow_execution = None
response = table.get_item(
Key={
'Id': Id
},
ConsistentRead=True)
if "Item" in response:
workflow_execution = response["Item"]
else:
raise NotFoundError(
"Exception: workflow execution '%s' not found" % Id)
response = table.delete_item(
Key={
'Id': Id
})
except Exception as e:
workflow_execution = None
logger.error("Exception {}".format(e))
raise ChaliceViewError("Exception: '%s'" % e)
return workflow_execution
def update_workflow_execution_status(id, status, message):
"""
Update the status of a workflow execution in DynamoDB
:param id: The id of the workflow execution
:param status: The new status of the workflow execution
:param message: Status message, stored when the status is Error
"""
print("Update workflow execution {} set status = {}".format(id, status))
execution_table = DYNAMO_RESOURCE.Table(WORKFLOW_EXECUTION_TABLE_NAME)
if status == awsmie.WORKFLOW_STATUS_ERROR:
response = execution_table.update_item(
Key={
'Id': id
},
UpdateExpression='SET #workflow_status = :workflow_status, Message = :message',
ExpressionAttributeNames={
'#workflow_status': "Status"
},
ExpressionAttributeValues={
':workflow_status': status,
':message': message
}
)
else:
response = execution_table.update_item(
Key={
'Id': id
},
UpdateExpression='SET #workflow_status = :workflow_status',
ExpressionAttributeNames={
'#workflow_status': "Status"
},
ExpressionAttributeValues={
':workflow_status': status
}
)
if status in [awsmie.WORKFLOW_STATUS_QUEUED, awsmie.WORKFLOW_STATUS_COMPLETE, awsmie.WORKFLOW_STATUS_ERROR]:
# Trigger the workflow_scheduler
response = LAMBDA_CLIENT.invoke(
FunctionName=WORKFLOW_SCHEDULER_LAMBDA_ARN,
InvocationType='Event'
)
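Because `Status` is a reserved word in DynamoDB, the update expressions above alias it through `ExpressionAttributeNames`. As a minimal sketch (the `build_status_update` helper is hypothetical, not part of this app), the same request parameters can be built as plain data:

```python
def build_status_update(execution_id, status, message=None):
    # "Status" is a DynamoDB reserved word, so it is aliased as
    # "#workflow_status" via ExpressionAttributeNames.
    params = {
        "Key": {"Id": execution_id},
        "UpdateExpression": "SET #workflow_status = :workflow_status",
        "ExpressionAttributeNames": {"#workflow_status": "Status"},
        "ExpressionAttributeValues": {":workflow_status": status},
    }
    if message is not None:
        params["UpdateExpression"] += ", Message = :message"
        params["ExpressionAttributeValues"][":message"] = message
    return params
```

These parameters can then be passed straight to `table.update_item(**params)`.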
# ================================================================================================
# ___ ______ ____ _ ____ _
# / \ \ / / ___| / ___| ___ _ ____ _(_) ___ ___ | _ \ _ __ _____ _(_) ___ ___
# / _ \ \ /\ / /\___ \ \___ \ / _ | '__\ \ / | |/ __/ _ \ | |_) | '__/ _ \ \/ | |/ _ / __|
# / ___ \ V V / ___) | ___) | __| | \ V /| | (_| __/ | __/| | | (_) > <| | __\__ \
# /_/ \_\_/\_/ |____/ |____/ \___|_| \_/ |_|\___\___| |_| |_| \___/_/\_|_|\___|___/
#
# ================================================================================================
@app.route('/service/transcribe/get_vocabulary', cors=True, methods=['POST'], content_types=['application/json'], authorizer=authorizer)
def get_vocabulary():
""" Get the description for an Amazon Transcribe custom vocabulary.
Returns:
This is a proxy for boto3 get_vocabulary and returns the output from that SDK method.
See `the boto3 documentation for details <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.get_vocabulary>`_
Raises:
See `the boto3 documentation for details <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.get_vocabulary>`_
"""
print('get_vocabulary request: '+app.current_request.raw_body.decode())
transcribe_client = boto3.client('transcribe', region_name=os.environ['AWS_REGION'])
vocabulary_name = json.loads(app.current_request.raw_body.decode())['vocabulary_name']
response = transcribe_client.get_vocabulary(VocabularyName=vocabulary_name)
# Convert time field to a format that is JSON serializable
response['LastModifiedTime'] = response['LastModifiedTime'].isoformat()
return response
@app.route('/service/transcribe/download_vocabulary', cors=True, methods=['POST'], content_types=['application/json'], authorizer=authorizer)
def download_vocabulary():
""" Get the contents of an Amazon Transcribe custom vocabulary.
Body:
.. code-block:: python
{
"vocabulary_name": string
}
Returns:
A list of vocabulary terms.
.. code-block:: python
{
"vocabulary": [{
"Phrase": string,
"IPA": string,
"SoundsLike": string,
"DisplayAs": string
},
...
}
Raises:
500: ChaliceViewError - internal server error
"""
print('download_vocabulary request: '+app.current_request.raw_body.decode())
transcribe_client = boto3.client('transcribe', region_name=os.environ['AWS_REGION'])
vocabulary_name = json.loads(app.current_request.raw_body.decode())['vocabulary_name']
url = transcribe_client.get_vocabulary(VocabularyName=vocabulary_name)['DownloadUri']
import urllib.request
vocabulary_file = urllib.request.urlopen(url).read().decode("utf-8")
vocabulary_json = []
vocabulary_fields = vocabulary_file.split('\n')[0].split('\t')
for line in vocabulary_file.split('\n')[1:]:
vocabulary_item_array = line.split('\t')
vocabulary_item_json = {}
# if vocab item is missing any fields, then skip it
if len(vocabulary_item_array) == len(vocabulary_fields):
i = 0
for field in vocabulary_fields:
vocabulary_item_json[field] = vocabulary_item_array[i]
i = i + 1
vocabulary_json.append(vocabulary_item_json)
return {"vocabulary": vocabulary_json}
@app.route('/service/transcribe/list_vocabularies', cors=True, methods=['GET'], authorizer=authorizer)
def list_vocabularies():
""" List all the available Amazon Transcribe custom vocabularies in this region.
Returns:
This is a proxy for boto3 list_vocabularies and returns the output from that SDK method.
See `the boto3 documentation for details <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.list_vocabularies>`_
Raises:
See the boto3 documentation for details
500: ChaliceViewError - internal server error
"""
# List all custom vocabularies
print('list_vocabularies request: '+app.current_request.raw_body.decode())
transcribe_client = boto3.client('transcribe', region_name=os.environ['AWS_REGION'])
response = transcribe_client.list_vocabularies(MaxResults=100)
vocabularies = response['Vocabularies']
while ('NextToken' in response):
response = transcribe_client.list_vocabularies(MaxResults=100, NextToken=response['NextToken'])
vocabularies = vocabularies + response['Vocabularies']
# Convert time field to a format that is JSON serializable
for item in vocabularies:
item['LastModifiedTime'] = item['LastModifiedTime'].isoformat()
return response
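Several of these proxies repeat the same `NextToken` loop. A generic sketch of that pagination pattern (`collect_pages` is a hypothetical helper, demonstrated with a fake fetcher rather than a real boto3 client):

```python
def collect_pages(fetch_page, items_key):
    # fetch_page(next_token) returns one response dict; keep requesting
    # pages while the service returns a NextToken.
    response = fetch_page(None)
    items = list(response[items_key])
    while 'NextToken' in response:
        response = fetch_page(response['NextToken'])
        items += response[items_key]
    return items
```

With boto3 this would wrap a call such as `transcribe_client.list_vocabularies`, threading the token through as `NextToken`.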
@app.route('/service/transcribe/delete_vocabulary', cors=True, methods=['POST'], content_types=['application/json'], authorizer=authorizer)
def delete_vocabulary():
""" Delete an Amazon Transcribe custom vocabulary.
Body:
.. code-block:: python
{
'vocabulary_name': 'string'
}
Returns:
This is a proxy for boto3 delete_vocabulary and returns the output from that SDK method.
See `the boto3 documentation for details <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.delete_vocabulary>`_
Raises:
See the boto3 documentation for details
500: ChaliceViewError - internal server error
"""
# Delete the specified vocabulary if it exists
print('delete_vocabulary request: '+app.current_request.raw_body.decode())
transcribe_client = boto3.client('transcribe', region_name=os.environ['AWS_REGION'])
vocabulary_name = json.loads(app.current_request.raw_body.decode())['vocabulary_name']
response = transcribe_client.delete_vocabulary(VocabularyName=vocabulary_name)
return response
@app.route('/service/transcribe/create_vocabulary', cors=True, methods=['POST'], content_types=['application/json'], authorizer=authorizer)
def create_vocabulary():
""" Create an Amazon Transcribe custom vocabulary.
Body:
.. code-block:: python
{
'vocabulary_name': 'string',
'language_code': 'af-ZA'|'ar-AE'|'ar-SA'|'cy-GB'|'da-DK'|'de-CH'|'de-DE'|'en-AB'|'en-AU'|'en-GB'|'en-IE'|'en-IN'|'en-US'|'en-WL'|'es-ES'|'es-US'|'fa-IR'|'fr-CA'|'fr-FR'|'ga-IE'|'gd-GB'|'he-IL'|'hi-IN'|'id-ID'|'it-IT'|'ja-JP'|'ko-KR'|'ms-MY'|'nl-NL'|'pt-BR'|'pt-PT'|'ru-RU'|'ta-IN'|'te-IN'|'tr-TR'|'zh-CN',
's3uri': 'string'
}
Returns:
This is a proxy for boto3 create_vocabulary and returns the output from that SDK method.
See `the boto3 documentation for details <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.create_vocabulary>`_
Raises:
See the boto3 documentation for details
500: ChaliceViewError - internal server error
"""
# Save the input vocab to a new vocabulary
print('create_vocabulary request: '+app.current_request.raw_body.decode())
transcribe_client = boto3.client('transcribe', region_name=os.environ['AWS_REGION'])
vocabulary_name = json.loads(app.current_request.raw_body.decode())['vocabulary_name']
language_code = json.loads(app.current_request.raw_body.decode())['language_code']
response = transcribe_client.create_vocabulary(
VocabularyName=vocabulary_name,
LanguageCode=language_code,
VocabularyFileUri=json.loads(app.current_request.raw_body.decode())['s3uri']
)
return response
@app.route('/service/transcribe/list_language_models', cors=True, methods=['GET'], authorizer=authorizer)
def list_language_models():
""" Provides more information about the custom language models you've created. You can use the information in this list to find a specific custom language model. You can then use the describe_language_model operation to get more information about it.
Returns:
This is a proxy for boto3 list_language_models and returns the output from that SDK method.
See `the boto3 documentation for details <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.list_language_models>`_
Raises:
See `the boto3 documentation for details <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.list_language_models>`_
"""
print('list_language_models request: '+app.current_request.raw_body.decode())
transcribe_client = boto3.client('transcribe', region_name=os.environ['AWS_REGION'])
response = transcribe_client.list_language_models()
models = response['Models']
while ('NextToken' in response):
response = transcribe_client.list_language_models(MaxResults=100, NextToken=response['NextToken'])
models = models + response['Models']
# Convert time field to a format that is JSON serializable
for item in models:
item['CreateTime'] = item['CreateTime'].isoformat()
item['LastModifiedTime'] = item['LastModifiedTime'].isoformat()
return response
@app.route('/service/transcribe/describe_language_model', cors=True, methods=['POST'], content_types=['application/json'], authorizer=authorizer)
def describe_language_model():
""" Gets information about a single custom language model.
Body:
.. code-block:: python
{
'ModelName': 'string'
}
Returns:
This is a proxy for boto3 describe_language_model and returns the output from that SDK method.
See `the boto3 documentation for details <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.describe_language_model>`_
Raises:
See `the boto3 documentation for details <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/transcribe.html#TranscribeService.Client.describe_language_model>`_
"""
print('describe_language_model request: '+app.current_request.raw_body.decode())
transcribe_client = boto3.client('transcribe', region_name=os.environ['AWS_REGION'])
request_payload = dict(json.loads(app.current_request.raw_body.decode()))
response = transcribe_client.describe_language_model(**request_payload)
# Convert time field to a format that is JSON serializable
response['LanguageModel']['CreateTime'] = response['LanguageModel']['CreateTime'].isoformat()
response['LanguageModel']['LastModifiedTime'] = response['LanguageModel']['LastModifiedTime'].isoformat()
return response
@app.route('/service/translate/get_terminology', cors=True, methods=['POST'], content_types=['application/json'], authorizer=authorizer)
def get_terminology():
""" Get a link to the CSV formatted description for an Amazon Translate parallel data.
Body:
.. code-block:: python
{
'terminology_name': 'string'
}
Returns:
This is a proxy for boto3 get_terminology and returns the output from that SDK method.
See `the boto3 documentation for details <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/translate.html#Translate.Client.get_terminology>`_
Raises:
See the boto3 documentation for details
500: ChaliceViewError - internal server error
"""
print('get_terminology request: '+app.current_request.raw_body.decode())
translate_client = boto3.client('translate', region_name=os.environ['AWS_REGION'])
terminology_name = json.loads(app.current_request.raw_body.decode())['terminology_name']
response = translate_client.get_terminology(Name=terminology_name, TerminologyDataFormat='CSV')
# Remove response metadata since we don't need it
del response['ResponseMetadata']
# Convert time field to a format that is JSON serializable
response['TerminologyProperties']['CreatedAt'] = response['TerminologyProperties']['CreatedAt'].isoformat()
response['TerminologyProperties']['LastUpdatedAt'] = response['TerminologyProperties']['LastUpdatedAt'].isoformat()
return response
@app.route('/service/translate/download_terminology', cors=True, methods=['POST'], content_types=['application/json'], authorizer=authorizer)
def download_terminology():
""" Get the CSV formated contents of an Amazon Translate terminology.
Body:
.. code-block:: python
{
'terminology_name': 'string'
}
Returns:
A string containing the CSV formatted Amazon Translate terminology
.. code-block:: python
{
'terminology': string
}
Raises:
See the boto3 documentation for details
500: ChaliceViewError - internal server error
"""
# This function returns the specified terminology in CSV format, wrapped in a JSON formatted response.
print('download_terminology request: '+app.current_request.raw_body.decode())
translate_client = boto3.client('translate', region_name=os.environ['AWS_REGION'])
terminology_name = json.loads(app.current_request.raw_body.decode())['terminology_name']
url = translate_client.get_terminology(Name=terminology_name, TerminologyDataFormat='CSV')['TerminologyDataLocation']['Location']
import urllib.request
terminology_csv = urllib.request.urlopen(url).read().decode("utf-8")
return {"terminology": terminology_csv}
@app.route('/service/translate/list_terminologies', cors=True, methods=['GET'], authorizer=authorizer)
def list_terminologies():
""" Get the list of available Amazon Translate Terminologies for this region
Returns:
This is a proxy for boto3 list_terminologies and returns the output from that SDK method.
import numpy as np
import cv2
import os
import torch
import time
from torchvision import models, transforms
from torch.utils.data import DataLoader
from torch.optim import SGD
from torch.autograd import Variable
idx2catename = {'voc20': ['aeroplane','bicycle','bird','boat','bottle','bus','car','cat','chair','cow','diningtable','dog','horse',
'motorbike','person','pottedplant','sheep','sofa','train','tvmonitor'],
'coco80': ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck',
'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench',
'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe',
'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard',
'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet',
'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven',
'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
'hair drier', 'toothbrush']}
class SAVE_ATTEN(object):
def __init__(self, save_dir='save_bins', dataset=None):
# type: (object, object) -> object
self.save_dir = save_dir
if dataset is not None:
self.idx2cate = self._get_idx2cate_dict(datasetname=dataset)
else:
self.idx2cate = None
if not os.path.exists(self.save_dir):
os.makedirs(self.save_dir)
def save_top_5_pred_labels(self, preds, org_paths, global_step):
img_num = np.shape(preds)[0]
for idx in range(img_num):
img_name = org_paths[idx].strip().split('/')[-1]
if '.JPEG' in img_name:
img_id = img_name[:-5]
elif '.png' in img_name or '.jpg' in img_name:
img_id = img_name[:-4]
out = img_id + ' ' + ' '.join(map(str, preds[idx,:])) + '\n'
out_file = os.path.join(self.save_dir, 'pred_labels.txt')
if global_step == 0 and idx==0 and os.path.exists(out_file):
os.remove(out_file)
with open(out_file, 'a') as f:
f.write(out)
def save_masked_img_batch(self, path_batch, atten_batch, label_batch):
#img_num = np.shape(atten_batch)[0]
img_num = atten_batch.size()[0]
# fid = open('imagenet_val_shape.txt', 'a')
# print(np.shape(img_batch), np.shape(label_batch), np.shape(org_size_batch), np.shape(atten_batch))
for idx in range(img_num):
atten = atten_batch[idx]
atten = atten.cpu().data.numpy()
label = label_batch[idx]
label = int(label)
self._save_masked_img(path_batch[idx], atten,label)
def _get_idx2cate_dict(self, datasetname=None):
if datasetname not in idx2catename.keys():
print('The given %s dataset category names are not available. The supported ones are: %s'
% (str(datasetname), ','.join(idx2catename.keys())))
return None
else:
return {idx:cate_name for idx, cate_name in enumerate(idx2catename[datasetname])}
def _save_masked_img(self, img_path, atten, label):
'''
save masked image overlaid with the attention map of its single ground truth label
:param img_path: path to the original image
:param atten: attention maps, one per class
:param label: ground truth label index
:return:
'''
if not os.path.isfile(img_path):
raise IOError('Image does not exist: %s' % img_path)
img = cv2.imread(img_path)
org_size = np.shape(img)
w = org_size[0]
h = org_size[1]
attention_map = atten[label,:,:]
atten_norm = attention_map
print(np.shape(attention_map), 'Max:', np.max(attention_map), 'Min:',np.min(attention_map))
# min_val = np.min(attention_map)
# max_val = np.max(attention_map)
# atten_norm = (attention_map - min_val)/(max_val - min_val)
atten_norm = cv2.resize(atten_norm, dsize=(h,w))
atten_norm = atten_norm* 255
heat_map = cv2.applyColorMap(atten_norm.astype(np.uint8), cv2.COLORMAP_JET)
img = cv2.addWeighted(img.astype(np.uint8), 0.5, heat_map.astype(np.uint8), 0.5, 0)
img_id = img_path.strip().split('/')[-1]
img_id = img_id.strip().split('.')[0]
save_dir = os.path.join(self.save_dir, img_id+'.png')
cv2.imwrite(save_dir, img)
def get_img_id(self, path):
img_id = path.strip().split('/')[-1]
return img_id.strip().split('.')[0]
def save_top_5_atten_maps(self, atten_fuse_batch, top_indices_batch, org_paths, topk=5):
'''
Save top-5 localization maps for generating bboxes
:param atten_fuse_batch: normalized last layer feature maps of size (batch_size, C, W, H), type: numpy array
:param top_indices_batch: ranked predicted labels of size (batch_size, C), type: numpy array
:param org_paths:
:param args:
:return:
'''
img_num = np.shape(atten_fuse_batch)[0]
for idx in range(img_num):
img_id = org_paths[idx].strip().split('/')[-1][:-4]
for k in range(topk):
atten_pos = top_indices_batch[idx, k]
atten_map = atten_fuse_batch[idx, atten_pos,:,:]
heat_map = cv2.resize(atten_map, dsize=(224, 224))
# heat_map = cv2.resize(atten_map, dsize=(img_shape[1], img_shape[0]))
heat_map = heat_map* 255
save_path = os.path.join(self.save_dir, 'heat_maps', 'top%d'%(k+1))
if not os.path.exists(save_path):
os.makedirs(save_path)
save_path = os.path.join(save_path,img_id+'.png')
cv2.imwrite(save_path, heat_map)
# def save_heatmap_segmentation(self, img_path, atten, gt_label, save_dir=None, size=(224,224), maskedimg=False):
# assert np.ndim(atten) == 4
#
# labels_idx = np.where(gt_label[0]==1)[0] if np.ndim(gt_label)==2 else np.where(gt_label==1)[0]
#
# if save_dir is None:
# save_dir = self.save_dir
# if not os.path.exists(save_dir):
# os.mkdir(save_dir)
#
# if isinstance(img_path, list) or isinstance(img_path, tuple):
# batch_size = len(img_path)
# for i in range(batch_size):
# img, size = self.read_img(img_path[i], size=size)
# atten_img = atten[i] #get attention maps for the i-th img of the batch
# img_name = self.get_img_id(img_path[i])
# img_dir = os.path.join(save_dir, img_name)
# if not os.path.exists(img_dir):
# os.mkdir(img_dir)
# for k in labels_idx:
# atten_map_k = atten_img[k,:,:]
# atten_map_k = cv2.resize(atten_map_k, dsize=size)
# if maskedimg:
# img_to_save = self._add_msk2img(img, atten_map_k)
# else:
# img_to_save = self.normalize_map(atten_map_k)*255.0
#
# save_path = os.path.join(img_dir, '%d.png'%(k))
# cv2.imwrite(save_path, img_to_save)
def normalize_map(self, atten_map):
min_val = np.min(atten_map)
max_val = np.max(atten_map)
atten_norm = (atten_map - min_val)/(max_val - min_val)
return atten_norm
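normalize_map is a plain min-max rescale; a quick standalone check with NumPy only (the sample values are made up):

```python
import numpy as np

# hypothetical attention map with known extrema
atten_map = np.array([[0.2, 0.6],
                      [0.8, 1.0]])

# same min-max rescale that normalize_map applies
min_val = np.min(atten_map)
max_val = np.max(atten_map)
atten_norm = (atten_map - min_val) / (max_val - min_val)
# the result spans exactly [0, 1]; multiplying by 255 yields the
# uint8 range expected by cv2.applyColorMap
```

Note the rescale divides by `max_val - min_val`, so a perfectly flat map would divide by zero; callers are expected to pass maps with some variation.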
def _add_msk2img(self, img, msk, isnorm=True):
if np.ndim(img) == 3:
assert np.shape(img)[0:2] == np.shape(msk)
else:
assert np.shape(img) == np.shape(msk)
if isnorm:
min_val = np.min(msk)
max_val = np.max(msk)
atten_norm = (msk - min_val)/(max_val - min_val)
atten_norm = atten_norm* 255
heat_map = cv2.applyColorMap(atten_norm.astype(np.uint8), cv2.COLORMAP_JET)
w_img = cv2.addWeighted(img.astype(np.uint8), 0.5, heat_map.astype(np.uint8), 0.5, 0)
return w_img
def _draw_text(self, pic, txt, pos='topleft'):
font = cv2.FONT_HERSHEY_SIMPLEX #multiple line
txt = txt.strip().split('\n')
stat_y = 30
for t in txt:
pic = cv2.putText(pic,t,(10,stat_y), font, 0.8,(255,255,255),2,cv2.LINE_AA)
stat_y += 30
return pic
def _mark_score_on_picture(self, pic, score_vec, label_idx):
score = score_vec[label_idx]
txt = '%.3f'%(score)
pic = self._draw_text(pic, txt, pos='topleft')
return pic
def get_heatmap_idxes(self, gt_label):
labels_idx = []
if np.ndim(gt_label) == 1:
labels_idx = np.expand_dims(gt_label, axis=1).astype(int)
elif np.ndim(gt_label) == 2:
for row in gt_label:
idxes = np.where(row[0]==1)[0] if np.ndim(row)==2 else np.where(row==1)[0]
labels_idx.append(idxes.tolist())
else:
labels_idx = None
return labels_idx
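The branching in get_heatmap_idxes can be exercised with a free-function copy of the same logic (the sample labels are made up): a 1-D input is treated as a vector of class ids, a 2-D input as a batch of multi-hot rows.

```python
import numpy as np

def get_heatmap_idxes(gt_label):
    # mirrors the method above
    if np.ndim(gt_label) == 1:
        # vector of class ids -> column vector
        return np.expand_dims(gt_label, axis=1).astype(int)
    elif np.ndim(gt_label) == 2:
        # multi-hot rows -> one list of active ids per row
        return [np.where(row == 1)[0].tolist() for row in gt_label]
    return None

single = get_heatmap_idxes(np.array([3, 7]))
multi = get_heatmap_idxes(np.array([[0, 1, 1, 0],
                                    [1, 0, 0, 0]]))
```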
def get_map_k(self, atten, k, size=(224,224)):
atten_map_k = atten[k,:,:]
# print np.max(atten_map_k), np.min(atten_map_k)
atten_map_k = cv2.resize(atten_map_k, dsize=size)
return atten_map_k
def read_img(self, img_path, size=(224,224)):
img = cv2.imread(img_path)
if img is None:
print('Image does not exist: %s' % img_path)
exit(0)
if size == (0,0):
size = np.shape(img)[:2]
else:
img = cv2.resize(img, size)
return img, size[::-1]
def get_masked_img(self, img_path, atten, gt_label,
size=(224,224), maps_in_dir=False, save_dir=None, only_map=False):
assert np.ndim(atten) == 4
save_dir = save_dir if save_dir is not None else self.save_dir
if isinstance(img_path, list) or isinstance(img_path, tuple):
batch_size = len(img_path)
label_indexes = self.get_heatmap_idxes(gt_label)
for i in range(batch_size):
img, size = self.read_img(img_path[i], size)
img_name = img_path[i].split('/')[-1]
img_name = img_name.strip().split('.')[0]
if maps_in_dir:
img_save_dir = os.path.join(save_dir, img_name)
os.mkdir(img_save_dir)
for k in label_indexes[i]:
atten_map_k = self.get_map_k(atten[i], k , size)
msked_img = self._add_msk2img(img, atten_map_k)
suffix = str(k+1)
if only_map:
save_img = (self.normalize_map(atten_map_k)*255).astype(np.uint8)
else:
save_img = msked_img
if maps_in_dir:
cv2.imwrite(os.path.join(img_save_dir, suffix + '.png'), save_img)
else:
cv2.imwrite(os.path.join(save_dir, img_name + '_' + suffix + '.png'), save_img)
# if score_vec is not None and labels_idx is not None:
# msked_img = self._mark_score_on_picture(msked_img, score_vec, labels_idx[k])
# if labels_idx is not None:
# suffix = self.idx2cate.get(labels_idx[k], k)
# def get_masked_img_ml(self, img_path, atten, save_dir=None, size=(224,224),
# gt_label=None, score_vec=None):
# assert np.ndim(atten) == 4
#
# if gt_label is not None and self.idx2cate is not None:
# labels_idx = np.where(gt_label[0]==1)[0] if np.ndim(gt_label)==2 else np.where(gt_label==1)[0]
# else:
# labels_idx = None
#
#
# if save_dir is not None:
# self.save_dir = save_dir
# if isinstance(img_path, list) or isinstance(img_path, tuple):
# batch_size = len(img_path)
# for i in range(batch_size):
# img = cv2.imread(img_path[i])
# if img is None:
# print "Image does not exist. %s" %(img_path[i])
# exit(0)
#
# else:
# atten_img = atten[i] #get attention maps for the i-th img
# img_name = img_path[i].split('/')[-1]
# for k in range(np.shape(atten_img)[0]):
# if size == (0,0):
# w, h, _ = np.shape(img)
# # h, w, _ = np.shape(img)
# else:
# h, w = size
# img = cv2.resize(img, dsize=(h, w))
# atten_map_k = atten_img[k,:,:]
# # print np.max(atten_map_k), np.min(atten_map_k)
# atten_map_k = cv2.resize(atten_map_k, dsize=(h,w))
# msked_img = self._add_msk2img(img, atten_map_k)
# if score_vec is not None and labels_idx is not None:
# msked_img = self._mark_score_on_picture(msked_img, score_vec, labels_idx[k])
# if labels_idx is not None:
# suffix = self.idx2cate.get(labels_idx[k], k)
# else:
# suffix = str(k)
# if '.' in img_name:
# img_name = img_name.strip().split('.')[0]
# cv2.imwrite(os.path.join(self.save_dir, img_name + '_' + suffix + '.png'), msked_img)
#
#
# def get_masked_img(self, img_path, atten, save_dir=None, size=(224,224), combine=True):
# '''
#
# :param img_path:
# :param atten:
# :param size: if it is (0,0) use original image size, otherwise use the specified size.
# :param combine:
# :return:
# '''
#
# if save_dir is not None:
# self.save_dir = save_dir
# if isinstance(img_path, list) or isinstance(img_path, tuple):
# batch_size = len(img_path)
#
# for i in range(batch_size):
# atten_norm = atten[i]
# min_val = np.min(atten_norm)
# max_val = np.max(atten_norm)
# atten_norm = (atten_norm - min_val)/(max_val - min_val)
# # print np.max(atten_norm), np.min(atten_norm)
# img = cv2.imread(img_path[i])
# if
i,j,k : used for looping
return:
flag : success or error
outSmArray : array containing smoothed data
outRevArray : revised input data according to type of moving average and window
msg : success or error message reason
*****************************************************************************************************************************"""
def movingAvg(self, inDataArray, inWindow, inMavgType):
flag = Filter.success
msg = ''
# providing try block to handle exceptions
try:
if inMavgType is None:
inMavgType = 'backward' # checking if moving average type is null and setting default value
# initializing array
values = []
outSmArray = []
revArray = []
outRevArray = []
# checking that window is an integer
if type(inWindow) != int:
msg = Filter.eMsg6 #message 'Provide a Integer value '
flag = Filter.error
return flag, outRevArray, outSmArray, msg
weights = np.repeat(1.0, inWindow) / inWindow # array of window size with value 1.0/window
inputArrayLen = len(inDataArray) # calculating number of input
# checking valid length of array
if (len(inDataArray[0]) < Filter.cArrayLenLimit):
msg = Filter.eMsg4 #message 'Number of input values less than 3'
flag = Filter.error
return flag, outRevArray, outSmArray, msg
# checking that the window is greater than 1 and does not exceed the input length
if (inWindow == 1 or inWindow > len(inDataArray[0])):
flag = Filter.error
if (inWindow == 1):
msg = 'window should not be 1'
else:
msg = Filter.eMsg3 # 'Window is bigger than the input length.'
return flag, outRevArray, outSmArray, msg
# if window is in range
else:
for i in range(inputArrayLen): # loop for 1 or more data input
values = np.convolve(inDataArray[i], weights, 'valid') # calculating moving average
outSmArray.append(values) # appending smoothed data
if inMavgType == 'forward':
for j in range(inputArrayLen):
outRevArray.append(np.flip(np.delete(np.flip(inDataArray[j]), np.s_[0: int(
inWindow - 1):]))) # deleting extra data from backside of input array
elif inMavgType == 'backward':
for j in range(inputArrayLen):
outRevArray.append(np.delete(inDataArray[j],
np.s_[0: inWindow - 1:])) # deleting extra data from front of input array
elif inMavgType == 'fixed':
if (inWindow % 2 != 0):
for j in range(inputArrayLen):
revArray.append(np.flip(np.delete(np.flip(inDataArray[j]), np.s_[0: int(
(inWindow - 1) / 2):]))) # deleting extra data from backside of input array
for k in range(inputArrayLen):
outRevArray.append(np.delete(revArray[k], np.s_[0: int(
(inWindow - 1) / 2):])) # deleting extra data from front of input array
else:
flag = Filter.error
msg = Filter.eMsg2 # message 'For fixed moving average provide odd numbers of window '
return flag, outRevArray, outSmArray, msg
else:
flag = Filter.error
msg = Filter.eMsg5 # message 'Provide a proper moving average type'
# handling exceptions in except block
except:
flag = Filter.error
msg = Filter.eMsg1 # unexpected error
return flag, outRevArray, outSmArray, msg # returning flag (success or error), revised input array, smoothed array, message
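The core of movingAvg for the default 'backward' type is np.convolve in 'valid' mode plus trimming the first window-1 input samples; a minimal sketch with made-up data:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
window = 3

# uniform weights, as in movingAvg
weights = np.repeat(1.0, window) / window

# 'valid' keeps only fully-overlapping windows,
# giving len(data) - window + 1 smoothed values
smoothed = np.convolve(data, weights, 'valid')

# 'backward' alignment: drop the first window-1 inputs so the revised
# series lines up index-for-index with the smoothed series
revised = np.delete(data, np.s_[0:window - 1])
```

The 'forward' and 'fixed' branches differ only in which end(s) of the input the trimming is applied to.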
"""
*****************************************************************************************************************************
method countConsec : This method finds the first run of consecutive values in a given array, starting from a given index
parameters:
indexVal : starting index for the search of consecutive dataset
inOutlierArray : Array containing all outlier data indices of the original data set
variables:
count : used for intermediate counting
return:
outIndexBegin : beginning of consecutive data
outIndexEnd : end of consecutive data
i : outlierMatrix array index where the current data set search stopped
*****************************************************************************************************************************"""
def countConsec(self, indexVal, inOutlierArray):
#initializing
count = 0
outIndexEnd = 0
outIndexBegin = inOutlierArray[indexVal]
#looping through all data in outlierMatrix to find consecutive data set
for i in range(indexVal, len(inOutlierArray) - 1):
# checking whether the next value continues the current consecutive sequence
if inOutlierArray[i + 1] == inOutlierArray[i] + 1:
count += 1 # counting the length of the current consecutive sequence
if count == 1:
outIndexBegin = inOutlierArray[i] # assigning the beginning index of the consecutive sequence
outIndexEnd = inOutlierArray[i + 1] # assigning the last index of the consecutive sequence
else:
if (count != 0):
break # breaking out of the loop once a consecutive sequence has been found
return outIndexBegin, outIndexEnd, i # returning beginning and end of the consecutive sequence, and the index where the search stopped
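A free-function rendering of countConsec's loop shows what it returns for a sample outlier-index array (the data is made up):

```python
def count_consec(index_val, outlier_array):
    # find the first run of consecutive indices at or after index_val
    count = 0
    out_begin, out_end = outlier_array[index_val], 0
    i = index_val
    for i in range(index_val, len(outlier_array) - 1):
        if outlier_array[i + 1] == outlier_array[i] + 1:
            count += 1
            if count == 1:
                out_begin = outlier_array[i]   # run starts here
            out_end = outlier_array[i + 1]     # run currently ends here
        elif count != 0:
            break                              # first run finished
    return out_begin, out_end, i

# indices 4, 5, 6 form the first consecutive run
result = count_consec(0, [1, 4, 5, 6, 9])
```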
"""
*****************************************************************************************************************************
method count : This method counts the number of consecutive data sets
parameters:
outlierMatrix : Array containing all outlier data indices of the original data set
variables:
count : used for intermediate counting
index : used to loop through index of input outlierMatrix
return:
count1 : number of consecutive data sets
*****************************************************************************************************************************"""
def count(self, inOutlierArray):
# initializing
count = 0
count1 = 0
# looping through to count how many consecutive sequences are in inOutlierArray
for i in range(len(inOutlierArray) - 1):
if inOutlierArray[i + 1] == inOutlierArray[i] + 1:
count += 1
else:
if count != 0:
count1 = count1 + 1
count = 0
if count != 0:
count1 += 1
return count1
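count tallies how many such runs exist in total; the same loop as a free function, on made-up data:

```python
def count_runs(outlier_array):
    # count maximal runs of consecutive integers
    count, runs = 0, 0
    for i in range(len(outlier_array) - 1):
        if outlier_array[i + 1] == outlier_array[i] + 1:
            count += 1
        else:
            if count != 0:
                runs += 1    # a run just ended
            count = 0
    if count != 0:
        runs += 1            # a run reaching the end of the array
    return runs

# two runs: (4, 5, 6) and (9, 10)
n_runs = count_runs([1, 4, 5, 6, 9, 10])
```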
"""
*****************************************************************************************************************************
method::
interpolation : method to construct new data points within the range of a discrete set of known data points
parameters::
inDataArray : input array provided to find interpolated data set
inOutlierArray : Array containing all outlier data indices of the original data set
inIntpTyp : Type of Interpolation
0 = Linear
1 = Quadratic
inMaxLim : Max limit provided by user or calculated using standard deviation
inMinLim : Min limit provided by user or calculated using standard deviation
variables::
intpArrayIndex1 : intermediate array to calculate linear interpolation
indexVal : index value for consecutive values
indexBegin : index Begin for consecutive values
indexEnd : index End for consecutive values
counter : counter for number of different consecutives outliers to replace
return::
flag : success or error
outSmArray : array containing smoothed data
outRevArray : revised input data according to type of moving average and window
msg : success or error message reason
count1 : number of consecutive data set
*****************************************************************************************************************************"""
def interpolation(self, inDataArray, inOutlierArray, inIntpTyp, inMaxLim, inMinLim):
#initializing with default values
flag = Filter.success
msg = ''
outIntpArray = []
#convert 0 to False and 1 to True
if inIntpTyp == 0 or inIntpTyp == 'Linear':
inIntpTyp = False
elif inIntpTyp == 1 or inIntpTyp == 'Quadratic':
inIntpTyp = True
# providing try block to handle exceptions
try:
# checking valid length of array
if len(inDataArray) < Filter.cArrayLenLimit:
msg = Filter.eMsg4 # 'Number of input values less than 3'
flag = Filter.error # activates flag
return flag, outIntpArray, msg
# checking if max value provided is less than min value
if (inMaxLim < inMinLim):
flag = Filter.error
msg = 'Max value is lower than Min value'
# checking if max value provided is equal to min value
elif (inMaxLim == inMinLim):
flag = Filter.error
msg = 'Max value equal to Min value'
# checking that inIntpTyp is a boolean value
elif type(inIntpTyp) != bool:
msg = Filter.eMsg9 # 'Provide a Boolean value '
flag = Filter.error
return flag, outIntpArray, msg
else:
outIntpArray = inDataArray.copy() # copying original data
# Linear interpolation
if inIntpTyp == False:
intpArrayIndex1 = np.zeros([len(inOutlierArray), 3]) # creating intermediate array to calculate linear interpolation
for i in range(len(inOutlierArray)): # looping through range of number of outlier data
# handling the case for the first value as it is on the boundary
if inOutlierArray[i] == 0:
# checking whether the value is closer to the max or the min limit
if (abs(inDataArray[inOutlierArray[i]] - inMaxLim) >
abs(inDataArray[inOutlierArray[i]] - inMinLim)):
intpArrayIndex1[i][0] = inMinLim # taking min limit to interpolate
else:
intpArrayIndex1[i][0] = inMaxLim # taking max limit to interpolate
else:
intpArrayIndex1[i][0] = inDataArray[inOutlierArray[i] - 1] # taking previous value to interpolate
intpArrayIndex1[i][1] = inDataArray[inOutlierArray[i]] # taking current value to interpolate
# handling the case for the last value as it is on the boundary
if (inOutlierArray[i] + 1) >= len(inDataArray):
# checking whether the value is closer to the max or the min limit
if abs(inDataArray[inOutlierArray[i]] - inMaxLim) > \
abs(inDataArray[inOutlierArray[i]] - inMinLim):
intpArrayIndex1[i][2] = inMinLim # taking min limit to interpolate
else:
intpArrayIndex1[i][2] = inMaxLim # taking max limit to interpolate
else:
intpArrayIndex1[i][2] = inDataArray[inOutlierArray[i] + 1] # taking next value to interpolate
#!/usr/bin/env python
from __future__ import print_function
from builtins import input
from builtins import str
import sys
import pmagpy.pmag as pmag
def main():
"""
NAME
specimens_results_magic.py
DESCRIPTION
combines pmag_specimens.txt file with age, location, acceptance criteria and
outputs pmag_results table along with other MagIC tables necessary for uploading to the database
SYNTAX
specimens_results_magic.py [command line options]
OPTIONS
-h prints help message and quits
-usr USER: identify user, default is ""
-f: specimen input magic_measurements format file, default is "magic_measurements.txt"
-fsp: specimen input pmag_specimens format file, default is "pmag_specimens.txt"
-fsm: sample input er_samples format file, default is "er_samples.txt"
-fsi: specimen input er_sites format file, default is "er_sites.txt"
-fla: specify a file with paleolatitudes for calculating VADMs, default is not to calculate VADMS
format is: site_name paleolatitude (space delimited file)
-fa AGES: specify er_ages format file with age information
-crd [s,g,t,b]: specify coordinate system
(s, specimen, g geographic, t, tilt corrected, b, geographic and tilt corrected)
Default is to assume geographic
NB: only the tilt corrected data will appear on the results table, if both g and t are selected.
-cor [AC:CR:NL]: colon delimited list of required data adjustments for all specimens
included in intensity calculations (anisotropy, cooling rate, non-linear TRM)
unless specified, corrections will not be applied
-pri [TRM:ARM] colon delimited list of priorities for anisotropy correction (-cor must also be set to include AC). default is TRM, then ARM
-age MIN MAX UNITS: specify age boundaries and units
-exc: use existing selection criteria (in pmag_criteria.txt file), default is default criteria
-C: no acceptance criteria
-aD: average directions per sample, default is NOT
-aI: average multiple specimen intensities per sample, default is by site
-aC: average all components together, default is NOT
-pol: calculate polarity averages
-sam: save sample level vgps and v[a]dms, default is by site
-xSi: skip the site level intensity calculation
-p: plot directions and look at intensities by site, default is NOT
-fmt: specify output for saved images, default is svg (only if -p set)
-lat: use present latitude for calculating VADMs, default is not to calculate VADMs
-xD: skip directions
-xI: skip intensities
OUTPUT
writes pmag_samples, pmag_sites, pmag_results tables
"""
# set defaults
Comps=[] # list of components
version_num=pmag.get_version()
args=sys.argv
DefaultAge=["none"]
skipdirs,coord,excrit,custom,vgps,average,Iaverage,plotsites,opt=1,0,0,0,0,0,0,0,0
get_model_lat=0 # this skips VADM calculation altogether, when get_model_lat=1, uses present day
fmt='svg'
dir_path="."
model_lat_file=""
Caverage=0
infile='pmag_specimens.txt'
measfile="magic_measurements.txt"
sampfile="er_samples.txt"
sitefile="er_sites.txt"
agefile="er_ages.txt"
specout="er_specimens.txt"
sampout="pmag_samples.txt"
siteout="pmag_sites.txt"
resout="pmag_results.txt"
critout="pmag_criteria.txt"
instout="magic_instruments.txt"
sigcutoff,OBJ="",""
noDir,noInt=0,0
polarity=0
coords=['0']
Dcrit,Icrit,nocrit=0,0,0
corrections=[]
nocorrection=['DA-NL','DA-AC','DA-CR']
priorities=['DA-AC-ARM','DA-AC-TRM'] # priorities for anisotropy correction
# get command line stuff
if "-h" in args:
print(main.__doc__)
sys.exit()
if '-WD' in args:
ind=args.index("-WD")
dir_path=args[ind+1]
if '-cor' in args:
ind=args.index('-cor')
cors=args[ind+1].split(':') # list of required data adjustments
for cor in cors:
nocorrection.remove('DA-'+cor)
corrections.append('DA-'+cor)
if '-pri' in args:
ind=args.index('-pri')
priorities=args[ind+1].split(':') # list of required data adjustments
priorities=['DA-AC-'+p for p in priorities] # prepend the method-code prefix to each priority
if '-f' in args:
ind=args.index("-f")
measfile=args[ind+1]
if '-fsp' in args:
ind=args.index("-fsp")
infile=args[ind+1]
if '-fsi' in args:
ind=args.index("-fsi")
sitefile=args[ind+1]
if "-crd" in args:
ind=args.index("-crd")
coord=args[ind+1]
if coord=='s':coords=['-1']
if coord=='g':coords=['0']
if coord=='t':coords=['100']
if coord=='b':coords=['0','100']
if "-usr" in args:
ind=args.index("-usr")
user=sys.argv[ind+1]
else: user=""
if "-C" in args: Dcrit,Icrit,nocrit=1,1,1 # no selection criteria
if "-sam" in args: vgps=1 # save sample level VGPS/VADMs
if "-xSi" in args:
nositeints=1 # skip site level intensity
else:
nositeints=0
if "-age" in args:
ind=args.index("-age")
DefaultAge[0]=args[ind+1]
DefaultAge.append(args[ind+2])
DefaultAge.append(args[ind+3])
Daverage,Iaverage,Caverage=0,0,0
if "-aD" in args: Daverage=1 # average by sample directions
if "-aI" in args: Iaverage=1 # average by sample intensities
if "-aC" in args: Caverage=1 # average all components together ??? why???
if "-pol" in args: polarity=1 # calculate averages by polarity
if '-xD' in args:noDir=1
if '-xI' in args:
noInt=1
elif "-fla" in args:
if '-lat' in args:
print("you should set a paleolatitude file OR use present day lat - not both")
sys.exit()
ind=args.index("-fla")
model_lat_file=dir_path+'/'+args[ind+1]
get_model_lat=2
mlat=open(model_lat_file,'r')
ModelLats=[]
for line in mlat.readlines():
ModelLat={}
tmp=line.split()
ModelLat["er_site_name"]=tmp[0]
ModelLat["site_model_lat"]=tmp[1]
ModelLat["er_sample_name"]=tmp[0]
ModelLat["sample_lat"]=tmp[1]
ModelLats.append(ModelLat)
get_model_lat=2
elif '-lat' in args:
get_model_lat=1
if "-p" in args:
plotsites=1
if "-fmt" in args:
ind=args.index("-fmt")
fmt=args[ind+1]
if noDir==0: # plot by site - set up plot window
import pmagplotlib
EQ={}
EQ['eqarea']=1
pmagplotlib.plot_init(EQ['eqarea'],5,5) # define figure 1 as equal area projection
pmagplotlib.plot_net(EQ['eqarea']) # I don't know why this has to be here, but otherwise the first plot never plots...
pmagplotlib.draw_figs(EQ)
if '-WD' in args:
infile=dir_path+'/'+infile
measfile=dir_path+'/'+measfile
instout=dir_path+'/'+instout
sampfile=dir_path+'/'+sampfile
sitefile=dir_path+'/'+sitefile
agefile=dir_path+'/'+agefile
specout=dir_path+'/'+specout
sampout=dir_path+'/'+sampout
siteout=dir_path+'/'+siteout
resout=dir_path+'/'+resout
critout=dir_path+'/'+critout
if "-exc" in args: # use existing pmag_criteria file
if "-C" in args:
print('you can not use both existing and no criteria - choose either -exc OR -C OR neither (for default)')
sys.exit()
crit_data,file_type=pmag.magic_read(critout)
print("Acceptance criteria read in from ", critout)
else : # use default criteria (if nocrit set, then get really loose criteria as default)
crit_data=pmag.default_criteria(nocrit)
if nocrit==0:
print("Acceptance criteria are defaults")
else:
print("No acceptance criteria used ")
accept={}
for critrec in crit_data:
for key in list(critrec.keys()):
# need to migrate specimen_dang to specimen_int_dang for intensity data using old format
if 'IE-SPEC' in list(critrec.keys()) and 'specimen_dang' in list(critrec.keys()) and 'specimen_int_dang' not in list(critrec.keys()):
critrec['specimen_int_dang']=critrec['specimen_dang']
del critrec['specimen_dang']
# need to get rid of ron shaars sample_int_sigma_uT
if 'sample_int_sigma_uT' in list(critrec.keys()):
critrec['sample_int_sigma']='%10.3e'%(eval(critrec['sample_int_sigma_uT'])*1e-6)
if key not in list(accept.keys()) and critrec[key]!='':
accept[key]=critrec[key]
#
#
if "-exc" not in args and "-C" not in args:
print("args",args)
pmag.magic_write(critout,[accept],'pmag_criteria')
print("\n Pmag Criteria stored in ",critout,'\n')
#
# now we're done slow dancing
#
SiteNFO,file_type=pmag.magic_read(sitefile) # read in site data - has the lats and lons
SampNFO,file_type=pmag.magic_read(sampfile) # read in site data - has the lats and lons
height_nfo=pmag.get_dictitem(SiteNFO,'site_height','','F') # find all the sites with height info.
if agefile !="":AgeNFO,file_type=pmag.magic_read(agefile) # read in the age information
Data,file_type=pmag.magic_read(infile) # read in specimen interpretations
IntData=pmag.get_dictitem(Data,'specimen_int','','F') # retrieve specimens with intensity data
comment,orient="",[]
samples,sites=[],[]
for rec in Data: # run through the data filling in missing keys and finding all components, coordinates available
# fill in missing fields, collect unique sample and site names
if 'er_sample_name' not in list(rec.keys()):
rec['er_sample_name']=""
elif rec['er_sample_name'] not in samples:
samples.append(rec['er_sample_name'])
if 'er_site_name' not in list(rec.keys()):
rec['er_site_name']=""
elif rec['er_site_name'] not in sites:
sites.append(rec['er_site_name'])
if 'specimen_int' not in list(rec.keys()):rec['specimen_int']=''
if 'specimen_comp_name' not in list(rec.keys()) or rec['specimen_comp_name']=="":rec['specimen_comp_name']='A'
if rec['specimen_comp_name'] not in Comps:Comps.append(rec['specimen_comp_name'])
if "specimen_tilt_correction" not in list(rec.keys()): rec["specimen_tilt_correction"]="-1" # assume sample coordinates
rec['specimen_tilt_correction']=rec['specimen_tilt_correction'].strip('\n')
if rec["specimen_tilt_correction"] not in orient: orient.append(rec["specimen_tilt_correction"]) # collect available coordinate systems
if "specimen_direction_type" not in list(rec.keys()): rec["specimen_direction_type"]='l' # assume direction is line - not plane
if "specimen_dec" not in list(rec.keys()): rec["specimen_direction_type"]='' # if no declination, set direction type to blank
if "specimen_n" not in list(rec.keys()): rec["specimen_n"]='' # put in n
if "specimen_alpha95" not in list(rec.keys()): rec["specimen_alpha95"]='' # put in alpha95
if "magic_method_codes" not in list(rec.keys()): rec["magic_method_codes"]=''
#
# start parsing data into SpecDirs, SpecPlanes, SpecInts
SpecInts,SpecDirs,SpecPlanes=[],[],[]
samples.sort() # get sorted list of samples and sites
sites.sort()
if noInt==0: # don't skip intensities
IntData=pmag.get_dictitem(Data,'specimen_int','','F') # retrieve specimens with intensity data
if nocrit==0: # use selection criteria
for rec in IntData: # do selection criteria
kill=pmag.grade(rec,accept,'specimen_int')
if len(kill)==0: SpecInts.append(rec) # intensity record to be included in sample, site calculations
else:
SpecInts=IntData[:] # take everything - no selection criteria
# check for required data adjustments
if len(corrections)>0 and len(SpecInts)>0:
for cor in corrections:
SpecInts=pmag.get_dictitem(SpecInts,'magic_method_codes',cor,'has') # only take specimens with the required corrections
if len(nocorrection)>0 and len(SpecInts)>0:
for cor in nocorrection:
SpecInts=pmag.get_dictitem(SpecInts,'magic_method_codes',cor,'not') # exclude the corrections not specified for inclusion
# take top priority specimen of its name in remaining specimens (only one per customer)
PrioritySpecInts=[]
specimens=pmag.get_specs(SpecInts) # get list of uniq specimen names
for spec in specimens:
ThisSpecRecs=pmag.get_dictitem(SpecInts,'er_specimen_name',spec,'T') # all the records for this specimen
if len(ThisSpecRecs)==1:
PrioritySpecInts.append(ThisSpecRecs[0])
elif len(ThisSpecRecs)>1: # more than one
prec=[]
for p in priorities:
ThisSpecRecs=pmag.get_dictitem(SpecInts,'magic_method_codes',p,'has') # all the records for this specimen
if len(ThisSpecRecs)>0:prec.append(ThisSpecRecs[0])
PrioritySpecInts.append(prec[0]) # take the best one
SpecInts=PrioritySpecInts # this has the first specimen record
if noDir==0: # don't skip directions
AllDirs=pmag.get_dictitem(Data,'specimen_direction_type','','F') # retrieve specimens with directed lines and planes
Ns=pmag.get_dictitem(AllDirs,'specimen_n','','F') # get all specimens with specimen_n information
if nocrit!=1: # use selection criteria
for rec in Ns: # look through everything with specimen_n for "good" data
kill=pmag.grade(rec,accept,'specimen_dir')
if len(kill)==0: # nothing
..., n)
The third dimension of ´P´ spans the v-direction control points (0, 1, ..., m)
W : ndarray with shape (n+1, m+1)
Array containing the weight of the control points
The first dimension of ´W´ spans the u-direction control points weights (0, 1, ..., n)
The second dimension of ´W´ spans the v-direction control points weights (0, 1, ..., m)
p : int
Degree of the u-basis polynomials
q : int
Degree of the v-basis polynomials
U : ndarray with shape (r+1=n+p+2,)
Knot vector in the u-direction
Set the multiplicity of the first and last entries equal to ´p+1´ to obtain a clamped spline
V : ndarray with shape (s+1=m+q+2,)
Knot vector in the v-direction
Set the multiplicity of the first and last entries equal to ´q+1´ to obtain a clamped spline
u : scalar or ndarray with shape (N,)
u-parameter used to evaluate the surface
v : scalar or ndarray with shape (N,)
v-parameter used to evaluate the surface
Returns
-------
S : ndarray with shape (ndim, N)
Array containing the NURBS surface coordinates
The first dimension of ´S´ spans the ´(x,y,z)´ coordinates
The second dimension of ´S´ spans the (u,v) parametrization sample points
"""
# Check the shape of the input parameters
if P.ndim > 3: raise Exception('P must be an array of shape (ndim, n+1, m+1)')
if W.ndim > 2: raise Exception('W must be an array of shape (n+1, m+1)')
if not np.isscalar(p): raise Exception('p must be a scalar')
if not np.isscalar(q): raise Exception('q must be a scalar')
if U.ndim > 1: raise Exception('U must be an array of shape (r+1=n+p+2,)')
if V.ndim > 1: raise Exception('V must be an array of shape (s+1=m+q+2,)')
if np.isscalar(u): u = np.asarray(u)
elif u.ndim > 1: raise Exception('u must be a scalar or an array of shape (N,)')
if np.isscalar(v): v = np.asarray(v)
elif v.ndim > 1: raise Exception('v must be a scalar or an array of shape (N,)')
# Shape of the array of control points
n_dim, nn, mm = np.shape(P)
# Highest index of the control points (counting from zero)
n = nn - 1
m = mm - 1
# Compute the B-Spline basis polynomials
N_basis_u = compute_basis_polynomials(n, p, U, u) # shape (n+1, N)
N_basis_v = compute_basis_polynomials(m, q, V, v) # shape (m+1, N)
# Map the control points to homogeneous space | P_w = (x*w,y*w,z*w,w)
P_w = np.concatenate((P * W[np.newaxis, :], W[np.newaxis, :]), axis=0)
# Compute the coordinates of the NURBS surface in homogeneous space
# This implementation is vectorized to increase speed
A = np.dot(P_w, N_basis_v) # shape (ndim+1, n+1, N)
B = np.repeat(N_basis_u[np.newaxis], repeats=n_dim+1, axis=0) # shape (ndim+1, n+1, N)
S_w = np.sum(A*B, axis=1) # shape (ndim+1, N)
# Map the coordinates back to the ordinary space
S = S_w[0:-1,:]/S_w[-1, :]
return S
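The homogeneous-coordinate evaluation above (weight the control points, blend with the tensor-product basis, divide by the blended weight) can be checked on a minimal bilinear patch. This is a standalone sketch, not part of the class: ´linear_basis´ and ´nurbs_point´ are hypothetical helper names, and the degree-1 basis replaces ´compute_basis_polynomials´.

```python
import numpy as np

def linear_basis(t):
    # Degree-1 B-spline basis on the clamped knot vector [0, 0, 1, 1]
    return np.array([1.0 - t, t])

def nurbs_point(P, W, u, v):
    # Rational evaluation via homogeneous coordinates, as in the method above:
    # weight the control points, blend, then project back by dividing by w
    Nu, Nv = linear_basis(u), linear_basis(v)
    Pw = np.concatenate((P * W[np.newaxis], W[np.newaxis]), axis=0)  # (ndim+1, 2, 2)
    Sw = np.einsum('i,dij,j->d', Nu, Pw, Nv)                         # blend in homogeneous space
    return Sw[:-1] / Sw[-1]                                          # back to ordinary space

# Unit square in the z=0 plane with uniform weights: the patch is bilinear
P = np.zeros((3, 2, 2))
P[0, 1, :] = 1.0   # x-coordinates of the i=1 control points
P[1, :, 1] = 1.0   # y-coordinates of the j=1 control points
W = np.ones((2, 2))
print(nurbs_point(P, W, 0.5, 0.5))  # -> [0.5 0.5 0. ]
```

With uniform weights the NURBS patch reduces to the plain B-spline patch, which is why the midpoint lands at the centroid of the four corners.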
@staticmethod
def compute_bspline_coordinates(P, p, q, U, V, u, v):
""" Evaluate the coordinates of the B-Spline surface corresponding to the (u,v) parametrization
This function computes the coordinates of a B-Spline surface as given by equation 3.11. See algorithm A3.5
Parameters
----------
P : ndarray with shape (ndim, n+1, m+1)
Array containing the coordinates of the control points
The first dimension of ´P´ spans the coordinates of the control points (any number of dimensions)
The second dimension of ´P´ spans the u-direction control points (0, 1, ..., n)
The third dimension of ´P´ spans the v-direction control points (0, 1, ..., m)
p : int
Degree of the u-basis polynomials
q : int
Degree of the v-basis polynomials
U : ndarray with shape (r+1=n+p+2,)
Knot vector in the u-direction
Set the multiplicity of the first and last entries equal to ´p+1´ to obtain a clamped spline
V : ndarray with shape (s+1=m+q+2,)
Knot vector in the v-direction
Set the multiplicity of the first and last entries equal to ´q+1´ to obtain a clamped spline
u : scalar or ndarray with shape (N,)
u-parameter used to evaluate the surface
v : scalar or ndarray with shape (N,)
v-parameter used to evaluate the surface
Returns
-------
S : ndarray with shape (ndim, N)
Array containing the B-Spline surface coordinates
The first dimension of ´S´ spans the ´(x,y,z)´ coordinates
The second dimension of ´S´ spans the (u,v) parametrization sample points
"""
# Check the shape of the input parameters
if P.ndim > 3: raise Exception('P must be an array of shape (ndim, n+1, m+1)')
if not np.isscalar(p): raise Exception('p must be a scalar')
if not np.isscalar(q): raise Exception('q must be a scalar')
if U.ndim > 1: raise Exception('U must be an array of shape (r+1=n+p+2,)')
if V.ndim > 1: raise Exception('V must be an array of shape (s+1=m+q+2,)')
if np.isscalar(u): u = np.asarray(u)
elif u.ndim > 1: raise Exception('u must be a scalar or an array of shape (N,)')
if np.isscalar(v): v = np.asarray(v)
elif v.ndim > 1: raise Exception('v must be a scalar or an array of shape (N,)')
# Shape of the array of control points
n_dim, nn, mm = np.shape(P)
# Highest index of the control points (counting from zero)
n = nn - 1
m = mm - 1
# Compute the B-Spline basis polynomials
N_basis_u = compute_basis_polynomials(n, p, U, u) # shape (n+1, N)
N_basis_v = compute_basis_polynomials(m, q, V, v) # shape (m+1, N)
# Compute the coordinates of the B-Spline surface
# This implementation is vectorized to increase speed
A = np.dot(P, N_basis_v) # shape (ndim, n+1, N)
B = np.repeat(N_basis_u[np.newaxis], repeats=n_dim, axis=0) # shape (ndim, n+1, N)
S = np.sum(A*B,axis=1) # shape (ndim, N)
return S
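The dot/repeat/sum pipeline used in both evaluation methods above is a single tensor contraction, S_dk = sum_ij P_dij * Nu_ik * Nv_jk. A quick self-contained equivalence check, using random stand-ins for the basis matrices (the names below are illustrative, not the module's API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n1, m1, N = 3, 4, 5, 7
P = rng.random((n_dim, n1, m1))   # control points (ndim, n+1, m+1)
Nu = rng.random((n1, N))          # stand-in for compute_basis_polynomials(n, p, U, u)
Nv = rng.random((m1, N))          # stand-in for compute_basis_polynomials(m, q, V, v)

# Pipeline as written in the methods above
A = np.dot(P, Nv)                                     # (ndim, n+1, N)
B = np.repeat(Nu[np.newaxis], repeats=n_dim, axis=0)  # (ndim, n+1, N)
S_loop = np.sum(A * B, axis=1)                        # (ndim, N)

# Same contraction expressed directly
S_einsum = np.einsum('dij,ik,jk->dk', P, Nu, Nv)
print(np.allclose(S_loop, S_einsum))  # -> True
```

The einsum form avoids materializing the repeated ´B´ array, which can matter for large numbers of sample points.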
# ---------------------------------------------------------------------------------------------------------------- #
# Compute the derivatives of the surface
# ---------------------------------------------------------------------------------------------------------------- #
def get_derivative(self, u, v, order_u, order_v):
""" Evaluate the derivative of the surface for the input u-parametrization
Parameters
----------
u : scalar or ndarray with shape (N,)
u-parameter used to evaluate the surface
v : scalar or ndarray with shape (N,)
v-parameter used to evaluate the surface
order_u : int
Order of the partial derivative in the u-direction
order_v : int
Order of the partial derivative in the v-direction
Returns
-------
dS : ndarray with shape (ndim, N)
Array containing the derivative of the desired order
The first dimension of ´dS´ spans the ´(x,y,z)´ coordinates
The second dimension of ´dS´ spans the (u,v) parametrization sample points
"""
# Compute the array of surface derivatives up to the input (u,v) orders and slice the desired values
dS = self.compute_nurbs_derivatives(self.P, self.W, self.p, self.q, self.U, self.V, u, v, order_u, order_v)[order_u, order_v, ...]
return dS
def compute_nurbs_derivatives(self, P, W, p, q, U, V, u, v, up_to_order_u, up_to_order_v):
""" Compute the derivatives of a NURBS surface in ordinary space up to to the desired orders
This function computes the analytic derivatives of the NURBS surface in ordinary space using equation 4.20 and
the derivatives of the NURBS surface in homogeneous space obtained from compute_bspline_derivatives()
The derivatives are computed recursively in a fashion similar to algorithm A4.4
Parameters
----------
P : ndarray with shape (ndim, n+1, m+1)
Array containing the coordinates of the control points
The first dimension of ´P´ spans the coordinates of the control points (any number of dimensions)
The second dimension of ´P´ spans the u-direction control points (0, 1, ..., n)
The third dimension of ´P´ spans the v-direction control points (0, 1, ..., m)
W : ndarray with shape (n+1, m+1)
Array containing the weight of the control points
The first dimension of ´W´ spans the u-direction control point weights (0, 1, ..., n)
The second dimension of ´W´ spans the v-direction control point weights (0, 1, ..., m)
p : int
Degree of the u-basis polynomials
q : int
Degree of the v-basis polynomials
U : ndarray with shape (r+1=n+p+2,)
Knot vector in the u-direction
Set the multiplicity of the first and last entries equal to ´p+1´ | |
# coding: utf-8
from datetime import datetime
from django.http import HttpResponse
from django.http import HttpResponseRedirect
from django.http import HttpResponseServerError
from django.shortcuts import render
from django.urls import reverse
from django.views import View
from leancloud import Object
from leancloud import Query
from leancloud.errors import LeanCloudError
from PIL import Image, ImageColor, ImageFont, ImageDraw, ImageFilter
from io import BytesIO
from textwrap import *
import re
# Blur
def filter_blur(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.BLUR)
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Contour
def filter_contour(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.CONTOUR)
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Detail
def filter_detail(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.DETAIL)
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Edge enhance
def filter_edge_enhance(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.EDGE_ENHANCE)
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Edge enhance (stronger)
def filter_edge_enhance_more(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.EDGE_ENHANCE_MORE)
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Emboss
def filter_emboss(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.EMBOSS)
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Find edges
def filter_find_edges(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.FIND_EDGES)
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Smooth
def filter_smooth(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.SMOOTH)
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Smooth (stronger)
def filter_smooth_more(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.SMOOTH_MORE)
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Sharpen
def filter_sharpen(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.SHARPEN)
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Gaussian blur
def filter_gaussian_blur(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.GaussianBlur(4))
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
# Unsharp mask
def filter_unsharp_mask(request):
image_data = Image.open("photo.jpg")
filtered_data = image_data.filter(ImageFilter.UnsharpMask())
msstream = BytesIO()
filtered_data.save(msstream, "jpeg")
filtered_data.close()
return HttpResponse(msstream.getvalue(), content_type="image/jpeg")
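All of the filter views above repeat the same open/filter/encode sequence. The shared core can be factored into one helper; this is a sketch, with ´filtered_jpeg_bytes´ a hypothetical name rather than an existing function in this module:

```python
from io import BytesIO
from PIL import Image, ImageFilter

def filtered_jpeg_bytes(image, pil_filter):
    # Shared core of the views above: apply a PIL filter and encode as JPEG
    out = image.filter(pil_filter)
    stream = BytesIO()
    out.save(stream, "JPEG")
    out.close()
    return stream.getvalue()

# In a Django view this would become, e.g.:
#   return HttpResponse(filtered_jpeg_bytes(Image.open("photo.jpg"), ImageFilter.BLUR),
#                       content_type="image/jpeg")
img = Image.new("RGB", (32, 32), (128, 64, 32))
data = filtered_jpeg_bytes(img, ImageFilter.BLUR)
print(data[:2])  # JPEG data starts with the SOI marker b'\xff\xd8'
```

Each per-filter view then shrinks to a one-line wrapper around the helper.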
def template(request):
w = 640
h = 862
iw = 600
ih = 340
title = '每日一言'
content = '觉得最失落的,大概是你还在为你们的未来出谋划策,他却已慢慢后退不再与你并肩。'
spacing = 20
content = fill(content, 15)
author = '- 天天码图 -'
copyright = '微信小程序「天天码图」'
title_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 35)
content_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 30)
author_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 25)
copyright_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 25)
base = Image.new('RGBA',(w,h),(255,255,255,255))
draw = ImageDraw.Draw(base)
tw,th = draw.multiline_textsize(title, font=title_fnt)
aw,ah = draw.multiline_textsize(author, font=author_fnt)
cw,ch = draw.multiline_textsize(content, font=content_fnt, spacing=spacing)
crw,crh = draw.multiline_textsize(copyright, font=copyright_fnt)
h = 635 + th + ch + crh + ah
base = Image.new('RGBA',(w,h),(255,255,255,255))
draw = ImageDraw.Draw(base)
photo = Image.open("photo.jpg").convert('RGBA')
(pw, ph) = photo.size
if pw/ph>iw/ih:
box = ((pw-ph*iw/ih)/2,0,(pw+ph*iw/ih)/2,ph)
else:
box = (0,(ph-pw*ih/iw)/2,pw,(ph+pw*ih/iw)/2)
photo = photo.crop(box)
photo = photo.resize((iw,ih))
base.paste(photo,box=(20,20))
# get a drawing context
draw = ImageDraw.Draw(base)
# draw text in the middle of the image, half opacity
draw.multiline_text((w/2-tw/2,420), title, font=title_fnt, fill=(0,0,0,255), align='center')
draw.multiline_text((w/2-cw/2,420+th+45), content, font=content_fnt, fill=(0,0,0,255), align='center', spacing=spacing)
draw.multiline_text((w/2-aw/2,420+th+45+ch+115), author, font=author_fnt, fill=(0,0,0,255), align='center')
draw.multiline_text((w-crw,420+th+45+ch+115+ah+50), copyright, font=copyright_fnt, fill=(189,189,189,255), align='center')
# get BytesIO
msstream = BytesIO()
# save image data to output stream
base.save(msstream,"png")
# release memory
base.close()
return HttpResponse(msstream.getvalue(),content_type="image/png")
def template2(request):
w = 640
h = 1020
iw = 600
ih = 340
title = '每日一言'
content = '觉得最失落的,大概是你还在为你们的未来出谋划策,他却已慢慢后退不再与你并肩。'
spacing = 20
padding = 2
author = '- 天天码图 -'
copyright = '微信小程序「天天码图」'
title_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 35)
content_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 30)
author_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 25)
copyright_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 25)
base = Image.new('RGBA',(w,h),(255,255,255,255))
draw = ImageDraw.Draw(base)
aw,ah = draw.multiline_textsize(author, font=author_fnt)
crw,crh = draw.multiline_textsize(copyright, font=copyright_fnt)
photo = Image.open("photo.jpg").convert('RGBA')
(pw, ph) = photo.size
if pw/ph>iw/ih:
box = ((pw-ph*iw/ih)/2,0,(pw+ph*iw/ih)/2,ph)
else:
box = (0,(ph-pw*ih/iw)/2,pw,(ph+pw*ih/iw)/2)
photo = photo.crop(box)
photo = photo.resize((iw,ih))
base.paste(photo,box=(20,20))
# get a drawing context
draw = ImageDraw.Draw(base)
# split the title
tlines = wrap(title, 1)
# current title height
tnh = 420
# get width and height of single title word
stw,sth = title_fnt.getsize("已")
for tline in tlines:
draw.text((w-115-stw,tnh), tline, fill=(0,0,0,255), font=title_fnt)
tnh = tnh+sth
# get width and height of single content word
scw,sch = content_fnt.getsize("已")
clines = wrap(content, 14)
# current width of content
cnw = w-115-stw-115-scw
for cline in clines:
# current height of content
cnh = 420
cwords = wrap(cline, 1)
for cword in cwords:
pattern = re.compile("[,。、]+")
if pattern.search(cword):
draw.text((cnw,cnh), cword, fill=(0,0,0,255), font=content_fnt)
# draw.text((cnw+30-12,cnh-30+12), cword, fill=(0,0,0,255), font=content_fnt)
else:
draw.text((cnw,cnh), cword, fill=(0,0,0,255), font=content_fnt)
cnh = cnh+sch+padding
cnw = cnw-scw-spacing
# draw text in the middle of the image, half opacity
# draw.multiline_text((w/2-tw/2,420), title, font=title_fnt, fill=(0,0,0,255), align='center')
# draw.multiline_text((w/2-cw/2,420+th+45), content, font=content_fnt, fill=(0,0,0,255), align='center', spacing=spacing)
draw.multiline_text((w/2-aw/2,h-50-15-crh-ah), author, font=author_fnt, fill=(0,0,0,255), align='center')
draw.multiline_text((w-crw,h-15-crh), copyright, font=copyright_fnt, fill=(189,189,189,255), align='center')
# get BytesIO
msstream = BytesIO()
# save image data to output stream
base.save(msstream,"png")
# release memory
base.close()
return HttpResponse(msstream.getvalue(),content_type="image/png")
def template3(request):
w = 640
h = 862
iw = 600
ih = 340
bw = 300
bh = 300
title = '每日一言'
content = '觉得最失落的,大概是你还在为你们的未来出谋划策,他却已慢慢后退不再与你并肩。'
spacing = 20
content = fill(content, 15)
author = '- 天天码图 -'
copyright = '微信小程序「天天码图」'
title_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 35)
content_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 30)
author_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 25)
copyright_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 25)
base = Image.new('RGBA',(w,h),(255,255,255,255))
draw = ImageDraw.Draw(base)
tw,th = draw.multiline_textsize(title, font=title_fnt)
aw,ah = draw.multiline_textsize(author, font=author_fnt)
cw,ch = draw.multiline_textsize(content, font=content_fnt, spacing=spacing)
crw,crh = draw.multiline_textsize(copyright, font=copyright_fnt)
h = 695 + th + ch + crh + ah
base = Image.new('RGBA',(w,h),(255,255,255,255))
draw = ImageDraw.Draw(base)
photo = Image.open("photo.jpg").convert('RGBA')
pw, ph = photo.size
if pw > ph:
box = ((pw-ph*bw/bh)/2,0,(pw+ph*bw/bh)/2,ph)
else:
box = (0,(ph-pw*bh/bw)/2,pw,(ph+pw*bh/bw)/2)
photo = photo.crop(box)
photo = photo.resize((bw*4,bh*4))
circle = Image.new('L', (bw*4, bh*4), 0)
draw = ImageDraw.Draw(circle)
draw.ellipse((0, 0, bw*4, bh*4), fill=255)
alpha = Image.new('L', (bw*4, bh*4), 255)
alpha.paste(circle, (0, 0))
photo.putalpha(alpha)
photo = photo.resize((bw,bh),Image.ANTIALIAS)
base.paste(photo,box=(170,120),mask=photo)
# get a drawing context
draw = ImageDraw.Draw(base)
# draw text in the middle of the image, half opacity
draw.multiline_text((w/2-tw/2,480), title, font=title_fnt, fill=(0,0,0,255), align='center')
draw.multiline_text((w/2-cw/2,480+th+45), content, font=content_fnt, fill=(0,0,0,255), align='center', spacing=spacing)
draw.multiline_text((w/2-aw/2,480+th+45+ch+115), author, font=author_fnt, fill=(0,0,0,255), align='center')
draw.multiline_text((w-crw,480+th+45+ch+115+ah+50), copyright, font=copyright_fnt, fill=(189,189,189,255), align='center')
# get BytesIO
msstream = BytesIO()
# save image data to output stream
base.save(msstream,"png")
# release memory
base.close()
return HttpResponse(msstream.getvalue(),content_type="image/png")
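template3 and template4 both build the circular avatar the same way: draw the alpha mask at 4x resolution, then downscale so the circle edge comes out antialiased. That trick can be isolated into a helper. A sketch, assuming Pillow: ´circular_thumbnail´ is a hypothetical name, and ´Image.LANCZOS´ stands in for the deprecated ´Image.ANTIALIAS´ used above.

```python
from PIL import Image, ImageDraw

def circular_thumbnail(photo, size, supersample=4):
    # Draw the circular alpha mask at supersample-times resolution and
    # downscale it with the photo, so the circle boundary is antialiased
    big = size * supersample
    photo = photo.convert("RGBA").resize((big, big))
    mask = Image.new("L", (big, big), 0)
    ImageDraw.Draw(mask).ellipse((0, 0, big, big), fill=255)
    photo.putalpha(mask)
    return photo.resize((size, size), Image.LANCZOS)

thumb = circular_thumbnail(Image.new("RGB", (600, 600), (200, 30, 30)), 300)
print(thumb.size, thumb.mode)  # (300, 300) RGBA; corners fully transparent
```

The result can be pasted onto the card with itself as the mask, exactly as ´base.paste(photo, box=..., mask=photo)´ does above.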
def template4(request):
w = 640
h = 1080
iw = 600
ih = 340
bw = 300
bh = 300
padding = 2
title = '每日一言'
content = '觉得最失落的,大概是你还在为你们的未来出谋划策,他却已慢慢后退不再与你并肩。'
spacing = 20
content = fill(content, 15)
author = '- 天天码图 -'
copyright = '微信小程序「天天码图」'
title_fnt = ImageFont.truetype('font/zh/WangQingHua.ttf', 35)
content_fnt = ImageFont.truetype('font/zh/WangQingHua.ttf', 30)
author_fnt = ImageFont.truetype('font/zh/WangQingHua.ttf', 25)
copyright_fnt = ImageFont.truetype('font/zh/WangQingHua.ttf', 25)
base = Image.new('RGBA',(w,h),(255,255,255,255))
draw = ImageDraw.Draw(base)
aw,ah = draw.multiline_textsize(author, font=author_fnt)
crw,crh = draw.multiline_textsize(copyright, font=copyright_fnt)
photo = Image.open("photo.jpg").convert('RGBA')
pw, ph = photo.size
if pw > ph:
box = ((pw-ph*bw/bh)/2,0,(pw+ph*bw/bh)/2,ph)
else:
box = (0,(ph-pw*bh/bw)/2,pw,(ph+pw*bh/bw)/2)
photo = photo.crop(box)
photo = photo.resize((bw*4,bh*4))
circle = Image.new('L', (bw*4, bh*4), 0)
draw = ImageDraw.Draw(circle)
draw.ellipse((0, 0, bw*4, bh*4), fill=255)
alpha = Image.new('L', (bw*4, bh*4), 255)
alpha.paste(circle, (0, 0))
photo.putalpha(alpha)
photo = photo.resize((bw,bh),Image.ANTIALIAS)
base.paste(photo,box=(170,120),mask=photo)
# get a drawing context
draw = ImageDraw.Draw(base)
# split the title
tlines = wrap(title, 1)
# current title height
tnh = 480
# get width and height of single title word
stw,sth = title_fnt.getsize("已")
for tline in tlines:
draw.text((w-115-stw,tnh), tline, fill=(0,0,0,255), font=title_fnt)
tnh = tnh+sth
# get width and height of single content word
scw,sch = content_fnt.getsize("已")
clines = wrap(content, 14)
# current width of content
cnw = w-115-stw-115-scw
for cline in clines:
# current height of content
cnh = 480
cwords = wrap(cline, 1)
for cword in cwords:
pattern = re.compile("[,。、]+")
if pattern.search(cword):
draw.text((cnw,cnh), cword, fill=(0,0,0,255), font=content_fnt)
# draw.text((cnw+30-12,cnh-30+12), cword, fill=(0,0,0,255), font=content_fnt)
else:
draw.text((cnw,cnh), cword, fill=(0,0,0,255), font=content_fnt)
cnh = cnh+sch+padding
cnw = cnw-scw-spacing
# draw text in the middle of the image, half opacity
# draw.multiline_text((w/2-tw/2,420), title, font=title_fnt, fill=(0,0,0,255), align='center')
# draw.multiline_text((w/2-cw/2,420+th+45), content, font=content_fnt, fill=(0,0,0,255), align='center', spacing=spacing)
draw.multiline_text((w/2-aw/2,h-50-15-crh-ah), author, font=author_fnt, fill=(0,0,0,255), align='center')
draw.multiline_text((w-crw,h-15-crh), copyright, font=copyright_fnt, fill=(189,189,189,255), align='center')
# get BytesIO
msstream = BytesIO()
# save image data to output stream
base.save(msstream,"png")
# release memory
base.close()
return HttpResponse(msstream.getvalue(),content_type="image/png")
def template5(request,font):
w = 640
h = 1080
iw = 600
ih = 340
bw = 300
bh = 300
padding = 2
title = '西江月·夜行黄沙道中'
author = '辛弃疾'
category = '#婉约#豪放#夏天#'
content = '''帘外雨潺潺,
春意阑珊。
罗衾不耐五更寒。
梦里不知身是客,
一晌贪欢。
独自莫凭栏,
无限江山,
别时容易见时难。
流水落花春去也,
天上人间。'''
spacing = 20
#content = content.replace(',','')
#content = content.replace('。','')
#content = content.replace('\r','。')
#content = fill(content, 14)
copyright = '微信小程序「天天码图」'
title_fnt = ImageFont.truetype('font/zh/'+font+'.ttf', 35)
author_fnt = ImageFont.truetype('font/zh/'+font+'.ttf', 25)
content_fnt = ImageFont.truetype('font/zh/'+font+'.ttf', 30)
copyright_fnt = ImageFont.truetype('font/zh/YueSong.ttf', 15)
clines = content.split('\n')
tlines = wrap(title, 1)
alines = wrap(author, 1)
# get width and height | |
when calling `project_project_id_repositories_post`") # noqa: E501
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project_project_id_repositories_post`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['bearer', 'cookie'] # noqa: E501
return self.api_client.call_api(
'/project/{project_id}/repositories', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def project_project_id_repositories_repository_id_delete(self, project_id, repository_id, **kwargs): # noqa: E501
"""Removes repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.project_project_id_repositories_repository_id_delete(project_id, repository_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param int project_id: Project ID (required)
:param int repository_id: repository ID (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.project_project_id_repositories_repository_id_delete_with_http_info(project_id, repository_id, **kwargs) # noqa: E501
else:
(data) = self.project_project_id_repositories_repository_id_delete_with_http_info(project_id, repository_id, **kwargs) # noqa: E501
return data
def project_project_id_repositories_repository_id_delete_with_http_info(self, project_id, repository_id, **kwargs): # noqa: E501
"""Removes repository # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.project_project_id_repositories_repository_id_delete_with_http_info(project_id, repository_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param int project_id: Project ID (required)
:param int repository_id: repository ID (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id', 'repository_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project_project_id_repositories_repository_id_delete" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project_project_id_repositories_repository_id_delete`") # noqa: E501
# verify the required parameter 'repository_id' is set
if ('repository_id' not in params or
params['repository_id'] is None):
raise ValueError("Missing the required parameter `repository_id` when calling `project_project_id_repositories_repository_id_delete`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
if 'repository_id' in params:
path_params['repository_id'] = params['repository_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# Authentication setting
auth_settings = ['bearer', 'cookie'] # noqa: E501
return self.api_client.call_api(
'/project/{project_id}/repositories/{repository_id}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def project_project_id_tasks_get(self, project_id, **kwargs): # noqa: E501
"""Get Tasks related to current project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.project_project_id_tasks_get(project_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param int project_id: Project ID (required)
:return: list[Task]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.project_project_id_tasks_get_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.project_project_id_tasks_get_with_http_info(project_id, **kwargs) # noqa: E501
return data
def project_project_id_tasks_get_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Get Tasks related to current project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.project_project_id_tasks_get_with_http_info(project_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param int project_id: Project ID (required)
:return: list[Task]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project_project_id_tasks_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project_project_id_tasks_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'text/plain; charset=utf-8']) # noqa: E501
# Authentication setting
auth_settings = ['bearer', 'cookie'] # noqa: E501
return self.api_client.call_api(
'/project/{project_id}/tasks', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Task]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def project_project_id_tasks_last_get(self, project_id, **kwargs): # noqa: E501
"""Get last 200 Tasks related to current project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.project_project_id_tasks_last_get(project_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param int project_id: Project ID (required)
:return: list[Task]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.project_project_id_tasks_last_get_with_http_info(project_id, **kwargs) # noqa: E501
else:
(data) = self.project_project_id_tasks_last_get_with_http_info(project_id, **kwargs) # noqa: E501
return data
def project_project_id_tasks_last_get_with_http_info(self, project_id, **kwargs): # noqa: E501
"""Get last 200 Tasks related to current project # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.project_project_id_tasks_last_get_with_http_info(project_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param int project_id: Project ID (required)
:return: list[Task]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['project_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project_project_id_tasks_last_get" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'project_id' is set
if ('project_id' not in params or
params['project_id'] is None):
raise ValueError("Missing the required parameter `project_id` when calling `project_project_id_tasks_last_get`") # noqa: E501
collection_formats = {}
path_params = {}
if 'project_id' in params:
path_params['project_id'] = params['project_id'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json', 'text/plain; charset=utf-8']) # noqa: E501
# Authentication setting
auth_settings = ['bearer', 'cookie'] # noqa: E501
return self.api_client.call_api(
'/project/{project_id}/tasks/last', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Task]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def project_project_id_tasks_post(self, body, project_id, **kwargs): # noqa: E501
"""Starts a job # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.project_project_id_tasks_post(body, project_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param Body3 body: (required)
:param int project_id: Project ID (required)
:return: Task
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.project_project_id_tasks_post_with_http_info(body, project_id, **kwargs) # noqa: E501
else:
(data) = self.project_project_id_tasks_post_with_http_info(body, project_id, **kwargs) # noqa: E501
return data
def project_project_id_tasks_post_with_http_info(self, body, project_id, **kwargs): # noqa: E501
"""Starts a job # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.project_project_id_tasks_post_with_http_info(body, project_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param Body3 body: (required)
:param int project_id: Project ID (required)
:return: Task
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['body', 'project_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method project_project_id_tasks_post" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'body' is set
if ('body' not in params or
params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `project_project_id_tasks_post`")  # noqa: E501
raise e
for ch in self.CHANNEL:
for i in lenlist['less'][ch]:
replace = np.zeros(info['numOfSamples'])
if info['LEN'][ch][i] != 0:
replace[:info['LEN'][ch][i]] = self.data[ch][i]
self.data[ch][i] = replace
# distance offset tends to increase
info['up'] = True
# list of stations that have distance offset different from the trend
info['abnormal'] = []
if staSpc is None and orgStartT is not None:
up = []
down = []
for i in range(1, len(listOfDataStations)):
staId = listOfStations.index(listOfDataStations[i])
pStaId = listOfStations.index(listOfDataStations[i-1])
if info['distanceOffset'][staId] > \
info['distanceOffset'][pStaId]:
up.append((pStaId, staId))
else:
down.append((pStaId, staId))
checkedList = down
if len(down) > len(up):
checkedList = up
info['up'] = False
for a1, a2 in checkedList:
if a1 not in info['abnormal']:
info['abnormal'].append(a1)
if a2 not in info['abnormal']:
info['abnormal'].append(a2)
info['numOfStations'] = len(listOfStations)
info['minOffset'] = self.minOffset
info['sumD'] = self.maxOffset - self.minOffset
info['numOfDataStations'] = len(listOfDataStations)
return info
###############################################
# def readData_receiverGather
# Author: <NAME>
# Updated: 201701
# to populate data for receiverGather
def readData_receiverGather(
self, orgStartT, offset, timeLen, staSpc,
appClockDriftCorr, redVel, # corrections
PH5View, statusBar=None, beginMsg=None):
'''
Read trace data based on given start and stop epoch, arrays,
and channels.
receiverGather gets the events for ONE selected station-channel
Sets: self.metadata
Returns: info
'''
sampleRate = PH5View.selectedArray['sampleRate']
if statusBar is not None and beginMsg is not None:
statusMsg = beginMsg + ": preparing event table"
statusBar.showMessage(statusMsg)
# For each event, loop through each station,
# each channel in the requested array and extract trace data.
self.data = []
info = {}
info['maxP2P'] = -1 * (2**31 - 1)
info['zeroDOffsetIndex'] = None
info['distanceOffset'] = []
# secs = timeLen
Offset_t = {}
self.minOffset = None
self.maxOffset = None
# receiver gather has only one pair of station-channel
staId = PH5View.selectedArray['seclectedStations'][0]
ch = self.CHANNEL[0]
self.metadata = []
self.data = {ch: []}
a = self.ARRAY[0]  # currently only one array can be selected at a time
if orgStartT is not None:
startTime = orgStartT + offset
stopTime = startTime + timeLen
lenlist = {'less': {ch: []}, 'maybeless': {ch: []}}
info['numOfSamples'] = 0
info['noDataList'] = []
info['LEN'] = {ch: []}
# If there is an associated event calculate offset distances
for ev in PH5View.selectedEvents:
Offset_t[a] = self.fio.calc_offsets(
a, ev['eventId'], ev['eventName'])
if orgStartT is None:
startTime = ev['eStart'] + offset
stopTime = startTime + timeLen
sr = None
# slen = None
rows = self.fio.Array_t[a]['byid']
line_seq = 0
r = rows[staId][ch][0]
ii = len(self.metadata)
try:
if not ph5api.is_in(
r['deploy_time/epoch_l'],
r['pickup_time/epoch_l'], startTime, stopTime):
continue
das = r['das/serial_number_s']
corr = self.calcCorrection(
ii, das, ch, Offset_t, a, r, startTime,
sampleRate, staSpc, appClockDriftCorr, redVel)
# + 1.1/sampleRate: add a little more than the
# time of one sample
traces = self.fio.cut(
das, startTime-corr[0]/1000.,
stopTime-corr[0]/1000. + 1.1/sampleRate,
ch, sampleRate, apply_time_correction=False)
trace = ph5api.pad_traces(traces)
if trace.nsamples == 0:
v = (ev['eventId'], PH5View.selectedArray['arrayId'],
das, r['id_s'], ch)
noDataItem = "Event:%s Array:%s Das: %s " + \
"Station: %s Chan: %s"
noDataItem %= v
if noDataItem not in info['noDataList']:
info['noDataList'].append(noDataItem)
continue
if sr is None:
sr = trace.sample_rate
# slen = int ((secs * sr) + 0.5)
self.metadata.append(None)
info['distanceOffset'].append(None)
self.getMetadata(info, lenlist, ii, trace, a, ev, r, ch, das,
Offset_t, corr, staSpc, orgStartT, startTime)
trace.data = np.array(trace.data, dtype=np.float32)
self.data[ch].append(trace.data)
info['LEN'][ch].append(trace.nsamples)
self.metadata[ii]['minmax'] = (np.amin(trace.data),
np.amax(trace.data))
if statusBar is not None and line_seq % 10 == 0:
statusMsg = beginMsg + ": reading data and metadata: " + \
"%s events"
statusBar.showMessage(statusMsg % line_seq)
except PH5ReaderError as e:
raise e
for i in lenlist['less'][ch]:
replace = np.zeros(info['numOfSamples'])
if info['LEN'][ch][i] != 0:
replace[:info['LEN'][ch][i]] = self.data[ch][i]
self.data[ch][i] = replace
# use fixed offset => offset always increase
info['up'] = True
info['abnormal'] = []
info['quickRemoved'] = {ch: {}}
info['deepRemoved'] = {ch: []}
info['numOfDataStations'] = info['numOfStations'] = len(self.data[ch])
info['zerosList'] = []
info['minOffset'] = self.minOffset
info['sumD'] = self.maxOffset - self.minOffset
return info
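The fix-up loop above zero-pads any trace that came back shorter than `info['numOfSamples']` so all channels line up. The same idea as a standalone helper (`pad_short_traces` is a hypothetical name, not part of the PH5 reader):

```python
import numpy as np

def pad_short_traces(traces, num_samples):
    """Zero-pad each trace that is shorter than num_samples, mirroring
    the 'lenlist' fix-up loop above (illustrative only)."""
    out = []
    for t in traces:
        if len(t) < num_samples:
            padded = np.zeros(num_samples, dtype=np.float32)
            padded[:len(t)] = t       # keep the real samples, zero the tail
            t = padded
        out.append(t)
    return out

traces = [np.array([1.0, 2.0], dtype=np.float32),
          np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)]
padded = pad_short_traces(traces, 4)
```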
###############################################
# def readData_shotGather
# Author: <NAME>
# Updated: 201701
# to populate data for shotGather
def readData_shotGather(
self, orgStartT, offset, timeLen, staSpc,
appClockDriftCorr, redVel, # corrections
PH5View, statusBar=None, beginMsg=None):
'''
Read trace data based on given start and stop epoch,
arrays, and channels.
Sets: self.metadata
Returns: info
'''
sampleRate = PH5View.selectedArray['sampleRate']
if statusBar is not None and beginMsg is not None:
statusMsg = beginMsg + ": preparing event table"
statusBar.showMessage(statusMsg)
# For each event, loop through each station,
# each channel in the requested array and extract trace data.
self.data = {}
info = {}
# info['maxP2P'] = -1 * (2**31 - 1)
info['zeroDOffsetIndex'] = None
info['LEN'] = {}
info['quickRemoved'] = {}
info['deepRemoved'] = {}
info['numOfSamples'] = 0
Offset_t = {}
self.minOffset = None
self.maxOffset = None
a = self.ARRAY[0]  # currently only one array can be selected at a time
rows = self.fio.Array_t[a]['byid']
order = self.fio.Array_t[a]['order']
listOfStations = sorted(PH5View.selectedArray['seclectedStations'])
self.metadata = [None] * len(listOfStations)
info['distanceOffset'] = [None] * len(listOfStations)
if orgStartT is not None:
startTime = orgStartT + offset
stopTime = startTime + timeLen
info['noDataList'] = []
listOfDataStations = []
lenlist = {'less': {}, 'maybeless': {}}
# If there is an associated event calculate offset distances
for ev in PH5View.selectedEvents:
Offset_t[a] = self.fio.calc_offsets(
a, ev['eventId'], ev['eventName'])
if orgStartT is None:
startTime = ev['eStart'] + offset
stopTime = startTime + timeLen
sr = None
count = 0
for o in order:
for ch in self.CHANNEL:
if ch not in self.data.keys():
self.data[ch] = [[] for _ in listOfStations]  # separate lists, not N aliases of one
info['LEN'][ch] = [0] * len(listOfStations)
lenlist['less'][ch] = []
lenlist['maybeless'][ch] = []
info['quickRemoved'][ch] = {}
info['deepRemoved'][ch] = []
for r in rows[o][ch]:
try:
if r['id_s'] not in \
PH5View.selectedArray['seclectedStations']:
raise PH5ReaderError("Continue")
ii = listOfStations.index(r['id_s'])
if not ph5api.is_in(
r['deploy_time/epoch_l'],
r['pickup_time/epoch_l'],
startTime, stopTime):
raise PH5ReaderError("Continue")
das = r['das/serial_number_s']
corr = self.calcCorrection(
ii, das, ch, Offset_t, a, r, startTime,
sampleRate, staSpc, appClockDriftCorr, redVel)
# + 1.1/sampleRate: add a little more
# than the time of one sample
traces = self.fio.cut(
das, startTime-corr[0]/1000.,
stopTime-corr[0]/1000. + 1.1/sampleRate,
ch, sampleRate, apply_time_correction=False)
trace = ph5api.pad_traces(traces)
if trace.nsamples == 0:
v = (ev['eventId'],
PH5View.selectedArray['arrayId'],
das, r['id_s'], ch)
noDataItem = \
"Event:%s Array:%s Das: %s " + \
"Station: %s Chan: %s"
noDataItem %= v
if noDataItem not in info['noDataList']:
info['noDataList'].append(noDataItem)
raise PH5ReaderError("Continue")
if sr is None:
sr = trace.sample_rate
# slen = int ((secs * sr) + 0.5)
self.getMetadata(info, lenlist, ii, trace, a, ev,
r, ch, das, Offset_t, corr,
staSpc, orgStartT, startTime)
trace.data = np.array(trace.data,
dtype=np.float32)
if len(self.data[ch][ii]) < trace.nsamples:
self.data[ch][ii] = trace.data
info['LEN'][ch][ii] = trace.nsamples
if r['id_s'] not in listOfDataStations:
listOfDataStations.append(r['id_s'])
if 'minmax' not in self.metadata[ii].keys():
self.metadata[ii]['minmax'] = \
(np.amin(trace.data),
np.amax(trace.data))
else:
minval = min(
self.metadata[ii]['minmax'][0],
np.amin(trace.data))
maxval = max(
self.metadata[ii]['minmax'][1],
np.amax(trace.data))
self.metadata[ii]['minmax'] = \
(minval, maxval)
count += 1
if statusBar is not None and count % 10 == 0:
statusMsg = beginMsg + \
": reading data and " + \
"metadata: %s station-channels"
statusBar.showMessage(statusMsg % count)
except PH5ReaderError as e:
if str(e) == "Continue":
if r['id_s'] in listOfStations:
lenlist['less'][ch].append(ii)
else:
raise e
for ch in self.CHANNEL:
for i in lenlist['less'][ch]:
replace = np.zeros(info['numOfSamples'])
if info['LEN'][ch][i] != 0:
replace[:info['LEN'][ch][i]] = self.data[ch][i]
self.data[ch][i] = replace
# distance offset tends to increase
info['up'] = True
# list of stations that have distance offset different from the trend
info['abnormal'] = []
if staSpc is None and orgStartT is not None:
up = []
down = []
for i in range(1, len(listOfDataStations)):
staId = listOfStations.index(listOfDataStations[i])
pStaId = listOfStations.index(listOfDataStations[i-1])
if info['distanceOffset'][staId] > \
info['distanceOffset'][pStaId]:
up.append((pStaId, staId))
else:
down.append((pStaId, staId))
checkedList = down
if len(down) > len(up):
checkedList = up
info['up'] = False
for a1, a2 in checkedList:
if a1 not in info['abnormal']:
info['abnormal'].append(a1)
if a2 not in info['abnormal']:
info['abnormal'].append(a2)
info['numOfStations'] = len(listOfStations)
info['minOffset'] = self.minOffset
info['sumD'] = self.maxOffset - self.minOffset
info['numOfDataStations'] = len(listOfDataStations)
return info
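The trend check above counts increasing versus decreasing offset pairs, takes the minority direction as the exceptions, and flags the stations involved as abnormal. A self-contained sketch of that logic (function name is illustrative):

```python
def find_offset_trend(offsets):
    """Classify consecutive offset pairs as up or down; the minority pairs
    are the exceptions, and their station indices are flagged abnormal."""
    up, down = [], []
    for i in range(1, len(offsets)):
        if offsets[i] > offsets[i - 1]:
            up.append((i - 1, i))
        else:
            down.append((i - 1, i))
    trend_up, minority = True, down
    if len(down) > len(up):
        trend_up, minority = False, up
    abnormal = []
    for a, b in minority:
        if a not in abnormal:
            abnormal.append(a)
        if b not in abnormal:
            abnormal.append(b)
    return trend_up, abnormal
```

For `[0, 10, 5, 20, 30]` the single decreasing pair is the exception, so stations 1 and 2 are flagged while the overall trend stays increasing.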
############################################
# def getMetadata
# Author: <NAME>
# Updated: 201803
def getMetadata(self, info, lenlist, ii, trace, a, ev, r, ch,
das, Offset_t, corr, staSpc, orgStartT, startTime):
'''
Sets: self.metadata[ii]
'''
if self.metadata[ii] is not None:
self.metadata[ii]['chans'].append(r['channel_number_i'])
else:
self.metadata[ii] = {}
self.metadata[ii]['totalCorr'] = corr[0]
self.metadata[ii]['clockDriftCorr'] = corr[1]
self.metadata[ii]['redVelCorr'] = corr[2]
self.metadata[ii]['absStartTime'] = \
timedoy.epoch2passcal(startTime)
self.metadata[ii]['arrayId'] = a[-3:]
self.metadata[ii]['stationId'] = r['id_s']
self.metadata[ii]['eventId'] = ev['eventId'] if ev is not None \
else None
self.metadata[ii]['dasSerial'] = das
self.metadata[ii]['chans'] = [r['channel_number_i']]
self.metadata[ii]['desc'] = r['description_s']
self.metadata[ii]['lat'] = r['location/Y/value_d']
after processing. There may also be additional results attributes that hold
intermediate processing steps results. Each Block references the previous
step to get the input data for the current processing step.
Each Block has a 'Chain' object that performs the actual scientific
algorithms on the data. Chain objects are created at run-time and are not
saved. The Block, Tab, and Chain objects are how Analysis implements the
Model-View-Controller paradigm. Only the Block objects are saved (when the
user chooses File->Save). All other objects are recreated at run-time.
"""
# The XML_VERSION enables us to change the XML output format in the future
XML_VERSION = "1.2.0"
def __init__(self, attributes=None):
#------------------------------------------------------------
# Set up elements of the processing chain for the data set
#
# Each dataset tab is located in the "outer" notebook which
# is used to organize the one or more data sets that are open
# in the application.
#
# - The blocks list contain objects that correspond to the
# processing tabs in the dataset (or "inner") notebook. A
# "block" contains a "chain" object that contains
# the code to run the functor chain of processing steps for
# a given block. The functor chain is dynamically allocated
# for each call depending on the widget settings in the tab.
#
#------------------------------------------------------------
self.id = util_misc.uuid()
# dataset_filename is only set on-the-fly (as opposed to being set
# via inflate()). It's only set when the current dataset is read in,
# or saved to, a VIFF file.
self.dataset_filename = ''
self.behave_as_preset = False
# preset_filename is only set if apply_preset() is called; it exists for
# provenance only. We do NOT track whether settings change manually afterwards.
self.preset_filename = ''
self.user_prior = mrs_user_prior.UserPrior()
# Create default blocks. We replace these as needed. Note that the
# order in which these are added drives the order of the blocks and
# tabs in the application as a whole.
self.blocks = collections.OrderedDict()
for name in ("raw", "prep", "spectral", "fit", "quant"):
self._create_block( (name, DEFAULT_BLOCK_CLASSES[name]) )
if attributes is not None:
self.inflate(attributes)
# Update the user prior spectrum and fid caches
self.user_prior.basis.update(self)
@property
def raw_shape(self):
"""Raw data dimensionality. It's read only."""
if self.blocks:
return self.blocks["raw"].data_shape
return None
@property
def raw_dims(self):
"""Raw data dimensionality. It's read only."""
if self.blocks:
if self.blocks["prep"].is_identity:
return self.blocks["raw"].dims
else:
return self.blocks["prep"].dims
return None
@property
def spectral_dims(self):
"""Spectral data dimensionality. It's read only."""
if self.blocks:
spectral_dims = self.raw_dims
zfmult = self.zero_fill_multiplier
if zfmult:
spectral_dims[0] *= zfmult
return spectral_dims
return None
@property
def spectral_hpp(self):
"""Spectral data hertz-per-point (sw / spectral points). It's read only."""
if self.blocks:
spectral_dims = self.spectral_dims
if spectral_dims:
return self.sw / spectral_dims[0]
return None
@property
def sw(self):
"""Raw/All data sweep width. It's read only."""
return self.blocks["raw"].sw if self.blocks else None
@property
def raw_hpp(self):
"""Raw/All data hertz-per-point (sw / raw points). It's read only."""
return self.sw / self.raw_dims[0] if self.blocks else None
@property
def frequency(self):
"""Raw/All data center frequency. It's read only."""
return self.blocks["raw"].frequency if self.blocks else None
@property
def resppm(self):
"""Raw/All data resonance PPM value. It's read only."""
return self.blocks["raw"].resppm if self.blocks else None
@property
def echopeak(self):
""" Acquisition echo peak (0.0 for FID data, 0.5 for full echo) """
return self.blocks["raw"].echopeak if self.blocks else None
@property
def is_fid(self):
"""Boolean. It's read only."""
return self.blocks["raw"].is_fid if self.blocks else None
@property
def seqte(self):
"""Acquisition echo time in msec. It's read only."""
return self.blocks["raw"].seqte if self.blocks else None
@property
def seqtr(self):
"""Acquisition repetition time in msec. It's read only."""
return self.blocks["raw"].seqtr if self.blocks else None
@property
def nucleus(self):
"""Acquisition nucleus. It's read only."""
return self.blocks["raw"].nucleus if self.blocks else None
@property
def zero_fill_multiplier(self):
"""Spectral dimension zero fill factor. It's read only."""
return self.blocks["spectral"].set.zero_fill_multiplier if self.blocks else None
@property
def phase_1_pivot(self):
"""Spectral phase 1 pivot location in ppm. It's read only."""
return self.blocks["spectral"].set.phase_1_pivot if self.blocks else None
@property
def auto_b0_range_start(self):
""" PPM start range for automated B0 shift routine searches """
return self.user_prior.auto_b0_range_start
@property
def auto_b0_range_end(self):
""" PPM end range for automated B0 shift routine searches """
return self.user_prior.auto_b0_range_end
@property
def auto_phase0_range_start(self):
""" PPM start range for automated Phase0 shift routine searches """
return self.user_prior.auto_phase0_range_start
@property
def auto_phase0_range_end(self):
""" PPM end range for automated Phase0 shift routine searches """
return self.user_prior.auto_phase0_range_end
@property
def auto_phase1_range_start(self):
""" PPM start range for automated Phase1 shift routine searches """
return self.user_prior.auto_phase1_range_start
@property
def auto_phase1_range_end(self):
""" PPM end range for automated Phase1 shift routine searches """
return self.user_prior.auto_phase1_range_end
@property
def auto_phase1_pivot(self):
""" PPM value at which automated Phase1 routine rotates phase """
return self.user_prior.auto_phase1_pivot
@property
def metinfo(self):
"""
Returns the Metinfo object stored in Dataset. This provides info about
literature values of metabolites, such as concentrations and spins that
are used in the fitting initial values routines.
"""
return self.user_prior.metinfo
@property
def user_prior_summed_spectrum(self):
"""
Returns Numpy array with frequency spectrum created from the UserPrior
values in that dialog. This is the model spectrum used in the automated
B0 and Phase routines. This spectrum matches the spectral resolution of
the data. Obtained from UserPrior object in Dataset. Read only!
"""
return self.user_prior.basis.get_spectrum_sum(self)
@property
def all_voxels(self):
""" return list of all voxel indices based on spectral_dims """
dims = self.spectral_dims
voxels = []
for k in range(dims[3]):
for j in range(dims[2]):
for i in range(dims[1]):
voxels.append((i, j, k))
return voxels
@property
def measure_time(self):
"""
Returns a Numpy array of floating point values. These values are the
acquisition times for each FID in the data array. Each value is the
time since midnight in seconds at which the acquisition started. Most
use cases will normalize this value into 0 for the first FID and some
delta seconds from 0 for each subsequent FID.
This function gathers measure_time values from the Prep block since
the number of FIDs may change if we 'massage' the data. Eg. if we
average every other or every 3/4/N FIDs to improve SNR. In this case
the new measure_time array is calculated in the Prep block/functor. If
no measure_time attribute is available in Prep block, we look for one
in the Raw block. If there is none in Raw we default to a 'range' of
integer steps.
Eg. For data taken from DICOM files, the 'measure_time' tag is used.
This tag stores the time since midnight in an HHMMSS.fraction string.
We convert this string to seconds in float format.
Eg. At this time we don't have other examples, but anyone writing code
for this attribute could just start from 0 and increment as desired.
Any subsequent normalization would subtract from the first point and
it would work just fine.
"""
if self.blocks:
if self.blocks["prep"].is_identity:
block = self.blocks["raw"]
try:
val = block.measure_time
except AttributeError:
val = list(range(self.raw_dims[1]))
else:
block = self.blocks["prep"]
try:
val = block.measure_time
except AttributeError:
block = self.blocks["raw"]
try:
val = block.measure_time
except AttributeError:
val = list(range(self.raw_dims[1]))
return val
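The docstring above describes converting a DICOM 'measure_time' tag, an HHMMSS.fraction string for the time since midnight, into float seconds. A minimal sketch of that conversion (the helper name is hypothetical, not part of this codebase):

```python
def hhmmss_to_seconds(tag):
    """Convert an HHMMSS.fraction time-since-midnight string (as found in
    DICOM 'measure_time' tags) to float seconds. Hypothetical helper."""
    hours, minutes = int(tag[0:2]), int(tag[2:4])
    seconds = float(tag[4:])
    return hours * 3600 + minutes * 60 + seconds
```

Normalization then just subtracts the first FID's value from every subsequent one.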
@property
def prior_list_unique(self):
"""
Get the list of metabolites in the prior set as unique abbreviations.
This makes it easier to check whether a fitting condition can be set.
"""
metinfo = self.user_prior.metinfo
prior_list = self.blocks['fit'].set.prior_list
prior_list_unique = [metinfo.get_abbreviation(item.lower()) for item in prior_list]
return prior_list_unique
@property
def minppm(self):
return self.pts2ppm(self.spectral_dims[0])
@property
def maxppm(self):
return self.pts2ppm(0)
@property
def minmaxppm(self):
return self.minppm, self.maxppm
def ppm2pts(self, val, acq=False, rel=False):
"""
Returns the point index along spectrum for given ppm value.
- Assumes center point <--> resppm for rel False
- Assumes center point <--> 0.0 ppm for rel True
"""
dim0 = self.raw_dims[0] if acq else self.spectral_dims[0]
hpp = self.raw_hpp if acq else self.spectral_hpp
pts = self.frequency*val/hpp if rel else (dim0/2) - (self.frequency*(val-self.resppm)/hpp)
pts = np.where(pts > 0, pts, 0)
return pts
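`ppm2pts` maps a ppm value onto a point index by offsetting from the center point, which corresponds to `resppm`, then clamping below-zero results to 0. A standalone version with made-up scan parameters (2048 points, 2000 Hz sweep width, 127.7 MHz; none of these numbers come from the source):

```python
import numpy as np

def ppm2pts_sketch(val, dim0=2048, hpp=2000.0 / 2048, frequency=127.7,
                   resppm=4.7, rel=False):
    """Standalone copy of the mapping above, with illustrative defaults."""
    pts = frequency * val / hpp if rel \
        else (dim0 / 2) - (frequency * (val - resppm) / hpp)
    return np.maximum(pts, 0)   # clamp negative indices, like np.where above
```

With these numbers the center point (index 1024) corresponds to `resppm`, and larger ppm values map to smaller point indices.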
def ppm2hz(self, val, acq=False, rel=False):
"""
Returns the absolute number of hz away from 0.0 ppm based on an assumed ppm
value for the center
import os
import sys
import re
import json
import math
from difflib import SequenceMatcher
import plotly.graph_objects as go
import requests
import networkx as nx
import pandas as pd
import numpy as np
import scipy
import matplotlib
import matplotlib.pyplot as plt
from ipywidgets import interactive, HBox, VBox
import ipywidgets as widgets
from IPython.display import HTML, display
import tabulate
from dotenv import dotenv_values
from domaintools import API
from configparser import ConfigParser
import itertools
# load REST API creds from .env file
dcat_config = dotenv_values(".env")
def show_iris_query_ui(domain_list_ui, search_hash_ui):
lookup_ui = widgets.VBox([
widgets.Label(value="Enter a newline-delimited list of domains to look up (no commas, no quotes)"),
domain_list_ui,
widgets.Label(value="Or..."),
widgets.Label(value="Enter an Iris search hash to look up"),
search_hash_ui,
])
return lookup_ui
def clean_domain_list(domain_list_ui):
# remove any quotes, spaces, or defanging square brackets
full_domain_list = (domain_list_ui.value.strip()
.replace(' ', '').replace('"', '').replace("'", "")
.replace('[', '').replace(']', ''))
# replace commas with new lines
full_domain_list = full_domain_list.replace(",", "\n")
# update the widget
domain_list_ui.value = full_domain_list
# split into array
return full_domain_list.split("\n")
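`clean_domain_list` normalizes pasted input by stripping spaces, quotes, and defanging square brackets, then splitting on commas/newlines. The same steps as a pure function, decoupled from the widget (name is illustrative):

```python
def clean_domains(raw):
    """Pure-function mirror of clean_domain_list above (illustrative)."""
    s = raw.strip().replace(' ', '').replace('"', '').replace("'", "")
    s = s.replace('[', '').replace(']', '')   # strip defanging brackets
    return s.replace(',', '\n').split('\n')
```

So a defanged, comma-separated paste such as `example[.]com, "test.com"` comes out as a clean list of plain domains.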
def get_rest_api_creds(api_username_ui, api_pw_ui):
api_username = api_username_ui.value
if len(api_username) == 0:
api_username = dcat_config["IRIS_API_USERNAME"]
api_key = api_pw_ui.value
if len(api_key) == 0:
api_key = dcat_config["IRIS_API_KEY"]
return api_username, api_key
def query_iris_rest_api(api_username_ui, api_pw_ui, domain_list_ui, search_hash_ui):
api_username, api_key = get_rest_api_creds(api_username_ui, api_pw_ui)
api = API(api_username, api_key)
if len(domain_list_ui.value) > 0:
# split list of domains into groups of 100 because of API restrictions
results = []
full_domain_list = clean_domain_list(domain_list_ui)
max_domains = 100
start = 0
end = max_domains
for _ in range(math.ceil(len(full_domain_list) / max_domains)):
# slice out max domains to query
partial_domain_list = full_domain_list[start:end]
# build query string
domain_list = ",".join(partial_domain_list)
iris_query = {"domain": domain_list}
# query rest api
print(f"...querying Iris REST API for {len(partial_domain_list)} domains")
iris_results = api.iris_investigate(**iris_query)
# build up the set of return domain objects
results += iris_results.response().get('results', {})
# update slice indexes
start = end
end += max_domains
return results
elif len(search_hash_ui.value) > 0:
iris_query = {"search_hash": search_hash_ui.value}
iris_results = api.iris_investigate(**iris_query)
# print(iris_results.status)
iris_results = iris_results.response().get('results', {})
return iris_results
else:
print(
"Domain List and Search Hash text boxes are empty. Please enter either a list of domains or search hash to lookup")
raise Exception("Domain List and Search Hash text boxes are empty")
class Config(object):
""" Little helper class to hold all the config values"""
class Domain(object):
""" Little helper class to hold the domain name and risk score
"""
def __init__(self, domain_json):
self.json = domain_json
self.name = domain_json["domain"]
self.risk_score = domain_json["domain_risk"]['risk_score']
self.pivots = {}
self.label = f"{self.name} ({self.risk_score})"
def __str__(self):
return f"name: {self.name}, risk: {self.risk_score}"
def __repr__(self):
return str(self)
class DomainRelationship(object):
def __init__(self, weight: float, category: str):
# this is the maximum weight that an edge can have.
# Adjust this if you want to play around with stronger edge weights
self.max_weight = 5.0
self.weight = weight
self.categories = [category]
def __str__(self):
return f"weight: {self.weight}, categories: {self.categories}"
def __repr__(self):
return str(self)
def add(self, weight: float, category: str):
""" Note: certain pivot categories can be added more than once for two domains,
e.g. IP address and name server. For example, two domains could share the same
set of 5 IP addresses. For now the weights are simply summed when there is more
than one pivot of the same category, but maybe we need a different strategy.
Since each IP carries multiple pivots (ip address, country code, asn, isp),
5 shared IPs between two domains would give a weight of 4 * 5 * pivot_weight,
which might over-amplify the edge strength.
"""
if category not in self.categories:
# this helps by not overly boosting the edge weight if two domains share
# multiple IP addresses
self.weight += weight
self.weight = min(self.weight, self.max_weight)
self.categories.append(category)
def get_description(self):
return "<br>".join(sorted(self.categories))
class Pivot(object):
def __init__(self, category, value, global_count):
self.category = category
self.value = value
self.global_count = global_count
self.domains = set()
# def union(self, other: "Pivot"):
# self.domains.union(other.domains)
def label(self):
# return f"category: {self.category}: value: {self.value} ({self.global_count})"
return f"{self.category}: {self.value} ({self.global_count})"
def __str__(self):
return f"category: {self.category}, " \
f"value: {self.value}, " \
f"global_count: {self.global_count}, " \
f"domains: {self.domains}"
def __repr__(self):
return str(self)
# build graph
def get_edge_count(n: int):
# for a complete graph, the edge count is: n(n-1)/2
return n * (n - 1) / 2
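`get_edge_count` uses the complete-graph formula n(n-1)/2. A quick cross-check against explicitly enumerating every pair of nodes:

```python
import itertools

def get_edge_count(n):
    # a complete graph on n nodes has n * (n - 1) / 2 edges
    return n * (n - 1) / 2

# the formula agrees with counting the 2-combinations of n nodes
for n in range(2, 8):
    assert get_edge_count(n) == len(list(itertools.combinations(range(n), 2)))
```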
# def pivot_on_matching_substrings(graph: "Graph", domains: dict, config: "Config"):
# """Create pivots between domains that share a common substring of
# `config.longest_common_substring` chars long.
#
# Note: SequenceMatcher has some known issues with not finding the longest match in very long
# strings, but does a pretty good job with shorter strings such as domain names.
# https://stackoverflow.com/questions/18715688/find-common-substring-between-two-strings
# """
# domain_names = list(domains.keys())
# for x in range(len(domain_names)):
# domain1 = domain_names[x]
# string1 = domain1.split('.')[0]
# # pull out substrings to ignore
# if config.ignore_substrings and len(config.ignore_substrings) > 0:
# for ignore in config.ignore_substrings:
# string1 = string1.replace(ignore, "")
# for y in range(x + 1, len(domain_names)):
# domain2 = domain_names[y]
# string2 = domain2.split('.')[0]
# # pull out substrings to ignore
# if config.ignore_substrings and len(config.ignore_substrings) > 0:
# for ignore in config.ignore_substrings:
# string2 = string2.replace(ignore, "")
# # find the longest common substring between the two domains
# matcher = SequenceMatcher(None, string1, string2, False)
# match = matcher.find_longest_match(0, len(string1), 0, len(string2))
# longest_match = string1[match.a: match.a + match.size]
# # check if the matching substring is long enough
# if len(longest_match) >= config.longest_common_substring:
# # add pivots
# _append_value_to_pivot(
# graph,
# "longest_common_substring",
# longest_match, None,
# domains[domain1], config)
# _append_value_to_pivot(
# graph,
# "longest_common_substring",
# longest_match, None,
# domains[domain2], config)
def build_pivot_graph(iris_results: list, config: "Config"):
""" Main workflow function that takes the results from an Iris Investigate query and
builds the graph object of how each of the domains in the query are connected to each other"""
# parse the Iris API Result to build the pivot data structure
graph, domains = init_local_pivot_graph(iris_results, config)
print(len(graph.nodes))
print()
# normalize registrar pivots (see note in function comments)
# if "registrar" in pivot_categories and config.normalize_registrars:
# normalize_similar_registrars(pivot_categories["registrar"])
# create pivots for longest common substrings
# pivot_on_matching_substrings(graph, domains, config)
# print(len(graph.nodes))
# print()
# trim pivots from graph that have less than the set count threshold or contain all domains
# graph = trim_pivots(graph, len(domains), config)
# print(len(graph.nodes))
# print()
# trim unconnected domains and domains with only a create date pivot
# TURBO: I'm not sure yet how to do this
# trimmed_unconnected_domains = trim_unconnected_domains(graph, domains, config)
# print(len(graph.nodes))
# print()
# trimmed_create_date_domains = trim_domains_with_only_create_date_pivot(graph, pivot_categories)
# print(len(graph.nodes))
# print()
# print(f"{len(trimmed_unconnected_domains)} "
# f"domains trimmed because they were not connected to other domains")
# print(f"{len(trimmed_create_date_domains)} "
# f"domains trimmed because create_date was the only pivot")
print(f"{len(graph.nodes)} nodes in graph structure \n")
# build the graph structure based on the domain pivots
graph = build_local_pivot_graph(graph, domains, config)
return (graph, domains,
{
# "unconnected": trimmed_unconnected_domains,
# "create_date": trimmed_create_date_domains
}
)
def get_pivots(data_obj, name, return_data=None, count=0, pivot_threshold=500):
"""
Does a deep dive through a data object to check count vs pivot threshold.
Args:
data_obj: Either a list or dict that needs to check pivot count
name: pivot category name
return_data: Holds data to return once we reach the end of the data_obj
count: Lets us track to know when we are finished with the data_obj
pivot_threshold: Threshold to include as a pivot.
"""
if return_data is None:
return_data = []
count += 1
if isinstance(data_obj, dict) and len(data_obj):
temp_name = name
for k, v in data_obj.items():
if isinstance(data_obj[k], (dict, list)):
name = "{}_{}".format(name, k)
temp_data = get_pivots(
data_obj[k], name, return_data, count, pivot_threshold
)
if temp_data:
return_data.append([name[1:].upper().replace("_", " "), temp_data])
name = temp_name
if "count" in data_obj and (1 < data_obj["count"] < pivot_threshold):
return data_obj["value"], data_obj["count"]
elif isinstance(data_obj, list) and len(data_obj):
for index, item in enumerate(data_obj):
temp_data = get_pivots(item, name, return_data, count, pivot_threshold)
if temp_data:
if isinstance(temp_data, list):
for x in temp_data:
return_data.append(x)
elif isinstance(temp_data, tuple):
return_data.append([name[1:].upper().replace("_", " "), temp_data])
count -= 1
if count:
return
else:
return return_data
def build_infra_graph(iris_results: list, config: "Config"):
graph = nx.Graph()
pv_dict = {}
config.domain_risk_dict = {}
for domain in iris_results:
if domain["domain"] not in config.domain_risk_dict:
config.domain_risk_dict[domain["domain"]] = domain.get("domain_risk", {}).get("risk_score", 0)
# GET PIVOTS
nps = get_pivots(domain, "", pivot_threshold=config.pivot_threshold)
pv_list = []
for p in nps:
if p[0] not in config.exclude_list:
pv_list.append("{}_{}".format(p[0], p[1][0]))
# CREATE POSSIBLE NODES AND POSSIBLE EDGES
x = itertools.combinations(pv_list, 2)
# rsrcfork/api.py
import collections
import collections.abc
import enum
import os
import struct
import typing
# The formats of all following structures are as described in the Inside Macintosh book (see module docstring).
# Signedness and byte order of the integers is never stated explicitly in IM.
# All integers are big-endian, as this is the native byte order of the 68k and PowerPC processors used in old Macs.
# Almost all integers are non-negative byte counts or offsets, so it only makes sense for them to be unsigned. Sometimes the number -1 is used as a placeholder value, it should be considered equivalent to its two's complement value interpreted as unsigned (i. e. all bits set). The only exception is the resource ID field, which is signed.
# Resource file header, found at the start of the resource file.
# 4 bytes: Offset from beginning of resource file to resource data. Basically guaranteed to be 0x100.
# 4 bytes: Offset from beginning of resource file to resource map.
# 4 bytes: Length of resource data.
# 4 bytes: Length of resource map.
# 112 bytes: System-reserved data. In practice, this is usually all null bytes.
# 128 bytes: Application-specific data. In practice, this is usually all null bytes.
STRUCT_RESOURCE_HEADER = struct.Struct(">IIII112s128s")
# Header for a single resource data block, found immediately before the resource data itself.
# 4 bytes: Length of following resource data.
STRUCT_RESOURCE_DATA_HEADER = struct.Struct(">I")
# Header for the resource map, found immediately after the last resource data block. This position is also indicated in the header.
# 16 bytes: Reserved for copy of resource header (in memory). Should be 0 in the file.
# 4 bytes: Reserved for handle to next resource map to be searched (in memory). Should be 0 in file.
# 2 bytes: Reserved for file reference number (in memory). Should be 0 in file.
# 2 bytes: Resource file attributes. Combination of ResourceFileAttrs flags, see below.
# 2 bytes: Offset from beginning of resource map to type list.
# 2 bytes: Offset from beginning of resource map to resource name list.
STRUCT_RESOURCE_MAP_HEADER = struct.Struct(">16x4x2xHHH")
# Header for the type list, found immediately after the resource map header.
# 2 bytes: Number of resource types in the map minus 1.
STRUCT_RESOURCE_TYPE_LIST_HEADER = struct.Struct(">H")
# A single type in the type list.
# 4 bytes: Resource type. This is usually a 4-character ASCII mnemonic, but may be any 4 bytes.
# 2 bytes: Number of resources of this type in the map minus 1.
# 2 bytes: Offset from beginning of type list to reference list for resources of this type.
STRUCT_RESOURCE_TYPE = struct.Struct(">4sHH")
# A single resource reference in a reference list. (A reference list has no header, and neither does the list of reference lists.)
# 2 bytes: Resource ID.
# 2 bytes: Offset from beginning of resource name list to length of resource name, or -1 (0xffff) if none.
# 1 byte: Resource attributes. Combination of ResourceAttrs flags, see below. (Note: packed into 4 bytes together with the next 3 bytes.)
# 3 bytes: Offset from beginning of resource data to length of data for this resource. (Note: packed into 4 bytes together with the previous 1 byte.)
# 4 bytes: Reserved for handle to resource (in memory). Should be 0 in file.
STRUCT_RESOURCE_REFERENCE = struct.Struct(">hHI4x")
# Header for a resource name, found immediately before the name itself. (The name list has no header.)
# 1 byte: Length of following resource name.
STRUCT_RESOURCE_NAME_HEADER = struct.Struct(">B")
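B")">
The header layouts above can be sanity-checked against a synthetic buffer. This is a standalone sketch (not part of the module) reusing the same struct format string:

```python
import struct

# Same layout as STRUCT_RESOURCE_HEADER above: data offset, map offset,
# data length, map length, then system- and application-reserved areas.
RESOURCE_HEADER = struct.Struct(">IIII112s128s")

# Build a synthetic 256-byte header: data at 0x100, map right after 4 data bytes.
raw = RESOURCE_HEADER.pack(0x100, 0x104, 4, 30, b"\x00" * 112, b"\x00" * 128)

data_offset, map_offset, data_len, map_len, _sys, _app = RESOURCE_HEADER.unpack(raw)
assert (data_offset, map_offset, data_len, map_len) == (0x100, 0x104, 4, 30)
```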
class ResourceFileAttrs(enum.Flag):
"""Resource file attribute flags. The descriptions for these flags are taken from comments on the map*Bit and map* enum constants in <CarbonCore/Resources.h>."""
mapResourcesLocked = 1 << 15 # "Resources Locked" (undocumented, but available as a checkbox in ResEdit)
_BIT_14 = 1 << 14
_BIT_13 = 1 << 13
_BIT_12 = 1 << 12
_BIT_11 = 1 << 11
_BIT_10 = 1 << 10
_BIT_9 = 1 << 9
mapPrinterDriverMultiFinderCompatible = 1 << 8 # "Printer Driver MultiFinder Compatible" (undocumented, but available as a checkbox in ResEdit)
mapReadOnly = 1 << 7 # "is this file read-only?", "Resource file read-only"
mapCompact = 1 << 6 # "Is a compact necessary?", "Compact resource file"
mapChanged = 1 << 5 # "Is it necessary to write map?", "Write map out at update"
_BIT_4 = 1 << 4
_BIT_3 = 1 << 3
_BIT_2 = 1 << 2
_BIT_1 = 1 << 1
_BIT_0 = 1 << 0
class ResourceAttrs(enum.Flag):
"""Resource attribute flags. The descriptions for these flags are taken from comments on the res*Bit and res* enum constants in <CarbonCore/Resources.h>."""
resSysRef = 1 << 7 # "reference to system/local reference" (only documented as resSysRefBit = 7 in <CarbonCore/Resources.h>)
resSysHeap = 1 << 6 # "In system/in application heap", "System or application heap?"
resPurgeable = 1 << 5 # "Purgeable/not purgeable", "Purgeable resource?"
resLocked = 1 << 4 # "Locked/not locked", "Load it in locked?"
resProtected = 1 << 3 # "Protected/not protected", "Protected?"
resPreload = 1 << 2 # "Read in at OpenResource?", "Load in on OpenResFile?"
resChanged = 1 << 1 # "Existing resource changed since last update", "Resource changed?"
resCompressed = 1 << 0 # "indicates that the resource data is compressed" (only documented in https://github.com/kreativekorp/ksfl/wiki/Macintosh-Resource-File-Format)
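As a standalone illustration of how a raw attribute byte decomposes under `enum.Flag` (the class body mirrors the definitions above):

```python
import enum

class ResourceAttrs(enum.Flag):
    resSysRef = 1 << 7
    resSysHeap = 1 << 6
    resPurgeable = 1 << 5
    resLocked = 1 << 4
    resProtected = 1 << 3
    resPreload = 1 << 2
    resChanged = 1 << 1
    resCompressed = 1 << 0

# 0x24 == 0b00100100: bits 5 and 2 are set.
attrs = ResourceAttrs(0x24)
assert attrs == ResourceAttrs.resPurgeable | ResourceAttrs.resPreload
assert ResourceAttrs.resLocked not in attrs
```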
class Resource(object):
"""A single resource from a resource file."""
__slots__ = ("resource_type", "resource_id", "name", "attributes", "data")
def __init__(self, resource_type: bytes, resource_id: int, name: typing.Optional[bytes], attributes: ResourceAttrs, data: bytes):
"""Create a new resource with the given type code, ID, name, attributes, and data."""
super().__init__()
self.resource_type: bytes = resource_type
self.resource_id: int = resource_id
self.name: typing.Optional[bytes] = name
self.attributes: ResourceAttrs = attributes
self.data: bytes = data
def __repr__(self):
if len(self.data) > 32:
data = f"<{len(self.data)} bytes: {self.data[:32]}...>"
else:
data = repr(self.data)
return f"{type(self).__module__}.{type(self).__qualname__}(resource_type={self.resource_type}, resource_id={self.resource_id}, name={self.name}, attributes={self.attributes}, data={data})"
class ResourceFile(collections.abc.Mapping):
"""A resource file reader operating on a byte stream."""
# noinspection PyProtectedMember
class _LazyResourceMap(collections.abc.Mapping):
"""Internal class: Lazy mapping of resource IDs to resource objects, returned when subscripting a ResourceFile."""
def __init__(self, resfile: "ResourceFile", restype: bytes):
"""Create a new _LazyResourceMap "containing" all resources in resfile that have the type code restype."""
super().__init__()
self._resfile: "ResourceFile" = resfile
self._restype: bytes = restype
self._submap: typing.Mapping[int, typing.Tuple[int, ResourceAttrs, int]] = self._resfile._references[self._restype]
def __len__(self):
"""Get the number of resources with this type code."""
return len(self._submap)
def __iter__(self):
"""Iterate over the IDs of all resources with this type code."""
return iter(self._submap)
def __contains__(self, key: int):
"""Check if a resource with the given ID exists for this type code."""
return key in self._submap
def __getitem__(self, key: int) -> Resource:
"""Get a resource with the given ID for this type code."""
name_offset, attributes, data_offset = self._submap[key]
if name_offset == 0xffff:
name = None
elif self._resfile._allow_seek:
self._resfile._stream.seek(self._resfile.map_offset + self._resfile.map_name_list_offset + name_offset)
(name_length,) = self._resfile._stream_unpack(STRUCT_RESOURCE_NAME_HEADER)
name = self._resfile._read(name_length)
else:
name = self._resfile._resource_names[name_offset]
if self._resfile._allow_seek:
self._resfile._stream.seek(self._resfile.data_offset + data_offset)
(data_length,) = self._resfile._stream_unpack(STRUCT_RESOURCE_DATA_HEADER)
data = self._resfile._read(data_length)
else:
data = self._resfile._resource_data[data_offset]
return Resource(self._restype, key, name, attributes, data)
def __repr__(self):
if len(self) == 1:
return f"<{type(self).__module__}.{type(self).__qualname__} at {id(self):#x} containing one resource: {next(iter(self.values()))}>"
else:
return f"<{type(self).__module__}.{type(self).__qualname__} at {id(self):#x} containing {len(self)} resources with IDs: {list(self)}>"
@classmethod
def open(cls, filename: typing.Union[str, bytes, os.PathLike], *, rsrcfork: typing.Optional[bool]=None, **kwargs) -> "ResourceFile":
"""Open the file at the given path as a ResourceFile.
If rsrcfork is not None, it is treated as boolean and controls whether the data or resource fork of the file should be opened. (On systems other than macOS, opening resource forks will not work of course, since they don't exist.)
If rsrcfork is None, guess whether the data or resource fork should be opened. If the resource fork exists and is not empty, it is opened, otherwise the data fork is opened instead.
"""
f: typing.io.BinaryIO
if rsrcfork is None:
# Determine whether the file has a usable resource fork.
try:
# Try to open the resource fork.
f = open(os.path.join(filename, "..namedfork", "rsrc"), "rb")
except (FileNotFoundError, NotADirectoryError):
# If the resource fork doesn't exist, fall back to the data fork.
f = open(filename, "rb")
else:
try:
# Resource fork exists, check if it actually contains anything.
if f.read(1):
# Resource fork contains data, seek back to start before using it.
f.seek(0)
else:
# Resource fork contains no data, fall back to the data fork.
f.close()
f = open(filename, "rb")
except BaseException:
f.close()
raise
elif rsrcfork:
# Force use of the resource fork.
f = open(os.path.join(filename, "..namedfork", "rsrc"), "rb")
else:
# Force use of the data fork.
f = open(filename, "rb")
# Use the selected fork to build a ResourceFile.
return cls(f, close=True, **kwargs)
def __init__(self, stream: typing.io.BinaryIO, *, allow_seek: typing.Optional[bool]=None, close: bool=False):
"""Create a ResourceFile wrapping the given byte stream.
To read resource file data from a bytes object, wrap it in an io.BytesIO.
allow_seek controls whether seeking should be used when reading the file. If allow_seek is None, stream.seekable() is called to determine whether seeking should be used.
If
options=[]):
"""
:arg arguments: A string of comma-separated C argument declarations.
If *arguments* is specified, then *input_expr* must also be
specified. All types used here must be known to PyOpenCL.
(see :func:`pyopencl.tools.get_or_register_dtype`).
:arg key_expr: An integer-valued C expression returning the
key based on which the sort is performed. The array index
for which the key is to be computed is available as `i`.
The expression may refer to any of the *arguments*.
:arg sort_arg_names: A list of argument names whose corresponding
array arguments will be sorted according to *key_expr*.
"""
# {{{ arg processing
from pyopencl.tools import parse_arg_list
self.arguments = parse_arg_list(arguments)
del arguments
self.sort_arg_names = sort_arg_names
self.bits = int(bits_at_a_time)
self.index_dtype = np.dtype(index_dtype)
self.key_dtype = np.dtype(key_dtype)
self.options = options
# }}}
# {{{ kernel creation
scan_ctype, scan_dtype, scan_t_cdecl = \
_make_sort_scan_type(context.devices[0], self.bits, self.index_dtype)
from pyopencl.tools import VectorArg, ScalarArg
scan_arguments = (
list(self.arguments)
+ [VectorArg(arg.dtype, "sorted_"+arg.name) for arg in self.arguments
if arg.name in sort_arg_names]
+ [ScalarArg(np.int32, "base_bit")])
def get_count_branch(known_bits):
if len(known_bits) == self.bits:
return "s.c%s" % known_bits
boundary_mnr = known_bits + "1" + (self.bits-len(known_bits)-1)*"0"
return ("((mnr < %s) ? %s : %s)" % (
int(boundary_mnr, 2),
get_count_branch(known_bits+"0"),
get_count_branch(known_bits+"1")))
codegen_args = dict(
bits=self.bits,
key_ctype=dtype_to_ctype(self.key_dtype),
key_expr=key_expr,
index_ctype=dtype_to_ctype(self.index_dtype),
index_type_max=np.iinfo(self.index_dtype).max,
padded_bin=_padded_bin,
scan_ctype=scan_ctype,
sort_arg_names=sort_arg_names,
get_count_branch=get_count_branch,
)
preamble = scan_t_cdecl+RADIX_SORT_PREAMBLE_TPL.render(**codegen_args)
scan_preamble = preamble \
+ RADIX_SORT_SCAN_PREAMBLE_TPL.render(**codegen_args)
from pyopencl.scan import GenericScanKernel
self.scan_kernel = GenericScanKernel(
context, scan_dtype,
arguments=scan_arguments,
input_expr="scan_t_from_value(%s, base_bit, i)" % key_expr,
scan_expr="scan_t_add(a, b, across_seg_boundary)",
neutral="scan_t_neutral()",
output_statement=RADIX_SORT_OUTPUT_STMT_TPL.render(**codegen_args),
preamble=scan_preamble, options=self.options)
for i, arg in enumerate(self.arguments):
if isinstance(arg, VectorArg):
self.first_array_arg_idx = i
# }}}
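The `get_count_branch` helper above emits a balanced tree of C ternaries that maps a digit value `mnr` to its per-bucket counter. A standalone re-run for two bits shows the generated expression:

```python
def get_count_branch(known_bits, bits=2):
    # Mirrors the recursive generator above: once all bits are fixed,
    # read that bucket's counter; otherwise branch at the midpoint.
    if len(known_bits) == bits:
        return "s.c%s" % known_bits
    boundary_mnr = known_bits + "1" + (bits - len(known_bits) - 1) * "0"
    return "((mnr < %s) ? %s : %s)" % (
        int(boundary_mnr, 2),
        get_count_branch(known_bits + "0", bits),
        get_count_branch(known_bits + "1", bits))

expr = get_count_branch("")
# A binary search over the four counters c00..c11.
assert expr == "((mnr < 2) ? ((mnr < 1) ? s.c00 : s.c01) : ((mnr < 3) ? s.c10 : s.c11))"
```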
def __call__(self, *args, **kwargs):
"""Run the radix sort. In addition to *args* which must match the
*arguments* specification on the constructor, the following
keyword arguments are supported:
:arg key_bits: specify how many bits (starting from least-significant)
there are in the key.
:arg allocator: See the *allocator* argument of :func:`pyopencl.array.empty`.
:arg queue: A :class:`pyopencl.CommandQueue`, defaulting to the
one from the first argument array.
:arg wait_for: |explain-waitfor|
:returns: A tuple ``(sorted, event)``. *sorted* consists of sorted
copies of the arrays named in *sorted_args*, in the order of that
list. *event* is a :class:`pyopencl.Event` for dependency management.
"""
wait_for = kwargs.pop("wait_for", None)
# {{{ run control
key_bits = kwargs.pop("key_bits", None)
if key_bits is None:
key_bits = int(np.iinfo(self.key_dtype).bits)
n = len(args[self.first_array_arg_idx])
allocator = kwargs.pop("allocator", None)
if allocator is None:
allocator = args[self.first_array_arg_idx].allocator
queue = kwargs.pop("queue", None)
if queue is None:
queue = args[self.first_array_arg_idx].queue
args = list(args)
base_bit = 0
while base_bit < key_bits:
sorted_args = [
cl.array.empty(queue, n, arg_descr.dtype, allocator=allocator)
for arg_descr in self.arguments
if arg_descr.name in self.sort_arg_names]
scan_args = args + sorted_args + [base_bit]
last_evt = self.scan_kernel(*scan_args,
**dict(queue=queue, wait_for=wait_for))
wait_for = [last_evt]
# substitute sorted
for i, arg_descr in enumerate(self.arguments):
if arg_descr.name in self.sort_arg_names:
args[i] = sorted_args[self.sort_arg_names.index(arg_descr.name)]
base_bit += self.bits
return [arg_val
for arg_descr, arg_val in zip(self.arguments, args)
if arg_descr.name in self.sort_arg_names], last_evt
# }}}
# }}}
# }}}
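The `base_bit` loop in `__call__` above is a least-significant-digit radix sort: each pass is a stable reorder on `bits` key bits, and the passes compose into a full sort. A serial Python sketch of the same pass structure (stable bucketing stands in for the GPU scan kernel):

```python
def lsd_radix_sort(keys, bits_at_a_time=2, key_bits=8):
    # One stable pass per digit, least-significant digit first,
    # mirroring the while-loop over base_bit above.
    mask = (1 << bits_at_a_time) - 1
    base_bit = 0
    while base_bit < key_bits:
        buckets = [[] for _ in range(1 << bits_at_a_time)]
        for k in keys:
            buckets[(k >> base_bit) & mask].append(k)  # stable: preserves order
        keys = [k for b in buckets for k in b]
        base_bit += bits_at_a_time
    return keys

assert lsd_radix_sort([170, 45, 75, 90, 2, 24, 66]) == [2, 24, 45, 66, 75, 90, 170]
```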
# {{{ generic parallel list builder
# {{{ kernel template
_LIST_BUILDER_TEMPLATE = Template("""//CL//
% if double_support:
#ifndef cl_khr_fp64
#pragma OPENCL EXTENSION cl_khr_fp64: enable
#endif
#define PYOPENCL_DEFINE_CDOUBLE
% endif
#include <pyopencl-complex.h>
${preamble}
// {{{ declare helper macros for user interface
typedef ${index_type} index_type;
%if is_count_stage:
#define PLB_COUNT_STAGE
%for name, dtype in list_names_and_dtypes:
%if name in count_sharing:
#define APPEND_${name}(value) { /* nothing */ }
%else:
#define APPEND_${name}(value) { ++(*plb_loc_${name}_count); }
%endif
%endfor
%else:
#define PLB_WRITE_STAGE
%for name, dtype in list_names_and_dtypes:
%if name in count_sharing:
#define APPEND_${name}(value) \
{ plb_${name}_list[(*plb_${count_sharing[name]}_index) - 1] \
= value; }
%else:
#define APPEND_${name}(value) \
{ plb_${name}_list[(*plb_${name}_index)++] = value; }
%endif
%endfor
%endif
#define LIST_ARG_DECL ${user_list_arg_decl}
#define LIST_ARGS ${user_list_args}
#define USER_ARG_DECL ${user_arg_decl}
#define USER_ARGS ${user_args}
// }}}
${generate_template}
// {{{ kernel entry point
__kernel
%if do_not_vectorize:
__attribute__((reqd_work_group_size(1, 1, 1)))
%endif
void ${kernel_name}(${kernel_list_arg_decl} USER_ARG_DECL index_type n)
{
%if not do_not_vectorize:
int lid = get_local_id(0);
index_type gsize = get_global_size(0);
index_type work_group_start = get_local_size(0)*get_group_id(0);
for (index_type i = work_group_start + lid; i < n; i += gsize)
%else:
const int chunk_size = 128;
index_type chunk_base = get_global_id(0)*chunk_size;
index_type gsize = get_global_size(0);
for (; chunk_base < n; chunk_base += gsize*chunk_size)
for (index_type i = chunk_base; i < min(n, chunk_base+chunk_size); ++i)
%endif
{
%if is_count_stage:
%for name, dtype in list_names_and_dtypes:
%if name not in count_sharing:
index_type plb_loc_${name}_count = 0;
%endif
%endfor
%else:
%for name, dtype in list_names_and_dtypes:
%if name not in count_sharing:
index_type plb_${name}_index =
plb_${name}_start_index[i];
%endif
%endfor
%endif
generate(${kernel_list_arg_values} USER_ARGS i);
%if is_count_stage:
%for name, dtype in list_names_and_dtypes:
%if name not in count_sharing:
plb_${name}_count[i] = plb_loc_${name}_count;
%endif
%endfor
%endif
}
}
// }}}
""", strict_undefined=True)
# }}}
def _get_arg_decl(arg_list):
result = ""
for arg in arg_list:
result += arg.declarator() + ", "
return result
def _get_arg_list(arg_list, prefix=""):
result = ""
for arg in arg_list:
result += prefix + arg.name + ", "
return result
class BuiltList(Record):
pass
class ListOfListsBuilder:
"""Generates and executes code to produce a large number of variable-size
lists, simply.
.. note:: This functionality is provided as a preview. Its interface
is subject to change until this notice is removed.
.. versionadded:: 2013.1
Here's a usage example::
from pyopencl.algorithm import ListOfListsBuilder
builder = ListOfListsBuilder(context, [("mylist", np.int32)], \"\"\"
void generate(LIST_ARG_DECL USER_ARG_DECL index_type i)
{
int count = i % 4;
for (int j = 0; j < count; ++j)
{
APPEND_mylist(count);
}
}
\"\"\", arg_decls=[])
result, event = builder(queue, 2000)
inf = result["mylist"]
assert inf.count == 3000
assert (inf.list.get()[-6:] == [1, 2, 2, 3, 3, 3]).all()
The function `generate` above is called once for each "input object".
Each input object can then generate zero or more list entries.
The number of these input objects is given to :meth:`__call__` as *n_objects*.
List entries are generated by calls to `APPEND_<list name>(value)`.
Multiple lists may be generated at once.
"""
def __init__(self, context, list_names_and_dtypes, generate_template,
arg_decls, count_sharing=None, devices=None,
name_prefix="plb_build_list", options=[], preamble="",
debug=False, complex_kernel=False):
"""
:arg context: A :class:`pyopencl.Context`.
:arg list_names_and_dtypes: a list of `(name, dtype)` tuples
indicating the lists to be built.
:arg generate_template: a snippet of C as described below
:arg arg_decls: A string of comma-separated C argument declarations.
:arg count_sharing: A mapping consisting of `(child, mother)`
indicating that `mother` and `child` will always have the
same number of indices, and the `APPEND` to `mother`
will always happen *before* the `APPEND` to the child.
:arg name_prefix: the name prefix to use for the compiled kernels
:arg options: OpenCL compilation options for kernels using
*generate_template*.
:arg complex_kernel: If `True`, prevents vectorization on CPUs.
*generate_template* may use the following C macros/identifiers:
* `index_type`: expands to C identifier for the index type used
for the calculation
* `USER_ARG_DECL`: expands to the C declarator for `arg_decls`
* `USER_ARGS`: a list of C argument values corresponding to
`user_arg_decl`
* `LIST_ARG_DECL`: expands to a C argument list representing the
data for the output lists. These are escaped prefixed with
`"plb_"` so as to not interfere with user-provided names.
* `LIST_ARGS`: a list of C argument values corresponding to
`LIST_ARG_DECL`
* `APPEND_name(entry)`: inserts `entry` into the list `name`.
*entry* must be a valid C expression of the correct type.
All argument-list related macros have a trailing comma included
if they are non-empty.
*generate_template* must supply a function:
.. code-block:: c
void generate(USER_ARG_DECL LIST_ARG_DECL index_type i)
{
APPEND_mylist(5);
}
Internally, the `kernel_template` is expanded (at least) twice. Once,
for a 'counting' stage where the size of all the lists is determined,
and a second time, for a 'generation' stage where the lists are
actually filled. A `generate` function that has side effects beyond
calling `APPEND_*` is therefore ill-formed.
"""
if devices is None:
devices = context.devices
if count_sharing is None:
count_sharing = {}
self.context = context
self.devices = devices
self.list_names_and_dtypes = list_names_and_dtypes
self.generate_template = generate_template
from pyopencl.tools import parse_arg_list
self.arg_decls = parse_arg_list(arg_decls)
self.count_sharing = count_sharing
self.name_prefix = name_prefix
self.preamble = preamble
self.options = options
self.debug = debug
self.complex_kernel = complex_kernel
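The count/write two-stage scheme described in the docstring can be sketched serially: count how much each object appends, exclusive-scan the counts into start indices, then fill each slice. `generate` here is a hypothetical Python stand-in for the user's C snippet:

```python
def build_list(n_objects, generate):
    # Count stage: entries each input object will append.
    counts = [len(generate(i)) for i in range(n_objects)]
    # Exclusive scan gives each object its start index in the flat list.
    starts, total = [], 0
    for c in counts:
        starts.append(total)
        total += c
    # Write stage: each object fills its own slice independently.
    out = [None] * total
    for i in range(n_objects):
        for j, v in enumerate(generate(i)):
            out[starts[i] + j] = v
    return counts, out

# Same generator as the docstring example: object i appends (i % 4) copies of i % 4.
counts, flat = build_list(6, lambda i: [i % 4] * (i % 4))
assert flat == [1, 2, 2, 3, 3, 3, 1]
```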
# {{{ kernel generators
@memoize_method
def get_scan_kernel(self, index_dtype):
from pyopencl.scan import GenericScanKernel
return GenericScanKernel(
self.context, index_dtype,
arguments="__global %s *ary" % dtype_to_ctype(index_dtype),
input_expr="ary[i]",
scan_expr="a+b", neutral="0",
output_statement="ary[i+1] = item;",
devices=self.devices)
def do_not_vectorize(self):
from pytools import any
return (self.complex_kernel
and any(dev.type & cl.device_type.CPU
for dev in self.context.devices))
@memoize_method
def get_count_kernel(self, index_dtype):
index_ctype = dtype_to_ctype(index_dtype)
from pyopencl.tools import VectorArg, OtherArg
kernel_list_args
= vol_meta[kv.VOL_OPTS][kv.ATTACH_AS]
else:
vinfo[kv.ATTACH_AS] = kv.DEFAULT_ATTACH_AS
if kv.ACCESS in vol_meta[kv.VOL_OPTS]:
vinfo[kv.ACCESS] = vol_meta[kv.VOL_OPTS][kv.ACCESS]
else:
vinfo[kv.ACCESS] = kv.DEFAULT_ACCESS
return vinfo
# Return error, or None for OK
def removeVMDK(vmdk_path):
logging.info("*** removeVMDK: %s", vmdk_path)
cmd = "{0} {1}".format(VMDK_DELETE_CMD, vmdk_path)
rc, out = RunCommand(cmd)
if rc != 0:
return err("Failed to remove %s. %s" % (vmdk_path, out))
return None
def getVMDK(vmdk_path, vol_name, datastore):
"""Checks if the volume exists, and returns error if it does not"""
# Note: will return more Volume info here, when Docker API actually accepts it
if not os.path.isfile(vmdk_path):
return err("Volume {0} not found (file: {1})".format(vol_name, vmdk_path))
# Return volume info - volume policy, size, allocated capacity, allocation
# type, created-by, create time.
try:
result = vol_info(kv.getAll(vmdk_path),
kv.get_vol_info(vmdk_path),
datastore)
except:
msg = "Failed to get disk details for %s" % vmdk_path
logging.error(msg)
result = msg
return result
def listVMDK(vm_datastore, tenant):
"""
Returns a list of volume names (note: may be an empty list).
Each volume name is returned as either `volume@datastore`, or just `volume`
for volumes on vm_datastore
"""
vmdks = vmdk_utils.get_volumes(tenant)
# build fully qualified vol name for each volume found
return [{u'Name': get_full_vol_name(x['filename'], x['datastore'], vm_datastore),
u'Attributes': {}} \
for x in vmdks]
# Return VM managed object, reconnect if needed. Throws if fails twice.
def findVmByUuid(vm_uuid):
vm = None
try:
vm = si.content.searchIndex.FindByUuid(None, vm_uuid, True, False)
except Exception as ex:
logging.warning("Failed to find VM by uuid=%s, retrying...\n%s",
vm_uuid, str(ex))
if vm:
return vm
#
# Retry. It can throw if connect/search fails. But search can't return None
# since we get UUID from VMM so VM must exist
#
connectLocal()
vm = si.content.searchIndex.FindByUuid(None, vm_uuid, True, False)
logging.info("Found VM name=%s, id=%s ", vm.config.name, vm_uuid)
return vm
# Return error, or None for OK.
def attachVMDK(vmdk_path, vm_uuid):
vm = findVmByUuid(vm_uuid)
logging.info("*** attachVMDK: %s to %s VM uuid = %s",
vmdk_path, vm.config.name, vm_uuid)
return disk_attach(vmdk_path, vm)
# Return error, or None for OK.
def detachVMDK(vmdk_path, vm_uuid):
vm = findVmByUuid(vm_uuid)
logging.info("*** detachVMDK: %s from %s VM uuid = %s",
vmdk_path, vm.config.name, vm_uuid)
return disk_detach(vmdk_path, vm)
# Check the existence of (and create if needed) the path for docker volume VMDKs
def get_vol_path(datastore, tenant_name=None):
# If the command is NOT running under a tenant, the folder for Docker
# volumes is created on <datastore>/DOCK_VOLS_DIR
# If the command is running under a tenant, the folder for Docker volumes
# is created on <datastore>/DOCK_VOLS_DIR/tenant_name
if tenant_name:
path = os.path.join("/vmfs/volumes", datastore, DOCK_VOLS_DIR, tenant_name)
else:
path = os.path.join("/vmfs/volumes", datastore, DOCK_VOLS_DIR)
if os.path.isdir(path):
# If the path exists then return it as is.
logging.debug("Found %s, returning", path)
return path
# The osfs tools are usable for all datastores
cmd = "{0} {1}".format(OSFS_MKDIR_CMD, path)
rc, out = RunCommand(cmd)
if rc == 0:
logging.info("Created %s", path)
return path
logging.warning("Failed to create %s", path)
return None
def known_datastores():
"""Returns the names of known datastores"""
return [i[0] for i in vmdk_utils.get_datastores()]
def parse_vol_name(full_vol_name):
"""
Parses volume[@datastore] and returns (volume, datastore)
On parse errors raises ValidationError with syntax explanation
"""
# Parse volume name with regexp package
#
# Caveat: we block '-NNNNNN' in end of volume name to make sure that volume
# name never conflicts with VMDK snapshot name (e.g. 'disk-000001.vmdk').
# Note that N is a digit and there are exactly 6 of them (hardcoded in ESXi)
# vmdk_utils.py:list_vmdks() explicitly relies on this assumption.
#
try:
at = full_vol_name.rindex('@')
vol_name = full_vol_name[:at]
ds_name = full_vol_name[at + 1:]
except ValueError:
# '@' not found
vol_name = full_vol_name
ds_name = None
# now block the '-NNNNN' volume names
if re.match(vmdk_utils.SNAP_NAME_REGEXP, vol_name):
raise ValidationError("Volume names ending with '-NNNNNN' (where N is a digit) are not supported")
if len(vol_name) > MAX_VOL_NAME_LEN:
raise ValidationError("Volume name is too long (max len is {0})".format(MAX_VOL_NAME_LEN))
if ds_name and len(ds_name) > MAX_DS_NAME_LEN:
raise ValidationError("Datastore name is too long (max len is {0})".format(MAX_DS_NAME_LEN))
return vol_name, ds_name
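A minimal standalone version of the parsing rules above; the snapshot-suffix pattern is an assumption here (`vmdk_utils.SNAP_NAME_REGEXP` is not shown in this file), written to match the '-NNNNNN' rule in the comment:

```python
import re

SNAP_NAME_REGEXP = r".+-[0-9]{6}$"  # assumed shape of vmdk_utils.SNAP_NAME_REGEXP

def parse_vol_name(full_vol_name):
    # Split on the LAST '@' so a '@' inside the volume name cannot
    # swallow the datastore suffix.
    at = full_vol_name.rfind('@')
    if at == -1:
        vol_name, ds_name = full_vol_name, None
    else:
        vol_name, ds_name = full_vol_name[:at], full_vol_name[at + 1:]
    if re.match(SNAP_NAME_REGEXP, vol_name):
        raise ValueError("volume name collides with VMDK snapshot names")
    return vol_name, ds_name

assert parse_vol_name("vol1@datastore1") == ("vol1", "datastore1")
assert parse_vol_name("vol1") == ("vol1", None)
```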
def get_full_vol_name(vmdk_name, datastore, vm_datastore):
"""
Forms the full volume name from the vmdk file name and datastore, as volume@datastore
For volumes on vm_datastore, just returns volume name
"""
vol_name = vmdk_utils.strip_vmdk_extension(vmdk_name)
logging.debug("get_full_vol_name: %s %s %s", vmdk_name, datastore, vm_datastore)
if datastore == vm_datastore:
return vol_name
return "{0}@{1}".format(vol_name, datastore)
# TBD - move to vmdk_utils
def get_datastore_name(config_path):
"""Returns datastore NAME in config_path (not url-name which may be used in path)"""
# path is always /vmfs/volumes/<datastore>/... , so extract datastore:
config_ds_name = config_path.split("/")[3]
ds_name = [x[0] for x in vmdk_utils.get_datastores() \
if x[0] == config_ds_name or x[1] == config_ds_name ]
if len(ds_name) != 1:
logging.error("get_datastore_name: expected exactly one match, got: %s" % ds_name)
logging.debug("get_datastore_name: path=%s name=%s" % (config_ds_name, ds_name))
return ds_name[0]
# gets the requests, calculates path for volumes, and calls the relevant handler
def executeRequest(vm_uuid, vm_name, config_path, cmd, full_vol_name, opts):
"""
Executes a <cmd> request issued from a VM.
The request is about volume <full_vol_name>, in format volume@datastore.
If @datastore is omitted, the one where the VM resides is used.
For VM, the function gets vm_uuid, vm_name and config_path
<opts> is a json options string blindly passed to a specific operation
Returns None (if all OK) or error string
"""
vm_datastore = get_datastore_name(config_path)
error_info, tenant_uuid, tenant_name = auth.authorize(vm_uuid, vm_datastore, cmd, opts)
if error_info:
return err(error_info)
if cmd == "list":
return listVMDK(vm_datastore, tenant_name)
try:
vol_name, datastore = parse_vol_name(full_vol_name)
except ValidationError as ex:
return err(str(ex))
if not datastore:
datastore = vm_datastore
elif datastore not in known_datastores():
return err("Invalid datastore '%s'.\n" \
"Known datastores: %s.\n" \
"Default datastore: %s" \
% (datastore, ", ".join(known_datastores()), vm_datastore))
# get /vmfs/volumes/<volid>/dockvols path on ESX:
path = get_vol_path(datastore, tenant_name)
logging.debug("executeRequest %s %s", tenant_name, path)
if path is None:
return err("Failed to initialize volume path {0}".format(path))
vmdk_path = vmdk_utils.get_vmdk_path(path, vol_name)
if cmd == "get":
response = getVMDK(vmdk_path, vol_name, datastore)
elif cmd == "create":
response = createVMDK(vmdk_path, vm_name, vol_name, opts)
# create succeeded; insert information about this volume into the volumes table
if not response:
if tenant_uuid:
vol_size_in_MB = convert.convert_to_MB(auth.get_vol_size(opts))
auth.add_volume_to_volumes_table(tenant_uuid, datastore, vol_name, vol_size_in_MB)
else:
logging.warning(" VM %s does not belong to any tenant", vm_name)
elif cmd == "remove":
response = removeVMDK(vmdk_path)
elif cmd == "attach":
response = attachVMDK(vmdk_path, vm_uuid)
elif cmd == "detach":
response = detachVMDK(vmdk_path, vm_uuid)
else:
return err("Unknown command:" + cmd)
return response
def connectLocal():
'''
connect and do stuff on local machine
'''
global si #
# Connect to localhost as dcui
# User "dcui" is a local Admin that does not lose permissions
# when the host is in lockdown mode.
si = pyVim.connect.Connect(host='localhost', user='dcui')
if not si:
raise SystemExit("Failed to connect to localhost as 'dcui'.")
atexit.register(pyVim.connect.Disconnect, si)
# set out ID in context to be used in request - so we'll see it in logs
reqCtx = VmomiSupport.GetRequestContext()
reqCtx["realUser"] = 'dvolplug'
logging.debug("Connect to localhost si:%s", si)
return si
def get_datastore_url(datastore):
global si
if not si:
connectLocal()
res = [d.info.url for d in si.content.rootFolder.childEntity[0].datastore if d.info.name == datastore]
return res[0]
def findDeviceByPath(vmdk_path, vm):
logging.debug("findDeviceByPath: Looking for device {0}".format(vmdk_path))
for d in vm.config.hardware.device:
if type(d) != vim.vm.device.VirtualDisk:
continue
# Disks of all backing have a backing object with a filename attribute.
# The filename identifies the virtual disk by name and can be used
# to match with the given volume name.
# Filename format is as follows:
# "[<datastore name>] <parent-directory>/tenant/<vmdk-descriptor-name>"
logging.debug("d.backing.fileName %s", d.backing.fileName)
backing_disk = d.backing.fileName.split(" ")[1]
# datastore='[datastore name]'
datastore = d.backing.fileName.split(" ")[0]
datastore = datastore[1:-1]
# Construct the parent dir and vmdk name, resolving
# links if any.
dvol_dir = os.path.dirname(vmdk_path)
datastore_url = get_datastore_url(datastore)
datastore_prefix = os.path.realpath(datastore_url) + '/'
real_vol_dir = os.path.realpath(dvol_dir).replace(datastore_prefix, "")
virtual_disk = os.path.join(real_vol_dir, os.path.basename(vmdk_path))
logging.debug("dvol_dir=%s datastore_prefix=%s real_vol_dir=%s", dvol_dir, datastore_prefix,real_vol_dir)
logging.debug("backing_disk=%s virtual_disk=%s", backing_disk, virtual_disk)
if virtual_disk == backing_disk:
logging.debug("findDeviceByPath: MATCH: %s", backing_disk)
return d
return None
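The backing fileName format parsed in the loop above, `"[<datastore>] <path>"`, can be split in isolation; this sketch uses `split(" ", 1)` so a path containing spaces survives intact:

```python
def split_backing_filename(file_name):
    # "[datastore1] dockvols/tenant1/vol.vmdk"
    #   -> ("datastore1", "dockvols/tenant1/vol.vmdk")
    datastore, backing_disk = file_name.split(" ", 1)
    return datastore[1:-1], backing_disk  # strip the surrounding brackets

ds, disk = split_backing_filename("[datastore1] dockvols/tenant1/vol.vmdk")
assert ds == "datastore1"
assert disk == "dockvols/tenant1/vol.vmdk"
```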
# Find the PCI slot number
def get_controller_pci_slot(vm, pvscsi, key_offset):
''' Return PCI slot number of the given PVSCSI controller
Input parameters:
vm: VM configuration
pvscsi: given PVSCSI controller
key_offset: offset from the bus number, controller_key - key_offset
is equal to the slot number of this given PVSCSI controller
'''
if pvscsi.slotInfo:
return str(pvscsi.slotInfo.pciSlotNumber)
else:
# Slot number is obtained from the VM config
# wangyusu/pymatgen
"""
This module defines input sets for CP2K and is a work in progress. The structure/philosophy
of this module is based on the Vasp input sets in Pymatgen. These sets are meant to contain
tested parameters that will result in successful, reproducible, consistent calculations without
need for intervention 99% of the time. 99% of the time, you only need to provide a pymatgen
structure object and let the defaults take over from there.
The sets are intended to be very general, e.g. a set for geometry relaxation, and so most of the
time, if you have specific needs, you can simply specify them via the keyword argument
override_default_params (see Section.update() method). If you have the need to create a new input
set (say for a standardized high throughput calculation) then you can create a new child of the
Cp2kInputSet class.
In order to implement a new Set within the current code structure, follow this 3 step flow:
(1) Inherit from Cp2kInputSet or one of its children and call the super() constructor
(2) Create the new sections and insert them into self and its subsections as needed
(3) Call self.update(override_default_params) in order to allow user settings.
"""
import warnings
from pathlib import Path
from typing import Dict, Union
from pymatgen.core.periodic_table import Element
from pymatgen.core.lattice import Lattice
from pymatgen.core.structure import Molecule, Structure
from pymatgen.io.cp2k.inputs import (
LDOS,
PBE,
PDOS,
QS,
XC_FUNCTIONAL,
Cell,
Coord,
Cp2kInput,
Dft,
E_Density_Cube,
ForceEval,
Global,
Keyword,
KeywordList,
Kind,
Kpoints,
Mgrid,
MO_Cubes,
OrbitalTransformation,
Scf,
Section,
Smear,
Subsys,
V_Hartree_Cube,
)
from pymatgen.io.cp2k.utils import (
get_aux_basis,
get_basis_and_potential,
get_unique_site_indices,
)
__author__ = "<NAME>"
__version__ = "0.2"
__email__ = "<EMAIL>"
__date__ = "January 2019"
MODULE_DIR = Path(__file__).resolve().parent
class Cp2kInputSet(Cp2kInput):
"""
The basic representation of a CP2K input set as a collection of "sections" defining the simulation,
connected to a structure object. At the most basic level, CP2K requires a &GLOBAL section and a
&FORCE_EVAL section. GLOBAL sets parameters like "RUN_TYPE" and the overall verbosity. FORCE_EVAL is
usually the largest section, containing the cell and coordinates of atoms, the DFT settings, and more.
This top-level input set is meant to initialize GLOBAL and FORCE_EVAL based on a structure object
and sections that the user provides.
Like everything that goes into a cp2k input file, this base input set is essentially a section object.
These sets are distinguished by saving default settings for easy implementation of calculations such
as relaxation and static calculations. This base set is here to transfer a pymatgen structure object
into the input format for cp2k and associate the basis set and pseudopotential to use with each
element in the structure.
Generally, this class will not be used directly, and instead one of
its child-classes will be used, which contain more predefined initializations of various sections, and,
if modifications are required, the user can specify override_default_settings.
"""
def __init__(
self,
structure: Union[Structure, Molecule],
potential_and_basis: Dict = {},
multiplicity: int = 0,
project_name: str = "CP2K",
override_default_params: Dict = {},
**kwargs,
):
"""
Args:
structure: (Structure or Molecule) pymatgen structure or molecule object used to define
the lattice, coordinates, and elements. This structure object cannot contain "special"
species like the Dummy species, e.g. X, or fractional occupations, e.g. Fe0.2, etc.
potential_and_basis: (dict) Specifies what basis set and potential to use. Specify these
as a dict of the form:
{ element: {'cardinality': __, 'sr': __, 'q': __},
'cardinality': __, 'functional': __}
Where cardinality and functional are overall specifications (for all elements), while
<key='element'> specifies the overrides for a specific element. Currently the following
conventions must be followed:
(a) All species of a particular element must have the same potential/basis
multiplicity: (int) Specify the system's multiplicity if appropriate
project_name: (str) Specify the project name. This will be used to name the output files
from a CP2K calculation
override_default_params: (dict) Specifies user-defined settings to override the settings of any
input set (See Section.update())
"""
super().__init__(name="CP2K_INPUT", subsections={})
# Important CP2K set parameters
self.structure = structure
self.charge = structure.charge
self.potential_and_basis = potential_and_basis
self.multiplicity = multiplicity # spin multiplicity = 2s+1
self.override_default_params = override_default_params
self.project_name = project_name
self.kwargs = kwargs
for s in self.structure.species:
assert s in Element
self.insert(ForceEval()) # always present in cp2k
self.basis_set_file_names = None # need for dft
self.potential_file_name = None # need for dft
self.create_subsys(self.structure) # assemble structure with atom types and pseudopotentials assigned
if self.kwargs.get("print_forces", True):
self.print_forces()
if self.kwargs.get("print_motion", True):
self.print_motion()
self.update(override_default_params)
def create_subsys(self, structure: Union[Structure, Molecule]):
"""
Create the structure for the input
"""
subsys = Subsys()
if isinstance(structure, Structure):
subsys.insert(Cell(structure.lattice))
# Decide what basis sets/pseudopotentials to use
basis_and_potential = get_basis_and_potential([str(s) for s in structure.species], self.potential_and_basis)
# Insert atom kinds by identifying the unique sites (unique element and site properties)
unique_kinds = get_unique_site_indices(structure)
for k, v in unique_kinds.items():
kind = k.split("_")[0]
kwargs = {}
if "magmom" in self.structure.site_properties:
kwargs["magnetization"] = self.structure.site_properties["magmom"][v[0]]
if "ghost" in self.structure.site_properties:
kwargs["ghost"] = self.structure.site_properties["ghost"][v[0]]
if "basis_set" in self.structure.site_properties:
basis_set = self.structure.site_properties["basis_set"][v[0]]
else:
basis_set = basis_and_potential[kind]["basis"]
if "potential" in self.structure.site_properties:
potential = self.structure.site_properties["potential"][v[0]]
else:
potential = basis_and_potential[kind]["potential"]
if "aux_basis" in self.structure.site_properties:
kwargs["aux_basis"] = self.structure.site_properties["aux_basis"][v[0]]
subsys.insert(Kind(kind, alias=k, basis_set=basis_set, potential=potential, **kwargs))
coord = Coord(structure, aliases=unique_kinds)
subsys.insert(coord)
self["FORCE_EVAL"].insert(subsys)
self.basis_set_file_names = basis_and_potential["basis_filenames"]
self.potential_file_name = basis_and_potential["potential_filename"]
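The `Kind` insertion above relies on `get_unique_site_indices` to group sites by element plus distinguishing site properties (e.g. magmom), so each group becomes one &KIND with an alias like `Fe_1`. A minimal sketch of such grouping (hypothetical helper; the real pymatgen implementation may differ):

```python
from collections import defaultdict

def unique_site_indices(species, site_properties):
    """Group site indices by (element, per-site property values) so each
    group can become one &KIND section (illustrative sketch only)."""
    groups = defaultdict(list)
    n = len(species)
    for i, sp in enumerate(species):
        key = (sp,) + tuple(site_properties.get(p, [None] * n)[i]
                            for p in sorted(site_properties))
        groups[key].append(i)
    # Name groups element-wise: Fe_1, Fe_2, O_1, ...
    counts, out = defaultdict(int), {}
    for key, idxs in groups.items():
        counts[key[0]] += 1
        out[f"{key[0]}_{counts[key[0]]}"] = idxs
    return out
```

Two Fe sites with opposite magmoms end up in separate kinds, which is what lets CP2K assign them different initial magnetizations.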
def print_forces(self):
"""
Print out the forces and stress during calculation
"""
self["FORCE_EVAL"].insert(Section("PRINT", subsections={}))
self["FORCE_EVAL"]["PRINT"].insert(Section("FORCES", subsections={}))
self["FORCE_EVAL"]["PRINT"].insert(Section("STRESS_TENSOR", subsections={}))
def print_motion(self):
"""
Print the motion info (trajectory, cell, forces, stress)
"""
if not self.check("MOTION"):
self.insert(Section("MOTION", subsections={}))
self["MOTION"].insert(Section("PRINT", subsections={}))
self["MOTION"]["PRINT"].insert(Section("TRAJECTORY", section_parameters=["ON"], subsections={}))
self["MOTION"]["PRINT"].insert(Section("CELL", subsections={}))
self["MOTION"]["PRINT"].insert(Section("FORCES", subsections={}))
self["MOTION"]["PRINT"].insert(Section("STRESS", subsections={}))
class DftSet(Cp2kInputSet):
"""
Base for an input set using the Quickstep module (i.e. a DFT calculation). The DFT section in CP2K is
vast, so this set aims to make the DFT setup simple. The provided parameters are conservative,
and so they should rarely need to be changed.
"""
def __init__(
self,
structure: Union[Structure, Molecule],
ot: bool = True,
band_gap: float = 0.01,
eps_default: float = 1e-12,
eps_scf: float = 1e-7,
max_scf: Union[int, None] = None,
minimizer: str = "DIIS",
preconditioner: str = "FULL_ALL",
algorithm: str = "STRICT",
linesearch: str = "2PNT",
cutoff: int = 1200,
rel_cutoff: int = 80,
ngrids: int = 5,
progression_factor: int = 3,
override_default_params: Dict = {},
wfn_restart_file_name: str = None,
kpoints: Union[Kpoints, None] = None,
smearing: bool = False,
**kwargs,
):
"""
Args:
structure: Pymatgen structure or molecule object
ot (bool): Whether or not to use orbital transformation method for matrix diagonalization. OT is the
flagship scf solver of CP2K, and will provide huge speed-ups for this part of the calculation,
but the system must have a band gap for OT to be used (higher band-gap --> faster convergence).
Band gap is also used by the preconditioner for OT, and should be set as a value SMALLER than the true
band gap to get good efficiency. Generally, this parameter does not need to be changed from
default of 0.01
band_gap (float): The band gap can also be specified in order to determine if ot should be turned on.
eps_default (float): Replaces all EPS_XX Keywords in the DFT section (NOT its subsections!) to have this
value, ensuring an overall accuracy of at least this much.
eps_scf (float): The convergence criteria for leaving the SCF loop in Hartrees. Default is 1e-7. Should
ensure reasonable results for all properties. Smaller than 1e-7 is generally not needed unless
you need very high precision. 1e-6 may be used for difficult systems, and should still give
reasonable results for most properties.
max_scf (int): The max number of SCF cycles before terminating the solver. NOTE: With the OT solver, this
corresponds to the max number of INNER scf loops, and then the outer loops are set with outer_max_scf,
while with diagonalization it corresponds to the overall (INNER*OUTER) number of SCF steps, with the
inner loop limit set by
minimizer (str): The minimization scheme. DIIS can be as much as 50% faster than the more robust conjugate
gradient method, and so it is chosen as default. Switch to CG if dealing with a difficult system.
preconditioner (str): Preconditioner for
publisher of the document.
Note: Information provided in the FULL view of the article might be
more complete.
"""
# Return information from FULL view, fall back to other views
full = chained_get(self._head, ['source', 'publisher', 'publishername'])
if full is None:
return chained_get(self._json, ['coredata', 'dc:publisher'])
else:
return full
@property
def publisheraddress(self):
"""Name of the publisher of the document."""
return chained_get(self._head, ['source', 'publisher', 'publisheraddress'])
@property
def pubmed_id(self):
"""The PubMed ID of the document."""
return chained_get(self._json, ['coredata', 'pubmed-id'])
@property
def refcount(self):
"""Number of references of an article.
Note: Requires either the FULL view or REF view.
"""
try: # REF view
return self._ref['@total-references']
except KeyError: # FULL view
return self._ref.get('@refcount')
@property
def references(self):
"""List of namedtuples representing references listed in the document,
in the form (position, id, doi, title, authors, authors_auid,
authors_affiliationid, sourcetitle, publicationyear, volume, issue,
first, last, citedbycount, type, text, fulltext).
`position` is the number at which the reference appears in the
document, `id` is the Scopus ID of the referenced document (EID
without the "2-s2.0-"), `authors` is a string of the names of the
authors in the format "Surname1, Initials1; Surname2, Initials2",
`authors_auid` is a string of the author IDs joined on "; ",
`authors_affiliationid` is a string of the authors' affiliation IDs
joined on "; ", `sourcetitle` is the name of the source (e.g. the
journal), `publicationyear` is the year of the publication as a string,
`volume` and `issue` are strings referring to the volume and issue,
`first` and `last` refer to the page range, `citedbycount` is a string
for the total number of citations of the cited item, `type` describes
the parsing status of the reference (resolved or not), `text` is
Scopus-provided information on the publication, `fulltext` is the text
the authors used for the reference.
Note: Requires either the FULL view or REF view.
Might be empty even if refcount is positive. Specific fields can
be empty.
Author lists (authors, authors_auid, authors_affiliationid) may contain
duplicates, but None values have been filtered out.
"""
out = []
fields = 'position id doi title authors authors_auid '\
'authors_affiliationid sourcetitle publicationyear volume '\
'issue first last citedbycount type text fulltext'
ref = namedtuple('Reference', fields)
items = listify(self._ref.get("reference", []))
for item in items:
info = item.get('ref-info', item)
volisspag = info.get('volisspag', {}) or {}
if isinstance(volisspag, list):
volisspag = volisspag[0]
volis = volisspag.get("voliss", {})
if isinstance(volis, list):
volis = volis[0]
# Parse author information
try: # FULL view parsing
auth = listify(item['ref-info']['ref-authors']['author'])
authors = [', '.join([d['ce:surname'], d['ce:initials']])
for d in auth]
auids = None
affids = None
ids = listify(info['refd-itemidlist']['itemid'])
doi = _select_by_idtype(ids, id_type='DOI')
scopus_id = _select_by_idtype(ids, id_type='SGR')
except KeyError: # REF view parsing
auth = (info.get('author-list') or {}).get('author', [])
authors = [', '.join(filter(None, [d.get('ce:surname'),
d.get('ce:given-name')]))
for d in auth]
auids = "; ".join(filter(None, [d.get('@auid') for d in auth]))
affs = filter(None, [d.get('affiliation') for d in auth])
affids = "; ".join([aff.get('@id') for aff in affs])
doi = info.get('ce:doi')
scopus_id = info.get('scopus-id')
# Combine information
new = ref(position=item.get('@id'), id=scopus_id, doi=doi,
authors="; ".join(authors), authors_auid=auids or None,
authors_affiliationid=affids or None,
title=info.get('ref-title', {}).get('ref-titletext', info.get('title')),
sourcetitle=info.get('ref-sourcetitle', info.get('sourcetitle')),
publicationyear=info.get('ref-publicationyear', {}).get('@first'),
volume=volis.get('@volume'), issue=volis.get('@issue'),
first=volisspag.get('pagerange', {}).get('@first'),
last=volisspag.get('pagerange', {}).get('@last'),
citedbycount=info.get('citedby-count'), type=info.get('type'),
text=info.get('ref-text'),
fulltext=item.get('ref-fulltext'))
out.append(new)
return out or None
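The parsing above leans heavily on two small utilities, `chained_get` (safe traversal of nested dicts) and `listify` (normalizing single items to lists). A minimal reimplementation sketch of their behavior (the real pybliometrics helpers may differ in detail):

```python
def chained_get(container, path, default=None):
    """Walk nested dicts along `path`, returning `default` on any miss."""
    for key in path:
        try:
            container = container[key]
        except (KeyError, TypeError):
            return default
    return container

def listify(obj):
    """Wrap a lone dict/str in a list so callers can always iterate."""
    return obj if isinstance(obj, list) else [obj]
```

`listify` matters because XML-to-JSON conversion yields a dict when an element occurs once and a list when it repeats; normalizing lets one loop handle both.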
@property
def scopus_link(self):
"""URL to the document page on Scopus."""
return get_link(self._json, 1)
@property
def self_link(self):
"""URL to Scopus API page of this document."""
return get_link(self._json, 0)
@property
def sequencebank(self):
"""List of namedtuples representing biological entities defined or
mentioned in the text, in the form (name, sequence_number, type).
"""
path = ['enhancement', 'sequencebanks', 'sequencebank']
items = listify(chained_get(self._head, path, []))
bank = namedtuple('Sequencebank', 'name sequence_number type')
out = []
for item in items:
numbers = listify(item['sequence-number'])
for number in numbers:
new = bank(name=item['@name'], sequence_number=number['$'],
type=number['@type'])
out.append(new)
return out or None
@property
def source_id(self):
"""Scopus source ID of the document."""
return chained_get(self._json, ['coredata', 'source-id'])
@property
def sourcetitle_abbreviation(self):
"""Abbreviation of the source the document is published in.
Note: Requires the FULL view of the article.
"""
return self._head.get('source', {}).get('sourcetitle-abbrev')
@property
def srctype(self):
"""Aggregation type of source the document is published in (short
version of aggregationType).
"""
return chained_get(self._json, ['coredata', 'srctype'])
@property
def startingPage(self):
"""Starting page. If this is empty, try .pageRange instead."""
# Try coredata first, fall back to bibrecord afterwards
starting = chained_get(self._json, ['coredata', 'prism:startingPage'])
if not starting:
path = ['source', 'volisspag', 'pagerange', '@first']
starting = chained_get(self._head, path)
return starting
@property
def subject_areas(self):
"""List of namedtuples containing subject areas of the article
in the form (area, abbreviation, code).
Note: Requires the FULL view of the article.
"""
area = namedtuple('Area', 'area abbreviation code')
path = ['subject-areas', 'subject-area']
out = [area(area=item['$'], abbreviation=item['@abbrev'],
code=item['@code'])
for item in listify(chained_get(self._json, path, []))]
return out or None
@property
def subtype(self):
"""Type of the document. Refer to the Scopus Content Coverage Guide
for a list of possible values. Short version of subtypedescription.
"""
return chained_get(self._json, ['coredata', 'subtype']) or None
@property
def subtypedescription(self):
"""Type of the document. Refer to the Scopus Content Coverage Guide
for a list of possible values. Long version of subtype.
"""
return chained_get(self._json, ['coredata', 'subtypeDescription']) or None
@property
def title(self):
"""Title of the document."""
return chained_get(self._json, ['coredata', 'dc:title'])
@property
def url(self):
"""URL to the API view of the document."""
return chained_get(self._json, ['coredata', 'prism:url'])
@property
def volume(self):
"""Volume for the document."""
return chained_get(self._json, ['coredata', 'prism:volume'])
@property
def website(self):
"""Website of publisher."""
path = ['source', 'website', 'ce:e-address', '$']
return chained_get(self._head, path)
def __init__(self, identifier=None, refresh=False, view='META_ABS',
id_type=None):
"""Interaction with the Abstract Retrieval API.
Parameters
----------
identifier : str or int
The identifier of a document. Can be the Scopus EID, the Scopus
ID, the PII, the Pubmed-ID or the DOI.
refresh : bool or int (optional, default=False)
Whether to refresh the cached file if it exists or not. If int
is passed, cached file will be refreshed if the number of days
since last modification exceeds that value.
id_type: str (optional, default=None)
The type of used ID. Allowed values: None, 'eid', 'pii',
'scopus_id', 'pubmed_id', 'doi'. If the value is None, the
function tries to infer the ID type itself.
view : str (optional, default=META_ABS)
The view of the file that should be downloaded. Allowed values:
META, META_ABS, REF, FULL, where FULL includes all information
of META_ABS view and META_ABS includes all information of the
META view. For details see
https://dev.elsevier.com/guides/AbstractRetrievalViews.htm.
Raises
------
ValueError
If the id_type parameter or the view parameter contains
invalid entries.
Examples
--------
See https://pybliometrics.readthedocs.io/en/stable/examples/AbstractRetrieval.html.
Notes
-----
The directory for cached results is `{path}/{view}/{identifier}`,
where `path` is specified in `~/.scopus/config.ini`. In case
`identifier` is a DOI, an underscore replaces the forward slash.
"""
# Checks
identifier = str(identifier)
allowed_views = ('META', 'META_ABS', 'REF', 'FULL')
if view not in allowed_views:
raise ValueError('view parameter must be one of ' +
', '.join(allowed_views))
if id_type is None:
id_type = detect_id_type(identifier)
else:
allowed_id_types = ('eid', 'pii', 'scopus_id', 'pubmed_id', 'doi')
if id_type not in allowed_id_types:
raise ValueError('id_type parameter must be one of ' +
', '.join(allowed_id_types))
# Load json
Retrieval.__init__(self, identifier=identifier, id_type=id_type,
api='AbstractRetrieval', refresh=refresh, view=view)
self._json = self._json['abstracts-retrieval-response']
self._head = chained_get(self._json, ["item", "bibrecord", "head"], {})
conf_path = ['source', 'additional-srcinfo', 'conferenceinfo', 'confevent']
self._confevent = chained_get(self._head, conf_path, {})
if self._view == "REF":
ref_path = ["references"]
else:
ref_path = ['item', 'bibrecord', 'tail', 'bibliography']
self._ref = chained_get(self._json, ref_path, {})
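When `id_type` is None, the constructor above calls `detect_id_type` to infer the ID type from the identifier's shape. An illustrative heuristic in that spirit (the real pybliometrics rules may differ; the length cutoff below is an assumption):

```python
def guess_id_type(identifier):
    """Hypothetical ID-type heuristic, mirroring what detect_id_type
    might do; not the actual pybliometrics implementation."""
    s = str(identifier)
    if s.startswith("2-s2.0-"):
        return "eid"            # Scopus EIDs carry the 2-s2.0- prefix
    if "/" in s:
        return "doi"            # DOIs contain a slash, e.g. 10.1016/...
    if s.isdigit():
        # Assumed split: long all-digit strings look like Scopus IDs
        return "scopus_id" if len(s) > 8 else "pubmed_id"
    return "pii"
```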
def __str__(self):
"""Return pretty text version of the document.
Assumes the document is a journal article and was loaded with
view="META_ABS" or view="FULL".
"""
date = self.get_cache_file_mdate().split()[0]
# Authors
if self.authors:
if len(self.authors) > 1:
authors = _list_authors(self.authors)
else:
a = self.authors[0]
authors = str(a.given_name) + ' ' + str(a.surname)
else:
authors = "(No author found)"
# All other information
s = f'{authors}: "{self.title}", {self.publicationName}, {self.volume}'
if self.issueIdentifier:
s += f'({self.issueIdentifier})'
s += ', '
s += _parse_pages(self)
s += f'({self.coverDate[:4]}).'
if self.doi:
s += f' https://doi.org/{self.doi}.\n'
s += f'{self.citedby_count} citation(s) as of {date}'
if self.affiliation:
s += "\n Affiliation(s):\n "
s += '\n '.join([aff.name for aff in self.affiliation])
= int('00221037', 16)
RefractiveProcedureOccurred = int('00221039', 16)
RefractiveSurgeryTypeCodeSequence = int('00221040', 16)
OphthalmicUltrasoundMethodCodeSequence = int('00221044', 16)
OphthalmicAxialLengthMeasurementsSequence = int('00221050', 16)
IOLPower = int('00221053', 16)
PredictedRefractiveError = int('00221054', 16)
OphthalmicAxialLengthVelocity = int('00221059', 16)
LensStatusDescription = int('00221065', 16)
VitreousStatusDescription = int('00221066', 16)
IOLPowerSequence = int('00221090', 16)
LensConstantSequence = int('00221092', 16)
IOLManufacturer = int('00221093', 16)
LensConstantDescription = int('00221094', 16)
ImplantName = int('00221095', 16)
KeratometryMeasurementTypeCodeSequence = int('00221096', 16)
ImplantPartNumber = int('00221097', 16)
ReferencedOphthalmicAxialMeasurementsSequence = int('00221100', 16)
OphthalmicAxialLengthMeasurementsSegmentNameCodeSequence = int(
'00221101', 16)
RefractiveErrorBeforeRefractiveSurgeryCodeSequence = int('00221103', 16)
IOLPowerForExactEmmetropia = int('00221121', 16)
IOLPowerForExactTargetRefraction = int('00221122', 16)
AnteriorChamberDepthDefinitionCodeSequence = int('00221125', 16)
LensThicknessSequence = int('00221127', 16)
AnteriorChamberDepthSequence = int('00221128', 16)
LensThickness = int('00221130', 16)
AnteriorChamberDepth = int('00221131', 16)
SourceofLensThicknessDataCodeSequence = int('00221132', 16)
SourceofAnteriorChamberDepthDataCodeSequence = int('00221133', 16)
SourceofRefractiveMeasurementsSequence = int('00221134', 16)
SourceofRefractiveMeasurementsCodeSequence = int('00221135', 16)
OphthalmicAxialLengthMeasurementModified = int('00221140', 16)
OphthalmicAxialLengthDataSourceCodeSequence = int('00221150', 16)
OphthalmicAxialLengthAcquisitionMethodCodeSequence = int('00221153', 16)
SignaltoNoiseRatio = int('00221155', 16)
OphthalmicAxialLengthDataSourceDescription = int('00221159', 16)
OphthalmicAxialLengthMeasurementsTotalLengthSequence = int('00221210', 16)
OphthalmicAxialLengthMeasurementsSegmentalLengthSequence = int(
'00221211', 16)
OphthalmicAxialLengthMeasurementsLengthSummationSequence = int(
'00221212', 16)
UltrasoundOphthalmicAxialLengthMeasurementsSequence = int('00221220', 16)
OpticalOphthalmicAxialLengthMeasurementsSequence = int('00221225', 16)
UltrasoundSelectedOphthalmicAxialLengthSequence = int('00221230', 16)
OphthalmicAxialLengthSelectionMethodCodeSequence = int('00221250', 16)
OpticalSelectedOphthalmicAxialLengthSequence = int('00221255', 16)
SelectedSegmentalOphthalmicAxialLengthSequence = int('00221257', 16)
SelectedTotalOphthalmicAxialLengthSequence = int('00221260', 16)
OphthalmicAxialLengthQualityMetricSequence = int('00221262', 16)
OphthalmicAxialLengthQualityMetricTypeCodeSequence = int('00221265', 16)
OphthalmicAxialLengthQualityMetricTypeDescription = int('00221273', 16)
IntraocularLensCalculationsRightEyeSequence = int('00221300', 16)
IntraocularLensCalculationsLeftEyeSequence = int('00221310', 16)
ReferencedOphthalmicAxialLengthMeasurementQCImageSequence = int(
'00221330', 16)
OphthalmicMappingDeviceType = int('00221415', 16)
AcquisitionMethodCodeSequence = int('00221420', 16)
AcquisitionMethodAlgorithmSequence = int('00221423', 16)
OphthalmicThicknessMapTypeCodeSequence = int('00221436', 16)
OphthalmicThicknessMappingNormalsSequence = int('00221443', 16)
RetinalThicknessDefinitionCodeSequence = int('00221445', 16)
PixelValueMappingtoCodedConceptSequence = int('00221450', 16)
MappedPixelValue = int('00221452', 16)
PixelValueMappingExplanation = int('00221454', 16)
OphthalmicThicknessMapQualityThresholdSequence = int('00221458', 16)
OphthalmicThicknessMapThresholdQualityRating = int('00221460', 16)
AnatomicStructureReferencePoint = int('00221463', 16)
RegistrationtoLocalizerSequence = int('00221465', 16)
RegisteredLocalizerUnits = int('00221466', 16)
RegisteredLocalizerTopLeftHandCorner = int('00221467', 16)
RegisteredLocalizerBottomRightHandCorner = int('00221468', 16)
OphthalmicThicknessMapQualityRatingSequence = int('00221470', 16)
RelevantOPTAttributesSequence = int('00221472', 16)
TransformationMethodCodeSequence = int('00221512', 16)
TransformationAlgorithmSequence = int('00221513', 16)
OphthalmicAxialLengthMethod = int('00221515', 16)
OphthalmicFOV = int('00221517', 16)
TwoDimensionaltoThreeDimensionalMapSequence = int('00221518', 16)
WideFieldOphthalmicPhotographyQualityRatingSequence = int('00221525', 16)
WideFieldOphthalmicPhotographyQualityThresholdSequence = int(
'00221526', 16)
WideFieldOphthalmicPhotographyThresholdQualityRating = int('00221527', 16)
XCoordinatesCenterPixelViewAngle = int('00221528', 16)
YCoordinatesCenterPixelViewAngle = int('00221529', 16)
NumberofMapPoints = int('00221530', 16)
TwoDimensionaltoThreeDimensionalMapData = int('00221531', 16)
VisualFieldHorizontalExtent = int('00240010', 16)
VisualFieldVerticalExtent = int('00240011', 16)
VisualFieldShape = int('00240012', 16)
ScreeningTestModeCodeSequence = int('00240016', 16)
MaximumStimulusLuminance = int('00240018', 16)
BackgroundLuminance = int('00240020', 16)
StimulusColorCodeSequence = int('00240021', 16)
BackgroundIlluminationColorCodeSequence = int('00240024', 16)
StimulusArea = int('00240025', 16)
StimulusPresentationTime = int('00240028', 16)
FixationSequence = int('00240032', 16)
FixationMonitoringCodeSequence = int('00240033', 16)
VisualFieldCatchTrialSequence = int('00240034', 16)
FixationCheckedQuantity = int('00240035', 16)
PatientNotProperlyFixatedQuantity = int('00240036', 16)
PresentedVisualStimuliDataFlag = int('00240037', 16)
NumberofVisualStimuli = int('00240038', 16)
ExcessiveFixationLossesDataFlag = int('00240039', 16)
ExcessiveFixationLosses = int('00240040', 16)
StimuliRetestingQuantity = int('00240042', 16)
CommentsonPatientsPerformanceofVisualField = int('00240044', 16)
FalseNegativesEstimateFlag = int('00240045', 16)
FalseNegativesEstimate = int('00240046', 16)
NegativeCatchTrialsQuantity = int('00240048', 16)
FalseNegativesQuantity = int('00240050', 16)
ExcessiveFalseNegativesDataFlag = int('00240051', 16)
ExcessiveFalseNegatives = int('00240052', 16)
FalsePositivesEstimateFlag = int('00240053', 16)
FalsePositivesEstimate = int('00240054', 16)
CatchTrialsDataFlag = int('00240055', 16)
PositiveCatchTrialsQuantity = int('00240056', 16)
TestPointNormalsDataFlag = int('00240057', 16)
TestPointNormalsSequence = int('00240058', 16)
GlobalDeviationProbabilityNormalsFlag = int('00240059', 16)
FalsePositivesQuantity = int('00240060', 16)
ExcessiveFalsePositivesDataFlag = int('00240061', 16)
ExcessiveFalsePositives = int('00240062', 16)
VisualFieldTestNormalsFlag = int('00240063', 16)
ResultsNormalsSequence = int('00240064', 16)
AgeCorrectedSensitivityDeviationAlgorithmSequence = int('00240065', 16)
GlobalDeviationFromNormal = int('00240066', 16)
GeneralizedDefectSensitivityDeviationAlgorithmSequence = int(
'00240067', 16)
LocalizedDeviationFromNormal = int('00240068', 16)
PatientReliabilityIndicator = int('00240069', 16)
VisualFieldMeanSensitivity = int('00240070', 16)
GlobalDeviationProbability = int('00240071', 16)
LocalDeviationProbabilityNormalsFlag = int('00240072', 16)
LocalizedDeviationProbability = int('00240073', 16)
ShortTermFluctuationCalculated = int('00240074', 16)
ShortTermFluctuation = int('00240075', 16)
ShortTermFluctuationProbabilityCalculated = int('00240076', 16)
ShortTermFluctuationProbability = int('00240077', 16)
CorrectedLocalizedDeviationFromNormalCalculated = int('00240078', 16)
CorrectedLocalizedDeviationFromNormal = int('00240079', 16)
CorrectedLocalizedDeviationFromNormalProbabilityCalculated = int(
'00240080', 16)
CorrectedLocalizedDeviationFromNormalProbability = int('00240081', 16)
GlobalDeviationProbabilitySequence = int('00240083', 16)
LocalizedDeviationProbabilitySequence = int('00240085', 16)
FovealSensitivityMeasured = int('00240086', 16)
FovealSensitivity = int('00240087', 16)
VisualFieldTestDuration = int('00240088', 16)
VisualFieldTestPointSequence = int('00240089', 16)
VisualFieldTestPointXCoordinate = int('00240090', 16)
VisualFieldTestPointYCoordinate = int('00240091', 16)
AgeCorrectedSensitivityDeviationValue = int('00240092', 16)
StimulusResults = int('00240093', 16)
SensitivityValue = int('00240094', 16)
RetestStimulusSeen = int('00240095', 16)
RetestSensitivityValue = int('00240096', 16)
VisualFieldTestPointNormalsSequence = int('00240097', 16)
QuantifiedDefect = int('00240098', 16)
AgeCorrectedSensitivityDeviationProbabilityValue = int('00240100', 16)
GeneralizedDefectCorrectedSensitivityDeviationFlag = int('00240102', 16)
GeneralizedDefectCorrectedSensitivityDeviationValue = int('00240103', 16)
GeneralizedDefectCorrectedSensitivityDeviationProbabilityValue = int(
'00240104', 16)
MinimumSensitivityValue = int('00240105', 16)
BlindSpotLocalized = int('00240106', 16)
BlindSpotXCoordinate = int('00240107', 16)
BlindSpotYCoordinate = int('00240108', 16)
VisualAcuityMeasurementSequence = int('00240110', 16)
RefractiveParametersUsedonPatientSequence = int('00240112', 16)
MeasurementLaterality = int('00240113', 16)
OphthalmicPatientClinicalInformationLeftEyeSequence = int('00240114', 16)
OphthalmicPatientClinicalInformationRightEyeSequence = int('00240115', 16)
FovealPointNormativeDataFlag = int('00240117', 16)
FovealPointProbabilityValue = int('00240118', 16)
ScreeningBaselineMeasured = int('00240120', 16)
ScreeningBaselineMeasuredSequence = int('00240122', 16)
ScreeningBaselineType = int('00240124', 16)
ScreeningBaselineValue = int('00240126', 16)
AlgorithmSource = int('00240202', 16)
DataSetName = int('00240306', 16)
DataSetVersion = int('00240307', 16)
DataSetSource = int('00240308', 16)
DataSetDescription = int('00240309', 16)
VisualFieldTestReliabilityGlobalIndexSequence = int('00240317', 16)
VisualFieldGlobalResultsIndexSequence = int('00240320', 16)
DataObservationSequence = int('00240325', 16)
IndexNormalsFlag = int('00240338', 16)
IndexProbability = int('00240341', 16)
IndexProbabilitySequence = int('00240344', 16)
SamplesperPixel = int('00280002', 16)
SamplesperPixelUsed = int('00280003', 16)
PhotometricInterpretation = int('00280004', 16)
ImageDimensions = int('00280005', 16)
PlanarConfiguration = int('00280006', 16)
NumberofFrames = int('00280008', 16)
FrameIncrementPointer = int('00280009', 16)
FrameDimensionPointer = int('0028000A', 16)
Rows = int('00280010', 16)
Columns = int('00280011', 16)
Planes = int('00280012', 16)
UltrasoundColorDataPresent = int('00280014', 16)
PixelSpacing = int('00280030', 16)
ZoomFactor = int('00280031', 16)
ZoomCenter = int('00280032', 16)
PixelAspectRatio = int('00280034', 16)
ImageFormat = int('00280040', 16)
ManipulatedImage = int('00280050', 16)
CorrectedImage = int('00280051', 16)
CompressionRecognitionCode = int('0028005F', 16)
CompressionCode = int('00280060', 16)
CompressionOriginator = int('00280061', 16)
CompressionLabel = int('00280062', 16)
CompressionDescription = int('00280063', 16)
CompressionSequence = int('00280065', 16)
CompressionStepPointers = int('00280066', 16)
RepeatInterval = int('00280068', 16)
BitsGrouped = int('00280069', 16)
PerimeterTable = int('00280070', 16)
PerimeterValue = int('00280071', 16)
PredictorRows = int('00280080', 16)
PredictorColumns = int('00280081', 16)
PredictorConstants = int('00280082', 16)
BlockedPixels = int('00280090', 16)
BlockRows = int('00280091', 16)
BlockColumns = int('00280092', 16)
RowOverlap = int('00280093', 16)
ColumnOverlap = int('00280094', 16)
BitsAllocated = int('00280100', 16)
BitsStored = int('00280101', 16)
HighBit = int('00280102', 16)
PixelRepresentation = int('00280103', 16)
SmallestValidPixelValue = int('00280104', 16)
LargestValidPixelValue = int('00280105', 16)
SmallestImagePixelValue = int('00280106', 16)
LargestImagePixelValue = int('00280107', 16)
SmallestPixelValueinSeries = int('00280108', 16)
LargestPixelValueinSeries = int('00280109', 16)
SmallestImagePixelValueinPlane = int('00280110', 16)
LargestImagePixelValueinPlane = int('00280111', 16)
PixelPaddingValue = int('00280120', 16)
PixelPaddingRangeLimit = int('00280121', 16)
FloatPixelPaddingValue = int('00280122', 16)
DoubleFloatPixelPaddingValue = int('00280123', 16)
FloatPixelPaddingRangeLimit = int('00280124', 16)
DoubleFloatPixelPaddingRangeLimit = int('00280125', 16)
ImageLocation = int('00280200', 16)
QualityControlImage = int('00280300', 16)
BurnedInAnnotation = int('00280301', 16)
RecognizableVisualFeatures = int('00280302', 16)
LongitudinalTemporalInformationModified = int('00280303', 16)
ReferencedColorPaletteInstanceUID = int('00280304', 16)
TransformLabel = int('00280400', 16)
TransformVersionNumber = int('00280401', 16)
NumberofTransformSteps = int('00280402', 16)
SequenceofCompressedData = int('00280403', 16)
DetailsofCoefficients = int('00280404', 16)
DCTLabel = int('00280700', 16)
DataBlockDescription = int('00280701', 16)
DataBlock = int('00280702', 16)
NormalizationFactorFormat = int('00280710', 16)
ZonalMapNumberFormat = int('00280720', 16)
ZonalMapLocation = int('00280721', 16)
ZonalMapFormat = int('00280722', 16)
AdaptiveMapFormat = int('00280730', 16)
CodeNumberFormat = int('00280740', 16)
PixelSpacingCalibrationType = int('00280A02', 16)
PixelSpacingCalibrationDescription = int('00280A04', 16)
PixelIntensityRelationship = int('00281040', 16)
PixelIntensityRelationshipSign = int('00281041', 16)
WindowCenter = int('00281050', 16)
WindowWidth = int('00281051', 16)
RescaleIntercept = int('00281052', 16)
RescaleSlope = int('00281053', 16)
RescaleType = int('00281054', 16)
WindowCenterWidthExplanation = int('00281055', 16)
VOILUTFunction = int('00281056', 16)
GrayScale = int('00281080', 16)
RecommendedViewingMode = int('00281090', 16)
GrayLookupTableDescriptor = int('00281100', 16)
RedPaletteColorLookupTableDescriptor = int('00281101', 16)
GreenPaletteColorLookupTableDescriptor = int('00281102', 16)
BluePaletteColorLookupTableDescriptor = int('00281103', 16)
AlphaPaletteColorLookupTableDescriptor = int('00281104', 16)
LargeRedPaletteColorLookupTableDescriptor = int('00281111', 16)
LargeGreenPaletteColorLookupTableDescriptor = int('00281112', 16)
LargeBluePaletteColorLookupTableDescriptor = int('00281113', 16)
PaletteColorLookupTableUID = int('00281199', 16)
GrayLookupTableData = int('00281200', 16)
RedPaletteColorLookupTableData = int('00281201', 16)
GreenPaletteColorLookupTableData = int('00281202', 16)
BluePaletteColorLookupTableData = int('00281203', 16)
AlphaPaletteColorLookupTableData = int('00281204', 16)
LargeRedPaletteColorLookupTableData = int('00281211', 16)
LargeGreenPaletteColorLookupTableData = int('00281212', 16)
LargeBluePaletteColorLookupTableData = int('00281213', 16)
LargePaletteColorLookupTableUID = int('00281214', 16)
SegmentedRedPaletteColorLookupTableData = int('00281221', 16)
SegmentedGreenPaletteColorLookupTableData = int('00281222', 16)
SegmentedBluePaletteColorLookupTableData = int('00281223', 16)
SegmentedAlphaPaletteColorLookupTableData = int('00281224', 16)
BreastImplantPresent = int('00281300', 16)
PartialView = int('00281350', 16)
PartialViewDescription = int('00281351', 16)
PartialViewCodeSequence = int('00281352', 16)
SpatialLocationsPreserved = int('0028135A', 16)
DataFrameAssignmentSequence = int('00281401', 16)
DataPathAssignment = int('00281402', 16)
BitsMappedtoColorLookupTable = int('00281403', 16)
BlendingLUT1Sequence = int('00281404', 16)
BlendingLUT1TransferFunction = int('00281405', 16)
BlendingWeightConstant = int('00281406', 16)
BlendingLookupTableDescriptor = int('00281407', 16)
BlendingLookupTableData = int('00281408', 16)
EnhancedPaletteColorLookupTableSequence = int('0028140B', 16)
BlendingLUT2Sequence = int('0028140C', 16)
BlendingLUT2TransferFunction = int('0028140D', 16)
DataPathID = int('0028140E', 16)
RGBLUTTransferFunction = int('0028140F', 16)
AlphaLUTTransferFunction = int('00281410', 16)
ICCProfile = int('00282000', 16)
ColorSpace = int('00282002', 16)
LossyImageCompression = int('00282110', 16)
LossyImageCompressionRatio = int('00282112', 16)
LossyImageCompressionMethod = int('00282114', 16)
ModalityLUTSequence = int('00283000', 16)
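A minimal sketch (the `tag_parts` helper is hypothetical, not part of the original module): each constant above packs a DICOM (group, element) pair into a single 32-bit integer, so the pair can be recovered by shifting and masking.

```python
# One constant from the table above: BitsAllocated is DICOM tag (0028,0100)
BitsAllocated = int('00280100', 16)

def tag_parts(tag):
    """Split a packed DICOM tag int into its (group, element) pair."""
    return (tag >> 16) & 0xFFFF, tag & 0xFFFF

group, element = tag_parts(BitsAllocated)
print('(%04X,%04X)' % (group, element))  # -> (0028,0100)
```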
"""
Scattering GUI
"""
import sys, os
import matplotlib.pyplot as plt # Plotting
import numpy as np
if sys.version_info[0] < 3:
import Tkinter as tk
else:
import tkinter as tk
from .. import functions_general as fg
from .. import functions_crystallography as fc
from .basic_widgets import StringViewer
from .basic_widgets import (TF, BF, SF, LF, HF,
bkg, ety, btn, opt, btn2,
btn_active, opt_active, txtcol,
btn_txt, ety_txt, opt_txt)
class ScatteringGui:
"""
Simulate scattering of various forms
"""
def __init__(self, xtl):
"""Initialise"""
self.xtl = xtl
# Create tkinter instance
self.root = tk.Tk()
self.root.wm_title('Scattering %s' % xtl.name)
# self.root.minsize(width=640, height=480)
self.root.maxsize(width=self.root.winfo_screenwidth(), height=self.root.winfo_screenheight())
self.root.tk_setPalette(
background=bkg,
foreground=txtcol,
activeBackground=opt_active,
activeForeground=txtcol)
frame = tk.Frame(self.root)
frame.pack(side=tk.LEFT, anchor=tk.N)
# Variables
self.energy_kev = tk.DoubleVar(frame, 8.0)
self.edge = tk.StringVar(frame, 'Edge')
self.type = tk.StringVar(frame, 'X-Ray')
self.orientation = tk.StringVar(frame, 'None')
self.direction_h = tk.IntVar(frame, 0)
self.direction_k = tk.IntVar(frame, 0)
self.direction_l = tk.IntVar(frame, 1)
self.theta_offset = tk.DoubleVar(frame, 0.0)
self.theta_min = tk.DoubleVar(frame, -180.0)
self.theta_max = tk.DoubleVar(frame, 180.0)
self.twotheta_min = tk.DoubleVar(frame, -180.0)
self.twotheta_max = tk.DoubleVar(frame, 180.0)
self.powder_units = tk.StringVar(frame, 'Two-Theta')
self.powderaverage = tk.BooleanVar(frame, True)
self.powder_width = tk.DoubleVar(frame, 0.01)
self.hkl_check = tk.StringVar(frame, '0 0 1')
self.hkl_result = tk.StringVar(frame, 'I:%10.0f TTH:%8.2f' % (0, 0))
self.val_i = tk.IntVar(frame, 0)
self.hkl_magnetic = tk.StringVar(frame, '0 0 1')
self.azim_zero = tk.StringVar(frame, '1 0 0')
self.isres = tk.BooleanVar(frame, True)
self.psival = tk.DoubleVar(frame, 0.0)
self.polval = tk.StringVar(frame, u'\u03c3-\u03c0')
self.resF0 = tk.DoubleVar(frame, 0.0)
self.resF1 = tk.DoubleVar(frame, 1.0)
self.resF2 = tk.DoubleVar(frame, 0.0)
self.magresult = tk.StringVar(frame, 'I = --')
# X-ray edges:
self.xr_edges, self.xr_energies = self.xtl.Properties.xray_edges()
self.xr_edges.insert(0, 'Cu Ka')
self.xr_edges.insert(1, 'Mo Ka')
self.xr_energies.insert(0, fg.Cu)
self.xr_energies.insert(1, fg.Mo)
line = tk.Frame(frame)
line.pack(side=tk.TOP, fill=tk.X, pady=5)
var = tk.Label(line, text='Scattering', font=LF)
var.pack(side=tk.LEFT)
var = tk.Button(line, text='Supernova', font=BF, command=self.fun_supernova, bg=btn,
activebackground=btn_active)
var.pack(side=tk.RIGHT)
var = tk.Button(line, text='Wish', font=BF, command=self.fun_wish, bg=btn, activebackground=btn_active)
var.pack(side=tk.RIGHT)
var = tk.Button(line, text='I16', font=BF, command=self.fun_i16, bg=btn, activebackground=btn_active)
var.pack(side=tk.RIGHT)
# ---Settings---
box = tk.LabelFrame(frame, text='Settings')
box.pack(side=tk.TOP, fill=tk.BOTH, padx=5, pady=5)
# Energy
line = tk.Frame(box)
line.pack(side=tk.TOP, fill=tk.X, pady=5)
var = tk.Label(line, text='Energy (keV):', font=SF)
var.pack(side=tk.LEFT)
var = tk.OptionMenu(line, self.edge, *self.xr_edges, command=self.fun_edge)
var.config(font=SF, width=5, bg=opt, activebackground=opt_active)
var["menu"].config(bg=opt, bd=0, activebackground=opt_active)
var.pack(side=tk.LEFT)
var = tk.Entry(line, textvariable=self.energy_kev, font=TF, width=8, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
# Type
line = tk.Frame(box)
line.pack(side=tk.TOP, fill=tk.X, pady=5)
types = ['X-Ray', 'Neutron', 'XRay Magnetic', 'Neutron Magnetic', 'XRay Resonant', 'XRay Dispersion']
var = tk.Label(line, text='Type:', font=SF)
var.pack(side=tk.LEFT)
var = tk.OptionMenu(line, self.type, *types)
var.config(font=SF, width=10, bg=opt, activebackground=opt_active)
var["menu"].config(bg=opt, bd=0, activebackground=opt_active)
var.pack(side=tk.LEFT)
# Units
xaxistypes = ['two-theta', 'd-spacing', 'Q']
var = tk.Label(line, text='Units:', font=SF)
var.pack(side=tk.LEFT)
var = tk.OptionMenu(line, self.powder_units, *xaxistypes)
var.config(font=SF, width=10, bg=opt, activebackground=opt_active)
var["menu"].config(bg=opt, bd=0, activebackground=opt_active)
var.pack(side=tk.LEFT)
# Orientation
line = tk.Frame(box)
line.pack(side=tk.TOP, fill=tk.X, pady=5)
var = tk.Label(line, text='Geometry:', font=SF)
var.pack(side=tk.LEFT)
orients = ['None', 'Reflection', 'Transmission']
var = tk.OptionMenu(line, self.orientation, *orients)
var.config(font=SF, width=10, bg=opt, activebackground=opt_active)
var["menu"].config(bg=opt, bd=0, activebackground=opt_active)
var.pack(side=tk.LEFT)
# Direction
var = tk.Label(line, text='Direction:', font=SF)
var.pack(side=tk.LEFT)
var = tk.Entry(line, textvariable=self.direction_h, font=TF, width=2, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
var = tk.Entry(line, textvariable=self.direction_k, font=TF, width=2, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
var = tk.Entry(line, textvariable=self.direction_l, font=TF, width=2, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
# Theta offset
line = tk.Frame(box)
line.pack(side=tk.TOP, fill=tk.X, pady=5)
var = tk.Label(line, text='Offset:', font=SF)
var.pack(side=tk.LEFT)
var = tk.Entry(line, textvariable=self.theta_offset, font=TF, width=5, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
# Theta min
var = tk.Label(line, text='Min Theta:', font=SF)
var.pack(side=tk.LEFT)
var = tk.Entry(line, textvariable=self.theta_min, font=TF, width=5, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
# Theta max
var = tk.Label(line, text='Max Theta:', font=SF)
var.pack(side=tk.LEFT)
var = tk.Entry(line, textvariable=self.theta_max, font=TF, width=5, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
# TwoTheta min
line = tk.Frame(box)
line.pack(side=tk.TOP, fill=tk.X, pady=5)
var = tk.Label(line, text='Min TwoTheta:', font=SF)
var.pack(side=tk.LEFT)
var = tk.Entry(line, textvariable=self.twotheta_min, font=TF, width=5, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
# TwoTheta max
var = tk.Entry(line, textvariable=self.twotheta_max, font=TF, width=5, bg=ety, fg=ety_txt)
var.pack(side=tk.RIGHT)
var = tk.Label(line, text='Max TwoTheta:', font=SF)
var.pack(side=tk.RIGHT)
# Powder width
line = tk.Frame(box)
line.pack(side=tk.TOP, fill=tk.X, pady=5)
var = tk.Label(line, text='Powder peak width:', font=SF)
var.pack(side=tk.LEFT, padx=3)
var = tk.Entry(line, textvariable=self.powder_width, font=TF, width=5, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
# Powder average tickbox
var = tk.Checkbutton(line, text='Powder average', variable=self.powderaverage, font=SF)
var.pack(side=tk.LEFT, padx=6)
# ---Intensities---
box = tk.LabelFrame(frame, text='Intensities')
box.pack(side=tk.TOP, fill=tk.BOTH, padx=5, pady=5)
line = tk.Frame(box)
line.pack(side=tk.TOP, fill=tk.X, pady=5)
var = tk.Button(line, text='Display Intensities', font=BF, command=self.fun_intensities, bg=btn2,
activebackground=btn_active)
var.pack(side=tk.LEFT)
var = tk.Button(line, text='Plot Powder', font=BF, command=self.fun_powder, bg=btn,
activebackground=btn_active)
var.pack(side=tk.LEFT)
# hkl check
line = tk.Frame(box)
line.pack(side=tk.TOP, fill=tk.X, pady=5)
hklbox = tk.LabelFrame(line, text='Quick Check')
hklbox.pack(side=tk.RIGHT)
var = tk.Entry(hklbox, textvariable=self.hkl_check, font=TF, width=6, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
var.bind('<Return>', self.fun_hklcheck)
var.bind('<KP_Enter>', self.fun_hklcheck)
var = tk.Label(hklbox, textvariable=self.hkl_result, font=TF, width=22)
var.pack(side=tk.LEFT)
var = tk.Button(hklbox, text='Check HKL', font=BF, command=self.fun_hklcheck, bg=btn,
activebackground=btn_active)
var.pack(side=tk.LEFT, pady=2)
# ---Planes---
box = tk.LabelFrame(frame, text='Reciprocal Space Planes')
box.pack(side=tk.TOP, fill=tk.BOTH, padx=5, pady=5)
line = tk.Frame(box)
line.pack(side=tk.TOP, pady=5)
# ---HKL Planes---
# i value
var = tk.Label(line, text='i:', font=SF)
var.pack(side=tk.LEFT)
var = tk.Entry(line, textvariable=self.val_i, font=TF, width=3, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
# directions
vframe = tk.Frame(line)
vframe.pack(side=tk.LEFT, padx=3)
var = tk.Button(vframe, text='HKi', font=BF, command=self.fun_hki, width=5, bg=btn, activebackground=btn_active)
var.pack()
var = tk.Button(vframe, text='HiL', font=BF, command=self.fun_hil, width=5, bg=btn, activebackground=btn_active)
var.pack()
vframe = tk.Frame(line)
vframe.pack(side=tk.LEFT)
var = tk.Button(vframe, text='iKL', font=BF, command=self.fun_ikl, width=5, bg=btn, activebackground=btn_active)
var.pack()
var = tk.Button(vframe, text='HHi', font=BF, command=self.fun_hhi, width=5, bg=btn, activebackground=btn_active)
var.pack()
# ---X-ray Magnetic scattering----
if np.any(self.xtl.Structure.mxmymz()):
box = tk.LabelFrame(frame, text='X-Ray Magnetic Scattering')
box.pack(side=tk.TOP, fill=tk.BOTH, padx=3)
line = tk.Frame(box)
line.pack(side=tk.TOP, fill=tk.BOTH, pady=5)
# Resonant HKL, azimuthal reference
vframe = tk.Frame(line)
vframe.pack(side=tk.LEFT, fill=tk.Y, padx=3)
hframe = tk.Frame(vframe)
hframe.pack()
var = tk.Label(hframe, text=' HKL:', font=SF, width=11)
var.pack(side=tk.LEFT)
var = tk.Entry(hframe, textvariable=self.hkl_magnetic, font=TF, width=6, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
var.bind('<Return>', self.fun_hklmag)
var.bind('<KP_Enter>', self.fun_hklmag)
hframe = tk.Frame(vframe)
hframe.pack()
var = tk.Label(vframe, text='Azim. Ref.:', font=SF, width=11)
var.pack(side=tk.LEFT)
var = tk.Entry(vframe, textvariable=self.azim_zero, font=TF, width=6, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
# Resonant value
vframe = tk.Frame(line)
vframe.pack(side=tk.LEFT, fill=tk.Y, padx=3)
hframe = tk.Frame(vframe)
hframe.pack()
var = tk.Label(hframe, text='F0:', font=SF)
var.pack(side=tk.LEFT)
var = tk.Entry(hframe, textvariable=self.resF0, font=TF, width=3, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
hframe = tk.Frame(vframe)
hframe.pack()
var = tk.Label(hframe, text='F1:', font=SF)
var.pack(side=tk.LEFT)
var = tk.Entry(hframe, textvariable=self.resF1, font=TF, width=3, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
hframe = tk.Frame(vframe)
hframe.pack()
var = tk.Label(hframe, text='F2:', font=SF)
var.pack(side=tk.LEFT)
var = tk.Entry(hframe, textvariable=self.resF2, font=TF, width=3, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
vframe = tk.Frame(line)
vframe.pack(side=tk.LEFT, fill=tk.Y, padx=3)
# Polarisation
poltypes = [u'\u03c3-\u03c3', u'\u03c3-\u03c0', u'\u03c0-\u03c3', u'\u03c0-\u03c0']
hframe = tk.Frame(vframe)
hframe.pack()
var = tk.Label(hframe, text='Polarisation:', font=SF)
var.pack(side=tk.LEFT)
var = tk.OptionMenu(hframe, self.polval, *poltypes)
var.config(font=SF, width=5, bg=opt, activebackground=opt_active)
var["menu"].config(bg=opt, bd=0, activebackground=opt_active)
var.pack(side=tk.LEFT)
hframe = tk.Frame(vframe)
hframe.pack()
# Resonant tickbox
var = tk.Checkbutton(hframe, text='Resonant', variable=self.isres, font=SF)
var.pack(side=tk.LEFT, padx=6)
# psi
var = tk.Label(hframe, text='psi:', font=SF, width=4)
var.pack(side=tk.LEFT)
var = tk.Entry(hframe, textvariable=self.psival, font=TF, width=4, bg=ety, fg=ety_txt)
var.pack(side=tk.LEFT)
var.bind('<Return>', self.fun_hklmag)
var.bind('<KP_Enter>', self.fun_hklmag)
line = tk.Frame(box)
line.pack(side=tk.TOP, fill=tk.BOTH, pady=5)
vframe = tk.Frame(line)
vframe.pack(side=tk.LEFT, fill=tk.Y, padx=3)
# Mag. Inten button
var = tk.Button(vframe, text='Calc. Mag. Inten.', font=BF, command=self.fun_hklmag, bg=btn,
activebackground=btn_active)
var.pack(side=tk.LEFT, padx=5)
# Magnetic Result
var = tk.Label(vframe, textvariable=self.magresult, font=SF, width=12)
var.pack(side=tk.LEFT, fill=tk.Y)
# Azimuth Button
var = tk.Button(line, text='Simulate\n Azimuth', font=BF, command=self.fun_azimuth, width=7, bg=btn,
activebackground=btn_active)
var.pack(side=tk.RIGHT)
def fun_set(self):
""""Set gui parameters from crystal"""
self.type.set(self.xtl._scattering_type)
# self.energy_kev.set(8)
self.theta_offset.set(self.xtl._scattering_theta_offset)
self.theta_min.set(self.xtl._scattering_min_theta)
self.theta_max.set(self.xtl._scattering_max_theta)
self.twotheta_min.set(self.xtl._scattering_min_two_theta)
self.twotheta_max.set(self.xtl._scattering_max_two_theta)
if self.orientation.get() == 'Reflection':
self.direction_h.set(self.xtl._scattering_specular_direction[0])
self.direction_k.set(self.xtl._scattering_specular_direction[1])
self.direction_l.set(self.xtl._scattering_specular_direction[2])
else:
self.direction_h.set(self.xtl._scattering_parallel_direction[0])
self.direction_k.set(self.xtl._scattering_parallel_direction[1])
self.direction_l.set(self.xtl._scattering_parallel_direction[2])
def fun_get(self):
"""Set crytal parameters from gui"""
scat = self.xtl.Scatter
scat._scattering_type = self.type.get()
scat._energy_kev = self.energy_kev.get()
scat._scattering_theta_offset = self.theta_offset.get()
scat._scattering_min_theta = self.theta_min.get()
scat._scattering_max_theta = self.theta_max.get()
scat._scattering_min_twotheta = self.twotheta_min.get()
scat._scattering_max_twotheta = self.twotheta_max.get()
scat._powder_units = self.powder_units.get()
if self.orientation.get() == 'Reflection':
scat._scattering_specular_direction[0] = self.direction_h.get()
scat._scattering_specular_direction[1] = self.direction_k.get()
scat._scattering_specular_direction[2] = self.direction_l.get()
elif self.orientation.get() == 'Transmission':
scat._scattering_parallel_direction[0] = self.direction_h.get()
scat._scattering_parallel_direction[1] = self.direction_k.get()
scat._scattering_parallel_direction[2] = self.direction_l.get()
def fun_i16(self):
""""Add I16 parameters"""
self.type.set('X-Ray')
self.energy_kev.set(8.0)
self.edge.set('Edge')
self.powder_units.set('Two-Theta')
self.powderaverage.set(False)
self.orientation.set('Reflection')
self.theta_offset.set(0.0)
self.theta_min.set(-20.0)
self.theta_max.set(150.0)
self.twotheta_min.set(0.0)
self.twotheta_max.set(130.0)
def fun_wish(self):
""""Add Wish parameters"""
self.type.set('Neutron')
self.energy_kev.set(17.7)
self.edge.set('Edge')
self.powder_units.set('d-spacing')
self.orientation.set('None')
self.theta_offset.set(0.0)
self.theta_min.set(-180.0)
self.theta_max.set(180.0)
self.twotheta_min.set(10.0)
self.twotheta_max.set(170.0)
def fun_supernova(self):
"""Add SuperNova parameters"""
self.type.set('X-Ray')
idx = self.xr_edges.index('Mo Ka')
self.edge.set('Mo Ka')
self.energy_kev.set(self.xr_energies[idx])
self.powder_units.set('Two-Theta')
self.orientation.set('None')
self.theta_offset.set(0.0)
self.theta_min.set(-180.0)
self.theta_max.set(180.0)
self.twotheta_min.set(-170.0)
self.twotheta_max.set(170.0)
def fun_edge(self, event=None):
"""X-ray edge option menu"""
edge = self.edge.get()
if self.edge.get() in self.xr_edges:
idx = self.xr_edges.index(edge)
self.energy_kev.set(self.xr_energies[idx])
def fun_hklcheck(self, event=None):
""""Show single hkl intensity"""
self.fun_get()
hkl = self.hkl_check.get()
hkl = hkl.replace(',', ' ') # remove commas
hkl = hkl.replace('(', '').replace(')', '') # remove brackets
hkl = hkl.replace('[', '').replace(']', '') # remove brackets
hkl = np.fromstring(hkl, sep=' ')
I = self.xtl.Scatter.intensity(hkl)
unit = self.powder_units.get()
energy = self.energy_kev.get()
tth = self.xtl.Cell.tth(hkl, energy)
if unit.lower() in ['tth', 'angle', 'twotheta', 'theta', 'two-theta']:
self.hkl_result.set('I:%10.0f TTH:%8.2f' % (I, tth))
elif unit.lower() in ['d', 'dspace', 'd-spacing', 'dspacing']:
q = fc.calqmag(tth, energy)
d = fc.q2dspace(q)
self.hkl_result.set(u'I:%10.0f d:%8.2f \u00c5' % (I, d))
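A self-contained sketch of the hkl-string clean-up performed in `fun_hklcheck` above; the `parse_hkl` name is an assumption for illustration, and `np.array(text.split(), ...)` is used here in place of `np.fromstring` purely to keep the sketch standalone.

```python
import numpy as np

def parse_hkl(text):
    """Parse an hkl string such as '(1, 0, 0)' or '[0 0 1]' into a float array,
    mirroring the clean-up steps in ScatteringGui.fun_hklcheck."""
    text = text.replace(',', ' ')                   # remove commas
    text = text.replace('(', '').replace(')', '')   # remove round brackets
    text = text.replace('[', '').replace(']', '')   # remove square brackets
    return np.array(text.split(), dtype=float)

print(parse_hkl('(1, 0, 0)'))  # -> [1. 0. 0.]
```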
# build/PureCloudPlatformClientV2/models/coaching_appointment_response.py
# coding: utf-8
"""
Copyright 2016 SmartBear Software
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Ref: https://github.com/swagger-api/swagger-codegen
"""
from pprint import pformat
from six import iteritems
import re
import json
from ..utils import sanitize_for_serialization
class CoachingAppointmentResponse(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
def __init__(self):
"""
CoachingAppointmentResponse - a model defined in Swagger
:param dict swaggerTypes: The key is attribute name
and the value is attribute type.
:param dict attributeMap: The key is attribute name
and the value is json key in definition.
"""
self.swagger_types = {
'id': 'str',
'name': 'str',
'description': 'str',
'date_start': 'datetime',
'length_in_minutes': 'int',
'status': 'str',
'facilitator': 'UserReference',
'attendees': 'list[UserReference]',
'created_by': 'UserReference',
'date_created': 'datetime',
'modified_by': 'UserReference',
'date_modified': 'datetime',
'conversations': 'list[ConversationReference]',
'documents': 'list[DocumentReference]',
'is_overdue': 'bool',
'self_uri': 'str'
}
self.attribute_map = {
'id': 'id',
'name': 'name',
'description': 'description',
'date_start': 'dateStart',
'length_in_minutes': 'lengthInMinutes',
'status': 'status',
'facilitator': 'facilitator',
'attendees': 'attendees',
'created_by': 'createdBy',
'date_created': 'dateCreated',
'modified_by': 'modifiedBy',
'date_modified': 'dateModified',
'conversations': 'conversations',
'documents': 'documents',
'is_overdue': 'isOverdue',
'self_uri': 'selfUri'
}
self._id = None
self._name = None
self._description = None
self._date_start = None
self._length_in_minutes = None
self._status = None
self._facilitator = None
self._attendees = None
self._created_by = None
self._date_created = None
self._modified_by = None
self._date_modified = None
self._conversations = None
self._documents = None
self._is_overdue = None
self._self_uri = None
@property
def id(self):
"""
Gets the id of this CoachingAppointmentResponse.
The globally unique identifier for the object.
:return: The id of this CoachingAppointmentResponse.
:rtype: str
"""
return self._id
@id.setter
def id(self, id):
"""
Sets the id of this CoachingAppointmentResponse.
The globally unique identifier for the object.
:param id: The id of this CoachingAppointmentResponse.
:type: str
"""
self._id = id
@property
def name(self):
"""
Gets the name of this CoachingAppointmentResponse.
The name of coaching appointment
:return: The name of this CoachingAppointmentResponse.
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""
Sets the name of this CoachingAppointmentResponse.
The name of coaching appointment
:param name: The name of this CoachingAppointmentResponse.
:type: str
"""
self._name = name
@property
def description(self):
"""
Gets the description of this CoachingAppointmentResponse.
The description of coaching appointment
:return: The description of this CoachingAppointmentResponse.
:rtype: str
"""
return self._description
@description.setter
def description(self, description):
"""
Sets the description of this CoachingAppointmentResponse.
The description of coaching appointment
:param description: The description of this CoachingAppointmentResponse.
:type: str
"""
self._description = description
@property
def date_start(self):
"""
Gets the date_start of this CoachingAppointmentResponse.
The date/time the coaching appointment starts. Date time is represented as an ISO-8601 string. For example: yyyy-MM-ddTHH:mm:ss[.mmm]Z
:return: The date_start of this CoachingAppointmentResponse.
:rtype: datetime
"""
return self._date_start
@date_start.setter
def date_start(self, date_start):
"""
Sets the date_start of this CoachingAppointmentResponse.
The date/time the coaching appointment starts. Date time is represented as an ISO-8601 string. For example: yyyy-MM-ddTHH:mm:ss[.mmm]Z
:param date_start: The date_start of this CoachingAppointmentResponse.
:type: datetime
"""
self._date_start = date_start
@property
def length_in_minutes(self):
"""
Gets the length_in_minutes of this CoachingAppointmentResponse.
The duration of coaching appointment in minutes
:return: The length_in_minutes of this CoachingAppointmentResponse.
:rtype: int
"""
return self._length_in_minutes
@length_in_minutes.setter
def length_in_minutes(self, length_in_minutes):
"""
Sets the length_in_minutes of this CoachingAppointmentResponse.
The duration of coaching appointment in minutes
:param length_in_minutes: The length_in_minutes of this CoachingAppointmentResponse.
:type: int
"""
self._length_in_minutes = length_in_minutes
@property
def status(self):
"""
Gets the status of this CoachingAppointmentResponse.
The status of coaching appointment
:return: The status of this CoachingAppointmentResponse.
:rtype: str
"""
return self._status
@status.setter
def status(self, status):
"""
Sets the status of this CoachingAppointmentResponse.
The status of coaching appointment
:param status: The status of this CoachingAppointmentResponse.
:type: str
"""
allowed_values = ["Scheduled", "InProgress", "Completed", "InvalidSchedule"]
if status.lower() not in map(str.lower, allowed_values):
# print("Invalid value for status -> " + status)
self._status = "outdated_sdk_version"
else:
self._status = status
@property
def facilitator(self):
"""
Gets the facilitator of this CoachingAppointmentResponse.
The facilitator of coaching appointment
:return: The facilitator of this CoachingAppointmentResponse.
:rtype: UserReference
"""
return self._facilitator
@facilitator.setter
def facilitator(self, facilitator):
"""
Sets the facilitator of this CoachingAppointmentResponse.
The facilitator of coaching appointment
:param facilitator: The facilitator of this CoachingAppointmentResponse.
:type: UserReference
"""
self._facilitator = facilitator
@property
def attendees(self):
"""
Gets the attendees of this CoachingAppointmentResponse.
The list of attendees attending the coaching
:return: The attendees of this CoachingAppointmentResponse.
:rtype: list[UserReference]
"""
return self._attendees
@attendees.setter
def attendees(self, attendees):
"""
Sets the attendees of this CoachingAppointmentResponse.
The list of attendees attending the coaching
:param attendees: The attendees of this CoachingAppointmentResponse.
:type: list[UserReference]
"""
self._attendees = attendees
@property
def created_by(self):
"""
Gets the created_by of this CoachingAppointmentResponse.
The user who created the coaching appointment
:return: The created_by of this CoachingAppointmentResponse.
:rtype: UserReference
"""
return self._created_by
@created_by.setter
def created_by(self, created_by):
"""
Sets the created_by of this CoachingAppointmentResponse.
The user who created the coaching appointment
:param created_by: The created_by of this CoachingAppointmentResponse.
:type: UserReference
"""
self._created_by = created_by
@property
def date_created(self):
"""
Gets the date_created of this CoachingAppointmentResponse.
The date/time the coaching appointment was created. Date time is represented as an ISO-8601 string. For example: yyyy-MM-ddTHH:mm:ss[.mmm]Z
:return: The date_created of this CoachingAppointmentResponse.
:rtype: datetime
"""
return self._date_created
@date_created.setter
def date_created(self, date_created):
"""
Sets the date_created of this CoachingAppointmentResponse.
The date/time the coaching appointment was created. Date time is represented as an ISO-8601 string. For example: yyyy-MM-ddTHH:mm:ss[.mmm]Z
:param date_created: The date_created of this CoachingAppointmentResponse.
:type: datetime
"""
self._date_created = date_created
@property
def modified_by(self):
"""
Gets the modified_by of this CoachingAppointmentResponse.
The last user to modify the coaching appointment
:return: The modified_by of this CoachingAppointmentResponse.
:rtype: UserReference
"""
return self._modified_by
@modified_by.setter
def modified_by(self, modified_by):
"""
Sets the modified_by of this CoachingAppointmentResponse.
The last user to modify the coaching appointment
:param modified_by: The modified_by of this CoachingAppointmentResponse.
:type: UserReference
"""
self._modified_by = modified_by
@property
def date_modified(self):
"""
Gets the date_modified of this CoachingAppointmentResponse.
The date/time the coaching appointment was last modified. Date time is represented as an ISO-8601 string. For example: yyyy-MM-ddTHH:mm:ss[.mmm]Z
:return: The date_modified of this CoachingAppointmentResponse.
:rtype: datetime
"""
return self._date_modified
@date_modified.setter
def date_modified(self, date_modified):
"""
Sets the date_modified of this CoachingAppointmentResponse.
The date/time the coaching appointment was last modified. Date time is represented as an ISO-8601 string. For example: yyyy-MM-ddTHH:mm:ss[.mmm]Z
:param date_modified: The date_modified of this CoachingAppointmentResponse.
:type: datetime
"""
self._date_modified = date_modified
@property
def conversations(self):
"""
Gets the conversations of this CoachingAppointmentResponse.
The list of conversations associated with coaching appointment.
:return: The conversations of this CoachingAppointmentResponse.
:rtype: list[ConversationReference]
"""
return self._conversations
@conversations.setter
def conversations(self, conversations):
"""
Sets the conversations of this CoachingAppointmentResponse.
The list of conversations associated with coaching appointment.
:param conversations: The conversations of this CoachingAppointmentResponse.
:type: list[ConversationReference]
"""
self._conversations = conversations
@property
def documents(self):
"""
Gets the documents of this CoachingAppointmentResponse.
The list of documents associated with coaching appointment.
:return: The documents of this CoachingAppointmentResponse.
:rtype: list[DocumentReference]
"""
return self._documents
@documents.setter
def documents(self, documents):
"""
Sets the documents of this CoachingAppointmentResponse.
The list of documents associated with coaching appointment.
:param documents: The documents of this CoachingAppointmentResponse.
:type: list[DocumentReference]
"""
self._documents = documents
@property
def is_overdue(self):
"""
Gets the is_overdue of this CoachingAppointmentResponse.
Whether the appointment is overdue.
:return: The is_overdue of this CoachingAppointmentResponse.
:rtype: bool
"""
return self._is_overdue
@is_overdue.setter
def is_overdue(self, is_overdue):
"""
Sets the is_overdue of this CoachingAppointmentResponse.
Whether the appointment is overdue.
:param is_overdue: The is_overdue of this CoachingAppointmentResponse.
:type: bool
"""
self._is_overdue = is_overdue
@property
def self_uri(self):
"""
Gets the self_uri of this CoachingAppointmentResponse.
The URI for this object
:return: The self_uri of this CoachingAppointmentResponse.
:rtype: str
"""
return self._self_uri
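A minimal self-contained sketch of the case-insensitive enum guard used by the `status` setter above (the `guard_status` function is a stand-in for illustration): unknown values are mapped to a sentinel so clients built against an older SDK fail soft instead of raising.

```python
def guard_status(status):
    """Accept a status case-insensitively against the allowed list;
    map anything else to the 'outdated_sdk_version' sentinel."""
    allowed_values = ["Scheduled", "InProgress", "Completed", "InvalidSchedule"]
    if status.lower() not in map(str.lower, allowed_values):
        return "outdated_sdk_version"
    return status

print(guard_status('inprogress'))  # -> inprogress (case-insensitive match)
print(guard_status('Cancelled'))   # -> outdated_sdk_version
```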
24.538, 22.591, 24.580,
VERTEX, 24.490, 22.409, 24.688,
VERTEX, 24.433, 22.238, 24.808,
VERTEX, 24.417, 22.200, 24.839,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.088, 24.249, 29.529,
VERTEX, 21.225, 24.388, 29.466,
VERTEX, 21.373, 24.538, 29.397,
VERTEX, 21.521, 24.687, 29.326,
VERTEX, 21.674, 24.725, 29.254,
VERTEX, 21.850, 24.769, 29.170,
VERTEX, 22.026, 24.814, 29.086,
VERTEX, 22.201, 24.858, 29.001,
VERTEX, 22.375, 24.904, 28.914,
VERTEX, 22.549, 24.949, 28.827,
VERTEX, 22.549, 24.787, 28.832,
VERTEX, 22.549, 24.593, 28.838,
VERTEX, 22.548, 24.399, 28.843,
VERTEX, 22.548, 24.205, 28.847,
VERTEX, 22.547, 24.012, 28.850,
VERTEX, 22.545, 23.818, 28.852,
VERTEX, 22.543, 23.624, 28.852,
VERTEX, 22.541, 23.431, 28.852,
VERTEX, 22.414, 23.310, 28.911,
VERTEX, 22.278, 23.183, 28.974,
VERTEX, 22.141, 23.057, 29.035,
VERTEX, 22.003, 22.931, 29.096,
VERTEX, 21.864, 22.806, 29.156,
VERTEX, 21.725, 22.681, 29.216,
VERTEX, 21.637, 22.603, 29.253,
VERTEX, 21.576, 22.790, 29.288,
VERTEX, 21.514, 22.976, 29.323,
VERTEX, 21.451, 23.163, 29.357,
VERTEX, 21.387, 23.349, 29.389,
VERTEX, 21.323, 23.535, 29.421,
VERTEX, 21.259, 23.721, 29.452,
VERTEX, 21.193, 23.908, 29.482,
VERTEX, 21.128, 24.094, 29.512,
VERTEX, 21.080, 24.228, 29.533,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.169, 24.591, 29.622,
VERTEX, 21.215, 24.604, 29.583,
VERTEX, 21.369, 24.645, 29.455,
VERTEX, 21.521, 24.687, 29.326,
VERTEX, 21.509, 24.674, 29.332,
VERTEX, 21.365, 24.530, 29.401,
VERTEX, 21.221, 24.384, 29.468,
VERTEX, 21.088, 24.249, 29.529,
VERTEX, 21.075, 24.285, 29.561,
VERTEX, 21.069, 24.413, 29.630,
VERTEX, 21.064, 24.515, 29.686,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.088, 24.249, 29.529,
VERTEX, 21.081, 24.228, 29.533,
VERTEX, 21.079, 24.244, 29.541,
VERTEX, 21.075, 24.284, 29.559,
END,
BEGIN, LINE_LOOP,
VERTEX, 20.776, 23.361, 29.318,
VERTEX, 20.839, 23.533, 29.362,
VERTEX, 20.901, 23.706, 29.406,
VERTEX, 20.961, 23.879, 29.449,
VERTEX, 21.021, 24.053, 29.491,
VERTEX, 21.080, 24.228, 29.533,
VERTEX, 21.144, 24.047, 29.504,
VERTEX, 21.210, 23.861, 29.475,
VERTEX, 21.275, 23.674, 29.444,
VERTEX, 21.340, 23.488, 29.413,
VERTEX, 21.404, 23.301, 29.381,
VERTEX, 21.467, 23.115, 29.348,
VERTEX, 21.530, 22.928, 29.314,
VERTEX, 21.592, 22.741, 29.279,
VERTEX, 21.637, 22.603, 29.253,
VERTEX, 21.608, 22.519, 29.230,
VERTEX, 21.442, 22.629, 29.237,
VERTEX, 21.274, 22.737, 29.242,
VERTEX, 21.106, 22.844, 29.246,
VERTEX, 20.936, 22.949, 29.248,
VERTEX, 20.766, 23.054, 29.250,
VERTEX, 20.737, 23.071, 29.250,
VERTEX, 20.712, 23.189, 29.273,
END,
BEGIN, LINE_LOOP,
VERTEX, 20.977, 24.506, 29.706,
VERTEX, 21.033, 24.519, 29.697,
VERTEX, 21.064, 24.515, 29.686,
VERTEX, 21.067, 24.458, 29.655,
VERTEX, 21.075, 24.285, 29.560,
VERTEX, 21.075, 24.285, 29.560,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.021, 24.053, 29.491,
VERTEX, 20.962, 23.879, 29.449,
VERTEX, 20.901, 23.706, 29.406,
VERTEX, 20.839, 23.533, 29.362,
VERTEX, 20.776, 23.361, 29.318,
VERTEX, 20.712, 23.189, 29.273,
VERTEX, 20.642, 23.313, 29.368,
VERTEX, 20.560, 23.459, 29.480,
VERTEX, 20.477, 23.606, 29.590,
VERTEX, 20.394, 23.752, 29.698,
VERTEX, 20.458, 23.921, 29.738,
VERTEX, 20.521, 24.091, 29.777,
VERTEX, 20.586, 24.260, 29.814,
VERTEX, 20.651, 24.430, 29.850,
VERTEX, 20.666, 24.434, 29.844,
VERTEX, 20.858, 24.479, 29.759,
VERTEX, 20.977, 24.507, 29.706,
VERTEX, 21.075, 24.284, 29.559,
VERTEX, 21.080, 24.235, 29.536,
VERTEX, 21.080, 24.228, 29.533,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.716, 24.766, 29.200,
VERTEX, 21.890, 24.828, 29.097,
VERTEX, 22.065, 24.889, 28.994,
VERTEX, 22.239, 24.950, 28.890,
VERTEX, 22.414, 25.011, 28.786,
VERTEX, 22.588, 25.071, 28.682,
VERTEX, 22.592, 25.079, 28.666,
VERTEX, 22.640, 25.173, 28.486,
VERTEX, 22.688, 25.267, 28.306,
VERTEX, 22.668, 25.263, 28.312,
VERTEX, 22.489, 25.226, 28.372,
VERTEX, 22.310, 25.188, 28.433,
VERTEX, 22.133, 25.148, 28.496,
VERTEX, 22.095, 25.139, 28.510,
VERTEX, 21.985, 25.054, 28.668,
VERTEX, 21.875, 24.968, 28.826,
VERTEX, 21.765, 24.881, 28.983,
VERTEX, 21.655, 24.794, 29.140,
VERTEX, 21.545, 24.707, 29.297,
VERTEX, 21.541, 24.704, 29.303,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.694, 24.730, 29.245,
VERTEX, 21.866, 24.773, 29.163,
VERTEX, 22.037, 24.817, 29.080,
VERTEX, 22.209, 24.860, 28.997,
VERTEX, 22.379, 24.905, 28.912,
VERTEX, 22.549, 24.949, 28.827,
VERTEX, 22.561, 24.985, 28.784,
VERTEX, 22.588, 25.071, 28.682,
VERTEX, 22.414, 25.011, 28.786,
VERTEX, 22.239, 24.950, 28.890,
VERTEX, 22.065, 24.889, 28.993,
VERTEX, 21.890, 24.828, 29.097,
VERTEX, 21.716, 24.766, 29.200,
VERTEX, 21.541, 24.704, 29.303,
VERTEX, 21.521, 24.687, 29.326,
END,
BEGIN, LINE_LOOP,
VERTEX, 22.704, 26.905, 25.065,
VERTEX, 22.546, 26.782, 25.105,
VERTEX, 22.388, 26.659, 25.146,
VERTEX, 22.230, 26.537, 25.187,
VERTEX, 22.072, 26.415, 25.228,
VERTEX, 21.913, 26.293, 25.269,
VERTEX, 21.754, 26.173, 25.311,
VERTEX, 21.595, 26.053, 25.353,
VERTEX, 21.435, 25.934, 25.395,
VERTEX, 21.274, 25.816, 25.438,
VERTEX, 21.113, 25.700, 25.482,
VERTEX, 20.951, 25.585, 25.526,
VERTEX, 21.029, 25.586, 25.661,
VERTEX, 21.129, 25.588, 25.837,
VERTEX, 21.228, 25.591, 26.013,
VERTEX, 21.324, 25.595, 26.190,
VERTEX, 21.419, 25.600, 26.368,
VERTEX, 21.510, 25.607, 26.547,
VERTEX, 21.600, 25.614, 26.728,
VERTEX, 21.687, 25.623, 26.910,
VERTEX, 21.772, 25.632, 27.092,
VERTEX, 21.855, 25.643, 27.276,
VERTEX, 21.937, 25.654, 27.460,
VERTEX, 22.018, 25.665, 27.645,
VERTEX, 22.098, 25.678, 27.830,
VERTEX, 22.145, 25.685, 27.942,
VERTEX, 22.264, 25.792, 27.818,
VERTEX, 22.381, 25.900, 27.693,
VERTEX, 22.496, 26.009, 27.568,
VERTEX, 22.611, 26.119, 27.443,
VERTEX, 22.724, 26.230, 27.318,
VERTEX, 22.836, 26.341, 27.192,
VERTEX, 22.947, 26.454, 27.066,
VERTEX, 23.057, 26.567, 26.939,
VERTEX, 23.167, 26.681, 26.812,
VERTEX, 23.276, 26.795, 26.685,
VERTEX, 23.384, 26.909, 26.558,
VERTEX, 23.492, 27.024, 26.431,
VERTEX, 23.474, 27.039, 26.338,
VERTEX, 23.407, 27.066, 26.090,
VERTEX, 23.322, 27.081, 25.848,
VERTEX, 23.219, 27.084, 25.612,
VERTEX, 23.100, 27.074, 25.384,
VERTEX, 22.964, 27.052, 25.166,
VERTEX, 22.861, 27.029, 25.024,
END,
BEGIN, LINE_LOOP,
VERTEX, 23.385, 26.910, 26.557,
VERTEX, 23.277, 26.796, 26.685,
VERTEX, 23.168, 26.681, 26.811,
VERTEX, 23.058, 26.568, 26.938,
VERTEX, 22.948, 26.454, 27.065,
VERTEX, 22.837, 26.342, 27.191,
VERTEX, 22.724, 26.230, 27.317,
VERTEX, 22.611, 26.119, 27.443,
VERTEX, 22.497, 26.009, 27.568,
VERTEX, 22.381, 25.900, 27.693,
VERTEX, 22.264, 25.792, 27.817,
VERTEX, 22.146, 25.685, 27.941,
VERTEX, 22.146, 25.684, 27.943,
VERTEX, 22.136, 25.547, 28.084,
VERTEX, 22.124, 25.411, 28.226,
VERTEX, 22.110, 25.275, 28.368,
VERTEX, 22.095, 25.140, 28.510,
VERTEX, 22.277, 25.180, 28.445,
VERTEX, 22.465, 25.221, 28.380,
VERTEX, 22.654, 25.260, 28.316,
VERTEX, 22.688, 25.267, 28.306,
VERTEX, 22.821, 25.365, 28.188,
VERTEX, 22.955, 25.463, 28.072,
VERTEX, 23.090, 25.561, 27.956,
VERTEX, 23.226, 25.658, 27.841,
VERTEX, 23.294, 25.706, 27.784,
VERTEX, 23.352, 25.841, 27.639,
VERTEX, 23.414, 25.976, 27.495,
VERTEX, 23.477, 26.111, 27.352,
VERTEX, 23.543, 26.245, 27.209,
VERTEX, 23.610, 26.378, 27.066,
VERTEX, 23.679, 26.512, 26.924,
VERTEX, 23.750, 26.645, 26.782,
VERTEX, 23.822, 26.778, 26.640,
VERTEX, 23.680, 26.896, 26.538,
VERTEX, 23.523, 27.007, 26.445,
VERTEX, 23.493, 27.025, 26.430,
END,
BEGIN, LINE_LOOP,
VERTEX, 20.752, 25.584, 25.502,
VERTEX, 20.553, 25.583, 25.479,
VERTEX, 20.354, 25.584, 25.457,
VERTEX, 20.155, 25.585, 25.436,
VERTEX, 19.955, 25.587, 25.417,
VERTEX, 19.756, 25.590, 25.398,
VERTEX, 19.556, 25.594, 25.380,
VERTEX, 19.357, 25.598, 25.363,
VERTEX, 19.157, 25.602, 25.347,
VERTEX, 18.982, 25.607, 25.333,
VERTEX, 18.930, 25.610, 25.498,
VERTEX, 18.878, 25.613, 25.664,
VERTEX, 18.944, 25.614, 25.850,
VERTEX, 19.012, 25.615, 26.038,
VERTEX, 19.083, 25.616, 26.226,
VERTEX, 19.154, 25.618, 26.413,
VERTEX, 19.229, 25.620, 26.599,
VERTEX, 19.306, 25.623, 26.783,
VERTEX, 19.386, 25.627, 26.966,
VERTEX, 19.470, 25.631, 27.146,
VERTEX, 19.559, 25.636, 27.324,
VERTEX, 19.652, 25.642, 27.500,
VERTEX, 19.749, 25.649, 27.673,
VERTEX, 19.851, 25.657, 27.844,
VERTEX, 19.957, 25.666, 28.013,
VERTEX, 20.066, 25.675, 28.180,
VERTEX, 20.178, 25.686, 28.345,
VERTEX, 20.292, 25.696, 28.510,
VERTEX, 20.409, 25.707, 28.673,
VERTEX, 20.477, 25.714, 28.766,
VERTEX, 20.709, 25.712, 28.746,
VERTEX, 20.941, 25.712, 28.725,
VERTEX, 20.948, 25.712, 28.721,
VERTEX, 21.121, 25.706, 28.613,
VERTEX, 21.293, 25.700, 28.503,
VERTEX, 21.465, 25.696, 28.393,
VERTEX, 21.636, 25.692, 28.281,
VERTEX, 21.806, 25.689, 28.169,
VERTEX, 21.976, 25.687, 28.056,
VERTEX, 22.146, 25.685, 27.941,
VERTEX, 22.094, 25.677, 27.821,
VERTEX, 22.014, 25.665, 27.635,
VERTEX, 21.933, 25.654, 27.451,
VERTEX, 21.851, 25.642, 27.266,
VERTEX, 21.768, 25.632, 27.083,
VERTEX, 21.683, 25.623, 26.900,
VERTEX, 21.596, 25.614, 26.719,
VERTEX, 21.506, 25.607, 26.538,
VERTEX, 21.415, 25.600, 26.359,
VERTEX, 21.320, 25.595, 26.181,
VERTEX, 21.224, 25.591, 26.004,
VERTEX, 21.125, 25.588, 25.828,
VERTEX, 21.025, 25.586, 25.653,
VERTEX, 20.951, 25.585, 25.526,
END,
BEGIN, LINE_LOOP,
VERTEX, 20.198, 25.510, 28.995,
VERTEX, 20.336, 25.612, 28.880,
VERTEX, 20.475, 25.714, 28.766,
VERTEX, 20.399, 25.706, 28.661,
VERTEX, 20.279, 25.695, 28.493,
VERTEX, 20.162, 25.685, 28.325,
VERTEX, 20.047, 25.674, 28.155,
VERTEX, 19.936, 25.665, 27.983,
VERTEX, 19.828, 25.656, 27.809,
VERTEX, 19.724, 25.648, 27.633,
VERTEX, 19.625, 25.641, 27.454,
VERTEX, 19.531, 25.635, 27.273,
VERTEX, 19.442, 25.630, 27.090,
VERTEX, 19.357, 25.626, 26.904,
VERTEX, 19.276, 25.623, 26.715,
VERTEX, 19.198, 25.620, 26.526,
VERTEX, 19.123, 25.618, 26.334,
VERTEX, 19.050, 25.616, 26.141,
VERTEX, 18.979, 25.615, 25.948,
VERTEX, 18.909, 25.614, 25.753,
VERTEX, 18.878, 25.614, 25.665,
VERTEX, 18.701, 25.510, 25.703,
VERTEX, 18.525, 25.407, 25.739,
VERTEX, 18.416, 25.322, 25.859,
VERTEX, 18.307, 25.238, 25.979,
VERTEX, 18.199, 25.152, 26.099,
VERTEX, 18.249, 25.168, 26.163,
VERTEX, 18.375, 25.205, 26.323,
VERTEX, 18.499, 25.241, 26.484,
VERTEX, 18.622, 25.276, 26.645,
VERTEX, 18.744, 25.308, 26.808,
VERTEX, 18.863, 25.338, 26.972,
VERTEX, 18.980, 25.366, 27.137,
VERTEX, 19.095, 25.389, 27.305,
VERTEX, 19.206, 25.409, 27.475,
VERTEX, 19.313, 25.424, 27.648,
VERTEX, 19.416, 25.434, 27.823,
VERTEX, 19.516, 25.440, 28.000,
VERTEX, 19.612, 25.442, 28.180,
VERTEX, 19.706, 25.440, 28.363,
VERTEX, 19.797, 25.435, 28.546,
VERTEX, 19.886, 25.428, 28.732,
VERTEX, 19.973, 25.418, 28.918,
VERTEX, 20.059, 25.407, 29.105,
VERTEX, 20.061, 25.407, 29.111,
END,
BEGIN, LINE_LOOP,
VERTEX, 18.839, 25.516, 25.331,
VERTEX, 18.734, 25.479, 25.467,
VERTEX, 18.629, 25.443, 25.603,
VERTEX, 18.525, 25.406, 25.739,
VERTEX, 18.526, 25.407, 25.739,
VERTEX, 18.702, 25.511, 25.703,
VERTEX, 18.878, 25.614, 25.665,
VERTEX, 18.913, 25.611, 25.551,
VERTEX, 18.969, 25.608, 25.374,
VERTEX, 18.982, 25.607, 25.333,
END,
BEGIN, LINE_LOOP,
VERTEX, 20.336, 25.612, 28.881,
VERTEX, 20.198, 25.510, 28.996,
VERTEX, 20.061, 25.406, 29.111,
VERTEX, 20.123, 25.357, 29.155,
VERTEX, 20.256, 25.245, 29.254,
VERTEX, 20.279, 25.225, 29.272,
VERTEX, 20.435, 25.161, 29.322,
VERTEX, 20.590, 25.097, 29.371,
VERTEX, 20.746, 25.033, 29.420,
VERTEX, 20.902, 24.970, 29.469,
VERTEX, 20.911, 25.110, 29.329,
VERTEX, 20.920, 25.260, 29.178,
VERTEX, 20.928, 25.411, 29.027,
VERTEX, 20.935, 25.561, 28.876,
VERTEX, 20.941, 25.712, 28.725,
VERTEX, 20.894, 25.712, 28.730,
VERTEX, 20.700, 25.712, 28.747,
VERTEX, 20.506, 25.713, 28.764,
VERTEX, 20.475, 25.714, 28.767,
END,
BEGIN, LINE_LOOP,
VERTEX, 20.936, 25.569, 28.869,
VERTEX, 20.929, 25.426, 29.012,
VERTEX, 20.921, 25.283, 29.155,
VERTEX, 20.913, 25.140, 29.299,
VERTEX, 20.904, 24.997, 29.442,
VERTEX, 20.902, 24.970, 29.469,
VERTEX, 20.975, 24.834, 29.552,
VERTEX, 21.048, 24.697, 29.636,
VERTEX, 21.049, 24.694, 29.638,
VERTEX, 21.051, 24.693, 29.638,
VERTEX, 21.116, 24.666, 29.620,
VERTEX, 21.225, 24.676, 29.539,
VERTEX, 21.383, 24.690, 29.421,
VERTEX, 21.541, 24.704, 29.303,
VERTEX, 21.577, 24.732, 29.251,
VERTEX, 21.681, 24.815, 29.103,
VERTEX, 21.784, 24.896, 28.955,
VERTEX, 21.888, 24.978, 28.807,
VERTEX, 21.991, 25.059, 28.659,
VERTEX, 22.095, 25.140, 28.510,
VERTEX, 22.097, 25.157, 28.491,
VERTEX, 22.112, 25.294, 28.348,
VERTEX, 22.126, 25.431, 28.205,
VERTEX, 22.137, 25.568, 28.062,
VERTEX, 22.146, 25.685, 27.941,
VERTEX, 21.981, 25.687, 28.052,
VERTEX, 21.816, 25.689, 28.162,
VERTEX, 21.651, 25.692, 28.271,
VERTEX, 21.485, 25.695, 28.380,
VERTEX, 21.318, 25.700, 28.487,
VERTEX, 21.151, 25.705, 28.593,
VERTEX, 20.983, 25.710, 28.699,
VERTEX, 20.941, 25.712, 28.725,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.049, 24.694, 29.638,
VERTEX, 21.011, 24.766, 29.594,
VERTEX, 20.923, 24.931, 29.493,
VERTEX, 20.902, 24.970, 29.469,
VERTEX, 20.746, 25.033, 29.420,
VERTEX, 20.590, 25.097, 29.371,
VERTEX, 20.435, 25.160, 29.322,
VERTEX, 20.280, 25.225, 29.272,
VERTEX, 20.400, 25.112, 29.347,
VERTEX, 20.523, 24.993, 29.425,
VERTEX, 20.644, 24.871, 29.504,
VERTEX, 20.763, 24.748, 29.584,
VERTEX, 20.881, 24.623, 29.664,
VERTEX, 20.910, 24.592, 29.684,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.169, 24.591, 29.622,
VERTEX, 21.176, 24.593, 29.616,
VERTEX, 21.349, 24.640, 29.471,
VERTEX, 21.521, 24.687, 29.326,
VERTEX, 21.531, 24.695, 29.315,
VERTEX, 21.541, 24.704, 29.303,
VERTEX, 21.396, 24.691, 29.411,
VERTEX, 21.252, 24.678, 29.519,
VERTEX, 21.116, 24.666, 29.620,
END,
BEGIN, LINE_LOOP,
VERTEX, 20.910, 24.592, 29.684,
VERTEX, 20.977, 24.506, 29.706,
VERTEX, 21.033, 24.519, 29.697,
VERTEX, 21.051, 24.693, 29.638,
VERTEX, 21.049, 24.694, 29.638,
END,
BEGIN, LINE_LOOP,
VERTEX, 21.033, 24.519, 29.697,
VERTEX, 21.064, 24.515, 29.686,
VERTEX, 21.169, 24.591, 29.622,
VERTEX, 21.116, 24.666, 29.620,
VERTEX, 21.051, 24.693, 29.638,
END,
BEGIN, LINE_LOOP,
VERTEX, 18.041, 25.143, 26.246,
VERTEX, 17.883, 25.130, 26.393,
VERTEX, 17.726, 25.115, 26.540,
VERTEX, 17.569, 25.098, 26.687,
VERTEX, 17.575, 25.100, 26.702,
VERTEX, 17.644, 25.126, 26.882,
VERTEX, 17.712, 25.150, 27.063,
VERTEX,
35969, 35977, 35983, 35993, 35999,
36007, 36011, 36013, 36017, 36037, 36061, 36067, 36073,
36083, 36097, 36107, 36109, 36131, 36137, 36151, 36161,
36187, 36191, 36209, 36217, 36229, 36241, 36251, 36263,
36269, 36277, 36293, 36299, 36307, 36313, 36319, 36341,
36343, 36353, 36373, 36383, 36389, 36433, 36451, 36457,
36467, 36469, 36473, 36479, 36493, 36497, 36523, 36527,
36529, 36541, 36551, 36559, 36563, 36571, 36583, 36587,
36599, 36607, 36629, 36637, 36643, 36653, 36671, 36677,
36683, 36691, 36697, 36709, 36713, 36721, 36739, 36749,
36761, 36767, 36779, 36781, 36787, 36791, 36793, 36809,
36821, 36833, 36847, 36857, 36871, 36877, 36887, 36899,
36901, 36913, 36919, 36923, 36929, 36931, 36943, 36947,
36973, 36979, 36997, 37003, 37013, 37019, 37021, 37039,
37049, 37057, 37061, 37087, 37097, 37117, 37123, 37139,
37159, 37171, 37181, 37189, 37199, 37201, 37217, 37223,
37243, 37253, 37273, 37277, 37307, 37309, 37313, 37321,
37337, 37339, 37357, 37361, 37363, 37369, 37379, 37397,
37409, 37423, 37441, 37447, 37463, 37483, 37489, 37493,
37501, 37507, 37511, 37517, 37529, 37537, 37547, 37549,
37561, 37567, 37571, 37573, 37579, 37589, 37591, 37607,
37619, 37633, 37643, 37649, 37657, 37663, 37691, 37693,
37699, 37717, 37747, 37781, 37783, 37799, 37811, 37813,
37831, 37847, 37853, 37861, 37871, 37879, 37889, 37897,
37907, 37951, 37957, 37963, 37967, 37987, 37991, 37993,
37997, 38011, 38039, 38047, 38053, 38069, 38083, 38113,
38119, 38149, 38153, 38167, 38177, 38183, 38189, 38197,
38201, 38219, 38231, 38237, 38239, 38261, 38273, 38281,
38287, 38299, 38303, 38317, 38321, 38327, 38329, 38333,
38351, 38371, 38377, 38393, 38431, 38447, 38449, 38453,
38459, 38461, 38501, 38543, 38557, 38561, 38567, 38569,
38593, 38603, 38609, 38611, 38629, 38639, 38651, 38653,
38669, 38671, 38677, 38693, 38699, 38707, 38711, 38713,
38723, 38729, 38737, 38747, 38749, 38767, 38783, 38791,
38803, 38821, 38833, 38839, 38851, 38861, 38867, 38873,
38891, 38903, 38917, 38921, 38923, 38933, 38953, 38959,
38971, 38977, 38993, 39019, 39023, 39041, 39043, 39047,
39079, 39089, 39097, 39103, 39107, 39113, 39119, 39133,
39139, 39157, 39161, 39163, 39181, 39191, 39199, 39209,
39217, 39227, 39229, 39233, 39239, 39241, 39251, 39293,
39301, 39313, 39317, 39323, 39341, 39343, 39359, 39367,
39371, 39373, 39383, 39397, 39409, 39419, 39439, 39443,
39451, 39461, 39499, 39503, 39509, 39511, 39521, 39541,
39551, 39563, 39569, 39581, 39607, 39619, 39623, 39631,
39659, 39667, 39671, 39679, 39703, 39709, 39719, 39727,
39733, 39749, 39761, 39769, 39779, 39791, 39799, 39821,
39827, 39829, 39839, 39841, 39847, 39857, 39863, 39869,
39877, 39883, 39887, 39901, 39929, 39937, 39953, 39971,
39979, 39983, 39989, 40009, 40013, 40031, 40037, 40039,
40063, 40087, 40093, 40099, 40111, 40123, 40127, 40129,
40151, 40153, 40163, 40169, 40177, 40189, 40193, 40213,
40231, 40237, 40241, 40253, 40277, 40283, 40289, 40343,
40351, 40357, 40361, 40387, 40423, 40427, 40429, 40433,
40459, 40471, 40483, 40487, 40493, 40499, 40507, 40519,
40529, 40531, 40543, 40559, 40577, 40583, 40591, 40597,
40609, 40627, 40637, 40639, 40693, 40697, 40699, 40709,
40739, 40751, 40759, 40763, 40771, 40787, 40801, 40813,
40819, 40823, 40829, 40841, 40847, 40849, 40853, 40867,
40879, 40883, 40897, 40903, 40927, 40933, 40939, 40949,
40961, 40973, 40993, 41011, 41017, 41023, 41039, 41047,
41051, 41057, 41077, 41081, 41113, 41117, 41131, 41141,
41143, 41149, 41161, 41177, 41179, 41183, 41189, 41201,
41203, 41213, 41221, 41227, 41231, 41233, 41243, 41257,
41263, 41269, 41281, 41299, 41333, 41341, 41351, 41357,
41381, 41387, 41389, 41399, 41411, 41413, 41443, 41453,
41467, 41479, 41491, 41507, 41513, 41519, 41521, 41539,
41543, 41549, 41579, 41593, 41597, 41603, 41609, 41611,
41617, 41621, 41627, 41641, 41647, 41651, 41659, 41669,
41681, 41687, 41719, 41729, 41737, 41759, 41761, 41771,
41777, 41801, 41809, 41813, 41843, 41849, 41851, 41863,
41879, 41887, 41893, 41897, 41903, 41911, 41927, 41941,
41947, 41953, 41957, 41959, 41969, 41981, 41983, 41999,
42013, 42017, 42019, 42023, 42043, 42061, 42071, 42073,
42083, 42089, 42101, 42131, 42139, 42157, 42169, 42179,
42181, 42187, 42193, 42197, 42209, 42221, 42223, 42227,
42239, 42257, 42281, 42283, 42293, 42299, 42307, 42323,
42331, 42337, 42349, 42359, 42373, 42379, 42391, 42397,
42403, 42407, 42409, 42433, 42437, 42443, 42451, 42457,
42461, 42463, 42467, 42473, 42487, 42491, 42499, 42509,
42533, 42557, 42569, 42571, 42577, 42589, 42611, 42641,
42643, 42649, 42667, 42677, 42683, 42689, 42697, 42701,
42703, 42709, 42719, 42727, 42737, 42743, 42751, 42767,
42773, 42787, 42793, 42797, 42821, 42829, 42839, 42841,
42853, 42859, 42863, 42899, 42901, 42923, 42929, 42937,
42943, 42953, 42961, 42967, 42979, 42989, 43003, 43013,
43019, 43037, 43049, 43051, 43063, 43067, 43093, 43103,
43117, 43133, 43151, 43159, 43177, 43189, 43201, 43207,
43223, 43237, 43261, 43271, 43283, 43291, 43313, 43319,
43321, 43331, 43391, 43397, 43399, 43403, 43411, 43427,
43441, 43451, 43457, 43481, 43487, 43499, 43517, 43541,
43543, 43573, 43577, 43579, 43591, 43597, 43607, 43609,
43613, 43627, 43633, 43649, 43651, 43661, 43669, 43691,
43711, 43717, 43721, 43753, 43759, 43777, 43781, 43783,
43787, 43789, 43793, 43801, 43853, 43867, 43889, 43891,
43913, 43933, 43943, 43951, 43961, 43963, 43969, 43973,
43987, 43991, 43997, 44017, 44021, 44027, 44029, 44041,
44053, 44059, 44071, 44087, 44089, 44101, 44111, 44119,
44123, 44129, 44131, 44159, 44171, 44179, 44189, 44201,
44203, 44207, 44221, 44249, 44257, 44263, 44267, 44269,
44273, 44279, 44281, 44293, 44351, 44357, 44371, 44381,
44383, 44389, 44417, 44449, 44453, 44483, 44491, 44497,
44501, 44507, 44519, 44531, 44533, 44537, 44543, 44549,
44563, 44579, 44587, 44617, 44621, 44623, 44633, 44641,
44647, 44651, 44657, 44683, 44687, 44699, 44701, 44711,
44729, 44741, 44753, 44771, 44773, 44777, 44789, 44797,
44809, 44819, 44839, 44843, 44851, 44867, 44879, 44887,
44893, 44909, 44917, 44927, 44939, 44953, 44959, 44963,
44971, 44983, 44987, 45007, 45013, 45053, 45061, 45077,
45083, 45119, 45121, 45127, 45131, 45137, 45139, 45161,
45179, 45181, 45191, 45197, 45233, 45247, 45259, 45263,
45281, 45289, 45293, 45307, 45317, 45319, 45329, 45337,
45341, 45343, 45361, 45377, 45389, 45403, 45413, 45427,
45433, 45439, 45481, 45491, 45497, 45503, 45523, 45533,
45541, 45553, 45557, 45569, 45587, 45589, 45599, 45613,
45631, 45641, 45659, 45667, 45673, 45677, 45691, 45697,
45707, 45737, 45751, 45757, 45763, 45767, 45779, 45817,
45821, 45823, 45827, 45833, 45841, 45853, 45863, 45869,
45887, 45893, 45943, 45949, 45953, 45959, 45971, 45979,
45989, 46021, 46027, 46049, 46051, 46061, 46073, 46091,
46093, 46099, 46103, 46133, 46141, 46147, 46153, 46171,
46181, 46183, 46187, 46199, 46219, 46229, 46237, 46261,
46271, 46273, 46279, 46301, 46307, 46309, 46327, 46337,
46349, 46351, 46381, 46399, 46411, 46439, 46441, 46447,
46451, 46457, 46471, 46477, 46489, 46499, 46507, 46511,
46523, 46549, 46559, 46567, 46573, 46589, 46591, 46601,
46619, 46633, 46639, 46643, 46649, 46663, 46679, 46681,
46687, 46691, 46703, 46723, 46727, 46747, 46751, 46757,
46769, 46771, 46807, 46811, 46817, 46819, 46829, 46831,
46853, 46861, 46867, 46877, 46889, 46901, 46919, 46933,
46957, 46993, 46997, 47017, 47041, 47051, 47057, 47059,
47087, 47093, 47111, 47119, 47123, 47129, 47137, 47143,
47147, 47149, 47161, 47189, 47207, 47221, 47237, 47251,
47269, 47279, 47287, 47293, 47297, 47303, 47309, 47317,
47339, 47351, 47353, 47363, 47381, 47387, 47389, 47407,
47417, 47419, 47431, 47441, 47459, 47491, 47497, 47501,
47507, 47513, 47521, 47527, 47533, 47543, 47563, 47569,
47581, 47591, 47599, 47609, 47623, 47629, 47639, 47653,
47657, 47659, 47681, 47699, 47701, 47711, 47713, 47717,
47737, 47741, 47743, 47777, 47779, 47791, 47797, 47807,
47809, 47819, 47837, 47843, 47857, 47869, 47881, 47903,
47911, 47917, 47933, 47939, 47947, 47951, 47963, 47969,
47977, 47981, 48017, 48023, 48029, 48049, 48073, 48079,
48091, 48109, 48119, 48121, 48131, 48157, 48163, 48179,
48187, 48193, 48197, 48221, 48239, 48247, 48259, 48271,
48281, 48299, 48311, 48313, 48337, 48341, 48353, 48371,
48383, 48397, 48407, 48409, 48413, 48437, 48449, 48463,
48473, 48479, 48481, 48487, 48491, 48497, 48523, 48527,
48533, 48539, 48541, 48563, 48571, 48589, 48593, 48611,
48619, 48623, 48647, 48649, 48661, 48673, 48677, 48679,
48731, 48733, 48751, 48757, 48761, 48767, 48779, 48781,
48787, 48799, 48809, 48817, 48821, 48823, 48847, 48857,
48859, 48869, 48871, 48883, 48889, 48907, 48947, 48953,
48973, 48989, 48991, 49003, 49009, 49019, 49031, 49033,
49037, 49043, 49057, 49069, 49081, 49103, 49109, 49117,
49121, 49123, 49139, 49157, 49169, 49171, 49177, 49193,
49199, 49201, 49207, 49211, 49223, 49253, 49261, 49277,
49279, 49297, 49307, 49331, 49333, 49339, 49363, 49367,
49369, 49391, 49393, 49409, 49411, 49417, 49429, 49433,
49451, 49459, 49463, 49477, 49481, 49499, 49523, 49529,
49531, 49537, 49547, 49549, 49559, 49597, 49603, 49613,
49627, 49633, 49639,
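The run of numbers above appears to be a slice of a precomputed prime table (covering roughly 35969 through 49639). Such tables are typically generated once with a sieve of Eratosthenes; a minimal sketch:

```python
def primes_up_to(n):
    """Return all primes <= n via the sieve of Eratosthenes."""
    if n < 2:
        return []
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Cross out every multiple of p starting at p*p
            sieve[p * p:: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, is_prime in enumerate(sieve) if is_prime]


print(primes_up_to(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```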
Matches target translation to source one
:param target: str, transform we want to match
:param source: str, source transform
"""
scale_pivot_vector = get_scale_pivot(transform_name=source, world_space=False)
rotate_pivot_vector = get_rotate_pivot(transform_name=source, world_space=False)
translate_vector = get_translation(transform_name=source, world_space=True)
set_scale_pivot(transform_name=target, scale_pivot_vector=scale_pivot_vector, world_space=False)
set_rotate_pivot(transform_name=target, rotate_pivot_vector=rotate_pivot_vector, world_space=False)
set_translation(transform_name=target, translate_vector=translate_vector, world_space=True)
def match_rotation(target, source):
"""
Matches target rotation to source one
:param target: str, transform we want to match
:param source: str, source transform
"""
rotation_vector = get_rotation(transform_name=source, world_space=True)
set_rotation(transform_name=target, rotation_vector=rotation_vector, world_space=True)
def match_translation_rotation(target, source):
"""
Matches target translation and rotation to source ones
:param target: str, transform we want to match
:param source: str, source transform
"""
match_translation(target=target, source=source)
match_rotation(target=target, source=source)
def match_translation_to_rotate_pivot(target, source):
"""
Matches target translation to the source transform rotate pivot
:param target: str, transform we want to match
:param source: str, source transform
"""
translate_vector = get_rotate_pivot(transform_name=source, world_space=True)
set_translation(transform_name=target, translate_vector=translate_vector, world_space=True)
def match_rotate_scale_pivot_to_translation(target, source):
"""
Matches the rotation and scale pivot of target transform to the translation of source
:param target: str, transform we want to match
:param source: str, source transform
"""
position = get_translation(transform_name=source, world_space=True)
maya.cmds.move(position[0], position[1], position[2], '{}.scalePivot'.format(target),
'{}.rotatePivot'.format(target), a=True)
def match_rotate_pivot(target, source, world_space=False):
"""
Matches target transform rotate pivot to source one, in object space by default
:param target: str, transform we want to match rotate pivot to source
:param source: str, source transform
:param world_space: bool, Whether to match rotate pivot in object space or world space. By default, in object space
"""
source_rotate_pivot = get_rotate_pivot(transform_name=source, world_space=world_space)
set_rotate_pivot(transform_name=target, rotate_pivot_vector=source_rotate_pivot, world_space=world_space)
def match_scale_pivot(target, source, world_space=False):
"""
Matches target transform scale pivot to source one, in object space by default
:param target: str, transform we want to match scale pivot to source
:param source: str, source transform
:param world_space: bool, Whether to match scale pivot in object space or world space. By default, in object space
"""
source_scale_pivot = get_scale_pivot(transform_name=source, world_space=world_space)
set_scale_pivot(transform_name=target, scale_pivot_vector=source_scale_pivot, world_space=world_space)
def match_orient(target, source):
"""
Matches target orientation using an orientation constraint
:param target: str, transform we want to match orientation to source
:param source: str, source transform
"""
maya.cmds.delete(maya.cmds.orientConstraint(source, target, mo=False))
def match_point(target, source):
"""
Matches target position using a position constraint
:param target: str, transform we want to match position to source
:param source: str, source transform
"""
maya.cmds.delete(maya.cmds.pointConstraint(source, target, mo=False))
def match_orient_point(target, source):
"""
Matches target position and orientation using position and orientation constraints
:param target: str, transform we want to match position and orientation to source
:param source: str, source transform
"""
maya.cmds.delete(maya.cmds.orientConstraint(source, target, mo=False))
maya.cmds.delete(maya.cmds.pointConstraint(source, target, mo=False))
def get_distance(source_transform, target_transform):
"""
Get the distance between source and target transforms
:param source_transform: str, name of a transform node
:param target_transform: str, name of a transform node
:return: float
"""
v1 = maya.cmds.xform(source_transform, q=True, rp=True, ws=True)
if maya.cmds.nodeType(target_transform) == 'mesh':
v2 = maya.cmds.xform(target_transform, q=True, t=True, ws=True)
else:
v2 = maya.cmds.xform(target_transform, q=True, rp=True, ws=True)
return vec3.get_distance_between_vectors(v1, v2)
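`get_distance` delegates the actual math to `vec3.get_distance_between_vectors`. Outside Maya that is just the Euclidean distance between two world-space points; a minimal stand-in sketch (the function name here is hypothetical, not the real `vec3` API):

```python
import math


def distance_between_vectors(v1, v2):
    """Euclidean distance between two 3D points given as sequences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))


print(distance_between_vectors((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # → 5.0
```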
def create_group_in_plane(transform1, transform2, transform3):
"""
Creates a group that is located in the triangle plane defined by 3 transforms
:param transform1: str, name of a transform node
:param transform2: str, name of a transform node
:param transform3: str, name of a transform node
:return: str, name of new group that is located in triangle plane (good place to place pole vectors)
"""
pole_group = maya.cmds.group(empty=True)
match_translation_rotation(target=pole_group, source=transform1)
maya.cmds.aimConstraint(transform3, pole_group, offset=[0, 0, 0], weight=1, aimVector=[1, 0, 0], upVector=[0, 1, 0],
worldUpType='object', worldUpObject=transform2)
pole_group_2 = maya.cmds.group(empty=True, n='pole_{}'.format(transform1))
match_translation_rotation(target=pole_group_2, source=transform2)
maya.cmds.parent(pole_group_2, pole_group)
maya.cmds.makeIdentity(pole_group_2, apply=True, t=True, r=True)
maya.cmds.parent(pole_group_2, w=True)
maya.cmds.delete(pole_group)
return pole_group_2
def get_pole_vector(transform1, transform2, transform3, offset=1):
"""
Given 3 transforms (such as arm, elbow, wrist), returns a position where the pole vector should be located
:param transform1: str, name of a transform node
:param transform2: str, name of a transform node
:param transform3: str, name of a transform node
:param offset: float, offset value for the final pole vector position
:return: list(float, float, float), pole vector with offset
"""
dst = get_distance(transform1, transform3)
grp = create_group_in_plane(transform1, transform2, transform3)
maya.cmds.move(0, offset * dst, 0, grp, r=True, os=True)
final_pos = maya.cmds.xform(grp, q=True, rp=True, ws=True)
maya.cmds.delete(grp)
return final_pos
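The group-based construction in `get_pole_vector` can be approximated in plain Python: project the mid transform onto the start-to-end line, then push the mid position away from that projection. This is the common closed-form variant, not the exact aim-constraint construction above, and all names in the sketch are hypothetical:

```python
def pole_vector_position(a, b, c, offset=1.0):
    """Closed-form pole vector for a 3-point chain (e.g. shoulder, elbow, wrist).

    Assumes a and c are distinct and b is not exactly on the a->c line
    (otherwise the push direction degenerates to zero).
    """
    sub = lambda u, v: tuple(ui - vi for ui, vi in zip(u, v))
    add = lambda u, v: tuple(ui + vi for ui, vi in zip(u, v))
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

    ac = sub(c, a)
    ab = sub(b, a)
    # Parameter of b's projection onto the a->c segment
    t = dot(ab, ac) / dot(ac, ac)
    proj = add(a, tuple(t * x for x in ac))
    # Direction from the chain line out through the mid joint
    away = sub(b, proj)
    return add(b, tuple(offset * x for x in away))


print(pole_vector_position((0, 0, 0), (1, 1, 0), (2, 0, 0)))  # → (1.0, 2.0, 0.0)
```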
def mirror_toggle(transform, flag):
"""
Toggles the 'mirror' attribute that controls mirror functionality, creating the attribute if it does not exist
:param transform: str
:param flag: bool
"""
if not maya.cmds.objExists('{}.mirror'.format(transform)):
maya.cmds.addAttr(transform, ln='mirror', at='bool', k=True)
maya.cmds.setAttr('{}.mirror'.format(transform), flag)
def mirror_transform(
prefix=None, suffix=None, string_search=None, create_if_missing=False, transforms=None, left_to_right=True):
"""
Mirrors the position of all transforms that match the given search strings
:param prefix:str, prefix to search for
:param suffix: str, suffix to search for
:param string_search: str, search for a name containing string search
:param create_if_missing: bool
:param transforms: list(str)
:param left_to_right: bool
:return:
"""
from tpDcc.dccs.maya.core import shape as shape_lib
if transforms is None:
transforms = list()
else:
transforms = transforms[:]
scope_joints = list()
scope_transforms = list()
joints = list()
skip_search = False
if transforms:
skip_search = True
temp_transforms = list(transforms)
for temp_xform in temp_transforms:
node_type = maya.cmds.nodeType(temp_xform)
if node_type == 'joint':
joints.append(temp_xform)
if node_type == 'transform':
transforms.append(temp_xform)
if not skip_search:
# If no prefix, suffix or search string is given we store all joints and transforms in the scene
if not prefix and not suffix and not string_search:
joints = maya.cmds.ls(type='joint')
transforms = maya.cmds.ls(type='transform')
# If prefix is given we store objects matching that prefix
if prefix:
joints = maya.cmds.ls('{}*'.format(prefix), type='joint')
transforms = maya.cmds.ls('{}*'.format(prefix), type='transform')
scope_joints += joints
scope_transforms += transforms
# If suffix is given we store objects matching that suffix
if suffix:
joints = maya.cmds.ls('*{}'.format(suffix), type='joint')
transforms = maya.cmds.ls('*{}'.format(suffix), type='transform')
scope_joints += joints
scope_transforms += transforms
if string_search:
joints = maya.cmds.ls('*{}*'.format(string_search), type='joint')
transforms = maya.cmds.ls('*{}*'.format(string_search), type='transform')
# Get list of elements to mirror
scope_joints += joints
scope_transforms += transforms
scope = list(set(scope_joints + scope_transforms))
if not scope:
logger.warning('No objects to mirror!')
return
other_parents = dict()
fixed = list()
created = False
for xform in scope:
if maya.cmds.objExists('{}.inMesh'.format(xform)):
continue
if left_to_right:
other = find_transform_right_side(xform, check_if_exists=False)
else:
other = find_transform_left_side(xform, check_if_exists=False)
if not other:
continue
if xform in fixed:
continue
if attribute.is_translate_rotate_connected(other, ignore_keyframe=True):
continue
shape_type = shape_lib.get_shape_node_type(xform)
if not maya.cmds.objExists(other) and create_if_missing:
node_type = maya.cmds.nodeType(xform)
if not node_type == 'joint':
if shape_type:
other_node = maya.cmds.createNode(shape_type)
if shape_lib.is_a_shape(other_node):
other_node = maya.cmds.listRelatives(other_node, p=True, f=True)
other = maya.cmds.rename(other_node, other)
elif node_type == 'joint':
other = maya.cmds.duplicate(xform, po=True, n=other)[0]
if shape_type:
other_shape = maya.cmds.createNode(shape_type)
if shape_lib.is_a_shape(other_shape):
temp_parent = maya.cmds.listRelatives(other_shape, p=True, f=True)
maya.cmds.parent(other_shape, other, r=True, s=True)
maya.cmds.rename(other_shape, '{}Shape'.format(other))
maya.cmds.delete(temp_parent)
created = True
parent = maya.cmds.listRelatives(xform, p=True)
if parent:
if left_to_right:
other_parent = find_transform_right_side(parent[0], check_if_exists=False)
else:
other_parent = find_transform_left_side(parent[0], check_if_exists=False)
if other_parent:
other_parents[other] = other_parent
if maya.cmds.objExists(other):
if maya.cmds.objExists('{}.mirror'.format(other)):
mirror = maya.cmds.getAttr('{}.mirror'.format(other))
if not mirror:
logger.debug('{} was not mirrored because its mirror attribute is set off!'.format(other))
continue
lock_state = attribute.LockTransformState(other)
lock_state.unlock()
new_xform = maya.cmds.xform(xform, query=True, ws=True, t=True)
# Mirror locator
if shape_type == 'locator':
local_position = maya.cmds.getAttr('{}.localPosition'.format(xform))[0]
local_scale = maya.cmds.getAttr('{}.localScale'.format(xform))[0]
maya.cmds.setAttr('{}.localPosition'.format(other), *local_position, type='float3')
maya.cmds.setAttr('{}.localScale'.format(other), *local_scale, type='float3')
# Mirror Joint
if maya.cmds.nodeType(other) == 'joint':
radius = maya.cmds.getAttr('{}.radius'.format(xform))
if not node.is_referenced(other):
var = attribute.NumericAttribute('radius')
var.set_node(other)
var.set_value(radius)
if not maya.cmds.getAttr('{}.radius'.format(other), lock=True):
maya.cmds.setAttr('{}.radius'.format(other), radius)
maya.cmds.move(
new_xform[0] * -1, new_xform[1], new_xform[2],
'{}.scalePivot'.format(other), '{}.rotatePivot'.format(other), a=True)
# Mirror Transform
if maya.cmds.nodeType(other) == 'transform':
pos = (new_xform[0] * -1, new_xform[1], new_xform[2])
maya.cmds.xform(other, ws=True, t=pos)
pivot = maya.cmds.xform(xform, query=True, ws=True, rp=True)
maya.cmds.move((pivot[0] * -1), pivot[1], pivot[2], '{}.scalePivot'.format(other),
'{}.rotatePivot'.format(other), a=True)
if maya.cmds.objExists('{}.localPosition'.format(xform)):
fix_locator_shape_position(xform)
if maya.cmds.objExists('{}.localPosition'.format(other)):
fix_locator_shape_position(other)
children = maya.cmds.listRelatives(xform, type='transform')
if not children:
rotate = maya.cmds.getAttr('%s.rotate' % xform)[0]
scale = maya.cmds.getAttr('%s.scale' % xform)[0]
rotate = python.force_list(rotate)
scale = python.force_list(scale)
rotate[1] *= -1
rotate[2] *= -1
maya.cmds.setAttr('%s.rotate' % other, *rotate, type='float3')
maya.cmds.setAttr('%s.scale' % other, *scale, type='float3')
lock_state.restore_initial()
fixed.append(other)
if create_if_missing:
for other in other_parents.keys():
parent = other_parents[other]
if maya.cmds.objExists(parent) and maya.cmds.objExists(other):
maya.cmds.parent(other, parent)
if create_if_missing:
if created:
return True
else:
return False
else:
if fixed:
return True
else:
return False
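The mirroring pass above resolves each transform's opposite via side-name substitution (for example `_L` to `_R`). A minimal sketch of the replace-at-end helper this assumes (hypothetical; the module's own `name.replace_string_at_end` is only expected to behave like this):

```python
def replace_string_at_end(text, old, new):
    """Replace `old` with `new` only when it terminates the string."""
    if text.endswith(old):
        return text[:len(text) - len(old)] + new
    return text
```

For instance, `replace_string_at_end('arm_L', '_L', '_R')` yields `'arm_R'`, while names that merely contain `_L` somewhere in the middle are left untouched.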
def find_transform_right_side(transform, check_if_exists=True):
"""
Try to find the right side of a transform
:param transform: str, name of a transform
:param check_if_exists: bool, Whether to return the mirrored transform only if it already exists
:return: str
"""
other = ''
for side in TRANSFORM_SIDES['end']['short']:
if transform.endswith(side[0]):
other = name.replace_string_at_end(transform, side[0], side[1])
if (maya.cmds.objExists(other) and check_if_exists) or not check_if_exists:
return other
for side in TRANSFORM_SIDES['end']['long']:
if transform.find(side[0]) > -1:
for end_side | |
verbosity=True)
self._mach_e = mach_solution
# Static properties at the entrance:
self._T_e = self.isent_flow.stat_temp_from_mach(self.mach_e, self.T_et)
self._p_e = self.isent_flow.stat_pressure_from_mach(self.mach_e, self.p_et, self.T_et)
self._rho_e = self.p_e / self.T_e / self.fluid.Rg
self._rho_et = self.isent_flow.stag_density_from_mach(self.mach_e, self.rho_e, self.T_e)
self._vel_e = self.isent_flow.vel_from_mach(self.mach_e, self.T_e)
self._h_e = self.fluid.cp(self.T_e) * self.T_e
self._ekin_e = 0.5 * self.vel_e**2
return
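The `stat_temp_from_mach` / `stat_pressure_from_mach` calls above implement the standard isentropic relations. A minimal standalone sketch, assuming a calorically perfect gas with constant `gamma` (the class's `fluid` model uses a temperature-dependent cp instead; the function names here only mirror the assumed `isent_flow` API):

```python
def stat_temp_from_mach(mach, T_t, gamma=1.4):
    """Static temperature from stagnation temperature: T = T_t / (1 + (gamma-1)/2 * M**2)."""
    return T_t / (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)


def stat_pressure_from_mach(mach, p_t, gamma=1.4):
    """Static pressure from stagnation pressure along an isentrope."""
    return p_t * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2) ** (-gamma / (gamma - 1.0))
```

At M = 1 with gamma = 1.4 the temperature ratio reduces to 2/(gamma+1), the same threshold the solver methods below use to detect a supersonic exit.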
def solve_from_static_exit_pressure(self, ps, As=None, adiab_efficiency=1, nozzle_type='con-di'):
"""
Solve nozzle with static discharge pressure known (exit section).
The nozzle is assumed to be adiabatic.
+ If As is None, the discharge area is calculated with the provided
adiabatic efficiency
+ If As is provided, the corresponding adiabatic efficiency is
calculated. If the efficiency is impossible a warning is raised
Inputs:
-------
ps: float. Static pressure at the exit section (stage #9). [Pa]
As: float. Area of the exit section (stage #9) of a nozzle. If no area is
provided, it is calculated assuming the adiabatic efficiency set in
initialize_nozzle. Otherwise the efficiency is recalculated. A warning
is raised if the adiabatic efficiency is unfeasible.
adiab_efficiency: float. Adiabatic efficiency of the nozzle. If the exit area is
not provided, isentropic nozzle is assumed. Otherwise it is calculated
nozzle_type: string. May be 'con-di' for Laval/convergent-divergent nozzle or
'convergent' for a convergent nozzle. By default 'con-di' is selected.
"""
# Nozzle type
if nozzle_type.lower() == 'condi' or nozzle_type.lower() == 'laval':
    nozzle_type = 'con-di'
elif nozzle_type.lower() == 'con' or nozzle_type.lower() == 'conv':
    nozzle_type = 'convergent'
elif nozzle_type.lower() not in ['con-di', 'convergent']:
    warnings.warn('Unknown nozzle type: {}. Nozzle will be set to con-di (default)'.format(nozzle_type))
    nozzle_type = 'con-di'
# Store static pressure
self._p_s = ps
gamma_to = self.fluid.gamma(self.T_et)
if As is None:
# Calculate discharge area assuming adapted nozzle with a given adiabatic efficiency
self._adiab_efficiency = adiab_efficiency
Ts_Tet = 1 + self.adiab_efficiency*((self.p_s/self.p_et)**((gamma_to-1)/gamma_to) - 1)
self._T_s = self.T_et * Ts_Tet
self._vel_s = self.isent_flow.vel_from_stag_temp(self.T_st, self.T_s)
self._mach_s = self.isent_flow.mach_number(self.vel_s, self.T_s)
self._p_st = self.isent_flow.stag_pressure_from_mach(self.mach_s, self.p_s, self.T_s)
self._rho_s = self.p_s / self.fluid.Rg / self.T_s
self._A_s = self.mflow_s / self.rho_s / self.vel_s
self._rho_st = self.isent_flow.stag_density_from_mach(self.mach_s, self.rho_s, self.T_s)
self._ekin_s = 0.5 * self.vel_s**2
self._h_s = self.h_st - self.ekin_s
# Mach number, area and stagnation pressure must be known before the critical (starred) magnitudes
if self.T_s <= 2/(gamma_to + 1)*self.T_st:
    self._exit_regime = 'supersonic'
    self._T_s_star = self.isent_flow.stat_temp_from_mach(1, self.T_st)
    self._p_s_star = self.isent_flow.stat_pressure_from_mach(1, self.p_st, self.T_st)
    self._A_star = self.A_s*self.mach_s*((gamma_to+1)/2/(1+(gamma_to-1)/2*self.mach_s**2))**((gamma_to+1)/2/(gamma_to-1))
else:
    self._exit_regime = 'subsonic'
    self._T_s_star = np.nan
    self._p_s_star = np.nan
    self._A_star = np.nan
else:
# As is provided, check the adiabatic efficiency
self._A_s = As
# Solve static temperature from mass flow 2nd order equation:
aux_var = 2*self.fluid.cp(self.T_st)*(self.p_s*self.A_s/self.fluid.Rg/self.mflow_s)**2
T9 = (-aux_var + np.sqrt(aux_var**2 + 4*aux_var*self.T_st))/2
self._T_s = T9
self._vel_s = self.isent_flow.vel_from_stag_temp(self.T_st, self.T_s)
self._mach_s = self.isent_flow.mach_number(self.vel_s, self.T_s)
self._p_st = self.isent_flow.stag_pressure_from_mach(self.mach_s, self.p_s, self.T_s)
self._rho_s = self.p_s / self.fluid.Rg / self.T_s
self._rho_st = self.isent_flow.stag_density_from_mach(self.mach_s, self.rho_s, self.T_s)
self._ekin_s = 0.5 * self.vel_s**2
self._h_s = self.h_st - self.ekin_s
# Mach number and stagnation pressure must be known before the critical (starred) magnitudes
if self.T_s <= 2/(gamma_to + 1)*self.T_st:
    self._exit_regime = 'supersonic'
    self._T_s_star = self.isent_flow.stat_temp_from_mach(1, self.T_st)
    self._p_s_star = self.isent_flow.stat_pressure_from_mach(1, self.p_st, self.T_st)
    self._A_star = self.A_s*self.mach_s*((gamma_to+1)/2/(1+(gamma_to-1)/2*self.mach_s**2))**((gamma_to+1)/2/(gamma_to-1))
else:
    self._exit_regime = 'subsonic'
    self._T_s_star = np.nan
    self._p_s_star = np.nan
    self._A_star = np.nan
# Recalculate adiabatic efficiency
adiab_eff = (self.T_s/self.T_et-1)/((self.p_s/self.p_et)**((gamma_to-1)/gamma_to)-1)
self._adiab_efficiency = adiab_eff
if not 0<=adiab_eff<=1:
warnings.warn('Unfeasible nozzle adiabatic efficiency (ad_eff={0}) for adapted nozzle (ps={1}) and fixed area (As={2})'.format(self.adiab_efficiency, self.p_s, self.A_s), UserWarning)
if self.exit_regime=='supersonic':
if nozzle_type=='convergent':
# Recalculate solution assuming critical, convergent nozzles
self.solve_critical_convergent_nozzle(self.p_s, adiab_efficiency=self.adiab_efficiency)
else:
# Calculate nozzle regime. Solution is iterated:
expon = (gamma_to+1)/(2*(gamma_to-1))
area_rela = self.A_star/self.A_s
# Mach function
mach_func = lambda M: M*((gamma_to + 1)/2/( 1+(gamma_to-1)/2*M**2 ))**(expon) - area_rela
# Subsonic solution (pressure at which choking occurs)
subsonic_mach, _, _ = num_iters.variable_step_roots(x0=0.5, func=mach_func, dxmax=.2, verbosity=True)
# Supersonic solution (pressure at which the supersonic nozzle is adapted).
supersonic_mach, _, _ = num_iters.variable_step_roots(x0=1.5, func=mach_func, dxmax=.2, verbosity=True)
self._pchoke = self.isent_flow.stat_pressure_from_mach(subsonic_mach, self.p_st, self.T_st)
self._padapt = self.isent_flow.stat_pressure_from_mach(supersonic_mach, self.p_st, self.T_st)
return
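The choke and adaptation pressures above come from the two roots of the area-Mach relation, one subsonic and one supersonic, for the same area ratio. A self-contained sketch of that root structure using plain bisection in place of the module's `num_iters.variable_step_roots` (hypothetical helper; `area_ratio` stands for A*/A_s):

```python
def area_mach_func(M, area_ratio, gamma=1.4):
    """A*/A as a function of Mach, minus the target ratio; its two positive
    roots are the subsonic and supersonic solutions for a given area ratio."""
    expon = (gamma + 1.0) / (2.0 * (gamma - 1.0))
    return M * ((gamma + 1.0) / 2.0 / (1.0 + (gamma - 1.0) / 2.0 * M ** 2)) ** expon - area_ratio


def bisect_root(f, lo, hi, tol=1e-10, max_iter=200):
    """Plain bisection; assumes f(lo) and f(hi) bracket a sign change."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid) < tol:
            return mid
        if flo * fmid < 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)


area_ratio = 0.5  # A*/A_s; always <= 1, with equality only at a sonic exit
subsonic = bisect_root(lambda M: area_mach_func(M, area_ratio), 1e-6, 1.0)
supersonic = bisect_root(lambda M: area_mach_func(M, area_ratio), 1.0, 10.0)
```

Both roots bracket M = 1: the subsonic one fixes the choking pressure, the supersonic one the adapted-nozzle pressure, mirroring the two iterations above.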
def solve_critical_convergent_nozzle(self, ps=None, As=None, adiab_efficiency=1):
"""
Critical nozzle (choked) with convergent geometry (thus critical area is
the discharge area A_s). Solves the nozzle assuming critical conditions and:
+ ps: If ps is provided, the discharge area and adiabatic efficiency are
calculated
+ As: if As is provided, the corresponding adiabatic efficiency and static
pressure are calculated. If the efficiency is impossible a warning is raised
+ adiab_efficiency: If no ps nor As are provided, the adiabatic efficiency is
used to calculate the nozzle. By default the nozzle is assumed isentropic.
Inputs:
-------
ps: float. Static discharge pressure [Pa]
As: float. Discharge, critical area [m**2]
adiab_efficiency: float. Adiabatic efficiency of the nozzle. By default is 1 (isentropic)
"""
gamma_to = self.fluid.gamma(self.T_et)
# Exit mach number and static temperature:
self._mach_s = 1
self._exit_regime = 'supersonic'
self._T_s = self.isent_flow.stat_temp_from_mach(1, self.T_st)
self._vel_s = self.isent_flow.sound_speed(self.T_s)
if (As is None) and not (ps is None):
# Store static pressure
self._p_s = ps
# Calculate the critical discharge area given the adiabatic efficiency
self._p_st = self.isent_flow.stag_pressure_from_mach(self.mach_s, self.p_s, self.T_s)
self._rho_s = self.p_s / self.fluid.Rg / self.T_s
self._A_s = self.mflow_s / self.rho_s / self.vel_s
self._rho_st = self.isent_flow.stag_density_from_mach(self.mach_s, self.rho_s, self.T_s)
self._ekin_s = 0.5 * self.vel_s**2
self._h_s = self.h_st - self.ekin_s
# Recalculate adiabatic efficiency
adiab_eff = (self.T_s/self.T_et-1)/((self.p_s/self.p_et)**((gamma_to-1)/gamma_to)-1)
self._adiab_efficiency = adiab_eff
if not 0<=adiab_eff<=1:
warnings.warn('Unfeasible nozzle adiabatic efficiency (ad_eff={0}) for adapted nozzle (ps={1}) and fixed area (As={2})'.format(self.adiab_efficiency, self.p_s, self.T_s), UserWarning)
elif not (As is None) and (ps is None):
# As is provided
self._A_s = As
self._rho_s = self.mflow_s / self.A_s / self.vel_s
self._rho_st = self.isent_flow.stag_density_from_mach(self.mach_s, self.rho_s, self.T_s)
self._p_s = self.rho_s * self.fluid.Rg * self.T_s
self._p_st = self.isent_flow.stag_pressure_from_mach(self.mach_s, self.p_s, self.T_s)
self._ekin_s = 0.5 * self.vel_s**2
self._h_s = self.h_st - self.ekin_s
# Recalculate adiabatic efficiency
adiab_eff = (self.T_s/self.T_et-1)/((self.p_s/self.p_et)**((gamma_to-1)/gamma_to)-1)
self._adiab_efficiency = adiab_eff
if not 0<=adiab_eff<=1:
warnings.warn('Unfeasible nozzle adiabatic efficiency (ad_eff={0}) for adapted nozzle (ps={1}) and fixed area (As={2})'.format(self.adiab_efficiency, self.p_s, self.T_s), UserWarning)
elif (As is None) and (ps is None):
# Calculate ps from the adiabatic efficiency
self._adiab_efficiency = adiab_efficiency
# Static pressure
self._p_s = self.p_et * (1 + (self.T_s/self.T_et -1)/self.adiab_efficiency)**(gamma_to/(gamma_to-1))
self._rho_s = self.p_s / self.fluid.Rg / self.T_s
self._A_s = self.mflow_s / self.rho_s / self.vel_s
self._rho_st = self.isent_flow.stag_density_from_mach(self.mach_s, self.rho_s, self.T_s)
self._p_st = self.isent_flow.stag_pressure_from_mach(self.mach_s, self.p_s, self.T_s)
self._ekin_s = 0.5 * self.vel_s**2
self._h_s = self.h_st - self.ekin_s
else:
warnings.warn('If the nozzle is critical, the area and the static pressure at the discharge section cannot be set at the same time. The static pressure at the discharge section will be dismissed and recalculated.')
# As is provided
self._A_s = As
self._rho_s = self.mflow_s / self.A_s / self.vel_s
self._rho_st = self.isent_flow.stag_density_from_mach(self.mach_s, self.rho_s, self.T_s)
self._p_s = self.rho_s * self.fluid.Rg * self.T_s
self._p_st = self.isent_flow.stag_pressure_from_mach(self.mach_s, self.p_s, self.T_s)
self._ekin_s = 0.5 * self.vel_s**2
self._h_s = self.h_st - self.ekin_s
# Recalculate adiabatic efficiency
adiab_eff = (self.T_s/self.T_et-1)/((self.p_s/self.p_et)**((gamma_to-1)/gamma_to)-1)
self._adiab_efficiency = adiab_eff
if not 0<=adiab_eff<=1:
warnings.warn('Unfeasible nozzle adiabatic efficiency (ad_eff={0}) for adapted nozzle (ps={1}) and fixed area (As={2})'.format(self.adiab_efficiency, self.p_s, self.T_s), UserWarning)
# Critical conditions:
self._p_s_star = self.p_s
self._T_s_star = self.T_s
self._A_star = self.A_s
if self.A_s>=self.A_e:
warnings.warn('Exit area must be smaller than entry area in a convergent, critical nozzle. Results may be unfeasible')
return
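With M = 1 fixed at the throat, the mass flow, stagnation state and critical area used above are tied together by the classic choked-flow relation. A perfect-gas sketch (constant `gamma` and `Rg` are assumptions; the class evaluates cp at the local temperature instead):

```python
import math


def choked_mass_flow(p_t, T_t, A_star, gamma=1.4, Rg=287.0):
    """Choked mass flow through a sonic throat:
    mdot = A* * p_t / sqrt(T_t) * sqrt(gamma/Rg) * (2/(gamma+1))**((gamma+1)/(2*(gamma-1)))."""
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return A_star * p_t / math.sqrt(T_t) * math.sqrt(gamma / Rg) * term
```

The same number falls out of rho* a* A* evaluated at the starred (M = 1) static state, which is effectively how the method above recovers `A_s` from `mflow_s`.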
def solve_generic_nozzle(self, ps=None, As=None, adiab_efficiency=1, nozzle_type='con-di'):
"""
Generic nozzle solver.
"""
# Nozzle type
if nozzle_type.lower() == 'condi' or nozzle_type.lower() == 'laval':
    nozzle_type = 'con-di'
elif nozzle_type.lower() == 'con' or nozzle_type.lower() == 'conv':
    nozzle_type = 'convergent'
elif nozzle_type.lower() not in ['con-di', 'convergent']:
    warnings.warn('Unknown nozzle type: {}. Nozzle will be set to con-di (default)'.format(nozzle_type))
    nozzle_type = 'con-di'
gamma_to = self.fluid.gamma(self.T_et)
if (As is None) and (ps is None):
warnings.warn("Exit area or exit static pressure must be provided to solve a generic nozzle.")
elif (As is None) and not (ps is None):
# Calculate discharge area assuming known adiabatic efficiency and static pressure
self._adiab_efficiency = adiab_efficiency
self._p_s = ps
self.solve_from_static_exit_pressure(self.p_s, adiab_efficiency=self.adiab_efficiency)
elif not (As is None) and (ps is None):
# As is provided
self._A_s = As
T_s_function = lambda T_s: self.p_et * (1 + (T_s/self.T_et - 1)/self.adiab_efficiency)**(gamma_to/(gamma_to-1)) - self.mflow_s*self.fluid.Rg*T_s/self.A_s/np.sqrt(2*self.fluid.cp(self.T_st)*(self.T_st - T_s))
T_s_solution, _, _ = num_iters.variable_step_roots(x0=220, func=T_s_function, dxmax=5, verbosity=True)
p_s = self.mflow_s*self.fluid.Rg*T_s_solution/self.A_s / self.isent_flow.vel_from_stag_temp(self.T_st, T_s_solution)
self._p_s = p_s
self._T_s = T_s_solution
self._vel_s = self.isent_flow.vel_from_stag_temp(self.T_st, self.T_s)
self._mach_s = self.isent_flow.mach_number(self.vel_s, self.T_s)
self._p_st = self.isent_flow.stag_pressure_from_mach(self.mach_s, self.p_s, self.T_s)
if self.T_s <= 2/(gamma_to + 1)*self.T_st:
self._exit_regime = 'supersonic'
self._T_s_star = self.isent_flow.stat_temp_from_mach(1, self.T_st)
self._p_s_star = self.isent_flow.stat_pressure_from_mach(1, self.p_st, self.T_st)
self._A_star = self.A_s*self.mach_s*((gamma_to+1)/2/(1+(gamma_to-1)/2*self.mach_s**2))**((gamma_to+1)/2/(gamma_to-1))
else:
self._exit_regime = 'subsonic'
self._T_s_star = np.nan
self._p_s_star = np.nan
| |
# -*- coding: utf-8 -*-
"""
Created on Mon Nov 30 14:53:07 2020
Data of the 138-bus test case used in the Distribution Expansion Planning Model proposed by Muñoz-Delgado et al. (2014).
Reference:
<NAME>., <NAME>., & <NAME>. (2014). Joint expansion planning of distributed generation and distribution networks. IEEE Transactions on Power Systems, 30(5), 2579-2590.
DOI: 10.1109/TPWRS.2014.2364960
@Code Author: <NAME>
"""
import numpy as np
import pandas as pd
def power_out(k,speed):
if k == 1:
WG = np.array([[3, 4.0],
[4, 20.0],
[5, 50.0],
[6, 96.0],
[7, 156.0],
[8, 238.0],
[9, 340.0],
[10, 466.0],
[11, 600.0],
[12, 710.0],
[13, 790.0],
[14, 850.0],
[15, 880.0],
[16, 905.0],
[17, 910.0]]
)
elif k == 2:
WG = np.array([[2, 3.0],
[3, 25.0],
[4, 82.0],
[5, 174.0],
[6, 321.0],
[7, 532.0],
[8, 815.0],
[9, 1180.0],
[10, 1580.0],
[11, 1810.0],
[12, 1980.0],
[13, 2050.0]]
)
if k == 1 and speed < 3:
Pr = 0
elif k == 1 and speed >= 17:
Pr = 0.91
elif k == 2 and speed < 2:
Pr = 0
elif k == 2 and speed >= 13:
Pr = 2.05
else:
speed_aux1 = int(speed)
speed_aux2 = speed_aux1 + 1
loc_aux1 = np.where(speed_aux1 == WG[:,0])[0].item()
loc_aux2 = np.where(speed_aux2 == WG[:,0])[0].item()
Pr_aux1 = (speed*WG[loc_aux1,1])/speed_aux1
Pr_aux2 = (speed*WG[loc_aux2,1])/speed_aux2
Pr = ((Pr_aux1+Pr_aux2)/2)/1000
return Pr
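`power_out` averages two speed-scaled neighbouring points of the turbine power curve. A plain linear interpolation over the same kind of (speed, kW) table is an alternative sketch (hypothetical helper, not the scheme used above):

```python
import numpy as np


def interp_power(speed, curve):
    """Linear interpolation on a (speed, power) curve: zero below cut-in,
    rated power above the last tabulated speed."""
    speeds, powers = curve[:, 0], curve[:, 1]
    if speed < speeds[0]:
        return 0.0
    if speed >= speeds[-1]:
        return float(powers[-1])
    return float(np.interp(speed, speeds, powers))


# First three points of the k=1 curve above, for illustration
curve = np.array([[3, 4.0], [4, 20.0], [5, 50.0]])
```

For example `interp_power(3.5, curve)` lands midway between 4.0 and 20.0 kW.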
# =============================================================================
# System Data
# =============================================================================
n_bus = 138 #Number of buses
n_branches = 151 #Number of branches
load_factor = [0.7, 0.83, 1]
#EFF = Existing Fixed Feeder
#ERF = Existing Replaceable Feeder
#NRF = New Replacement Feeder
#NAF = New Added Feeder
line_data = pd.read_csv("138_line_data.csv")
branch = []
for i in range(line_data.shape[0]):
s = line_data['From'][i]
r = line_data['to'][i]
l = np.round(line_data['Lenght'][i],2)
line_type = line_data['Type'][i]
branch.append(((s, r), l, line_type))
load_zone = pd.read_csv("138_load_zone.csv")
peak_demand = np.full((load_zone.shape[0],10),0,dtype=float)
for i in range(0,load_zone.shape[0]):
for j in range(1,10+1):
peak_demand[i,j-1] = load_zone[str(j)][i]/1000
#Zones A = 1, B = 2, C = 3
#Buses= 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 ... 138
node_zone = np.full((1,load_zone.shape[0]),0,dtype=int)
for i in range(0,load_zone.shape[0]):
if load_zone['Zone'][i] == 'A':
node_zone[0,i] = 1
elif load_zone['Zone'][i] == 'B':
node_zone[0,i] = 2
elif load_zone['Zone'][i] == 'C':
node_zone[0,i] = 3
wind_speed = np.array([#Load Level (m/s)
#1 2 3
[8.53, 9.12, 10.04], #Zone A
[6.13, 7.26, 7.11], #Zone B
[4.13, 5.10, 5.56] #Zone C
])
# =============================================================================
# Sets of Indexes
# =============================================================================
B = np.arange(1, len(load_factor)+1, dtype=int) #Set of Load Levels
T = np.arange(1, np.shape(peak_demand)[1]+1, dtype=int) #Set of Time Stages
L = ["EFF", "ERF", "NRF", "NAF"] #Set of Feeder Types
#C = Conventional
#W = Wind Generation
P = ["C", "W"] #Set of Generator Types
#ET = Existing Transformer
#NT = New Transformer
TR = ["ET", "NT"] #Set of Transformer Types
# =============================================================================
# Sets of Alternatives
# =============================================================================
K_l = {"EFF": [1], #Sets of available alternatives for feeders
"ERF": [1],
"NRF": [1, 2],
"NAF": [1, 2]
}
K_p = {"C": [1, 2], #Sets of available alternatives for generators
"W": [1, 2]
}
K_tr = {"ET": [1], #Sets of available alternatives for transformers
"NT": [1, 2]
}
# =============================================================================
# Sets of Branches
# =============================================================================
Upsilon_l = {"EFF": [],
"ERF": [],
"NRF": [],
"NAF": []
}
for branch_type in L: #Set of branches with feeders of type l
for b in branch:
if b[2] == branch_type:
s = b[0][0]
r = b[0][1]
Upsilon_l[branch_type].append((s,r))
Upsilon_l["NRF"] = list(Upsilon_l["ERF"])  # NRF candidates are the existing replaceable branches (copy to avoid aliasing)
# =============================================================================
# Sets of Nodes
# =============================================================================
Omega_SS = [136, 137, 138] #Set of substation nodes
Omega_SSE = [136, 137] # Fixing eq14
Omega_SSN = [138] # Fixing eq14
Omega_l_s = {"EFF": [[] for i in range(0,n_bus)], #Sets of nodes connected to node s by a feeder of type l
"ERF": [[] for i in range(0,n_bus)],
"NRF": [[] for i in range(0,n_bus)],
"NAF": [[] for i in range(0,n_bus)]
}
for branch_type in L:
for (s,r) in Upsilon_l[branch_type]:
Omega_l_s[branch_type][(s,r)[0]-1].append((s,r)[1])
Omega_l_s[branch_type][(s,r)[1]-1].append((s,r)[0])
Omega_LN_t = {t: [indx+1 for indx, value in enumerate(peak_demand[:, t-1]) if value > 0]  #Load nodes (nonzero peak demand) at each time stage
              for t in range(1, np.shape(peak_demand)[1] + 1)}
Omega_N = np.arange(1, n_bus+1, dtype=int) #Set of system nodes
Omega_p = {"C": [10, 28, 38, 53, 64, 94, 108, 117, 126, 133], #Sets of nodes connected to node s by distributed generation
"W": [31, 52, 78, 94, 103, 113, 114, 116, 120, 122]
}
# =============================================================================
# Energy Costs
# =============================================================================
#Load Levels
# 1 2 3
C_SS_b = [57.7, 70, 85.3] #the costs of the energy supplied by all substations
#DG units
C_Ep_k = {"C": [47, 45], #Conventional DG
"W": [0, 0] #Wind DG
}
#Cost for unserved energy
C_U = 2000
# =============================================================================
# Investment Costs
# =============================================================================
C_Il_k = {"NRF": [29870, 39310], #Investment cost coefficients of feeders
"NAF": [25030, 34920]
}
C_INT_k = [500000, 950000] #Investment cost coefficients of new transformers
C_Ip_k = {"C": [500000, 490000], #Investment cost coefficients of generators
"W": [1850000, 1840000]
}
C_ISS_s = {136: 100000, #Investment cost coefficients of substations
137: 100000,
138: 150000
}
# =============================================================================
# Maintenance Costs
# =============================================================================
C_Ml_k = {"EFF": [450], #Maintenance cost coefficients of feeders
"ERF": [450],
"NRF": [450, 450],
"NAF": [450, 450]
}
C_Mp_k = {"C": [0.05*0.9*500000*1, 0.05*0.9*490000*2], #Maintenance cost coefficients of generators
"W": [0.05*0.9*1850000*0.91, 0.05*0.9*1840000*2.05]
}
C_Mtr_k = {"ET": [2000], #Maintenance cost coefficients of transformers
"NT": [1000, 3000]
}
# =============================================================================
# System's Data
# =============================================================================
D__st = peak_demand #Actual nodal peak demand
Dtio_stb = np.full((np.shape(Omega_N)[0],np.shape(T)[0],np.shape(B)[0]),0,dtype=float) #fictitious nodal demand
for s in range(np.shape(Omega_N)[0]):
for t in range(np.shape(T)[0]):
for b in range(np.shape(B)[0]):
if (s+1 in Omega_p["C"] or s+1 in Omega_p["W"]) and s+1 in Omega_LN_t[t+1]:
Dtio_stb[s,t,b] = 1
else:
Dtio_stb[s,t,b] = 0
Fup_l_k = {"EFF": [6.28], #Upper limit for actual current flows through (MVA)
"ERF": [6.28],
"NRF": [9.00, 12.00],
"NAF": [6.28, 9.00]
}
Gup_p_k = {"C": [1.00, 2.00], #Rated capacities of generators
"W": [0.91, 2.05]
}
# Ref: https://wind-turbine.com/download/101655/enercon_produkt_en_06_2015.pdf
Gmax_W_sktb = np.full((np.shape(Omega_N)[0],np.shape(K_p["W"])[0],np.shape(T)[0],np.shape(B)[0]),0,dtype=float) #maximum wind power availability.
for s in range(np.shape(Omega_N)[0]): #Bus
for k in range(np.shape(K_p["W"])[0]): #Option
for t in range(np.shape(T)[0]): #Stage
for b in range(np.shape(B)[0]): #Load Level
zone = node_zone[0,s]
speed = wind_speed[zone-1,b]
Gmax_W_sktb[s,k,t,b] = power_out(k+1,speed)
Gup_tr_k = {"ET": [12], #Upper limit for current injections of transformers.
"NT": [7.5, 15]
}
Vbase = 13.8 #kV
V_ = 0.95*Vbase #Lower bound for nodal voltages
Vup = 1.05*Vbase #Upper bound for nodal voltages
V_SS = 1.05*Vbase #Voltage at the substations
l__sr = np.full((np.shape(Omega_N)[0],np.shape(Omega_N)[0]),0,dtype=float) #Feeder length.
for b in branch:
s, r = b[0]
l__sr[s-1,r-1] = b[1]
l__sr[r-1,s-1] = b[1]
n__DG = np.add.reduce([np.shape(Omega_p[p]) for p in P])[0] #Number of candidate nodes for installation of distributed generation
n__T = np.shape(T)[0] #number of time stages
pf = 0.9 #System power factor
H = Vup - V_ #Ref: DOI: 10.1109/TPWRS.2017.2764331
# =============================================================================
# Assets Data
# =============================================================================
i = 7.1/100 #Annual interest rate.
IB__t = [5000000, 5000000, 5000000, 5000000, 5000000, 5000000, 5000000, 5000000, 5000000, 5000000] #Investment budget for stage t
Eta_l = {"NRF": 25, #Lifetimes of feeders in year
"NAF": 25
}
Eta_NT = 15 #Lifetime of new transformers
Eta_p = {"C": 20, #Lifetime of generators
"W": 20
}
Eta_SS = 100 #Lifetime of substations
RR_l = {"NRF": (i*(1+i)**Eta_l["NRF"])/((1+i)**Eta_l["NRF"] - 1), #Capital recovery rates for investment in feeders
"NAF": (i*(1+i)**Eta_l["NAF"])/((1+i)**Eta_l["NAF"] - 1)
}
RR_NT = (i*(1+i)**Eta_NT)/((1+i)**Eta_NT - 1) #Capital recovery rates for investment in new transformers
RR_p = {"C": (i*(1+i)**Eta_p["C"])/((1+i)**Eta_p["C"] - 1), #Capital recovery rates for investment in generators
"W": (i*(1+i)**Eta_p["W"])/((1+i)**Eta_p["W"] - 1)
}
RR_SS = i #Capital recovery rates for investment in substations.
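The `RR_*` factors above are instances of the annuity (capital recovery) formula RR = i*(1+i)**eta / ((1+i)**eta - 1). A one-function sketch:

```python
def capital_recovery_rate(i, lifetime):
    """Annual payment per unit of invested capital, for interest rate i over `lifetime` years."""
    growth = (1.0 + i) ** lifetime
    return i * growth / (growth - 1.0)
```

With i = 0.071 and a 25-year feeder lifetime this reproduces RR_l; as the lifetime grows, the rate tends to i itself, which is why RR_SS for the 100-year substations is taken as simply i.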
Z_l_k = {"EFF": [0.557], #Unitary impedance magnitude of feeders
"ERF": [0.557],
"NRF": [0.478, 0.423],
"NAF": [0.557, 0.478]
}
Z_tr_k = {"ET": [0.16], #impedance magnitude of transformers
"NT": [0.25, 0.13]
}
Delta__b = [2000, 5760, 1000] #Duration of load level b
Mi__b = load_factor #Loading | |
from __future__ import print_function
import os
import random
import signal
import numpy as np
from robolearn.old_utils.sampler import Sampler
from robolearn.old_agents import GPSAgent
from robolearn.old_algos.gps.gps import GPS
from robolearn.old_costs.cost_action import CostAction
from robolearn.old_costs.cost_fk import CostFK
from robolearn.old_costs.cost_sum import CostSum
from robolearn.old_costs.cost_utils import RAMP_FINAL_ONLY, RAMP_CONSTANT
from robolearn.old_costs.cost_utils import evall1l2term
from robolearn.old_envs import BigmanEnv
from robolearn.old_policies.lin_gauss_init import init_pd, init_demos
from robolearn.old_policies.policy_opt.policy_opt_tf import PolicyOptTf
from robolearn.old_policies.policy_opt.tf_models import tf_network
from robolearn.old_policies.policy_prior import ConstantPolicyPrior # For MDGPS
from robolearn.old_utils.dynamics.dynamics_lr_prior import DynamicsLRPrior
from robolearn.old_utils.dynamics.dynamics_prior_gmm import DynamicsPriorGMM
from robolearn.old_utils.iit.iit_robots_params import bigman_params
from robolearn.old_utils.print_utils import change_print_color
from robolearn.old_utils.robot_model import RobotModel
from robolearn.old_utils.tasks.bigman.lift_box_utils import Reset_condition_bigman_box_gazebo
from robolearn.old_utils.tasks.bigman.lift_box_utils import create_bigman_box_condition
from robolearn.old_utils.tasks.bigman.lift_box_utils import create_box_relative_pose
from robolearn.old_utils.tasks.bigman.lift_box_utils import create_hand_relative_pose
from robolearn.old_utils.tasks.bigman.lift_box_utils import spawn_box_gazebo
from robolearn.old_utils.tasks.bigman.lift_box_utils import task_space_torque_control_demos, \
load_task_space_torque_control_demos
from robolearn.old_utils.traj_opt.traj_opt_lqr import TrajOptLQR
np.set_printoptions(precision=4, suppress=True, linewidth=1000)
def kill_everything(_signal=None, _frame=None):
print("\n\033[1;31mThe script has been killed by the user!!")
os._exit(1)
signal.signal(signal.SIGINT, kill_everything)
# ################## #
# ################## #
# ### PARAMETERS ### #
# ################## #
# ################## #
# Task parameters
Ts = 0.01
Treach = 5
Tlift = 0 # 3.8
Tinter = 0 # 0.5
Tend = 0 # 0.7
# EndTime = 4 # Using final time to define the horizon
EndTime = Treach + Tinter + Tlift + Tend # Using final time to define the horizon
init_with_demos = False
demos_dir = None # 'TASKSPACE_TORQUE_CTRL_DEMO_2017-07-21_16:32:39'
seed = 6
random.seed(seed)
np.random.seed(seed)
# BOX
box_x = 0.70
box_y = 0.00
box_z = 0.0184
box_yaw = 0 # Degrees
box_size = [0.4, 0.5, 0.3]
final_box_height = 0.0
box_relative_pose = create_box_relative_pose(box_x=box_x, box_y=box_y, box_z=box_z, box_yaw=box_yaw)
# Robot Model (It is used to calculate the IK cost)
#robot_urdf_file = os.environ["ROBOTOLOGY_ROOT"]+'/configs/ADVR_shared/bigman/urdf/bigman.urdf'
robot_urdf_file = os.environ["ROBOTOLOGY_ROOT"]+'/robots/iit-bigman-ros-pkg/bigman_urdf/urdf/bigman.urdf'
robot_model = RobotModel(robot_urdf_file)
LH_name = 'LWrMot3'
RH_name = 'RWrMot3'
l_soft_hand_offset = np.array([0.000, -0.030, -0.210])
r_soft_hand_offset = np.array([0.000, 0.030, -0.210])
touching_box_config = np.array([0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.,
0., 0., 0.,
0.0568, 0.2386, -0.2337, -1.6803, 0.2226, 0.0107, 0.5633,
#0., 0., 0., -1.5708, 0., 0., 0.,
0., 0.,
0.0568, -0.2386, 0.2337, -1.6803, -0.2226, 0.0107, -0.5633])
#0., 0., 0., -1.5708, 0., 0., 0.])
# ################### #
# ################### #
# ### ENVIRONMENT ### #
# ################### #
# ################### #
change_print_color.change('BLUE')
print("\nCreating Bigman environment...")
# Robot configuration
interface = 'ros'
body_part_active = 'RA'
body_part_sensed = 'RA'
command_type = 'effort'
left_hand_rel_pose = create_hand_relative_pose([0, 0, 0, 1, 0, 0, 0],
hand_x=0.0, hand_y=box_size[1]/2-0.02, hand_z=0.0, hand_yaw=0)
# left_hand_rel_pose[:] = left_hand_rel_pose[[3, 4, 5, 6, 0, 1, 2]] # Changing from 'pos+orient' to 'orient+pos'
right_hand_rel_pose = create_hand_relative_pose([0, 0, 0, 1, 0, 0, 0],
hand_x=0.0, hand_y=-box_size[1]/2+0.02, hand_z=0.0, hand_yaw=0)
# right_hand_rel_pose[:] = right_hand_rel_pose[[3, 4, 5, 6, 0, 1, 2]] # Changing from 'pos+orient' to 'orient+pos'
reset_condition_bigman_box_gazebo_fcn = Reset_condition_bigman_box_gazebo()
observation_active = [{'name': 'joint_state',
'type': 'joint_state',
'ros_topic': '/xbotcore/bigman/joint_states',
# 'fields': ['link_position', 'link_velocity', 'effort'],
'fields': ['link_position', 'link_velocity'],
# 'joints': bigman_params['joint_ids']['UB']},
'joints': bigman_params['joint_ids'][body_part_sensed]},
{'name': 'prev_cmd',
'type': 'prev_cmd'},
{'name': 'distance_left_arm',
'type': 'fk_pose',
'body_name': LH_name,
'body_offset': l_soft_hand_offset,
'target_offset': left_hand_rel_pose,
'fields': ['orientation', 'position']},
{'name': 'distance_right_arm',
'type': 'fk_pose',
'body_name': RH_name,
'body_offset': r_soft_hand_offset,
'target_offset': right_hand_rel_pose,
'fields': ['orientation', 'position']},
# {'name': 'ft_left_arm',
# 'type': 'fk_vel',
# 'ros_topic': None,
# 'body_name': LH_name,
# 'body_offset': l_soft_hand_offset,
# 'fields': ['orientation', 'position']},
# {'name': 'ft_left_arm',
# 'type': 'ft_sensor',
# 'ros_topic': '/xbotcore/bigman/ft/l_arm_ft',
# 'fields': ['force', 'torque']},
# {'name': 'ft_right_arm',
# 'type': 'ft_sensor',
# 'ros_topic': '/xbotcore/bigman/ft/r_arm_ft',
# 'fields': ['force', 'torque']},
# {'name': 'ft_left_leg',
# 'type': 'ft_sensor',
# 'ros_topic': '/xbotcore/bigman/ft/l_leg_ft',
# 'fields': ['force', 'torque']},
# {'name': 'ft_right_leg',
# 'type': 'ft_sensor',
# 'ros_topic': '/xbotcore/bigman/ft/r_leg_ft',
# 'fields': ['force', 'torque']},
# {'name': 'imu1',
# 'type': 'imu',
# 'ros_topic': '/xbotcore/bigman/imu/imu_link',
# 'fields': ['orientation', 'angular_velocity', 'linear_acceleration']},
# {'name': 'optitrack',
# 'type': 'optitrack',
# 'ros_topic': '/optitrack/relative_poses',
# 'fields': ['orientation', 'position'],
# 'bodies': ['box']},
]
state_active = [{'name': 'joint_state',
'type': 'joint_state',
'fields': ['link_position', 'link_velocity'],
'joints': bigman_params['joint_ids'][body_part_sensed]},
{'name': 'prev_cmd',
'type': 'prev_cmd'},
{'name': 'distance_left_arm',
'type': 'fk_pose',
'body_name': LH_name,
'body_offset': l_soft_hand_offset,
'target_offset': left_hand_rel_pose,
'fields': ['orientation', 'position']},
{'name': 'distance_right_arm',
'type': 'fk_pose',
'body_name': RH_name,
'body_offset': r_soft_hand_offset,
'target_offset': right_hand_rel_pose,
'fields': ['orientation', 'position']},
# {'name': 'optitrack',
# 'type': 'optitrack',
# 'fields': ['orientation', 'position'],
# 'bodies': ['box']} # check if it is better relative position with EE(EEs)
]
optional_env_params = {
'temp_object_name': 'box'
}
# Spawn the box first because we are running in simulation
spawn_box_gazebo(box_relative_pose, box_size=box_size)
# Create a BIGMAN ROS EnvInterface
bigman_env = BigmanEnv(interface=interface, mode='simulation',
body_part_active=body_part_active, command_type=command_type,
observation_active=observation_active,
state_active=state_active,
cmd_freq=int(1/Ts),
robot_dyn_model=robot_model,
optional_env_params=optional_env_params,
reset_simulation_fcn=reset_condition_bigman_box_gazebo_fcn)
# reset_simulation_fcn=reset_condition_bigman_box_gazebo)
action_dim = bigman_env.action_dim
state_dim = bigman_env.state_dim
observation_dim = bigman_env.obs_dim
print("Bigman Environment OK. body_part_active:%s (action_dim=%d). Command_type:%s" % (body_part_active, action_dim,
command_type))
# ################# #
# ################# #
# ##### AGENT ##### #
# ################# #
# ################# #
change_print_color.change('CYAN')
print("\nCreating Bigman Agent...")
policy_params = {
'network_model': tf_network, # tf_network, multi_modal_network, multi_modal_network_fp
'network_params': {
'n_layers': 1, # Number of hidden layers
'dim_hidden': [40], # List of size per n_layers
'obs_names': bigman_env.get_obs_info()['names'],
'obs_dof': bigman_env.get_obs_info()['dimensions'], # DoF for observation data tensor
},
# Initialization.
'init_var': 0.1, # Initial policy variance.
'ent_reg': 0.0, # Entropy regularizer (Used to update policy variance)
# Solver hyperparameters.
'iterations': 5000, # Number of iterations per inner iteration (Default:5000). Recommended: 1000?
'batch_size': 15,
'lr': 0.001, # Base learning rate (by default it's fixed).
'lr_policy': 'fixed', # Learning rate policy.
'momentum': 0.9, # Momentum.
'weight_decay': 0.005, # Weight decay.
'solver_type': 'Adam', # Solver type (e.g. 'SGD', 'Adam', etc.).
# set gpu usage.
'use_gpu': 1, # Whether or not to use the GPU for training.
'gpu_id': 0,
'random_seed': 1,
'fc_only_iterations': 0, # TODO: Iterations training only the fully-connected layers (relevant for CNN policies)?
# 'weights_file_prefix': EXP_DIR + 'policy',
}
policy_opt = {
'type': PolicyOptTf,
'hyperparams': policy_params
}
bigman_agent = GPSAgent(act_dim=action_dim, obs_dim=observation_dim, state_dim=state_dim, policy_opt=policy_opt)
print("Bigman Agent:%s OK\n" % type(bigman_agent))
# ################# #
# ################# #
# ##### COSTS ##### #
# ################# #
# ################# #
# Action Cost
act_cost = {
'type': CostAction,
'wu': np.ones(action_dim) * 1e-4,
'target': None, # Target action value
}
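For reference, the `wu` weights above enter a quadratic penalty on the commands; a minimal standalone sketch of such a cost (illustrative names and form, not the actual `CostAction` implementation):

```python
import numpy as np

def quadratic_action_cost(u, wu, target=None):
    """Illustrative quadratic action cost: l(u) = 0.5 * (u - target)^T diag(wu) (u - target)."""
    if target is None:
        target = np.zeros_like(u)
    diff = u - target
    cost = 0.5 * np.sum(wu * diff ** 2)
    grad = wu * diff  # dl/du
    return cost, grad

# With the per-dimension weight 1e-4 used in act_cost above:
u = np.array([1.0, -2.0, 0.5])
cost, grad = quadratic_action_cost(u, np.ones(3) * 1e-4)
```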
# State Cost
target_distance_right_arm = np.zeros(6)
# state_cost_distance = {
# 'type': CostState,
# 'ramp_option': RAMP_QUADRATIC, # How target cost ramps over time. RAMP_* :CONSTANT, LINEAR, QUADRATIC, FINAL_ONLY
# 'l1': 0.1, # Weight for l1 norm
# 'l2': 1.0, # Weight for l2 norm
# 'alpha': 1e-2, # Constant added in square root in l1 norm
# 'wp_final_multiplier': 10.0, # Weight multiplier on final time step.
# 'data_types': {
# 'distance_left_arm': {
# # 'wp': np.ones_like(target_state), # State weights - must be set.
# 'wp': np.array([1.0, 1.0, 1.0, 3.0, 3.0, 1.0]), # State weights - must be set.
# 'target_state': target_distance_left_arm, # Target state - must be set.
# 'average': None, # (12, 3),
# 'data_idx': bigman_env.get_state_info(name='distance_left_arm')['idx']
# },
# 'distance_right_arm': {
# # 'wp': np.ones_like(target_state), # State weights - must be set.
# 'wp': np.array([1.0, 1.0, 1.0, 3.0, 3.0, 1.0]), # State weights - must be set.
# 'target_state': target_distance_right_arm, # Target state - must be set.
# 'average': None, # (12, 3),
# 'data_idx': bigman_env.get_state_info(name='distance_right_arm')['idx']
# },
# },
# }
RAfk_cost = {
'type': CostFK,
'ramp_option': RAMP_CONSTANT, # How target cost ramps over time. RAMP_* :CONSTANT, LINEAR, QUADRATIC, FINAL_ONLY
'target_pose': target_distance_right_arm,
'tgt_data_type': 'state', # 'state' or 'observation'
'tgt_idx': bigman_env.get_state_info(name='distance_right_arm')['idx'],
'op_point_name': RH_name,
'op_point_offset': r_soft_hand_offset,
'joints_idx': bigman_env.get_state_info(name='link_position')['idx'][7:],
'joint_ids': bigman_params['joint_ids']['RA'],
'robot_model': robot_model,
# 'wp': np.array([1.0, 1.0, 1.0, 0.7, 0.8, 0.6]), # one dim less because 'quat' error | 1)orient 2)pos
'wp': np.array([1.0, 1.0, 1.0, 6.0, 6.0, 3.0]), # one dim less because 'quat' error | 1)orient 2)pos
'evalnorm': evall1l2term,
'l1': 1.0, # 1.0, # 1.0, # Weight for l1 norm: log(d^2 + alpha) --> Lorentzian rho-function Precise placement at the target
'l2': 1.0, # 1.0, #1.0e-3, # Weight for l2 norm: d^2 --> Encourages to quickly get the object in the vicinity of the target
'alpha': 1.0e-2, # e-5, # Constant added in square root in l1 norm
'wp_final_multiplier': 1, # 10
}
RAfk_l1_cost = {
'type': CostFK,
'ramp_option': RAMP_CONSTANT, # How target cost ramps over time. RAMP_* :CONSTANT, LINEAR, QUADRATIC, FINAL_ONLY
'target_pose': target_distance_right_arm,
'tgt_data_type': 'state', # 'state' or 'observation'
'tgt_idx': bigman_env.get_state_info(name='distance_right_arm')['idx'],
'op_point_name': RH_name,
'op_point_offset': r_soft_hand_offset,
'joints_idx': bigman_env.get_state_info(name='link_position')['idx'][7:],
'joint_ids': bigman_params['joint_ids']['RA'],
'robot_model': robot_model,
# 'wp': np.array([1.0, 1.0, 1.0, 0.7, 0.8, 0.6]), # one dim less because 'quat' error | 1)orient 2)pos
'wp': np.array([1.0, 1.0, 1.0, 6.0, 6.0, 3.0]), # one dim less because 'quat' error | 1)orient 2)pos
'evalnorm': evall1l2term,
'l1': 1.0, # 1.0, # 1.0, # Weight for l1 norm: log(d^2 + alpha) --> Lorentzian rho-function Precise placement at the target
'l2': 0.0, # 1.0, #1.0e-3, # Weight for l2 norm: d^2 --> Encourages to quickly get the object in the vicinity of the target
'alpha': 1.0e-2, # e-5, # Constant added in square root in l1 norm
'wp_final_multiplier': 1, # 10
}
RAfk_l2_cost = {
'type': CostFK,
'ramp_option': RAMP_CONSTANT, # How target cost ramps over time. RAMP_* :CONSTANT, LINEAR, QUADRATIC, FINAL_ONLY
'target_pose': target_distance_right_arm,
'tgt_data_type': 'state', # 'state' or 'observation'
'tgt_idx': bigman_env.get_state_info(name='distance_right_arm')['idx'],
'op_point_name': RH_name,
'op_point_offset': r_soft_hand_offset,
'joints_idx': bigman_env.get_state_info(name='link_position')['idx'][7:],
'joint_ids': bigman_params['joint_ids']['RA'],
'robot_model': robot_model,
# 'wp': np.array([1.0, 1.0, 1.0, 0.7, 0.8, 0.6]), # one dim less because 'quat' error | 1)orient 2)pos
env)
estrutura = self.load_template('structure', env)
dados = self.gettemplate('payload_6.yml', env)
reponame = dados['params']['Projeto']
app = NewTemplate('codepipeline_role',
'codebuild_role', 'DevSecOps_Role')
resources = {}
cf_codebuild = app.generate_codebuild(
dados['runtime'], template_pipeline, dados['stages'], dados['params'], env, imageCustom)
cf_source = app.generate_sources(
dados['stages'], env, reponame, 'codebuild_role', 'release-1.19')
cf_action = app.generate_action(
dados['stages'], template_pipeline, cf_codebuild, env)
resources.update(cf_source)
resources.update(cf_action)
cf_stages = app.generate_stage(
template_pipeline, resources, env, estrutura)
cf_pipeline = app.generate_pipeline(cf_stages, f"{reponame}-{env}")
cf = self.gerando_cloudformation(cf_pipeline)
template = json.dumps(cf)
app.save_swap(reponame, template, env, '00000')
assert os.path.isdir('swap')
assert os.path.isfile(
'swap/Pipeline-Python-develop-00000.json')
os.remove('swap/Pipeline-Python-develop-00000.json')
def test_deve_criar_pasta_swap(self, params, imageCustom):
for pipe in params['templates']:
env = 'develop'
name_template = pipe
template_pipeline = self.load_template(name_template, env)
estrutura = self.load_template('structure', env)
dados = self.gettemplate('payload_6.yml', env)
reponame = dados['params']['Projeto']
app = NewTemplate('codepipeline_role',
'codebuild_role', 'DevSecOps_Role')
resources = {}
cf_codebuild = app.generate_codebuild(
dados['runtime'], template_pipeline, dados['stages'], dados['params'], env, imageCustom)
cf_source = app.generate_sources(
dados['stages'], env, reponame, 'codebuild_role', 'release-1.19')
cf_action = app.generate_action(
dados['stages'], template_pipeline, cf_codebuild, env)
resources.update(cf_source)
resources.update(cf_action)
cf_stages = app.generate_stage(
template_pipeline, resources, env, estrutura)
cf_pipeline = app.generate_pipeline(cf_stages, f"{reponame}-{env}")
cf = self.gerando_cloudformation(cf_pipeline)
template = json.dumps(cf)
shutil.rmtree('swap')
app.save_swap(reponame, template, env, '00000')
assert os.path.isdir('swap')
assert os.path.isfile(
'swap/Pipeline-Python-develop-00000.json')
os.remove('swap/Pipeline-Python-develop-00000.json')
os.rmdir('swap')
def test_deve_retornar_url_da_pipeline(self, params, imageCustom):
for pipe in params['templates']:
env = 'develop'
name_template = pipe
template_pipeline = self.load_template(name_template, 'develop')
estrutura = self.load_template('structure', env)
depends = self.load_template('depends', env)
dados = self.gettemplate('payload_6.yml', env)
app = NewTemplate('codepipeline_role',
'codebuild_role', 'DevSecOps_Role')
template_params = {
'env': env,
'runtime': dados['runtime'],
'stages': dados['stages'],
'account': '000000',
'pipeline_stages': template_pipeline,
'params': dados['params'],
'release': 'release-10',
'imageCustom': imageCustom,
'structure': estrutura,
'depends': depends
}
file_template = app.generate(tp=template_params)
print(file_template)
assert os.path.isdir('swap')
assert os.path.isfile(
'swap/Pipeline-Python-develop-000000.json')
os.remove('swap/Pipeline-Python-develop-000000.json')
def test_deve_verificar_a_estrutura_da_pipeline(self, params, imageCustom, payloads):
for name_template in params['templates']:
for payload in payloads:
env = 'develop'
template_pipeline = self.load_template(name_template, env)
estrutura = self.load_template('structure', env)
depends = self.load_template('depends', env)
dados = self.gettemplate(payload, env)
codepipeline_role = "arn:aws:iam::033921349789:role/RoleCodepipelineRole"
codebuild_role = "arn:aws:iam::033921349789:role/RoleCodeBuildRole"
DevSecOps_Role = "arn:aws:iam::033921349789:role/RoleCodeBuildRole"
app = NewTemplate(codepipeline_role,
codebuild_role, DevSecOps_Role)
template_params = {
'env': env,
'runtime': dados['runtime'],
'stages': dados['stages'],
'account': '000000',
'pipeline_stages': template_pipeline,
'params': dados['params'],
'release': 'release-10',
'imageCustom': imageCustom,
'structure': estrutura,
'depends': depends
}
file_template = app.generate(tp=template_params)
# Opening the generated pipeline
ft = open(file_template)
ftemplate = json.loads(ft.read())
ft.close()
resources = ftemplate['Resources'].keys()
codebuilds = []
sg = []
for resource in resources:
if ftemplate['Resources'][resource]['Type'] == 'AWS::CodeBuild::Project':
name = ftemplate['Resources'][resource]['Properties']['Name']
codebuilds.append(name)
elif ftemplate['Resources'][resource]['Type'] == 'AWS::EC2::SecurityGroup':
sg.append(ftemplate['Resources'][resource])
for resource in resources:
if ftemplate['Resources'][resource]['Type'] == 'AWS::CodePipeline::Pipeline':
for stages in (ftemplate['Resources'][resource]['Properties']['Stages']):
for action in stages['Actions']:
if action['ActionTypeId']['Category'] == 'Build':
assert action['Configuration']['ProjectName'] in codebuilds
assert sg
print(payload)
if payload == 'payload_6.yml':
assert len(
ftemplate['Resources']['PipelinePythonDevelop']['Properties']['Stages']) == 5
else:
assert len(
ftemplate['Resources']['PipelinePythonDevelop']['Properties']['Stages']) == 3
actions = ftemplate['Resources']['PipelinePythonDevelop']['Properties']['Stages']
if payload == 'payload_1.yml':
print(len(actions[2]['Actions']))
assert len(actions) == 3
assert len(actions[0]['Actions']) == 2
assert len(actions[1]['Actions']) == 8
assert len(actions[2]['Actions']) == 2
elif payload == 'payload_2.yml':
print(len(actions[2]['Actions']))
assert len(actions) == 3
assert len(actions[0]['Actions']) == 2
assert len(actions[1]['Actions']) == 8
assert len(actions[2]['Actions']) == 2
elif payload == 'payload_3.yml':
print(len(actions[2]['Actions']))
assert len(actions) == 3
assert len(actions[0]['Actions']) == 3
assert len(actions[1]['Actions']) == 8
assert len(actions[2]['Actions']) == 2
elif payload == 'payload_4.yml':
print(len(actions[2]['Actions']))
assert len(actions) == 3
assert len(actions[0]['Actions']) == 3
assert len(actions[1]['Actions']) == 8
assert len(actions[2]['Actions']) == 2
elif payload == 'payload_5.yml':
print(len(actions[2]['Actions']))
assert len(actions) == 3
assert len(actions[0]['Actions']) == 3
assert len(actions[1]['Actions']) == 9
assert len(actions[2]['Actions']) == 2
elif payload == 'payload_6.yml':
print(actions[4])
assert len(actions) == 5
assert len(actions[0]['Actions']) == 2
assert len(actions[1]['Actions']) == 8
assert len(actions[2]['Actions']) == 2
assert len(actions[4]['Actions']) == 2
os.remove('swap/Pipeline-Python-develop-000000.json')
def test_deve_retornar_pipeline_com_action_obrigatorio_com_source_personalizado(self, params, payloads, imageCustom):
"""
This test validates overriding a mandatory codebuild (such as the build) with a customized source.
"""
for name_template in params['templates']:
env = 'develop'
template_pipeline = self.load_template(name_template, env)
estrutura = self.load_template('structure', env)
depends = self.load_template('depends', env)
dados = self.gettemplate('payload_8.yml', env)
codepipeline_role = "arn:aws:iam::033921349789:role/RoleCodepipelineRole"
codebuild_role = "arn:aws:iam::033921349789:role/RoleCodeBuildRole"
DevSecOps_Role = "arn:aws:iam::033921349789:role/RoleCodeBuildRole"
app = NewTemplate(codepipeline_role,
codebuild_role, DevSecOps_Role)
template_params = {
'env': env,
'runtime': dados['runtime'],
'stages': dados['stages'],
'account': '000000',
'pipeline_stages': template_pipeline,
'params': dados['params'],
'release': 'release-10',
'imageCustom': imageCustom,
'structure': estrutura,
'depends': depends
}
file_template = app.generate(tp=template_params)
# Opening the generated pipeline
ft = open(file_template)
ftemplate = json.loads(ft.read())
ft.close()
resources = ftemplate['Resources'].keys()
l_actions = ftemplate['Resources']['PipelinePythonDevelop']['Properties']['Stages']
for actions in l_actions:
for action in actions['Actions']:
if action['ActionTypeId']['Category'] != 'Source':
print(action)
if action['Name'] == 'Build':
assert [{'Name': 'Normalizacao'}] == [
item for item in action['InputArtifacts'] if item['Name'] == 'Normalizacao']
assert 'Normalizacao' == action['Configuration']['PrimarySource']
if action['Name'] == 'Testunit':
assert [{'Name': 'Normalizacao'}] == [
item for item in action['InputArtifacts'] if item['Name'] == 'Normalizacao']
assert 'Normalizacao' == action['Configuration']['PrimarySource']
if action['Name'] == 'Sonar':
assert [{'Name': 'Normalizacao'}] == [
item for item in action['InputArtifacts'] if item['Name'] == 'Normalizacao']
assert 'Normalizacao' == action['Configuration']['PrimarySource']
def test_deve_retornar_pipeline_com_action_customizado_com_multiplos_sources(self, params, payloads, imageCustom):
"""
This test validates a customized action with multiple input sources.
"""
for name_template in params['templates']:
env = 'develop'
template_pipeline = self.load_template(name_template, env)
estrutura = self.load_template('structure', env)
depends = self.load_template('depends', env)
dados = self.gettemplate('payload_8.yml', env)
codepipeline_role = "arn:aws:iam::033921349789:role/RoleCodepipelineRole"
codebuild_role = "arn:aws:iam::033921349789:role/RoleCodeBuildRole"
DevSecOps_Role = "arn:aws:iam::033921349789:role/RoleCodeBuildRole"
app = NewTemplate(codepipeline_role,
codebuild_role, DevSecOps_Role)
template_params = {
'env': env,
'runtime': dados['runtime'],
'stages': dados['stages'],
'account': '000000',
'pipeline_stages': template_pipeline,
'params': dados['params'],
'release': 'release-10',
'imageCustom': imageCustom,
'structure': estrutura,
'depends': depends
}
file_template = app.generate(tp=template_params)
# Opening the generated pipeline
ft = open(file_template)
ftemplate = json.loads(ft.read())
ft.close()
resources = ftemplate['Resources'].keys()
l_actions = ftemplate['Resources']['PipelinePythonDevelop']['Properties']['Stages']
for actions in l_actions:
for action in actions['Actions']:
if action['ActionTypeId']['Category'] != 'Source':
if action['Name'] == 'Normalizacao':
print(action)
assert [{'Name': 'App'}, {'Name': 'App2'}, {
'Name': 'App3'}] == action['InputArtifacts']
assert 'App' == action['Configuration']['PrimarySource']
if action['Name'] == 'Testmultant':
print(action)
assert [{'Name': 'Build'}
] == action['InputArtifacts']
assert 'Build' == action['Configuration']['PrimarySource']
def test_deve_retornar_pipeline_com_stages_ordenados(self, params, payloads, imageCustom):
"""
This test validates that the generated pipeline stages are ordered as expected.
"""
for name_template in params['templates']:
env = 'develop'
template_pipeline = self.load_template(name_template, env)
estrutura = self.load_template('structure', env)
depends = self.load_template('depends', env)
dados = self.gettemplate('payload_9.yml', env)
codepipeline_role = "arn:aws:iam::033921349789:role/RoleCodepipelineRole"
codebuild_role = "arn:aws:iam::033921349789:role/RoleCodeBuildRole"
DevSecOps_Role = "arn:aws:iam::033921349789:role/RoleCodeBuildRole"
app = NewTemplate(codepipeline_role,
codebuild_role, DevSecOps_Role)
template_params = {
'env': env,
'runtime': dados['runtime'],
'stages': dados['stages'],
'account': '000000',
'pipeline_stages': template_pipeline,
'params': dados['params'],
'release': 'release-10',
'imageCustom': imageCustom,
'structure': estrutura,
'depends': depends
}
file_template = app.generate(tp=template_params)
# Opening the generated pipeline
ft = open(file_template)
ftemplate = json.loads(ft.read())
ft.close()
resources = ftemplate['Resources'].keys()
l_actions = ftemplate['Resources']['PipelinePythonDevelop']['Properties']['Stages']
list_stages = [stage['Name'] for stage in l_actions]
print(list_stages)
assert ['Source', 'Continuous_Integration', 'Seguranca',
'Seguranca3', 'DeployDev'] == list_stages
def test_deve_retornar_codebuild_eh_madatorio(self, params):
for name_template in params['templates']:
buildName = ['Build', 'Customizado']
env = 'develop'
template_pipeline = self.load_template(name_template, env)
pipeline = NewTemplate('codepipeline_role',
'codebuild_role', 'DevSecOps_Role')
mandatorio = pipeline.codebuild_mandatory(
buildName[0], template_pipeline)
customizado = pipeline.codebuild_mandatory(
buildName[1], template_pipeline)
assert mandatorio
assert not customizado
def test_deve_retornar_codebuild_com_source_personalizado(self, params):
for name_template in params['templates']:
env = 'develop'
template_pipeline = self.load_template(name_template, env)
pipeline = NewTemplate('codepipeline_role',
'codebuild_role', 'DevSecOps_Role')
source1 = pipeline.check_is_not_codebuild(
'Source', template_pipeline)
source2 = pipeline.check_is_not_codebuild(
'Build', template_pipeline)
source3 = pipeline.check_is_not_codebuild(
'Agendamento1', template_pipeline)
source4 = pipeline.check_is_not_codebuild(
'AprovacaoPO', template_pipeline)
assert source1
assert not source2
assert source3
assert source4
def test_deve_retornar_pipeline_master(self, params, imageCustom, payloads):
for pipe in params['templates']:
cf_pipeline = self.generate_pipeline(
pipe, 'master', 'payload_1.yml', imageCustom)
cf = self.gerando_cloudformation(cf_pipeline)
print(cf['Resources'].keys())
assert len(cf['Resources']) == 7
def create_pipeline(self, name_template, env, imageCustom, payload):
template_pipeline = self.load_template(name_template, env)
estrutura = self.load_template('structure', env)
depends = self.load_template('depends', env)
dados = self.gettemplate(payload, env)
codepipeline_role = "arn:aws:iam::033921349789:role/RoleCodepipelineRole"
codebuild_role = "arn:aws:iam::033921349789:role/RoleCodeBuildRole"
DevSecOps_Role = "arn:aws:iam::033921349789:role/RoleCodeBuildRole"
app = NewTemplate(codepipeline_role,
codebuild_role, DevSecOps_Role)
template_params = {
'env': env,
'runtime': dados['runtime'],
'stages': dados['stages'],
'account': '000000',
'pipeline_stages': template_pipeline,
'params': dados['params'],
'release': 'release-10',
'imageCustom': imageCustom,
'structure': estrutura,
'depends': depends
}
file_template = app.generate(tp=template_params)
# Opening the generated pipeline
ft = open(file_template)
ftemplate = json.loads(ft.read())
ft.close()
return ftemplate
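The structure checks in these tests repeatedly walk Resources -> Stages -> Actions; a small hypothetical helper sketching that traversal on a toy template (not the project's real fixtures):

```python
def count_actions_by_category(template, pipeline_logical_id):
    """Count pipeline actions per ActionTypeId category in a CloudFormation template dict."""
    counts = {}
    for stage in template['Resources'][pipeline_logical_id]['Properties']['Stages']:
        for action in stage['Actions']:
            category = action['ActionTypeId']['Category']
            counts[category] = counts.get(category, 0) + 1
    return counts

# Toy template with the same shape the asserts above rely on:
toy = {'Resources': {'Pipe': {'Type': 'AWS::CodePipeline::Pipeline', 'Properties': {'Stages': [
    {'Name': 'Source', 'Actions': [{'ActionTypeId': {'Category': 'Source'}}]},
    {'Name': 'Deploy', 'Actions': [{'ActionTypeId': {'Category': 'Approval'}},
                                   {'ActionTypeId': {'Category': 'Deploy'}}]},
]}}}}
counts = count_actions_by_category(toy, 'Pipe')
```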
def test_deve_retornar_pipeline_com_action_de_aprovacao(self, params, payloads, imageCustom):
"""
This test validates that the pipeline contains the expected approval actions.
"""
for name_template in params['templates']:
env = 'master'
ftemplate = self.create_pipeline(
name_template, env, imageCustom, 'payload_11.yml')
env_ = env.capitalize()
pipe_name = f'PipelinePython{env_}'
l_actions = ftemplate['Resources'][pipe_name]['Properties']['Stages']
cont = 0
for actions in l_actions:
for action in actions['Actions']:
if action['ActionTypeId']['Category'] == 'Approval':
cont += 1
assert cont == 2
def test_deve_verificar_se_action_nao_estao_vazios(self, params, payloads, imageCustom):
"""
This test verifies that the generated actions are not empty.
the *RCPT* commands.
Raises:
ConnectionResetError: If the connection with the server is
unexpectedly lost.
SMTPCommandFailedError: If the server refuses our EHLO/HELO
greeting.
SMTPCommandFailedError: If the server refuses our MAIL command.
SMTPCommandFailedError: If the server refuses our DATA command.
SMTPNoRecipientError: If the server refuses all given
recipients.
Returns:
list: A list containing an SMTPCommandFailedError for each
recipient that was refused. Each error carries the code and
message returned by the server.
When everything runs smoothly, the returned list is empty.
.. note:: The connection remains open afterwards. It's your responsibility
to close it. A good practice is to use the asynchronous context
manager instead. See :meth:`SMTP.__aenter__` for further details.
"""
# Make sure `recipients` is a list:
if isinstance(recipients, str):
recipients = [recipients]
# Set some defaults values:
if mail_options is None:
mail_options = []
if rcpt_options is None:
rcpt_options = []
# EHLO or HELO is required:
await self.ehlo_or_helo_if_needed()
if self.supports_esmtp:
if "size" in self.esmtp_extensions:
mail_options.append("size={}".format(len(message)))
await self.mail(sender, mail_options)
errors = []
for recipient in recipients:
try:
await self.rcpt(recipient, rcpt_options)
except SMTPCommandFailedError as e:
errors.append(e)
if len(recipients) == len(errors):
# The server refused all our recipients:
raise SMTPNoRecipientError(errors)
await self.data(message)
# If we got here then somebody got our mail:
return errors
async def send_mail(
self, sender, recipients, message, mail_options=None, rcpt_options=None
):
"""
Alias for :meth:`SMTP.sendmail`.
"""
return await self.sendmail(
sender, recipients, message, mail_options, rcpt_options
)
async def ehlo_or_helo_if_needed(self):
"""
Calls :meth:`SMTP.ehlo` and/or :meth:`SMTP.helo` if needed.
If there hasn't been any previous *EHLO* or *HELO* command this
session, tries to initiate the session. *EHLO* is tried first.
Raises:
ConnectionResetError: If the connection with the server is
unexpectedly lost.
SMTPCommandFailedError: If the server refuses our EHLO/HELO
greeting.
"""
no_helo = self.last_helo_response == (None, None)
no_ehlo = self.last_ehlo_response == (None, None)
if no_helo and no_ehlo:
try:
# First we try EHLO:
await self.ehlo()
except SMTPCommandFailedError:
# EHLO failed, let's try HELO:
await self.helo()
async def close(self):
"""
Cleans up after the connection to the SMTP server has been closed
(voluntarily or not).
"""
if self.writer is not None:
# Close the transport:
try:
self.writer.close()
except OSError as exc:
if exc.errno != errno.ENOTCONN:
raise
self.reset_state()
async def _auth_cram_md5(self, username, password):
"""
Performs an authentication attempt using the CRAM-MD5 mechanism.
Protocol:
1. Send 'AUTH CRAM-MD5' to server ;
2. If the server replies with a 334 return code, we can go on:
1) The challenge (sent by the server) is base64-decoded ;
2) The decoded challenge is hashed using HMAC-MD5 and the user
password as key (shared secret) ;
3) The hashed challenge is converted to a string of lowercase
hexadecimal digits ;
4) The username and a space character are prepended to the hex
digits ;
5) The concatenation is base64-encoded and sent to the server.
6) If the server replies with a return code of 235, user is
authenticated.
Args:
username (str): Identifier of the user trying to authenticate.
password (str): Password of the user.
Raises:
ConnectionResetError: If the connection with the server is
unexpectedly lost.
SMTPAuthenticationError: If the authentication attempt fails.
Returns:
(int, str): A (code, message) 2-tuple containing the server
response.
"""
mechanism = "CRAM-MD5"
code, message = await self.do_cmd("AUTH", mechanism, success=(334,))
decoded_challenge = base64.b64decode(message)
challenge_hash = hmac.new(
key=password.encode("utf-8"), msg=decoded_challenge, digestmod="md5"
)
hex_hash = challenge_hash.hexdigest()
response = "{} {}".format(username, hex_hash)
encoded_response = SMTP.b64enc(response)
try:
code, message = await self.do_cmd(encoded_response, success=(235, 503))
except SMTPCommandFailedError as e:
raise SMTPAuthenticationError(e.code, e.message, mechanism)
return code, message
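The challenge handling in `_auth_cram_md5` can be exercised standalone; the sketch below mirrors steps 2.1-2.5 of the protocol above and checks against the RFC 2195 example values (helper name is illustrative):

```python
import base64
import hmac

def cram_md5_response(username, password, b64_challenge):
    """Build the client reply for a CRAM-MD5 challenge (steps 2.1-2.5 above)."""
    challenge = base64.b64decode(b64_challenge)
    hex_hash = hmac.new(password.encode("utf-8"), challenge, digestmod="md5").hexdigest()
    return base64.b64encode("{} {}".format(username, hex_hash).encode("ascii")).decode("ascii")

# RFC 2195 example: user "tim", shared secret "tanstaaftanstaaf":
b64_challenge = base64.b64encode(b"<1896.697170952@postoffice.reston.mci.net>").decode("ascii")
decoded = base64.b64decode(cram_md5_response("tim", "tanstaaftanstaaf", b64_challenge)).decode("ascii")
```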
async def _auth_login(self, username, password):
"""
Performs an authentication attempt using the LOGIN mechanism.
Protocol:
1. The username is base64-encoded ;
2. The string 'AUTH LOGIN' and a space character are prepended to
the base64-encoded username and sent to the server ;
3. If the server replies with a 334 return code, we can go on:
1) The password is base64-encoded and sent to the server ;
2) If the server replies with a 235 return code, the user is
authenticated.
Args:
username (str): Identifier of the user trying to authenticate.
password (str): Password of the user.
Raises:
ConnectionResetError: If the connection with the server is
unexpectedly lost.
SMTPAuthenticationError: If the authentication attempt fails.
Returns:
(int, str): A (code, message) 2-tuple containing the server
response.
"""
mechanism = "LOGIN"
code, message = await self.do_cmd(
"AUTH", mechanism, SMTP.b64enc(username), success=(334,)
)
try:
code, message = await self.do_cmd(SMTP.b64enc(password), success=(235, 503))
except SMTPCommandFailedError as e:
raise SMTPAuthenticationError(e.code, e.message, mechanism)
return code, message
async def _auth_plain(self, username, password):
"""
Performs an authentication attempt using the PLAIN mechanism.
Protocol:
1. Format the username and password in a suitable way ;
2. The formatted string is base64-encoded ;
3. The string 'AUTH PLAIN' and a space character are prepended to
the base64-encoded username and password and sent to the
server ;
4. If the server replies with a 235 return code, user is
authenticated.
Args:
username (str): Identifier of the user trying to authenticate.
password (str): Password of the user.
Raises:
ConnectionResetError: If the connection with the server is
unexpectedly lost.
SMTPAuthenticationError: If the authentication attempt fails.
Returns:
(int, str): A (code, message) 2-tuple containing the server
response.
"""
mechanism = "PLAIN"
credentials = "\0{}\0{}".format(username, password)
encoded_credentials = SMTP.b64enc(credentials)
try:
code, message = await self.do_cmd(
"AUTH", mechanism, encoded_credentials, success=(235, 503)
)
except SMTPCommandFailedError as e:
raise SMTPAuthenticationError(e.code, e.message, mechanism)
return code, message
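The credential formatting used by `_auth_plain` (RFC 4616: NUL, authcid, NUL, passwd, then base64) shown in isolation; helper name is illustrative:

```python
import base64

def plain_credentials(username, password):
    """AUTH PLAIN initial response: base64 of NUL + authcid + NUL + passwd (RFC 4616)."""
    return base64.b64encode("\0{}\0{}".format(username, password).encode("utf-8")).decode("ascii")

encoded = plain_credentials("tim", "tanstaaftanstaaf")
```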
@staticmethod
def parse_esmtp_extensions(message):
"""
Parses the response given by an ESMTP server after a *EHLO* command.
The response is parsed to build:
- A dict of supported ESMTP extensions (with parameters, if any).
- A list of supported authentication methods.
Returns:
(dict, list): A (extensions, auth_mechanisms) 2-tuple containing
the supported extensions and authentication methods.
"""
extns = {}
auths = []
oldstyle_auth_regex = re.compile(r"auth=(?P<auth>.*)", re.IGNORECASE)
extension_regex = re.compile(
r"(?P<feature>[a-z0-9][a-z0-9\-]*) ?", re.IGNORECASE
)
lines = message.splitlines()
for line in lines[1:]:
# To be able to communicate with as many SMTP servers as possible,
# we have to take the old-style auth advertisement into account.
match = oldstyle_auth_regex.match(line)
if match:
auth = match.group("auth")
auth = auth.lower().strip()
if auth not in auths:
auths.append(auth)
# RFC 1869 requires a space between EHLO keyword and parameters.
# It's actually stricter, in that only spaces are allowed between
# parameters, but we're not going to check for that here.
# Note that the space isn't present if there are no parameters.
match = extension_regex.match(line)
if match:
feature = match.group("feature").lower()
params = match.string[match.end("feature") :].strip()
extns[feature] = params
if feature == "auth":
auths.extend([param.strip().lower() for param in params.split()])
return extns, auths
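A standalone re-implementation of the same EHLO-reply parsing, useful for sanity-checking the regexes above on a sample reply (server name and extensions are made up):

```python
import re

def parse_ehlo_reply(message):
    """Minimal version of the parsing above: first line is the greeting, the rest are extensions."""
    extns, auths = {}, []
    extension_regex = re.compile(r"(?P<feature>[a-z0-9][a-z0-9\-]*) ?", re.IGNORECASE)
    for line in message.splitlines()[1:]:
        match = extension_regex.match(line)
        if match:
            feature = match.group("feature").lower()
            params = match.string[match.end("feature"):].strip()
            extns[feature] = params
            if feature == "auth":
                auths.extend(param.strip().lower() for param in params.split())
    return extns, auths

reply = "mail.example.org at your service\nSIZE 35882577\n8BITMIME\nAUTH LOGIN PLAIN CRAM-MD5"
extns, auths = parse_ehlo_reply(reply)
```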
@staticmethod
def prepare_message(message):
"""
Returns the given message encoded in ascii with a format suitable for
SMTP transmission:
- Makes sure the message is ASCII encoded ;
- Normalizes line endings to '\r\n' ;
- Adds a (second) period at the beginning of lines that start
with a period ;
- Makes sure the message ends with '\r\n.\r\n'.
For further details, please check out RFC 5321 `§ 4.1.1.4`_
and `§ 4.5.2`_.
.. _`§ 4.1.1.4`: https://tools.ietf.org/html/rfc5321#section-4.1.1.4
.. _`§ 4.5.2`: https://tools.ietf.org/html/rfc5321#section-4.5.2
"""
if isinstance(message, bytes):
bytes_message = message
else:
bytes_message = message.encode("ascii")
# The original algorithm uses regexes to do this stuff.
# This one is -IMHO- more pythonic and it is slightly faster.
#
# Another version is even faster, but I chose to keep something
# more pythonic and readable.
# FYI, the fastest way to do all this stuff seems to be
# (according to my benchmarks):
#
# bytes_message.replace(b"\r\n", b"\n") \
# .replace(b"\r", b"\n") \
# .replace(b"\n", b"\r\n")
#
# DOT_LINE_REGEX = re.compile(rb"^\.", re.MULTILINE)
# bytes_message = DOT_LINE_REGEX.sub(b"..", bytes_message)
#
# if not bytes_message.endswith(b"\r\n"):
# bytes_message += b"\r\n"
#
# bytes_message += b"\r\n.\r\n"
lines = []
for line in bytes_message.splitlines():
if line.startswith(b"."):
line = line.replace(b".", b"..", 1)
lines.append(line)
# Recompose the message with <CRLF> only:
bytes_message = b"\r\n".join(lines)
# Make sure message ends with <CRLF>.<CRLF>:
bytes_message += b"\r\n.\r\n"
return bytes_message
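A standalone sketch of the algorithm this method documents (dot-stuffing, CRLF normalization, terminating `<CRLF>.<CRLF>`):

```python
def prepare_message_sketch(message):
    """Minimal re-implementation of the steps documented above."""
    if isinstance(message, bytes):
        bytes_message = message
    else:
        bytes_message = message.encode("ascii")
    lines = []
    for line in bytes_message.splitlines():
        if line.startswith(b"."):
            line = b"." + line  # dot-stuffing (RFC 5321 § 4.5.2)
        lines.append(line)
    # Recompose with <CRLF> endings and terminate with <CRLF>.<CRLF>
    return b"\r\n".join(lines) + b"\r\n.\r\n"
```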
import copy
import json
import logging
import os
import pickle
import random
import statistics
import sys
import time
import numpy as np
import torch
import yaml
from inclearn.lib import factory
from inclearn.lib import metrics, utils, results_utils
from inclearn.lib.network import FeatureGenerator
from inclearn.lib.data.samplers import NPairSampler, AuxSampler
from inclearn.utils import LOGGER as logger
def train(args):
logger.LOGGER.setLevel(args["logging"].upper())
autolabel = _set_up_options(args)
if args["autolabel"]:
args["label"] = autolabel
if args["label"]:
logger.LOGGER.info("Label: {}".format(args["label"]))
try:
os.system("echo '\ek{}\e\\'".format(args["label"]))
except Exception:
pass
if args["resume"] and not os.path.exists(args["resume"]):
raise IOError(f"Saved model {args['resume']} doesn't exist.")
if args["save_model"] != "never" and args["label"] is None:
raise ValueError(f"Saving model every {args['save_model']} but no label was specified.")
seed_list = copy.deepcopy(args["seed"])
device = copy.deepcopy(args["device"])
start_date = utils.get_date()
results_folder = results_utils.get_save_folder(args["model"], start_date, args["label"])
logger.add_file_headler(results_folder)
orders = copy.deepcopy(args["order"])
del args["order"]
if orders is not None:
assert isinstance(orders, list) and len(orders)
assert all(isinstance(o, list) for o in orders)
assert all([isinstance(c, int) for o in orders for c in o])
else:
orders = [None for _ in range(len(seed_list))]
avg_inc_accs, last_accs, forgettings = [], [], []
for i, seed in enumerate(seed_list):
logger.LOGGER.warning("Launching run {}/{}".format(i + 1, len(seed_list)))
args["seed"] = seed
args["device"] = device
start_time = time.time()
for avg_inc_acc, last_acc, forgetting in _train(args, start_date, orders[i], i):
yield avg_inc_acc, last_acc, forgetting, False
avg_inc_accs.append(avg_inc_acc)
last_accs.append(last_acc)
forgettings.append(forgetting)
logger.LOGGER.info("Training finished in {}s.".format(int(time.time() - start_time)))
yield avg_inc_acc, last_acc, forgetting, True
logger.LOGGER.info("Label was: {}".format(args["label"]))
logger.LOGGER.info(
"Results done on {} seeds: avg: {}, last: {}, forgetting: {}".format(
len(seed_list), _aggregate_results(avg_inc_accs), _aggregate_results(last_accs),
_aggregate_results(forgettings)
)
)
logger.LOGGER.info("Individual results avg: {}".format([round(100 * acc, 2) for acc in avg_inc_accs]))
logger.LOGGER.info("Individual results last: {}".format([round(100 * acc, 2) for acc in last_accs]))
logger.LOGGER.info(
"Individual results forget: {}".format([round(100 * acc, 2) for acc in forgettings])
)
logger.LOGGER.info(f"Command was {' '.join(sys.argv)}")
def _train(args, start_date, class_order, run_id):
_set_global_parameters(args)
inc_dataset, model = _set_data_model(args, class_order)
results, results_folder = _set_results(args, start_date)
memory, memory_val, pseudo_memory = None, None, None
metric_logger = metrics.MetricLogger(
inc_dataset.n_tasks, inc_dataset.n_classes, inc_dataset.increments
)
use_unlabeled = args.get("use_unlabeled", False)
logger.LOGGER.info(f'use_unlabeled: {use_unlabeled}')
for task_id in range(inc_dataset.n_tasks):
pseudo_memory_n_samples = args.get("pseudo_memory_n_samples", 2)
task_info, train_loader, val_loader, test_loader, aux_loader, pseudo_memory_loader, pure_new_data = \
inc_dataset.new_task(memory, pseudo_memory, memory_val, pseudo_memory_n_samples=pseudo_memory_n_samples)
if task_info["task"] == args["max_task"]:
break
model.set_task_info(task_info)
# ---------------
# 1. Prepare Task
# ---------------
model.eval()
model.before_task(train_loader, val_loader if val_loader else test_loader)
# -------------
# 2. Train Task
# -------------
pseudo_memory = _train_task(args, model, train_loader, aux_loader, pseudo_memory, pseudo_memory_loader,
pure_new_data, inc_dataset, val_loader, test_loader, run_id, task_id, task_info,
results_folder)
# ----------------
# 3. Conclude Task
# ----------------
model.eval()
_after_task(args, model, inc_dataset, run_id, task_id, results_folder)
# ------------
# 4. Eval Task
# ------------
logger.LOGGER.info("Eval on {}->{}.".format(0, task_info["max_class"]))
ypreds, ytrue = model.eval_task(test_loader)
metric_logger.log_task(
ypreds, ytrue, task_size=task_info["increment"], zeroshot=args.get("all_test_classes")
)
if args["dump_predictions"] and args["label"]:
os.makedirs(
os.path.join(results_folder, "predictions_{}".format(run_id)), exist_ok=True
)
with open(
os.path.join(
results_folder, "predictions_{}".format(run_id),
str(task_id).rjust(len(str(30)), "0") + ".pkl"
), "wb+"
) as f:
pickle.dump((ypreds, ytrue), f)
if args["label"]:
logger.LOGGER.info(args["label"])
logger.LOGGER.info("Avg inc acc: {}.".format(metric_logger.last_results["incremental_accuracy"]))
logger.LOGGER.info("Current acc: {}.".format(metric_logger.last_results["accuracy"]))
logger.LOGGER.info(
"Avg inc acc top5: {}.".format(metric_logger.last_results["incremental_accuracy_top5"])
)
logger.LOGGER.info("Current acc top5: {}.".format(metric_logger.last_results["accuracy_top5"]))
logger.LOGGER.info("Forgetting: {}.".format(metric_logger.last_results["forgetting"]))
logger.LOGGER.info("Cord metric: {:.2f}.".format(metric_logger.last_results["cord"]))
if task_id > 0:
logger.LOGGER.info(
"Old accuracy: {:.2f}, mean: {:.2f}.".format(
metric_logger.last_results["old_accuracy"],
metric_logger.last_results["avg_old_accuracy"]
)
)
logger.LOGGER.info(
"New accuracy: {:.2f}, mean: {:.2f}.".format(
metric_logger.last_results["new_accuracy"],
metric_logger.last_results["avg_new_accuracy"]
)
)
if args.get("all_test_classes"):
logger.LOGGER.info(
"Seen classes: {:.2f}.".format(metric_logger.last_results["seen_classes_accuracy"])
)
logger.LOGGER.info(
"unSeen classes: {:.2f}.".format(
metric_logger.last_results["unseen_classes_accuracy"]
)
)
results["results"].append(metric_logger.last_results)
avg_inc_acc = results["results"][-1]["incremental_accuracy"]
last_acc = results["results"][-1]["accuracy"]["total"]
forgetting = results["results"][-1]["forgetting"]
yield avg_inc_acc, last_acc, forgetting
memory = model.get_memory()
memory_val = model.get_val_memory()
logger.LOGGER.info(
"Average Incremental Accuracy: {}.".format(results["results"][-1]["incremental_accuracy"])
)
if args["label"] is not None:
results_utils.save_results(
results, args["label"], args["model"], start_date, run_id, args["seed"]
)
del model
del inc_dataset
def get_pseudo_memory(aux_loader, model, pseudo_memory, load_folder, save_folder, run_id, task_id, re_mine=False,
n_classes_samples=100):
unlabeled_data_save_path = os.path.join(save_folder, f'pseudo_memory_{task_id}_task_{run_id}.pth')
unlabeled_data_load_path = None
if load_folder is not None:
unlabeled_data_load_path = os.path.join(load_folder, f'pseudo_memory_{task_id}_task_{run_id}.pth')
if unlabeled_data_load_path is not None and os.path.exists(unlabeled_data_load_path):
pseudo_memory = torch.load(unlabeled_data_load_path)
logger.LOGGER.info(f'Loaded existing pseudo data from {unlabeled_data_load_path}.')
new_data, new_label = pseudo_memory[0], pseudo_memory[1]
else:
if pseudo_memory is not None and not re_mine:
existing_pseudo_mem_cls = torch.unique(pseudo_memory[1])
else:
existing_pseudo_mem_cls = None
new_pseudo_memory = model.get_pseudo_memory(aux_loader, existing_cls=existing_pseudo_mem_cls,
n_classes_samples=n_classes_samples)
if existing_pseudo_mem_cls is not None and not re_mine:
new_data = np.concatenate((pseudo_memory[0], new_pseudo_memory[0]), axis=0)
new_label = torch.cat((pseudo_memory[1], new_pseudo_memory[1]), dim=0).cpu()
pseudo_memory = (new_data, new_label)
else:
pseudo_memory = new_pseudo_memory
new_data = new_pseudo_memory[0]
new_label = new_pseudo_memory[1]
logger.LOGGER.info(f'Now unlabeled data: {len(pseudo_memory[0])}')
if (unlabeled_data_load_path is not None and not os.path.exists(unlabeled_data_load_path)) and \
not os.path.exists(unlabeled_data_save_path):
torch.save(pseudo_memory, unlabeled_data_save_path)
logger.LOGGER.info(f'Saved pseudo memory to {unlabeled_data_save_path}.')
return pseudo_memory, new_data, new_label
# ------------------------
# Lifelong Learning phases
# ------------------------
def _train_task(config, model, train_loader, aux_loader, pseudo_memory, pseudo_memory_loader, pure_new_data,
inc_dataset, val_loader, test_loader, run_id, task_id, task_info, results_folder):
pseudo_memory_valid_map_idx = config.get('pseudo_memory_valid_map_idx', 0.5)
if config["resume"] is not None and os.path.isdir(config["resume"]) \
and ((config["resume_first"] and task_id == 0) or not config["resume_first"]):
model.load_parameters(config["resume"], run_id, device=config['device'][0])
logger.LOGGER.info(
"Skipping training phase {} because reloading pretrained model.".format(task_id)
)
elif config["resume"] is not None and os.path.isfile(config["resume"]) and \
os.path.exists(config["resume"]) and task_id == 0:
# In case we resume from a single model file, it's assumed to be from the first task.
# model.network = config["resume"]
model.load_parameters(config["resume"], run_id, device=config['device'][0])
logger.LOGGER.info(
"Skipping initial training phase {} because reloading pretrained model.".
format(task_id)
)
else:
logger.LOGGER.info("Train on {}->{}.".format(task_info["min_class"], task_info["max_class"]))
model.train()
logger.LOGGER.info(f'Pseudo memory feature map selection is {pseudo_memory_valid_map_idx}.')
model.train_task(train_loader, pseudo_memory_loader, val_loader if val_loader else test_loader,
freeze_layers=task_id != 0)
finetuning_config = config.get("finetuning_config")
use_unlabeled = config.get('use_unlabeled', False)
generator_config = config.get("generator_config", {})
train_generator_config = generator_config.get("train_config", {})
batch_size = config.get("labeled_batch_size", 128)
re_mined = config.get("pseudo_re_mined", False)
n_classes_samples = config.get("pseudo_mem_n_classes_samples", 100)
if task_id < task_info["max_task"] - 1 and use_unlabeled:
p = get_pseudo_memory(aux_loader, model, pseudo_memory, config["resume"], results_folder, run_id, task_id,
re_mine=re_mined, n_classes_samples=n_classes_samples)
pseudo_memory = p[0]
new_pseudo_memory = (p[1], p[2])
pseudo_memory_n_samples = train_generator_config.get('train_generator_unlabel_n_samples', 2)
current_pseudo_class = torch.unique(new_pseudo_memory[1])
tmp_pseudo_memory_loader = inc_dataset.get_pseudo_memory_loader(pseudo_memory, pseudo_memory_n_samples,
batch_size=pseudo_memory_n_samples * len(
current_pseudo_class))
# train the generator network
if generator_config:
n_class_mem = train_generator_config.get("train_generator_memory_n_samples", 12)
n_class_new = train_generator_config.get("train_generator_new_n_samples", 12)
model.after_task_intensive(inc_dataset, train_generator=True)
current_memory = model.get_memory()
mem_sampler = AuxSampler(current_memory[1], batch_size=n_class_mem * int(task_info['increment']),
n_sample=n_class_mem)
memory_loader_PK = inc_dataset.get_loader(*current_memory, memory_flags=np.zeros(current_memory[0].shape),
mode="train", sampler=mem_sampler, sampler_init=False)
nb_class = min(len(np.unique(pure_new_data[1])), int(batch_size / n_class_new))
train_sampler = NPairSampler(y=pure_new_data[1], n_classes=nb_class, n_samples=n_class_new)
train_loader_PK = inc_dataset.get_loader(*pure_new_data, memory_flags=np.zeros(len(pure_new_data[1])),
mode="train", sampler=train_sampler, sampler_init=False)
train_generator_data = {
'labeled_loader': train_loader_PK,
'memory_loader': memory_loader_PK,
}
input_dim = generator_config.get("input_dim", 64)
latent_dim = generator_config.get("latent_dim", 64)
num_blocks = generator_config.get("n_blocks", 2)
for cls in range(int(task_info['min_class']), int(task_info['max_class'])):
cls_encoder = FeatureGenerator(input_dim, latent_dim=latent_dim, num_blocks=num_blocks).to(model.device)
lr = train_generator_config.get("lr", 0.1)
model.create_generator_optimizer(cls_encoder.parameters(), lr=lr)
model.class_encoders[cls] = cls_encoder
res = False
if task_id == 0 and config["resume"] is not None:
generator_path = os.path.join(config["resume"], 'generators')
if os.path.exists(generator_path):
res = load_generator_params(model, config, generator_path, run_id, task_info['min_class'],
task_info['max_class'])
use_generators = not config.get('softmax_ce_not_unlabeled', False)
if not res and task_id < task_info["max_task"] - 1 and use_generators:
model.train_task(train_loader, tmp_pseudo_memory_loader, val_loader if val_loader else test_loader,
train_generator_config=train_generator_config,
train_generator_data=train_generator_data)
if task_id >= 0:
save_generator_path = os.path.join(results_folder, 'generator')
if not os.path.isdir(save_generator_path):
os.makedirs(save_generator_path)
save_generator_params(model, save_generator_path, run_id, task_info['min_class'], task_info['max_class'])
# fine-tune
if finetuning_config:
model.fine_tune(pseudo_memory_loader, pseudo_memory_valid_map_idx, val_loader)
return pseudo_memory
def save_generator_params(model, results_folder, run_id, min_class, max_class):
for cls in range(min_class, max_class):
e_save_path = os.path.join(results_folder, f'generators_run{run_id}_class{cls}.pth')
torch.save({'state_dict': model.class_encoders[cls].state_dict()}, e_save_path)
logger.LOGGER.info(f'Saved generator encoder for class {cls} to file {e_save_path}.')
def load_generator_params(model, config, results_folder, run_id, min_class, max_class):
for cls in range(min_class, max_class):
e_save_path = os.path.join(results_folder, f'generators_run{run_id}_class{cls}.pth')
try:
state_dict_saved = torch.load(e_save_path, map_location=config['device'][0])
except Exception as e:
logger.LOGGER.warning(f'Loading file `{e_save_path}` failed. Try to train it again.')
return False
model.class_encoders[cls].load_state_dict(state_dict_saved['state_dict'])
logger.LOGGER.info(f'Loaded generator encoder for class {cls} from file {e_save_path}.')
d_save_path = os.path.join(results_folder, f'generators_run{run_id}.pth')
try:
state_dict_saved = torch.load(d_save_path, map_location=config['device'][0])
except Exception:
logger.LOGGER.warning(f'Loading file `{d_save_path}` failed. Try to train it again.')
return False
return True
def _after_task(config, model, inc_dataset, run_id, task_id, results_folder):
if config["resume"] and os.path.isdir(config["resume"]) and not config["recompute_meta"] \
and ((config["resume_first"] and task_id == 0) or not config["resume_first"]):
model.load_metadata(config["resume"], run_id)
else:
model.after_task_intensive(inc_dataset)
model.after_task(inc_dataset)
if config["label"] and (
config["save_model"] == "task" or
(config["save_model"] == "last" and task_id == inc_dataset.n_tasks - 1) or
(config["save_model"] == "first" and task_id == 0)
):
model.save_parameters(results_folder, run_id)
model.save_metadata(results_folder, run_id)
# ----------
# Parameters
# ----------
def _set_results(config, start_date):
if config["label"]:
results_folder = results_utils.get_save_folder(config["model"], start_date, config["label"])
else:
results_folder = None
if config["save_model"]:
logger.LOGGER.info("Model will be saved at this rhythm: {}.".format(config["save_model"]))
results = results_utils.get_template_results(config)
return results, results_folder
def _set_data_model(config, class_order):
inc_dataset = factory.get_data(config, class_order)
config["classes_order"] = inc_dataset.class_order
model = factory.get_model(config)
model.inc_dataset = inc_dataset
return inc_dataset, model
def _set_global_parameters(config):
_set_seed(config["seed"], config["threads"], config["no_benchmark"], config["detect_anomaly"])
factory.set_device(config)
def _set_seed(seed, nb_threads, no_benchmark, detect_anomaly):
logger.LOGGER.info("Set seed {}".format(seed))
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
if no_benchmark:
logger.LOGGER.warning("CUDA algorithms are not deterministic, but faster!")
else:
logger.LOGGER.warning("CUDA algorithms are deterministic, but very slow!")
torch.backends.cudnn.deterministic = not no_benchmark # This will slow down training.
torch.set_num_threads(nb_threads)
if detect_anomaly:
logger.LOGGER.info("Will detect autograd anomaly.")
torch.autograd.set_detect_anomaly(detect_anomaly)
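The effect of `_set_seed` on the Python and NumPy generators can be checked in isolation (the torch-specific calls are omitted here):

```python
import random
import numpy as np

def set_seed_sketch(seed):
    # Same seeding as above, minus the torch-specific calls.
    random.seed(seed)
    np.random.seed(seed)

set_seed_sketch(42)
first = (random.random(), float(np.random.rand()))
set_seed_sketch(42)
second = (random.random(), float(np.random.rand()))
# first == second: reseeding makes the draws reproducible
```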
def _set_up_options(args):
options_paths = args["options"] or []
autolabel = []
for option_path in options_paths:
if not os.path.exists(option_path):
raise IOError("Options file {} not found.".format(option_path))
# Merge the YAML options file into the current arguments.
with open(option_path) as f:
args.update(yaml.safe_load(f))
autolabel.append(os.path.splitext(os.path.basename(option_path))[0])
return "_".join(autolabel)
"""
dispparam.py
Definition of a class that sets the parameters to be used when an
image is displayed
"""
from math import fabs
import numpy as np
from matplotlib import pyplot as plt
from astropy import units as u
from astropy.coordinates import SkyCoord
# -----------------------------------------------------------------------
class DispParam(object):
"""
A DispParam object sets and stores parameters that are used to govern
what a displayed image looks like
The code here was originally part of the Image class, but has been
split out for clarity.
"""
def __init__(self, plthdu):
"""
Initiates a DispParam object and initializes all of the
attributes
Inputs:
plthdu - A WcsHDU object that contains the data to be displayed
"""
"""
Initialize default display parameters
- The scale for the display (i.e., the data values that correspond
to full black and full white on a greyscale display) are
(by default) set in terms of the "clipped mean" and "clipped rms".
Those values are the mean and rms of the data after a sigma
clipping algorithm has been applied to reject outliers.
- The display min and max values are stored as self.fmin and self.fmax
- For more information see the set_display_limits method
"""
self.found_rms = False # Have clipped rms / mean been calculated?
self.mean_clip = 0.0 # Value of the clipped mean
self.rms_clip = 0.0 # Value of the clipped rms
self.fmin = None # Lower flux limit used in image display
self.fmax = None # Upper flux limit used in image display
self.statsize = 2048 # Stats region size if image is too big
self.statsec = None # Region to use for pixel statistics
self.mode = 'radec' # Default display units are arcsec offsets
self.extval = None # Just label the axes by pixels
self.cmap = plt.cm.YlOrBr_r # This corresponds to the 'gaia' cmap
self.title = None # Title on displayed image
self.dpi = 100. # Dots per inch in saved image
self.facecolor = 'w' # Color for region surrounding the plot
self.zeropos = None # Used to set non-default origin location
""" Link the data to be displayed to this DispParam object """
self.plthdu = plthdu
# -----------------------------------------------------------------------
"""
Use properties to set many/most of the attributes that are associated
with this class.
These attributes include: dpi, mode, title, ...
For an attribute called x, use two declarations as can be seen below:
@property
@x.setter
These two will then allow you to set the value of x via
foo.x = [value]
but the @property construction also allows you to add some evaluation
of what the value is and how to respond. For example, if x needs to
be non-negative, then with the appropriate coding, foo.x = -10 will
actually set foo.x = 0, in a way that is transparent to the user.
"""
# -----------------------------------
""" Dots per inch in the final figure """
@property
def dpi(self):
return self.__dpi
@dpi.setter
def dpi(self, dpi):
if isinstance(dpi, int):
dpi = float(dpi)
if not isinstance(dpi, float):
self.__dpi = 100.
elif dpi < 20.:
print('WARNING: Requested dpi is too low, setting to 100')
self.__dpi = 100.
else:
self.__dpi = dpi
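The validation performed by this setter can be exercised with a minimal stand-in class (hypothetical; `DispParam` itself needs a `WcsHDU` to construct):

```python
class DpiDemo:
    """Stand-in reproducing only the dpi property logic above."""

    def __init__(self):
        self.dpi = 100.

    @property
    def dpi(self):
        return self.__dpi

    @dpi.setter
    def dpi(self, dpi):
        if isinstance(dpi, int):
            dpi = float(dpi)
        if not isinstance(dpi, float):
            self.__dpi = 100.
        elif dpi < 20.:
            self.__dpi = 100.  # too low: silently fall back to the default
        else:
            self.__dpi = dpi

demo = DpiDemo()
demo.dpi = 10        # coerced back to the default
low = demo.dpi
demo.dpi = 150
high = demo.dpi
```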
# -----------------------------------------------------------------------
def set_cmap(self, cmap='gaia'):
"""
Sets the color map for the image display.
Inputs:
cmap - name of the color map to use. There are only a limited
number of choices:
---
None
'gaia' (default)
'gray' or 'grey'
'gray_inv' or 'grey_inv'
'heat' or 'hot'
'jet'
'viridis'
"""
if cmap == 'gray' or cmap == 'grey':
self.cmap = plt.cm.gray
elif cmap == 'gray_inv' or cmap == 'grey_inv':
self.cmap = plt.cm.gray_r
elif cmap == 'heat' or cmap == 'hot':
self.cmap = plt.cm.hot
elif cmap == 'Yl_Or_Br' or cmap == 'gaia':
self.cmap = plt.cm.YlOrBr_r
elif cmap == 'jet':
self.cmap = plt.cm.jet
elif cmap == 'viridis':
self.cmap = plt.cm.viridis
elif cmap == 'plasma':
self.cmap = plt.cm.plasma
elif cmap == 'magma':
self.cmap = plt.cm.magma
elif cmap == 'inferno':
self.cmap = plt.cm.inferno
else:
print(' WARNING - Requested unknown color map. Using gaia'
' colors')
self.cmap = plt.cm.YlOrBr_r
# -----------------------------------------------------------------------
def set_wcsextent(self, zeropos=None):
"""
For making plots with WCS information, it is necessary to define
the boundaries in terms of RA and Dec offsets from the center, in
arcsec. For this purpose, the imshow and contour methods in
matplotlib.pyplot have an 'extent' parameter.
This set_wcsextent method will use the WCS information in the fits
header to properly set the extent parameter values and return them.
These are put into the "extval" container, which is part of the Image
class. extval is a four-element tuple containing the coordinates of
the lower left and upper right corners, in terms of RA and Dec
offsets.
Optional inputs:
zeropos - By default, which happens when zeropos=None, the (0, 0)
point on the output image, as designated by the image
axis labels, will be at the center of the image.
However, you can shift the (0, 0) point to be somewhere
else by setting zeropos. For example, zeropos=(0.5, 0.3)
will shift the origin to the point that would have been
(0.5, 0.3) if the origin were at the center of the image
"""
# self.get_wcs(self['plotim'].header)
data = self.plthdu.data
xpix = [-0.5, data.shape[1]-0.5]
ypix = [-0.5, data.shape[0]-0.5]
ra, dec = self.plthdu.wcsinfo.wcs_pix2world(xpix, ypix, 0)
skycoord = SkyCoord(ra, dec, unit=(u.degree, u.degree))
dalpha, ddelta = skycoord.spherical_offsets_to(skycoord[0])
dalpha -= (dalpha[1] / 2.)
ddelta -= (ddelta[1] / 2.)
extx1 = dalpha[1].to(u.arcsec).value
extx2 = dalpha[0].to(u.arcsec).value
exty1 = ddelta[1].to(u.arcsec).value
exty2 = ddelta[0].to(u.arcsec).value
""" Old version of code below here:
icoords = np.indices(data.shape).astype(np.float32)
pltc = np.zeros(icoords.shape)
pltc[0] = (icoords[0] - data.shape[0] / 2.) * self['input'].pixscale[1]
pltc[1] = (icoords[1] - data.shape[1] / 2.) * self['input'].pixscale[0]
pltc[1] *= -1.
maxi = np.atleast_1d(data.shape) - 1
extx1 = pltc[1][0, 0]
exty1 = pltc[0][0, 0]
extx2 = pltc[1][maxi[0], maxi[1]] - self['input'].pixscale[1]
exty2 = pltc[0][maxi[0], maxi[1]] + self['input'].pixscale[0]
"""
if zeropos is not None:
dx = zeropos[0]
dy = zeropos[1]
else:
dx = 0.
dy = 0.
extx1 -= dx
extx2 -= dx
exty1 -= dy
exty2 -= dy
""" Set the extval values, and also record the zerpos values used """
self.extval = (extx1, extx2, exty1, exty2)
self.zeropos = (dx, dy)
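For a small field with a constant pixel scale, the extent computed above reduces to simple arithmetic (the commented-out "old version" works this way); a pure-NumPy sketch, assuming a scalar `pixscale` in arcsec/pixel:

```python
import numpy as np

def simple_extent(shape, pixscale, zeropos=(0., 0.)):
    """Approximate (extx1, extx2, exty1, exty2) in arcsec offsets from
    the image center. RA increases to the left, hence the x sign flip."""
    ny, nx = shape
    # Offsets of the outer pixel edges relative to the image center.
    x_edges = (np.array([-0.5, nx - 0.5]) - (nx - 1) / 2.) * pixscale
    y_edges = (np.array([-0.5, ny - 0.5]) - (ny - 1) / 2.) * pixscale
    dx, dy = zeropos
    return (-x_edges[0] - dx, -x_edges[1] - dx,
            y_edges[0] - dy, y_edges[1] - dy)
```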
# -----------------------------------------------------------------------
def set_extval(self, mode):
"""
"""
# -----------------------------------------------------------------------
def set_flux_limits(self, fmin=-1., fmax=10., funits='sigma',
mask=None, verbose=False, debug=False):
"""
The method used to set the flux limits for the image display. The
two numbers that are generated by this method will be used for the
vmin and vmax values when the actual call to imshow (from
matplotlib.pyplot) is made. The two values will be stored within the
DispParam class as fmin and fmax.
Inputs:
fmin - Value that is used to set the minimum of the displayed flux
range, where the actual value depends on the value of the
funits parameter (see below).
NOTE: If fmin is None then switch to interactive mode
fmax - Value that is used to set the maximum of the displayed flux
range, where the actual value depends on the value of the
funits parameter (see below).
NOTE: If fmax is None then switch to interactive mode
funits - Either 'sigma' (the default) or 'abs'. Used to determine
the method of setting fmin and fmax.
If funits is 'abs' then the two numbers in the disprange
list just get stored as fmin and fmax.
If funits is 'sigma' (the default) then the two numbers
in disprange represent the numbers of clipped standard
deviations relative to the clipped mean. In that case,
the method will first calculate the clipped mean and
standard deviation and then multiply them by the passed
values.
"""
if self.plthdu is not None:
plthdu = self.plthdu
else:
raise ValueError('The DispParam object was not initialized with'
' an image dataset')
"""
If funits is 'abs', then just set self.fmin and self.fmax directly from
the disprange values if those are set. Otherwise, query the user for
the values.
"""
if funits == 'abs':
""" | |
"""
Sandbox of new developments
Use at your own risks
Photometric package using Astropy Units
=======================================
Defines a Filter class and associated functions to extract photometry.
This also includes functions to keep libraries up to date.
.. note::
integrations are done using :func:`trapz`
Why not Simpson's? Simpson's rule takes sequences of 3 points to
make a quadratic interpolation. In the end, when filters have sharp
edges, the errors due to this "interpolation" are extremely large in
comparison to the uncertainties induced by trapezoidal integration.
"""
from __future__ import print_function, division
import os
from functools import wraps
import numpy as np
import tables
from scipy.integrate import trapz
from .simpletable import SimpleTable
from .vega import Vega
from .config import libsdir
from .licks import reduce_resolution as _reduce_resolution
from .licks import LickIndex, LickLibrary
# directories
# __default__ = libsdir + '/filters.hd5'
# __default__ = libsdir + '/filters'
__default__ = libsdir + '/new_filters.hd5'
__default_lick__ = libsdir + '/licks.dat'
from .ezunits import unit as Unit
class Constants(object):
""" A namespace for constants """
# Planck's constant in erg * sec
h = 6.626075540e-27 * Unit('erg * s')
# Speed of light in cm/s
c = Unit('c').to('AA/s')
def hasUnit(val):
""" Check is an object has units """
return hasattr(val, 'unit') or hasattr(val, 'units')
class set_method_default_units(object):
""" Decorator for classmethods that makes sure that
the inputs of slamb, sflux are in given units
expects the decorated method to be defined as
>> def methodname(self, lamb, flux)
"""
def __init__(self, wavelength_unit, flux_unit, output_unit=None):
self.wavelength_unit = Unit(wavelength_unit)
self.flux_unit = Unit(flux_unit)
self.output_unit = output_unit
@classmethod
def force_units(cls, value, unit):
if unit is None:
return value
try:
return value.to(unit)
except AttributeError:
msg = 'Warning: assuming {0:s} units for unitless object.'
print(msg.format(str(unit)))
return value * unit
def __call__(self, func):
@wraps(func)
def wrapper(filter_, slamb, sflux, *args, **kwargs):
_slamb = set_method_default_units.force_units(slamb,
self.wavelength_unit)
_sflux = set_method_default_units.force_units(sflux,
self.flux_unit)
output = func(filter_, _slamb, _sflux, *args, **kwargs)
return set_method_default_units.force_units(output,
self.output_unit)
return wrapper
def _drop_units(q):
""" Drop the unit definition silently """
try:
return q.value
except AttributeError:
try:
return q.magnitude
except AttributeError:
return q
class UnitFilter(object):
""" Evolution of Filter that makes sure the input spectra and output fluxes
have units to avoid mis-interpretation.
Note the usual (non SI) units of flux definitions:
flam = erg/s/cm**2/AA
fnu = erg/s/cm**2/Hz
photflam = photon/s/cm**2/AA
photnu = photon/s/cm**2/Hz
Define a filter by its name, wavelength and transmission
The type of detector (energy or photon counter) can be specified for
adapting calculations. (default: photon)
Attributes
----------
name: str
name of the filter
cl: float
central wavelength of the filter
norm: float
normalization factor of the filter
lpivot: float
pivot wavelength of the filter
wavelength: ndarray
wavelength sequence defining the filter transmission curve
transmit: ndarray
transmission curve of the filter
dtype: str
detector type, either "photon" or "energy" counter
unit: str
wavelength units
"""
def __init__(self, wavelength, transmit, name='', dtype="photon",
unit=None):
"""Constructor"""
self.name = name
self.set_dtype(dtype)
try: # get units from the inputs
self._wavelength = wavelength.value
unit = str(wavelength.unit)
except AttributeError:
self._wavelength = wavelength
self.set_wavelength_unit(unit)
# make sure input data are ordered and cleaned of weird values.
idx = np.argsort(self._wavelength)
self._wavelength = self._wavelength[idx]
self.transmit = np.clip(transmit[idx], 0., np.nanmax(transmit))
self.norm = trapz(self.transmit, self._wavelength)
self._lT = trapz(self._wavelength * self.transmit, self._wavelength)
self._lpivot = self._calculate_lpivot()
if self.norm > 0:
self._cl = self._lT / self.norm
else:
self._cl = 0.
def _calculate_lpivot(self):
if self.transmit.max() <= 0:
return 0.
if 'photon' in self.dtype:
lpivot2 = self._lT / trapz(self.transmit / self._wavelength,
self._wavelength)
else:
lpivot2 = self.norm / trapz(self.transmit / self._wavelength ** 2,
self._wavelength)
return np.sqrt(lpivot2)
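For a photon counter the pivot wavelength above is lpivot^2 = int(lamb * T dlamb) / int(T / lamb dlamb), which for a perfect top-hat between lamb1 and lamb2 has the closed form sqrt((lamb2^2 - lamb1^2) / (2 ln(lamb2 / lamb1))); a quick numerical check:

```python
import numpy as np

trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz  # renamed in NumPy 2.0

l1, l2 = 4000., 6000.                      # top-hat passband edges, Angstrom
wave = np.linspace(3000., 7000., 100001)
transmit = ((wave >= l1) & (wave <= l2)).astype(float)

lT = trapz(wave * transmit, wave)
lpivot = np.sqrt(lT / trapz(transmit / wave, wave))
analytic = np.sqrt((l2 ** 2 - l1 ** 2) / (2. * np.log(l2 / l1)))
# numerical and analytic pivot wavelengths agree to well under 1 Angstrom
```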
def set_wavelength_unit(self, unit):
""" Set the wavelength units """
try: # get units from the inputs
self.wavelength_unit = str(self._wavelength.unit)
except AttributeError:
self.wavelength_unit = unit
def set_dtype(self, dtype):
""" Set the detector type (photon or energy)"""
_d = dtype.lower()
if "phot" in _d:
self.dtype = "photon"
elif "ener" in _d:
self.dtype = "energy"
else:
raise ValueError('Unknown detector type {0}'.format(dtype))
def info(self, show_zeropoints=True):
""" display information about the current filter"""
msg = """Filter object information:
name: {s.name:s}
detector type: {s.dtype:s}
wavelength units: {s.wavelength_unit}
central wavelength: {s.cl:f}
pivot wavelength: {s.lpivot:f}
effective wavelength: {s.leff:f}
photon wavelength: {s.lphot:f}
minimum wavelength: {s.lmin:f}
maximum wavelength: {s.lmax:f}
norm: {s.norm:f}
effective width: {s.width:f}
fullwidth half-max: {s.fwhm:f}
definition contains {s.transmit.size:d} points"""
print(msg.format(s=self).replace('None', 'unknown'))
# zero points only if units
if (self.wavelength_unit is None) or (not show_zeropoints):
return
print("""
Zeropoints
Vega: {s.Vega_zero_mag:f} mag,
{s.Vega_zero_flux},
{s.Vega_zero_Jy}
{s.Vega_zero_photons}
AB: {s.AB_zero_mag:f} mag,
{s.AB_zero_flux},
{s.AB_zero_Jy}
ST: {s.ST_zero_mag:f} mag,
{s.ST_zero_flux},
{s.ST_zero_Jy}
""".format(s=self))
def __repr__(self):
return "Filter: {0:s}, {1:s}".format(self.name, object.__repr__(self))
@property
def wavelength(self):
""" Unitwise wavelength definition """
if self.wavelength_unit is not None:
return self._wavelength * Unit(self.wavelength_unit)
else:
return self._wavelength
@property
def lmax(self):
""" Calculated as the last value with a transmission at least 1% of
maximum transmission """
cond = (self.transmit / self.transmit.max()) > 1./100
return max(self.wavelength[cond])
@property
def lmin(self):
""" Calculate das the first value with a transmission at least 1% of
maximum transmission """
cond = (self.transmit / self.transmit.max()) > 1./100
return min(self.wavelength[cond])
@property
def width(self):
""" Effective width
Equivalent to the horizontal size of a rectangle with height equal
to maximum transmission and with the same area that the one covered by
the filter transmission curve.
W = int(T dlamb) / max(T)
"""
return (self.norm / max(self.transmit)) * Unit(self.wavelength_unit)
@property
def fwhm(self):
""" the difference between the two wavelengths for which filter
transmission is half maximum
.. note::
This calculation is not exact but rounded to the nearest passband
data points
"""
vals = self.transmit / self.transmit.max() - 0.5
zero_crossings = np.where(np.diff(np.sign(vals)))[0]
lambs = self.wavelength[zero_crossings]
return np.diff(lambs)[0]
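A standalone sketch of the half-maximum crossing estimate used by `fwhm` above (NumPy only; the grid-rounding caveat from the note applies — the result snaps to the nearest passband sample points):

```python
import numpy as np

def fwhm_estimate(wavelength, transmit):
    # Normalize, shift so the half-maximum level sits at zero, then look
    # for sign changes between consecutive samples.
    vals = transmit / transmit.max() - 0.5
    crossings = np.where(np.diff(np.sign(vals)))[0]
    # Spacing between the first two crossings, rounded to grid points.
    lambs = wavelength[crossings]
    return np.diff(lambs)[0]
```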
@property
def lpivot(self):
""" Unitwise wavelength definition """
if self.wavelength_unit is not None:
return self._lpivot * Unit(self.wavelength_unit)
else:
return self._lpivot
@property
def cl(self):
""" Unitwise wavelength definition """
if self.wavelength_unit is not None:
return self._cl * Unit(self.wavelength_unit)
else:
return self._cl
@property
def leff(self):
""" Unitwise Effective wavelength
leff = int (lamb * T * Vega dlamb) / int(T * Vega dlamb)
"""
with Vega() as v:
s = self.reinterp(v.wavelength)
w = s._wavelength
if s.transmit.max() > 0:
leff = np.trapz(w * s.transmit * v.flux.value, w, axis=-1)
leff /= np.trapz(s.transmit * v.flux.value, w, axis=-1)
else:
leff = float('nan')
if s.wavelength_unit is not None:
leff = leff * Unit(s.wavelength_unit)
if self.wavelength_unit is not None:
return leff.to(self.wavelength_unit)
return leff
else:
return leff
@classmethod
def _validate_sflux(cls, slamb, sflux):
""" clean data for inf in input """
_sflux = _drop_units(sflux)
_slamb = _drop_units(slamb)
if np.isinf(_sflux).any():
indinf = np.where(np.isinf(_sflux))
indfin = np.where(np.isfinite(_sflux))
_sflux[indinf] = np.interp(_slamb[indinf], _slamb[indfin],
_sflux[indfin], left=0, right=0)
try:
_unit = str(sflux.unit)
return _sflux * Unit(_unit)
except AttributeError:
return _sflux
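The inf-cleaning step in `_validate_sflux` can be sketched in isolation: drop units, then linearly interpolate each infinite sample from its finite neighbors, zero-filling outside the finite support (hypothetical function name):

```python
import numpy as np

def clean_infinities(slamb, sflux):
    # Replace infinite samples by linear interpolation over the finite
    # neighbors, with zeros outside the finite support (mirrors the
    # np.interp call in _validate_sflux).
    slamb = np.asarray(slamb, dtype=float)
    sflux = np.asarray(sflux, dtype=float).copy()
    bad = np.isinf(sflux)
    if bad.any():
        sflux[bad] = np.interp(slamb[bad], slamb[~bad], sflux[~bad],
                               left=0, right=0)
    return sflux
```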
@classmethod
def _get_zero_like(cls, sflux, axis=-1):
"""return a zero value corresponding to a flux calculation on sflux"""
# _sflux = _drop_units(sflux)
# shape = _sflux.shape
# if axis < 0:
# axis = len(shape) + axis
# newshape = shape[:axis] + shape[axis + 1:]
# return np.zeros(newshape, _sflux.dtype)
return np.zeros_like(sflux).sum(axis=axis)
@property
def lphot(self):
""" Photon distribution based effective wavelength. Defined as
lphot = int(lamb ** 2 * T * Vega dlamb) / int(lamb * T * Vega dlamb)
which we calculate as
lphot = get_flux(lamb * vega) / get_flux(vega)
"""
if self.wavelength_unit is None:
raise AttributeError('Needs wavelength units')
with Vega() as v:
wave = v.wavelength.value
# Cheating units to avoid making a new filter
f_vega = self.get_flux(v.wavelength, v.flux, axis=-1)
f_lamb_vega = self.get_flux(v.wavelength, wave * v.flux, axis=-1)
f_lamb2_vega = self.get_flux(v.wavelength, wave ** 2 * v.flux,
axis=-1)
if 'photon' in self.dtype:
lphot = (f_lamb_vega / f_vega)
else:
lphot = f_lamb2_vega / f_lamb_vega
return (lphot * Unit(str(v.wavelength.unit))).to(self.wavelength_unit)
def _get_filter_in_units_of(self, slamb=None):
w = self.wavelength
if hasUnit(slamb) and hasUnit(w):
return w.to(str(slamb.unit)).value
else:
print("Warning: assuming units are consistent")
return self._wavelength
@set_method_default_units('AA', 'flam',
output_unit='photon*s**-1*cm**-2*AA**-1')
def get_Nphotons(self, slamb, sflux, axis=-1):
"""getNphot the number of photons through the filter
(Ntot / width in the documentation)
getflux() * leff / hc
Parameters
----------
slamb: ndarray(dtype=float, ndim=1)
spectrum wavelength definition domain
sflux: ndarray(dtype=float, ndim=1)
associated flux in erg/s/cm2/AA
Returns
-------
N: float
Number of photons of the spectrum within the filter
"""
passb = self.reinterp(slamb)
wave = passb._wavelength
dlambda = np.diff(wave)
# h = 6.626075540e-27  # erg * s (Planck constant)
# -*- coding: utf-8 -*-
# Copyright 2020 Green Valley Belgium NV
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# @@license_version:1.7@@
import base64
import datetime
import hashlib
import json
import logging
import os
import re
import string
import threading
import time
import traceback
import urllib
import uuid
from copy import deepcopy
from random import choice
from dateutil.parser import parse as datutil_parse
from google.appengine.api import users, mail, urlfetch
from google.appengine.api.datastore import Key
from google.appengine.ext import db
from google.appengine.ext.deferred import deferred
from google.appengine.ext.deferred.deferred import PermanentTaskFailure
import python_http_client
from log_offload.log_offload import LogOffload
from typing import Optional
from mcfw.consts import MISSING
from mcfw.properties import azzert, get_members
from rogerthat.consts import OFFICIALLY_SUPPORTED_LANGUAGES, DEBUG, FAST_QUEUE
from rogerthat.utils.languages import OFFICIALLY_SUPPORTED_ISO_LANGUAGES, get_iso_lang
try:
import cPickle as pickle
except ImportError:
import pickle
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
SLOG_HEADER = "[XX-SLOGv1]"
OFFLOAD_HEADER = "[XX-OFFLOADv1]"
PRIVATIZE_LIST_KEYS = ('email_addresses', 'matched_addresses', 'categories', 'components', 'coords')
PRIVATIZE_KEYS = ('message', 'caption', 'latitude', 'longitude', 'password', 'qrcode', 'icon', 'avatar', 'chunk',
'static_flow', 'staticFlow', 'shareDescription', 'data', 'profileData', 'cursor', 'content',
'description', 'secret', 'image', 'picture', 'details')
ANONIMIZE_KEYS = ('member', 'email', 'sender', 'qualifiedIdentifier', 'qualified_identifier', 'user')
class _TLocal(threading.local):
def __init__(self):
self.parent = None
_tlocal = _TLocal()
del _TLocal
log_offload = LogOffload(offload_header=OFFLOAD_HEADER)
def foreach(func, iterable, *args, **kwargs):
for item in iterable:
func(item, *args, **kwargs)
def runeach(iterable):
foreach(lambda f: f(), iterable)
def first(func, iterable):
for item in iterable:
if func(item):
return item
return None
def circular(lst):
while True:
for item in lst:
yield item
class jsonEncoder(json.JSONEncoder):
def default(self, obj):
isa = lambda *xs: any(isinstance(obj, x) for x in xs) # shortcut
return obj.isoformat() if isa(datetime.datetime) else \
dict((p, getattr(obj, p)) for p in obj.properties()) if isa(db.Model) else \
obj.email() if isa(users.User) else \
json.JSONEncoder.default(self, obj)
def today():
now_ = now()
return now_ - (now_ % (3600 * 24))
_now_impl = lambda: int(time.time())
def now():
return _now_impl()
def months_between(d1, d2):
return (d1.year - d2.year) * 12 + d1.month - d2.month
def trace(target):
return target
from mcfw.cache import set_cache_key
def wrapper(*args, **kwargs):
from google.appengine.api import quota
start_cpu = quota.get_request_cpu_usage()
start_api = quota.get_request_api_cpu_usage()
my_parent = _tlocal.parent
start = time.time()
_tlocal.parent = start
try:
return target(*args, **kwargs)
finally:
_tlocal.parent = my_parent
end = time.time()
end_cpu = quota.get_request_cpu_usage()
end_api = quota.get_request_api_cpu_usage()
logging.info("""*** USAGE TRACING ***:
{"function": "%s.%s", "cpu": %s, "api": %s, "elapsed": %s, "start": %f, "parent": %s}""" % (
target.__module__, target.__name__,
int(round(quota.megacycles_to_cpu_seconds(end_cpu - start_cpu) * 1000)),
int(round(quota.megacycles_to_cpu_seconds(end_api - start_api) * 1000)),
int(round((end - start) * 1000)),
start, "%f" % my_parent if my_parent else "null"))
set_cache_key(wrapper, target)
if hasattr(target, "meta"):
wrapper.meta.update(target.meta)
wrapper.__name__ = target.__name__
wrapper.__module__ = target.__module__
return wrapper
def hash_user_identifier(id_):
if id_ is None:
return None
if isinstance(id_, users.User):
id_ = id_.email()
if isinstance(id_, unicode):
id_ = id_.encode('utf-8')
d = hashlib.md5()
index = 0
for ch in id_:
if ch == ":":
break
d.update(ch)
index += 1
else:
return d.hexdigest()
return d.hexdigest() + id_[index:]
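A Python 3 sketch of the colon-splitting scheme above: the part before the first `:` is MD5-hashed while any suffix (colon included) stays readable. This mirrors the char-by-char Python 2 loop with `bytes.partition`:

```python
import hashlib

def hash_user_identifier(identifier):
    # Hash everything before the first ':' and keep the remainder
    # in the clear, as the Python 2 version above does.
    if identifier is None:
        return None
    raw = identifier.encode('utf-8')
    head, sep, tail = raw.partition(b':')
    digest = hashlib.md5(head).hexdigest()
    return digest if not sep else digest + (sep + tail).decode('utf-8')
```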
def privatize(data, anonimize=False):
if isinstance(data, dict):
result = {}
for key, value in data.iteritems():
if value is MISSING:
continue
if key == "accept_missing":
continue
if isinstance(value, (list, dict)):
if key in PRIVATIZE_LIST_KEYS and value and isinstance(value, list):
value = ["***%d items***" % len(value)]
else:
value = privatize(value, anonimize)
elif value and key in PRIVATIZE_KEYS:
value = "*****" if isinstance(value, (str, unicode)) else "#####"
elif anonimize and key in ANONIMIZE_KEYS:
value = hash_user_identifier(value)
result[key] = value
return result
elif isinstance(data, list):
return [privatize(value, anonimize) for value in data]
return data
def duplicate_entity(entity, **kwargs):
clazz = entity.__class__
attributes = dict((k, v.__get__(entity, clazz)) for k, v in clazz.properties().iteritems())
attributes.update(kwargs)
return clazz(**attributes)
def ed(val):
return base64.b64encode(val, "._").replace("=", "-")
def dd(val):
return base64.b64decode(val.replace("-", "="), "._")
def urlencode(d):
if not isinstance(d, dict):
d = dict(d)
r = dict()
for k, v in d.iteritems():
if isinstance(v, unicode):
r[k] = v.encode('UTF8')
else:
r[k] = v
return urllib.urlencode(r)
def guid():
return str(uuid.uuid4())
def generate_random_key():
digester = hashlib.sha256()
for x in xrange(100):
digester.update(str(x))
digester.update(str(uuid.uuid4()))
digester.update(str(time.time()))
key = digester.hexdigest()
return key
def _send_mail(from_, email, subject, body, reply_to, html, attachments, bcc_emails=None):
emails = []
if isinstance(email, basestring):
if email.endswith('@rogerth.at'):
return
emails = [email]
else:
for e in email:
if not e.endswith('@rogerth.at'):
emails.append(e)
if emails:
_send_mail_via_sendgrid_api(from_, emails, subject, body, reply_to, html, attachments, bcc_emails=bcc_emails)
def _send_mail_via_sendgrid_api(from_, email, subject, body, reply_to, html, attachments, bcc_emails=None):
if DEBUG:
logging.info('Not sending email in debug\nFrom: %s to:%s subject: %s\nbody: %s', from_, email, subject, body)
return
import sendgrid
from sendgrid.helpers import mail as sgMail
from rogerthat.settings import get_server_settings
settings = get_server_settings()
if not settings.sendGridApiKey:
logging.error("sendGridApiKey is not set", _suppress=False)
return
sg = sendgrid.SendGridAPIClient(apikey=(settings.sendGridApiKey))
message = sgMail.Mail()
message.from_email = sgMail.Email(from_)
message.subject = subject
personalization = sgMail.Personalization()
if isinstance(email, basestring):
personalization.add_to(sgMail.Email(email))
else:
for e in email:
personalization.add_to(sgMail.Email(e))
if not bcc_emails:
bcc_emails = []
for e in bcc_emails:
personalization.add_bcc(sgMail.Email(e))
message.add_personalization(personalization)
if reply_to:
message.reply_to = sgMail.Email(reply_to)
message.add_content(sgMail.Content('text/plain', body))
if html:
message.add_content(sgMail.Content("text/html", html))
if attachments:
for attachment_name, attachment_value in attachments:
extension = attachment_name.split('.')[1]
mime_type = mail.EXTENSION_MIME_MAP.get(extension, None)
if mime_type is None:
mime_type = 'application/octet-stream'
attachment = sgMail.Attachment()
attachment.content = attachment_value
attachment.type = mime_type
attachment.filename = attachment_name
attachment.disposition = "attachment"
message.add_attachment(attachment)
if DEBUG:
logging.warn("Not sending real email via api\n%s", message.get())
return
try:
response = sg.client.mail.send.post(request_body=message.get())
except python_http_client.HTTPError as e:
logging.debug('Status code: %s', e.status_code)
logging.debug('Reason: %s', e.reason)
logging.debug('Body: %s', e.body)
logging.debug('Headers: %s', e.headers)
raise e
try:
# try/catch just to be sure the mail is not sent over and over
logging.debug('Status code: %s', response.status_code)
logging.debug('Body: %s', response.body)
logging.debug('Headers: %s', response.headers)
except:
pass
def send_mail(from_, email, subject, body, reply_to=None, html=None, attachments=None, transactional=None, bcc_emails=None):
if transactional is None:
transactional = db.is_in_transaction()
deferred.defer(_send_mail, from_, email, subject, body, reply_to, html, attachments, bcc_emails=bcc_emails,
_transactional=transactional, _queue=FAST_QUEUE)
def send_mail_via_mime(from_, to, mime, transactional=None, send_in_deferred=True):
try:
azzert(to)
except:
logging.exception('There were no recipients. Not sending out the email.', _suppress=False)
return
if transactional is None:
transactional = db.is_in_transaction()
if send_in_deferred:
deferred.defer(_send_mail_via_mime, from_, to, mime, _transactional=transactional, _queue=FAST_QUEUE)
else:
_send_mail_via_mime(from_, to, mime)
def _send_mail_via_mime(from_, to, mime):
import smtplib
from rogerthat.settings import get_server_settings
settings = get_server_settings()
mime_string = mime.as_string()
logging.info("mime_string type: %s", type(mime_string))
if DEBUG:
logging.warn("Not sending real email via mime\n%s", mime_string[:1000])
for part in mime.walk():
logging.info("part.get_content_type(): %s", part.get_content_type())
if part.get_content_type() in ('text/plain', 'text/html'):
logging.info(base64.b64decode(part.get_payload()))
return
if isinstance(to, basestring) and to.endswith('@rogerth.at'):
logging.debug("Not sending real email to rogerth.at domains")
return
if settings.dkimPrivateKey:
if from_ == settings.dashboardEmail or "<%s>" % settings.dashboardEmail in from_:
logging.info("Adding dkim signature")
try:
import dkim
signature = dkim.sign(mime_string,
'dashboard.email',
settings.dashboardEmail.split('@')[1],
settings.dkimPrivateKey,
include_headers=['To', 'From', 'Subject'])
logging.info("signature type: %s", type(signature))
mime_string = signature.encode('utf-8') + mime_string
except:
logging.exception("Could not create dkim signature!")
else:
logging.info("Skipping dkim signature because '%s' != '%s'", from_, settings.dashboardEmail)
mailserver = smtplib.SMTP_SSL(settings.smtpserverHost, int(settings.smtpserverPort))
mailserver.ehlo()
mailserver.login(settings.smtpserverLogin, settings.smtpserverPassword)
mailserver.sendmail(from_, to, mime_string)
mailserver.quit()
def xml_escape(value):
return value.replace("&", "&").replace("\"", """).replace("'", "'").replace("<", "<").replace(">",
">")
def bizz_check(condition, error_message='', error_class=None):
if not condition:
from rogerthat.rpc.service import BusinessException
if error_class is None:
error_class = BusinessException
else:
azzert(issubclass(error_class, BusinessException))
raise error_class(error_message)
def strip_weird_chars(val):
allowed = " abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'"
return "".join((l for l in val if l in allowed))
def determine_if_platform_supports_rogerthat_by_user_agent(user_agent):
user_agent = user_agent.lower()
return "android" in user_agent \
or "iphone" in user_agent \
or "ipad" in user_agent \
or "ipod" in user_agent
def get_platform_by_user_agent(user_agent):
user_agent = user_agent.lower()
if "android" in user_agent:
return "android"
if "iphone" in user_agent or "ipad" in user_agent or "ipod" in user_agent:
return "ios"
return False
def get_smartphone_install_url_by_user_agent(user_agent, app_id):
from rogerthat.dal.app import get_app_by_id
if "Android" in user_agent:
return get_app_by_id(app_id).android_market_android_uri
elif any((i in user_agent for i in ('iPhone', 'iPad', 'iPod'))):
return get_app_by_id(app_id).ios_appstore_ios_uri
return None
def safe_file_name(filename):
safe = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ.0123456789"
return str("".join((l in safe and l or "_" for l in filename)))
def parse_color(color):
color = color.lstrip('#')
if len(color) == 3:
color = color[0] * 2 + color[1] * 2 + color[2] * 2
m = re.match("^([a-fA-F0-9]{2})([a-fA-F0-9]{2})([a-fA-F0-9]{2})$", color)
if not m:
raise ValueError("%s is not a valid color." % color)
return tuple(map(lambda x: int(x, 16), m.groups()))
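A standalone sketch of the hex-color parsing above, including the 3-digit shorthand expansion (`#fa3` becomes `#ffaa33` before the six-digit match):

```python
import re

def parse_color(color):
    # '#abc' expands to 'aabbcc' before the full six-digit match.
    color = color.lstrip('#')
    if len(color) == 3:
        color = ''.join(ch * 2 for ch in color)
    m = re.match("^([a-fA-F0-9]{2})([a-fA-F0-9]{2})([a-fA-F0-9]{2})$", color)
    if not m:
        raise ValueError("%s is not a valid color." % color)
    return tuple(int(g, 16) for g in m.groups())
```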
def slog(msg_=None, email_=None, function_=None, **kwargs):
seconds = time.time()
t = time.strftime('%a, %d %b %Y %H:%M:%S.%%03d', time.gmtime(seconds)) % ((seconds % 1) * 1000)
d = {
'T': t,
't': seconds,
'_c': os.environ.get('HTTP_X_APPENGINE_COUNTRY', '<unknown>'),
'_i': os.environ.get('INSTANCE_ID', '<unknown>'),
'_dc': os.environ.get('DATACENTER', '<unknown>'),
'_t': threading.current_thread().ident
}
if msg_:
d['m'] = msg_
if email_:
d['e'] = email_
# Source: samrushing/caesure
# -*- Mode: Python -*-
import os
import struct
from pprint import pprint as pp
import sys
from caesure.script import pprint_script, OPCODES, parse_script, is_unspendable, VerifyError
from caesure.block_db import BlockDB
from caesure.bitcoin import *
from caesure.txfaa import UTXO_Map, UTXO_Scan_Map
import coro
from coro.log import Facility
LOG = Facility ('ledger')
class RecentBlocks:
def __init__ (self, ledger, db):
self.db = db
# we always begin with one tip.
self.blocks = {ledger.block_name : ledger}
# these will be recomputed upon the call to self.new_block()
self.root = set([ledger])
self.leaves = set([ledger])
# we keep a horizon of this many blocks back from the tip.
self.horizon = 20
self.highest = 0
def new_block (self, block, verify=False):
from __main__ import G
tip = None
for name, lx in self.blocks.iteritems():
if block.prev_block == lx.block_name:
tip = lx
break
if tip is None:
height = block.get_height()
if height > self.highest:
# I think this happens when the hoover delivers blocks out of order.
# we know the previous block is in the database...
self.new_block (G.block_db[block.prev_block], verify)
self.new_block (block, verify)
elif height <= (self.highest - self.horizon):
LOG ('recent', 'stale', height, str(block.name), str(block.prev_block))
else:
LOG ('recent', 'nochain', height, str(block.name), str(block.prev_block))
else:
t0 = timer()
self.blocks[block.name] = tip.extend (block, tip.height + 1, verify)
LOG ('extend', t0.end())
if len(self.blocks) > 2:
# otherwise we are in 'catch up' mode.
self.trim()
def find_lowest_common_ancestor (self, leaves, db):
# find the lowest common ancestor of <leaves>.
# http://en.wikipedia.org/wiki/Lowest_common_ancestor
# aka MRCA 'most recent common ancestor'.
search = leaves[:]
while 1:
if len(search) == 1:
# we're done.
break
else:
# find the highest leaf.
search.sort()
# scoot it back by one level.
h, name = search[-1]
scoot = (h-1, db.prev[name])
if scoot in search:
# we found a common ancestor
del search[-1]
else:
search[-1] = scoot
return search[0][1]
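The scoot-back search above can be sketched as a standalone function, assuming each leaf is a `(height, name)` pair and `prev` maps a block name to its parent: the highest leaf is repeatedly moved back one level, and search paths merge when they meet, until one common ancestor remains.

```python
def lowest_common_ancestor(leaves, prev):
    search = list(leaves)
    while len(search) > 1:
        search.sort()
        h, name = search[-1]          # the highest remaining leaf
        scoot = (h - 1, prev[name])   # step it back one level
        if scoot in search:
            del search[-1]            # two paths merged
        else:
            search[-1] = scoot
    return search[0][1]
```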
def trim (self):
# this is more complex than I would like, but it solves a difficult problem:
# we need to trim the set of recent blocks back to our horizon, *except* in
# the case where the most recent common ancestor is *outside* the horizon.
from __main__ import G
db = G.block_db
# get them sorted by height
blocks = [(lx.height, lx) for lx in self.blocks.values()]
blocks.sort()
self.highest = blocks[-1][0]
# --- identify leaves within our horizon ---
# note: we can't use db.next[name] to identify leaves because
# the db is often past our ledger on startup, and leaves in
# self.blocks can have children in the db.
cutoff = self.highest - self.horizon
names = set (self.blocks.keys())
prevs = set ([db.prev[lx.block_name] for lx in self.blocks.values()])
leaves = names.difference (prevs)
leaves = [self.blocks[name] for name in leaves]
# only those leaves within our horizon...
leaves = [(lx.height, lx.block_name) for lx in leaves if lx.height >= cutoff]
leaves.sort()
lca = self.find_lowest_common_ancestor (leaves, db)
lca = self.blocks[lca]
if lca.height < cutoff:
# if the lca is behind the horizon, we must keep it.
cutoff = lca.height
self.root = lca
LOG ('ancestor cutoff', repr(lca.block_name))
else:
# lca is inside the horizon: crawl back til we hit the cutoff.
root = lca
while root.height > cutoff:
prev = db.prev[root.block_name]
if self.blocks.has_key (prev):
root = self.blocks[prev]
else:
# we are building the ledger and don't have horizon nodes yet.
break
self.root = root
# perform the trim, identify root and leaves.
for h, lx in blocks:
if h < cutoff:
del self.blocks[lx.block_name]
self.leaves = set (self.blocks[x[1]] for x in leaves)
def save_ledger_thread (self):
while 1:
# roughly once an hour, flush the oldest recent block's ledger.
coro.sleep_relative (67 * 60)
self.root.save_state()
class LedgerState:
save_path = 'utxo.bin'
do_yields = True
def __init__ (self, load=False):
self.outpoints = UTXO_Map()
self.block_name = ZERO_NAME
self.height = -1
self.total = 0
self.lost = 0
self.fees = 0
if load:
from __main__ import G
save_path = os.path.join (G.args.base, self.save_path)
self.load_state (save_path)
def clone (self):
ob = LedgerState()
ob.block_name = self.block_name
ob.height = self.height
ob.total = self.total
ob.lost = self.lost
ob.fees = self.fees
ob.outpoints = self.outpoints.copy()
return ob
def extend (self, block, height, verify=True):
ob = self.clone()
ob.feed_block (block, height, verify)
return ob
def get_total_outpoints (self):
total = 0
for k, v in self.outpoints:
total += len(v)
return total
cache_version = 3
def save_state (self):
from coro.asn1.data_file import DataFileWriter
from __main__ import G
save_path = os.path.join (G.args.base, self.save_path)
f = open (save_path + '.tmp', 'wb')
df = DataFileWriter (f)
t0 = timer()
df.write_object ([
self.cache_version,
self.height,
str(self.block_name),
self.total,
self.lost,
self.fees,
len(self.outpoints)
])
n = 0
for item in self.outpoints:
df.write_object (item)
n += 1
if n % 1000 == 999:
coro.yield_slice()
f.close()
os.rename (save_path + '.tmp', save_path)
LOG ('saved outpoints', len(self.outpoints), n, t0.end())
def load_state (self, path=None):
from coro.asn1.data_file import DataFileReader
from __main__ import G
if path is None:
path = os.path.join (G.args.base, self.save_path)
LOG ('cache', 'start')
t0 = timer()
try:
f = open (path, 'rb')
df = DataFileReader (f)
info = df.read_object()
if info[0] < self.cache_version:
LOG ('old cache version, ignoring')
return
assert (info[0] == self.cache_version) # version
[_, self.height, self.block_name, self.total, self.lost, self.fees, size] = info
LOG ('cache', self.height, size)
self.block_name = Name (self.block_name)
n = [0]
df.next = df.read_object
self.outpoints.build (df, size)
f.close()
LOG ('cache', 'stop', len(self.outpoints), n[0])
LOG ('cache', self.height, repr(self.block_name))
except IOError:
pass
LOG ('cache', 'stop', t0.end())
def store_outputs (self, tx):
output_sum = 0
outputs = []
for i, (amt, lock_script) in enumerate (tx.outputs):
#if len(lock_script) > 500:
# W ('%r len(script) = %d\n' % (tx.name, len(lock_script)))
if not is_unspendable (lock_script):
outputs.append ((i, amt, lock_script))
output_sum += amt
self.outpoints.new_entry (str(tx.name), outputs)
self.total += output_sum
return output_sum
def get_utxo (self, name, index):
return self.outpoints.get_utxo (name, index)
def feed_tx (self, index, tx, timestamp, verify=False):
input_sum = 0
for j in range (len (tx.inputs)):
(outpoint, index), script, sequence = tx.inputs[j]
outstr = str(outpoint)
amt, lock_script = self.outpoints.pop_utxo (outstr, index)
if verify:
try:
tx.verify (j, lock_script, timestamp)
except VerifyError:
self.outpoints.new_entry (outstr, [(index, amt, lock_script)])
raise
input_sum += amt
if self.do_yields and j % 20 == 19:
coro.yield_slice()
output_sum = self.store_outputs (tx)
return input_sum, output_sum
def feed_block (self, b, height, verify=False):
if b.prev_block != self.block_name:
raise ValueError (b.prev_block, self.block_name)
# assume coinbase is ok for now
tx0 = b.transactions[0]
reward0 = self.store_outputs (tx0)
fees = 0
for i, tx in enumerate (b.transactions):
if i == 0:
continue
input_sum, output_sum = self.feed_tx (i, tx, b.timestamp, verify)
fees += input_sum - output_sum
self.total -= input_sum
self.fees += fees
reward1 = compute_reward (height)
if reward1 + fees != reward0:
lost = (reward1 + fees) - reward0
#W ('reward mismatch height=%d lost=%s\n' % (height, lost))
self.lost += lost
self.height = height
self.block_name = b.name
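The `reward1 + fees != reward0` check above compares the coinbase payout against the subsidy schedule plus collected fees. A sketch of a `compute_reward` consistent with Bitcoin's halving schedule (50 BTC in satoshis, halved every 210000 blocks; parameter names are illustrative, not the caesure originals):

```python
def compute_reward(height, subsidy=50 * 10**8, interval=210000):
    # The subsidy halves every `interval` blocks (integer right shift).
    return subsidy >> (height // interval)
```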
# XXX a version of this that feeds all blocks, not just the main chain.
# hmmmm how about one that feeds them in timestamp order!
def catch_up (G):
db = G.block_db
def get_names():
"get the chain of all block names, ignoring forks"
if not db.num_block:
return []
else:
names = list (db.num_block[db.last_block])
# XXX handle this case
assert (len(names) == 1)
b = db[names[0]]
r = []
name = b.name
while 1:
r.append(name)
name = db.prev[name]
if name == ZERO_NAME:
break
r.reverse()
return r
ledger = LedgerState (load=True)
if len(ledger.outpoints) == 0:
LOG ('no cache')
ledger.outpoints = UTXO_Scan_Map()
fast_scan = True
else:
fast_scan = False
t0 = timer()
names = get_names()
#if fast_scan:
# # TRIM FOR TESTING ONLY
# names = names[:225430]
# drop back by a 20-block horizon
most_names = names[:-20]
i = 0
fed = 0
for name in most_names:
if i == ledger.height + 1:
if i % 1000 == 0:
LOG ('scan', i)
block = db[name]
ledger.feed_block (block, i)
fed += 1
elif i <= ledger.height:
pass
else:
LOG ('block too high?')
import pdb; pdb.set_trace()
i += 1
coro.yield_slice()
LOG ('total/lost/fees', ledger.total, ledger.lost, ledger.fees)
LOG ('scan', t0.end(), fed)
if fed > 150:
LOG ('saving', repr(ledger.block_name))
ledger.save_state()
if fast_scan:
LOG ('fast scan done, reloading')
ledger.outpoints = None
ledger.outpoints = UTXO_Map()
ledger.load_state()
LOG ('topping off recent blocks')
G.recent_blocks = RecentBlocks (ledger, | |
54 -9.87 55 -10.91 56 -9.91
ABUNDANCE CHANGE 57 -10.87 58 -10.46 59 -11.33 60 -10.54 61 -20.00 62 -11.03
ABUNDANCE CHANGE 63 -11.53 64 -10.92 65 -11.69 66 -10.90 67 -11.78 68 -11.11
ABUNDANCE CHANGE 69 -12.04 70 -10.96 71 -11.98 72 -11.16 73 -12.17 74 -10.93
ABUNDANCE CHANGE 75 -11.76 76 -10.59 77 -10.69 78 -10.24 79 -11.03 80 -10.91
ABUNDANCE CHANGE 81 -11.14 82 -10.09 83 -11.33 84 -20.00 85 -20.00 86 -20.00
ABUNDANCE CHANGE 87 -20.00 88 -20.00 89 -20.00 90 -11.95 91 -20.00 92 -12.54
ABUNDANCE CHANGE 93 -20.00 94 -20.00 95 -20.00 96 -20.00 97 -20.00 98 -20.00
ABUNDANCE CHANGE 99 -20.00
READ DECK6 72 RHOX,T,P,XNE,ABROSS,ACCRAD,VTURB, FLXCNV,VCONV,VELSND
7.27209007E-05 5007.4 7.272E-01 4.129E+09 1.834E-03 3.691E-01 2.000E+05 0.000E+00 0.000E+00 1.749E+06
9.69853593E-05 5039.4 9.698E-01 5.279E+09 1.832E-03 3.630E-01 2.000E+05 0.000E+00 0.000E+00 1.572E+06
1.29460866E-04 5068.0 1.295E+00 6.674E+09 1.820E-03 3.543E-01 2.000E+05 0.000E+00 0.000E+00 1.423E+06
1.72666781E-04 5100.3 1.727E+00 8.521E+09 1.841E-03 3.448E-01 2.000E+05 0.000E+00 0.000E+00 1.294E+06
2.29253803E-04 5134.7 2.292E+00 1.091E+10 1.889E-03 3.348E-01 2.000E+05 0.000E+00 0.000E+00 1.185E+06
3.02304168E-04 5170.2 3.023E+00 1.395E+10 1.963E-03 3.248E-01 2.000E+05 0.000E+00 0.000E+00 1.095E+06
3.95488057E-04 5206.2 3.955E+00 1.776E+10 2.063E-03 3.155E-01 2.000E+05 0.000E+00 0.000E+00 1.022E+06
5.13074943E-04 5242.2 5.130E+00 2.249E+10 2.191E-03 3.068E-01 2.000E+05 0.000E+00 0.000E+00 9.633E+05
6.59988021E-04 5277.9 6.599E+00 2.830E+10 2.349E-03 2.987E-01 2.000E+05 0.000E+00 0.000E+00 9.162E+05
8.41931209E-04 5313.4 8.419E+00 3.537E+10 2.540E-03 2.912E-01 2.000E+05 0.000E+00 0.000E+00 8.787E+05
1.06545373E-03 5348.3 1.065E+01 4.392E+10 2.766E-03 2.842E-01 2.000E+05 0.000E+00 0.000E+00 8.490E+05
1.33821562E-03 5382.5 1.338E+01 5.414E+10 3.031E-03 2.783E-01 2.000E+05 0.000E+00 0.000E+00 8.256E+05
1.66939895E-03 5415.3 1.669E+01 6.620E+10 3.335E-03 2.732E-01 2.000E+05 0.000E+00 0.000E+00 8.073E+05
2.07020084E-03 5446.8 2.070E+01 8.031E+10 3.680E-03 2.692E-01 2.000E+05 0.000E+00 0.000E+00 7.930E+05
2.55393025E-03 5477.0 2.554E+01 9.677E+10 4.071E-03 2.667E-01 2.000E+05 0.000E+00 0.000E+00 7.820E+05
3.13619775E-03 5506.3 3.136E+01 1.159E+11 4.516E-03 2.658E-01 2.000E+05 0.000E+00 0.000E+00 7.734E+05
3.83517766E-03 5534.9 3.835E+01 1.382E+11 5.024E-03 2.670E-01 2.000E+05 0.000E+00 0.000E+00 7.670E+05
4.67157076E-03 5563.3 4.671E+01 1.643E+11 5.608E-03 2.697E-01 2.000E+05 0.000E+00 0.000E+00 7.621E+05
5.66872851E-03 5591.9 5.668E+01 1.949E+11 6.285E-03 2.736E-01 2.000E+05 0.000E+00 0.000E+00 7.585E+05
6.85294938E-03 5621.1 6.853E+01 2.309E+11 7.070E-03 2.787E-01 2.000E+05 0.000E+00 0.000E+00 7.560E+05
8.25388203E-03 5651.1 8.253E+01 2.734E+11 7.985E-03 2.848E-01 2.000E+05 0.000E+00 0.000E+00 7.544E+05
9.90513430E-03 5681.9 9.905E+01 3.235E+11 9.048E-03 2.920E-01 2.000E+05 0.000E+00 0.000E+00 7.534E+05
1.18453048E-02 5713.5 1.184E+02 3.825E+11 1.028E-02 3.009E-01 2.000E+05 0.000E+00 0.000E+00 7.530E+05
1.41179761E-02 5746.2 1.412E+02 4.523E+11 1.172E-02 3.110E-01 2.000E+05 0.000E+00 0.000E+00 7.530E+05
1.67741930E-02 5779.4 1.677E+02 5.343E+11 1.339E-02 3.225E-01 2.000E+05 0.000E+00 0.000E+00 7.534E+05
1.98734263E-02 5813.4 1.987E+02 6.308E+11 1.531E-02 3.356E-01 2.000E+05 0.000E+00 0.000E+00 7.542E+05
2.34834862E-02 5848.1 2.348E+02 7.444E+11 1.754E-02 3.501E-01 2.000E+05 0.000E+00 0.000E+00 7.551E+05
2.76846289E-02 5883.3 2.768E+02 8.774E+11 2.010E-02 3.660E-01 2.000E+05 0.000E+00 0.000E+00 7.563E+05
3.25721679E-02 5918.7 3.257E+02 1.033E+12 2.305E-02 3.837E-01 2.000E+05 0.000E+00 0.000E+00 7.577E+05
3.82613628E-02 5953.8 3.826E+02 1.213E+12 2.638E-02 4.036E-01 2.000E+05 0.000E+00 0.000E+00 7.593E+05
4.48923738E-02 5988.5 4.489E+02 1.420E+12 3.016E-02 4.256E-01 2.000E+05 0.000E+00 0.000E+00 7.610E+05
5.26369520E-02 6022.0 5.263E+02 1.657E+12 3.439E-02 4.497E-01 2.000E+05 0.000E+00 0.000E+00 7.629E+05
6.17070246E-02 6054.5 6.170E+02 1.928E+12 3.911E-02 4.768E-01 2.000E+05 0.000E+00 0.000E+00 7.649E+05
7.23599576E-02 6085.4 7.236E+02 2.233E+12 4.434E-02 5.071E-01 2.000E+05 0.000E+00 0.000E+00 7.670E+05
8.49054633E-02 6115.3 8.490E+02 2.581E+12 5.016E-02 5.413E-01 2.000E+05 0.000E+00 0.000E+00 7.692E+05
9.97090792E-02 6144.3 9.970E+02 2.976E+12 5.664E-02 5.802E-01 2.000E+05 0.000E+00 0.000E+00 7.715E+05
1.17195944E-01 6172.8 1.172E+03 3.428E+12 6.393E-02 6.252E-01 2.000E+05 0.000E+00 0.000E+00 7.739E+05
1.37856287E-01 6201.3 1.378E+03 3.948E+12 7.217E-02 6.770E-01 2.000E+05 0.000E+00 0.000E+00 7.764E+05
1.62246643E-01 6230.5 1.622E+03 4.552E+12 8.158E-02 7.366E-01 2.000E+05 0.000E+00 0.000E+00 7.789E+05
1.90998238E-01 6260.8 1.910E+03 5.257E+12 9.237E-02 8.056E-01 2.000E+05 0.000E+00 0.000E+00 7.815E+05
2.24816533E-01 6292.7 2.248E+03 6.089E+12 1.049E-01 8.864E-01 2.000E+05 0.000E+00 0.000E+00 7.841E+05
2.64452051E-01 6327.2 2.644E+03 7.081E+12 1.196E-01 9.822E-01 2.000E+05 0.000E+00 0.000E+00 7.867E+05
3.10706861E-01 6365.0 3.107E+03 8.278E+12 1.370E-01 1.097E+00 2.000E+05 0.000E+00 0.000E+00 7.905E+05
3.64337803E-01 6407.9 3.643E+03 9.756E+12 1.582E-01 1.240E+00 2.000E+05 0.000E+00 0.000E+00 7.919E+05
4.25989413E-01 6457.1 4.260E+03 1.161E+13 1.843E-01 1.418E+00 2.000E+05 0.000E+00 0.000E+00 7.945E+05
4.96125820E-01 6514.7 4.961E+03 1.398E+13 2.174E-01 1.646E+00 2.000E+05 0.000E+00 0.000E+00 7.971E+05
5.74854876E-01 6582.9 5.748E+03 1.711E+13 2.602E-01 1.944E+00 2.000E+05 0.000E+00 0.000E+00 7.996E+05
6.61731226E-01 6665.0 6.617E+03 2.133E+13 3.176E-01 2.347E+00 2.000E+05 0.000E+00 0.000E+00 8.021E+05
7.55580967E-01 6764.1 7.555E+03 2.719E+13 3.967E-01 2.908E+00 2.000E+05 0.000E+00 0.000E+00 8.044E+05
8.54437102E-01 6884.2 8.543E+03 3.557E+13 5.093E-01 3.715E+00 2.000E+05 0.000E+00 0.000E+00 8.065E+05
9.55525844E-01 7029.5 9.553E+03 4.787E+13 6.752E-01 4.917E+00 2.000E+05 0.000E+00 0.000E+00 8.084E+05
1.05522833E+00 7206.0 1.055E+04 6.652E+13 9.319E-01 6.797E+00 2.000E+05 2.681E-11 2.685E+02 8.103E+05
1.14944220E+00 7418.8 1.149E+04 9.550E+13 1.346E+00 9.869E+00 2.000E+05 2.854E-09 1.211E+03 8.125E+05
1.23419909E+00 7674.8 1.234E+04 1.417E+14 2.050E+00 1.516E+01 2.000E+05 1.048E-07 3.824E+03 8.160E+05
1.30607438E+00 7982.9 1.306E+04 2.175E+14 3.324E+00 2.482E+01 2.000E+05 4.384E-06 1.257E+04 8.222E+05
1.36374533E+00 8339.6 1.363E+04 3.392E+14 5.669E+00 4.265E+01 2.000E+05 2.571E-04 4.631E+04 8.332E+05
1.40645696E+00 8792.0 1.405E+04 5.556E+14 1.071E+01 8.035E+01 2.000E+05 3.008E-03 9.966E+04 8.536E+05
1.43547307E+00 9321.3 1.434E+04 9.105E+14 2.105E+01 1.436E+02 2.000E+05 5.469E-02 2.509E+05 8.874E+05
1.45706948E+00 9752.6 1.455E+04 1.285E+15 3.402E+01 1.755E+02 2.000E+05 2.836E-01 4.392E+05 9.233E+05
1.47677296E+00 10065.1 1.475E+04 1.601E+15 4.626E+01 1.672E+02 2.000E+05 4.753E-01 5.256E+05 9.543E+05
1.49691825E+00 10335.8 1.495E+04 1.898E+15 5.836E+01 1.497E+02 2.000E+05 6.128E-01 5.745E+05 9.844E+05
1.51877054E+00 10578.1 1.516E+04 2.177E+15 7.023E+01 1.346E+02 2.000E+05 7.097E-01 6.090E+05 1.014E+06
1.54324232E+00 10825.4 1.540E+04 2.468E+15 8.290E+01 1.267E+02 2.000E+05 7.792E-01 6.438E+05 1.046E+06
1.57116973E+00 11079.0 1.568E+04 2.765E+15 9.578E+01 1.163E+02 2.000E+05 8.127E-01 6.485E+05 1.080E+06
1.60392931E+00 11333.0 1.600E+04 3.056E+15 1.074E+02 1.218E+02 2.000E+05 8.393E-01 6.856E+05 1.116E+06
1.64309916E+00 11674.5 1.639E+04 3.409E+15 1.191E+02 1.291E+02 2.000E+05 8.446E-01 7.039E+05 1.166E+06
1.69116207E+00 12025.0 1.686E+04 3.731E+15 1.268E+02 1.327E+02 2.000E+05 8.278E-01 6.893E+05 1.217E+06
1.75249358E+00 12508.9 1.747E+04 4.072E+15 1.292E+02 2.454E+02 2.000E+05 7.930E-01 8.198E+05 1.284E+06
1.84812413E+00 14008.4 1.839E+04 4.352E+15 9.155E+01 4.167E+02 2.000E+05 1.190E-01 4.460E+05 1.475E+06
2.07967379E+00 16763.5 2.063E+04 4.271E+15 3.603E+01 2.454E+02 2.000E+05 6.915E-05 4.664E+04 1.695E+06
2.70374992E+00 18924.8 2.674E+04 5.011E+15 2.597E+01 1.712E+02 2.000E+05 1.712E-06 1.284E+04 1.812E+06
3.74103117E+00 21064.2 3.695E+04 6.295E+15 2.226E+01 1.480E+02 2.000E+05 6.428E-08 4.476E+03 2.014E+06
PRADK 4.8309E+00
TEFF 8000. GRAVITY 3.50000 LTE
TITLE [-1.0] VTURB=2 L/H=1.25 NOVER NEW ODF
OPACITY IFOP 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 0 0 0 0
CONVECTION ON 1.25 TURBULENCE OFF 0.00 0.00 0.00 0.00
ABUNDANCE SCALE 0.10000 ABUNDANCE CHANGE 1 0.92140 2 0.07843
ABUNDANCE CHANGE 3 -10.94 4 -10.64 5 -9.49 6 -3.52 7 -4.12 8 -3.21
ABUNDANCE CHANGE 9 -7.48 10 -3.96 11 -5.71 12 -4.46 13 -5.57 14 -4.49
ABUNDANCE CHANGE 15 -6.59 16 -4.71 17 -6.54 18 -5.64 19 -6.92 20 -5.68
ABUNDANCE CHANGE 21 -8.87 22 -7.02 23 -8.04 24 -6.37 25 -6.65 26 -4.54
ABUNDANCE CHANGE 27 -7.12 28 -5.79 29 -7.83 30 -7.44 31 -9.16 32 -8.63
ABUNDANCE CHANGE 33 -9.67 34 -8.63 35 -9.41 36 -8.73 37 -9.44 38 -9.07
ABUNDANCE CHANGE 39 -9.80 40 -9.44 41 -10.62 42 -10.12 43 -20.00 44 -10.20
ABUNDANCE CHANGE 45 -10.92 46 -10.35 47 -11.10 48 -10.27 49 -10.38 50 -10.04
ABUNDANCE CHANGE 51 -11.04 52 -9.80 53 -10.53 54 -9.87 55 -10.91 56 -9.91
ABUNDANCE CHANGE 57 -10.87 58 -10.46 59 -11.33 60 -10.54 61 -20.00 62 -11.03
ABUNDANCE CHANGE 63 -11.53 64 -10.92 65 -11.69 66 -10.90 67 -11.78 68 -11.11
ABUNDANCE CHANGE 69 -12.04 70 -10.96 71 -11.98 72 -11.16 73 -12.17 74 -10.93
ABUNDANCE CHANGE 75 -11.76 76 -10.59 77 -10.69 78 -10.24 79 -11.03 80 -10.91
ABUNDANCE CHANGE 81 -11.14 82 -10.09 83 -11.33 84 -20.00 85 -20.00 86 -20.00
ABUNDANCE CHANGE 87 -20.00 88 -20.00 89 -20.00 90 -11.95 91 -20.00 92 -12.54
ABUNDANCE CHANGE 93 -20.00 94 -20.00 95 -20.00 96 -20.00 97 -20.00 98 -20.00
ABUNDANCE CHANGE 99 -20.00
READ DECK6 72 RHOX,T,P,XNE,ABROSS,ACCRAD,VTURB, FLXCNV,VCONV,VELSND
3.05518751E-05 5022.9 9.660E-02 1.565E+09 4.365E-03 5.149E-01 2.000E+05 0.000E+00 0.000E+00 3.534E+06
4.09612058E-05 5049.5 1.295E-01 1.973E+09 4.181E-03 5.095E-01 2.000E+05 0.000E+00 0.000E+00 3.141E+06
5.55515909E-05 5073.0 1.756E-01 2.475E+09 3.949E-03 5.010E-01 2.000E+05 0.000E+00 0.000E+00 2.792E+06
7.59683841E-05 5102.8 2.402E-01 3.177E+09 3.798E-03 4.910E-01 2.000E+05 0.000E+00 0.000E+00 2.468E+06
1.04137658E-04 5136.7 3.293E-01 4.128E+09 3.703E-03 4.791E-01 2.000E+05 0.000E+00 0.000E+00 2.179E+06
1.42389041E-04 5174.1 4.502E-01 5.405E+09 3.660E-03 4.658E-01 2.000E+05 0.000E+00 0.000E+00 1.927E+06
1.93662757E-04 5213.9 6.123E-01 7.097E+09 3.662E-03 4.513E-01 2.000E+05 0.000E+00 0.000E+00 1.713E+06
2.61633472E-04 5255.1 8.272E-01 9.310E+09 3.701E-03 4.365E-01 2.000E+05 0.000E+00 0.000E+00 1.532E+06
3.50875279E-04 5297.2 1.109E+00 1.217E+10 3.775E-03 4.220E-01 2.000E+05 0.000E+00 0.000E+00 1.381E+06
4.67043888E-04 5339.5 1.477E+00 1.584E+10 3.883E-03 4.084E-01 2.000E+05 0.000E+00 0.000E+00 1.255E+06
6.17064538E-04 5381.9 1.951E+00 2.049E+10 4.024E-03 3.957E-01 2.000E+05 0.000E+00 0.000E+00 1.152E+06
8.09285010E-04 5424.0 2.559E+00 2.635E+10 4.204E-03 3.840E-01 2.000E+05 0.000E+00 0.000E+00 1.068E+06
1.05370684E-03 5465.5 3.332E+00 3.365E+10 4.424E-03 3.730E-01 2.000E+05 0.000E+00 0.000E+00 9.993E+05
1.36243351E-03 5506.0 4.308E+00 4.263E+10 4.684E-03 3.633E-01 2.000E+05 0.000E+00 0.000E+00 9.439E+05
1.75018440E-03 5545.0 5.534E+00 5.355E+10 4.985E-03 3.550E-01 2.000E+05 0.000E+00 0.000E+00 8.995E+05
2.23491329E-03 5582.4 7.067E+00 6.669E+10 5.329E-03 3.487E-01 2.000E+05 0.000E+00 0.000E+00 8.641E+05
2.83811265E-03 5618.3 8.974E+00 8.240E+10
# coding: utf-8
# Copyright (c) 2016, 2022, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
from oci.util import formatted_flat_dict, NONE_SENTINEL, value_allowed_none_or_none_sentinel # noqa: F401
from oci.decorators import init_model_state_from_kwargs
@init_model_state_from_kwargs
class RecommendationDetails(object):
"""
Details of a recommendation.
"""
#: A constant which can be used with the recommendation_type property of a RecommendationDetails.
#: This constant has a value of "LINK_GLOSSARY_TERM"
RECOMMENDATION_TYPE_LINK_GLOSSARY_TERM = "LINK_GLOSSARY_TERM"
#: A constant which can be used with the recommendation_status property of a RecommendationDetails.
#: This constant has a value of "ACCEPTED"
RECOMMENDATION_STATUS_ACCEPTED = "ACCEPTED"
#: A constant which can be used with the recommendation_status property of a RecommendationDetails.
#: This constant has a value of "REJECTED"
RECOMMENDATION_STATUS_REJECTED = "REJECTED"
#: A constant which can be used with the recommendation_status property of a RecommendationDetails.
#: This constant has a value of "INFERRED"
RECOMMENDATION_STATUS_INFERRED = "INFERRED"
#: A constant which can be used with the source_object_type property of a RecommendationDetails.
#: This constant has a value of "DATA_ENTITY"
SOURCE_OBJECT_TYPE_DATA_ENTITY = "DATA_ENTITY"
#: A constant which can be used with the source_object_type property of a RecommendationDetails.
#: This constant has a value of "ATTRIBUTE"
SOURCE_OBJECT_TYPE_ATTRIBUTE = "ATTRIBUTE"
#: A constant which can be used with the source_object_type property of a RecommendationDetails.
#: This constant has a value of "TERM"
SOURCE_OBJECT_TYPE_TERM = "TERM"
#: A constant which can be used with the source_object_type property of a RecommendationDetails.
#: This constant has a value of "CATEGORY"
SOURCE_OBJECT_TYPE_CATEGORY = "CATEGORY"
#: A constant which can be used with the target_object_type property of a RecommendationDetails.
#: This constant has a value of "DATA_ENTITY"
TARGET_OBJECT_TYPE_DATA_ENTITY = "DATA_ENTITY"
#: A constant which can be used with the target_object_type property of a RecommendationDetails.
#: This constant has a value of "ATTRIBUTE"
TARGET_OBJECT_TYPE_ATTRIBUTE = "ATTRIBUTE"
#: A constant which can be used with the target_object_type property of a RecommendationDetails.
#: This constant has a value of "TERM"
TARGET_OBJECT_TYPE_TERM = "TERM"
#: A constant which can be used with the target_object_type property of a RecommendationDetails.
#: This constant has a value of "CATEGORY"
TARGET_OBJECT_TYPE_CATEGORY = "CATEGORY"
def __init__(self, **kwargs):
"""
Initializes a new RecommendationDetails object with values from keyword arguments.
The following keyword arguments are supported (corresponding to the getters/setters of this class):
:param recommendation_key:
The value to assign to the recommendation_key property of this RecommendationDetails.
:type recommendation_key: str
:param recommendation_type:
The value to assign to the recommendation_type property of this RecommendationDetails.
Allowed values for this property are: "LINK_GLOSSARY_TERM", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:type recommendation_type: str
:param recommendation_status:
The value to assign to the recommendation_status property of this RecommendationDetails.
Allowed values for this property are: "ACCEPTED", "REJECTED", "INFERRED", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:type recommendation_status: str
:param confidence_score:
The value to assign to the confidence_score property of this RecommendationDetails.
:type confidence_score: float
:param source_object_key:
The value to assign to the source_object_key property of this RecommendationDetails.
:type source_object_key: str
:param source_object_name:
The value to assign to the source_object_name property of this RecommendationDetails.
:type source_object_name: str
:param source_object_type:
The value to assign to the source_object_type property of this RecommendationDetails.
Allowed values for this property are: "DATA_ENTITY", "ATTRIBUTE", "TERM", "CATEGORY", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:type source_object_type: str
:param target_object_key:
The value to assign to the target_object_key property of this RecommendationDetails.
:type target_object_key: str
:param target_object_name:
The value to assign to the target_object_name property of this RecommendationDetails.
:type target_object_name: str
:param target_object_type:
The value to assign to the target_object_type property of this RecommendationDetails.
Allowed values for this property are: "DATA_ENTITY", "ATTRIBUTE", "TERM", "CATEGORY", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:type target_object_type: str
:param properties:
The value to assign to the properties property of this RecommendationDetails.
:type properties: dict(str, dict(str, str))
"""
self.swagger_types = {
'recommendation_key': 'str',
'recommendation_type': 'str',
'recommendation_status': 'str',
'confidence_score': 'float',
'source_object_key': 'str',
'source_object_name': 'str',
'source_object_type': 'str',
'target_object_key': 'str',
'target_object_name': 'str',
'target_object_type': 'str',
'properties': 'dict(str, dict(str, str))'
}
self.attribute_map = {
'recommendation_key': 'recommendationKey',
'recommendation_type': 'recommendationType',
'recommendation_status': 'recommendationStatus',
'confidence_score': 'confidenceScore',
'source_object_key': 'sourceObjectKey',
'source_object_name': 'sourceObjectName',
'source_object_type': 'sourceObjectType',
'target_object_key': 'targetObjectKey',
'target_object_name': 'targetObjectName',
'target_object_type': 'targetObjectType',
'properties': 'properties'
}
self._recommendation_key = None
self._recommendation_type = None
self._recommendation_status = None
self._confidence_score = None
self._source_object_key = None
self._source_object_name = None
self._source_object_type = None
self._target_object_key = None
self._target_object_name = None
self._target_object_type = None
self._properties = None
@property
def recommendation_key(self):
"""
**[Required]** Gets the recommendation_key of this RecommendationDetails.
Unique identifier of the recommendation.
:return: The recommendation_key of this RecommendationDetails.
:rtype: str
"""
return self._recommendation_key
@recommendation_key.setter
def recommendation_key(self, recommendation_key):
"""
Sets the recommendation_key of this RecommendationDetails.
Unique identifier of the recommendation.
:param recommendation_key: The recommendation_key of this RecommendationDetails.
:type: str
"""
self._recommendation_key = recommendation_key
@property
def recommendation_type(self):
"""
**[Required]** Gets the recommendation_type of this RecommendationDetails.
Type of recommendation.
Allowed values for this property are: "LINK_GLOSSARY_TERM", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:return: The recommendation_type of this RecommendationDetails.
:rtype: str
"""
return self._recommendation_type
@recommendation_type.setter
def recommendation_type(self, recommendation_type):
"""
Sets the recommendation_type of this RecommendationDetails.
Type of recommendation.
:param recommendation_type: The recommendation_type of this RecommendationDetails.
:type: str
"""
allowed_values = ["LINK_GLOSSARY_TERM"]
if not value_allowed_none_or_none_sentinel(recommendation_type, allowed_values):
recommendation_type = 'UNKNOWN_ENUM_VALUE'
self._recommendation_type = recommendation_type
@property
def recommendation_status(self):
"""
**[Required]** Gets the recommendation_status of this RecommendationDetails.
Status of a recommendation.
Allowed values for this property are: "ACCEPTED", "REJECTED", "INFERRED", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:return: The recommendation_status of this RecommendationDetails.
:rtype: str
"""
return self._recommendation_status
@recommendation_status.setter
def recommendation_status(self, recommendation_status):
"""
Sets the recommendation_status of this RecommendationDetails.
Status of a recommendation.
:param recommendation_status: The recommendation_status of this RecommendationDetails.
:type: str
"""
allowed_values = ["ACCEPTED", "REJECTED", "INFERRED"]
if not value_allowed_none_or_none_sentinel(recommendation_status, allowed_values):
recommendation_status = 'UNKNOWN_ENUM_VALUE'
self._recommendation_status = recommendation_status
@property
def confidence_score(self):
"""
Gets the confidence_score of this RecommendationDetails.
Level of confidence, on a scale between 0 and 1, that the recommendation is applicable.
:return: The confidence_score of this RecommendationDetails.
:rtype: float
"""
return self._confidence_score
@confidence_score.setter
def confidence_score(self, confidence_score):
"""
Sets the confidence_score of this RecommendationDetails.
Level of confidence, on a scale between 0 and 1, that the recommendation is applicable.
:param confidence_score: The confidence_score of this RecommendationDetails.
:type: float
"""
self._confidence_score = confidence_score
@property
def source_object_key(self):
"""
Gets the source_object_key of this RecommendationDetails.
Unique identifier of the source object; the one for which a recommendation is made.
:return: The source_object_key of this RecommendationDetails.
:rtype: str
"""
return self._source_object_key
@source_object_key.setter
def source_object_key(self, source_object_key):
"""
Sets the source_object_key of this RecommendationDetails.
Unique identifier of the source object; the one for which a recommendation is made.
:param source_object_key: The source_object_key of this RecommendationDetails.
:type: str
"""
self._source_object_key = source_object_key
@property
def source_object_name(self):
"""
Gets the source_object_name of this RecommendationDetails.
Name of the source object; the one for which a recommendation is made.
:return: The source_object_name of this RecommendationDetails.
:rtype: str
"""
return self._source_object_name
@source_object_name.setter
def source_object_name(self, source_object_name):
"""
Sets the source_object_name of this RecommendationDetails.
Name of the source object; the one for which a recommendation is made.
:param source_object_name: The source_object_name of this RecommendationDetails.
:type: str
"""
self._source_object_name = source_object_name
@property
def source_object_type(self):
"""
Gets the source_object_type of this RecommendationDetails.
Type of the source object; the one for which a recommendation is made.
Allowed values for this property are: "DATA_ENTITY", "ATTRIBUTE", "TERM", "CATEGORY", 'UNKNOWN_ENUM_VALUE'.
Any unrecognized values returned by a service will be mapped to 'UNKNOWN_ENUM_VALUE'.
:return: The source_object_type of this RecommendationDetails.
:rtype: str
"""
return self._source_object_type
@source_object_type.setter
def source_object_type(self, source_object_type):
"""
Sets the source_object_type of this RecommendationDetails.
Type of the source object; the one for which a recommendation is made.
:param source_object_type: The source_object_type of this RecommendationDetails.
:type: str
"""
allowed_values = ["DATA_ENTITY", "ATTRIBUTE", "TERM", "CATEGORY"]
if not value_allowed_none_or_none_sentinel(source_object_type, allowed_values):
source_object_type = 'UNKNOWN_ENUM_VALUE'
self._source_object_type = source_object_type
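The setters above all share one enum-coercion pattern: a value outside the allowed list is replaced by the `'UNKNOWN_ENUM_VALUE'` sentinel instead of raising, so older clients tolerate new service-side enum members. A minimal standalone sketch of that pattern (the helper name `coerce_enum` is illustrative, not part of the oci SDK):

```python
# Sketch of the enum-coercion pattern used by the RecommendationDetails
# setters: unknown values map to the 'UNKNOWN_ENUM_VALUE' sentinel rather
# than raising. The helper name is illustrative only.
def coerce_enum(value, allowed_values):
    """Return value unchanged if None or allowed; otherwise the sentinel."""
    if value is None or value in allowed_values:
        return value
    return 'UNKNOWN_ENUM_VALUE'
```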
Ga = dsa[:, 0]@dsa[:, 0] + dsa[:, 1]@dsa[:, 1]
Gb = dsa[:, 0]@dsb[:, 0] + dsa[:, 1]@dsb[:, 1]
Gc = dsa[:, 0]@dsc[:, 0] + dsa[:, 1]@dsc[:, 1]
Gd = dsb[:, 0]@dsb[:, 0] + dsb[:, 1]@dsb[:, 1]
Ge = dsb[:, 0]@dsc[:, 0] + dsb[:, 1]@dsc[:, 1]
Gf = dsc[:, 0]@dsc[:, 0] + dsc[:, 1]@dsc[:, 1]
G = six_to_nine(Ga, Gb, Gc, Gd, Ge, Gf)
G /= sigmasq
# Add in the negative Hessian of the log-prior.
G += Id
return G
def _grad_metric_helper(sens: np.ndarray) -> np.ndarray:
S, dS = sens[:, 2:8], sens[:, 8:]
dG = np.zeros((3, 3, 3))
# Derivatives with respect to `a`.
dGaa = 2*dS[:, 0]@S[:, 0] + 2*dS[:, 1]@S[:, 1]
dGab = dS[:, 0]@S[:, 2] + S[:, 0]@dS[:, 2] + dS[:, 1]@S[:, 3] + S[:, 1]@dS[:, 3]
dGac = dS[:, 0]@S[:, 4] + S[:, 0]@dS[:, 4] + dS[:, 1]@S[:, 5] + S[:, 1]@dS[:, 5]
dGad = 2*dS[:, 2]@S[:, 2] + 2*dS[:, 3]@S[:, 3]
dGae = dS[:, 2]@S[:, 4] + S[:, 2]@dS[:, 4] + dS[:, 3]@S[:, 5] + S[:, 3]@dS[:, 5]
dGaf = 2*dS[:, 4]@S[:, 4] + 2*dS[:, 5]@S[:, 5]
dGa = six_to_nine(dGaa, dGab, dGac, dGad, dGae, dGaf)
# Derivatives with respect to `b`.
dGba = 2*dS[:, 2]@S[:, 0] + 2*dS[:, 3]@S[:, 1]
dGbb = dS[:, 2]@S[:, 2] + S[:, 0]@dS[:, 6] + dS[:, 3]@S[:, 3] + S[:, 1]@dS[:, 7]
dGbc = dS[:, 2]@S[:, 4] + S[:, 0]@dS[:, 8] + dS[:, 3]@S[:, 5] + S[:, 1]@dS[:, 9]
dGbd = 2*dS[:, 6]@S[:, 2] + 2*dS[:, 7]@S[:, 3]
dGbe = dS[:, 6]@S[:, 4] + S[:, 2]@dS[:, 8] + dS[:, 7]@S[:, 5] + S[:, 3]@dS[:, 9]
dGbf = 2*dS[:, 8]@S[:, 4] + 2*dS[:, 9]@S[:, 5]
dGb = six_to_nine(dGba, dGbb, dGbc, dGbd, dGbe, dGbf)
# Derivatives with respect to `c`.
dGca = 2*dS[:, 4]@S[:, 0] + 2*dS[:, 5]@S[:, 1]
dGcb = dS[:, 4]@S[:, 2] + S[:, 0]@dS[:, 8] + dS[:, 5]@S[:, 3] + S[:, 1]@dS[:, 9]
dGcc = dS[:, 4]@S[:, 4] + S[:, 0]@dS[:, 10] + dS[:, 5]@S[:, 5] + S[:, 1]@dS[:, 11]
dGcd = 2*dS[:, 8]@S[:, 2] + 2*dS[:, 9]@S[:, 3]
dGce = dS[:, 8]@S[:, 4] + S[:, 2]@dS[:, 10] + dS[:, 9]@S[:, 5] + S[:, 3]@dS[:, 11]
dGcf = 2*dS[:, 10]@S[:, 4] + 2*dS[:, 11]@S[:, 5]
dGc = six_to_nine(dGca, dGcb, dGcc, dGcd, dGce, dGcf)
# Stack the component matrices.
dG = np.array([dGa, dGb, dGc]).swapaxes(0, -1)
dG /= sigmasq
return dG
def _grad_log_posterior(a: float, b: float, c: float) -> np.ndarray:
"""The gradient of the log-posterior of the Fitzhugh-Nagumo model with respect
to the model parameters.
Args:
a: Parameter of the Fitzhugh-Nagumo model.
b: Parameter of the Fitzhugh-Nagumo model.
c: Parameter of the Fitzhugh-Nagumo model.
Returns:
da: The derivative with respect to the parameter `a`.
db: The derivative with respect to the parameter `b`.
dc: The derivative with respect to the parameter `c`.
"""
sens = odeint(fn_sensitivity, aug, t, (a, b, c), atol=atol, rtol=rtol, hmax=hmax, mxstep=mxstep)
return _grad_log_posterior_helper(sens, a, b, c)
def _metric(a: float, b: float, c: float) -> np.ndarray:
"""The Fisher information metric of the Fitzhugh-Nagumo model. The sensitivity
differential equation allows us to propagate the derivatives of the
trajectory states into the metric.
Args:
a: Parameter of the Fitzhugh-Nagumo model.
b: Parameter of the Fitzhugh-Nagumo model.
c: Parameter of the Fitzhugh-Nagumo model.
Returns:
G: The Fisher information metric of the Fitzhugh-Nagumo model.
"""
sens = odeint(fn_sensitivity, aug, t, (a, b, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
return _metric_helper(sens)
def _grad_metric(a: float, b: float, c: float) -> np.ndarray:
"""The gradient of the Fisher information metric of the Fitzhugh-Nagumo model
with respect to the model parameters.
Args:
a: Parameter of the Fitzhugh-Nagumo model.
b: Parameter of the Fitzhugh-Nagumo model.
c: Parameter of the Fitzhugh-Nagumo model.
Returns:
G: The gradient of the Fisher information metric of the Fitzhugh-Nagumo
model.
"""
sens = odeint(fn_higher_sensitivity, augh, t, (a, b, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
return _grad_metric_helper(sens)
def grad_log_posterior_and_metric_and_grad_metric(q: np.ndarray) -> np.ndarray:
a, b, c = q
sens = odeint(fn_higher_sensitivity, augh, t, (a, b, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
glp = _grad_log_posterior_helper(sens, a, b, c)
G = _metric_helper(sens)
dG = _grad_metric_helper(sens)
return glp, G, dG
# Convert functions whose inputs were `a`, `b`, and `c` to take a
# single vector-valued argument representing the concatenation of those
# variables.
log_posterior = lambda q: _log_posterior(q[0], q[1], q[2])
grad_log_posterior = lambda q: _grad_log_posterior(q[0], q[1], q[2])
metric = lambda q: _metric(q[0], q[1], q[2])
grad_metric = lambda q: _grad_metric(q[0], q[1], q[2])
return (
log_posterior, grad_log_posterior, metric, grad_metric,
grad_log_posterior_and_metric_and_grad_metric
)
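`six_to_nine` is called repeatedly above but is not defined in this excerpt. Given how it is used (packing the six unique entries Ga..Gf of a symmetric Gram matrix into a 3x3 array), a plausible implementation — an assumption, not the author's code — is:

```python
import numpy as np

# Assumed sketch of `six_to_nine`: pack the six unique entries of a symmetric
# 3x3 matrix (in the order aa, ab, ac, bb, bc, cc) into the full matrix,
# matching the Gram-matrix usage in the metric helpers above.
def six_to_nine(a, b, c, d, e, f):
    return np.array([[a, b, c],
                     [b, d, e],
                     [c, e, f]])
```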
def main():
import time
from hmc.linalg import solve_psd
from hmc.numpy_wrapper import use_jax
# Integrator parameters.
rtol = 1e-12
atol = 1e-12
hmax = 1e-3
mxstep = 1000000
# Generate observations from the Fitzhugh-Nagumo model.
a = 0.2
b = 0.2
c = 3.0
sigma = 0.5
state = np.array([-1.0, 1.0])
t = np.linspace(0.0, 10.0, 200)
start = time.time()
y = generate_data(state, t, sigma, a, b, c, rtol, atol, hmax, mxstep=mxstep)
elapsed = time.time() - start
print('data generation elapsed: {:.4f}'.format(elapsed))
# Compute the state sensitivities and check them against finite-difference perturbations.
_fn_sensitivity = lambda t, state, *args: fn_sensitivity(state, t, *args)
aug = np.hstack((state, np.zeros(6)))
start = time.time()
sens = odeint(fn_sensitivity, aug, t, (a, b, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
elapsed = time.time() - start
print('state sensitivities elapsed: {:.4f}'.format(elapsed))
delta = 1e-5
yh = odeint(fn_dynamics, state, t, (a + 0.5*delta, b, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
yl = odeint(fn_dynamics, state, t, (a - 0.5*delta, b, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
fda = (yh - yl) / delta
assert np.allclose(sens[:, 2:4], fda)
yh = odeint(fn_dynamics, state, t, (a, b + 0.5*delta, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
yl = odeint(fn_dynamics, state, t, (a, b - 0.5*delta, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
fdb = (yh - yl) / delta
assert np.allclose(sens[:, 4:6], fdb)
yh = odeint(fn_dynamics, state, t, (a, b, c + 0.5*delta), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
yl = odeint(fn_dynamics, state, t, (a, b, c - 0.5*delta), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
fdc = (yh - yl) / delta
assert np.allclose(sens[:, 6:8], fdc)
# Check sensitivity of sensitivity with respect to `a`.
augh = np.hstack((state, np.zeros(18)))
sens = odeint(fn_higher_sensitivity, augh, t, (a, b, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
sh = odeint(fn_sensitivity, aug, t, (a + 0.5*delta, b, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
sl = odeint(fn_sensitivity, aug, t, (a - 0.5*delta, b, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
fds = (sh - sl) / delta
assert np.allclose(fds[:, 2], sens[:, 8])
assert np.allclose(fds[:, 4], sens[:, 10])
assert np.allclose(fds[:, 6], sens[:, 12])
assert np.allclose(fds[:, 3], sens[:, 9])
assert np.allclose(fds[:, 5], sens[:, 11])
assert np.allclose(fds[:, 7], sens[:, 13])
# Check sensitivity of sensitivity with respect to `b`.
sh = odeint(fn_sensitivity, aug, t, (a, b + 0.5*delta, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
sl = odeint(fn_sensitivity, aug, t, (a, b - 0.5*delta, c), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
fds = (sh - sl) / delta
assert np.allclose(fds[:, 2], sens[:, 10])
assert np.allclose(fds[:, 4], sens[:, 14])
assert np.allclose(fds[:, 6], sens[:, 16])
assert np.allclose(fds[:, 3], sens[:, 11])
assert np.allclose(fds[:, 5], sens[:, 15])
assert np.allclose(fds[:, 7], sens[:, 17])
# Check sensitivity of sensitivity with respect to `c`.
sh = odeint(fn_sensitivity, aug, t, (a, b, c + 0.5*delta), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
sl = odeint(fn_sensitivity, aug, t, (a, b, c - 0.5*delta), rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
fds = (sh - sl) / delta
assert np.allclose(fds[:, 2], sens[:, 12])
assert np.allclose(fds[:, 4], sens[:, 16])
assert np.allclose(fds[:, 6], sens[:, 18])
assert np.allclose(fds[:, 3], sens[:, 13])
assert np.allclose(fds[:, 5], sens[:, 17])
assert np.allclose(fds[:, 7], sens[:, 19])
# Check the gradient of the log-posterior against finite differences.
log_posterior, grad_log_posterior, metric, grad_metric, _ = posterior_factory(state, y, t, sigma, rtol=rtol, atol=atol, hmax=hmax, mxstep=mxstep)
a, b, c = 0.1, 0.5, 2.0
q = np.array([a, b, c])
u = np.random.normal(size=q.shape)
g = grad_log_posterior(q)@u
delta = 1e-5
fd = (log_posterior(q + 0.5*delta*u) - log_posterior(q - 0.5*delta*u)) / delta
assert np.allclose(g, fd)
err = g - fd
rerr = err / np.linalg.norm(fd)
print('log-posterior gradient abs. error:
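The assertions above validate the analytic gradient by a directional central difference: for a scalar function `f`, the directional derivative `grad_f(q)·u` should agree with `(f(q + h/2·u) - f(q - h/2·u)) / h`. A minimal self-contained sketch of that check (the quadratic `f` here is illustrative only, not the Fitzhugh-Nagumo posterior):

```python
import numpy as np

# Central-difference check of an analytic gradient along a random direction u.
# For the quadratic f(q) = 0.5 * q.q, the gradient is q, so grad_f(q)@u must
# match the finite-difference estimate up to rounding.
def f(q):
    return 0.5 * q @ q

def grad_f(q):
    return q

def central_diff(f, q, u, h=1e-5):
    """Directional derivative of f at q along u via central differences."""
    return (f(q + 0.5 * h * u) - f(q - 0.5 * h * u)) / h
```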
self._segment_path = lambda: "prefix-name"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.PrefixName, ['prefix_length'], name, value)
class Prefix(_Entity_):
"""
Prefix
.. attribute:: afi
AFI
**type**\: :py:class:`BgpOcAfi <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper.BgpOcAfi>`
**config**\: False
.. attribute:: ipv4_address
IPv4 Addr
**type**\: str
**pattern:** (([0\-9]\|[1\-9][0\-9]\|1[0\-9][0\-9]\|2[0\-4][0\-9]\|25[0\-5])\\.){3}([0\-9]\|[1\-9][0\-9]\|1[0\-9][0\-9]\|2[0\-4][0\-9]\|25[0\-5])(%[\\p{N}\\p{L}]+)?
**config**\: False
.. attribute:: ipv6_address
IPv6 Addr
**type**\: str
**pattern:** ((\:\|[0\-9a\-fA\-F]{0,4})\:)([0\-9a\-fA\-F]{0,4}\:){0,5}((([0\-9a\-fA\-F]{0,4}\:)?(\:\|[0\-9a\-fA\-F]{0,4}))\|(((25[0\-5]\|2[0\-4][0\-9]\|[01]?[0\-9]?[0\-9])\\.){3}(25[0\-5]\|2[0\-4][0\-9]\|[01]?[0\-9]?[0\-9])))(%[\\p{N}\\p{L}]+)?
**config**\: False
"""
_prefix = 'ipv4-bgp-oc-oper'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.PrefixName.Prefix, self).__init__()
self.yang_name = "prefix"
self.yang_parent_name = "prefix-name"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('afi', (YLeaf(YType.enumeration, 'afi'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper', 'BgpOcAfi', '')])),
('ipv4_address', (YLeaf(YType.str, 'ipv4-address'), ['str'])),
('ipv6_address', (YLeaf(YType.str, 'ipv6-address'), ['str'])),
])
self.afi = None
self.ipv4_address = None
self.ipv6_address = None
self._segment_path = lambda: "prefix"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.PrefixName.Prefix, ['afi', 'ipv4_address', 'ipv6_address'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_ipv4_bgp_oc_oper as meta
return meta._meta_table['OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.PrefixName.Prefix']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_ipv4_bgp_oc_oper as meta
return meta._meta_table['OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.PrefixName']['meta_info']
class RouteAttrList(_Entity_):
"""
RouteAttributesList
.. attribute:: next_hop
NextHopAddress
**type**\: :py:class:`NextHop <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper.OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.NextHop>`
**config**\: False
.. attribute:: aggregrator_attributes
AggregatorList
**type**\: :py:class:`AggregratorAttributes <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper.OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.AggregratorAttributes>`
**config**\: False
.. attribute:: origin_type
Origin Attribute Type
**type**\: :py:class:`BgpOcOriginAttr <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper.BgpOcOriginAttr>`
**config**\: False
.. attribute:: as_path
AS Path
**type**\: str
**config**\: False
.. attribute:: as4_path
AS4 Path
**type**\: str
**config**\: False
.. attribute:: med
Med
**type**\: int
**range:** 0..4294967295
**config**\: False
.. attribute:: local_pref
LocalPref
**type**\: int
**range:** 0..4294967295
**config**\: False
.. attribute:: atomic_aggr
AtomicAggr
**type**\: bool
**config**\: False
.. attribute:: community
CommunityArray
**type**\: list of :py:class:`Community <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper.OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.Community>`
**config**\: False
"""
_prefix = 'ipv4-bgp-oc-oper'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList, self).__init__()
self.yang_name = "route-attr-list"
self.yang_parent_name = "route"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([("next-hop", ("next_hop", OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.NextHop)), ("aggregrator-attributes", ("aggregrator_attributes", OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.AggregratorAttributes)), ("community", ("community", OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.Community))])
self._leafs = OrderedDict([
('origin_type', (YLeaf(YType.enumeration, 'origin-type'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper', 'BgpOcOriginAttr', '')])),
('as_path', (YLeaf(YType.str, 'as-path'), ['str'])),
('as4_path', (YLeaf(YType.str, 'as4-path'), ['str'])),
('med', (YLeaf(YType.uint32, 'med'), ['int'])),
('local_pref', (YLeaf(YType.uint32, 'local-pref'), ['int'])),
('atomic_aggr', (YLeaf(YType.boolean, 'atomic-aggr'), ['bool'])),
])
self.origin_type = None
self.as_path = None
self.as4_path = None
self.med = None
self.local_pref = None
self.atomic_aggr = None
self.next_hop = OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.NextHop()
self.next_hop.parent = self
self._children_name_map["next_hop"] = "next-hop"
self.aggregrator_attributes = OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.AggregratorAttributes()
self.aggregrator_attributes.parent = self
self._children_name_map["aggregrator_attributes"] = "aggregrator-attributes"
self.community = YList(self)
self._segment_path = lambda: "route-attr-list"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList, ['origin_type', 'as_path', 'as4_path', 'med', 'local_pref', 'atomic_aggr'], name, value)
class NextHop(_Entity_):
"""
NextHopAddress
.. attribute:: afi
AFI
**type**\: :py:class:`BgpOcAfi <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper.BgpOcAfi>`
**config**\: False
.. attribute:: ipv4_address
IPv4 Addr
**type**\: str
**pattern:** (([0\-9]\|[1\-9][0\-9]\|1[0\-9][0\-9]\|2[0\-4][0\-9]\|25[0\-5])\\.){3}([0\-9]\|[1\-9][0\-9]\|1[0\-9][0\-9]\|2[0\-4][0\-9]\|25[0\-5])(%[\\p{N}\\p{L}]+)?
**config**\: False
.. attribute:: ipv6_address
IPv6 Addr
**type**\: str
**pattern:** ((\:\|[0\-9a\-fA\-F]{0,4})\:)([0\-9a\-fA\-F]{0,4}\:){0,5}((([0\-9a\-fA\-F]{0,4}\:)?(\:\|[0\-9a\-fA\-F]{0,4}))\|(((25[0\-5]\|2[0\-4][0\-9]\|[01]?[0\-9]?[0\-9])\\.){3}(25[0\-5]\|2[0\-4][0\-9]\|[01]?[0\-9]?[0\-9])))(%[\\p{N}\\p{L}]+)?
**config**\: False
"""
_prefix = 'ipv4-bgp-oc-oper'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.NextHop, self).__init__()
self.yang_name = "next-hop"
self.yang_parent_name = "route-attr-list"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('afi', (YLeaf(YType.enumeration, 'afi'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper', 'BgpOcAfi', '')])),
('ipv4_address', (YLeaf(YType.str, 'ipv4-address'), ['str'])),
('ipv6_address', (YLeaf(YType.str, 'ipv6-address'), ['str'])),
])
self.afi = None
self.ipv4_address = None
self.ipv6_address = None
self._segment_path = lambda: "next-hop"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.NextHop, ['afi', 'ipv4_address', 'ipv6_address'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_ipv4_bgp_oc_oper as meta
return meta._meta_table['OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.NextHop']['meta_info']
class AggregratorAttributes(_Entity_):
"""
AggregatorList
.. attribute:: as_
AS number
**type**\: int
**range:** 0..4294967295
**config**\: False
.. attribute:: as4
AS4 number
**type**\: int
**range:** 0..4294967295
**config**\: False
.. attribute:: address
IPv4 address
**type**\: str
**pattern:** (([0\-9]\|[1\-9][0\-9]\|1[0\-9][0\-9]\|2[0\-4][0\-9]\|25[0\-5])\\.){3}([0\-9]\|[1\-9][0\-9]\|1[0\-9][0\-9]\|2[0\-4][0\-9]\|25[0\-5])(%[\\p{N}\\p{L}]+)?
**config**\: False
"""
_prefix = 'ipv4-bgp-oc-oper'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.AggregratorAttributes, self).__init__()
self.yang_name = "aggregrator-attributes"
self.yang_parent_name = "route-attr-list"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('as_', (YLeaf(YType.uint32, 'as'), ['int'])),
('as4', (YLeaf(YType.uint32, 'as4'), ['int'])),
('address', (YLeaf(YType.str, 'address'), ['str'])),
])
self.as_ = None
self.as4 = None
self.address = None
self._segment_path = lambda: "aggregrator-attributes"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.AggregratorAttributes, ['as_', 'as4', 'address'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_ipv4_bgp_oc_oper as meta
return meta._meta_table['OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.AggregratorAttributes']['meta_info']
class Community(_Entity_):
"""
CommunityArray
.. attribute:: objects
BGP OC objects
**type**\: str
**config**\: False
"""
_prefix = 'ipv4-bgp-oc-oper'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.Community, self).__init__()
self.yang_name = "community"
self.yang_parent_name = "route-attr-list"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('objects', (YLeaf(YType.str, 'objects'), ['str'])),
])
self.objects = None
self._segment_path = lambda: "community"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.Community, ['objects'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_ipv4_bgp_oc_oper as meta
return meta._meta_table['OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList.Community']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_ipv4_bgp_oc_oper as meta
return meta._meta_table['OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.RouteAttrList']['meta_info']
class ExtAttributesList(_Entity_):
"""
ExtAttributesList
.. attribute:: originator_id
OriginatorID
**type**\: str
**pattern:** (([0\-9]\|[1\-9][0\-9]\|1[0\-9][0\-9]\|2[0\-4][0\-9]\|25[0\-5])\\.){3}([0\-9]\|[1\-9][0\-9]\|1[0\-9][0\-9]\|2[0\-4][0\-9]\|25[0\-5])(%[\\p{N}\\p{L}]+)?
**config**\: False
.. attribute:: aigp
AIGP
**type**\: int
**range:** 0..18446744073709551615
**config**\: False
.. attribute:: path_id
PathId
**type**\: int
**range:** 0..4294967295
**config**\: False
.. attribute:: cluster
ClusterList
**type**\: list of str
**pattern:** (([0\-9]\|[1\-9][0\-9]\|1[0\-9][0\-9]\|2[0\-4][0\-9]\|25[0\-5])\\.){3}([0\-9]\|[1\-9][0\-9]\|1[0\-9][0\-9]\|2[0\-4][0\-9]\|25[0\-5])(%[\\p{N}\\p{L}]+)?
**config**\: False
.. attribute:: ext_community
ExtendedCommunityArray
**type**\: list of :py:class:`ExtCommunity <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper.OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList.ExtCommunity>`
**config**\: False
.. attribute:: unknown_attributes
UnknownAttributes
**type**\: list of :py:class:`UnknownAttributes <ydk.models.cisco_ios_xr.Cisco_IOS_XR_ipv4_bgp_oc_oper.OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList.UnknownAttributes>`
**config**\: False
"""
_prefix = 'ipv4-bgp-oc-oper'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList, self).__init__()
self.yang_name = "ext-attributes-list"
self.yang_parent_name = "route"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([("ext-community", ("ext_community", OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList.ExtCommunity)), ("unknown-attributes", ("unknown_attributes", OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList.UnknownAttributes))])
self._leafs = OrderedDict([
('originator_id', (YLeaf(YType.str, 'originator-id'), ['str'])),
('aigp', (YLeaf(YType.uint64, 'aigp'), ['int'])),
('path_id', (YLeaf(YType.uint32, 'path-id'), ['int'])),
('cluster', (YLeafList(YType.str, 'cluster'), ['str'])),
])
self.originator_id = None
self.aigp = None
self.path_id = None
self.cluster = []
self.ext_community = YList(self)
self.unknown_attributes = YList(self)
self._segment_path = lambda: "ext-attributes-list"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList, ['originator_id', 'aigp', 'path_id', 'cluster'], name, value)
class ExtCommunity(_Entity_):
"""
ExtendedCommunityArray
.. attribute:: objects
BGP OC objects
**type**\: str
**config**\: False
"""
_prefix = 'ipv4-bgp-oc-oper'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList.ExtCommunity, self).__init__()
self.yang_name = "ext-community"
self.yang_parent_name = "ext-attributes-list"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('objects', (YLeaf(YType.str, 'objects'), ['str'])),
])
self.objects = None
self._segment_path = lambda: "ext-community"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList.ExtCommunity, ['objects'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_ipv4_bgp_oc_oper as meta
return meta._meta_table['OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList.ExtCommunity']['meta_info']
class UnknownAttributes(_Entity_):
"""
UnknownAttributes
.. attribute:: attribute_type
AttributeType
**type**\: int
**range:** 0..65535
**config**\: False
.. attribute:: attribute_length
AttributeLength
**type**\: int
**range:** 0..65535
**config**\: False
.. attribute:: attribute_value
AttributeValue
**type**\: str
**pattern:** ([0\-9a\-fA\-F]{2}(\:[0\-9a\-fA\-F]{2})\*)?
**config**\: False
"""
_prefix = 'ipv4-bgp-oc-oper'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList.UnknownAttributes, self).__init__()
self.yang_name = "unknown-attributes"
self.yang_parent_name = "ext-attributes-list"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('attribute_type', (YLeaf(YType.uint16, 'attribute-type'), ['int'])),
('attribute_length', (YLeaf(YType.uint16, 'attribute-length'), ['int'])),
('attribute_value', (YLeaf(YType.str, 'attribute-value'), ['str'])),
])
self.attribute_type = None
self.attribute_length = None
self.attribute_value = None
self._segment_path = lambda: "unknown-attributes"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList.UnknownAttributes, ['attribute_type', 'attribute_length', 'attribute_value'], name, value)
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_ipv4_bgp_oc_oper as meta
return meta._meta_table['OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList.UnknownAttributes']['meta_info']
@staticmethod
def _meta_info():
from ydk.models.cisco_ios_xr._meta import _Cisco_IOS_XR_ipv4_bgp_oc_oper as meta
return meta._meta_table['OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.ExtAttributesList']['meta_info']
class LastModifiedDate(_Entity_):
"""
LastModifiedDate
.. attribute:: time_value
TimeValue
**type**\: str
**config**\: False
"""
_prefix = 'ipv4-bgp-oc-oper'
_revision = '2017-09-07'
def __init__(self):
if sys.version_info > (3,):
super().__init__()
else:
super(OcBgp.BgpRib.AfiSafiTable.Ipv4Unicast.OpenConfigNeighbors.OpenConfigNeighbor.AdjRibInPost.Routes.Route.LastModifiedDate, self).__init__()
self.yang_name = "last-modified-date"
self.yang_parent_name = "route"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
if __name__ == '__main__':
import matplotlib
matplotlib.use('Agg')
import os
import sys
import time
import tempfile
import numpy as np
import fitsio
from legacypipe.runbrick import main
from astrometry.util.fits import fits_table
def set_env():
for v in ['UNWISE_COADDS_TIMERESOLVED_DIR', 'SKY_TEMPLATE_DIR',
'LARGEGALAXIES_CAT', 'GAIA_CAT_DIR', 'TYCHO2_KD_DIR']:
if v in os.environ:
del os.environ[v]
def rbmain():
from legacypipe.catalog import read_fits_catalog
from legacypipe.survey import LegacySurveyData, wcs_for_brick
from tractor.galaxy import DevGalaxy
from tractor import PointSource, Catalog
from tractor import GaussianMixturePSF
from legacypipe.survey import BrickDuck
from legacypipe.forced_photom import main as forced_main
from astrometry.util.file import trymakedirs
import shutil
ceres = 'ceres' in sys.argv
psfex = 'psfex' in sys.argv
set_env()
oldargs = sys.argv
sys.argv = [sys.argv[0]]
main()
sys.argv = oldargs
# Test create_kdtree and (reading CCD kd-tree)!
indir = os.path.join(os.path.dirname(__file__), 'testcase6')
with tempfile.TemporaryDirectory() as surveydir:
files = ['calib', 'gaia', 'images', 'survey-bricks.fits.gz',
'tycho2.kd.fits']
for fn in files:
src = os.path.join(indir, fn)
dst = os.path.join(surveydir, fn)
#trymakedirs(dst, dir=True)
print('Copy', src, dst)
if os.path.isfile(src):
shutil.copy(src, dst)
else:
shutil.copytree(src, dst)
from legacypipe.create_kdtrees import create_kdtree
infn = os.path.join(indir, 'survey-ccds-1.fits.gz')
outfn = os.path.join(surveydir, 'survey-ccds-1.kd.fits')
create_kdtree(infn, outfn, False)
os.environ['TYCHO2_KD_DIR'] = surveydir
outdir = 'out-testcase6-kd'
main(args=['--brick', '1102p240', '--zoom', '500', '600', '650', '750',
'--force-all', '--no-write', '--no-wise', '--no-gaia',
'--survey-dir', surveydir,
'--outdir', outdir])
fn = os.path.join(outdir, 'tractor', '110', 'tractor-1102p240.fits')
assert(os.path.exists(fn))
T = fits_table(fn)
assert(len(T) == 2)
# Since there is a Tycho-2 star in the blob, forced to be PSF.
assert(T.type[0].strip() == 'PSF')
assert(T.type[1].strip() == 'PSF')
# There is a Tycho-2 star in the blob.
I = np.flatnonzero(T.ref_cat == 'T2')
assert(len(I) == 1)
assert(T.ref_id[I][0] == 1909016711)
cat = read_fits_catalog(T)
assert(len(cat) == 2)
assert(isinstance(cat[0], PointSource))
assert(isinstance(cat[1], PointSource))
cat,ivs = read_fits_catalog(T, invvars=True)
assert(len(cat) == 2)
assert(isinstance(cat[0], PointSource))
assert(isinstance(cat[1], PointSource))
cat2 = Catalog(*cat)
assert(len(ivs) == len(cat2.getParams()))
# test --fit-on-coadds
outdir = 'out-testcase6-coadds'
main(args=['--brick', '1102p240', '--zoom', '500', '600', '650', '750',
'--force-all', '--no-write', '--no-wise', '--no-gaia',
'--survey-dir', surveydir, '--fit-on-coadds',
'--outdir', outdir])
fn = os.path.join(outdir, 'tractor', '110', 'tractor-1102p240.fits')
assert(os.path.exists(fn))
T = fits_table(fn)
assert(len(T) == 2)
# Since there is a Tycho-2 star in the blob, forced to be PSF.
assert(T.type[0].strip() == 'PSF')
assert(T.type[1].strip() == 'PSF')
# There is a Tycho-2 star in the blob.
I = np.flatnonzero(T.ref_cat == 'T2')
assert(len(I) == 1)
assert(T.ref_id[I][0] == 1909016711)
del os.environ['TYCHO2_KD_DIR']
# test --skip-coadd
r = main(args=['--brick', '1102p240',
'--zoom', '500', '600', '650', '750',
'--force-all', '--no-write', '--no-wise', '--no-gaia',
'--survey-dir', surveydir,
'--outdir', outdir, '--skip-coadd'])
assert(r == 0)
# test --skip
r = main(args=['--brick', '1102p240',
'--zoom', '500', '600', '650', '750',
'--force-all', '--no-write', '--no-wise', '--no-gaia',
'--survey-dir', surveydir,
'--outdir', outdir, '--skip'])
assert(r == 0)
# NothingToDoError (neighbouring brick)
r = main(args=['--brick', '1102p240', '--zoom', '0','100','0','100',
'--force-all', '--no-write', '--no-wise', '--no-gaia',
'--survey-dir', surveydir,
'--outdir', outdir])
assert(r == 0)
surveydir = os.path.join(os.path.dirname(__file__), 'testcase9')
# Test for some get_tractor_image kwargs
survey = LegacySurveyData(surveydir)
fakebrick = BrickDuck(9.1228, 3.3975, 'quack')
wcs = wcs_for_brick(fakebrick, W=100, H=100)
ccds = survey.ccds_touching_wcs(wcs)
ccd = ccds[0]
im = survey.get_image_object(ccd)
H,W = wcs.shape
targetrd = np.array([wcs.pixelxy2radec(x,y) for x,y in
[(1,1),(W,1),(W,H),(1,H),(1,1)]])
tim = im.get_tractor_image(radecpoly=targetrd)
assert(tim.getImage() is not None)
assert(tim.getInvError() is not None)
assert(tim.dq is not None)
tim2 = im.get_tractor_image(radecpoly=targetrd, pixels=False)
assert(np.all(tim2.getImage() == 0.))
tim4 = im.get_tractor_image(radecpoly=targetrd, invvar=False)
u = np.unique(tim4.inverr)
assert(len(u) == 1)
u = u[0]
target = tim4.zpscale / tim4.sig1
assert(np.abs(u / target - 1.) < 0.001)
tim3 = im.get_tractor_image(radecpoly=targetrd, invvar=False, dq=False)
assert(not hasattr(tim3, 'dq'))
tim5 = im.get_tractor_image(radecpoly=targetrd, gaussPsf=True)
print(tim5.getPsf())
assert(isinstance(tim5.getPsf(), GaussianMixturePSF))
surveydir = os.path.join(os.path.dirname(__file__), 'testcase12')
os.environ['TYCHO2_KD_DIR'] = surveydir
os.environ['GAIA_CAT_DIR'] = os.path.join(surveydir, 'gaia')
os.environ['GAIA_CAT_VER'] = '2'
os.environ['UNWISE_MODEL_SKY_DIR'] = os.path.join(surveydir, 'images', 'unwise-mod')
#python legacypipe/runbrick.py --radec --width 100 --height 100 --outdir dup5b --survey-dir test/testcase12 --force-all --no-wise
unwdir = os.path.join(surveydir, 'images', 'unwise')
main(args=['--radec', '346.684', '12.791', '--width', '100',
'--height', '100', '--no-wise-ceres',
'--unwise-dir', unwdir, '--survey-dir', surveydir,
'--outdir', 'out-testcase12', '--skip-coadd', '--force-all'])
# --plots for stage_wise_forced
main(args=['--radec', '346.684', '12.791', '--width', '100',
'--height', '100', '--no-wise-ceres',
'--unwise-dir', unwdir, '--survey-dir', surveydir,
'--outdir', 'out-testcase12', '--stage', 'wise_forced',
'--plots'])
del os.environ['GAIA_CAT_DIR']
del os.environ['GAIA_CAT_VER']
del os.environ['TYCHO2_KD_DIR']
del os.environ['UNWISE_MODEL_SKY_DIR']
M = fitsio.read('out-testcase12/coadd/cus/custom-346684p12791/legacysurvey-custom-346684p12791-maskbits.fits.fz')
# Count masked & unmasked bits (the cluster splits this 100x100 field)
from collections import Counter
c = Counter(M.ravel())
from legacypipe.bits import MASKBITS
assert(c[0] >= 4000)
assert(c[MASKBITS['CLUSTER']] >= 4000)
surveydir = os.path.join(os.path.dirname(__file__), 'testcase9')
os.environ['GAIA_CAT_DIR'] = os.path.join(surveydir, 'gaia')
os.environ['GAIA_CAT_VER'] = '2'
os.environ['LARGEGALAXIES_CAT'] = os.path.join(surveydir,
'sga-sub.kd.fits')
main(args=['--radec', '9.1228', '3.3975', '--width', '100',
'--height', '100', '--old-calibs-ok', '--no-wise-ceres',
'--no-wise', '--survey-dir', surveydir,
'--outdir', 'out-testcase9', '--skip', '--force-all',
'--ps', 'tc9-ps.fits', '--ps-t0', str(int(time.time()))])
# (omit --force-all --no-write... reading from pickles below!)
# Test with --apodize
main(args=['--radec', '9.1228', '3.3975', '--width', '100',
'--height', '100', '--old-calibs-ok', '--no-wise',
'--force-all', '--no-write', '--survey-dir', surveydir,
'--outdir', 'out-testcase9-ap', '--apodize'])
main(args=['--radec', '9.1228', '3.3975', '--width', '100',
'--height', '100', '--old-calibs-ok', '--no-wise-ceres',
'--no-wise', '--survey-dir',
surveydir, '--outdir', 'out-testcase9',
'--plots', '--stage', 'halos'])
main(args=['--radec', '9.1228', '3.3975', '--width', '100',
'--height', '100', '--old-calibs-ok', '--no-wise-ceres',
'--no-wise', '--survey-dir',
surveydir, '--outdir', 'out-testcase9-coadds',
'--stage', 'image_coadds', '--blob-image'])
T = fits_table('out-testcase9/tractor/cus/tractor-custom-009122p03397.fits')
assert(len(T) == 4)
# Gaia star becomes a DUP!
assert(np.sum([t == 'DUP' for t in T.type]) == 1)
# LSLGA galaxy exists!
Igal = np.flatnonzero([r == 'L3' for r in T.ref_cat])
assert(len(Igal) == 1)
assert(np.all(T.ref_id[Igal] > 0))
assert(T.type[Igal[0]] == 'SER')
# --brick and --zoom rather than --radec --width --height
main(args=['--survey-dir', surveydir, '--outdir', 'out-testcase9b',
'--zoom', '1950', '2050', '340', '440', '--brick', '0091p035', '--force-all'])
# test forced phot??
shutil.copy('test/testcase9/survey-bricks.fits.gz', 'out-testcase9b')
forced_main(args=['--survey-dir', surveydir,
'--no-ceres',
'--catalog-dir', 'out-testcase9b',
'--expnum', '372546', '--ccdname', 'N26', '--out', 'forced1.fits'])
assert(os.path.exists('forced1.fits'))
_ = fits_table('forced1.fits')
# ... more tests...!
forced_main(args=['--survey-dir', surveydir,
'--no-ceres',
'--catalog-dir', 'out-testcase9b',
'--derivs', '--threads', '2',
'--apphot',
'--expnum', '372547', '--ccdname', 'N26', '--out', 'forced2.fits'])
assert(os.path.exists('forced2.fits'))
_ = fits_table('forced2.fits')
forced_main(args=['--survey-dir', surveydir,
'--no-ceres',
'--catalog-dir', 'out-testcase9b',
'--agn',
'--expnum', '257266', '--ccdname', 'S21', '--out', 'forced3.fits'])
assert(os.path.exists('forced3.fits'))
_ = fits_table('forced3.fits')
if ceres:
forced_main(args=['--survey-dir', surveydir,
'--catalog-dir', 'out-testcase9b',
'--derivs', '--threads', '2',
'--apphot',
'--expnum', '372546', '--ccdname', 'N26', '--out', 'forced4.fits'])
assert(os.path.exists('forced4.fits'))
_ = fits_table('forced4.fits')
# Test cache_dir
with tempfile.TemporaryDirectory() as cachedir, \
tempfile.TemporaryDirectory() as tempsurveydir:
files = []
for dirpath, _, filenames in os.walk(surveydir):
for fn in filenames:
path = os.path.join(dirpath, fn)
relpath = os.path.relpath(path, surveydir)
files.append(relpath)
# cache or no?
files.sort()
files_cache = files[::2]
files_nocache = files[1::2]
# Survey-ccds *must* be in nocache.
fn = 'survey-ccds-1.kd.fits'
if fn in files_cache:
files_cache.remove(fn)
files_nocache.append(fn)
for fn in files_cache:
src = os.path.join(surveydir, fn)
dst = os.path.join(cachedir, fn)
trymakedirs(dst, dir=True)
print('Copy', src, dst)
shutil.copy(src, dst)
for fn in files_nocache:
src = os.path.join(surveydir, fn)
dst = os.path.join(tempsurveydir, fn)
trymakedirs(dst, dir=True)
print('Copy', src, dst)
shutil.copy(src, dst)
main(args=['--radec', '9.1228', '3.3975', '--width', '100',
'--height', '100', '--no-wise',
'--survey-dir', tempsurveydir,
'--cache-dir', cachedir,
'--outdir', 'out-testcase9cache', '--force-all'])
del os.environ['GAIA_CAT_DIR']
del os.environ['GAIA_CAT_VER']
del os.environ['LARGEGALAXIES_CAT']
# if ceres:
# surveydir = os.path.join(os.path.dirname(__file__), 'testcase3')
# main(args=['--brick', '2447p120', '--zoom', '1020', '1070', '2775', '2815',
# '--no-wise', '--force-all', '--no-write', '--ceres',
# '--survey-dir', surveydir,
# '--outdir', 'out-testcase3-ceres',
# '--no-depth-cut'])
# MzLS + BASS data
# python legacypipe/runbrick.py --run north --brick 1773p595 --zoom 1300 1500 700 900 --survey-dir dr9-north -s coadds
# fitscopy coadd/177/1773p595/legacysurvey-1773p595-ccds.fits"[#row<3 || #row==12]" cx.fits
# python legacyanalysis/create_testcase.py cx.fits test/mzlsbass2 1773p595 --survey-dir dr9-north/ --fpack
surveydir2 = os.path.join(os.path.dirname(__file__), 'mzlsbass2')
os.environ['GAIA_CAT_DIR'] = os.path.join(surveydir2, 'gaia')
os.environ['GAIA_CAT_VER'] = '2'
main(args=['--brick', '1773p595', '--zoom', '1300', '1500', '700', '900',
'--no-wise', '--force-all', '--no-write',
'--survey-dir', surveydir2,
'--outdir', 'out-mzlsbass2'])
T = fits_table('out-mzlsbass2/tractor/177/tractor-1773p595.fits')
assert(np.sum(T.ref_cat == 'G2') == 3)
assert(np.sum(T.ref_id > 0) == 3)
# Test --max-blobsize, --checkpoint, --bail-out
outdir = 'out-mzlsbass2b'
chk = 'checkpoint-mzb2b.p'
if os.path.exists(chk):
os.unlink(chk)
main(args=['--brick', '1773p595', '--zoom', '1300', '1500', '700', '900',
'--no-wise', '--force-all', '--stage', 'fitblobs',
'--write-stage', 'srcs',
'--survey-dir', surveydir2,
'--outdir', outdir,
'--checkpoint', chk,
'--nblobs', '3'])
# err... --max-blobsize does not result in bailed-out blobs masked,
# because it treats large blobs as *completed*...
#'--max-blobsize', '3000',
outdir = 'out-mzlsbass2c'
main(args=['--brick', '1773p595', '--zoom', '1300', '1500', '700', '900',
'--no-wise', '--force-all',
'--survey-dir', surveydir2,
'--outdir', outdir, '--bail-out', '--checkpoint', chk,
'--no-write'])
del os.environ['GAIA_CAT_DIR']
del os.environ['GAIA_CAT_VER']
M = fitsio.read(os.path.join(outdir, 'coadd', '177', '1773p595',
'legacysurvey-1773p595-maskbits.fits.fz'))
assert(np.sum((M & MASKBITS['BAILOUT'] ) > 0) >= 1000)
# Test RexGalaxy
surveydir = os.path.join(os.path.dirname(__file__), 'testcase6')
outdir = 'out-testcase6-rex'
the_args = ['--brick', '1102p240', '--zoom', '500', '600', '650', '750',
'--force-all', '--no-write', '--no-wise',
'--skip-calibs',
#'--rex', #'--plots',
'--survey-dir', surveydir,
'--outdir', outdir]
print('python legacypipe/runbrick.py', ' '.join(the_args))
os.environ['GAIA_CAT_DIR'] = os.path.join(surveydir, 'gaia')
os.environ['GAIA_CAT_VER'] = '2'
main(args=the_args)
fn = os.path.join(outdir, 'tractor', '110', 'tractor-1102p240.fits')
assert(os.path.exists(fn))
T = fits_table(fn)
assert(len(T) == 2)
print('Types:', T.type)
# Since there is a Tycho-2 star in the blob, forced to be PSF.
assert(T.type[0].strip() == 'PSF')
cmd = ('(cd %s && sha256sum -c %s)' %
(outdir, os.path.join('tractor', '110', 'brick-1102p240.sha256sum')))
print(cmd)
rtn = os.system(cmd)
assert(rtn == 0)
# Test with a Tycho-2 star in the blob.
variable from check file - %s' % (metric_check_file))
fail_check(skyline_app, metric_failed_check_dir, str(metric_check_file))
return
value = None
try:
# metric_vars.value
# value = str(metric_vars.value)
key = 'value'
value_list = [var_array[1] for var_array in metric_vars_array if var_array[0] == key]
value = float(value_list[0])
anomalous_value = value
if settings.ENABLE_IONOSPHERE_DEBUG:
logger.info('debug :: metric variable - value - %s' % str(value))
except:
logger.error('error :: failed to read value variable from check file - %s' % (metric_check_file))
value = None
if not value:
logger.error('error :: failed to load value variable from check file - %s' % (metric_check_file))
fail_check(skyline_app, metric_failed_check_dir, str(metric_check_file))
return
from_timestamp = None
try:
# metric_vars.from_timestamp
# from_timestamp = str(metric_vars.from_timestamp)
key = 'from_timestamp'
value_list = [var_array[1] for var_array in metric_vars_array if var_array[0] == key]
from_timestamp = int(value_list[0])
if settings.ENABLE_IONOSPHERE_DEBUG:
logger.info('debug :: metric variable - from_timestamp - %s' % str(from_timestamp))
except:
# @added 20160822 - Bug #1460: panorama check file fails
# Added exception handling here
logger.info(traceback.format_exc())
logger.error('error :: failed to read from_timestamp variable from check file - %s' % (metric_check_file))
fail_check(skyline_app, metric_failed_check_dir, str(metric_check_file))
return
if not from_timestamp:
logger.error('error :: failed to load from_timestamp variable from check file - %s' % (metric_check_file))
fail_check(skyline_app, metric_failed_check_dir, str(metric_check_file))
return
metric_timestamp = None
try:
# metric_vars.metric_timestamp
# metric_timestamp = str(metric_vars.metric_timestamp)
key = 'metric_timestamp'
value_list = [var_array[1] for var_array in metric_vars_array if var_array[0] == key]
metric_timestamp = int(value_list[0])
if settings.ENABLE_IONOSPHERE_DEBUG:
logger.info('debug :: metric variable - metric_timestamp - %s' % str(metric_timestamp))
except:
logger.error('error :: failed to read metric_timestamp variable from check file - %s' % (metric_check_file))
metric_timestamp = None
if not metric_timestamp:
logger.error('error :: failed to load metric_timestamp variable from check file - %s' % (metric_check_file))
fail_check(skyline_app, metric_failed_check_dir, str(metric_check_file))
return
try:
# metric_vars.algorithms
# algorithms = metric_vars.algorithms
key = 'algorithms'
value_list = [var_array[1] for var_array in metric_vars_array if var_array[0] == key]
algorithms = value_list[0]
if settings.ENABLE_IONOSPHERE_DEBUG:
logger.info('debug :: metric variable - algorithms - %s' % str(algorithms))
except:
logger.error('error :: failed to read algorithms variable from check file, setting to all - %s' % (metric_check_file))
algorithms = 'all'
try:
# metric_vars.triggered_algorithms
# triggered_algorithms = metric_vars.triggered_algorithms
key = 'triggered_algorithms'
value_list = [var_array[1] for var_array in metric_vars_array if var_array[0] == key]
triggered_algorithms = value_list[0]
if settings.ENABLE_IONOSPHERE_DEBUG:
logger.info('debug :: metric variable - triggered_algorithms - %s' % str(triggered_algorithms))
except:
logger.error('error :: failed to read triggered_algorithms variable from check file, setting to all - %s' % (metric_check_file))
triggered_algorithms = 'all'
added_by = None
try:
# metric_vars.added_by
# added_by = str(metric_vars.added_by)
key = 'added_by'
value_list = [var_array[1] for var_array in metric_vars_array if var_array[0] == key]
added_by = str(value_list[0])
if settings.ENABLE_IONOSPHERE_DEBUG:
logger.info('debug :: metric variable - added_by - %s' % added_by)
except:
logger.error('error :: failed to read added_by variable from check file - %s' % (metric_check_file))
added_by = None
if not added_by:
fail_check(skyline_app, metric_failed_check_dir, str(metric_check_file))
return
# @added 20170117 - Feature #1854: Ionosphere learn - generations
if str(added_by) == 'ionosphere_learn':
logger.info('debug :: metric variable - added_by - %s' % added_by)
try:
# metric_vars.added_at
# added_at = str(metric_vars.added_at)
key = 'added_at'
value_list = [var_array[1] for var_array in metric_vars_array if var_array[0] == key]
added_at = int(value_list[0])
if settings.ENABLE_IONOSPHERE_DEBUG:
logger.info('debug :: metric variable - added_at - %s' % str(added_at))
except:
logger.error('error :: failed to read added_at variable from check file, setting to metric_timestamp - %s' % (metric_check_file))
added_at = metric_timestamp
# @added 20161228 - Feature #1828: ionosphere - mirage Redis data features
# Added full_duration which needs to be recorded to allow Mirage metrics
# to be profiled on Redis timeseries data at FULL_DURATION
full_duration = None
try:
# metric_vars.full_duration
# full_duration = str(metric_vars.full_duration)
key = 'full_duration'
value_list = [var_array[1] for var_array in metric_vars_array if var_array[0] == key]
full_duration = int(value_list[0])
if settings.ENABLE_IONOSPHERE_DEBUG:
logger.info('debug :: metric variable - full_duration - %s' % str(full_duration))
except:
logger.error('error :: failed to read full_duration variable from check file - %s' % (metric_check_file))
full_duration = None
if not full_duration:
fail_check(skyline_app, metric_failed_check_dir, str(metric_check_file))
return
# @added 20170127 - Feature #1886: Ionosphere learn - child like parent with evolutionary maturity
# Added ionosphere_parent_id, always zero from Analyzer and Mirage
ionosphere_parent_id = None
ionosphere_parent_id_determined = False
try:
key = 'ionosphere_parent_id'
value_list = [var_array[1] for var_array in metric_vars_array if var_array[0] == key]
ionosphere_parent_id = int(value_list[0])
ionosphere_parent_id_determined = True
if settings.ENABLE_IONOSPHERE_DEBUG:
logger.info('debug :: metric variable - ionosphere_parent_id - %s' % str(ionosphere_parent_id))
except:
logger.error('error :: failed to read ionosphere_parent_id variable from check file - %s' % (metric_check_file))
ionosphere_parent_id = None
if not ionosphere_parent_id_determined:
logger.error('error :: failed to determine ionosphere_parent_id variable from check file - %s' % (metric_check_file))
fail_check(skyline_app, metric_failed_check_dir, str(metric_check_file))
return
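Note why `ionosphere_parent_id_determined` exists alongside the value itself: the parent id is always zero when added by Analyzer or Mirage, and `0` is falsy in Python, so testing the value alone cannot distinguish "successfully read as 0" from "failed to read". A minimal sketch of this flag pattern, with hypothetical sample data:

```python
# Why a separate "determined" flag: a parent id of 0 is valid but falsy,
# so "if not parent_id" would wrongly treat a successful read of 0 as a
# failure. The input data here is a hypothetical example.
def read_parent_id(metric_vars_array):
    parent_id = None
    determined = False
    try:
        value_list = [v[1] for v in metric_vars_array
                      if v[0] == 'ionosphere_parent_id']
        parent_id = int(value_list[0])
        determined = True
    except (IndexError, ValueError):
        parent_id = None
    return parent_id, determined

# A parent id of 0 still counts as determined:
pid, ok = read_parent_id([['ionosphere_parent_id', '0']])
```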
# @modified 20170116 - Feature #1854: Ionosphere learn
# Do not check the cache key or anomaly age if added by ionosphere_learn
if added_by != 'ionosphere_learn':
# @added 20170101 - Feature #1830: Ionosphere alerts
# Remove check file if an alert key exists
cache_key = 'ionosphere.%s.alert.%s.%s' % (added_by, metric_timestamp, base_name)
last_alert = False
try:
last_alert = self.redis_conn.get(cache_key)
except Exception as e:
logger.error('error :: could not query Redis for cache_key: %s' % e)
if not last_alert:
logger.info('debug :: no alert cache key - %s' % cache_key)
else:
logger.info('debug :: removing check - alert cache key exists - %s' % cache_key)
self.remove_metric_check_file(str(metric_check_file))
return
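The alert cache key is a dotted string of the submitting app, the anomaly timestamp and the metric name; if Redis still holds that key, the check is a duplicate of an already alerted anomaly and is discarded. A sketch of the key construction, using hypothetical example values:

```python
# Sketch of the alert cache key format checked above; the values are
# hypothetical examples, not real Skyline data.
added_by = 'analyzer'
metric_timestamp = 1483228800
base_name = 'stats.example.requests'

cache_key = 'ionosphere.%s.alert.%s.%s' % (added_by, metric_timestamp, base_name)
```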
now = time()
anomaly_age = int(now) - int(metric_timestamp)
if anomaly_age > max_age_seconds:
logger.info(
'Ionosphere check max age exceeded - %s - %s seconds old, older than %s seconds, discarding' % (
metric, str(anomaly_age), str(max_age_seconds)))
with open(metric_check_file, 'rt') as fr:
metric_check_file_contents = fr.readlines()
logger.info(
'debug :: metric check file contents\n%s' % (str(metric_check_file_contents)))
self.remove_metric_check_file(str(metric_check_file))
return
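The age test above is current time minus the anomaly timestamp, discarding any check older than `max_age_seconds`. A self-contained sketch of that check (the helper name and sample timestamps are hypothetical):

```python
# Sketch of the anomaly max-age test; the helper and the sample
# timestamps are hypothetical illustrations.
import time

def check_too_old(metric_timestamp, max_age_seconds, now=None):
    """Return True when the anomaly is older than max_age_seconds."""
    if now is None:
        now = time.time()
    anomaly_age = int(now) - int(metric_timestamp)
    return anomaly_age > max_age_seconds

# An anomaly from 400 seconds ago with a 300 second max age is discarded.
too_old = check_too_old(1000, 300, now=1400)
```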
else:
logger.info('processing check_file for ionosphere_learn - %s' % str(metric_check_file))
# @added 20161222 - ionosphere should extract features for every anomaly
# check that is sent through and calculate a features profile ready for
# submission by the user if they so choose. Further, ionosphere could
# make itself more useful by comparing any training data profiles to
# subsequent anomalies; the feature profiles for subsequent anomalies
# may be similar enough to match a few times, each a closer match to
# the next.
training_metric = False
metrics_id = None
metric_ionosphere_enabled = None
# @added 20170115 - Feature #1854: Ionosphere learn - generations
# Create the metrics_db_object so it is available to determine all
# the details of all features profiles for the metric, this has all
# the generations values available in it. Here we go! Learn!
metrics_db_object = None
# @added 20170825 - Task #2132: Optimise Ionosphere DB usage
# Try memcache first
try:
engine
except NameError:  # engine is not yet defined in this scope
engine = None
memcache_metrics_db_object = None
metrics_db_object_key = 'metrics_db_object.%s' % str(base_name)
memcache_metric_dict = None
if settings.MEMCACHE_ENABLED:
memcache_metric_dict = get_memcache_metric_object(skyline_app, base_name)
query_metric_table = True
if memcache_metric_dict:
query_metric_table = False
metrics_id = int(memcache_metric_dict['id'])
metric_ionosphere_enabled = int(memcache_metric_dict['ionosphere_enabled'])
metrics_db_object = memcache_metric_dict
if metric_ionosphere_enabled is not None:
training_metric = False
else:
training_metric = True
logger.info('using %s key data from memcache' % metrics_db_object_key)
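The block above implements a cache-aside lookup: memcache is tried first, and only on a miss does `query_metric_table` remain True so the code falls through to the MySQL query. A minimal cache-aside sketch with a plain dict standing in for memcache and a stub for the database query (all names below are hypothetical illustrations):

```python
# Cache-aside sketch: try the cache first, fall back to the database on
# a miss, then populate the cache. The dict and the fetch function are
# hypothetical stand-ins for memcache and the MySQL metrics table.
cache = {}

def db_fetch_metric(base_name):
    # Stand-in for the SQLAlchemy metrics table query.
    return {'id': 1, 'ionosphere_enabled': 1, 'metric': base_name}

def get_metric_object(base_name):
    key = 'metrics_db_object.%s' % base_name
    cached = cache.get(key)
    if cached is not None:
        return cached, True          # served from cache
    obj = db_fetch_metric(base_name)
    cache[key] = obj                 # populate the cache for next time
    return obj, False                # served from the database

obj, from_cache = get_metric_object('stats.example.requests')
obj2, from_cache2 = get_metric_object('stats.example.requests')
```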
# Check if the metric has ionosphere_enabled, if not remove the check
# file but not the data directory
# @modified 20161230 - Feature #1830: Ionosphere alerts
# Use SQLAlchemy method
# query = "SELECT ionosphere_enabled FROM metrics WHERE metric='%s'" % metric
# result = mysql_select(skyline_app, query)
# if str(result[0]) != '1':
# logger.info('Ionosphere not enabled on %s' % (metric))
# # @modified 20161222 - do not remove metric file until features
# # calculated
# # self.remove_metric_check_file(str(metric_check_file))
# # return
# training_metric = True
# @modified 20170825 - Task #2132: Optimise Ionosphere DB usage
# If no memcache data then MySQL query_metric_table
if query_metric_table:
try:
engine, log_msg, trace = get_an_engine()
logger.info(log_msg)
except:
logger.error(traceback.format_exc())
logger.error('error :: could not get a MySQL engine to determine ionosphere_enabled')
if not engine:
logger.error('error :: engine not obtained to determine ionosphere_enabled')
# Get the metrics_table metadata
metrics_table = None
try:
metrics_table, log_msg, trace = metrics_table_meta(skyline_app, engine)
logger.info('metrics_table OK for %s' % base_name)
except:
logger.error(traceback.format_exc())
logger.error('error :: failed to get metrics_table meta for %s' % base_name)
try:
connection = engine.connect()
# stmt = select([metrics_table.c.ionosphere_enabled]).where(metrics_table.c.metric == str(metric))
stmt = select([metrics_table]).where(metrics_table.c.metric == base_name)